• Title/Summary/Keyword: Specific force

Optimization of Ultrasound-Assisted Pretreatment for Accelerating Rehydration of Adzuki Bean (Vigna angularis)

  • Hyengseop Kim;Changgeun Lee;Eunghee Kim;Youngje Jo;Jiyoon Park;Choongjin Ban;Seokwon Lim
    • Journal of Microbiology and Biotechnology
    • /
    • v.34 no.4
    • /
    • pp.846-853
    • /
    • 2024
  • Adzuki bean (Vigna angularis), which provides plant-based proteins and functional substances, requires a long soaking time during processing, which limits its usefulness to industries and consumers. To improve this, ultrasonic treatment, which exploits high pressure and shear forces, was judged to be an appropriate pretreatment method. This study aimed to determine the optimal conditions of ultrasound treatment for the improved hydration of adzuki beans using response surface methodology (RSM). The independent variables chosen to regulate the hydration process of the adzuki beans were the soaking time (2-14 h, X1), treatment intensity (150-750 W, X2), and treatment time (1-10 min, X3). The dependent variables chosen to assess the differences in the beans post-immersion were moisture content, water activity, and hardness. The optimal treatment conditions deduced through RSM were a soaking time of 12.9 h, a treatment intensity of 600 W, and a treatment time of 8.65 min. Under these optimal conditions, the predicted values of the dependent variables were a moisture content of 58.32%, a water activity of 0.9979 aw, and a hardness of 14.63 N. The experimental results were a moisture content of 58.28 ± 0.56%, a water activity of 0.9885 ± 0.0040 aw, and a hardness of 13.01 ± 2.82 N, in close agreement with the predicted values. Proper ultrasound treatment caused cracks in the hilum, which strongly governs the water absorption of adzuki beans, thereby accelerating the rate of hydration. These results are expected to help determine economically efficient processing conditions for specific purposes, in addition to solving industrial problems associated with the low hydration rate of adzuki beans.
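
The RSM workflow described here (fit a full quadratic model in three coded factors, then locate a stationary optimum) can be reproduced in outline. The sketch below is a minimal illustration under invented design points and responses, since the paper's raw data are not given; it fits a second-order polynomial by least squares and maximizes the fitted surface with scipy.

```python
# Minimal RSM sketch: fit a quadratic response surface in three factors
# (soaking time X1, ultrasound intensity X2, treatment time X3) and locate
# the optimum. Design points and responses are invented placeholders,
# NOT the paper's data.
import numpy as np
from scipy.optimize import minimize

def quad_features(X):
    """Full second-order model terms: 1, xi, xi^2, xi*xj."""
    x1, x2, x3 = X.T
    return np.column_stack([
        np.ones(len(X)), x1, x2, x3,
        x1 * x1, x2 * x2, x3 * x3,
        x1 * x2, x1 * x3, x2 * x3,
    ])

# Hypothetical coded design (-1..+1) and response (e.g., moisture %).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 3))
y = 55 + 3 * X[:, 0] + 1.5 * X[:, 1] + 1.0 * X[:, 2] \
    - 2 * (X**2).sum(axis=1) + rng.normal(0, 0.3, 20)

# Least-squares fit of the quadratic model.
beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

def predicted(x):
    return quad_features(np.atleast_2d(x)) @ beta

# Maximize the fitted surface inside the coded design region.
res = minimize(lambda x: -predicted(x)[0], x0=np.zeros(3),
               bounds=[(-1, 1)] * 3)
print("coded optimum:", res.x, "predicted response:", -res.fun)
```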

Magnetic Nanochain-Based Smart Drug Delivery System with Remote Tunable Drug Release by a Magnetic Field

  • Byunghoon Kang;Moo-Kwang Shin;Seungmin Han;Ilyoung Oh;Eunjung Kim;Joseph Park;Hye Young Son;Taejoon Kang;Juyeon Jung;Yong-Min Huh;Seungjoo Haam;Eun-Kyung Lim
    • BioChip Journal
    • /
    • v.16
    • /
    • pp.280-290
    • /
    • 2020
  • Considerable attention is given to drug delivery technology that efficiently delivers appropriate levels of drug molecules to diseased sites with significant therapeutic efficacy. Nanotechnology has been used to develop various strategies for targeted drug delivery while controlling drug release, because of its many benefits. Here, a delivery system was designed to control drug release with external magnetic fields, using porous silica and magnetic nanoparticles. Magnetic nanochains (MNs) of various lengths (MN-1: 1.4 ± 0.8 μm, MN-2: 2.2 ± 1.1 μm, and MN-3: 5.3 ± 2.0 μm) were synthesized by controlling the time for which magnetic nanoaggregates (MNCs) were exposed to an external magnetic field. Mesoporous silica-coated magnetic nanochains (MSMNs) (MSMN-1, MSMN-2, and MSMN-3) were prepared by forming a porous silica layer through sol-gel polymerization. These MSMNs could load the drug doxorubicin (DOX) into the silica layer (DOX-MSMNs), and the release behavior of the DOX could be controlled through an external rotating magnetic field. Simulations and experiments were used to verify the motion and drug release behavior of the MSMNs. Furthermore, a bio-receptor (aptamer, Ap) that recognizes specific cancer cells was introduced onto the surface of the DOX-MSMNs (Ap-DOX-MSMNs). The Ap-DOX-MSMNs demonstrated a therapeutic effect on cancer cells superior to that of free DOX, proving the potential of these MSMNs as an external stimulus-responsive drug delivery system.
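
The chain-motion simulations mentioned in the abstract are commonly built on a simple torque balance: a rigid chain with magnetic moment m in a field B rotating at rate ω obeys γ dθ/dt = mB sin(ωt − θ), where γ is the rotational drag. The sketch below integrates this model; all parameter values are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a rigid magnetic nanochain driven by a rotating field.
# Torque balance: gamma * dtheta/dt = m*B * sin(omega*t - theta).
# Parameter values are illustrative assumptions, not the paper's.
import numpy as np
from scipy.integrate import solve_ivp

m_B = 1.0     # magnetic torque scale m*B (arbitrary units)
gamma = 0.5   # rotational drag coefficient (grows with chain length)
omega = 1.5   # field rotation rate

def dtheta_dt(t, theta):
    return (m_B / gamma) * np.sin(omega * t - theta)

sol = solve_ivp(dtheta_dt, (0, 50), [0.0], max_step=0.05)

# If omega <= m_B/gamma the chain phase-locks to the field (synchronous
# rotation); above that critical rate it slips. Either regime changes how
# the chain stirs the surrounding liquid, and hence the drug release rate.
lag = omega * sol.t[-1] - sol.y[0, -1]
print(f"final phase lag: {lag:.2f} rad")
```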

A novel approach for the definition and detection of structural irregularity in reinforced concrete buildings

  • S.P. Akshara;M. Abdul Akbar;T.M. Madhavan Pillai;Renil Sabhadiya;Rakesh Pasunuti
    • Structural Monitoring and Maintenance
    • /
    • v.11 no.2
    • /
    • pp.101-126
    • /
    • 2024
  • To avoid irregularities in buildings, design codes worldwide have introduced detailed guidelines for checking and rectifying them. However, the criteria used to define and identify each of the plan and vertical irregularities are specific and may vary between the codes of different countries, making them difficult to implement. This short communication paper proposes a novel approach for quantifying different types of structural irregularities using a common parameter, termed the unified identification factor, which is defined exclusively for the columns based on their axial loads and tributary areas. The calculation of the identification factor is demonstrated through the analysis of rectangular and circular reinforced concrete models in ETABS v18.0.2, which are further modified to generate plan-irregular (torsional irregularity, cut-out in floor slab, and non-parallel lateral force system) and vertically irregular (mass irregularity, vertical geometric irregularity, and floating columns) models. The identification factor is calculated for all the columns of a building, and the range within which the values lie is identified. The results indicate that this range is very wide for an irregular building compared to one with a regular configuration, implying a strong correlation between the identification factor and structural irregularity. Further, the identification factor is compared across columns within a floor and between floors for each building model. The findings suggest that the value will be abnormally high or low for a column in the vicinity of an irregularity. The proposed factor could thus be used in the preliminary structural design phase to eliminate complications that might arise from the geometry of the structure when subjected to lateral loads. The unified approach could also be incorporated in future revisions of codes as a replacement for the numerous criteria currently used to classify different types of irregularities.
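
The screening logic, a per-column factor whose spread flags irregularity, can be sketched numerically. The abstract does not give the factor's exact formula, so the code below assumes a simple proxy (axial load divided by tributary area, normalized by the floor median) purely to illustrate the range-based check; the loads and areas are invented.

```python
# Illustrative screening in the spirit of the paper's unified identification
# factor. The exact formula is not given in the abstract, so this sketch
# ASSUMES a proxy: column axial load / tributary area, normalized per floor.
import numpy as np

# rows = floors, cols = columns; axial loads in kN, tributary areas in m^2
# (all values hypothetical)
axial_load = np.array([[820, 790, 1450, 810],
                       [540, 525,  980, 530]])
trib_area = np.array([[16.0, 15.5, 16.2, 15.8],
                      [16.0, 15.5, 16.2, 15.8]])

factor = axial_load / trib_area                      # kN/m^2 per column
norm = factor / np.median(factor, axis=1, keepdims=True)

print("normalized factors per floor:\n", np.round(norm, 2))
print("overall spread:", round(float(norm.max() - norm.min()), 2))
# A wide spread, or a single column far from ~1.0 (here column 3, which
# might sit beside a slab cut-out or carry a floating column), flags a
# likely irregularity for closer code-based checks.
```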

The International Legality of the North Korean Missile Test (북한미사일 실험의 국제법상 위법성에 관한 연구)

  • Shin, Hong-Kyun
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.24 no.2
    • /
    • pp.211-234
    • /
    • 2009
  • North Korea conducted a launcher test which, as North Korea claimed, fell within its sovereign right to the peaceful use and exploration of outer space. The launch was allegedly carried out for the sole purpose of putting a satellite into Earth orbit, while the international community stressed that the orbiting of a satellite was never confirmed and that the technology used was indistinguishable from that required to build an intercontinental ballistic missile. The UN Security Council adopted resolutions to the effect that the launch was deemed a missile launch, not a mere launcher test. North Korea declared a moratorium suspending its test activity. Controversial issues have been raised as to whether the launcher itself enjoys the legal status of freedom of space flight under the 1967 Outer Space Treaty. The resolutions, however, have put forward a binding instrument forbidding the launch. The UN Security Council resolutions should nevertheless not be read as declaring the missile test illegal, in that resolution language such as 'demand' does not formulate an obligation to act or to refrain from acting. On the other hand, the resolutions should be read as having binding force with respect to any activity relating to weapons of mass destruction. Resolution 1718 is written in more specific language: it 'decides that the DPRK shall suspend all activities related to its ballistic missile programme and in this context re-establish its pre-existing commitments to a moratorium on missile launching'. Therefore, the launching activity of North Korea is banned by UN Security Council resolution. It should be noted that the resolutions do not include any specific provision defining North Korea's space activity as illegal. However, the legal effect of the moratorium on the launching itself is not denied, as that launching corresponds to the missile testing clearly banned in accordance with the resolutions.

Comparing Misconceptions of Scientifically-Gifted and General Elementary Students in Physics Classes (초등학교 과학 영재와 일반 학생의 물리 오개념 비교)

  • Kwon, Sung-Gi;Kim, Ji-Eun
    • Journal of Korean Elementary Science Education
    • /
    • v.25 no.spc5
    • /
    • pp.476-484
    • /
    • 2007
  • The purpose of this study is to examine the misconception profiles of scientifically-gifted and non-gifted children in terms of basic physics concepts, and to compare them in the types of misconceptions as well as in their understanding of the concepts themselves. The subjects of this study were 75 scientifically-gifted children attending the Educational Center of Gifted Children at DNUE and 148 non-gifted children in elementary schools in Daegu city. For the purposes of this study, the basic concepts of physics (heat, electromagnetism, force, and light) that should be learned in elementary school were selected through a review of related previous research and an analysis of the 7th science curriculum. Next, a questionnaire was constructed, made up of 20 multiple-choice, statement-based items. It was hoped that analysis of the statement sections of the test would reveal the differences in understanding between the scientifically-gifted and non-gifted children, while the responses to the multiple-choice items would suggest the differences between the two groups in their misconceptions about physics concepts. The results of this study are as follows. First, although both the gifted and non-gifted children showed a low level of understanding of the concepts of heat, electromagnetism, force, and light, the gifted children's level of understanding of those physics concepts proved to be significantly higher than the non-gifted children's, suggesting that the scientifically-gifted children have fundamentally understood the concepts in physics and hold a higher level of understanding of them. Additionally, both groups' levels of understanding of all the concepts became lower in the order of electromagnetism, heat, force, and light. This shows that the scientifically-gifted and non-gifted children do not differ in their level of understanding of any specific physics concept, but have similar levels of difficulty with every concept. Second, both groups showed similar types of misconceptions. However, the scientifically-gifted children had fewer misconceptions than the non-gifted. We suggest that the scientifically-gifted children's misconceptions were not yet fixed, so there remained a possibility of correcting them easily with appropriate instruction.
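
The reported group difference ("significantly higher" understanding among gifted children) is the kind of claim usually backed by a two-sample significance test; the abstract does not name the test used. As a minimal, hedged illustration with invented score vectors matching the group sizes:

```python
# Minimal sketch of a gifted vs. non-gifted comparison: an independent
# two-sample t-test on concept-understanding scores. The score vectors
# are invented placeholders, not the study's data, and the study may
# have used a different test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
gifted = rng.normal(14.2, 3.0, 75)        # 75 gifted children
non_gifted = rng.normal(11.8, 3.2, 148)   # 148 non-gifted children

# Welch's t-test (no equal-variance assumption)
t, p = stats.ttest_ind(gifted, non_gifted, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < .05 -> significantly higher mean
```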

Design and Full Size Flexural Test of Spliced I-type Prestressed Concrete Bridge Girders Having Holes in the Web (분절형 복부 중공 프리스트레스트 콘크리트 교량 거더의 설계 및 실물크기 휨 실험 분석)

  • Han, Man Yop;Choi, Sokhwan;Jeon, Yong-Sik
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.31 no.3A
    • /
    • pp.235-249
    • /
    • 2011
  • A new form of I-type PSC bridge girder, which has holes in the web, is proposed in this paper. Three different concepts were combined in the design. First, the girder is precast at a manufacturing plant in divided pieces and assembled at the construction site by post-tensioning, dramatically reducing the construction period at the site. In this way, the quality of the concrete can be assured and its curing well controlled at the factory, and the spliced girder segments can be moved to the construction site without transportation problems. Second, numerous holes were made in the web of the girder. This reduces the self-weight of the girder; more importantly, about half of the total anchorages can be moved from the girder ends into the individual holes. The magnitude of the negative moment developed at the girder ends is thereby reduced, and since the longitudinal compressive stresses at the ends are also reduced, a thick end diaphragm is not necessary. Third, the prestressing force was introduced into the member in multiple stages. This multi-stage prestressing method overcomes the limit on prestressing force imposed by the allowable stresses at each loading stage and maximizes the applicable prestressing force, allowing the girder to be longer and shallower. Two 50 m long full-scale girders were fabricated and tested. One was a non-spliced (monolithic) girder made as one piece from the beginning; the other was assembled by post-tensioning from five segments. The monolithic and spliced girders showed similar load-deflection relationships and crack patterns, and both satisfied the applicable girder design specifications for flexural strength, deflection, and the live-load deflection control limit. Both spliced and monolithic holed-web post-tensioned girders can be used to achieve span lengths of more than 50 m with a girder height of 2 m.
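
The allowable-stress logic behind multi-stage prestressing can be made concrete with the classic extreme-fiber formulas (compression negative): σ_top = −P/A + P·e/S_t − M/S_t and σ_bot = −P/A − P·e/S_b + M/S_b. The sketch below checks stage-by-stage limits; the section properties, loads, and allowable stresses are assumptions for illustration, not the paper's design values.

```python
# Extreme-fiber stress check for staged prestressing (compression negative).
# sigma_top = -P/A + P*e/S_t - M/S_t ;  sigma_bot = -P/A - P*e/S_b + M/S_b
# All numbers are illustrative assumptions, not the paper's design values.

A = 0.95                 # cross-section area, m^2
S_t, S_b = 0.55, 0.60    # top/bottom section moduli, m^3
ALLOW_C, ALLOW_T = -24.0e3, 1.5e3   # allowable stresses, kPa

def fiber_stresses(P, e, M):
    """P: prestress (kN), e: tendon eccentricity (m), M: moment (kN*m)."""
    top = -P / A + P * e / S_t - M / S_t
    bot = -P / A - P * e / S_b + M / S_b
    return top, bot

# Stage 1: partial prestress, self-weight moment only.
# Stage 2: remaining tendons stressed once the deck load is present.
stages = [("stage 1", 9000, 0.75, 5200), ("stage 2", 16000, 0.75, 14500)]
for name, P, e, M in stages:
    top, bot = fiber_stresses(P, e, M)
    ok = all(ALLOW_C <= s <= ALLOW_T for s in (top, bot))
    print(f"{name}: top={top:,.0f} kPa, bot={bot:,.0f} kPa, within limits: {ok}")

# Stressing all tendons at stage 1 (P=16000 with only M=5200) would push the
# bottom fiber past the compression limit; staging keeps every load stage
# inside the allowable-stress envelope, which is why a larger total
# prestressing force becomes usable.
```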

Quality improvement and aging effect of beef by low-temperature treatment of non-preferred parts of beef (비선호 부위 소고기의 저온처리에 의한 품질향상 및 소고기의 숙성효과)

  • Hyun Kyoung Kim;Soon Cheol Kim;Hyeon Jin Kim;Yeong Mi Kim
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.5
    • /
    • pp.753-760
    • /
    • 2023
  • In this study, quality improvement of grade 1 beef, compared with grade 1++ beef, was attempted by varying the low-temperature treatment and aging period. The fat content and shear force of grade 1++ beef were 13.03% and 114.26 N, whereas those of grade 1 beef were 3.21% and 149.67 N. Meanwhile, after low-temperature treatment of grade 1 beef at -26℃ for 12 hours and low-temperature aging at 0℃ for 14 days, the shear force was greatly reduced to 87.85 N, improving overall preference, softness, juiciness, flavor, and chewing texture. The essential free amino acid content was as low as 22.17 mg/100g in grade 1++ beef but high, at 41.31~45.11 mg/100g, in the three grade 1 samples, with no change in content according to the cold-treatment conditions. Among specific components of beef, taurine was 30.94~34.41 mg/100g, with little difference in content between grades, but anserine and creatine were low in grade 1++ beef (19.68 mg/100g and 70.01 mg/100g) and high in grade 1 beef (26.38~31.23 mg/100g and 154.09~167.26 mg/100g). The content ratio of oleic acid (C18:1) to stearic acid (C18:0), a monounsaturated-to-saturated fatty acid ratio, was low (5.29) for grade 1++ beef but high (6.13~6.78) for grade 1 beef. In addition, this fatty acid ratio showed no trend with the low-temperature treatment conditions or the aging period for grade 1 beef. As a result of this study, the quality of grade 1 beef could be improved by low-temperature treatment at -26℃ for 12 hours followed by aging at 0℃ for 14 days.

A Conceptual Review of the Transaction Costs within a Distribution Channel (유통경로내의 거래비용에 대한 개념적 고찰)

  • Kwon, Young-Sik;Mun, Jang-Sil
    • Journal of Distribution Science
    • /
    • v.10 no.2
    • /
    • pp.29-41
    • /
    • 2012
  • This paper undertakes a conceptual review of transaction cost to broaden the understanding of the transaction cost analysis (TCA) approach. More than 40 years have passed since Coase's fundamental insight that transaction, coordination, and contracting costs must be considered explicitly in explaining the extent of vertical integration. Coase (1937) forced economists to identify previously neglected constraints on the trading process that foster efficient intrafirm, rather than interfirm, transactions. The transaction cost approach to the study of economic organization regards transactions as the basic units of analysis and holds that understanding transaction cost economizing is central to organizational study. The approach applies to determining efficient boundaries, as between firms and markets, and to the internal organization of transactions, including the design of employment relations. TCA, developed principally by Oliver Williamson (1975, 1979, 1981a), blends institutional economics, organizational theory, and contract law. Further progress in transaction cost research awaits the identification of the critical dimensions in which transaction costs differ and an examination of the economizing properties of alternative institutional modes for organizing transactions. The crucial investment distinction is: to what degree are transaction-specific (non-marketable) expenses incurred? Unspecialized items pose few hazards, since buyers can turn to alternative sources and suppliers can sell output intended for one order to other buyers. Non-marketability problems arise when the identities of specific parties have important cost-bearing consequences; transactions of this kind are labeled idiosyncratic. The summarized results of the review are as follows. First, firms' distribution decisions often prompt examination of the make-or-buy question: should a marketing activity be performed within the organization by company employees or contracted to an external agent? Second, manufacturers introducing an industrial product to a foreign market face a difficult decision: should the product be marketed primarily by captive agents (the company sales force and distribution division) or by independent intermediaries (outside sales agents and distributors)? Third, the authors develop a theoretical extension to the basic transaction cost model by combining insights from various theories with the TCA approach. Fourth, other such extensions are likely required for the general model to be applied to different channel situations; it is naive to assume the basic model applies across markedly different channel contexts without modification and extension. Although this study contributes to scholarly research, it is limited by several factors. First, the theoretical perspective of TCA has attracted considerable recent interest in the area of marketing channels. The analysis aims to match the properties of efficient governance structures with the attributes of the transaction. Second, empirical evidence about TCA's basic propositions is sketchy. Apart from Anderson's (1985) study of the vertical integration of the selling function and John's (1984) study of opportunism by franchised dealers, virtually no marketing studies involving the constructs implicated in the analysis have been reported. We hope, therefore, that further research will clarify the distinctions between the different aspects of specific assets. Another important line of future research is the integration of efficiency-oriented TCA with organizational approaches that emphasize the conceptual definition of specific assets and industry structure. Finally, research on transaction costs, uncertainty, opportunism, and switching costs is critical to future study.

An integrated Method of New Casuistry and Specified Principlism as Nursing Ethics Methodology (새로운 간호윤리학 방법론;통합된 사례방법론)

  • Um, Young-Rhan
    • Journal of Korean Academy of Nursing Administration
    • /
    • v.3 no.1
    • /
    • pp.51-64
    • /
    • 1997
  • The purpose of the study was to introduce an integrated approach of new casuistry and specified principlism for resolving ethical problems and studying nursing ethics. In studying clinical ethics and nursing ethics, there is no systematic research method. While nurses often experience ethical dilemmas in practice, much of the previous research on nursing ethics has focused merely on describing the existing problems. In addition, ethicists have presented theoretical analysis and critique rather than specific problem-solving strategies. In clinical situations there is a need for an integrated method that can provide an objective description of existing problem situations as well as specific problem-solving methods. We inherit two distinct ways of discussing ethical issues. One frames these issues in terms of principles, rules, and other general ideas; the other focuses on the specific features of particular kinds of moral cases. In the first way, general ethical rules relate to specific moral cases in a theoretical manner, with universal rules serving as "axioms" from which particular moral judgments are deduced as theorems. In the second, this relation is frankly practical, with general moral rules serving as "maxims" which can be fully understood only in terms of the paradigmatic cases that define their meaning and force. Theoretical arguments are structured in ways that free them from any dependence on the circumstances of their presentation and ensure them a validity of a kind that is not affected by the practical context of use. In formal arguments, particular conclusions are deduced from ("entailed by") the initial axioms or universal principles that are the apex of the argument, so the truth or certainty that attaches to those axioms flows downward to the specific instances to be "proved". In the language of formal logic, the axioms are major premises, the facts that specify the present instance are minor premises, and the conclusion to be "proved" follows necessarily from the initial premises. Practical arguments, by contrast, involve a wider range of factors than formal deductions and are read with an eye to their occasion of use. Instead of aiming at strict entailments, they draw on the outcomes of previous experience, carrying over the procedures used to resolve earlier problems and reapplying them in new problematic situations. Practical arguments depend for their power on how closely the present circumstances resemble those of the earlier precedent cases for which this particular type of argument was originally devised. So, in practical arguments, the truths and certitudes established in precedent cases pass sideways to provide "resolutions" of later problems. In the language of rational analysis, the facts of the present case define the grounds on which any resolution must be based, while the general considerations that carried weight in similar situations provide warrants that help settle future cases. The resolution of any problem thus holds good presumptively; its strength depends on the similarities between the present case and the precedents, and its soundness can be challenged (or rebutted) in situations that are recognized as exceptional. Jonsen & Toulmin (1988) and Jonsen (1991) introduce new casuistry as a practical method. The Oxford English Dictionary defines casuistry quite accurately as "that part of ethics which resolves cases of conscience, applying the general rules of religion and morality to particular instances in which circumstances alter cases or in which there appears to be a conflict of duties." They modified the casuistry of the medieval ages for use in clinical situations, characterized by "the typology of cases and analogy as an inference method". A case is the unit of analysis. The structure of a case is made by the interaction of situation and moral rules: the situation is what surrounds or stands around, and the moral rule is the essence of the case. The analogy can be objective because "the grounds, the warrants, the theoretical backing, the modal qualifiers" are identified in the cases. Specified principlism is the method by which DeGrazia (1992) integrated principlism with the specification introduced by Richardson (1990). In this method, a principle is specified by adding information about the limitations of its scope and restricting its range; these must be substantive qualifications. The integrated method is a combination of new casuistry and specified principlism. For example, consider the study "Ethical problems experienced by nurses in the care of terminally ill patients" (Um, 1994). A semi-structured in-depth interview was conducted with fifteen nurses who mainly took care of terminally ill patients. In the first stage, twenty-one cases were identified as relevant to the topic and then classified into four types of problems; one of these types, for instance, was the patient's refusal of care. In the second stage, the ethical problems in each case were defined and the case was analyzed, examining the reasons, the ethical values, and the related ethical principles; the interpretation was then made synthetically by integrating the result of the analysis with the situation. The third stage was the ordering of the cases according to the result of the interpretation and the principles common to the cases. The first two stages follow the methodology of new casuistry, and the final stage follows the methodology of specified principlism. The common principles were the principle of autonomy and the principle of caring. The principle of autonomy was specified: when competent patients refuse care, the nurse should discontinue the care out of respect for the patients' decision. The principle of caring was also specified: when competent patients refuse care, nurses should continue to provide the care in spite of the patients' refusal, in order to preserve their life. These specifications may lead to opposite behaviors, which emphasizes the importance of nurses' will and intention in making decisions in clinical situations.

Using the METHONTOLOGY Approach to a Graduation Screen Ontology Development: An Experiential Investigation of the METHONTOLOGY Framework

  • Park, Jin-Soo;Sung, Ki-Moon;Moon, Se-Won
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.125-155
    • /
    • 2010
  • Ontologies have been adopted in various business and scientific communities as a key component of the Semantic Web. Despite the increasing importance of ontologies, ontology developers still perceive construction tasks as a challenge. A clearly defined and well-structured methodology can reduce the time required to develop an ontology and increase the probability of success of a project. However, no reliable knowledge-engineering methodology for ontology development currently exists; every methodology has been tailored toward the development of a particular ontology. In this study, we developed a Graduation Screen Ontology (GSO). The graduation screen domain was chosen for several reasons. First, the graduation screen process is a complicated task requiring a complex reasoning process. Second, GSO may be reused by other universities because the graduation screen process is similar at most universities. Finally, GSO can be built within a given period because the size of the selected domain is reasonable. No standard ontology development methodology exists; thus, one of the existing methodologies had to be chosen. The most important considerations in selecting the ontology development methodology for GSO included whether it can be applied to a new domain, whether it covers a broad set of development tasks, and whether it gives a sufficient explanation of each development task. We evaluated various ontology development methodologies based on the evaluation framework proposed by Gómez-Pérez et al. and concluded that METHONTOLOGY was the most applicable to the building of GSO for this study. METHONTOLOGY was derived from the experience of developing the Chemical Ontology at the Polytechnic University of Madrid by Fernández-López et al. and is regarded as the most mature ontology development methodology. METHONTOLOGY describes a very detailed approach for building an ontology at the conceptual level under a centralized development environment. The methodology consists of three broad processes, each containing specific sub-processes: management (scheduling, control, and quality assurance); development (specification, conceptualization, formalization, implementation, and maintenance); and support (knowledge acquisition, evaluation, documentation, configuration management, and integration). An ontology development language and an ontology development tool for GSO construction also had to be selected. We adopted OWL-DL as the ontology development language because of its computational support for consistency checking and classification, which is crucial in developing coherent and useful ontological models for very complex domains. In addition, Protégé-OWL was chosen as the ontology development tool because it is supported by METHONTOLOGY and is widely used thanks to its platform-independent characteristics. Based on the researchers' experience of developing GSO, several issues relating to METHONTOLOGY, OWL-DL, and Protégé-OWL were identified. We focus on presenting the drawbacks of METHONTOLOGY and discussing how each weakness could be addressed. First, METHONTOLOGY insists that domain experts without ontology construction experience can easily build ontologies; however, it is still difficult for such domain experts to develop a sophisticated ontology, especially if they have insufficient background knowledge related to the ontology. Second, METHONTOLOGY does not include a development stage called the "feasibility study." This pre-development stage helps developers ensure not only that a planned ontology is necessary and sufficiently valuable to begin an ontology-building project, but also that the project is likely to be successful. Third, METHONTOLOGY excludes an explanation of the use and integration of existing ontologies; if an additional stage for considering reuse were introduced, developers might share the benefits of reuse. Fourth, METHONTOLOGY fails to address the importance of collaboration. The methodology needs to explain the allocation of specific tasks to different developer groups and how to combine these tasks once the given jobs are completed. Fifth, METHONTOLOGY does not sufficiently describe the methods and techniques applied in the conceptualization stage; introducing methods for extracting concepts from multiple informal sources or for identifying relations may enhance the quality of ontologies. Sixth, METHONTOLOGY does not provide an evaluation process to confirm whether WebODE perfectly transforms a conceptual ontology into a formal ontology, nor does it guarantee that the outcomes of the conceptualization stage are completely reflected in the implementation stage. Seventh, METHONTOLOGY needs to add criteria for user evaluation of the actual use of the constructed ontology in user environments. Eighth, although METHONTOLOGY allows continual knowledge acquisition during the ontology development process, consistent updates can be difficult for developers. Ninth, METHONTOLOGY demands that developers complete various documents during the conceptualization stage; thus, it can be considered a heavyweight methodology. Adopting an agile methodology would reinforce active communication among developers and reduce the burden of documentation. Finally, this study concludes with contributions and practical implications. No previous research has addressed issues related to METHONTOLOGY from empirical experience; this study is an initial attempt. In addition, several lessons learned from the development experience are discussed. This study also offers insights for ontology methodology researchers who want to design a more advanced ontology development methodology.
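
The OWL-DL style of modeling the authors adopted, classes, properties, and reasoner-classified defined classes, can be sketched in a few lines with the Python owlready2 package. The names below (Student, Course, hasCompleted, GraduationCandidate) are hypothetical illustrations of a graduation-screen domain, not the paper's actual GSO vocabulary.

```python
# Minimal OWL-DL flavor of a graduation-screen model using owlready2.
# All names (Student, Course, hasCompleted, GraduationCandidate) are
# hypothetical illustrations, not the paper's actual GSO classes.
from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/gso-sketch.owl")

with onto:
    class Student(Thing): pass
    class Course(Thing): pass

    class hasCompleted(ObjectProperty):
        domain = [Student]
        range = [Course]

    # A defined (equivalent) class: OWL-DL lets a reasoner classify any
    # student with at least 30 completed courses as a GraduationCandidate.
    class GraduationCandidate(Student):
        equivalent_to = [Student & hasCompleted.min(30, Course)]

alice = Student("alice")
alice.hasCompleted = [Course(f"course_{i}") for i in range(30)]

# With a DL reasoner attached (e.g., owlready2's sync_reasoner(), which
# requires Java/HermiT), alice would be inferred to be a GraduationCandidate.
onto.save(file="gso-sketch.owl")
```

This is the kind of consistency-checking and automatic classification the abstract cites as the reason for choosing OWL-DL over weaker languages.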