• Title/Summary/Keyword: One Time Key

Search Results: 1,288

An Exploratory Study on Channel Equity of Electronic Goods (가전제품 소비자의 Channel Equity에 관한 탐색적 연구)

  • Suh, Yong-Gu; Lee, Eun-Kyung
    • Journal of Global Scholars of Marketing Science / v.18 no.3 / pp.1-25 / 2008
  • Ⅰ. Introduction: Retailers in the 21st century are being told that the retailers of the future are those who can execute seamless multi-channel access, because retailers should be where shoppers want them, when they want them, anytime, anywhere and in multiple formats. Multi-channel access is considered one of the top 10 trends for all businesses in the next decade (Patricia T. Warrington et al., 2007), and most firms use both direct and indirect channels in their markets. Given this trend, channel equity needs to be evaluated more systematically than before, as the issue is expected to draw more attention from consumers as well as brand managers. Consumers are becoming confused about where to shop for durable goods, as there are at least six or seven retail options. Manufacturers, for their part, have to deal with category killers, their own dealer networks, Internet shopping malls and other distribution channels, and they hope their retail channels will behave like extensions of their own companies. They would like their products to be foremost in the retailer's mind: the first to be proposed and effectively communicated to potential customers. For this hope to become reality, they need to know each channel's advantages and disadvantages from the consumer's perspective. In addition, customer satisfaction is the key determinant of retail customer loyalty, yet there is little research on how shopping satisfaction and perceptions affect consumers' channel choices. The purpose of this study was to assess Korean consumers' channel choices and their satisfaction with the channels they prefer for electronic goods shopping. The Korean electronic goods retail market is a good example of a multi-channel shopping environment.
The Korean retail market has been undergoing significant structural changes since it opened to global retailers in 1996, and new formats such as hypermarkets, Internet shopping malls and category killers have arrived over the last decade. Korean electronic goods shoppers have seven major channels: (1) category killers, (2) hypermarkets, (3) manufacturer dealer shops, (4) Internet shopping malls, (5) department stores, (6) TV home shopping and (7) speciality shopping arcades. The Korean retail sector has modernized with remarkable speed over the last decade. In summary: hypermarkets have been the number 1 retailer type in sales volume since 2003; non-store retailing has been number 2 since 2007; department stores are now number 3; and small-scale category killers are growing rapidly, particularly in electronics and office products. We evaluated each channel's equity using a consumer survey, conducted by telephone interview with 1,000 housewives nationwide. Sampling followed the 2005 national census, and the average interview took 10 to 15 minutes. Ⅱ. Research Summary: We found that the seven major retail channels compete with each other in Korean consumers' minds in terms of price and service, and each channel seems to have its own unique selling points. Department stores were perceived as the best electronic goods shopping destinations because of their after-sales service. Internet shopping malls were perceived as convenient for price checking. Category killers and hypermarkets were attractive for both price and location convenience, while manufacturers' dealer networks drew customers mainly through location and after-sales service. Category killers and hypermarkets were the retail channels Korean consumers liked best; however, category killers compete mainly with department stores and shopping arcades, while hypermarkets tend to compete with Internet and TV home shopping channels.
Regarding channel satisfaction, the top three channels were service-driven retailers: department stores (4.27), dealer shops (4.21) and Internet shopping malls (4.21). Speciality shopping arcades (3.98) were the channels Korean consumers were least satisfied with. Ⅲ. Implications: We tried to capture the whole picture of the multi-channel retail shopping environment and its implications in the context of Korean electronic goods. From the manufacturers' perspective, multiple channels may cause channel conflict, and inter-channel competition is drawing ever more attention as hypermarkets and category killers have grown rapidly in recent years. At the same time, from the consumers' perspective, 'where to buy' is becoming an important buying decision because it determines the level of shopping satisfaction. The concept of 'channel equity' needs to be developed to manage multi-channel distribution effectively: firms should measure and monitor their prime channel equity on a regular basis to maximize their channel potential. A prototype channel equity positioning map has been developed accordingly, and we expect further studies to develop the concept of 'channel equity' in the future.


A Study on Profitability of the Allianced Discount Program with Credit Cards and Loyalty Cards in Food & Beverage Industry (제휴카드 할인프로그램이 외식업의 수익성에 미치는 영향)

  • Shin, Young Sik; Cha, Kyoung Cheon
    • Asia Marketing Journal / v.12 no.4 / pp.55-78 / 2011
  • Recently, strategic alliances between business firms have become prevalent as a way to overcome growing competitive threats and to supplement the resource limitations of individual firms. As one such allied sales promotion activity, a new type of discount program, the so-called "Alliance Card Discount", has been introduced through partnerships between credit cards and loyalty cards. The program pursues short-term sales growth through a larger discount scheme while spending less through cost sharing among the alliance partners, so it can be regarded as a cost-efficient discount promotion. However, because there is no solid evidence that it actually delivers profitable sales growth, an empirical study of its effects on sales and profit is needed. This study addresses two basic research questions concerning the effects of the alliance discount program: 1) the possibility of a sales increase and 2) the profitability of the discount-driven sales. In the food and beverage industry, sales increases mainly come from increased guest counts. In family restaurants in particular, increasing the number of guests means enlarging the size of the visiting group (the number of visitors per group), because customers visit in groups on special occasions. And because they pay the bill by group (table), the increase in sales per table is a key measure of sales improvement. Past research on price and discount sensitivity and reference discount rates shows that price-sensitive consumers have a narrow reference discount zone and make rational purchase decisions. Unlike the always-on discounts of regular sales promotions, the alliance card discount program only provides the right to a discount, like a discount coupon. Because it is usually a once-a-month opportunity granted on the basis of the previous month's usage, customers tend to perceive the alliance card discount as a rare chance, so we can expect them to try to maximize the discount effect when they use this limited opportunity.
Considering the group-visit practice and low visit frequency of family restaurants, the way to maximize the discount effect is to increase the size of the visiting group; meanwhile, discount sensitivity and rational consumption behavior deter additional spending on high-priced menu items, even when customers obtain considerable savings from the discount. From the analysis of four months of sales data paid with alliance discount cards, we found the following: 1) the relation between discount rate and number of guests per table is positive, with a 25% discount resulting in one additional guest; 2) the relation between discount rate and spending per guest is negative; 3) nevertheless, total profit per table increases as the discount rate increases; and 4) reward point accumulation and redemption showed no significant relationship with the increase in the number of guests. These results suggest that the alliance discount program contributes substantially to sales and profit by increasing the number of guests per table. Although spending per guest decreases as the discount rate rises, total profit per table improves: the incremental profit from the increased guest count offsets the profit decrease. An additional intriguing finding is that the point reward system has no significant impact on the number of guests, even though the accumulation and redemption of loyalty points are usually regarded by customers as another form of savings. In sum, because the alliance discount program with credit cards and loyalty cards proved effective for both sales growth and profit increase, the alliance card program can be recommended as a strategically viable program.
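
The per-table arithmetic reported above can be sketched numerically. In the toy model below, the baseline guest count, per-guest spending, margin, cost-share ratio and spending decline are all hypothetical placeholders (not estimates from the paper); only the directional relations and the "25% discount ≈ one extra guest" figure come from the abstract.

```python
# Toy illustration of the abstract's three findings; every baseline figure
# below is a hypothetical placeholder, not a number from the paper.

BASE_GUESTS = 3.0     # hypothetical guests per table at a 0% discount
BASE_SPEND = 20000.0  # hypothetical spending per guest (KRW) at a 0% discount
MARGIN = 0.30         # hypothetical gross margin
COST_SHARE = 0.40     # hypothetical share of the discount borne by the restaurant
SPEND_DROP = 0.20     # hypothetical per-guest spending decline per unit of discount

def guests_per_table(discount_rate):
    # Finding 1: guest count rises with the discount rate
    # (a 25% discount brings roughly one additional guest).
    return BASE_GUESTS + discount_rate / 0.25

def spend_per_guest(discount_rate):
    # Finding 2: spending per guest falls as the discount rate rises.
    return BASE_SPEND * (1.0 - SPEND_DROP * discount_rate)

def profit_per_table(discount_rate):
    # Finding 3: because the discount cost is shared among alliance partners,
    # the extra guests can more than offset the lower per-guest spending.
    revenue = guests_per_table(discount_rate) * spend_per_guest(discount_rate)
    return revenue * (1.0 - COST_SHARE * discount_rate) * MARGIN
```

With these placeholder numbers, `profit_per_table(0.25)` exceeds `profit_per_table(0.0)`, mirroring finding 3; whether that holds in practice depends on the actual cost-share arrangement and demand response.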


Construction of Event Networks from Large News Data Using Text Mining Techniques (텍스트 마이닝 기법을 적용한 뉴스 데이터에서의 사건 네트워크 구축)

  • Lee, Minchul; Kim, Hea-Jin
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.183-203 / 2018
  • News articles are the most suitable medium for examining events occurring at home and abroad. In particular, as the development of information and communication technology has brought various kinds of online news media, news about events occurring in society has increased greatly. Automatically summarizing key events from massive amounts of news data would therefore help users survey many events at a glance, and if an event network based on the relevance between events is built and provided, it can greatly help readers understand current events. In this study, we propose a method for extracting event networks from large news text corpora. To this end, we first collected Korean political and social articles from March 2016 to March 2017 and, in preprocessing, used NPMI and Word2Vec to keep only meaningful words and integrate synonyms. Latent Dirichlet allocation (LDA) topic modeling was used to calculate the topic distribution by date, find peaks in each topic's distribution and detect events. A total of 32 topics were extracted, and the occurrence time of each event was inferred from the point at which the corresponding topic distribution surged. As a result, 85 events were initially detected, of which a final 16 events were filtered and presented using Gaussian smoothing. We then calculated relevance scores between the detected events to construct the event network: using the cosine coefficient between co-occurring events, we computed the relevance between events and connected them. Finally, we set up the event network with each event as a vertex and the relevance score between events as the weight of the edge connecting them.
The event network constructed by our method allowed us to sort out the major events in Korean politics and society over the past year in chronological order and, at the same time, to identify which events are related to which. Our approach differs from existing event detection methods in that LDA topic modeling makes it possible to analyze large amounts of data easily and to identify relationships between events that were difficult to detect with existing methods. We also applied various text mining techniques, including Word2Vec, in the preprocessing step to improve the accuracy of extracting proper nouns and compound nouns, which has been difficult in existing Korean text analysis. The event detection and network construction techniques in this study have the following practical advantages. First, LDA topic modeling, which is unsupervised, can easily extract topics, topic words and their distributions from a huge amount of data, and by using the date information of the collected news articles, the distribution of each topic can be expressed as a time series. Second, by calculating relevance scores and constructing an event network from the co-occurrence of topics, we can present the connections between events in a summarized form that is difficult to grasp with existing event detection; indeed, the relevance-based event network proposed in this study turned out to be ordered by occurrence time, and the network also makes it possible to identify the event that served as the starting point of a series of events. A limitation of this study is that LDA topic modeling yields different results depending on the initial parameters and the number of topics, and the topic and event names in the analysis results must be assigned by the researcher's subjective judgment.
Also, since each topic is assumed to be exclusive and independent, the relevance between topics is not taken into account. Subsequent studies need to calculate the relevance between events not covered in this study, or between events belonging to the same topic.
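
The core of the pipeline described above (smooth each topic's daily weight, detect peaks as events, then link events by cosine similarity) can be illustrated with a small dependency-free sketch. The function names, window sizes and thresholds below are our own choices for illustration, not the authors' implementation.

```python
import math
from itertools import combinations

def gaussian_smooth(series, sigma=1.0, radius=3):
    # 1-D Gaussian-kernel smoothing of a daily topic-weight series.
    kernel = [math.exp(-(i * i) / (2 * sigma * sigma))
              for i in range(-radius, radius + 1)]
    total = sum(kernel)
    kernel = [k / total for k in kernel]
    n = len(series)
    out = []
    for t in range(n):
        acc = 0.0
        for offset, w in zip(range(-radius, radius + 1), kernel):
            idx = min(max(t + offset, 0), n - 1)  # clamp at the edges
            acc += w * series[idx]
        out.append(acc)
    return out

def detect_peaks(series, threshold):
    # An "event" here is a local maximum of the smoothed series above a threshold.
    return [t for t in range(1, len(series) - 1)
            if series[t] > threshold
            and series[t] >= series[t - 1]
            and series[t] > series[t + 1]]

def cosine(u, v):
    # Cosine coefficient between two occurrence vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def event_network(event_vectors, min_sim=0.5):
    # Connect two events when the cosine of their occurrence vectors meets
    # the similarity threshold; each edge carries the relevance score.
    edges = []
    for a, b in combinations(sorted(event_vectors), 2):
        sim = cosine(event_vectors[a], event_vectors[b])
        if sim >= min_sim:
            edges.append((a, b, sim))
    return edges
```

Running `detect_peaks` on a smoothed topic series yields candidate event dates, and `event_network` turns per-event occurrence vectors into weighted edges, i.e. the vertices-plus-relevance-scores structure the abstract describes.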

Standard Chemotherapy with Excluding Isoniazid in a Murine Model of Tuberculosis (마우스 결핵 모델에서 Isoniazid를 제외한 표준치료의 예비 연구)

  • Shim, Tae Sun; Lee, Eun Gae; Choi, Chang Min; Hong, Sang-Bum; Oh, Yeon-Mok; Lim, Chae-Man; Lee, Sang Do; Koh, Younsuck; Kim, Woo Sung; Kim, Dong Soon; Cho, Sang-Nae; Kim, Won Dong
    • Tuberculosis and Respiratory Diseases / v.65 no.3 / pp.177-182 / 2008
  • Background: Isoniazid (INH, H) is a key drug in the standard first-line regimen for the treatment of tuberculosis (TB), yet some reports have suggested that treatment efficacy is maintained even when INH is omitted from the regimen. Methods: One hundred forty C57BL/6 mice were infected with the H37Rv strain of M. tuberculosis using a Glas-Col aerosol generation device, depositing about 100 bacilli in the lung. Four weeks after infection, anti-TB treatment was initiated with varying regimens for 4-8 weeks: Group 1, no treatment (control); Group 2 (4HREZ), 4 weeks of INH, rifampicin (R), pyrazinamide (Z) and ethambutol (E); Group 3, 1HREZ/3REZ; Group 4, 4REZ; Group 5, 4HREZ/4HRE; Group 6, 1HREZ/3REZ/4RE; and Group 7, 4REZ/4RE. The lungs and spleens were harvested at several time points until 28 weeks after infection, and colony-forming unit (CFU) counts were determined. Results: CFU counts increased steadily after infection in the control group. In the 4-week treatment groups (Groups 2-4), even though cultures were negative at treatment completion, the bacilli grew again at the 12-week and 20-week time points after completion of treatment. In the 8-week treatment groups (Groups 5-7), the bacilli did not grow in the lung from 4 weeks after treatment initiation onward. In the spleens of Group 7, in which INH was omitted from the regimen, cultures were negative from 4 weeks after treatment initiation onward. However, in Groups 5 and 6, in which INH was taken continuously or intermittently, the bacilli grew in the spleen at some time points after completion of treatment. Conclusion: The exclusion of INH from the standard first-line regimen did not affect the treatment outcome in a murine model of early-stage TB. Further studies using a murine model of chronic TB are needed to clarify the role of INH in the standard first-line regimen for treating TB.

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.69-94 / 2017
  • Recently, growing demand for big data analysis has been driving the vigorous development of related technologies and tools, while advances in IT and the increased penetration of smart devices are producing large amounts of data. As a result, data analysis technology is rapidly becoming popular and attempts to obtain insights through data analysis keep increasing, which means big data analysis will become still more important across industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each party requesting the analysis. However, rising interest in big data analysis has spurred computer programming education and the development of many analysis tools; accordingly, the entry barriers to big data analysis are gradually being lowered, analysis technology is spreading, and big data analysis is increasingly expected to be performed by the requesters themselves. Along with this, interest in various kinds of unstructured data, especially text data, is continually increasing. The emergence of new web platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis are used in many fields. Text mining is a concept that embraces various theories and techniques for text analysis, and among them topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large number of documents, identifies the documents corresponding to each issue and provides the identified documents as clusters; it is considered very useful in that it reflects the semantic elements of the documents.
Traditional topic modeling is based on the distribution of key terms across the entire document collection, so the entire collection must be analyzed at once to identify the topic of each document. This makes analysis very slow when topic modeling is applied to many documents, and causes a scalability problem: processing time increases steeply with the number of objects analyzed. The problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome it, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method enables topic modeling over a large number of documents with limited system resources and improves processing speed; it can also significantly reduce analysis time and cost, because documents can be analyzed in each location without first being combined. Despite these advantages, however, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified in each sub-unit, but global topics cannot. Second, a method for measuring the accuracy of the proposed approach must be established; that is, taking the global topics as the ideal answer, the deviation of the local topics from the global topics needs to be measured. Because of these difficulties, this approach has not been studied as thoroughly as other topic modeling research. In this paper, we propose a topic modeling approach that addresses these two problems.
First, we divide the entire document cluster (the global set) into sub-clusters (local sets), and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by checking whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conducted experiments to evaluate the practical applicability of the proposed methodology, and through an additional experiment we confirmed that it provides results similar to topic modeling over the entire collection. We also propose a reasonable method for comparing the results of the two approaches.
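
One way to read the global-to-local mapping above is as a nearest-neighbour match between topic-term distributions. The sketch below, with invented topic names and term weights, pairs each local topic with its most similar global (RGS) topic by cosine similarity and scores agreement at the document level; it illustrates the idea, not the authors' implementation.

```python
import math

def cos_sim(p, q):
    # Cosine similarity between two sparse term-weight dicts.
    dot = sum(w * q.get(term, 0.0) for term, w in p.items())
    norm_p = math.sqrt(sum(w * w for w in p.values()))
    norm_q = math.sqrt(sum(w * w for w in q.values()))
    return dot / (norm_p * norm_q) if norm_p and norm_q else 0.0

def map_local_to_global(local_topics, global_topics):
    # For each local topic, pick the most similar global (RGS) topic.
    return {lid: max(global_topics,
                     key=lambda gid: cos_sim(lterms, global_topics[gid]))
            for lid, lterms in local_topics.items()}

def mapping_accuracy(doc_global, doc_local_mapped):
    # Fraction of documents assigned the same topic by the global run and
    # by the mapped local run (global topics taken as the ideal answer).
    agree = sum(1 for d in doc_global if doc_global[d] == doc_local_mapped.get(d))
    return agree / len(doc_global)
```

Treating the global assignment as ground truth, `mapping_accuracy` is one simple instance of the agreement measure the paper calls for when comparing the divide-and-conquer result against full-collection topic modeling.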

Delineating Transcription Factor Networks Governing Virulence of a Global Human Meningitis Fungal Pathogen, Cryptococcus neoformans

  • Jung, Kwang-Woo; Yang, Dong-Hoon; Maeng, Shinae; Lee, Kyung-Tae; So, Yee-Seul; Hong, Joohyeon; Choi, Jaeyoung; Byun, Hyo-Jeong; Kim, Hyelim; Bang, Soohyun; Song, Min-Hee; Lee, Jang-Won; Kim, Min Su; Kim, Seo-Young; Ji, Je-Hyun; Park, Goun; Kwon, Hyojeong; Cha, Sooyeon; Meyers, Gena Lee; Wang, Li Li; Jang, Jooyoung; Janbon, Guilhem; Adedoyin, Gloria; Kim, Taeyup; Averette, Anna K.; Heitman, Joseph; Cheong, Eunji; Lee, Yong-Hwan; Lee, Yin-Won; Bahn, Yong-Sun
    • 한국균학회소식:학술대회논문집 / 2015.05a / pp.59-59 / 2015
  • Cryptococcus neoformans causes life-threatening meningoencephalitis in humans, but the treatment of cryptococcosis remains challenging. To develop novel therapeutic targets and approaches, the signaling cascades controlling the pathogenicity of C. neoformans have been extensively studied, but the underlying biological regulatory circuits remain elusive, particularly because of the evolutionarily divergent set of transcription factors (TFs) in this basidiomycetous fungus. In this study, we constructed a high-quality library of 322 signature-tagged gene deletion strains for 155 putative TF genes, which had previously been predicted using the DNA-binding domain TF database (http://www.transcriptionfactor.org/). Using these 322 TF gene deletion strains, we tested in vivo and in vitro phenotypic traits under 32 distinct growth conditions. At least one phenotypic trait was exhibited by 145 of the 155 TF mutants (93%), and approximately 85% of the TFs (132/155) were functionally characterized for the first time in this study. Through high-coverage phenome analysis, we discovered a myriad of novel TFs that play critical roles in growth, differentiation, virulence-factor (melanin, capsule and urease) formation, stress responses, antifungal drug resistance and virulence. Large-scale virulence and infectivity assays in insect (Galleria mellonella) and mouse host models identified 34 novel TFs that are critical for pathogenicity. The genotypic and phenotypic data for each TF are available in the C. neoformans TF phenome database (http://tf.cryptococcus.org). In conclusion, our phenome-based functional analysis of the C. neoformans TF mutant library provides key insights into the transcriptional networks of basidiomycetous fungi and this ubiquitous human fungal pathogen.


Development of Systematic Process for Estimating Commercialization Duration and Cost of R&D Performance (기술가치 평가를 위한 기술사업화 기간 및 비용 추정체계 개발)

  • Jun, Seoung-Pyo; Choi, Daeheon; Park, Hyun-Woo; Seo, Bong-Goon; Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.139-160 / 2017
  • Technology commercialization creates economic value by linking a company's R&D processes and outputs to the market, and it matters because it allows a company to gain and maintain a sustained competitive advantage. For a specific technology to be commercialized, it goes through the stages of technology planning, research and development, and commercialization, a process that involves considerable time and money. The duration and cost of technology commercialization are therefore important decision inputs for determining a market entry strategy, and even more important information for a technology investor seeking to evaluate the technology's value rationally. Estimating them scientifically is thus very important; however, research on technology commercialization is scarce and related methodologies are lacking. In this study, we propose an evaluation model that estimates the duration and cost of commercializing R&D technology for small and medium-sized enterprises. To this end, we collected public data from the National Science & Technology Information Service (NTIS) and survey data provided by the Small and Medium Business Administration, and we developed an estimation model of commercialization duration and cost based on the market approach, one of the standard technology valuation methods. Specifically, we defined the commercialization process as consisting of development planning, development progress and commercialization. From the NTIS database and the SMBA survey of SME technical statistics, we derived key variables such as stage-wise R&D costs and durations, factors of the technology itself, factors of the technology's development, and environmental factors.
First, given the data, we estimated the costs and duration at each technology readiness level (basic research, applied research, development research, prototype production, commercialization) for each industry classification. We then developed and verified a research model for each industry classification. The results can be summarized as follows. First, the model can be incorporated into technology valuation and used to estimate the objective economic value of a technology: the duration and cost from the development stage to the commercialization stage are critical factors that strongly influence the discounting of the future sales generated by the technology, so estimating them scientifically from past data supports more reliable valuation. Second, we verified models of various kinds, both statistical and data mining models. The statistical models help identify the important factors for estimating the duration and cost of commercialization, while the data mining models yield rules or algorithms that can be applied in an advanced technology valuation system. Finally, this study reaffirms the importance of commercialization costs and durations, which have not been actively studied before. The results confirm the factors that significantly affect commercialization costs and duration, and show that these factors differ across industry classifications. Practically, the results can be incorporated into the technology valuation systems provided by national research institutes and used by R&D staff for sophisticated technology valuation, and the relevant logic or algorithms can be implemented independently, so researchers can apply them immediately.
In conclusion, the results of this study make substantial theoretical as well as practical contributions.
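
The role that commercialization duration and cost play in discounting can be shown with a minimal income-style sketch. Note that the study itself uses a market-approach estimation; this fragment, with entirely hypothetical figures and a flat cash-flow shape, only illustrates why the two estimated quantities matter for the value of a technology.

```python
def discounted_tech_value(annual_sales, margin, commercialization_years,
                          sales_years, discount_rate, commercialization_cost):
    # Present value of technology cash flows that begin only after the
    # commercialization period ends: a longer commercialization duration
    # pushes the cash flows further into the future, so they are
    # discounted more heavily, and the upfront cost is subtracted.
    value = 0.0
    for year in range(commercialization_years + 1,
                      commercialization_years + sales_years + 1):
        value += annual_sales * margin / (1.0 + discount_rate) ** year
    return value - commercialization_cost
```

Holding everything else fixed, a shorter estimated commercialization duration yields a higher present value, which is exactly why a reliable, data-driven estimate of duration (and cost) feeds directly into the valuation.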

A Study of an Alternative Broadcasting System: The Case of Channel 4 in Britain (대안적 방송제작시스템 연구 : 영국 채널4의 외주제작시스템을 중심으로)

  • Eun, Hye-Chung
    • Korean journal of communication and information / v.17 / pp.85-111 / 2001
  • In this article, Channel 4 in Britain is the main theme, since its alternative broadcasting system can shed light on the Korean case. Korea is entering the multimedia era, and including webcasting there are over a thousand channels available. However, the infrastructure for broadcasting content has never matured to match this need. Instead, the Korean production system is vertically integrated into the networks (KBS, MBC and SBS), which oligopolize the supply side of broadcasting. Even though the 'Program Quota Regulation' was established under the new Broadcasting Act (1999), old habits die hard and independent producers still have unfair relationships with the networks. Under these circumstances, Channel 4 is a good example of how well an alternative system can serve the diversity of broadcasting and the tastes of minorities. Channel 4 took almost 20 years to establish, because of the enormous amount of debate between all social sectors, including independent producers, about its public missions, the ideal broadcasting system, whom it should serve, and so on. Social agreement was reached on the point that the new broadcaster should not produce but publish, and it is accordingly called a 'publishing broadcaster'. In this sense, it can be managed effectively with comparatively little funding and, at the same time, can freely obtain all sorts of content and genres through its 'commissioning process', or by buying programs from even the most innovative producers. The commissioning process is one of the key points that make Channel 4 so unique: it is literally open to anybody, in particular to small-scale producers with innovative ideas. Once a program idea is accepted by a commissioning editor, Channel 4 provides financial support, as well as facilities and human resources, to produce the program.
An even better side of Channel 4 is its financial success. From the beginning, the 'funding formula' helped a great deal in enabling Channel 4 to carry out all sorts of innovative experiments; the history of the 'funding formula' and its contribution are also explained in the article. With all this, the article is intended to stimulate discussion about alternative broadcasting systems that might help prepare for the new era of broadcasting.


Using the METHONTOLOGY Approach to a Graduation Screen Ontology Development: An Experiential Investigation of the METHONTOLOGY Framework

  • Park, Jin-Soo; Sung, Ki-Moon; Moon, Se-Won
    • Asia pacific journal of information systems / v.20 no.2 / pp.125-155 / 2010
  • Ontologies have been adopted in various business and scientific communities as a key component of the Semantic Web. Despite their increasing importance, ontology developers still perceive construction tasks as a challenge. A clearly defined and well-structured methodology can reduce the time required to develop an ontology and increase the probability of a project's success. However, no reliable knowledge-engineering methodology for ontology development currently exists; every methodology has been tailored toward the development of a particular ontology. In this study, we developed a Graduation Screen Ontology (GSO). The graduation screening domain was chosen for several reasons. First, the graduation screening process is a complicated task requiring a complex reasoning process. Second, GSO may be reused by other universities, because the graduation screening process is similar at most universities. Finally, GSO can be built within a given period because the size of the selected domain is reasonable. Since no standard ontology development methodology exists, one of the existing methodologies had to be chosen. The most important considerations for selecting the methodology for GSO included whether it can be applied to a new domain, whether it covers a broad set of development tasks, and whether it explains each development task sufficiently. We evaluated various ontology development methodologies against the evaluation framework proposed by Gómez-Pérez et al. and concluded that METHONTOLOGY was the most applicable to building GSO for this study. METHONTOLOGY was derived from the experience of developing the Chemical Ontology at the Polytechnic University of Madrid by Fernández-López et al. and is regarded as the most mature ontology development methodology.
METHONTOLOGY describes a very detailed approach for building an ontology under a centralized development environment at the conceptual level. This methodology consists of three broad processes, with each process containing specific sub-processes: management (scheduling, control, and quality assurance); development (specification, conceptualization, formalization, implementation, and maintenance); and support (knowledge acquisition, evaluation, documentation, configuration management, and integration). An ontology development language and an ontology development tool for GSO construction also had to be selected. We adopted OWL-DL as the ontology development language. OWL was selected for its computational support for consistency checking and classification, which is crucial in developing coherent and useful ontological models for very complex domains. In addition, Protégé-OWL was chosen as the ontology development tool because it is supported by METHONTOLOGY and is widely used owing to its platform-independent characteristics. Based on the researchers' experience developing GSO, some issues relating to METHONTOLOGY, OWL-DL, and Protégé-OWL were identified. We focused on presenting the drawbacks of METHONTOLOGY and discussing how each weakness could be addressed. First, METHONTOLOGY insists that domain experts who do not have ontology construction experience can easily build ontologies. However, it is still difficult for these domain experts to develop a sophisticated ontology, especially if they have insufficient background knowledge related to the ontology. Second, METHONTOLOGY does not include a development stage called the "feasibility study." This pre-development stage not only helps developers ensure that a planned ontology is necessary and sufficiently valuable to begin an ontology building project, but also helps determine whether the project will be successful. 
Third, METHONTOLOGY excludes any explanation of the use and integration of existing ontologies. If an additional stage for considering reuse were introduced, developers could share the benefits of reuse. Fourth, METHONTOLOGY fails to address the importance of collaboration. The methodology needs to explain how to allocate specific tasks to different developer groups, and how to combine these tasks once the assigned jobs are completed. Fifth, METHONTOLOGY fails to sufficiently specify the methods and techniques applied in the conceptualization stage. Introducing methods for extracting concepts from multiple informal sources or for identifying relations may enhance the quality of ontologies. Sixth, METHONTOLOGY does not provide an evaluation process to confirm whether WebODE perfectly transforms a conceptual ontology into a formal ontology. It also does not guarantee that the outcomes of the conceptualization stage are completely reflected in the implementation stage. Seventh, METHONTOLOGY needs to add criteria for user evaluation of the actual use of the constructed ontology in user environments. Eighth, although METHONTOLOGY allows continual knowledge acquisition throughout the ontology development process, consistent updates can be difficult for developers. Ninth, METHONTOLOGY demands that developers complete various documents during the conceptualization stage; thus, it can be considered a heavyweight methodology. Adopting an agile methodology would reinforce active communication among developers and reduce the burden of documentation. Finally, this study concludes with contributions and practical implications. No previous research has addressed issues related to METHONTOLOGY from empirical experience; this study is an initial attempt. In addition, several lessons learned from the development experience are discussed. 
This study also affords some insights for ontology methodology researchers who want to design a more advanced ontology development methodology.
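The graduation screening task described above is essentially rule-based reasoning over student records. As an illustration only, a minimal Python sketch of such a requirement check might look as follows; the course codes and credit thresholds are invented for illustration, and the actual GSO expresses such constraints in OWL-DL and evaluates them with a reasoner rather than in procedural code:

```python
# Hypothetical sketch of the kind of rule a graduation-screen ontology encodes.
# Course codes and thresholds below are invented; the real GSO expresses these
# constraints in OWL-DL and checks them with a description-logic reasoner.
from dataclasses import dataclass, field

@dataclass
class StudentRecord:
    total_credits: int
    major_credits: int
    completed: set = field(default_factory=set)  # completed course codes

def check_graduation(record,
                     required=("CS101", "CS201"),   # assumed required courses
                     min_total=130, min_major=60):  # assumed credit thresholds
    """Return a list of unmet requirements (an empty list means eligible)."""
    failures = []
    if record.total_credits < min_total:
        failures.append(f"total credits {record.total_credits} < {min_total}")
    if record.major_credits < min_major:
        failures.append(f"major credits {record.major_credits} < {min_major}")
    missing = set(required) - record.completed
    if missing:
        failures.append("missing required courses: " + ", ".join(sorted(missing)))
    return failures

student = StudentRecord(total_credits=132, major_credits=58,
                        completed={"CS101"})
print(check_graduation(student))
```

Encoding these constraints as ontology axioms instead of code is what allows the same screening knowledge to be reused across universities, as the abstract notes.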

Effects of Coenzyme Q10 on the Expression of Genes involved in Lipid Metabolism in Laying Hens (Coenzyme Q10 첨가 급여가 산란계의 지방대사 연관 유전자 발현에 미치는 영향)

  • Jang, In Surk;Moon, Yang Soo
    • Korean Journal of Poultry Science
    • /
    • v.43 no.1
    • /
    • pp.47-54
    • /
    • 2016
  • The aim of this study was to investigate the expression patterns of key genes involved in lipid metabolism in response to dietary Coenzyme Q10 (CoQ10) in hens. A total of 36 forty-week-old Lohmann Brown hens were randomly allocated into 3 groups, each consisting of 4 replicates of 3 birds. Laying hens were subjected to one of the following treatments: Control (BD, basal diet), T1 (BD + CoQ10, 100 mg/kg diet) and T2 (BD + micellar CoQ10, 100 mg/kg diet). Birds were fed ad libitum a basal diet or the basal diet supplemented with CoQ10 for 5 weeks. Total RNA was extracted from the liver for quantitative RT-PCR. The mRNA levels of HMG-CoA reductase (HMGCR) and sterol regulatory element-binding protein 2 (SREBP2) were decreased by 30~50% in the liver of birds fed the basal diet supplemented with CoQ10 (p<0.05). These findings suggest that dietary CoQ10 can reduce cholesterol levels by suppressing the hepatic HMGCR and SREBP2 genes. The expression of liver X receptor (LXR) and SREBP1 was downregulated by the addition of CoQ10 to the feed (p<0.05). Cholesterol homeostasis can be regulated by LXR and SREBP1 under low-cholesterol conditions. Supplementation with CoQ10 decreased the expression of lipid metabolism-related genes including PPARγ, XBP1, FASN, and GLUTs in the liver of birds (p<0.05). These data suggest that CoQ10 might be used as a dietary supplement to reduce cholesterol levels and to regulate lipid homeostasis in laying hens.
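The relative mRNA changes reported above (e.g., a 30~50% decrease in HMGCR) are conventionally derived from quantitative RT-PCR data with the standard 2^(-ΔΔCt) method. As a sketch only, using invented Ct values since the abstract does not report raw data, the calculation is:

```python
# Standard 2^(-ΔΔCt) relative-expression calculation for qRT-PCR data.
# The Ct values below are invented for illustration; the study's raw Ct
# values are not reported in the abstract.
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Fold change of a target gene in treated vs. control samples,
    normalized to a reference (housekeeping) gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical example: HMGCR in CoQ10-fed vs. control birds,
# normalized to an assumed reference gene.
fold = relative_expression(ct_target_treated=25.0, ct_ref_treated=18.0,
                           ct_target_control=24.0, ct_ref_control=18.0)
print(f"{fold:.2f}")  # prints 0.50, i.e., a 50% reduction vs. control
```

A fold change of 0.5~0.7 relative to control corresponds to the 30~50% decrease in mRNA levels the study reports.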