

Understanding User Motivations and Behavioral Process in Creating Video UGC: Focus on Theory of Implementation Intentions (Video UGC 제작 동기와 행위 과정에 관한 이해: 구현의도이론 (Theory of Implementation Intentions)의 적용을 중심으로)

  • Kim, Hyung-Jin;Song, Se-Min;Lee, Ho-Geun
    • Asia Pacific Journal of Information Systems / v.19 no.4 / pp.125-148 / 2009
  • UGC (User Generated Content) is emerging as the center of e-business in the Web 2.0 era. The trend reflects the changing roles of users in the production and consumption of content on websites and helps us understand new strategies of websites such as web portals and social network sites. Nowadays, we consume content created by other non-professional users for both utilitarian (e.g., knowledge) and hedonic (e.g., fun) value. Also, content we produce ourselves (e.g., photos, videos) is posted on websites so that our friends, family, and even the public can consume it. This means that non-professionals, who used to be a passive audience, now create content and share their UGC with others on the Web. Accessible media, tools, and applications have also reduced the difficulty and complexity of creating content. Realizing that users create plenty of material that is very interesting to other people, media companies (i.e., web portals and social networking websites) are adjusting their strategies and business models accordingly. Increased demand for UGC may lead to more website visits, which are the source of advertising revenue. Therefore, they put more effort into making their websites open platforms where UGC can be created and shared among users without technical or methodological difficulty. Many websites have adopted new technologies such as RSS and open APIs, and some have even restructured their web pages so that UGC is exposed more often and to more visitors. This mainstream position of UGC on websites indicates that acquiring more UGC and supporting participating users have become important to media companies. Although those companies need to understand why general users have shown increasing interest in creating and posting content and what matters to them in the production process, few research results exist to address these issues. Also, the behavioral process of creating video UGC has not been explored enough to be fully understood. With a solid theoretical background (i.e., the theory of implementation intentions), part of our proposed research model mirrors the process of user behavior in creating video content, which consists of intention to upload, intention to edit, edit, and upload. In addition, in order to explain how those behavioral intentions are developed, we investigated the influence of antecedents from three motivational perspectives (intrinsic, editing-software-oriented, and website network-effect-oriented). First, from the intrinsic motivation perspective, we studied the roles of self-expression, enjoyment, and social attention in forming the intention to edit with preferred editing software or the intention to upload video content to preferred websites. Second, we explored the role of editing software for non-professionals who edit video content, in terms of how it makes the production process easier and how useful it is in that process. Finally, from the website-characteristic perspective, we investigated the role of a website's network externality as an antecedent of users' intention to upload to preferred websites. The rationale is that posting UGC on websites is basically a social behavior; thus, users prefer a website with a high level of network externality for uploading content. This study adopted a longitudinal research design; we emailed recipients twice with different questionnaires.
Guided by an invitation email including a link to the web survey page, respondents answered most questions, except those on edit and upload behavior, in the first survey. They were asked to name the UGC editing software they mainly used and the website to which they preferred to upload edited content, and then to answer related questions. For example, before answering questions regarding network externality, each respondent had to name the website to which they would be willing to upload. At the end of the first survey, we asked whether they agreed to participate in a follow-up survey one month later. Over twenty days, 333 complete responses were gathered in the first survey. One month later, we emailed those recipients to ask for participation in the second survey; 185 of the 333 (about 56 percent) responded. Personalized questionnaires reminded them of the names of the editing software and website they had reported in the first survey, and they reported the degree to which they had edited with the software and uploaded video content to the website over the past month. All respondents were given book gift certificates (about 5,000~10,000 Korean won) according to their frequency of participation. PLS analysis shows that user behavior in creating video content is well explained by the theory of implementation intentions. In fact, intention to upload significantly influences intention to edit in the process of accomplishing the goal behavior, upload. These relationships reveal the behavioral process, previously unclear, by which users create video content for uploading, and they highlight the important role of editing in that process. Regarding the intrinsic motivations, the results illustrate that users are likely to edit their own video content in order to express intrinsic traits such as thoughts and feelings. Also, their intention to upload content to a preferred website is formed because they want to attract attention from others through content reflecting themselves. This result corresponds well to the role of the website characteristic, namely network externality. Based on the PLS results, the network effect of a website has a significant influence on users' intention to upload to the preferred website. This indicates that users with social-attention motivations are likely to upload their video UGC to a website whose network is big enough to realize those motivations easily. Finally, regarding editing-software-oriented motivations, making exclusively provided editing software more user-friendly (i.e., ease of use, usefulness) plays an important role in leading to users' intention to edit. Our research contributes to both academic scholars and practitioners. For researchers, our results show that the theory of implementation intentions applies well to the video UGC context and is very useful for explaining the relationship between implementation intentions and goal behaviors. With the theory, this study theoretically and empirically confirmed that editing is a distinct and important behavior preceding uploading, and we tested the behavioral process of ordinary users in creating video UGC, focusing on significant motivational factors in each step. In addition, parts of our research model are also rooted in solid theoretical backgrounds such as the technology acceptance model and the theory of network externality to explain the effects of UGC-related motivations.
For practitioners, our results suggest that media companies need to restructure their websites so that users' needs for social interaction through UGC (e.g., self-expression, social attention) are well met. We also emphasize the strategic importance of a website's network size in leading non-professionals to upload video content to it; websites need to find ways to exploit network effects to acquire more UGC. Finally, we suggest that improvements to editing software be considered as a way to increase editing behavior, which is a very important step leading to UGC uploading.
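The structural relationships described above (motivations to intentions, intention to upload to intention to edit, edit to upload) were estimated with PLS in the original study. As a rough, hedged illustration only, the sketch below approximates such structural paths with separate OLS regressions on invented survey scores; the variable names and data are hypothetical, and this is a stand-in for, not a reproduction of, the authors' PLS estimation.

```python
# Illustrative sketch only: approximating the implementation-intentions
# path structure with separate OLS regressions on toy data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 185  # second-wave sample size reported in the abstract
df = pd.DataFrame({
    "self_expression": rng.normal(size=n),
    "enjoyment": rng.normal(size=n),
    "social_attention": rng.normal(size=n),
    "software_ease": rng.normal(size=n),
    "network_externality": rng.normal(size=n),
    # Hypothetical outcome scores (would come from the survey instrument).
    "intention_upload": rng.normal(size=n),
    "intention_edit": rng.normal(size=n),
    "edit": rng.normal(size=n),
    "upload": rng.normal(size=n),
})

def path(y, xs):
    """Fit one structural path with OLS and return its coefficients."""
    model = LinearRegression().fit(df[xs], df[y])
    return dict(zip(xs, model.coef_.round(3)))

# Antecedents -> intentions, then intentions -> behaviors (edit -> upload).
print(path("intention_upload", ["self_expression", "social_attention", "network_externality"]))
print(path("intention_edit", ["self_expression", "enjoyment", "software_ease", "intention_upload"]))
print(path("edit", ["intention_edit"]))
print(path("upload", ["intention_upload", "edit"]))
```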

Development of Information Extraction System from Multi Source Unstructured Documents for Knowledge Base Expansion (지식베이스 확장을 위한 멀티소스 비정형 문서에서의 정보 추출 시스템의 개발)

  • Choi, Hyunseung;Kim, Mintae;Kim, Wooju;Shin, Dongwook;Lee, Yong Hun
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.111-136 / 2018
  • In this paper, we propose a methodology for extracting answer information for queries from various types of unstructured documents collected from multiple web sources in order to expand a knowledge base. The proposed methodology consists of the following steps: 1) collect relevant documents from Wikipedia, Naver encyclopedia, and Naver news for "subject-predicate" separated queries and classify the suitable documents; 2) determine whether each sentence is suitable for extracting information and derive its confidence; 3) based on the predicate feature, extract the information from the suitable sentences and derive the overall confidence of the extraction result. To evaluate the performance of the information extraction system, we selected 400 queries from SK Telecom's artificial intelligence speaker. Compared with the baseline model, the proposed system shows a higher performance index. The contribution of this study is a sequence tagging model based on a bi-directional LSTM-CRF that uses the predicate feature of the query; with it, we developed a robust model that maintains high recall even on the various types of unstructured documents collected from multiple sources. Information extraction for knowledge base expansion must account for the heterogeneous characteristics of source-specific document types, and the proposed methodology proved to extract information effectively from various document types compared to the baseline model; previous research suffered from poor performance when extracting information from document types different from the training data. In addition, this study can prevent unnecessary extraction attempts on documents that do not contain the answer, through a step that predicts the suitability of documents and sentences before extraction; it is meaningful that we provide a method by which precision can be maintained even in a real web environment. Information extraction for knowledge base expansion targets unstructured documents on the real web, so there is no guarantee that a document contains the correct answer. When question answering is performed on the real web, previous machine reading comprehension studies show low precision because they frequently attempt to extract an answer even from documents with no correct answer. The policy that predicts the suitability of document- and sentence-level extraction is therefore meaningful in that it helps maintain extraction performance in the real web environment. The limitations of this study and future research directions are as follows. First, there is a data preprocessing issue. In this study, the unit of knowledge extraction is determined through morphological analysis based on the open-source KoNLPy Python package, and extraction can fail when the morphological analysis is not performed properly; to improve extraction results, a more advanced morphological analyzer is needed. Second, there is the problem of entity ambiguity: the information extraction system of this study cannot distinguish between different entities that share the same name.
If several people with the same name appear in the news, the system may not extract information about the intended entity; future research needs measures to disambiguate people with the same name. Third, there is the issue of the evaluation query data. In this study, we selected 400 user queries collected from SK Telecom's interactive artificial intelligence speaker to evaluate the performance of the information extraction system, and we developed the evaluation data set from 800 documents (400 questions * 7 articles per question: 1 Wikipedia, 3 Naver encyclopedia, 3 Naver news) by judging whether a correct answer was included. To ensure the external validity of the study, it is desirable to use more queries to determine the performance of the system, but this is a costly activity that must be done manually; future research should evaluate the system on more queries. It is also necessary to develop a Korean benchmark data set for information extraction from multi-source web documents, to build an environment in which results can be evaluated more objectively.
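The abstract above describes a bi-directional LSTM-CRF sequence tagger that uses a predicate feature of the query. The minimal PyTorch sketch below shows the general shape of such a tagger; it relies on the third-party pytorch-crf package, and the embedding sizes, tag set, and per-token predicate-flag handling are assumptions for illustration, not the authors' implementation.

```python
# Minimal BiLSTM-CRF tagger sketch (assumed dimensions; illustrative only).
# Requires: pip install torch pytorch-crf
import torch
import torch.nn as nn
from torchcrf import CRF

class BiLSTMCRF(nn.Module):
    def __init__(self, vocab_size, num_tags, word_dim=100, pred_dim=20, hidden=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim, padding_idx=0)
        # One simple way to inject the query's predicate feature: embed a
        # per-token flag and concatenate it with the word embedding.
        self.pred_emb = nn.Embedding(2, pred_dim)
        self.lstm = nn.LSTM(word_dim + pred_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.to_tags = nn.Linear(2 * hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def _emissions(self, words, pred_flags):
        x = torch.cat([self.word_emb(words), self.pred_emb(pred_flags)], dim=-1)
        h, _ = self.lstm(x)
        return self.to_tags(h)

    def loss(self, words, pred_flags, tags, mask):
        # Negative log-likelihood of the gold tag sequence under the CRF.
        return -self.crf(self._emissions(words, pred_flags), tags, mask=mask)

    def decode(self, words, pred_flags, mask):
        # Viterbi decoding of the best tag sequence per sentence.
        return self.crf.decode(self._emissions(words, pred_flags), mask=mask)

# Toy usage with random ids (batch of 2 sentences, length 8).
model = BiLSTMCRF(vocab_size=1000, num_tags=5)
words = torch.randint(1, 1000, (2, 8))
pred_flags = torch.randint(0, 2, (2, 8))
tags = torch.randint(0, 5, (2, 8))
mask = torch.ones(2, 8, dtype=torch.bool)
print(model.loss(words, pred_flags, tags, mask).item())
print(model.decode(words, pred_flags, mask))
```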

Collision of New and Old Control Ideologies, Witnessed through the Moving of Jeongreung (Tomb of Queen Sindeok) and Repair of Gwangtong-gyo (정릉(貞陵) 이장과 광통교(廣通橋) 개수를 통해 본 조선 초기 지배 이데올로기의 대립)

  • Nam, Hohyun
    • Korean Journal of Heritage: History & Science / v.53 no.4 / pp.234-249 / 2020
  • The dispute over the construction of the tomb of Queen Sindeok (hereinafter "Jeongreung"), the wife of King Taejo, in Seoul, and the later relocation of that tomb, is the case that most clearly demonstrates the collision of new and old ideologies among political forces in the early Joseon period. Jeongreung, the tomb of Queen Sindeok of the Kang clan, was built inside the capital fortress, but in 1409 King Taejong had the tomb moved outside the capital, and the stone relics remaining at the original site were used to build the stone bridge Gwangtong-gyo. According to an unofficial story, King Taejong moved the tomb outside the capital and used its stones to build the Cheonggyecheon bridge Gwang-gyo so that people would trample the site, as a curse upon Lady Kang. In King Taejo's final years, Lady Kang and King Taejong were in political conflict, but until King Taejo took the throne they had been close to political partners. The Sillok records concerning Jeongreung and Gwangtong-gyo in fact state matters more plainly: the relocation of Jeongreung followed a sangeon (a written statement to the king) from the Uijeongbu (the highest administrative agency of Joseon), which argued that having the tomb of a king or queen inside the capital was inappropriate and that, because it was close to the envoys' official quarters, it had to be moved. The assertion that the relocation was intended to degrade Jeongreung in order to repair Gwangtong-gyo therefore does not reflect the factual relationship. This article raises the possibility that the use of stone from Jeongreung to repair Gwangtong-gyo reflected an emerging need for efficient procurement of materials at a time when demand for materials for civil works inside and outside the capital was increasing sharply. The reasons for constructing Jeongreung inside the capital and for later moving it outside can therefore be attributed to the differing ideological backgrounds of King Taejo and King Taejong. King Taejo was the ruler of a Confucian state, having taken the throne through the Yeokseong Revolution, yet he built the tomb and the temple Hongcheon-sa inside the capital for his wife Queen Sindeok. In this respect, it appears he attempted, with the support of Buddhism, to rally supporters and gather the strength needed to establish the authority of Queen Sindeok. Yi Seong-gye, who was raised in a family that held the Yuan post of darughachi (dorugachi) and lived as a military man in the border region, would not have had a deep understanding of Confucian scholarship; rather, he was a man of the old order with its Buddhist leanings. King Taejong Yi Bang-won, on the other hand, was an elite Confucian literatus who passed the state examination at the end of the Goryeo era and is known to have had a profound understanding of Neo-Confucianism. In other words, it is reasonable to suppose that he had a deeper understanding of the symbolic implications the capital carried in a Confucian state. Although law-governed state systems had been established since the Three Kingdoms period, the principle of burial outside the capital, under which graves were constructed on the outskirts, had not been upheld without exception.
Jeongreung was built inside the capital because of King Taejo's strong personal wishes, but to King Taejong, who had been a Confucian scholar before taking the throne, this would not have been seen as desirable. After taking the throne, King Taejong took the initiative in overhauling the capital, reflecting his intent to clearly realize, in the landscape of the capital Hanyang, a Confucian ideology that emphasized yechi (rule through propriety). It is reasonable to conclude that the relocation of Jeongreung was carried out against this historical background.

A prognosis discovering lethal-related genes in plants for target identification and inhibitor design (식물 치사관련 유전자를 이용하는 신규 제초제 작용점 탐색 및 조절물질 개발동향)

  • Hwang, I.T.;Lee, D.H.;Choi, J.S.;Kim, T.J.;Kim, B.T.;Park, Y.S.;Cho, K.Y.
    • The Korean Journal of Pesticide Science / v.5 no.3 / pp.1-11 / 2001
  • New technologies will have a large impact on the discovery of new herbicide sites of action. Genomics, combinatorial chemistry, and bioinformatics help take advantage of serendipity through the sequencing of huge numbers of genes and the synthesis of large numbers of chemical compounds. There are approximately $10^{30}$ to $10^{50}$ possible molecules in molecular space, of which only a fraction have been synthesized. Combining this potential with access to 50,000 plant genes in the future raises the probability of discovering new herbicidal sites of action. If 0.1, 1.0, or 10% of the total genes in a typical plant are valid herbicide targets, a plant with 50,000 genes would provide about 50, 500, or 5,000 targets, respectively. However, only 11 herbicide targets have been identified and commercialized. The successful design of novel herbicides depends on careful consideration of a number of factors, including target enzyme selection and validation, inhibitor design, and metabolic fate. Biochemical information can be used to identify enzymes whose inhibition produces lethal phenotypes, and the identification of a lethal target site is an important step in this approach. An examination of the characteristics of known targets provides crucial insight into what defines a lethal target. Recently, antisense RNA suppression of enzyme translation has been used to determine which genes are required for toxicity, offering a strategy for identifying lethal target sites. After a lethal target has been identified, detailed knowledge such as enzyme kinetics and protein structure may be used to design potent inhibitors, and various types of inhibitors may be designed for a given enzyme. Strategies for selecting new enzyme targets that give the desired physiological response upon partial inhibition include the identification of chemical leads and lethal mutants and the use of antisense technology. Enzyme inhibitors with agrochemical utility can be categorized into six major groups: ground-state analogues, group-specific reagents, affinity labels, suicide substrates, reaction-intermediate analogues, and extraneous-site inhibitors. In this review, examples of each category, and their advantages and disadvantages, are discussed. Target identification and construction of a potent inhibitor may not, in themselves, lead to an effective herbicide; the desired in vivo activity, uptake and translocation, and metabolism of the inhibitor should be studied in detail to assess the full potential of the target. Strategies for delivering the compound to the target enzyme and avoiding premature detoxification may include a pro-herbicide approach, especially when inhibitors are highly charged or when selective detoxification or activation can be exploited. Utilizing differences in detoxification or activation between weeds and crops may enhance selectivity. Without a full appreciation of each of these facets of herbicide design, the chances of success with the target- or enzyme-driven approach are reduced.
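The candidate-target estimate above is simple arithmetic; the short sketch below merely reproduces it, assuming the 50,000-gene figure quoted in the abstract.

```python
# Reproduce the abstract's rough estimate of candidate herbicide targets.
total_genes = 50_000  # plant gene count assumed in the abstract
for valid_fraction in (0.001, 0.01, 0.10):  # 0.1%, 1%, 10% of genes
    targets = total_genes * valid_fraction
    print(f"{valid_fraction:.1%} of {total_genes:,} genes -> ~{targets:,.0f} candidate targets")
```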


Continuous Process for the Etching, Rinsing and Drying of MEMS Using Supercritical Carbon Dioxide (초임계 이산화탄소를 이용한 미세전자기계시스템의 식각, 세정, 건조 연속 공정)

  • Min, Seon Ki;Han, Gap Su;You, Seong-sik
    • Korean Chemical Engineering Research / v.53 no.5 / pp.557-564 / 2015
  • The previous etching, rinsing, and drying process for MEMS (microelectromechanical system) wafers using SC-$CO_2$ (supercritical $CO_2$) consists of two steps: MEMS wafers are first etched with an organic solvent in etching equipment separate from the high-pressure dryer, then moved to the high-pressure dryer to be rinsed and dried with SC-$CO_2$. We found that this two-step process could etch and dry MEMS wafers but could not confirm its reproducibility over several experiments. We attributed this to stiction of the structures caused by evaporation of the etching solvent while the MEMS wafer was moved to the high-pressure dryer after being etched outside it. To address the stiction problem, we designed a continuous process for etching, rinsing, and drying MEMS wafers using SC-$CO_2$ without moving them, and we also examined how the state of the carbon dioxide (gas, liquid, or supercritical fluid) relates to the stiction problem. Using gaseous carbon dioxide (3 MPa, $25^{\circ}C$) as the etching solvent, we obtained well-treated MEMS wafers without stiction and confirmed the reproducibility of the experimental results; the quantity of rinsing solvent used was also reduced compared with the previous technology. Using liquid carbon dioxide (3 MPa, $5^{\circ}C$), we could not obtain stiction-free MEMS wafers because of phase separation between the liquid carbon dioxide and the etching co-solvent (acetone). Using SC-$CO_2$ (7.5 MPa, $40^{\circ}C$), the results were as good as those with gaseous $CO_2$, and the processing time was shorter than in the gaseous-$CO_2$ case.
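The three operating points quoted above differ in where they sit relative to the CO2 critical point (about 31.1 °C and 7.38 MPa, well-known values). The sketch below only checks whether a point is supercritical; below the critical point the gas/liquid distinction depends on the saturation curve, which this illustration deliberately does not model, so treat it as a rough classifier rather than a property model.

```python
# Crude check of whether a CO2 operating point is supercritical.
T_CRIT_C, P_CRIT_MPA = 31.1, 7.38  # critical point of CO2

def co2_region(temp_c: float, pressure_mpa: float) -> str:
    if temp_c >= T_CRIT_C and pressure_mpa >= P_CRIT_MPA:
        return "supercritical"
    # Below the critical point, gas vs. liquid depends on the saturation
    # line, which this sketch does not model.
    return "subcritical (gas or liquid, depending on the saturation line)"

conditions = {"gas run": (25, 3.0), "liquid run": (5, 3.0), "SC run": (40, 7.5)}
for label, (t, p) in conditions.items():
    print(label, co2_region(t, p))
```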

A Study on the Safety of Evacuation according to Evacuation Delay Time and Fire Door Openness: Based on Residence Types (피난 지연시간의 적용과 방화문 개방 정도에 따른 피난 안전성 확보에 관한 고찰 : 주거형태를 중심으로)

  • Seo, Dong-Gil;Kim, Mi-Seon;Gu, Seon-Hwan;Song, Young-Joo
    • Fire Science and Engineering / v.34 no.2 / pp.156-165 / 2020
  • This paper examines the application of evacuation delay time (cognition time + initiation time) and the degree of fire door opening in individual households when evaluating evacuation safety, and suggests a realistic alternative. We first surveyed the evacuation safety evaluations of residential buildings (apartments, urban living houses, etc.) among the performance-oriented design projects reviewed in Gwangju Metropolitan City through June 2018. Then, for the two representative building types commonly found among the surveyed buildings, evacuation safety evaluations were performed with evacuation delay times W1 and W2 and with the fire door modeled as fully open, 1/4 open, and closed with only a leakage gap. The results show that evacuation safety is difficult to secure, regardless of whether delay time W1 or W2 is applied, when the fire door is fully open or 1/4 open; only when the leakage gap is applied is evacuation safety ensured, even with delay time W2. Therefore, when a residential building is subject to performance-oriented design, we propose applying W2 rather than W1 for the evacuation delay time, reflecting concerns about privacy infringement from CCTV installation and similar measures, and, in order to secure the smoke-blocking performance of the fire door and improve performance-oriented design, applying a leakage gap for the degree of fire door opening. Through this, performance-oriented design is expected to advance a step further by performing evacuation safety evaluations with more realistic data.

Latent topics-based product reputation mining (잠재 토픽 기반의 제품 평판 마이닝)

  • Park, Sang-Min;On, Byung-Won
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.39-70 / 2017
  • Data-driven analytics techniques have recently been applied to public surveys. Instead of simply gathering survey results or expert opinions to research the preference for a recently launched product, enterprises need a way to collect and analyze various types of online data and then accurately figure out customer preferences. In the main concept of existing data-based survey methods, the sentiment lexicon for a particular domain is first constructed by domain experts, who judge the positive, neutral, or negative meanings of the words used frequently in the collected text documents. To research the preference for a particular product, the existing approach (1) collects review posts related to the product from several product review web sites; (2) extracts sentences (or phrases) from the collection after pre-processing steps such as stemming and stop-word removal; (3) classifies the polarity (positive or negative) of each sentence (or phrase) based on the sentiment lexicon; and (4) estimates the positive and negative ratios of the product by dividing the numbers of positive and negative sentences (or phrases) by the total number of sentences (or phrases) in the collection. Furthermore, the existing approach automatically finds important sentences (or phrases) carrying positive or negative meaning about the product. As a motivating example, given a product like the Sonata made by Hyundai Motors, customers often want to see a summary note of the positive points in the 'car design' aspect as well as the negative points in the same aspect. They also want more useful information regarding other aspects such as 'car quality', 'car performance', and 'car service'. Such information will enable customers to make good choices when they attempt to purchase brand-new vehicles, and automobile makers will be able to figure out the preference and the positive/negative points for new models on the market; in the near future, the weak points of the models can be improved through sentiment analysis. For this, the existing approach computes the sentiment score of each sentence (or phrase) and then selects the top-k sentences (or phrases) with the highest positive and negative scores. However, the existing approach has several shortcomings and is limited in its applicability to real applications. Its main disadvantages are as follows. (1) The main aspects (e.g., car design, quality, performance, and service) of a product (e.g., Hyundai Sonata) are not considered. As a result of sentiment analysis without aspects, only a summary note containing the positive and negative ratios of the product and the top-k sentences (or phrases) with the highest sentiment scores over the entire corpus is reported to customers and car makers; this is not enough, and the main aspects of the target product need to be considered in the sentiment analysis. (2) In general, since the same word has different meanings across different domains, a sentiment lexicon appropriate to each domain needs to be constructed; an efficient way to construct the per-domain lexicon is required because lexicon construction is labor-intensive and time-consuming.
To address the above problems, in this article we propose a novel product reputation mining algorithm that (1) extracts topics hidden in review documents written by customers; (2) mines main aspects based on the extracted topics; (3) measures the positive and negative ratios of the product using the aspects; and (4) presents a digest in which a few important sentences with positive and negative meanings are listed per aspect. Unlike the existing approach, using hidden topics lets experts construct the sentiment lexicon easily and quickly, and by reinforcing topic semantics we can improve the accuracy of product reputation mining well beyond that of the existing approach. In the experiments, we collected large sets of review documents on domestic vehicles such as the K5, SM5, and Avante; measured the positive and negative ratios of the three cars; presented top-k positive and negative summaries per aspect; and conducted statistical analysis. Our experimental results clearly show the effectiveness of the proposed method compared with the existing method.
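As a rough illustration of the kind of pipeline sketched above (topic extraction from reviews, then lexicon-based polarity ratios grouped by topic), the snippet below uses gensim's LDA and a tiny hand-made lexicon. The corpus, lexicon entries, and topic count are invented for illustration; the paper's actual algorithm, lexicon, and aspect-mining step are not reproduced here.

```python
# Illustrative topic + polarity-ratio sketch (toy data, not the paper's algorithm).
# Requires: pip install gensim
from collections import defaultdict
from gensim import corpora, models

reviews = [
    "design looks great and the interior design is beautiful",
    "engine performance is weak and fuel economy is disappointing",
    "service center was friendly but the repair was slow",
]
# Toy sentiment lexicon; a real one would be built per domain.
POSITIVE = {"great", "beautiful", "friendly"}
NEGATIVE = {"weak", "disappointing", "slow"}

texts = [r.split() for r in reviews]
dictionary = corpora.Dictionary(texts)
bow = [dictionary.doc2bow(t) for t in texts]

# Extract hidden topics; the number of topics (3) is an arbitrary choice here.
lda = models.LdaModel(bow, num_topics=3, id2word=dictionary, random_state=0, passes=10)
for topic_id, words in lda.show_topics(num_topics=3, num_words=4, formatted=False):
    print(topic_id, [w for w, _ in words])

# Assign each review to its dominant topic and compute polarity ratios per topic.
counts = defaultdict(lambda: {"pos": 0, "neg": 0})
for doc_bow, tokens in zip(bow, texts):
    topic_id = max(lda.get_document_topics(doc_bow), key=lambda tp: tp[1])[0]
    counts[topic_id]["pos"] += sum(w in POSITIVE for w in tokens)
    counts[topic_id]["neg"] += sum(w in NEGATIVE for w in tokens)
for topic_id, c in counts.items():
    total = (c["pos"] + c["neg"]) or 1
    print(f"topic {topic_id}: positive {c['pos']/total:.0%}, negative {c['neg']/total:.0%}")
```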

A Comparative Study on the Effective Deep Learning for Fingerprint Recognition with Scar and Wrinkle (상처와 주름이 있는 지문 판별에 효율적인 심층 학습 비교연구)

  • Kim, JunSeob;Rim, BeanBonyka;Sung, Nak-Jun;Hong, Min
    • Journal of Internet Computing and Services / v.21 no.4 / pp.17-23 / 2020
  • Biometric information, which measures human characteristics, has attracted great attention as a highly reliable security technology because there is no risk of theft or loss. Among such biometric traits, fingerprints are mainly used in fields such as identity verification and identification. When a fingerprint image is difficult to authenticate because of a problem such as a wound, a wrinkle, or moisture, a fingerprint expert can identify the problem directly in a preprocessing step and apply an image processing algorithm appropriate to it. By implementing artificial intelligence software that distinguishes fingerprint images containing cuts and wrinkles, it becomes easy to check whether such defects are present and to select an appropriate algorithm so that the fingerprint image can be improved. In this study, we built a database of 17,080 fingerprints by acquiring all fingerprints of 1,010 students from the Royal University of Cambodia, 600 images from the Sokoto open data set, and the fingerprints of 98 Korean students. Criteria were established to determine whether each image in the database contains injuries or wrinkles, and the data were validated by experts. The training and test data sets consisted of the Cambodian and Sokoto data, split at a ratio of 8:2, and the data of the 98 Korean students were used as the validation set. Using the constructed data set, five CNN-based architectures were implemented: a classic CNN, AlexNet, VGG-16, ResNet50, and YOLO v3. We then investigated which model performed best at this discrimination task. Among the five architectures, ResNet50 showed the best performance, at 81.51%.
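A minimal sketch of the kind of transfer-learning setup such a comparison might use, here fine-tuning ResNet50 for a binary scar/wrinkle-vs-clean decision with Keras. The input size, added layers, directory layout, and training settings are assumptions for illustration; the paper's exact architectures, labels, and hyperparameters are not specified here.

```python
# Illustrative ResNet50 fine-tuning sketch for binary fingerprint-defect
# classification (assumed input size and hyperparameters; not the paper's setup).
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # start by training only the new classification head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),  # 1 = scarred/wrinkled, 0 = clean
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical directory layout: fingerprints/{train,val}/{clean,defect}/...
train_ds = tf.keras.utils.image_dataset_from_directory(
    "fingerprints/train", image_size=(224, 224), batch_size=32, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "fingerprints/val", image_size=(224, 224), batch_size=32, label_mode="binary")

model.fit(train_ds, validation_data=val_ds, epochs=5)
```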

Application of the CRISPR/Cas System for Point-of-care Diagnosis of Cattle Disease (현장에서 가축질병을 진단하기 위한 CRISPR/Cas 시스템의 활용)

  • Lee, Wonhee;Lee, Yoonseok
    • Journal of Life Science / v.30 no.3 / pp.313-319 / 2020
  • Recently, cattle epidemics have been caused by pathogens such as viruses and bacteria. Such diseases can spread through various pathways, including feed intake, respiration, and contact between livestock. Diagnosis based on ELISA (enzyme-linked immunosorbent assay) and PCR (polymerase chain reaction) has limitations because these traditional diagnostic methods are time-consuming assays that require multiple steps and dedicated equipment. In this review, we propose the use of the CRISPR (clustered regularly interspaced short palindromic repeats)/Cas system, operating at the DNA and RNA levels, for early point-of-care diagnosis in cattle. In the CRISPR/Cas system, Cas effectors are classified into two classes and six subtypes. The Cas effectors in class 2 are typically Cas9 in type II, Cas12 in type V (Cas12a and Cas12b), and Cas13 in type VI (Cas13a and Cas13b). The CRISPR/Cas diagnostic system uses reporter molecules attached to ssDNA strands; when the Cas enzyme cuts the ssDNA, these reporters fluoresce or change color, indicating the presence of a specific disease marker. There are several steps in developing a CRISPR/Cas assay. The first is to select the Cas enzyme according to whether the pathogen (virus or bacterium) is detected through DNA or RNA. Based on that, the next step is to integrate the optimal amplification method, transduction method, and signal reporter. The CRISPR/Cas system is a powerful diagnostic tool based on gene-editing machinery that is faster, better, and cheaper than traditional methods. It could be used for early diagnosis of epidemic cattle diseases and help control their spread.

Predicting Regional Soybean Yield using Crop Growth Simulation Model (작물 생육 모델을 이용한 지역단위 콩 수량 예측)

  • Ban, Ho-Young;Choi, Doug-Hwan;Ahn, Joong-Bae;Lee, Byun-Woo
    • Korean Journal of Remote Sensing / v.33 no.5_2 / pp.699-708 / 2017
  • The present study aimed to develop an approach for predicting soybean yield using a crop growth simulation model at the regional level, where detailed, site-specific information on cultivation management practices is not easily accessible for model input. The CROPGRO-Soybean model included in the Decision Support System for Agrotechnology Transfer (DSSAT) was employed, and Illinois, a major soybean production region of the USA, was selected as the study region. As a first step toward predicting the soybean yield of Illinois with the CROPGRO-Soybean model, genetic coefficients representative of each soybean maturity group (MG I~VI) were estimated through sowing-date experiments using domestic and foreign cultivars of diverse maturity at the Seoul National University Farm ($37.27^{\circ}N$, $126.99^{\circ}E$) over two years. The model using the representative genetic coefficients simulated the developmental stages of cultivars within each maturity group fairly well. Soybean yields for $10km{\times}10km$ grids in Illinois were simulated from 2000 to 2011 with weather data under 18 simulation conditions, combining three maturity groups, three seeding dates, and two irrigation regimes; planting dates and maturity groups were assigned differently to three sub-regions divided longitudinally. The yearly state yields estimated by averaging all grid yields simulated under non-irrigated and fully irrigated conditions differed substantially from the statistical yields and did not capture the annual trend of yield increase due to improved cultivation technologies. Using the grain yield data of 9 agricultural districts in Illinois, together with the district yields estimated from the simulated grid yields under the 18 simulation conditions, a multiple regression model was constructed to estimate soybean yield at the agricultural-district level; a year variable was also added to reflect the yearly yield trend. This model explained the yearly and district yield variation fairly well, with a coefficient of determination of $R^2=0.61$ (n = 108). Yearly state yields, calculated by weighting the model-estimated yearly average district yields by the cultivation area of each district, corresponded very closely ($R^2=0.80$) to the yearly statistical state yields. Furthermore, the model predicted the state yield fairly well for 2012, a year not used for model construction in which severe yield reduction was recorded due to drought.
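A hedged sketch of the final statistical step described above: regressing observed district yields on simulated yields plus a year term, then aggregating district predictions to a state yield with cultivation-area weights. The column names and toy data frame are invented; the actual 18 simulation-condition predictors and DSSAT outputs are not reproduced.

```python
# Illustrative sketch: district-level regression with a year trend, then
# area-weighted aggregation to a state yield (toy data, assumed column names).
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
districts, years = list("ABCDEFGHI"), list(range(2000, 2012))  # 9 districts x 12 years = 108
rows = [(d, y, rng.uniform(2.0, 4.0)) for d in districts for y in years]
df = pd.DataFrame(rows, columns=["district", "year", "sim_yield"])
# Pretend observed yield follows the simulation plus a yearly technology trend.
df["obs_yield"] = 0.8 * df["sim_yield"] + 0.03 * (df["year"] - 2000) + rng.normal(0, 0.1, len(df))

X = df[["sim_yield", "year"]]
reg = LinearRegression().fit(X, df["obs_yield"])
df["pred_yield"] = reg.predict(X)
print("R^2 (district level):", round(reg.score(X, df["obs_yield"]), 2))

# Area-weighted state yield per year (hypothetical cultivation areas per district).
area = pd.Series(rng.uniform(50, 150, len(districts)), index=districts, name="area")
df = df.join(area, on="district")
state = (df.assign(w=df["pred_yield"] * df["area"])
           .groupby("year")
           .apply(lambda g: g["w"].sum() / g["area"].sum()))
print(state.round(2).head())
```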