• Title/Summary/Keyword: 비언어적 정보 (nonverbal information)

Search Results: 649 (processing time: 0.026 seconds)

Modelling the Effects of Temperature and Photoperiod on Phenology and Leaf Appearance in Chrysanthemum (온도와 일장에 따른 국화의 식물계절과 출엽 예측 모델 개발)

  • Seo, Beom-Seok;Pak, Ha-Seung;Lee, Kyu-Jong;Choi, Doug-Hwan;Lee, Byun-Woo
    • Korean Journal of Agricultural and Forest Meteorology / v.18 no.4 / pp.253-263 / 2016
  • Chrysanthemum production would benefit from crop growth simulations, which would support decision-making in crop management. Chrysanthemum is a typical short-day plant whose floral initiation and development are sensitive to photoperiod. We developed a model to predict phenological development and leaf appearance of chrysanthemum (cv. Baekseon) using day length (including the civil twilight period), air temperature, and management options such as light interruption and ethylene treatment as predictor variables. Chrysanthemum development stage (DVS) was divided into juvenile (DVS=1.0), juvenile-to-budding (DVS=1.33), and budding-to-flowering (DVS=2.0) phases, for which different strategies and variables were used to predict development toward the end of each phenophase. The juvenile phase was assumed to be completed at a certain leaf number, which was estimated as 15.5 and increased by ethylene application to the mother plant before cutting and to the transplanted plant after cutting. After the juvenile phase, the development rates (DVR) before budding and flowering were calculated from temperature and day-length response functions, and budding and flowering were considered complete when the integrated DVR reached 1.33 and 2.0, respectively. In addition, the model assumed that leaf appearance terminates just before budding. This model predicted budding date, flowering date, and leaf appearance with acceptable accuracy and precision, not only for the calibration data set but also for the validation data set, which was independent of the calibration data set.
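The staged accumulation described in this abstract can be sketched in a few lines. This is an illustrative reconstruction only: the temperature and day-length response functions, their parameters, and the DVR scale below are hypothetical placeholders, not the fitted functions from the paper; only the thresholds (leaf number 15.5, DVS 1.33 and 2.0) come from the abstract.

```python
def dvr(temp_c, daylength_h):
    """Daily development rate from hypothetical temperature and
    day-length response functions (each scaled into [0, 1])."""
    f_temp = max(0.0, min((temp_c - 4.0) / 26.0, 1.0))        # assumed linear ramp
    f_photo = max(0.0, min((14.0 - daylength_h) / 4.0, 1.0))  # short-day response
    return 0.03 * f_temp * f_photo                            # assumed scale factor

def simulate(days):
    """days: list of (temp_c, daylength_h, leaf_number) daily records.
    Returns final DVS and the day indices of budding and flowering."""
    dvs, events = 1.0, {}
    for i, (temp, daylength, leaves) in enumerate(days):
        if dvs == 1.0 and leaves < 15.5:   # juvenile phase: driven by leaf number
            continue
        dvs += dvr(temp, daylength)        # integrate DVR after the juvenile phase
        if "budding" not in events and dvs >= 1.33:
            events["budding"] = i          # leaf appearance stops just before this
        if "flowering" not in events and dvs >= 2.0:
            events["flowering"] = i
    return dvs, events

# Hypothetical forcing: 60 warm, short days after the leaf threshold is met
days = [(20.0, 10.0, 20.0)] * 60
final_dvs, events = simulate(days)
print(events)   # budding precedes flowering
```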

A Basic Study on NCS Development and Professional Training Activation for DP Operators (DP운항사 NCS개발 및 전문인력양성 활성화 방안에 관한 기초연구)

  • Kim, E-Wan;Lee, Jin-Woo;Lee, Chang-Hee;Yea, Byeong-Deok
    • Journal of the Korean Society of Marine Environment & Safety / v.23 no.6 / pp.628-638 / 2017
  • In response to difficult employment conditions in the maritime industry and a desire to expand their career options, domestic mates are pursuing DP operator training at institutions both at home and abroad based on their shipboard experience. However, since the offshore plant service industry has not yet been established in Korea, those seeking to enter this field have difficulty acquiring qualifications, and most seek work overseas with offshore shipping companies. Individuals wishing to work as DP operators are likely to face more conservative recruitment processes with overseas offshore shipping companies, focusing on career and language restrictions as non-native speakers relative to the foreign company, difficulty living in a multi-cultural environment, and a lack of systematic information on essential job requirements. For these reasons, domestic mates have difficulty finding jobs. Therefore, this study analyzes the capabilities and qualifications required of a DP operator to provide basic data for developing NCS standards representing a minimum level of competency. These standards can be applied by the government to develop plans for professional DP operator training. In this study, job classifications, competency standards, and career development paths for DP operators are proposed, along with the joint use of DP training vessels to train specialized DP instructors. A government-led NCS export model to activate professional training for DP operators is also presented.

The research for the yachting development of Korean Marina operation plans (요트 발전을 위한 한국형 마리나 운영방안에 관한 연구)

  • Jeong Jong-Seok;Hugh Ihl
    • Journal of Navigation and Port Research / v.28 no.10 s.96 / pp.899-908 / 2004
  • The rise of income and the introduction of the five-day work week have given Korean people opportunities to enjoy their leisure time, and many Koreans have taken an interest in oceanic sports such as yachting and in oceanic leisure equipment. With the popularization and development of such equipment, the scope of oceanic activities has been expanding in Korea, as in the advanced oceanic countries. However, the current conditions for these sports in Korea are not advanced and are even worse than in underdeveloped countries. In order to develop the underdeveloped resources of Korean marinas, we need to customize the marina models of advanced nations to serve the specific needs and circumstances of Korea. As such, we have carried out a comparative analysis of how Australia, New Zealand, Singapore, Japan, and Malaysia operate their marinas, reaching the following conclusions. Firstly, in marina operations, in order to protect personal property rights and to preserve the environment, we must operate membership and non-membership, profit and non-profit schemes separately, yet without regulating the dress code for entering or leaving the club house. Secondly, in order to generate greater added value, new sporting events should be hosted each year. There is also a need for the active use of volunteers, the generation of greater interest in yacht tourism, and the simplification of CIQ procedures for foreign yachts, as well as the provision of language services. Thirdly, a permanent yacht school should be established, and classes should be taught by qualified instructors. Beginner, intermediate, and advanced classes should be managed separately, with special emphasis on the dinghy yacht program for children. Fourthly, arrival and departure at the moorings must be regulated autonomously, and there must be systematic measures for the marina to be able, in part, to compensate for loss of and damage to equipment, security, and surveillance after usage fees have been paid. Fifthly, marine safety personnel must be drawn, in accordance with Korea's current circumstances, from civilian organizations in order to be used actively in benchmarking, rescue operations, and oceanic searches at times of disaster at sea.

Active Inferential Processing During Comprehension in Poor Readers (미숙 독자들에 있어 이해 도중의 능동적 추리의 처리)

  • Zoh Myeong-Han;Ahn Jeung-Chan
    • Korean Journal of Cognitive Science / v.17 no.2 / pp.75-102 / 2006
  • Three experiments were conducted using a verification task to examine good and poor readers' generation of causal inferences (with because sentences) and contrastive inferences (with although sentences). The unfamiliar, critical verification statement was either explicitly mentioned or implied. In Experiment 1, both good and poor readers responded accurately to the critical statement, suggesting that both groups had the linguistic knowledge necessary to make the required inferences. Differences were found, however, in the groups' verification latencies. Poor, but not good, readers responded faster to explicit than to implicit verification statements for both because and although sentences. In Experiment 2, poor readers were induced to generate causal inferences for the because experimental sentences by including fillers that were apparently counterfactual unless a causal inference was made. In Experiment 3, poor readers were induced to generate contrastive inferences for the although sentences by including fillers that could only be resolved by making a contrastive inference. Verification latencies for the critical statements showed that poor readers made causal inferences in Experiment 2 and contrastive inferences in Experiment 3 during comprehension. These results are discussed in terms of a context effect: specific encoding operations performed on an anomaly backgrounded in another passage would form part of the context that guides ongoing activity in processing potentially relevant subsequent text.


Bioactivities and Isolation of Functional Compounds from Decay-Resistant Hardwood Species (고내후성 활엽수종의 추출성분을 이용한 신기능성 물질의 분리 및 생리활성)

  • 배영수;이상용;오덕환;최돈하;김영균
    • Journal of Korea Forestry Energy / v.19 no.2 / pp.93-101 / 2000
  • Wood of Robinia pseudoacacia and bark of Populus alba × P. glandulosa, Fraxinus rhynchophylla, and Ulmus davidiana var. japonica were collected and extracted with acetone-water (7:3, v/v) in a glass jar to examine whether bioactive compounds were present. The concentrated extracts were fractionated with hexane, chloroform, ethyl acetate, and water, and then freeze-dried for column chromatography and bioactivity tests. The isolated compounds were sakuranetin-5-O-β-D-glucopyranoside from Populus alba × P. glandulosa, 4'-ethoxy-(+)-leucorobinetinidin from R. pseudoacacia, and fraxetin from F. rhynchophylla, and they were characterized by ¹H and ¹³C NMR and positive FAB-MS. Decay-resistance activity was expressed as weight-loss ratio and hyphal growth inhibition in wood-dust agar medium inoculated with wood-rot fungi. R. pseudoacacia showed the best anti-decay properties in both tests, and its methanol-untreated samples indicated higher activity than the methanol-treated samples in the hyphal growth test. In the antioxidative test, α-tocopherol, a natural antioxidant, and BHT, a synthetic antioxidant, were used as references against which to compare the antioxidant activities of the extracted fractions. The ethyl acetate fraction of F. rhynchophylla bark indicated the highest activity in this test, and all fractions of R. pseudoacacia extractives also indicated higher activities than the other fractions. Among the isolated compounds, aesculetin isolated from F. rhynchophylla bark showed the best activity, followed by robinetinidin from R. pseudoacacia.


Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.1-17 / 2017
  • A deep learning framework is software designed to help develop deep learning models. Some of its important functions include automatic differentiation and utilization of GPUs. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, Microsoft Cognitive Toolkit, was released under an open-source license, following Google's Tensorflow a year earlier. The early deep learning frameworks were developed mainly for research at universities. Beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given this trend, Google and other companies are expected to continue investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare some of the deep learning frameworks, so we compare three that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in a sense a predecessor of the other two. The most common and important function of deep learning frameworks is the ability to perform automatic differentiation. Basically, all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained. With these partial derivatives, the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First of all, convenience of coding is, in decreasing order, CNTK, Tensorflow, and Theano. The criterion is simply based on code length; the learning curve and ease of coding are not the main concern. According to these criteria, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. The reason that CNTK and Tensorflow are easier to implement with is that those frameworks provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives us coding flexibility. With low-level coding such as in Theano, we can implement and test any new deep learning model or any new search method we can think of. As for execution speed, there is no meaningful difference between the frameworks. According to the experiment, the execution speeds of Theano and Tensorflow are very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept the same: the CNTK code had to be run in a PC environment without a GPU, where code executes as much as 50 times slower than with a GPU. But we concluded that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate them. Some of the important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNNs, RNNs, and DBNs. For a user implementing a large-scale deep learning model, support for multiple GPUs or multiple servers is also important. And for someone learning deep learning models, the availability of sufficient examples and references matters as well.
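The "automatic differentiation on a computational graph" idea the abstract describes can be shown with a minimal reverse-mode sketch: each node records its inputs together with the local partial derivative on each edge, and a backward sweep applies the chain rule. This is an illustration of the general technique, not code from any of the three frameworks, and it assumes a tree-shaped graph (no shared subexpressions).

```python
class Node:
    """A value in the computational graph plus the edges that produced it."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # (parent_node, local_partial) pairs
        self.grad = 0.0

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Node(self.value * other.value,
                    [(self, other.value), (other, self.value)])

    def __add__(self, other):
        # d(a+b)/da = d(a+b)/db = 1
        return Node(self.value + other.value, [(self, 1.0), (other, 1.0)])

def backward(output):
    """Reverse-mode sweep: accumulate d(output)/d(node) via the chain rule.
    Correct for tree-structured graphs, as in this small example."""
    output.grad = 1.0
    stack = [output]
    while stack:
        node = stack.pop()
        for parent, local in node.parents:
            parent.grad += node.grad * local
            stack.append(parent)

# y = w*x + b  ->  dy/dw = x, dy/dx = w, dy/db = 1
w, x, b = Node(3.0), Node(2.0), Node(1.0)
y = w * x + b
backward(y)
print(w.grad, x.grad, b.grad)   # 2.0 3.0 1.0
```

The frameworks differ in how much of this bookkeeping they hide from the user, which is essentially the abstraction gap between Theano and the other two that the abstract discusses.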

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung;Song, Min-chae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.59-83 / 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In deep learning sentiment analysis of English texts, the natural language sentences in the training and test datasets are usually converted into sequences of word vectors before being fed into the deep learning models. In this case, word vectors generally refer to vector representations of words obtained by splitting a sentence on space characters. There are several ways to derive word vectors, one of which is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data. These have been widely used in studies of sentiment analysis of reviews from various fields such as restaurants, movies, laptops, and cameras. Unlike in English, the morpheme plays an essential role in sentiment analysis and sentence-structure analysis in Korean, a typical agglutinative language with well-developed postpositions and endings. A morpheme is the smallest meaningful unit of a language, and a word consists of one or more morphemes. For example, for the word '예쁘고', the morphemes are '예쁘' (adjective) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit in Korean sentiment analysis. Therefore, in this study, we use morpheme vectors as input to a deep learning model rather than the word vectors mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word-vector derivation mechanism to sentences divided into their constituent morphemes. This raises several questions. What is the desirable range of POS (part-of-speech) tags when deriving morpheme vectors to improve the classification accuracy of a deep learning model? Is it proper to apply a typical word-vector model, which relies primarily on the form of words, to Korean with its high homonym ratio? Will text preprocessing such as correcting spelling or spacing errors affect the classification accuracy, especially when drawing morpheme vectors from Korean product reviews with many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which may be encountered first when applying various deep learning models to Korean texts. As a starting point, we summarize these issues in three central research questions. First, which is more effective as the initial input to a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme-vector derivation method for Korean regarding the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can we reach a satisfactory level of classification accuracy when applying deep learning to Korean sentiment analysis? To address these research questions, we generate various types of morpheme vectors reflecting the questions and then compare classification accuracy using a non-static CNN (Convolutional Neural Network) model that takes the morpheme vectors as input. For training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used. To derive morpheme vectors, we use data from the same domain as the target and data from another domain: about 2 million Naver Shopping cosmetics product reviews and 520,000 Naver News articles, arguably corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ in terms of the following three criteria. First, they come from two types of data source: Naver News, with high grammatical correctness, and Naver Shopping's cosmetics product reviews, with low grammatical correctness. Second, they are distinguished by the degree of data preprocessing, namely, only splitting sentences or additionally correcting spelling and spacing after sentence separation. Third, they vary in the form of input fed into the word-vector model: whether the morphemes themselves are entered, or the morphemes with their POS tags attached. The morpheme vectors further vary in the range of POS tags considered, the minimum frequency for a morpheme to be included, and the random initialization range. All morpheme vectors are derived with the CBOW (Continuous Bag-Of-Words) model, with a context window of 5 and a vector dimension of 300. Utilizing same-domain text even with lower grammatical correctness, performing spelling and spacing corrections as well as sentence splitting, and incorporating morphemes of all POS tags, including the incomprehensible category, led to better classification accuracy. POS-tag attachment, devised for the high proportion of homonyms in Korean, and the minimum-frequency threshold for including a morpheme did not seem to have any definite influence on the classification accuracy.
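The input preparation this abstract describes can be sketched as follows: morphemes (optionally with POS tags attached, to separate homonyms) become the tokens, and a CBOW-style model consumes (context, target) pairs within a window of 5. The analyzed sentence below is a hand-made stand-in for real morphological-analyzer output, and the tag format is illustrative.

```python
def to_tokens(analyzed, attach_pos=True):
    """analyzed: list of (morpheme, POS) pairs from a morphological analyzer.
    With attach_pos=True, homonyms with different POS become distinct tokens."""
    return [f"{m}/{p}" if attach_pos else m for m, p in analyzed]

def cbow_pairs(tokens, window=5):
    """(context, target) training pairs, as a CBOW model would consume them."""
    pairs = []
    for i, target in enumerate(tokens):
        context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
        pairs.append((context, target))
    return pairs

# '예쁘고' -> morphemes '예쁘' (adjective stem, VA) + '고' (connective ending, EC)
analyzed = [("색", "NNG"), ("이", "JKS"), ("예쁘", "VA"), ("고", "EC")]
tokens = to_tokens(analyzed)
print(tokens)                        # ['색/NNG', '이/JKS', '예쁘/VA', '고/EC']
print(cbow_pairs(tokens, window=5)[2])
```

In the study itself, such token sequences would then be passed to a CBOW trainer (vector dimension 300) to produce the morpheme vectors that initialize the CNN's embedding layer.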

COMPLIANCE STUDY OF METHYLPHENIDATE IR IN THE TREATMENT OF ADHD (주의력결핍과잉행동장애 치료 약물 Methylphenidate IR의 순응도 연구)

  • Hwang, Jun-Wan;Cho, Soo-Churl;Kim, Boong-Nyun
    • Journal of the Korean Academy of Child and Adolescent Psychiatry / v.15 no.2 / pp.160-167 / 2004
  • Objectives: There have been very few studies on compliance with methylphenidate immediate-release form (MPH-IR), the drug most frequently used for Attention Deficit Hyperactivity Disorder (ADHD) in Korea. This study was conducted to investigate the compliance rate and related factors over a one-year course of outpatient (OPD) pharmacotherapy for children with ADHD. Method: A total of 100 ADHD patients were selected randomly among patients treated with MPH-IR from September 2002 to December 2002. All selected patients were diagnosed by DSM-IV ADHD criteria and fulfilled the inclusion criteria. In March 2003 (at 6 months of treatment), all patients and parents received a questionnaire on compliance with, and satisfaction with, MPH-IR treatment. In October 2003 (at 1 year of treatment), the investigators evaluated socio-demographic variables, developmental data, medical data, family data, comorbid disorders, treatment variables, and compliance rate. From these comprehensive data, the compliance rate at a mean of 1 year of treatment and the related factors were investigated. Results: 1) In the questionnaire on compliance and satisfaction with MPH-IR treatment, 60% of respondents (parents) reported at least moderate satisfaction with the effectiveness of MPH-IR. Their compliance rate for the morning prescription was 81%, but the rate for the afternoon prescription was 43%. 2) In the evaluation at 1 year of treatment (October 2003), 38% of parents had dropped out of OPD treatment. The mean compliance rate for the 1-year treatment was 62%. 3) Compared with the noncompliant (drop-out) group, the compliant group showed higher total, verbal, and performance IQ scores. Among the treatment variables, a higher responder rate (clinician rating), higher medication dosage, and better compliance with the afternoon prescription were found in the compliant group compared with the noncompliant group. There were no statistical differences in demographic variables (age, sex, SES, parental education level), medical data, developmental profiles, or academic function. Conclusion: To our knowledge, this is the first report on the compliance rate of MPH-IR treatment for children with ADHD. The compliance rate at a mean of 1 year of treatment was 62%, comparable with other studies performed in foreign countries, especially the United States. In this study, the compliance-related factors were IQ score, clinical treatment response, dosage of MPH-IR, and early compliance with the afternoon prescription. These results suggest that clinicians should plan strategies to promote early compliance with the afternoon prescription and to enhance overall treatment response.


Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.89-105 / 2014
  • After the emergence of the Internet, social media with highly interactive Web 2.0 applications has provided very user-friendly means for consumers and companies to communicate with each other. Users routinely publish content involving their opinions and interests on social media such as blogs, forums, chat rooms, and discussion boards, and this content is released in real time on the Internet. For that reason, many researchers and marketers regard social media content as a source of information for business analytics to develop business insights, and many studies have reported results on mining business intelligence from social media content. In particular, opinion mining and sentiment analysis, techniques to extract, classify, understand, and assess the opinions implicit in text content, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques, and tools have been presented by these researchers. However, we have found some weaknesses in their methods, which are often technically complicated and not sufficiently user-friendly for supporting business decisions and planning. In this study, we attempted to formulate a more comprehensive and practical approach to opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining using social media content, from the initial data-gathering stage to the final presentation session. Our proposed approach to opinion mining consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts have to choose the target social media. Each target medium requires a different means of access: open APIs, search tools, DB-to-DB interfaces, content purchasing, and so on. The second phase is pre-processing to generate useful material for meaningful analysis. If we do not remove garbage data, the results of social media analysis will not provide meaningful and useful business insights. To clean social media data, natural language processing techniques should be applied. The next step is the opinion mining phase, where the cleansed social media content set is analyzed. The qualified data set includes not only user-generated content but also content identification information such as creation date, author name, user ID, content ID, hit counts, review or reply, favorites, etc. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool. Topic extraction and buzz analysis are usually related to market-trend analysis, while sentiment analysis is utilized to conduct reputation analysis. There are also various applications, such as stock prediction, product recommendation, and sales forecasting. The last phase is visualization and presentation of the analysis results. The major focus and purpose of this phase are to explain the results of the analysis and help users comprehend their meaning. Therefore, to the extent possible, deliverables from this phase should be simple, clear, and easy to understand, rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company. We targeted the market leader, NS Food, with a 66.5% market share; the firm has kept the No. 1 position in the Korean ramen business for several decades. We collected a total of 11,869 pieces of content, including blogs, forum posts, and news articles. After collecting the social media content data, we generated instant-noodle-business-specific language resources for data manipulation and analysis using natural language processing. In addition, we classified content into more detailed categories such as marketing features, environment, and reputation. In these phases, we used free software such as the TM, KoNLP, ggplot2, and plyr packages from the R project. As a result, we presented several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, valence tree maps, and other visualized images, to provide vivid, full-color examples using open-library software packages from the R project. Business actors can detect at a glance areas that are weak, strong, positive, negative, quiet, or loud. A heat map can show the movement of sentiment or volume in a category-by-time matrix, where color density marks time periods. The valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers to quickly understand the "big picture" business situation, since a tree map can present buzz volume and sentiment with a hierarchical structure for a given period. This case study offers real-world business insights from market sensing, demonstrating to practically minded business users how they can use these types of results for timely decision making in response to ongoing changes in the market. We believe our approach can provide a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well.
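The four-phase pipeline (collect, qualify, analyze, visualize) can be sketched end to end in a few lines. The case study itself used R packages (TM, KoNLP, ggplot2, plyr); this Python sketch is only a structural illustration, and the sentiment lexicons, cleaning rule, and text "chart" below are toy placeholders, not the study's resources.

```python
import re
from collections import Counter

# Hypothetical domain lexicons standing in for the study's Korean resources
POS_LEXICON = {"good", "tasty", "love"}
NEG_LEXICON = {"bad", "salty", "expensive"}

def qualify(raw_posts):
    """Phase 2: strip markup/garbage and normalize case."""
    return [re.sub(r"<[^>]+>", " ", p).lower() for p in raw_posts]

def analyze(posts):
    """Phase 3: lexicon-based sentiment tally across posts."""
    counts = Counter()
    for p in posts:
        words = set(re.findall(r"[a-z]+", p))
        counts["positive"] += len(words & POS_LEXICON)
        counts["negative"] += len(words & NEG_LEXICON)
    return counts

def visualize(counts):
    """Phase 4: a bare-bones text 'bar chart' in place of ggplot2 graphics."""
    return "\n".join(f"{k:<9}{'#' * v} ({v})" for k, v in counts.items())

# Phase 1 (collecting) is represented here by a hand-made sample
raw = ["<p>This ramen is tasty but expensive</p>", "so salty, bad value"]
print(visualize(analyze(qualify(raw))))
```

Each phase maps to a function boundary, so a real implementation could swap in a crawler, a morphological analyzer, a trained classifier, or a plotting library without changing the pipeline's shape.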