• Title/Summary/Keyword: user awareness (사용자인지도)

Search Results: 35,970, Processing Time: 0.062 seconds

Development Strategy for New Climate Change Scenarios based on RCP (온실가스 시나리오 RCP에 대한 새로운 기후변화 시나리오 개발 전략)

  • Baek, Hee-Jeong;Cho, ChunHo;Kwon, Won-Tae;Kim, Seong-Kyoun;Cho, Joo-Young;Kim, Yeongsin
    • Journal of Climate Change Research / v.2 no.1 / pp.55-68 / 2011
  • The Intergovernmental Panel on Climate Change (IPCC) has identified the causes of climate change and developed measures to address it at the global level. A key component of this work is developing and assessing future climate change scenarios. The IPCC Expert Meeting in September 2007, in which 130 researchers and users took part, identified a new greenhouse gas concentration scenario, the "Representative Concentration Pathway (RCP)", and established the framework and development schedules of the Climate Modeling (CM), Integrated Assessment Modeling (IAM), and Impact, Adaptation, and Vulnerability (IAV) communities for the fifth IPCC Assessment Report (AR5). At the IPCC Expert Meeting in September 2008, the CM community agreed on a new set of coordinated climate model experiments, the fifth phase of the Coupled Model Intercomparison Project (CMIP5), which consists of more than 30 standardized experiment protocols on short-term and long-term time scales, in order to enhance understanding of climate change for the IPCC AR5, to develop climate change scenarios, and to address major issues raised in the IPCC AR4. Since early 2009, fourteen countries including Korea have been carrying out CMIP5-related projects. With increasing interest in climate change, the COordinated Regional Downscaling EXperiment (CORDEX) was launched in 2009 to generate regional- and local-level information on climate change. The National Institute of Meteorological Research (NIMR) under the Korea Meteorological Administration (KMA) contributed to the IPCC AR4 by developing climate change scenarios based on the IPCC SRES using ECHO-G, and has embarked on crafting national climate change scenarios as well as RCP-based global ones by engaging in international projects such as CMIP5 and CORDEX. NIMR/KMA will contribute to the IPCC AR5 and will develop national climate change scenarios reflecting geographical factors, local climate characteristics, and user needs, and will provide them to national IAV and IAM communities to assess future regional climate impacts and support adaptation.

A Performance Evaluation of the e-Gov Standard Framework on PaaS Cloud Computing Environment: A Geo-based Image Processing Case (PaaS 클라우드 컴퓨팅 환경에서 전자정부 표준프레임워크 성능평가: 공간영상 정보처리 사례)

  • KIM, Kwang-Seob;LEE, Ki-Won
    • Journal of the Korean Association of Geographic Information Studies / v.21 no.4 / pp.1-13 / 2018
  • Platform as a Service (PaaS), one of the cloud computing service models, and the e-government (e-Gov) standard framework from the Ministry of the Interior and Safety (MOIS) both provide developers with practical computing environments for building web-based services. Web application developers in the geo-spatial information field can utilize and deploy middleware and common functions provided by either cloud-based services or the e-Gov standard framework. However, there have been few studies on their applicability and performance in actual geo-spatial information applications. The motivation of this study was therefore to investigate the relevance of these technologies and platforms. The applicability of these computing environments and their performance were evaluated by deploying a test application: a spatial image processing service using Web Processing Service (WPS) 2.0 on the e-Gov standard framework. The test service was supported by Cloud Foundry, one of the open-source PaaS cloud platforms. Using these components, the performance of the test system was assessed in two cases, 300 and 500 threads, through a comparison of two kinds of service: one running on the PaaS alone, and one running the e-Gov framework on the PaaS. The performance measurements were based on recording response times to users' requests over 3,600 seconds. According to the experimental results, all the test cases of the e-Gov framework on the PaaS showed greater performance. The e-Gov standard framework on a PaaS cloud is expected to be an important factor in building web-based spatial information services, especially in the public sector.
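The thread-based comparison described above can be sketched as a simple load test that records per-request response times. The code below is only an illustrative mock: the WPS endpoint is replaced by a stub function, and names such as `call_service` and `load_test` are hypothetical, not the authors' actual test harness.

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def call_service(request_id):
    """Stub for one WPS request; a real test would issue an HTTP call here."""
    start = time.perf_counter()
    time.sleep(0.001)  # simulated server-side processing delay
    return time.perf_counter() - start

def load_test(n_threads, n_requests):
    """Issue n_requests concurrently across n_threads and return response-time stats."""
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        times = list(pool.map(call_service, range(n_requests)))
    return {
        "count": len(times),
        "mean_s": statistics.mean(times),
        "max_s": max(times),
    }

if __name__ == "__main__":
    for threads in (300, 500):  # the two concurrency cases compared in the paper
        stats = load_test(threads, n_requests=1000)
        print(threads, round(stats["mean_s"], 4))
```

A real evaluation would replace the stub with requests against the deployed PaaS and e-Gov-on-PaaS services and compare the recorded distributions.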

Design of Cloud-Based Data Analysis System for Culture Medium Management in Smart Greenhouses (스마트온실 배양액 관리를 위한 클라우드 기반 데이터 분석시스템 설계)

  • Heo, Jeong-Wook;Park, Kyeong-Hun;Lee, Jae-Su;Hong, Seung-Gil;Lee, Gong-In;Baek, Jeong-Hyun
    • Korean Journal of Environmental Agriculture / v.37 no.4 / pp.251-259 / 2018
  • BACKGROUND: Various culture media have been used for hydroponic cultures of horticultural plants in smart greenhouses under natural and artificial light. In a smart farm system, management of the culture medium, that is, control of the medium amount and/or the components absorbed by plants during the cultivation period, is performed with ICT (Information and Communication Technology) and/or IoT (Internet of Things). This study was conducted to develop a cloud-based data analysis system for effective management of the culture media applied to hydroponic culture and plant growth in smart greenhouses. METHODS AND RESULTS: A conventional inorganic Yamazaki medium and organic media derived from agricultural byproducts such as immature fruits, leaves, or stems were used as hydroponic culture media. Component changes of the solutions according to growth stage were monitored, and plant growth was observed. Red and green lettuce seedlings (Lactuca sativa L.) that had developed 2~3 true leaves were used as plant materials. The seedlings were grown hydroponically in a smart greenhouse under fluorescent and light-emitting diode (LED) lights at a light intensity of 150 μmol/m²/s for 35 days. Growth data of the seedlings were classified and stored, on the basis of growth parameters, in a relational database in a virtual machine generated from an OpenStack cloud system. The relation between plant growth and the absorption patterns of 9 inorganic components in the media during the cultivation period was investigated. The stored data on component changes and growth parameters were visualized on the web through a web framework and Node.js. CONCLUSION: Time-series changes of inorganic components in the culture media were observed. The increases in unfolded leaves and fresh weight of the seedlings depended mainly on macroelements such as NO₃-N, and were affected by the different inorganic and organic media. With the developed data analysis system, actual measurement data can be accessed from the user's smart device, and analyses and comparisons of the data are visualized graphically as time series based on the cloud database. Agricultural management involving data visualization and/or plant growth can be implemented by the data analysis system across agricultural sites, regardless of changes in the culture environment.
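A minimal sketch of the relational storage described above, using in-memory SQLite in place of the cloud database. The table and column names (`growth_log`, `nutrient_log`) and the sample values are hypothetical, chosen only to illustrate storing growth parameters and medium components as joinable time series.

```python
import sqlite3

# In-memory stand-in for the cloud relational database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE growth_log (
    day INTEGER, cultivar TEXT, leaves INTEGER, fresh_weight_g REAL);
CREATE TABLE nutrient_log (
    day INTEGER, component TEXT, mg_per_l REAL);
""")

# Example rows: weekly growth records and NO3-N concentration left in the medium.
conn.executemany("INSERT INTO growth_log VALUES (?, ?, ?, ?)",
                 [(7, "red", 4, 1.2), (14, "red", 6, 3.5), (21, "red", 9, 8.1)])
conn.executemany("INSERT INTO nutrient_log VALUES (?, ?, ?)",
                 [(7, "NO3-N", 105.0), (14, "NO3-N", 88.0), (21, "NO3-N", 61.0)])

# Join growth with medium composition to inspect absorption vs. growth over time.
rows = conn.execute("""
    SELECT g.day, g.leaves, g.fresh_weight_g, n.mg_per_l
    FROM growth_log g JOIN nutrient_log n
      ON g.day = n.day AND n.component = 'NO3-N'
    ORDER BY g.day
""").fetchall()
for day, leaves, weight, no3 in rows:
    print(day, leaves, weight, no3)
```

The same join underlies the time-series visualization: declining NO₃-N alongside rising leaf count and fresh weight is exactly the absorption-versus-growth relation the system is meant to expose.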

Multi-Category Sentiment Analysis for Social Opinion Related to Artificial Intelligence on Social Media (소셜 미디어 상에서의 인공지능 관련 사회적 여론에 대한 다 범주 감성 분석)

  • Lee, Sang Won;Choi, Chang Wook;Kim, Dong Sung;Yeo, Woon Young;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.51-66 / 2018
  • As AI (artificial intelligence) technologies have evolved swiftly, many products and services are under development in various fields to improve users' experience. Alongside this technological advance, the negative effects of AI technologies have been actively discussed, while positive expectations for them exist at the same time. For instance, social issues such as the trolley dilemma and system security are being debated, while autonomous vehicles based on artificial intelligence have drawn attention for their potential to increase safety. It is therefore necessary to examine and analyze the major social issues around artificial intelligence for its development and societal acceptance. In this paper, multi-category sentiment analysis is conducted on online public opinion about artificial intelligence after identifying the trending topics related to it over two years, from January 2016 to December 2017, a period that includes the match between Lee Sedol and AlphaGo. Online news articles, news headlines, and news comments were crawled from the largest web portal in South Korea. Considering the importance of the trending topics, online public opinion was analyzed by topic into seven sentiment categories, anger, dislike, fear, happiness, neutrality, sadness, and surprise, rather than only simple positive or negative sentiment. As a result, it was found that the top sentiment is "happiness" for most events, yet the sentiments for each keyword differ. In addition, when the research period was divided into four periods (the first and second halves of 2016 and of 2017), it was confirmed that the sentiment of "anger" decreases over time. Based on the results of this analysis, it is possible to grasp the various topics and trends currently discussed around artificial intelligence, which can be used to prepare countermeasures. We expect to measure public opinion more precisely in the future by incorporating the empathy level of news comments.
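As a toy illustration of multi-category (rather than binary) sentiment classification over the seven categories above, the lexicon and the `classify` helper below are hypothetical stand-ins for the paper's trained model, which worked on Korean text:

```python
# Toy seven-category sentiment lexicon; a real system would use a trained classifier.
LEXICON = {
    "anger":      {"outrage", "furious", "unacceptable"},
    "dislike":    {"dislike", "annoying", "boring"},
    "fear":       {"afraid", "threat", "dangerous"},
    "happiness":  {"amazing", "great", "hopeful"},
    "neutrality": set(),
    "sadness":    {"sad", "loss", "regret"},
    "surprise":   {"unexpected", "shocking", "unbelievable"},
}

def classify(comment):
    """Return the sentiment category with the most lexicon hits (neutrality if none)."""
    words = set(comment.lower().split())
    scores = {cat: len(words & vocab) for cat, vocab in LEXICON.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutrality"

print(classify("AlphaGo's victory was shocking and unexpected"))  # → surprise
print(classify("the match result today"))                         # → neutrality
```

Tallying such per-comment labels by topic and by half-year period gives the kind of sentiment-trend breakdown reported in the abstract.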

Effects of vocal aerobic treatment on voice improvement in patients with voice disorders (성대에어로빅치료법이 음성장애환자의 음성개선에 미치는 효과)

  • Park, Jun-Hee;Yoo, Jae-Yeon;Lee, Ha-Na
    • Phonetics and Speech Sciences / v.11 no.3 / pp.69-76 / 2019
  • This study aimed to investigate the effects of vocal aerobic treatment (VAT) on voice improvement in patients with voice disorders. Twenty patients (13 males, 7 females) were diagnosed with voice disorders on the basis of videostroboscopy and voice evaluations. Acoustic evaluation was performed with the Multi-Dimensional Voice Program (MDVP) and Voice Range Profile (VRP) of the Computerized Speech Lab (CSL), and aerodynamic evaluation with the Phonatory Aerodynamic System (PAS). The changes in F0, jitter, shimmer, and NHR before and after treatment were measured by MDVP; the F0 range and energy range were measured with VRP; and the changes in expiratory volume (FVC), phonation time (PHOT), mean expiratory airflow (MEAF), mean peak air pressure (MPAP), and aerodynamic efficiency (AEFF) were measured with PAS. Videostroboscopy was performed to evaluate changes in the regularity, symmetry, mucosal wave, and amplitude of both vocal folds before and after treatment. Voice therapy was performed once a week for each patient using the VAT program as a holistic voice therapy approach. The average number of treatments per patient was 6.5. In the MDVP, jitter, shimmer, and NHR showed statistically significant decreases (p < .001, p < .01, p < .05). VRP results showed that Hz and semitones in the frequency range improved significantly after treatment (p < .01, p < .05), as did FVC and PHOT in the PAS results (p < .01, p < .001). On videostroboscopy, findings in patients with functional voice disorders, laryngopharyngeal reflux, and benign vocal fold lesions returned to normal. Thus, the VAT program was found to be effective in improving the acoustic and aerodynamic aspects of the voice in patients with voice disorders. In future studies, the effect of VAT should be examined within homogeneous groups of voice disorders, and both subjective and objective voice improvement should be investigated. Furthermore, it is necessary to examine the effects of VAT in professional voice users.

Anomaly Detection for User Action with Generative Adversarial Networks (적대적 생성 모델을 활용한 사용자 행위 이상 탐지 방법)

  • Choi, Nam woong;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.43-62 / 2019
  • In the past, the anomaly detection field was dominated by methods that determined whether an abnormality existed based on statistics derived from specific data. This methodology worked because data used to be low-dimensional, so classical statistical methods were effective. However, as data characteristics have become complex in the big data era, it has become difficult to accurately analyze and predict industrial data in the conventional way. Supervised learning algorithms such as SVMs and decision trees were therefore adopted. However, a supervised model can predict test data accurately only when the class distribution is balanced, whereas most data generated in industry has imbalanced classes, so the predictions of supervised models are not always valid. To overcome these drawbacks, many studies now use unsupervised models that are not influenced by the class distribution, such as autoencoders or generative adversarial networks. In this paper, we propose a method to detect anomalies using generative adversarial networks. AnoGAN, introduced by Schlegl et al. (2017), is a model that performs anomaly detection on medical images; it is built from convolutional neural networks. In contrast, research on sequence-data anomaly detection using generative adversarial networks is scarce compared to that on image data. Li et al. (2018) proposed a model using LSTM, a type of recurrent neural network, to classify anomalies in numerical sequence data, but it has not been applied to categorical sequence data, nor did it apply the feature matching method of Salimans et al. (2016). This suggests that much remains to be tried in anomaly classification of sequence data with generative adversarial networks. To learn the sequence data, the generative adversarial network is built from LSTMs: the generator is a 2-stacked LSTM with 32-dim and 64-dim hidden unit layers, and the discriminator is an LSTM with a 64-dim hidden unit layer. Existing papers on anomaly detection for sequence data derive anomaly scores from the entropy of the probability assigned to the actual data, but in this paper, as mentioned earlier, anomaly scores are derived using the feature matching technique. In addition, the process of optimizing the latent variables was designed with an LSTM to improve model performance. The modified generative adversarial model was more precise than the autoencoder in all experiments and was approximately 7% higher in accuracy. In terms of robustness, the generative adversarial network also performed better than the autoencoder: because it learns the data distribution from real categorical sequence data, it is not thrown off by a single dominant normal pattern, whereas the autoencoder is. The robustness test showed that the accuracy of the autoencoder was 92% and that of the generative adversarial network was 96%; in terms of sensitivity, the autoencoder reached 40% and the generative adversarial network 51%. Experiments were also conducted to show how much performance changes with differences in the latent-variable optimization structure; as a result, sensitivity improved by about 1%. These results offer a new perspective on latent-variable optimization, which had previously received relatively little attention.
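The feature-matching anomaly score described above can be sketched schematically: intermediate features f(x) of a test sequence are compared with the features of generated (normal) sequences, and a large residual marks an anomaly. The sketch below is pure Python under strong simplifications: the "features" are bigram transition frequencies rather than LSTM discriminator activations, and the "generated" sequences are hard-coded normal samples rather than generator output.

```python
from collections import Counter
from itertools import product
from math import sqrt

ALPHABET = "AB"  # toy categorical alphabet for user-action sequences

def features(seq):
    """Stand-in for discriminator features f(x): normalized bigram frequencies."""
    bigrams = Counter(zip(seq, seq[1:]))
    total = max(sum(bigrams.values()), 1)
    return [bigrams[(a, b)] / total for a, b in product(ALPHABET, repeat=2)]

def anomaly_score(seq, generated):
    """Feature-matching residual: distance from the mean features of generated samples."""
    feats = [features(g) for g in generated]
    mean = [sum(col) / len(feats) for col in zip(*feats)]
    fx = features(seq)
    return sqrt(sum((a - b) ** 2 for a, b in zip(fx, mean)))

# Hard-coded stand-ins for generator output trained on normal behavior (A/B alternation).
generated = ["ABABABAB", "BABABABA", "ABABABAB"]
print(anomaly_score("ABABABAB", generated))  # low: matches the normal pattern
print(anomaly_score("AAAAAAAA", generated))  # high: repeated-A anomaly
```

In the paper's actual model, `features` corresponds to an intermediate layer of the LSTM discriminator and `generated` to samples drawn after latent-variable optimization; the thresholding of the residual is unchanged.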

Mobile application-based dietary sugar intake reduction intervention study according to the stages of behavior change in female college students (모바일 어플리케이션 기반 당류 저감화 중재 프로그램의 행동변화단계에 따른 효과 분석 : 일부 여대생 대상 연구)

  • Choi, Yunjung;Kim, Hyun-Sook
    • Journal of Nutrition and Health / v.52 no.5 / pp.488-500 / 2019
  • Purpose: This study examined the effects of a mobile app-based program to reduce dietary sugar intake according to the stages of behavioral change in dietary sugar reduction among female college students. Methods: The program used in this study can monitor dietary sugar intake after the dietary intake is recorded and can provide educational messages for reducing dietary sugar intake. In an eight-week pre-post intervention study, 68 female college students were instructed to record all the food they consumed daily and received weekly educational information. Before and after the intervention, the subjects answered a questionnaire on sugar-related nutrition knowledge, sugar-intake behavior, and sugar-intake frequency. For statistical analysis, ANOVA and paired t-tests were used for comparisons among the Precontemplation (PC), Contemplation·Preparation (C·P), and Action·Maintenance (A·M) stages. Results: Significant differences were observed in snacking frequency, experience of nutrition education, and preference for sweetness according to the stages of behavior change in dietary sugar reduction. After the intervention, the sugar-related nutrition knowledge score increased significantly in the PC and C·P stages. The sugar-intake behavior score increased significantly in all stages. The intake frequency of chocolate, muffins or cakes, and drinking yogurt decreased significantly in the PC stage, and the intake frequency of biscuits, carbonated beverages, and fruit juice decreased significantly in the C·P stage. Subjects in the PC and C·P stages had less desirable nutrition knowledge, sugar-intake behavior, and sugar-intake frequency than those in the A·M stage, but the intervention significantly improved all three. Conclusion: This program can be an effective educational tool for subjects in the PC and C·P stages, and its usability and sustainability are expected to increase further if it is appropriately integrated into a health platform program.

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. Due to recent interest in the technology and research on various algorithms, the field of artificial intelligence has achieved greater technological advances than ever. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable and processable knowledge constructed from complex, informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used together with statistical artificial intelligence such as machine learning. More recently, the purpose of a knowledge base is to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question-answering systems of smart speakers. However, building a useful knowledge base is time-consuming and still requires a great deal of expert effort. In recent years, much research and technology in knowledge-based artificial intelligence uses DBpedia, one of the largest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, user-created summaries of some unifying aspect of an article. This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of knowledge accuracy, since the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the appropriate sentences from which to extract triples, and selecting values and transforming them into RDF triple structures. The structures of Wikipedia infoboxes are defined as infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training data set from a Wikipedia dump using a method that adds BIO tags to sentences, and we trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF for the knowledge extraction process. Through this proposed process, structured knowledge can be utilized by extracting knowledge according to the ontology schema from text documents. In addition, this methodology can significantly reduce the effort required of experts to construct instances according to the ontology schema.
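The final step, turning BIO-tagged sentences into RDF triples, can be sketched as follows. The tags themselves would come from the CRF or Bi-LSTM-CRF model, so the hard-coded tags and the property name `birthPlace` below are hypothetical examples rather than output of the paper's model.

```python
def spans_from_bio(tokens, tags):
    """Collect contiguous B-/I- tagged token runs into (label, text) pairs."""
    spans, current, label = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append((label, " ".join(current)))
            current, label = [tok], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(tok)
        else:
            if current:
                spans.append((label, " ".join(current)))
            current, label = [], None
    if current:
        spans.append((label, " ".join(current)))
    return spans

def to_triples(subject, tokens, tags):
    """Turn each extracted span into an RDF-style (subject, predicate, object) triple."""
    return [(subject, label, value) for label, value in spans_from_bio(tokens, tags)]

tokens = ["Yi", "Sun-sin", "was", "born", "in", "Hanseong", "."]
tags   = ["O", "O", "O", "O", "O", "B-birthPlace", "O"]
print(to_triples("Yi_Sun-sin", tokens, tags))
# → [('Yi_Sun-sin', 'birthPlace', 'Hanseong')]
```

In the full pipeline, the subject comes from the document classification step and the predicate labels from the ontology attributes of the matched class, so each extracted span lands directly in the DBpedia-style schema.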

A Study on Improvement of Collaborative Filtering Based on Implicit User Feedback Using RFM Multidimensional Analysis (RFM 다차원 분석 기법을 활용한 암시적 사용자 피드백 기반 협업 필터링 개선 연구)

  • Lee, Jae-Seong;Kim, Jaeyoung;Kang, Byeongwook
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.139-161 / 2019
  • Use of the e-commerce market has become part of everyday life, and it has become important for customers to know where and how to make reasonable purchases of good-quality products. This change in purchasing psychology tends to make purchasing decisions difficult amid vast amounts of information. Here, a recommendation system can reduce the cost of information retrieval and improve satisfaction by analyzing customers' purchasing behavior. Amazon and Netflix are well-known examples of sales marketing using recommendation systems: in the case of Amazon, 60% of recommended goods are reported to lead to purchases, achieving a 35% increase in sales, while Netflix found that 75% of movie viewing came through its recommendation service. This personalization technique is considered one of the key strategies for one-to-one marketing, which is useful in online markets where salespeople do not exist. Recommendation techniques mainly used today include collaborative filtering and content-based filtering; hybrid techniques and association rules that combine them are also used in various fields. Of these, collaborative filtering is the most popular today. Collaborative filtering recommends products preferred by neighbors with similar preferences or purchasing behavior, based on the assumption that users who have shown similar tendencies in purchasing or evaluating products in the past will show similar tendencies for other products. However, most existing systems recommend only within the same category of products, such as books or movies. This is because the recommendation system estimates purchase satisfaction for a new, never-bought item using the customer's purchase ratings of similar commodities based on transaction data. In addition, the reliability of the purchase ratings used in recommendation systems is a serious problem. In particular, a 'compensated review' refers to a customer purchase rating intentionally manipulated through company intervention. In fact, Amazon has cracked down on such compensated reviews since 2016 and has worked hard to reduce false information and increase credibility. Surveys have shown that the average rating of products with compensated reviews is higher than that of products without them, that compensated reviews are about 12 times less likely to give the lowest rating, and that they are about 4 times less likely to leave a critical opinion. Customer purchase ratings are thus full of noise. This problem is directly related to the performance of recommendation systems, which aim to maximize profits by attracting highly satisfied customers in most e-commerce transactions. In this study, we propose new indicators that can objectively substitute for existing customer purchase ratings by using the RFM multidimensional analysis technique. RFM multidimensional analysis is the most widely used analytical method in customer relationship management (CRM) marketing, and is a data analysis method for selecting customers who are likely to purchase goods. When the proposed index was verified against actual purchase history data, the accuracy was about 55%. Since this result came from recommending a total of 4,386 different types of products that had never been bought before, it represents relatively high accuracy and utilization value. This study also suggests the possibility of a general recommendation system applicable to various offline product data. If additional data are acquired in the future, the accuracy of the proposed recommendation system can be improved.
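The idea of replacing noisy explicit ratings with an RFM-derived implicit score can be sketched as below. The field names, normalization caps, and the equal-weight combination are hypothetical illustrations of the general RFM approach, not the paper's actual formulation.

```python
from datetime import date

# (customer, purchase_date, amount) transaction log — illustrative data only.
transactions = [
    ("alice", date(2019, 3, 1), 40.0), ("alice", date(2019, 3, 20), 25.0),
    ("bob",   date(2018, 11, 5), 300.0),
    ("carol", date(2019, 3, 25), 10.0), ("carol", date(2019, 3, 27), 12.0),
    ("carol", date(2019, 3, 29), 9.0),
]

def rfm_scores(transactions, today):
    """Compute Recency (days since last purchase), Frequency, Monetary per customer."""
    rfm = {}
    for cust, when, amount in transactions:
        r, f, m = rfm.get(cust, (None, 0, 0.0))
        days = (today - when).days
        r = days if r is None else min(r, days)
        rfm[cust] = (r, f + 1, m + amount)
    return rfm

def implicit_rating(r, f, m, r_max=365, f_max=5, m_max=500.0):
    """Fold R, F, M into one 0-1 score usable in place of an explicit rating."""
    recency = max(0.0, 1 - r / r_max)   # more recent purchases score higher
    frequency = min(1.0, f / f_max)
    monetary = min(1.0, m / m_max)
    return round((recency + frequency + monetary) / 3, 3)

scores = rfm_scores(transactions, today=date(2019, 4, 1))
for cust, (r, f, m) in sorted(scores.items()):
    print(cust, implicit_rating(r, f, m))
```

A collaborative filtering model would then consume these implicit ratings in place of the manipulable star ratings, which is the substitution the study evaluates.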

Design and Implementation of Transmission Scheduler for Terrestrial UHD Contents (지상파 UHD 콘텐츠 전송 스케줄러 설계 및 구현)

  • Paik, Jong-Ho;Seo, Minjae;Yu, Kyung-A
    • Journal of Broadcast Engineering / v.24 no.1 / pp.118-131 / 2019
  • To deliver large-capacity 8K UHD content, terrestrial broadcasting systems face various problems, such as limited bandwidth. To solve them, UHD content transmission technology has been actively studied, and an 8K UHD broadcasting system using both the terrestrial broadcasting network and a communication network has been proposed. The proposed technique addresses the limited bandwidth of the terrestrial network by segmenting 8K UHD content into hierarchical layers and transmitting them over heterogeneous networks: the base layer corresponding to FHD and the enhancement layer data for 4K UHD are transmitted over the terrestrial broadcasting network, while the additional enhancement layer data for 8K UHD are transmitted over the communication network. When 8K UHD content is provided this way, users can receive up to 4K UHD broadcasting over terrestrial channels, and up to 8K UHD with the additional communication network. However, to transmit 4K UHD content within the bit rate allocated to domestic terrestrial UHD broadcasting, the compression rate must be increased, so a certain level of image degradation inevitably occurs. Given the nature of UHD content, video quality must be the top priority over other factors and must be guaranteed even within the limited bit rate. This requires packet scheduling in the broadcasting system's content generator. Since the multiplexer sends out the packets received from the content generator in order, it is very important that the transmission time and rate from the content generator to the multiplexer be constant and accurate. Therefore, in this paper we propose a variable transmission scheduler between the content generator and the multiplexer to guarantee a certain level of image quality for UHD content.
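The constant-rate delivery requirement between content generator and multiplexer can be illustrated with a simple pacing calculation: given a target bit rate and packet sizes, each packet gets a fixed transmission timestamp. This is a generic pacing sketch, not the paper's scheduler; the 15 Mbps figure is a hypothetical service bit rate.

```python
def pacing_schedule(packet_sizes_bytes, bitrate_bps):
    """Assign each packet a send time so that the output bit rate stays constant."""
    t, schedule = 0.0, []
    for size in packet_sizes_bytes:
        schedule.append(round(t, 6))
        t += size * 8 / bitrate_bps  # time needed to emit this packet at the target rate
    return schedule

# 188-byte MPEG-TS packets paced at a (hypothetical) 15 Mbps service bit rate.
packets = [188] * 5
times = pacing_schedule(packets, bitrate_bps=15_000_000)
print(times)
```

A variable scheduler of the kind proposed would adjust `bitrate_bps` per interval while keeping the inter-packet spacing inside each interval constant, so the multiplexer always sees a steady, predictable input rate.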