• Title/Summary/Keyword: Language order


The post-epic characteristics in Jan Lauwers' theatre - Isabella's Room, The Lobster Shop and Deer House - (얀 라우어스(Jan Lauwers) 공연의 탈서사적 특징들 -<이사벨라의 방(Isabella's Room)>, <랍스터 가게(The Lobster Shop)>, <사슴의 집(Deer House)>을 중심으로-)

  • Nam, Jisoo
    • Journal of Korean Theatre Studies Association
    • /
    • no.48
    • /
    • pp.447-484
    • /
    • 2012
  • This study aims to analyze the characteristics of post-epic theatre in the Belgian theatre director Jan Lauwers' trilogy "Happy Face/Sad Face": Isabella's Room (2004), The Lobster Shop (2006) and Deer House (2008). I regard this period as a very important juncture at which he created his own theatrical style in comparison with earlier years. From this period on, Lauwers has written original plays in order to concentrate on the stories of our era and has combined a variety of media such as dance, installation, video and singing. In this context, I study his theatricality from the three perspectives of dramaturgy, directing and acting, largely based on the theory of post-epic theatre of Hans-Thies Lehmann, who pointed out the significance of Lauwers' leading theatrical role very early. First, from the dramaturgical point of view, we need to pay attention to the theme of translunary death, in which the living and the dead coexist on the stage. In fact, death is a theme that Lauwers has been struggling to explore for quite a long time. In his trilogy, the dead never exit the stage. The dead, who are not conventional tragic characters, even meddle in the affairs of the living and comment on them. As a consequence, strong dramaturgical tension is reduced, suspense is weakened and, in a way, a kind of humanism is produced. This approach helps to create his unique comical theatrical atmosphere even though he deals with contemporary tragic issues such as war, horror and death. Second, from the directing point of view, it is worth taking a look at the polyphonic strategy of applying various media. Above all, dance and singing in chorus are actively applied in Lauwers' trilogy. Dance is used in an individual and microscopic way; singing, on the other hand, has a collective and macroscopic quality. Dance is the representative medium of Lauwers' simultaneous, microscopic mise-en-scene. While the main plot takes place around centre stage, actors perform a dance around the off-centred stage. Instead of exiting the stage during the performance, the actors continue to dance - sometimes more like movement - around the off-centred stage. This not only describes the narrative, but also shows how each character is engaged in the main plot or incident, and how they look at it as a character. This simultaneous, microscopic mise-en-scene is intended to show various moments of life, amplify certain moments or incidents, reveal characters' emotions, create an illusionary theatrical atmosphere and so on. Meanwhile, singing simple lyrics and tunes is an example of a medium that stimulates the audience's catharsis. As the simple melody lingers in the audience's mind, it ends up delivering a theatrical message or theme after the performance. The message transferred by the singing in chorus functions as a sort of leitmotif that makes an impression on the audience. This not only enriches their emotion but also creates an illusionary effect. Third, from the acting perspective, I would like to point out the "detachment" aesthetic which Lehmann has described. The actors never immerse themselves deeply in the drama, consistently keeping the theatrical illusion recognizable as such. The audience comes to pay attention to their presence through the actors' deliberate gestures, stage business, movement, rhythm, language, dance and so on.
The actors work against forming a closed action by speaking in various languages, by deliberately revealing stage directions or acts, and by creating an expressive mise-en-scene with multiple media. As a consequence, the stage is transformed into not a metaphoric but a metonymic place. These actions are ultimately intended to have a direct effect on the audience. So to speak, Lauwers uses anti-illusionary theatrical methods: scenes of fantastic death, interruptions of singing and dance, speaking many kinds of languages, acting in a detached state and so on. These strategies make cracks in the desire of spectators who wish to construct a linear narrative. I would call this the great potentiality of letting reality penetrate the work and collide with fiction. By doing so, it induces spectators to see the reality within the fiction. As Lehmann says, "when theatre presents itself as a sketch and not as a finished painting, the spectators are given the chance to feel their own presence, to reflect on it, and to contribute to the unfinished character themselves". In this sense the spectators can perform an objective criticism of our society and world in Lauwers' theatre, because there are a number of gaps and cracks in his theatrical illusion through which reality can penetrate. This is also the point at which we can find the artists' responsibility in our era.

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.43-61
    • /
    • 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and research related to AI has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. The field of artificial intelligence has achieved more technological advances than ever, owing to recent interest in the technology and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions by using machine-readable and processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used together with statistical artificial intelligence such as machine learning. More recently, the purpose of the knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. These knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question answering systems of smart speakers. However, building a useful knowledge base is a time-consuming task and still requires a great deal of effort from experts. In recent years, much research and technology in knowledge-based artificial intelligence has used DBpedia, one of the biggest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from the Wikipedia infobox, which presents a user-created summary of some unifying aspect of an article. This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of the accuracy of knowledge, because the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. In order to demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the sentences suitable for extracting triples, and selecting values and transforming them into RDF triples (see the sketch below). The structure of Wikipedia infoboxes is defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these infobox templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that class. Finally, we extract knowledge from the sentences that are classified as appropriate, and convert the knowledge into the form of triples.
In order to train the models, we generated a training data set from a Wikipedia dump using a method that adds BIO tags to sentences, and we trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF for the knowledge extraction process. Through the proposed process, it is possible to utilize structured knowledge extracted from text documents according to the ontology schema. In addition, this methodology can significantly reduce the effort required from experts to construct instances according to the ontology schema.
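The three-step pipeline summarized in this abstract can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: the document classifier, sentence selector, and BIO tags are hard-coded stand-ins for the components the paper trains (comparing CRF and Bi-LSTM-CRF for the sequence labelling), and the names Triple, classify_document, select_sentences, and extract_triples are hypothetical.

    # Minimal sketch of the three-step extraction pipeline (stand-in logic only).
    from dataclasses import dataclass

    @dataclass
    class Triple:
        subject: str
        predicate: str
        value: str

    def classify_document(text: str) -> str:
        """Step 1: assign the document to a DBpedia ontology class
        (stand-in for the infobox-template/ontology-class classifier)."""
        return "dbo:Person" if "born" in text else "dbo:Organisation"

    def select_sentences(text: str, ontology_class: str) -> list[str]:
        """Step 2: keep only sentences likely to express an attribute of the
        predicted class (stand-in for the trained sentence classifier)."""
        return [s.strip() for s in text.split(".") if "born" in s]

    def extract_triples(subject: str, sentence: str) -> list[Triple]:
        """Step 3: decode BIO tags into attribute values and emit RDF-style
        triples. The tags are hard-coded here; the paper obtains them from a
        CRF or Bi-LSTM-CRF sequence labeller."""
        tokens = sentence.split()
        tags = ["O"] * len(tokens)
        if "born" in tokens and tokens.index("born") + 2 < len(tokens):
            tags[tokens.index("born") + 2] = "B-dbo:birthPlace"  # naive span
        triples, prop, value = [], None, []
        for tok, tag in zip(tokens, tags):
            if tag.startswith("B-"):
                prop, value = tag[2:], [tok]
            elif tag.startswith("I-") and prop:
                value.append(tok)
            elif prop:
                triples.append(Triple(subject, prop, " ".join(value)))
                prop, value = None, []
        if prop:
            triples.append(Triple(subject, prop, " ".join(value)))
        return triples

    doc = "Yi Sun-sin was born in Hanseong. He was a Korean admiral."
    ontology_class = classify_document(doc)
    for sent in select_sentences(doc, ontology_class):
        print(extract_triples("dbr:Yi_Sun-sin", sent))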

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.1-19
    • /
    • 2019
  • Artificial intelligence (AI) is one of the main driving forces leading the Fourth Industrial Revolution. The technologies associated with AI have already shown abilities equal to or better than those of people in many fields, including image and speech recognition. In particular, many efforts have been made to identify current technology trends and analyze their development directions, because AI technologies can be utilized in a wide range of fields including medicine, finance, manufacturing, services, and education. Major platforms for developing complex AI algorithms for learning, reasoning, and recognition have been opened to the public as open source projects, and as a result, technologies and services that utilize them have increased rapidly. This has been confirmed as one of the major reasons for the fast development of AI technologies. Additionally, the spread of the technology owes much to open source software developed by major global companies that supports natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify the practical trend of AI technology development by analyzing OSS projects associated with AI, which have been developed through the online collaboration of many parties. This study searched and collected a list of major projects related to AI that were created from 2000 to July 2018 on Github. The development trends of major technologies were examined in detail by applying text mining techniques to the topic information, which indicates the characteristics and technical fields of the collected projects. The results of the analysis showed that the number of software development projects per year was less than 100 until 2013. However, it increased to 229 projects in 2014 and 597 projects in 2015. The number of open source projects related to AI then increased rapidly in 2016 (2,559 OSS projects). The number of projects initiated in 2017 was 14,213, which is almost four times the total number of projects created from 2009 to 2016 (3,555 projects). The number of projects initiated from January to July 2018 was 8,737. The development trend of AI-related technologies was evaluated by dividing the study period into three phases. The appearance frequency of topics indicates the technology trends of AI-related OSS projects. The results showed that natural language processing technology remained at the top in all years, implying that such OSS has been developed continuously. Until 2015, the programming languages Python, C++, and Java were listed among the top ten most frequently appearing topics. After 2016, however, programming languages other than Python disappeared from the top ten topics. Instead, platforms supporting the development of AI algorithms, such as TensorFlow and Keras, show a high appearance frequency. Additionally, reinforcement learning algorithms and convolutional neural networks, which have been used in various fields, appeared frequently as topics. The results of the topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency. The main difference was that the visualization and medical imaging topics were found at the top of the degree centrality list, although they were not at the top of the appearance frequency list from 2009 to 2012. This indicates that OSS was developed in the medical field in order to utilize AI technology.
Moreover, although computer vision was in the top 10 of the appearance frequency list from 2013 to 2015, it was not in the top 10 by degree centrality. The topics at the top of the degree centrality list were otherwise similar to those at the top of the appearance frequency list; only the ranks of convolutional neural networks and reinforcement learning changed slightly. The trend of technology development was examined using the appearance frequency of topics and degree centrality. The results showed that machine learning had the highest frequency and the highest degree centrality in all years. Moreover, it is noteworthy that, although the deep learning topic showed a low frequency and a low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015. In recent years both topics have had high appearance frequency and degree centrality. TensorFlow first appeared during the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, placing it at the top of the lists after deep learning and Python. Computer vision and reinforcement learning did not show an abrupt increase or decrease, and they had relatively low appearance frequency and degree centrality compared with the above-mentioned topics. Based on these analysis results, it is possible to identify the fields in which AI technologies are actively developed. The results of this study can be used as a baseline dataset for more empirical analysis of future technology trends and technology convergence.
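As an illustration of the two measures used throughout this abstract, topic appearance frequency and degree centrality in a topic co-occurrence network, the sketch below computes both from a few made-up project topic lists; the data and variable names are illustrative, not the study's.

    # Sketch: topic appearance frequency and degree centrality (illustrative data).
    from collections import Counter
    from itertools import combinations
    import networkx as nx

    projects = [
        ["machine-learning", "deep-learning", "tensorflow", "python"],
        ["machine-learning", "natural-language-processing", "python"],
        ["deep-learning", "computer-vision", "keras"],
        ["reinforcement-learning", "deep-learning", "tensorflow"],
    ]

    # Appearance frequency: how often each topic label occurs across projects
    freq = Counter(topic for topics in projects for topic in topics)
    print(freq.most_common(5))

    # Co-occurrence network: topics are nodes; an edge links two topics that
    # appear in the same project, weighted by the number of co-occurrences.
    G = nx.Graph()
    for topics in projects:
        for a, b in combinations(sorted(set(topics)), 2):
            weight = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
            G.add_edge(a, b, weight=weight)

    # Degree centrality ranks the most connected topics
    centrality = nx.degree_centrality(G)
    print(sorted(centrality.items(), key=lambda kv: -kv[1])[:5])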

Construction of Event Networks from Large News Data Using Text Mining Techniques (텍스트 마이닝 기법을 적용한 뉴스 데이터에서의 사건 네트워크 구축)

  • Lee, Minchul;Kim, Hea-Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.183-203
    • /
    • 2018
  • News articles are the most suitable medium for examining events occurring at home and abroad. In particular, as the development of information and communication technology has brought about various kinds of online news media, news about events occurring in society has increased greatly. Automatically summarizing key events from massive amounts of news data will therefore help users grasp many events at a glance. In addition, if we build and provide an event network based on the relevance of events, it will greatly help readers understand current events. In this study, we propose a method for extracting event networks from large news text data. To this end, we first collected Korean political and social articles from March 2016 to March 2017 and, through preprocessing with NPMI and Word2Vec, kept only meaningful words and integrated synonyms. Latent Dirichlet allocation (LDA) topic modeling was used to calculate the topic distribution by date, find the peaks of each distribution, and detect events. A total of 32 topics were extracted from the topic modeling, and the point of occurrence of each event was deduced by looking at the point at which the topic's distribution surged. As a result, a total of 85 events were detected, and a final set of 16 events was filtered and presented using Gaussian smoothing. To construct the event network, we calculated a relevance score between the detected events using the cosine coefficient between co-occurring events and connected related events. Finally, we set up the event network by mapping each event to a vertex and the relevance scores between events to the edges connecting those vertices. The event network constructed with our method made it possible to sort the major political and social events in Korea over the past year in chronological order and, at the same time, to identify which events are related to one another. Our approach differs from existing event detection methods in that LDA topic modeling makes it possible to easily analyze large amounts of data and to identify the relevance of events that were difficult to detect with existing event detection. We applied various text mining techniques and Word2Vec in the text preprocessing to improve the accuracy of extracting proper nouns and compound nouns, which has been difficult in existing Korean text analysis. The event detection and network construction techniques of this study have the following advantages in practical application. First, LDA topic modeling, which is unsupervised learning, can easily analyze topics, topic words and their distributions from huge amounts of data, and by using the date information of the collected news articles, the distribution by topic can be expressed as a time series. Second, by calculating relevance scores and constructing the event network from the co-occurrence of topics, we can present the connections between events in a summarized form, something that is difficult to grasp with existing event detection. This can be seen from the fact that the relevance-based event network proposed in this study was actually constructed in order of occurrence time. Through the event network, it is also possible to identify the event that served as the starting point of a series of events.
A limitation of this study is that LDA topic modeling produces different results depending on the initial parameters and the number of topics, and the topic and event names in the analysis results must be assigned by the researcher's subjective judgment. Also, since each topic is assumed to be exclusive and independent, the relevance between topics is not taken into account. Subsequent studies need to calculate the relevance between events that were not covered in this study or that belong to the same topic.
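The event-detection and network-construction steps described above can be sketched as follows. The date-by-topic matrix is randomly generated here as a stand-in for the proportions the paper derives from LDA over daily news, and the seven-day co-occurrence window and the peak threshold are assumptions made for illustration only.

    # Sketch: detect topic "events" as peaks in smoothed daily topic shares,
    # then connect nearby events by cosine similarity (illustrative data).
    import numpy as np
    import networkx as nx
    from scipy.ndimage import gaussian_filter1d
    from scipy.signal import find_peaks

    rng = np.random.default_rng(0)
    n_days, n_topics = 365, 32
    theta = rng.dirichlet(np.ones(n_topics), size=n_days)  # daily topic shares

    events = []                                            # (topic, peak day)
    for k in range(n_topics):
        smooth = gaussian_filter1d(theta[:, k], sigma=3)   # Gaussian smoothing
        peaks, _ = find_peaks(smooth, height=smooth.mean() + 2 * smooth.std())
        events += [(k, int(d)) for d in peaks]

    # Event network: vertices are events, edges carry the cosine similarity
    # of the daily topic profiles of events that co-occur within a week.
    G = nx.Graph()
    for i, (ki, di) in enumerate(events):
        for kj, dj in events[i + 1:]:
            if abs(di - dj) <= 7:
                a, b = theta[di], theta[dj]
                cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
                G.add_edge((ki, di), (kj, dj), weight=cos)

    print(len(events), "events detected;", G.number_of_edges(), "relations")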

The Effect of Corporate SNS Marketing on User Behavior: Focusing on Facebook Fan Page Analytics (기업의 SNS 마케팅 활동이 이용자 행동에 미치는 영향: 페이스북 팬페이지 애널리틱스를 중심으로)

  • Jeon, Hyeong-Jun;Seo, Bong-Goon;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.75-95
    • /
    • 2020
  • With the growth of social networks, various forms of SNS have emerged. Driven by various motivations for use, such as interactivity, information exchange, and entertainment, the number of SNS users is also growing fast. Facebook is the main SNS channel, and companies have started using Facebook pages as a public relations channel. To this end, in the early stages of operation companies began to secure large numbers of fans, and as a result the number of corporate Facebook fans has recently grown into the millions. From a corporate perspective, Facebook is attracting attention because it makes it easier to reach the customers a company wants. Facebook provides an efficient advertising platform based on the vast amount of data it holds. Advertising can be targeted using users' demographic characteristics, behavior, or contact information. It is optimized for advertisements that expose information to a desired target, so that results can be obtained more effectively, and it lets companies rethink and communicate their brand image to customers through content. This study was conducted with Facebook advertising data and could be of great help to practitioners working in the online advertising industry. For this reason, the independent variables used in the research were selected based on the characteristics of content that actual businesses are concerned with. Recently, the goal of operating a corporate Facebook page has gone beyond securing fans, toward branding that promotes the company's brand and, further, communicating with major customers. The main figures for this assessment are Facebook 'likes', 'comments', 'shares', and 'clicks', which are the dependent variables of this study. In order to measure outcomes, consumer responses are set as measurable key performance indicators (KPIs), and a strategy is set and executed to achieve them. Here, the KPIs draw on Facebook's advertising figures such as 'reach', 'impressions', 'likes', 'shares', 'comments', 'clicks', and 'CPC', depending on the situation. In order to achieve these figures, content production must be considered first, and in this study the independent variables were organized into three considerations for content production. The effects of content material, content structure, and message style on Facebook user behavior were analyzed using regression analysis (a sketch of this setup follows the abstract). Content material concerns the content's difficulty, company relevance, and daily-life involvement. According to existing research, how the content attracts users' interest is very important. Content can be divided into informative content and interesting content. Informative content is content related to the brand, where information exchange with users is important. Interesting content is defined as posts, such as interesting movies or anecdotes, that are not related to the brand. Based on this, this study started from the assumption that difficulty, company relevance, and daily-life involvement have an effect on the dependent variables. In addition, previous studies have found that content type affects Facebook user activity, and we assume this depends on the combination of photos and text used in the content. Accordingly, the use of actual photos and hashtags was also examined as independent variables. Finally, we focused on the advertising message.
Previous studies found that the effect of advertising messages on users differed depending on whether they were narrative or non-narrative, and, furthermore, that the influence of message intimacy differed. In this study, we examined whether Facebook users' behavior differs depending on the language and formality of the message. As dependent variables, 'likes' and 'total clicks' were set from every user action on the content. We defined each independent variable based on the existing research literature and analyzed its effect on the dependent variables. We found that factors such as 'self-association', 'real-life use', 'material difficulty', 'real-life involvement' and the 'scale x difficulty' interaction had significant effects on 'likes', and that variables such as 'self-connection', 'real-life involvement' and the 'formality x attention' interaction had significant effects on 'total clicks'. Through these results, it is expected that presenting a content strategy optimized for the purpose of the content can contribute to the operation and production strategies of corporate Facebook page operators and content producers.
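As referenced above, the regression setup (content-material, content-structure, and message-style variables regressed on engagement counts) can be sketched as follows; the variable names and the synthetic data are illustrative stand-ins, not the paper's Facebook page analytics.

    # Sketch: OLS regression of an engagement metric on content features
    # (synthetic, illustrative data only).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 200
    df = pd.DataFrame({
        "difficulty": rng.integers(1, 6, n),          # content-material features
        "brand_relevance": rng.integers(1, 6, n),
        "daily_involvement": rng.integers(1, 6, n),
        "has_photo": rng.integers(0, 2, n),           # content-structure feature
        "narrative": rng.integers(0, 2, n),           # message-style feature
    })
    # Synthetic dependent variable standing in for per-post 'likes'
    df["likes"] = (5 + 2 * df["daily_involvement"] + 3 * df["has_photo"]
                   + rng.normal(0, 2, n)).round()

    model = smf.ols("likes ~ difficulty + brand_relevance + daily_involvement"
                    " + has_photo + narrative", data=df).fit()
    print(model.summary())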

A Study on Modernization of International Conventions Relating to Aviation Security and Implementation of National Legislation (항공보안 관련 국제협약의 현대화와 국내입법의 이행 연구)

  • Lee, Kang-Bin
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.30 no.2
    • /
    • pp.201-248
    • /
    • 2015
  • In Korea, the number of acts of unlawful interference on board aircraft has increased continuously with the growth of aviation demand: there were 55 incidents in 2000, 354 incidents in 2014, and an average of 211 incidents a year over the past five years. In 1963, a number of states adopted the Convention on Offences and Certain Other Acts Committed on Board Aircraft (the Tokyo Convention 1963) as the first worldwide international legal instrument on aviation security. The Tokyo Convention took effect in 1969 and, shortly afterward, the Convention for the Suppression of Unlawful Seizure of Aircraft (the Hague Convention 1970) was adopted in 1970, and the Convention for the Suppression of Unlawful Acts Against the Safety of Civil Aviation (the Montreal Convention 1971) was adopted in 1971. After the 9/11 attacks in 2001, the Convention on the Suppression of Unlawful Acts Relating to International Civil Aviation (the Beijing Convention 2010) was adopted in 2010 to amend and supplement the Montreal Convention 1971, and the Protocol Supplementary to the Convention for the Suppression of Unlawful Seizure of Aircraft (the Beijing Protocol 2010) was adopted in 2010 to supplement the Hague Convention 1970. Since then, in response to cases of unruly behavior on board aircraft which have escalated in both severity and frequency, the Montreal Protocol, seen as an amendment to the Tokyo Convention 1963, was adopted in 2014. Korea has ratified the Tokyo Convention 1963, the Hague Convention 1970, the Montreal Convention 1971, the Montreal Supplementary Protocol 1988, and the Convention on the Marking of Plastic Explosives 1991, all of which have proven to be effective. Under the Tokyo Convention, ratified in 1970, Korea enacted the Aircraft Navigation Safety Act in 1974 and then the Aviation Safety and Security Act, which replaced the Aircraft Navigation Safety Act in August 2002. The title of the Aviation Safety and Security Act was changed to the Aviation Security Act in April 2014. The Aviation Security Act is essentially implementing legislation for the Tokyo Convention and the Hague Convention, and its language is generally broader than the unruly and disruptive behavior covered in Sections 1-3 of the model legislation in ICAO Circular 288. The Aviation Security Act reflects considerable parts of the national legislation to be implemented under the Beijing Convention and Beijing Protocol 2010 and the Montreal Protocol 2014, which are the modernized international conventions relating to aviation security. However, when these international conventions come into effect in the future and Korea ratifies them, the provisions that should be amended or newly added to the Aviation Security Act are as follows: jurisdiction; the definition of 'in flight'; immunity from actions against the aircraft commander and others; compulsory delivery of the offender by the aircraft commander; strengthened penalties for offenders; wider application to accomplices; and observance of the international conventions. Among these, the Korean legislation is notably silent on the scope of jurisdiction.
Therefore, in order for jurisdiction to be extended to extra-territorial cases of unruly and disruptive offences, it is desirable that either the Aviation Security Act or the general Criminal Code be revised. In conclusion, in order to meet intelligent and diverse aviation threats, the Korean government should closely review the contents of the international conventions relating to aviation security and the current status of their ratification by each state, and should make efforts to improve the legislation relating to aviation security and the aviation security system for the ratification of the international conventions and the implementation of national legislation under them.

The Audience Behavior-based Emotion Prediction Model for Personalized Service (고객 맞춤형 서비스를 위한 관객 행동 기반 감정예측모형)

  • Ryoo, Eun Chung;Ahn, Hyunchul;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.73-85
    • /
    • 2013
  • In today's information society, the importance of knowledge services that use information to create value is growing day by day. In addition, with the development of IT, it has become easy to collect and use information, and many companies in a variety of industries actively use customer information for marketing. In the 21st century, companies have actively used culture and the arts to manage their corporate image and for marketing closely linked to their commercial interests. However, it is difficult for companies to attract or maintain consumers' interest through technology alone. For that reason, many firms have turned to cultural activities as a tool of differentiation, and have used customer experience as a new marketing strategy in order to respond effectively to a competitive market. Accordingly, the need for personalized services that provide people with a new experience based on personal profile information containing the characteristics of the individual is emerging rapidly. Personalized services using customers' individual profile information, such as language, symbols, behavior, and emotions, are therefore very important today. Through them, we can judge the interaction between people and content and maximize customers' experience and satisfaction. There are various related works on providing customer-centered services; in particular, emotion recognition research has emerged recently. Existing research has mostly performed emotion recognition using bio-signals, and most studies focus on voice and facial expressions, which show great emotional change. However, there are several difficulties in predicting people's emotions, caused by the limitations of equipment and service environments. So, in this paper, we develop an emotion prediction model based on a vision-based interface to overcome these limitations. Emotion recognition based on people's gestures and postures has been studied by several researchers. This paper develops a model that recognizes people's emotional states through body gestures and postures using the difference image method, and finds an optimal, validated model for predicting four kinds of emotions. The proposed model aims to automatically determine and predict four human emotions (sadness, surprise, joy, and disgust). To build the model, an event booth was installed in KOCCA's lobby, and we showed appropriate stimulus films to collect visitors' body gestures and postures as their emotions changed. We then extracted body movements using the difference image method and refined the data to build the proposed model with a neural network. The proposed model for emotion prediction used three types of time-frame sets (20, 30, and 40 frames), and we adopted the model with the best performance compared with the others. Before building the three kinds of models, the entire set of 97 data samples was divided into learning, test, and validation sets. The proposed model for emotion prediction was constructed using an artificial neural network. We used the back-propagation algorithm as the learning method and set the learning rate to 10% and the momentum rate to 10%. The sigmoid function was used as the transfer function, and we designed a three-layer perceptron neural network with one hidden layer and four output nodes. Based on the test data set, learning was stopped when it reached 50,000 iterations after reaching the minimum error, in order to explore the best stopping point for learning.
We finally computed each model's accuracy and found the best model for predicting each emotion. The results showed prediction accuracies of 100% for sadness and 96% for joy in the 20-frame model, and 88% for surprise and 98% for disgust in the 30-frame model. The findings of our research are expected to provide an effective algorithm for personalized services in various industries such as advertising, exhibitions, and performances.
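A rough sketch of the modelling setup described above is given below: difference images between consecutive frames are pooled into movement features, and a one-hidden-layer perceptron with sigmoid units, a 10% learning rate, and 10% momentum predicts one of the four emotions. The frames, labels, and hidden-layer size are random or assumed stand-ins, not the study's data or exact configuration.

    # Sketch: difference-image movement features + a small sigmoid MLP
    # (random stand-in data; not the study's dataset or exact network).
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    EMOTIONS = ["sadness", "surprise", "joy", "disgust"]

    def movement_features(frames: np.ndarray) -> np.ndarray:
        """Difference-image method: absolute pixel differences between
        consecutive frames, pooled into a per-frame movement value."""
        diffs = np.abs(np.diff(frames.astype(float), axis=0))   # (T-1, H, W)
        return diffs.reshape(diffs.shape[0], -1).mean(axis=1)   # mean motion

    # 97 clips of 20 frames each (the paper also tests 30- and 40-frame sets)
    clips = rng.integers(0, 256, size=(97, 20, 32, 32))
    X = np.stack([movement_features(c) for c in clips])         # (97, 19)
    y = rng.integers(0, 4, size=97)                              # emotion labels

    # One hidden layer, four output classes, sigmoid activation,
    # back-propagation (SGD) with learning rate 0.1 and momentum 0.1
    clf = MLPClassifier(hidden_layer_sizes=(16,), activation="logistic",
                        solver="sgd", learning_rate_init=0.1, momentum=0.1,
                        max_iter=2000, random_state=0)
    clf.fit(X, y)
    print([EMOTIONS[i] for i in clf.predict(X[:5])])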

A Comparison of the Teachers' Awareness about the Importance of Job Performing Abilities from National Competency Standards (국가직무능력표준(NCS)의 보육교사 직무수행능력에 대한 운영주체에 따른 보육교사의 중요도 인식 비교)

  • Jun, Yun Sook
    • Korean Journal of Child Education & Care
    • /
    • v.17 no.4
    • /
    • pp.1-30
    • /
    • 2017
  • The purpose of this study is to compare daycare teachers' awareness of the importance of the job performance abilities in the National Competency Standards (NCS) according to the operating body of the daycare center. For this purpose, teachers from 35 public and private daycare centers, home daycare centers, and corporation daycare centers in Busan Metropolitan City were randomly sampled. We administered a questionnaire consisting of 230 sub-elements based on 45 sub-competency unit elements extracted from the 12 areas of job ability in the National Competency Standards. In total, 117 final questionnaires were collected: from 28 national daycare teachers, 30 private daycare teachers, 30 home daycare teachers, and 29 corporation daycare teachers. As a result, there were differences in the awareness of importance among the four groups in all 12 job performance units. For the 12 job performance units, importance was rated highest by corporation daycare teachers, followed by national, home, and private daycare teachers. For "Establishment of Child Care Center Management Policy", "Construction of Child Care Service", "Child Care Activity Management", "Infant and Child Play Guidance", "Body and Art Activity Guidance", and "Language and Nutrition Guidance", there was no difference in the awareness of importance between corporation and national daycare teachers, and no difference between national and home daycare teachers; however, there was a difference between corporation and home daycare teachers, and also between home and private daycare teachers. In the remaining five job competence units, "Child Care Assessment", "Support for Infant and Toddler Development", "Cooperation with the Home and Community", "Child Care Management", and "Child Care Research", there was no statistically significant difference in the awareness of importance between corporation and national daycare teachers, but both differed significantly from private daycare teachers. This tendency was also consistently observed across the 45 sub-competency unit elements.

Analysis of Vision Statements in 6th Community Health Plan of Local Government in Korea (우리나라 시·군·구 지역보건의료계획의 비전(Vision) 문구 분석)

  • Ahn, Chi-Young;Kim, Hyun-Soo;Kim, Won-bin;Oh, Chang-hoon;Hong, Jee-Young;Kim, Eun-Young;Lee, Moo-Sik
    • Journal of agricultural medicine and community health
    • /
    • v.42 no.1
    • /
    • pp.1-12
    • /
    • 2017
  • Objectives: In this study, we analyzed the vision statements of the 6th community health plans of local governments in Korea. Methods: We examined the vision statement text, the missions and strategic plans, and the long-term missions of the 6th community health plans of 229 local governments in Korea. We analyzed the number of letters in each vision statement, the sentence form, and word frequency using frequency analysis, the chi-square test, and one-way ANOVA. Results: Among the 229 local governments, 172 (75.1%) had vision statements of fewer than 17 Korean letters, and there were significant differences according to the type of community health center (p<0.05). Figuration (37.1%) was the most frequently used form of expression in vision statement sentences, and special characters (43.2%) were the characters most frequently used other than Korean. The most commonly used words in vision statements, in order of frequency, were 'health', 'happiness', 'with', 'citizen', 'city', '100 years old', etc. The Chungcheong provinces and Daejeon metropolitan city had the highest scores in directionality in the phrase evaluation, and there were significant differences according to the regional class of local government (p<0.01). The Gyeongsang provinces and the Ulsan, Daegu, and Busan metropolitan cities had the highest scores in future orientation and sharing possibility in the phrase evaluation, and there were significant differences according to the regional class of local government (p<0.01). Conclusions: The vision is one of the most important components of a community health plan. A more detailed 'vision statement guideline' is needed, and the community health centers of local governments should make efforts to state their visions more clearly and completely.
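For reference, the two tests named in the Methods (the chi-square test and one-way ANOVA) can be run as in the short sketch below; the counts, group labels, and scores are illustrative, not the study's data.

    # Sketch: chi-square test and one-way ANOVA (illustrative numbers only).
    import numpy as np
    from scipy import stats

    # Chi-square: vision statements of <=17 vs >17 Korean letters,
    # cross-tabulated by a hypothetical health-center type
    table = np.array([[60, 20],    # type A: short, long
                      [70, 25],    # type B
                      [42, 12]])   # type C
    chi2, p, dof, expected = stats.chi2_contingency(table)
    print(f"chi2={chi2:.2f}, p={p:.3f}")

    # One-way ANOVA: phrase-evaluation scores (e.g. directionality) by region
    region_a = [4.1, 3.8, 4.5, 4.0, 3.9]
    region_b = [3.2, 3.5, 3.1, 3.6, 3.3]
    region_c = [3.9, 4.2, 3.7, 4.0, 4.1]
    f_stat, p = stats.f_oneway(region_a, region_b, region_c)
    print(f"F={f_stat:.2f}, p={p:.3f}")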

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.1-17
    • /
    • 2017
  • A deep learning framework is software designed to help develop deep learning models. Some of its important functions include "automatic differentiation" and "utilization of GPUs". The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal), and recently Microsoft's deep learning framework, Microsoft Cognitive Toolkit (CNTK), was released under an open-source license, following Google's Tensorflow a year earlier. The early deep learning frameworks were developed mainly for research at universities. Beginning with the inception of Tensorflow, however, companies such as Microsoft and Facebook have started to join the competition in framework development. Given this trend, Google and other companies are expected to continue investing in deep learning frameworks in order to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three frameworks that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is a sort of predecessor of the other two. The most common and important function of deep learning frameworks is the ability to perform automatic differentiation. Basically, all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained, and with these partial derivatives the software can compute the derivative of any node with respect to any variable by utilizing the chain rule of calculus (see the toy example below). First of all, the convenience of coding is in the order CNTK, Tensorflow, Theano. The criterion is simply based on the length of the code; the learning curve and the ease of coding are not the main concern. According to this criterion, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. The reason that CNTK and Tensorflow are easier to implement with is that those frameworks provide more abstraction than Theano. We need to mention, however, that low-level coding is not always bad: it gives us coding flexibility. With low-level coding such as in Theano, we can implement and test any new deep learning models or any new search methods that we can think of. Our assessment of the execution speed of each framework is that there is no meaningful difference. According to the experiment, the execution speeds of Theano and Tensorflow are very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept the same: the code written in CNTK had to be run in a PC environment without a GPU, where code executes as much as 50 times slower than with a GPU. We concluded, however, that the difference in execution speed was within the range of variation caused by the different hardware setups. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate them. Some of the important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNN, RNN, and DBN.
If a user implements a large-scale deep learning model, support for multiple GPUs or multiple servers will also be important. And for someone learning to build deep learning models, whether there are enough examples and references also matters.
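Since automatic differentiation over a computational graph is the shared core of the three frameworks compared here, the toy sketch below shows the idea in plain Python: each node stores its forward value and the local partial derivative toward each parent, and the chain rule propagates gradients backwards through the graph. It is illustrative only and assumes each node is used once (a tree-shaped graph); Theano, Tensorflow, and CNTK build and differentiate arbitrary graphs for you.

    # Toy reverse-mode automatic differentiation over a computational graph.
    # Illustrative only; assumes each node appears once (tree-shaped graph).
    import math

    class Node:
        def __init__(self, value, parents=()):
            self.value = value        # forward value at this graph node
            self.parents = parents    # (parent node, local partial derivative)
            self.grad = 0.0

        def __add__(self, other):
            return Node(self.value + other.value, [(self, 1.0), (other, 1.0)])

        def __mul__(self, other):
            return Node(self.value * other.value,
                        [(self, other.value), (other, self.value)])

    def sigmoid(x: "Node") -> "Node":
        s = 1.0 / (1.0 + math.exp(-x.value))
        return Node(s, [(x, s * (1.0 - s))])       # local derivative s(1-s)

    def backward(output: "Node") -> None:
        """Chain rule: push the output gradient back along every edge."""
        output.grad = 1.0
        stack = [output]
        while stack:
            node = stack.pop()
            for parent, local_grad in node.parents:
                parent.grad += node.grad * local_grad
                stack.append(parent)

    # y = sigmoid(w * x + b): dy/dw and dy/db fall out of the graph
    w, x, b = Node(0.5), Node(2.0), Node(-1.0)
    y = sigmoid(w * x + b)
    backward(y)
    print(y.value, w.grad, b.grad)                  # 0.5, 0.5, 0.25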