• Title/Summary/Keyword: Keyword mapping

Search results: 42

Emotional Palette: Mapping Affective User Experience Elements Based on Trend (Emotional Palette: Trend에 따른 감성적 사용자 경험 요소 매핑)

  • Jeon, Myoung-Hoon;Lee, Ju-Hwan;Yang, Jung-Min;Heo, U-Beom;Lim, Tae-Hoon;Ahn, Jung-Hee;Kim, Jin
    • Proceedings of the HCI Society of Korea Conference (한국HCI학회 학술대회논문집)
    • /
    • 2008.02b
    • /
    • pp.451-455
    • /
    • 2008
  • Emotional design is becoming increasingly important, yet systematic approaches to integrating user experience elements into product design have rarely been attempted. This study consists of three parts. First, we extracted affective words fitting the design direction based on trend analysis. Second, user experience elements were matched with the affective words. Finally, a prototype system was built to guide the design of affective factors in electronic products. In this study, user experience elements were defined as color, material & finishing, and sound. Through document analysis and trend analysis, trend-analysis experts and user experience designers extracted 31 affective keywords that fully reflect the current trend. After paired comparison of the selected keywords, two sensibility dimensions were obtained by multidimensional scaling; the trend affective keywords could be explained by the dimensions 'human-centered vs. techno-centered' and 'warm vs. cool'. Next, user experience element stimuli were matched with each keyword by users positioning them directly on the two-dimensional affective map. Based on the experimental results, a prototype system was developed for product designers. The results of this study can guide designers in creating emotionally satisfying products.
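
The dimensional-mapping step described in the abstract (paired comparisons of affective keywords reduced to a two-dimensional affective map) can be sketched with off-the-shelf multidimensional scaling. The keywords and dissimilarity values below are illustrative placeholders, not the 31 keywords or judgments collected in the study.

```python
# Sketch: paired-comparison dissimilarities between affective keywords
# reduced to a 2-D affective map with multidimensional scaling (MDS).
# Keywords and dissimilarity values are illustrative, not study data.
import numpy as np
from sklearn.manifold import MDS

keywords = ["warm", "cool", "human-centered", "techno-centered"]

# Symmetric dissimilarity matrix (0 = identical, 1 = maximally different),
# e.g. aggregated from paired-comparison judgments.
dissim = np.array([
    [0.0, 0.9, 0.3, 0.8],
    [0.9, 0.0, 0.7, 0.2],
    [0.3, 0.7, 0.0, 0.9],
    [0.8, 0.2, 0.9, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)  # one (x, y) position per keyword

for word, (x, y) in zip(keywords, coords):
    print(f"{word:16s} ({x:+.2f}, {y:+.2f})")
```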


A Method for Utilizing Social Network Analysis in the Evaluation of National R&D Programs (국가연구개발사업 평가에서 사회연결망 분석 활용 방안)

  • Gi, Ji-Hun
    • Proceedings of the Korea Technology Innovation Society Conference
    • /
    • 2017.11a
    • /
    • pp.129-129
    • /
    • 2017
  • In planning and evaluating government R&D programs, one of the first steps is to understand the government's current R&D investment portfolio, i.e., which fields or topics the government is currently investing in. Traditional methods for analyzing the government R&D investment portfolio tend to rely on keyword searches or ad hoc two-dimensional classifications. The main drawback of these approaches is their limited ability to account for the characteristics of the government's R&D investment as a whole and the role of each individual R&D program within it, which tends to depend on its relationships with other programs. This paper suggests a new method for mapping and analyzing government R&D investment using a combination of natural language processing (NLP) and network analysis. NLP enables us to build a network of government R&D programs whose links are defined by similarity of R&D topics. Network-analysis methods then reveal the characteristics of government R&D investment, including major investment fields, unexplored topics, and key R&D programs that act as hubs or bridges in the program network, which are difficult to identify with conventional methods. These insights can be used in planning a new R&D program, in reviewing its proposal, or in evaluating the performance of R&D programs. The (filtered) Korean text corpus used consists of hundreds of R&D program descriptions in the budget requests for fiscal year 2017 submitted by government departments to the Korean Ministry of Strategy and Finance.
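
A minimal sketch of the pipeline the abstract outlines (program descriptions → topical similarity network → hub/bridge detection), assuming scikit-learn for text similarity and networkx for the network measures; the program descriptions and the similarity threshold are invented for illustration, not the paper's corpus or parameters.

```python
# Sketch: build a topical-similarity network of R&D program descriptions
# and score hub/bridge roles with centrality measures.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

programs = {
    "P1": "basic research in artificial intelligence and machine learning",
    "P2": "applied machine learning for manufacturing process control",
    "P3": "disaster response communication network infrastructure",
    "P4": "public safety and disaster prediction using sensor networks",
}

names = list(programs)
tfidf = TfidfVectorizer().fit_transform(programs.values())
sim = cosine_similarity(tfidf)

G = nx.Graph()
G.add_nodes_from(names)
THRESHOLD = 0.1  # illustrative cut-off for "topically related"
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if sim[i, j] >= THRESHOLD:
            G.add_edge(names[i], names[j], weight=float(sim[i, j]))

# Degree centrality ~ "hub" programs; betweenness centrality ~ "bridge" programs.
print("hubs:   ", nx.degree_centrality(G))
print("bridges:", nx.betweenness_centrality(G))
```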


Machine Learning Process for the Prediction of the IT Asset Fault Recovery (IT자산 장애처리의 사전 예측을 위한 기계학습 프로세스)

  • Moon, Young-Joon;Rhew, Sung-Yul;Choi, Il-Woo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.4
    • /
    • pp.281-290
    • /
    • 2013
  • IT assets are core components that support an organization's management objectives, so the rapid resolution of IT asset faults is very important. This study proposes a fault recovery prediction technique that uses existing fault data to address IT asset faults. The proposed technique works as follows: first, existing fault recovery data are pre-processed and classified by fault recovery type; second, a rule is established for keyword mapping between the classified fault recovery types and the reported data; and third, a machine learning process is presented that predicts the fault recovery method based on the established rule. To verify the effectiveness of the proposed machine learning process, approximately 33,000 computer fault records collected at Company A over six months were tested. The hit rate of fault recovery prediction was approximately 72%, and it increased to 81% through continuous machine learning.
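
A toy sketch of the prediction idea: free-text fault reports are mapped to a recovery type learned from past tickets. The tickets, labels, and the bag-of-words naive Bayes model are illustrative assumptions, not the keyword-mapping rules or learning process the paper actually used.

```python
# Sketch: predict a recovery type for a new fault report from past tickets.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

past_faults = [
    "monitor shows no signal after boot",
    "disk full, application cannot save files",
    "network cable unplugged, no internet access",
    "blue screen after driver update",
]
recovery_types = ["check_monitor_cable", "clean_disk", "reconnect_network", "rollback_driver"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(past_faults, recovery_types)

new_report = "user reports disk is full and cannot save documents"
print(model.predict([new_report])[0])  # expected: clean_disk
```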

Development of Extracting System for Meaning·Subject Related Social Topic using Deep Learning (딥러닝을 통한 의미·주제 연관성 기반의 소셜 토픽 추출 시스템 개발)

  • Cho, Eunsook;Min, Soyeon;Kim, Sehoon;Kim, Bonggil
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.14 no.4
    • /
    • pp.35-45
    • /
    • 2018
  • Users share many kinds of content, such as text, images, and video, on social networking services, and social media content contains a variety of information, including personal interests, opinions, and relationships. Accordingly, many recommendation and search systems are being developed through the analysis of social media content. To extract subject-related topics from the social context collected from social media channels, such systems need ontologies for semantic analysis. However, it is difficult to build a formal ontology because social media content is informal. We therefore develop a social topic extraction system based on semantic and subject correlation. First, the semantic-relationship component analyzes semantic correlation and extracts topics that express the semantic information of the corresponding social context. Because a formal ontology can never fully express the semantic information of every domain, we design a self-extensible ontology architecture for semantic correlation. A classifier of social content and feedback then groups content and feedback on the same subject so that social topics can be extracted according to semantic correlation. Analyzing the content and feedback yields subject keywords and an index built by measuring the degree of association based on the topics' semantic correlation. Deep learning is applied in the indexing step to improve the accuracy and performance of subject extraction and semantic-correlation mapping. We expect the proposed system to provide customized content for users as well as better search results, because it analyzes both semantic and subject correlation.
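
A toy sketch of the topic-indexing idea: candidate topic keywords are scored by their association with a post and ranked. TF-IDF cosine similarity stands in here for the learned semantic model the paper describes; the post and candidate topics are invented for illustration.

```python
# Sketch: rank candidate topic keywords by association with a social post.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

post = "tried the new seaside cafe, great espresso and ocean view"
candidate_topics = ["coffee shops", "ocean travel", "smartphone review", "espresso drinks"]

vec = TfidfVectorizer().fit([post] + candidate_topics)
post_vec = vec.transform([post])
topic_vecs = vec.transform(candidate_topics)

scores = cosine_similarity(post_vec, topic_vecs)[0]
ranked = sorted(zip(candidate_topics, scores), key=lambda t: t[1], reverse=True)
for topic, score in ranked:
    print(f"{score:.2f}  {topic}")
```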

Research Trend of Joint Mobilization Type on Shoulder: A Scoping Review (어깨관절 질환에 대한 관절가동술 유형의 연구 동향 : 주제범위 문헌고찰)

  • Jeong-Woo Lee;Nam-Gi Lee
    • Journal of The Korean Society of Integrative Medicine
    • /
    • v.11 no.3
    • /
    • pp.171-183
    • /
    • 2023
  • Purpose: This study investigated research trends regarding types of joint mobilization among patients with shoulder joint diseases. Methods: A scoping review was conducted according to the five steps outlined by Arksey and O'Malley and PRISMA-ScR. We searched six domestic databases (ScienceON, DBpia, RISS, KMbase, KISS, KCI) and three international databases (CINAHL, PubMed, Cochrane Central) for studies published between 2013 and June 2023. The search terms were 'joint mobilization', 'Kaltenborn', 'Maitland', 'Mulligan', and 'shoulder joint'. Results: A total of 44 studies addressed the topic, and these were divided between quantitative analysis and topic analysis. By publication year, the number of studies in the last five years increased compared with the previous five years, and most were randomized clinical trials. Among shoulder joint diseases, most joint mobilization studies focused on adhesive capsulitis and shoulder impingement syndrome. The Mulligan concept was the most commonly studied type of joint mobilization. The dependent variables included pain, joint function (disability), and muscle function. The visual analog scale was the most commonly used pain measure, followed by the numeric rating scale. For joint function and disability, range of motion was used most often, followed by the Shoulder Pain and Disability Index and the Disabilities of the Arm, Shoulder and Hand questionnaire. For muscle function, variables such as muscle tone, strength, and activity were used. Conclusion: We believe the findings of this scoping review can serve as valuable mapping data for joint mobilization research on shoulder joint diseases. Further studies, including systematic reviews and meta-analyses based on these results, are recommended.

Term Mapping Methodology between Everyday Words and Legal Terms for Law Information Search System (법령정보 검색을 위한 생활용어와 법률용어 간의 대응관계 탐색 방법론)

  • Kim, Ji Hyun;Lee, Jong-Seo;Lee, Myungjin;Kim, Wooju;Hong, June Seok
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.137-152
    • /
    • 2012
  • In the era of Web 2.0, as users create vast amounts of web content themselves (so-called user-created content), the World Wide Web is overflowing with information, and finding meaningful information among countless resources has become the key challenge. Information retrieval is now essential in every field, and various search services have been developed and are widely used to retrieve the information users actually want. Legal information search in particular is an indispensable service: it lets people conveniently find the laws relevant to their current situation and serves as a channel for acquiring legal knowledge. The Office of Legislation in Korea has provided the Korean Law Information portal since 2009 to search legal information such as legislation, administrative rules, and judicial precedents, so people can conveniently find law-related information. However, this service is limited because conventional search engines basically return documents according to whether the query terms appear in them. Despite these efforts, it is therefore very difficult for general users who are unfamiliar with legal terms to retrieve law-related information through simple keyword matching, because there is a huge divergence between everyday words and legal terms, which largely derive from Chinese characters. People generally try to access legal information with everyday words, so they have difficulty getting the results they actually want. In this paper, we propose a term-mapping methodology between everyday words and legal terms for general users who lack background in legal terminology, and we develop a search service that can return law information from everyday-word queries; in other words, our goal is a law information search system with which general users can retrieve legal information using everyday words. First, we exploit the tags of internet blogs, in the spirit of collective intelligence, to discover the mapping relationships between everyday words and legal terms: we collect tags related to an everyday word from blog posts, since people attach non-hierarchical keywords (tags) to describe, classify, and manage their posts. Second, the collected tags are clustered with the K-means cluster analysis method. We then find a mapping between an everyday word and a legal term using an estimation measure that selects the legal term that best matches the everyday word. The selected legal terms are given a definite relationship, and the relations between everyday words and legal terms are described using SKOS, an ontology for describing knowledge related to thesauri, classification schemes, taxonomies, and subject headings. Based on the proposed mapping and search methodologies, when users query with an everyday word, our legal information search system finds the legal term mapped to the query and retrieves law information using the matched legal term. Therefore, users can get exact results even if they have no knowledge of legal terms. As a result of our research, we expect that general users without a professional legal background will be able to retrieve legal information conveniently and efficiently using everyday words.
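
A minimal sketch of the tag-clustering step, assuming character n-gram features and scikit-learn's K-means; the tags, the feature choice, and the number of clusters are illustrative, and the paper's estimation measure and SKOS publishing step are not reproduced here.

```python
# Sketch: cluster tags collected for an everyday word, as a precursor to
# selecting a representative legal term per cluster. Tags are illustrative.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Tags gathered from blog posts mentioning the everyday word "전세" (housing lease).
tags = ["전세금", "임대차", "보증금", "임대차보호법", "집주인", "임차인", "계약갱신"]

# Character n-gram features are a simple way to vectorize short Korean tags.
X = TfidfVectorizer(analyzer="char", ngram_range=(1, 2)).fit_transform(tags)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

for k in range(km.n_clusters):
    members = [t for t, label in zip(tags, km.labels_) if label == k]
    print(f"cluster {k}: {members}")
# A mapping rule would then pick the legal term whose cluster best matches
# the everyday word, per an estimation measure like the one in the paper.
```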

Development of Intelligent Job Classification System based on Job Posting on Job Sites (구인구직사이트의 구인정보 기반 지능형 직무분류체계의 구축)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.123-139
    • /
    • 2019
  • The job classification systems of major job sites differ from site to site and also differ from the job classification of the SQF (Sectoral Qualifications Framework) proposed in the SW field. A new job classification system is therefore needed that SW companies, SW job seekers, and job sites can all understand. The purpose of this study is to establish a standard job classification system that reflects market demand by analyzing the SQF against the job postings of major job sites and the NCS (National Competency Standards). To this end, we conduct association analysis between the occupations of major job sites and derive association rules between the SQF and those occupations. Using these rules, we propose an intelligent job classification system based on data that maps the job classification systems of major job sites to the SQF. First, major job sites are selected to obtain information on the SW labor market's job classification systems. We then identify how to collect job postings from each site and collect the data through open APIs. Focusing on the relationships between the data, we keep only postings that appear on several job sites at the same time and discard the rest. Next, we map the job classification systems across job sites using the association rules derived from the association analysis; after completing this mapping, discussing it with experts, and further mapping the SQF, we finally propose a new job classification system. As a result, more than 30,000 job postings were collected in XML format via the open APIs of WORKNET, JOBKOREA, and saramin, the main job sites in Korea. After filtering to about 900 postings posted simultaneously on multiple sites, 800 association rules were derived by applying the Apriori algorithm, a frequent-pattern mining method. Based on the 800 rules, the classification systems of WORKNET, JOBKOREA, saramin, and the SQF were mapped and organized into first- through fourth-level classifications. In the new job taxonomy, the first primary class (IT consulting, computer systems, networks, and security) consists of three secondary, five tertiary, and five quaternary classifications. The second primary class (databases and system operation) consists of three secondary, three tertiary, and four quaternary classifications. The third primary class (web planning, web programming, web design, and games) consists of four secondary, nine tertiary, and two quaternary classifications. The last primary class (ICT management and computer and communication engineering technology) consists of three secondary and six tertiary classifications. In particular, the new system has a relatively flexible classification depth, unlike existing systems: WORKNET divides jobs into three levels, JOBKOREA into two levels with keyword-based subdivision, and saramin into two levels with keyword-based subdivision. The newly proposed standard accepts some keyword-based jobs and treats some product names as jobs. In the new system, some jobs stop at the second level while others are subdivided down to the fourth level, reflecting the idea that not all jobs can be broken down to the same depth. We also combined rules derived from the collected market data and the association analysis with experts' opinions. The newly proposed job classification system can therefore be regarded as a data-based intelligent classification system that reflects market demand, unlike existing systems. This study is meaningful in that it suggests a new job classification system reflecting market demand by mapping occupations based on data through association analysis rather than relying on the intuition of a few experts. However, the study is limited in that it cannot fully reflect market demand that changes over time, because the data were collected at a single point in time. As market demand changes over time, including seasonal factors and the timing of major corporate recruitment, continuous data monitoring and repeated experiments are needed for more accurate matching. The results of this study can be used to suggest directions for improving the SQF in the SW industry and, building on success in the SW industry, are expected to be transferable to other industries.
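
A stripped-down sketch of the association step: each posting that appears on several sites becomes a "transaction" of site-specific category labels, and frequent label pairs with high confidence suggest cross-site mappings. This is a simplified pair-counting version rather than a full Apriori run, and the postings and thresholds are invented, not the ~900 co-posted ads or 800 rules reported in the paper.

```python
# Sketch: mine cross-site category mapping rules from co-posted job ads.
from itertools import combinations
from collections import Counter

# Each transaction: category labels one posting received on different sites.
transactions = [
    {"worknet:web_dev", "jobkorea:frontend", "saramin:web_programming"},
    {"worknet:web_dev", "jobkorea:frontend", "saramin:web_design"},
    {"worknet:db_admin", "jobkorea:dba", "saramin:system_operation"},
    {"worknet:db_admin", "jobkorea:dba", "saramin:system_operation"},
]

item_counts = Counter(item for t in transactions for item in t)
pair_counts = Counter()
for t in transactions:
    for a, b in combinations(sorted(t), 2):
        pair_counts[(a, b)] += 1

n = len(transactions)
MIN_SUPPORT, MIN_CONFIDENCE = 0.4, 0.8  # illustrative thresholds
for (a, b), count in pair_counts.items():
    support = count / n
    if support < MIN_SUPPORT:
        continue
    # Rule a -> b maps one site's category to another's when confidence is high.
    confidence = count / item_counts[a]
    if confidence >= MIN_CONFIDENCE:
        print(f"{a} -> {b}  support={support:.2f} confidence={confidence:.2f}")
```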

An Efficient Frequent Melody Indexing Method to Improve Performance of Query-By-Humming System (허밍 질의 처리 시스템의 성능 향상을 위한 효율적인 빈번 멜로디 인덱싱 방법)

  • You, Jin-Hee;Park, Sang-Hyun
    • Journal of KIISE:Databases
    • /
    • v.34 no.4
    • /
    • pp.283-303
    • /
    • 2007
  • Recently, efficient ways to store and retrieve enormous amounts of music data have become one of the important issues in multimedia databases. The most common MIR (Music Information Retrieval) method is a text-based approach that uses textual information to find the desired music. However, if users do not remember a keyword for the music, such systems cannot return correct answers. Moreover, because these systems are implemented only for exact matching between the query and the music data, they cannot discover similar music and are therefore inappropriate for similarity matching of music data. To solve this problem, we propose an Efficient Query-By-Humming System (EQBHS) with a content-based indexing method that efficiently stores and retrieves music when a user queries with imprecise humming. To accelerate query processing in EQBHS, we design indices for significant melodies, namely 1) frequent melodies that occur many times within a single piece, on the assumption that users hum what they can easily remember, and 2) melodies partitioned by rests. In addition, we propose an error-tolerant mapping method from notes to characters to make searching efficient, together with a frequent melody extraction algorithm. We verified the assumption about frequent melodies by means of a questionnaire and compared the performance of the proposed EQBHS with an N-gram approach in various experiments on a number of music data.
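
A toy sketch of the note-to-character mapping and n-gram matching idea: a melody is reduced to a coarse contour string (so small humming errors tend to map to the same character), and candidate melodies are ranked by shared n-grams. The contour buckets and n-gram size are illustrative choices, not the paper's error-tolerant mapping.

```python
# Sketch: contour-string mapping and n-gram overlap for humming queries.
def contour_string(pitches):
    """Map successive pitch intervals to coarse symbols: U(p), D(own), R(epeat)."""
    out = []
    for prev, cur in zip(pitches, pitches[1:]):
        diff = cur - prev
        out.append("R" if abs(diff) <= 1 else ("U" if diff > 0 else "D"))
    return "".join(out)

def ngrams(s, n=3):
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def similarity(query, melody, n=3):
    q, m = ngrams(contour_string(query), n), ngrams(contour_string(melody), n)
    return len(q & m) / max(len(q), 1)

# MIDI-like pitch sequences: a stored (frequent) melody and a hummed query
# that is transposed and slightly imprecise but keeps the same contour.
stored = [60, 62, 64, 65, 67, 65, 64, 62, 60]
hummed = [62, 65, 67, 68, 70, 68, 67, 65, 62]
print(f"contour match: {similarity(hummed, stored):.2f}")  # prints 1.00
```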

Component Analysis for Constructing an Emotion Ontology (감정 온톨로지의 구축을 위한 구성요소 분석)

  • Yoon, Ae-Sun;Kwon, Hyuk-Chul
    • Korean Journal of Cognitive Science
    • /
    • v.21 no.1
    • /
    • pp.157-175
    • /
    • 2010
  • Understanding a dialogue participant's emotion is as important as decoding the explicit message in human communication. It is well known that non-verbal elements are better suited than verbal elements for conveying a speaker's emotions; written texts, however, contain a variety of linguistic units that express emotion. This study analyzes the components needed to construct an emotion ontology, which would enable numerous applications in human language technology. Most previous work on text-based emotion processing has focused on classifying emotions, building dictionaries that describe emotion, and retrieving those lexica in texts through keyword spotting and/or syntactic parsing; the emotions retrieved or computed in this way have not shown good accuracy. We therefore propose a more sophisticated component analysis and introduce linguistic factors. (1) Five linguistic types of emotion expression are differentiated in terms of target (verbal/non-verbal) and method (expressive/descriptive/iconic). The correlations among them, as well as their correlation with the non-verbal expressive type, are also determined; this is expected to make our ontology more adaptable to multi-modal environments. (2) As emotion-related components, this study proposes 24 emotion types, a 5-step intensity scale (-2 to +2), and a 3-way polarity (positive/negative/neutral), which can describe a variety of emotions in more detail and in a standardized way. (3) We introduce components related to verbal expression, such as 'experiencer', 'description target', 'description method', and 'linguistic features', which allow verbal expressions of emotion to be classified and tagged appropriately. (4) By adopting the linguistic tag sets proposed by ISO and TEI and providing a mapping table between our emotion classification and Plutchik's, the ontology can easily be employed for multilingual processing.
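
An illustrative data sketch of the annotation components the abstract lists: an emotion type, a 5-step intensity, a 3-way polarity, and verbal-expression attributes (experiencer, description target/method, linguistic features). The field names and the sample entry are assumptions for illustration, not the ontology's actual schema.

```python
# Sketch: a record type for emotion annotations with the components
# proposed in the abstract. Field names and the sample are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class EmotionAnnotation:
    emotion_type: str                 # one of the 24 proposed emotion types
    intensity: int                    # 5-step scale, -2 .. +2
    polarity: str                     # "positive" | "negative" | "neutral"
    expression_type: str              # e.g. "verbal-expressive", "verbal-descriptive", "iconic"
    experiencer: str = ""             # who feels the emotion
    description_target: str = ""      # what the expression describes
    description_method: str = ""      # how it is described
    linguistic_features: List[str] = field(default_factory=list)

sample = EmotionAnnotation(
    emotion_type="joy",
    intensity=2,
    polarity="positive",
    expression_type="verbal-expressive",
    experiencer="speaker",
    linguistic_features=["exclamative", "intensifier"],
)
print(sample)
```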


Trends in disaster safety research in Korea: Focusing on the journal papers of the departments related to disaster prevention and safety engineering

  • Kim, Byungkyu;You, Beom-Jong;Shim, Hyoung-Seop
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.10
    • /
    • pp.43-57
    • /
    • 2022
  • In this paper, we propose a method for analyzing research papers published by researchers in university departments in the disaster & safety field, as a scientometric analysis of the state of disaster safety research. To conduct the analysis, the dataset constructed in previous studies was improved and reused. In detail, for papers by authors in domestic universities' disaster-prevention and safety-engineering departments, the experimental data were enriched with institution identification, identification of cited journals in the references, department type classification, disaster safety type classification, researchers' majors, and KSIC (Korean Standard Industrial Classification) mapping information. The proposed method thus differs from previous studies in the disaster & safety field, whose datasets were based on searches of related keywords. The analysis identified in detail the types and regional distribution of organizations with disaster-prevention and safety-engineering departments, the composition of co-authoring department types, the researchers' majors, the distribution of disaster safety types and standard industry classifications, citation patterns in academic journals, and the major keywords. In addition, various co-occurrence networks were created and visualized for each unit of analysis to identify key connections. The results will be used to identify and recommend major organizations and information by disaster type for building an intelligent crisis warning system. To provide comprehensive and continuous analysis in the future, the scope of analysis needs to be expanded, and the identification and classification processes for dataset construction need to be automated.
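
A minimal sketch of the co-occurrence network step described above, assuming networkx is available: keyword pairs that appear together in a paper's keyword list are counted and turned into a weighted graph for centrality analysis or visualization. The papers and keywords are invented for illustration.

```python
# Sketch: build a keyword co-occurrence network from paper keyword lists.
from itertools import combinations
from collections import Counter
import networkx as nx

paper_keywords = [
    ["flood", "early warning", "sensor network"],
    ["flood", "risk assessment", "GIS"],
    ["earthquake", "risk assessment", "structural safety"],
    ["early warning", "earthquake", "sensor network"],
]

pair_counts = Counter()
for kws in paper_keywords:
    for a, b in combinations(sorted(set(kws)), 2):
        pair_counts[(a, b)] += 1

G = nx.Graph()
for (a, b), w in pair_counts.items():
    G.add_edge(a, b, weight=w)

# Weighted degree highlights central keywords in the (toy) disaster-safety corpus.
print(sorted(G.degree(weight="weight"), key=lambda t: -t[1]))
```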