• Title/Summary/Keyword: Matching Rule

120 search results, processing time 0.029 seconds

LiDAR Static Obstacle Map based Vehicle Dynamic State Estimation Algorithm for Urban Autonomous Driving (도심자율주행을 위한 라이다 정지 장애물 지도 기반 차량 동적 상태 추정 알고리즘)

  • Kim, Jongho;Lee, Hojoon;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association
    • /
    • v.13 no.4
    • /
    • pp.14-19
    • /
    • 2021
  • This paper presents a LiDAR static-obstacle-map-based vehicle dynamic state estimation algorithm for urban autonomous driving. In autonomous driving, state estimation of the host vehicle is important for accurate prediction of ego motion and of perceived objects. Therefore, when noise exists in the control input of the vehicle, state estimation using sensors such as LiDAR and vision is required. However, it is difficult to obtain a measurement of the vehicle state because the perception sensors of an autonomous vehicle also observe dynamic objects. The proposed algorithm consists of two parts. First, a Bayesian-rule-based static obstacle map is constructed from the continuous LiDAR point cloud input. Second, vehicle odometry over each time interval is calculated by matching against the static obstacle map with the Normal Distributions Transform (NDT) method. The velocity and yaw rate of the vehicle are then estimated by an Extended Kalman Filter (EKF) that uses the vehicle odometry as a measurement. The proposed algorithm is implemented in the Linux Robot Operating System (ROS) environment and is verified with data obtained from actual driving on urban roads. The test results show more robust and accurate dynamic state estimation when there is a bias in the chassis IMU sensor.
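The NDT-odometry-as-measurement idea can be sketched as a minimal filter (an illustration only, not the authors' implementation; the two-element state, noise values, and class name are assumptions):

```python
import numpy as np

# Minimal sketch: fuse noisy chassis IMU input with NDT scan-matching
# odometry. State x = [velocity (m/s), yaw_rate (rad/s)]; the NDT motion
# over interval dt yields a direct measurement of both, so H = I and the
# EKF update reduces to a linear Kalman update.
class VelocityYawRateEKF:
    def __init__(self):
        self.x = np.zeros(2)            # [v, yaw_rate]
        self.P = np.eye(2)              # state covariance
        self.Q = np.diag([0.5, 0.05])   # process noise (assumed values)
        self.R = np.diag([0.1, 0.01])   # NDT odometry noise (assumed values)

    def predict(self, accel, dt):
        # Propagate with the (possibly biased) IMU acceleration input.
        self.x[0] += accel * dt
        self.P += self.Q * dt

    def update(self, ndt_translation, ndt_yaw_change, dt):
        # NDT matching against the static obstacle map gives frame-to-frame
        # motion; dividing by dt converts it to velocity and yaw rate.
        z = np.array([ndt_translation / dt, ndt_yaw_change / dt])
        S = self.P + self.R                # innovation covariance (H = I)
        K = self.P @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ (z - self.x)
        self.P = (np.eye(2) - K) @ self.P

ekf = VelocityYawRateEKF()
ekf.predict(accel=0.2, dt=0.1)                              # biased IMU step
ekf.update(ndt_translation=1.05, ndt_yaw_change=0.002, dt=0.1)
print(ekf.x)  # estimate pulled toward the NDT-derived measurement
```

Because the map contains only static obstacles, the NDT measurement is unaffected by moving objects, which is what makes it usable to correct the IMU bias.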

Development of the ICF/KCF Code Set for People with Nervous System Disease: Based on Physical Therapy (신경계 환자 평가를 위한 ICF/KCF 코드세트 개발: 물리치료 중심으로)

  • Ju-Min Song;Sun-Wook Park
    • Journal of the Korean Society of Physical Medicine
    • /
    • v.18 no.1
    • /
    • pp.99-110
    • /
    • 2023
  • PURPOSE: This study was conducted to suggest a way to easily understand and utilize the International Classification of Functioning, Disability and Health (ICF) and the Korean Standard Classification of Functioning, Disability and Health (KCF), a common, standard language for health information. METHODS: The tools used by physical therapists to evaluate the functioning of neurological patients were collected from 10 domestic hospitals. Applying the ICF linking rule, two experts compared, analyzed, and linked the concepts in the items of the collected tools to ICF/KCF codes. The frequency of use of the selected tools, the agreement rate of the two experts' linking results, and the number of linked codes were reported as descriptive statistics, and the code set was presented as a list. RESULTS: The Berg balance scale, trunk impairment scale, timed up and go test, functional ambulation category, 6-minute walk test, manual muscle test, and range of motion measurements were the most commonly used tools for evaluating functioning. The total number of items in the seven tools was 33, and 69 codes were linked to the ICF/KCF. Excluding duplicates, 22 codes were mapped: ten codes in body function, eleven in activity, and one in environmental factors. CONCLUSION: Information on the development process of the code set will increase understanding of the ICF/KCF, and the developed code set can conveniently be used for collecting patients' functioning information.

Application Analysis of Digital Photogrammetry and Optical Scanning Technique for Cultural Heritages Restoration (문화재 원형복원을 위한 수치사진측량과 광학스캐닝기법의 응용분석)

  • Han, Seung Hee;Bae, Yeon Soung;Bae, Sang Ho
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.26 no.5D
    • /
    • pp.869-876
    • /
    • 2006
  • In the case of earthenware cultural heritages found in the form of fragments, the major task is quick and precise restoration. The existing method, which relies on trial and error, is not only greatly time consuming but also lacks precision. If this job could be done by three-dimensional scanning, matching up the pieces could be done with remarkable efficiency. In this study, the original earthenware was modeled through three-dimensional pattern scanning and photogrammetry, and each of the fragments was scanned and modeled. To obtain images for the photogrammetry, we calibrated and used a Canon EOS 1DS camera. We analyzed the relationships among the sections of the formed models, efficiently combined them, and analyzed the errors through residuals and a color error map. We also built a web-based, user-centered three-dimensional simulation environment for a virtual museum.

A Unit Selection Method using Flexible Breaks in a Japanese TTS (일본어 합성기에서 유동 Break를 이용한 합성단위 선택 방법)

  • Song, Young-Hwan;Na, Deok-Su;Kim, Jong-Kuk;Bae, Myung-Jin;Lee, Jong-Seok
    • The Journal of the Acoustical Society of Korea
    • /
    • v.26 no.8
    • /
    • pp.403-408
    • /
    • 2007
  • In a large corpus-based speech synthesizer, the break, a parameter influencing naturalness and intelligibility, is used as an important feature during the unit selection process. Japanese is a language whose intonation is indicated by relative differences in pitch height: accentual phrases (APs) are placed according to changes of accent, and a break occurs on the boundary of an AP. Although breaks can be predicted with J-ToBI (Japanese Tones and Break Indices) using rule-based or statistical approaches, predicting them exactly is very difficult because of their flexibility. Therefore, in this paper, we propose a unit selection method that divides breaks into two types, fixed breaks and flexible breaks, in order to exploit the advantage of a large-scale corpus containing various types of prosody. Experimental results show that the proposed unit selection method enhanced the naturalness of the synthesized speech.
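The fixed/flexible distinction can be illustrated with a toy candidate generator (a hypothetical sketch of the idea, not the paper's system; the labels and function name are invented): a fixed boundary always carries a break, while a flexible boundary may or may not, so the unit search can consider both variants and pick whichever the corpus covers best.

```python
from itertools import product

# Each AP boundary is labeled 'fixed' (break is certain) or 'flexible'
# (break is uncertain); enumerate every break pattern the unit search
# is allowed to consider.
def candidate_break_patterns(boundaries):
    options = [(True,) if b == 'fixed' else (True, False) for b in boundaries]
    return [list(p) for p in product(*options)]

patterns = candidate_break_patterns(['fixed', 'flexible', 'flexible'])
print(len(patterns))  # 1 * 2 * 2 = 4 candidate patterns
```

In a real synthesizer each pattern would be scored by concatenation and prosody costs; letting flexible boundaries vary enlarges the candidate space without contradicting the reliable fixed predictions.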

Development of Intelligent Job Classification System based on Job Posting on Job Sites (구인구직사이트의 구인정보 기반 지능형 직무분류체계의 구축)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.123-139
    • /
    • 2019
  • The job classification systems of major job sites differ from site to site and also differ from the job classification system of the SQF (Sectoral Qualifications Framework) proposed for the SW field. Therefore, a new job classification system that SW companies, SW job seekers, and job sites can all understand is needed. The purpose of this study is to establish a standard job classification system that reflects market demand by analyzing the SQF based on the job posting information of major job sites and the NCS (National Competency Standards). For this purpose, an association analysis between the occupations of major job sites is conducted, and association rules between the SQF and those occupations are derived. Using these association rules, we propose an intelligent, data-based job classification system that maps the job classification systems of major job sites to the SQF. First, major job sites are selected to obtain information on the job classification systems of the SW market. Then we identify ways to collect job information from each site and collect the data through open APIs. Focusing on the relationships between the data, we keep only the job postings posted on multiple job sites at the same time and delete the rest. Next, we map the job classification systems between job sites using the association rules derived from the association analysis. We complete the mapping between these market classifications, discuss it with experts, further map it to the SQF, and finally propose a new job classification system. As a result, more than 30,000 job postings were collected in XML format through the open APIs of 'WORKNET', 'JOBKOREA', and 'saramin', the main job sites in Korea. After filtering to the roughly 900 job postings simultaneously posted on multiple job sites, 800 association rules were derived by applying the Apriori algorithm, a frequent-pattern mining method.
Based on these 800 association rules, the job classification systems of WORKNET, JOBKOREA, and saramin and the SQF job classification system were mapped and organized into first through fourth levels. In the new job taxonomy, the first primary class, covering IT consulting, computer systems, networks, and security-related jobs, consisted of three secondary, five tertiary, and five quaternary classifications. The second primary class, covering databases and system operation jobs, consisted of three secondary, three tertiary, and four quaternary classifications. The third primary class, covering web planning, web programming, web design, and games, was composed of four secondary, nine tertiary, and two quaternary classifications. The last primary class, covering ICT management and computer and communication engineering technology jobs, consisted of three secondary and six tertiary classifications. In particular, the new job classification system has a relatively flexible depth of classification, unlike existing classification systems. WORKNET divides jobs into three levels; JOBKOREA divides jobs into two levels and subdivides them by keyword; saramin likewise divides jobs into two levels and subdivides them in keyword form. The newly proposed standard job classification system accepts some keyword-based jobs and treats some product names as jobs. In the new system, some jobs stop at the second level while others are subdivided down to the fourth level, reflecting the idea that not all jobs can be broken down into the same number of steps. We combined rules derived from the association analysis of the collected market data with experts' opinions.
Therefore, the newly proposed job classification system can be regarded as a data-based intelligent job classification system that reflects market demand, unlike existing job classification systems. This study is meaningful in that it suggests a new job classification system reflecting market demand by mapping between occupations based on data, through association analysis, rather than on the intuition of a few experts. However, this study has a limitation in that it cannot fully reflect market demand as it changes over time, because the data were collected at a single point in time. As market demand changes over time, including seasonal factors and the timing of major corporate public recruitment, continuous data monitoring and repeated experiments are needed to achieve more accurate matching. The results of this study can be used to suggest directions for improving the SQF in the SW industry, and the approach is expected to transfer to other industries given its success in the SW industry.
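The cross-site mapping step can be sketched with the pair-counting core of Apriori (an illustration with invented data, not the study's 30,000 postings; the category labels are hypothetical): each "transaction" is the set of category labels one posting received across different sites, and pairs with high support suggest mapping rules between the sites' taxonomies.

```python
from itertools import combinations

# Count co-occurring category pairs across sites and keep those whose
# support (fraction of postings containing the pair) clears a threshold.
def frequent_pairs(transactions, min_support):
    n = len(transactions)
    counts = {}
    for t in transactions:
        for pair in combinations(sorted(t), 2):
            counts[pair] = counts.get(pair, 0) + 1
    return {p: c / n for p, c in counts.items() if c / n >= min_support}

# Each posting appeared on two sites under these (invented) categories.
postings = [
    {'worknet:web_dev', 'jobkorea:web_programming'},
    {'worknet:web_dev', 'jobkorea:web_programming'},
    {'worknet:db_admin', 'jobkorea:dba'},
    {'worknet:web_dev', 'jobkorea:web_design'},
]
rules = frequent_pairs(postings, min_support=0.5)
print(rules)  # {('jobkorea:web_programming', 'worknet:web_dev'): 0.5}
```

A full Apriori run would also generate larger itemsets and confidence-scored rules; the surviving pairs here play the role of the 800 association rules used for the mapping.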

Place Assimilation in OT

  • Lee, Sechang
    • Proceedings of the KSPS conference
    • /
    • 1996.10a
    • /
    • pp.109-116
    • /
    • 1996
  • In this paper, I would like to explore the possibility that the nature of place assimilation can be captured in terms of the OCP within Optimality Theory (McCarthy & Prince 1993, 1995; Prince & Smolensky 1993). In derivational models, each assimilatory process would be expressed through a different autosegmental rule. However, what any such model misses is a clear generalization: all of those processes have the effect of avoiding a configuration in which two consonantal place nodes are adjacent across a syllable boundary, as illustrated in (1) (example omitted). In a derivational model, it is a coincidence that across languages there are changes that modify a structure of the form (1a) into the other structure, (1b), which does not have adjacent consonantal place nodes. OT allows us to express this effect through a constraint, given in (2), that forbids adjacent place nodes: (2) OCP(PL): adjacent place nodes are prohibited. At this point, a question arises as to how consonantal and vocalic place nodes are formally distinguished in the output for the purpose of applying OCP(PL). Besides, OCP(PL) would equally affect complex onsets and codas as well as coda-onset clusters in languages that have them, such as English. To remedy this problem, following McCarthy (1994), I assume that the canonical markedness constraint is a prohibition defined over no more than two segments, $\alpha$ and $\beta$: that is, $^{*}\{\alpha,\ \beta\}$ with appropriate conditions imposed on $\alpha$ and $\beta$. I propose OCP(PL) again in the following format: (3) OCP(PL) (table omitted), where $\alpha$ and $\beta$ are the target and the trigger of place assimilation, respectively. The '*' is a reminder that, in this format, constraints specify negative targets or prohibited configurations; any structure matching the specifications is in violation of the constraint.
Now, in correspondence terms, the meaning of OCP(PL) is this: the constraint is violated if a consonantal place $\alpha$ is immediately followed by a consonantal place $\beta$ on the surface. One advantage of this format is that OCP(PL) is also invoked in dealing with place assimilation within a complex coda (e.g., sink [siŋk]): we can make the constraint scan consonantal clusters only, excluding any intervening vowels. Finally, onset clusters typically do not undergo place assimilation. I propose that onsets be protected by a constraint which ensures that the coda, not the onset, loses the place feature.
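The cluster-scanning reading of OCP(PL) can be mimicked with a toy violation counter (a deliberately simplified sketch, not a serious phonological model: orthography stands in for segments, and only the five vowel letters are treated as vowels):

```python
# Count adjacent consonant pairs as OCP(PL) violations; a vowel breaks
# adjacency, so only consonant clusters are scanned, as in the proposal.
VOWELS = set('aeiou')

def ocp_pl_violations(segments):
    violations = 0
    prev_is_consonant = False
    for seg in segments:
        if seg in VOWELS:
            prev_is_consonant = False  # an intervening vowel resets the scan
            continue
        if prev_is_consonant:
            violations += 1            # two consonantal places are adjacent
        prev_is_consonant = True
    return violations

print(ocp_pl_violations('sink'))  # 1: the coda cluster is scanned
print(ocp_pl_violations('solo'))  # 0: vowels separate all consonants
```

A fuller model would distinguish place features and exempt protected onsets; the point here is only that the constraint can be evaluated over clusters while ignoring intervening vowels.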


A Comparative Study of Two Paradigms in Information Retrieval: Centering on Newer Perspectives on Users (정보검색에 있어서 두 패러다임의 비교분석 : 이용자에 대한 새로운 인식을 중심으로)

  • Cho Myung-Dae
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.24
    • /
    • pp.333-369
    • /
    • 1993
  • Most users' verdict on information retrieval systems is that they are "difficult to use." The existing matching paradigm, whose underlying philosophy is mechanical retrieval, treats information itself as an object that can be moved from place to place. Under this paradigm, a system yields effective retrieval only if the user fully understands the intentions of its designers (i.e., the indexing and cataloguing rules) and can formulate a complete query. But which user can understand such a complex system well enough to search it? In short, it is very difficult for users to adapt themselves to the designer's intent. If we rethink our view of users, however, we can build better systems. Human beings are highly creative and act sensibly within the situations they face (the sense-making approach). Recognizing this, we may ask: why can't system designers adapt to users' behavior patterns instead? If future systems are designed around users' natural behavior patterns, convenient systems that are easy to use alongside existing ones can be built. Therefore, in library and information science, alongside traditional research on classification and cataloguing and on use studies (e.g., at what times users are most numerous, which groups of users read which kinds of books, how the volume of books and journals has grown), the third element proposed here, user cognition, must be incorporated into system design (the user-centric approach). That is, many facilitators that can assist users along the way should be provided. To design systems that can respond to users' diverse patterns of information needs, help users who cannot formulate queries well (the ASK hypothesis: Anomalous State of Knowledge), and allow free browsing without any query (e.g., hypertext), understanding users' invisible psychological states matters as much as their observable external behavior. By paying new attention to why, in what situations, for what purposes, and how users search for information, we must recognize how far users fall short of system designers' expectations. Research in this area requires a new paradigm: a "user study" alone is not enough, and users must be studied from a new perspective. For example, analyzing how, in what areas, and why users make errors in a newly installed computer-assisted system would greatly help future system design, and many such methods are in fact being developed. We will then find considerable gaps between designers' predictions of how users will search (the conceptual model) and users' actual behavior during retrieval (the mental model). Narrowing this gap is, I believe, the system designer's duty. In conclusion, we should continue philosophical and methodological research into user cognition, together with new knowledge of computers, and study how users' behavior patterns can be applied to system design. The important point is not to discard the old paradigm entirely, but to add a new understanding of users to it.
That, I believe, is the road to a true user study and, now that smooth communication between computers and users is indispensable, a research area our discipline must pursue (Human Interaction with Computers).


Effect of Community-Based Interventions for Registering and Managing Diabetes Patients in Rural Areas of Korea: Focusing on Medication Adherence by Difference in Difference Regression Analysis (한 농촌 지역사회 기반 당뇨병 환자의 등록관리 중재의 효과: 투약순응도에 대한 이중차이분석을 중심으로)

  • Hyo-Rim Son;So Youn Park;Hee-Jung Yong;Seong-Hyeon Chae;Eun Jung Kim;Eun-Sook Won;Yuna Kim;Se-Jin Bae;Chun-Bae Kim
    • Health Policy and Management
    • /
    • v.33 no.1
    • /
    • pp.3-18
    • /
    • 2023
  • Background: A chronic disease management program including patient education, recall-and-remind services, and reduced out-of-pocket payments has been implemented in Korea through a chronic care model. This study aimed to assess the effect of a community-based intervention program on medication adherence among patients with diabetes mellitus in rural areas of Korea. Methods: We applied a non-equivalent control group design using Korean National Health Insurance big data. Hongcheon County has continuously operated this program since 2012 and served as the intervention region; Hoengseong County did not adopt the program and served as the control region. Subjects were a cohort of patients with diabetes mellitus aged more than 65 but less than 85 years among residents over the 11 years from 2010 to 2020. After 1:1 matching, there were 368 subjects in the intervention region and 368 in the control region. Indirect indicators were analyzed using difference-in-differences regression according to Andersen's medical use model. Results: The percentage-point increases in diabetic patients who continuously received insurance benefits for more than 240 days, from 2010 to 2014 and from 2010 to 2020, were 2.6%p and 2.7%p in the intervention region and 3.0%p and 3.9%p in the control region, respectively. The number of dispensations per prescription for diabetic patients in the intervention region increased by approximately 4.61% per month compared to that in the control region. Conclusion: The intervention program encouraged older people with diabetes mellitus to receive continuous care, helping to overcome the rule of halves in the community. More research is needed to determine whether further improvement in the continuity of comprehensive care can prevent the progression of cardiovascular diseases.
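The difference-in-differences design can be sketched on synthetic data (invented numbers, not the study's insurance data; the assumed effect size and variable names are illustrative): the coefficient on the treated-by-post interaction is the DiD estimate of the intervention effect.

```python
import numpy as np

# DiD regression: y = b0 + b1*treated + b2*post + b3*(treated*post) + e,
# where b3 identifies the intervention effect under parallel trends.
rng = np.random.default_rng(0)
n = 400
treated = np.repeat([0, 1], n // 2)        # control vs intervention region
post = np.tile([0, 1], n // 2)             # before vs after the program
true_effect = 2.7                          # assumed effect, for illustration
y = (50 + 1.0 * treated + 3.0 * post
     + true_effect * treated * post + rng.normal(0, 1, n))

X = np.column_stack([np.ones(n), treated, post, treated * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta[3])  # DiD estimate; close to the assumed 2.7
```

The study's outcome indicators (benefit continuity, dispensations per prescription) would take the place of `y`, with matching handling observable differences between the two counties.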

X-tree Diff: An Efficient Change Detection Algorithm for Tree-structured Data (X-tree Diff: 트리 기반 데이터를 위한 효율적인 변화 탐지 알고리즘)

  • Lee, Suk-Kyoon;Kim, Dong-Ah
    • The KIPS Transactions:PartC
    • /
    • v.10C no.6
    • /
    • pp.683-694
    • /
    • 2003
  • We present X-tree Diff, a change detection algorithm for tree-structured data. Our work is motivated by the need to monitor massive volumes of web documents and detect suspicious changes, called defacement attacks, on web sites. In this context, the algorithm must be very efficient in speed and memory use. X-tree Diff uses a special ordered labeled tree, the X-tree, to represent XML/HTML documents. X-tree nodes have a special field, tMD, which stores a 128-bit hash value representing the structure and data of the subtree, allowing identical subtrees from the old and new versions to be matched. During this process, X-tree Diff uses the Rule of Delaying Ambiguous Matchings: it performs exact matching only where a node in the old version has a one-to-one correspondence with a node in the new version, delaying all the others. This drastically reduces the possibility of wrong matchings. X-tree Diff propagates such exact matchings upward in Step 2 and obtains more matchings downward from the roots in Step 3. In Step 4, nodes to be inserted or deleted are decided. We also show that X-tree Diff runs in O(n), where n is the number of nodes in the X-trees, in the worst case as well as the average case. This result is even better than that of the BULD Diff algorithm, which is O(n log(n)) in the worst case. We tested X-tree Diff on real data, about 11,000 home pages from about 20 web sites, instead of synthetic documents manipulated for experimentation. Currently, the X-tree Diff algorithm is used in a commercial hacking detection system, the WIDS (Web-Document Intrusion Detection System), which finds changes in registered websites and reports suspicious changes to users.
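The tMD idea can be sketched as follows (an illustrative reconstruction, not the authors' code; the class and field names are assumptions): each node's digest covers its label, text, and children's digests, so two identical subtrees can be matched by comparing a single 128-bit value.

```python
import hashlib

# Each node stores a 128-bit digest (tmd) over its label, text, and
# children's digests; equal digests identify identical subtrees across
# the old and new versions of a document.
class XNode:
    def __init__(self, label, text='', children=None):
        self.label = label
        self.text = text
        self.children = children or []
        self.tmd = self._digest()

    def _digest(self):
        h = hashlib.md5()           # 128-bit value, as in the paper
        h.update(self.label.encode())
        h.update(self.text.encode())
        for child in self.children:
            h.update(child.tmd)     # child digests fold into the parent's
        return h.digest()

old = XNode('div', children=[XNode('p', 'hello'), XNode('p', 'world')])
new = XNode('div', children=[XNode('p', 'hello'), XNode('p', 'world!')])

print(old.tmd == new.tmd)                          # False: a leaf changed
print(old.children[0].tmd == new.children[0].tmd)  # True: identical subtree
```

Because a change anywhere in a subtree alters every ancestor's digest, unchanged subtrees can be matched in one comparison each, which is what makes the linear-time bound plausible.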

Term Mapping Methodology between Everyday Words and Legal Terms for Law Information Search System (법령정보 검색을 위한 생활용어와 법률용어 간의 대응관계 탐색 방법론)

  • Kim, Ji Hyun;Lee, Jong-Seo;Lee, Myungjin;Kim, Wooju;Hong, June Seok
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.137-152
    • /
    • 2012
  • In the era of Web 2.0, as many users have begun to create large amounts of web content themselves, so-called user-created content, the World Wide Web is overflowing with information, and finding meaningful information among countless resources has become the key problem. Information retrieval is now essential across every field, and several types of search services have been developed and are widely used to retrieve the information users really want. In particular, legal information search is an indispensable service, giving people a convenient channel for finding the law relevant to their present situation. The Office of Legislation in Korea has provided the Korean Law Information portal since 2009 for searching legal information such as legislation, administrative rules, and judicial precedents, so that people can conveniently find information related to the law. However, this service has a limitation: current search engine technology basically returns a document only if the query terms are included in it. Despite the efforts of the Office of Legislation, it is therefore difficult for general users who are unfamiliar with legal terms to retrieve legal information through simple keyword matching, because there is a large divergence between everyday words and legal terms, many of which derive from Chinese characters. People generally try to access legal information using everyday words, so they have difficulty getting the results they actually want. In this paper, we propose a term mapping methodology between everyday words and legal terms for general users who lack background in legal terminology, and we develop a search service that can return legal information from everyday-word queries.
This makes it possible to search legal information accurately without knowledge of legal terminology; in other words, our goal is a legal information search system with which general users can retrieve legal information using everyday words. First, this paper takes advantage of the tags of internet blogs, exploiting collective intelligence, to find term mapping relationships between everyday words and legal terms. To achieve this, we collect tags related to an everyday word from blog posts: when people write a post, they generally add non-hierarchical keywords or terms, called tags, to describe, classify, and manage it. Second, the collected tags are clustered with the K-means cluster analysis method. Then we find a mapping relationship between an everyday word and a legal term, using our estimation measure to select the legal term that best matches the everyday word. The selected legal terms are given definite relationships, and the relations between everyday words and legal terms are described using SKOS, an ontology for describing knowledge organization systems such as thesauri, classification schemes, taxonomies, and subject headings. Based on the proposed mapping and searching methodologies, when a user tries to retrieve legal information with an everyday word, our legal information search system finds the legal term mapped to the query and retrieves legal information using the matched term. Therefore, users can get exact results even if they have no knowledge of legal terms. As a result of this research, we expect that general users without a professional legal background can conveniently and efficiently retrieve legal information using everyday words.
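A simplified version of the tag-to-legal-term selection can be sketched as follows (the tag data, vocabulary, and frequency-based score are all invented for illustration; the paper's actual estimation measure operates on K-means clusters of tags):

```python
from collections import Counter

# Assumed legal vocabulary: terms that count as candidate legal mappings.
LEGAL_TERMS = {'임대차', '전세권', '임대차보호법'}

def map_to_legal_term(tags):
    """Score candidate legal terms by frequency among the collected tags
    and return the best match for the everyday word, or None."""
    scores = Counter(t for t in tags if t in LEGAL_TERMS)
    return scores.most_common(1)[0][0] if scores else None

# Tags collected from blog posts about the everyday word '월세' (monthly rent).
blog_tags = ['월세', '임대차', '집주인', '임대차', '전세권', '이사']
print(map_to_legal_term(blog_tags))  # '임대차', the most frequent legal tag
```

In the full system the selected mapping would then be recorded as a SKOS relation, so the search service can rewrite an everyday-word query into the matched legal term before querying the law database.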