• Title/Summary/Keyword: 검색 알고리즘 (search algorithm)

1,684 search results (processing time: 0.027 seconds)

Automatic Extraction of Eye and Mouth Fields from Face Images using MultiLayer Perceptrons and Eigenfeatures (고유특징과 다층 신경망을 이용한 얼굴 영상에서의 눈과 입 영역 자동 추출)

  • Ryu, Yeon-Sik;O, Se-Yeong
    • Journal of the Institute of Electronics Engineers of Korea CI / v.37 no.2 / pp.31-43 / 2000
  • This paper presents a novel algorithm for extracting the eye and mouth fields (facial features) from 2D gray-level face images. First, it was found that eigenfeatures, derived from the eigenvalues and eigenvectors of the binary edge data set constructed from the eye and mouth fields, are very good features for locating these fields. The eigenfeatures, extracted from positive and negative training samples of the facial features, are used to train a MultiLayer Perceptron (MLP) whose output indicates the degree to which a particular image window contains the eye or the mouth. Second, to ensure robustness, an ensemble network consisting of multiple MLPs is used instead of a single MLP; the output of the ensemble network is the average of the field locations found by the constituent MLPs. Finally, to reduce computation time, a coarse search region for the eyes and mouth is extracted using prior information about face images. The advantages of the proposed approach are that only a small number of frontal faces is sufficient to train the nets and, furthermore, that the nets generalize well to non-frontal poses and even to other people's faces. It was also experimentally verified that the proposed algorithm is robust against slight variations in facial size and pose, owing to the generalization characteristics of neural networks.
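
A minimal Python sketch of the idea described above, under stated assumptions: eigenfeatures are taken here as the eigenvalues and principal eigenvector of the scatter matrix of edge-pixel coordinates in a candidate window, scikit-learn's `MLPClassifier` stands in for the paper's MLPs, and the training windows and labels are random placeholders rather than real face data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def eigenfeatures(edge_window):
    """Eigenvalues and principal axis of the edge-pixel scatter matrix."""
    ys, xs = np.nonzero(edge_window)              # coordinates of edge pixels
    if len(xs) < 2:
        return np.zeros(4)
    pts = np.stack([xs, ys], axis=1).astype(float)
    vals, vecs = np.linalg.eigh(np.cov(pts, rowvar=False))
    return np.concatenate([vals, vecs[:, -1]])    # 2 eigenvalues + 2D axis

# Toy data: random binary "edge" windows with random eye/mouth labels.
rng = np.random.default_rng(0)
X = np.stack([eigenfeatures(rng.integers(0, 2, (16, 16))) for _ in range(200)])
y = rng.integers(0, 2, 200)

# Ensemble of MLPs; the final score averages the members' outputs.
ensemble = [MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=s)
            .fit(X, y) for s in range(3)]
window = rng.integers(0, 2, (16, 16))
score = np.mean([m.predict_proba(eigenfeatures(window)[None])[0, 1]
                 for m in ensemble])
print(f"eye/mouth field score for this window: {score:.3f}")
```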


A System for Automatic Classification of Traditional Culture Texts (전통문화 콘텐츠 표준체계를 활용한 자동 텍스트 분류 시스템)

  • Hur, YunA;Lee, DongYub;Kim, Kuekyeng;Yu, Wonhee;Lim, HeuiSeok
    • Journal of the Korea Convergence Society / v.8 no.12 / pp.39-47 / 2017
  • The Internet has increased the number of digital web documents related to the history and traditions of Korean culture. However, users who search for creators or materials related to traditional culture often cannot find the information they want, and the search results are insufficient. Document classification is required to make this information effectively accessible. In the past, documents were classified manually, which costs a great deal of time and money. Therefore, this paper develops an automatic text classification model for traditional cultural contents, based on data from the Korean information culture field organized by a systematic classification of traditional cultural contents. This study applied the TF-IDF model, the Bag-of-Words model, and a combined TF-IDF/Bag-of-Words model to extract word frequencies from the 'Korea Traditional Culture' data, and developed an automatic text classification model for traditional cultural contents using the Support Vector Machine classification algorithm.
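
A minimal scikit-learn sketch of such a pipeline, with clearly labeled assumptions: the corpus and category labels below are invented placeholders, not the 'Korea Traditional Culture' data, and `LinearSVC` stands in for whatever SVM configuration the paper used.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.pipeline import FeatureUnion, make_pipeline
from sklearn.svm import LinearSVC

# Placeholder corpus and labels (not the paper's dataset).
docs = ["traditional pottery craft", "folk dance performance",
        "pottery kiln technique", "mask dance drama"]
labels = ["craft", "performance", "craft", "performance"]

features = {
    "BoW": CountVectorizer(),
    "TF-IDF": TfidfVectorizer(),
    # Combined model: concatenate both feature spaces.
    "BoW+TF-IDF": FeatureUnion([("bow", CountVectorizer()),
                                ("tfidf", TfidfVectorizer())]),
}
for name, vec in features.items():
    clf = make_pipeline(vec, LinearSVC())   # vectorizer -> linear SVM
    clf.fit(docs, labels)
    print(name, clf.predict(["court dance music"]))
```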

Home Health Care Service Using Routine Vital Sign Checkup and Electronic Health Questionnaires (주기적인 생리변수 측정과 전자건강설문을 이용한 재택건강관리서비스)

  • 박승훈;우응제;이광호;김종철
    • Journal of Biomedical Engineering Research / v.22 no.5 / pp.469-477 / 2001
  • In this paper, we describe a home health care service using electronic health questionnaires and routine checkups of vital signs, including ECG (electrocardiography), blood pressure, and SpO$_2$ (oxygen saturation). The system is intended for patients at home with chronic diseases, discharged patients, or healthy people who want to prevent disease. The service requires a home health care terminal and a PC with an Internet connection installed at the patient's home. The remote health care management center is equipped with a vital-sign and questionnaire interpreter as well as database, Web, and notification servers with a UMS (Unified Messaging System). Participating physicians can access the servers at the center at any time using a Web browser on a PC. These components are linked through various data and voice communication channels, including the PSTN (Public Switched Telephone Network), CATV (Community Antenna TV), the Internet, and mobile communication networks. Following the physician's directions, the patient uses the home health care terminal to collect vital signs and fill out the questionnaire. When the terminal automatically transmits these data to the management center, the data interpreter and servers at the center process the information following the protocol implemented in the system. Physicians can retrieve and review the data for their patients and send diagnostic reports back to the center. The UMS at the center delivers the physician's recommendation to the corresponding patient through the notification server. Patients can also retrieve and review their own records as well as the diagnostic reports from physicians. The system provides a new way of collecting diagnostic information and delivering the doctor's recommendations to patients at home for their health management. Future work is needed to develop new technologies for measuring and interpreting various vital signs.
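
A minimal sketch of the kind of record such a terminal might transmit to the management center. Every field name, unit, and value here is an illustrative assumption, not the paper's actual message format or protocol.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class VitalSignReport:
    patient_id: str           # hypothetical identifier
    taken_at: str             # measurement timestamp (ISO 8601)
    heart_rate_bpm: int       # summary value derived from the ECG trace
    systolic_mmHg: int        # blood pressure
    diastolic_mmHg: int
    spo2_percent: float       # oxygen saturation
    questionnaire: dict       # electronic health questionnaire answers

report = VitalSignReport(
    patient_id="P-0001",
    taken_at=datetime.now(timezone.utc).isoformat(),
    heart_rate_bpm=72, systolic_mmHg=120, diastolic_mmHg=80,
    spo2_percent=97.5,
    questionnaire={"chest_pain": "no", "shortness_of_breath": "no"},
)
payload = json.dumps(asdict(report))   # what the terminal would send
print(payload)
```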


Development of GML Map Visualization Service and POI Management Tool using Tagging (GML 지도 가시화 서비스 및 태깅을 이용한 POI 관리 도구 개발)

  • Park, Yong-Jin;Song, Eun-Ha;Jeong, Young-Sik
    • Journal of Internet Computing and Services / v.9 no.3 / pp.141-158 / 2008
  • In this paper, we developed a GML Map Server that visualizes maps based on GML, the international standard for exchanging maps in a common format and for interoperability of GIS information, and that transmits GML maps efficiently to mobile devices using dynamic map partitioning and caching. The server manages partitions based on the visible area of the mobile device so that the map can be visualized on the device in real time, and it serializes each partition area for efficient transmission. The received partition area is assembled on the mobile device and visualized by being partitioned again into four visible areas based on the device's display. For efficient use of resources, the received map is managed with a caching algorithm that considers how often map areas are revisited. In addition, to prevent transmission delays in instance-dense areas of the map, an adaptive map partitioning mechanism is proposed that keeps transmission times regular. The GML Map Server can trace the position of a mobile device in the WIPI environment; a field emulator can create mobile devices and move and trace their positions in place of the real world. We also developed POIM (POI Management), a tool for managing POI information hierarchically and for efficient POI search, using individual tagging technology with a visual interface.
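
A minimal sketch of the client-side idea, under invented assumptions (partition size, keys, and cache capacity are placeholders): four partitions cover the visible display, and an LRU cache avoids re-requesting partitions during repeated panning. The paper's actual caching policy and partition scheme may differ.

```python
from collections import OrderedDict

class PartitionCache:
    """Small LRU cache for received map partitions."""
    def __init__(self, capacity=16):
        self.capacity = capacity
        self._cache = OrderedDict()          # (col, row) -> serialized partition

    def get(self, key):
        if key in self._cache:
            self._cache.move_to_end(key)     # mark as most recently used
            return self._cache[key]
        return None

    def put(self, key, partition):
        self._cache[key] = partition
        self._cache.move_to_end(key)
        if len(self._cache) > self.capacity:
            self._cache.popitem(last=False)  # evict least recently used

def visible_partitions(x, y, part_w=256, part_h=256):
    """Four partitions covering a display whose top-left corner is (x, y)."""
    c, r = x // part_w, y // part_h
    return [(c, r), (c + 1, r), (c, r + 1), (c + 1, r + 1)]

cache = PartitionCache()
for key in visible_partitions(300, 520):
    if cache.get(key) is None:               # cache miss -> request from server
        cache.put(key, f"GML partition {key}")
print([cache.get(k) for k in visible_partitions(300, 520)])  # now all hits
```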


The Establishment and Application of Very Short Range Forecast of Precipitation System (초단시간 강수예보시스템 구축 및 활용)

  • Choi, Ji-Hye;Nam, Kyung-Yeub;Suk, Mi-Kyung;Choi, Byoung-Cheol
    • Proceedings of the Korea Water Resources Association Conference / 2006.05a / pp.1515-1519 / 2006
  • This study introduces the current status of the VSRF (Very Short-Range Forecast of precipitation) system. The VSRF model consists of a precipitation analysis stage, which produces radar-AWS rain rates from radar reflectivity data and surface AWS data; a forecast stage, which performs very short-range precipitation forecasting by extrapolation, using the analyzed precipitation and mesoscale numerical forecast fields; and a data support stage, which verifies the forecasts produced in real time and publishes them on the web. To improve the model's forecast skill, two major improvements were made. First, the input rain-rate data were improved by integrating the WPMM (Window Probability Matching Method) algorithm, operated by the remote sensing laboratory of the Meteorological Research Institute, with the RQPE (Radar Quantitative Precipitation Estimation) algorithm, operated by the weather radar division of the Korea Meteorological Administration, into the RAR (Radar-AWS Rain rate) system, which produces more accurate rain rates. Second, to compensate for the loss of predictive skill of the extrapolation-based forecast beyond three hours, the model was merged with precipitation forecasts from the operational mesoscale model (RDAPS, Regional Data Assimilation and Prediction System). A real-time verification system performing time-series and spatial verification was also built to evaluate the model's accuracy in real time. As a result, improving the input data yielded only a slight gain in accuracy, while merging with the mesoscale model improved the forecasts by about 50% beyond the three-hour lead time.
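
A minimal sketch of the merging idea: the radar extrapolation dominates at short lead times and the mesoscale-model (RDAPS-like) forecast takes over as lead time grows. The linear weight, crossover value, and rain fields below are illustrative assumptions, not the paper's actual blending function.

```python
import numpy as np

def merged_forecast(extrap, nwp, lead_hours, crossover=3.0):
    """Blend extrapolation and NWP rain fields for a given lead time."""
    w = max(0.0, 1.0 - lead_hours / (2 * crossover))  # weight on extrapolation
    return w * extrap + (1.0 - w) * nwp

extrap = np.array([[2.0, 0.5], [0.0, 1.0]])   # mm/h from radar extrapolation
nwp    = np.array([[1.0, 1.0], [0.5, 0.5]])   # mm/h from the mesoscale model
for t in (1, 3, 6):
    print(f"{t} h lead:\n{merged_forecast(extrap, nwp, t)}")
```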


Image Separation of Talker from a Background by Differential Image and Contours Information (차영상 및 윤곽선에 의한 배경에서 화자분리)

  • Park Jong-Il;Park Young-Bum;Yoo Hyun-Joong
    • The KIPS Transactions:PartB / v.12B no.6 s.102 / pp.671-678 / 2005
  • In this paper, we suggest an algorithm that extracts the major object from motion pictures and then replaces the background with arbitrary images. The suggested technique can be used not only for protecting privacy and reducing the size of the data to be transferred by removing the background of each frame, but also for replacing the background with a user-selected image in video communication systems, including mobile phones. Because of the relatively large size of image data, digital image processing usually consumes substantial resources such as memory and CPU. This is a problem especially for mobile video phones, which typically have limited resources. In our experiments, we reduced the time and memory required to process the images by restricting the search area to the vicinity of the major object's contour found in the previous frame, based on the fact that the movement of the major object is generally neither wide nor rapid. Specifically, we detected edges and used the edge image of the initial frame to locate candidate object areas. Then, in the located areas, we computed the difference image between adjacent frames and used it to determine and trace the major object that might be moving. Finally, we computed the contour of the major object and used it to separate the major object from the background. We successfully separated the major object from the background and replaced the background with arbitrary images.
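
A minimal OpenCV sketch of the core steps (difference image, thresholding, contour extraction, background replacement), assuming OpenCV 4 and using synthetic frames in place of real video; the paper's edge-based candidate search and contour tracing are not reproduced here.

```python
import cv2
import numpy as np

prev = np.zeros((120, 160), np.uint8)         # previous frame (empty scene)
curr = prev.copy()
cv2.circle(curr, (80, 60), 25, 255, -1)       # the "talker" moved into view

diff = cv2.absdiff(curr, prev)                # difference image between frames
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
talker = max(contours, key=cv2.contourArea)   # assume largest contour = talker

object_mask = np.zeros_like(mask)
cv2.drawContours(object_mask, [talker], -1, 255, -1)   # filled object region

background = np.full_like(curr, 200)          # user-selected replacement
composited = np.where(object_mask == 255, curr, background)
print("talker pixels kept:", int((object_mask == 255).sum()))
```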

Scalable RDFS Reasoning using Logic Programming Approach in a Single Machine (단일머신 환경에서의 논리적 프로그래밍 방식 기반 대용량 RDFS 추론 기법)

  • Jagvaral, Batselem;Kim, Jemin;Lee, Wan-Gon;Park, Young-Tack
    • Journal of KIISE / v.41 no.10 / pp.762-773 / 2014
  • As the web of data increasingly produces large RDFS datasets, building scalable reasoning engines over large triple sets becomes essential. Many studies have used expensive distributed frameworks, such as Hadoop, to reason over large RDFS triple sets. In many cases, however, we only need to handle hundreds of millions of triples; in such cases it is not necessary to deploy an expensive distributed system, because a logic-programming-based reasoner on a single machine can deliver reasoning performance similar to that of a distributed reasoner using Hadoop. In this paper, we propose a scalable RDFS reasoner that uses logic programming methods on a single machine, and we compare our empirical results with those of distributed systems. We show that our logic-programming-based reasoner on a single machine performs as well as an expensive distributed reasoner for up to 200 million RDFS triples. In addition, we designed a metadata structure that decomposes the ontology triples into separate sectors: instead of loading all triples into a single model, we select an appropriate subset of the triples for each ontology reasoning rule. Unification makes it easy to handle the conjunctive queries required for RDFS schema reasoning, so we designed and implemented the RDFS axioms using logic programming unification and efficient conjunctive query handling mechanisms. The throughput of our approach reached 166K triples/sec on LUBM1500, with 200 million triples, which is comparable to the 185K triples/sec of WebPIE, a distributed reasoner using Hadoop and MapReduce. We show that a distributed system is unnecessary for up to 200 million triples and that the performance of a logic-programming-based reasoner on a single machine is comparable to that of an expensive distributed reasoner employing the Hadoop framework.
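
A minimal single-process sketch of forward-chaining RDFS entailment to a fixpoint, applying two representative rules (rdfs9 and rdfs11). The example triples are placeholders, and plain Python sets stand in for the paper's logic-programming unification machinery.

```python
TYPE, SUBCLASS = "rdf:type", "rdfs:subClassOf"

triples = {
    ("ex:Student", SUBCLASS, "ex:Person"),
    ("ex:Person",  SUBCLASS, "ex:Agent"),
    ("ex:alice",   TYPE,     "ex:Student"),
}

changed = True
while changed:                                # iterate until fixpoint
    changed = False
    new = set()
    for (s, p, o) in triples:
        for (s2, p2, o2) in triples:
            if p == SUBCLASS and p2 == SUBCLASS and o == s2:
                new.add((s, SUBCLASS, o2))    # rdfs11: subClassOf transitivity
            if p == TYPE and p2 == SUBCLASS and o == s2:
                new.add((s, TYPE, o2))        # rdfs9: type propagates upward
    if not new <= triples:
        triples |= new
        changed = True

print(sorted(t for t in triples if t[0] == "ex:alice"))
```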

Elicitation of Collective Intelligence by Fuzzy Relational Methodology (퍼지관계 이론에 의한 집단지성의 도출)

  • Joo, Young-Do
    • Journal of Intelligence and Information Systems / v.17 no.1 / pp.17-35 / 2011
  • Collective intelligence is a commons-based product of the collaboration and competition of many peer individuals; in other words, it is the aggregation of individual intelligence that leads to the wisdom of crowds. Recently, the utilization of collective intelligence has become an emerging research area, since it has been adopted as an important principle of Web 2.0, which aims at openness, sharing, and participation. This paper introduces an approach to eliciting collective intelligence through the cognition of the relations and interactions among individual participants. It describes a methodology well suited to evaluating individual intelligence in information retrieval and classification as an application field. The research investigates how to derive and represent such cognitive intelligence from individuals by applying fuzzy relational theory to personal construct theory and the knowledge grid technique. Crucial to this research is formally implementing and interpretively processing the cognitive knowledge of the participants, who form mutual relations and social interactions. What is needed is a technique for analyzing the structure of cognitive intelligence in the form of a Hasse diagram, an instantiation of this perceptive human intelligence. The search for collective intelligence requires a theory of similarity to deal with the underlying problems: clustering individuals into social subgroups by identifying individual intelligence and the commonality among intelligences, and then eliciting the collective intelligence that aggregates the congruence or sharing of all participants in the entire group. Unlike standard approaches to similarity based on statistical techniques, the presented method employs the theory of fuzzy relational products, with the related computational procedures, to cover issues of similarity and dissimilarity.
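
A minimal sketch of a fuzzy relational product: each participant rates items on [0, 1], a Bandler-Kohout-style subproduct with the Lukasiewicz implication gives degrees of inclusion between participants, and the symmetric minimum serves as a similarity for grouping. The ratings below are toy values, not the paper's knowledge-grid data, and this particular product is one plausible choice, not necessarily the paper's.

```python
import numpy as np

R = np.array([[0.9, 0.8, 0.1],     # participant A's ratings of 3 items
              [0.8, 0.9, 0.2],     # participant B
              [0.1, 0.2, 0.9]])    # participant C

def inclusion(a, b):
    """Mean Lukasiewicz implication: degree to which a is included in b."""
    return np.mean(np.minimum(1.0, 1.0 - a + b))

n = len(R)
similarity = np.array([[min(inclusion(R[i], R[j]), inclusion(R[j], R[i]))
                        for j in range(n)] for i in range(n)])
print(np.round(similarity, 2))     # A and B emerge as a similar subgroup
```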

Collaboration and Node Migration Method of Multi-Agent Using Metadata of Naming-Agent (네이밍 에이전트의 메타데이터를 이용한 멀티 에이전트의 협력 및 노드 이주 기법)

  • Kim, Kwang-Jong;Lee, Yon-Sik
    • The KIPS Transactions:PartD / v.11D no.1 / pp.105-114 / 2004
  • In this paper, we propose a method for collaboration among diverse agents in a multi-agent model and describe a node migration algorithm for the Mobile-Agent (MA) that uses the metadata of the Naming-Agent (NA). Collaboration among multiple agents assures the stability of the agent system and provides reliable information retrieval in a distributed environment. The NA, an important part of the multi-agent model, identifies each agent and manages its unique name, and each agent references a specified object by its name. The NA also integrates and manages the naming service by agent classification, such as Client-Push-Agent (CPA), Server-Push-Agent (SPA), and System-Monitoring-Agent (SMA), based on their characteristics. In addition, the NA provides the list of mobile node locations to a specified MA; when the MA migrates through the nodes, the efficiency of node migration can be improved by prioritizing nodes according to hit_count, hit_ratio, node processing time, and network traffic time. Therefore, for the integrated naming service, we design the Naming-Agent and show the structure of its metadata, which consists of fields such as hit_count, hit_ratio, and the total_count of documents. This paper also presents the flow of metadata creation and updating, and the method of hit_count-based node migration through multi-agent collaboration.
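
A minimal sketch of metadata-driven migration ordering. The metadata fields follow those named in the abstract, but the priority formula and all numbers are invented assumptions, not the paper's scheme.

```python
from dataclasses import dataclass

@dataclass
class NodeMetadata:
    host: str
    hit_count: int        # how often past retrievals succeeded here
    hit_ratio: float      # hit_count / total queries to this node
    proc_ms: float        # node processing time
    rtt_ms: float         # network traffic (round-trip) time

    def priority(self):
        # Prefer productive nodes, penalize slow ones (assumed weighting).
        return self.hit_count * self.hit_ratio / (self.proc_ms + self.rtt_ms)

nodes = [
    NodeMetadata("node-a", hit_count=120, hit_ratio=0.8, proc_ms=40, rtt_ms=20),
    NodeMetadata("node-b", hit_count=200, hit_ratio=0.5, proc_ms=90, rtt_ms=60),
    NodeMetadata("node-c", hit_count=15,  hit_ratio=0.9, proc_ms=30, rtt_ms=10),
]
itinerary = sorted(nodes, key=NodeMetadata.priority, reverse=True)
print([n.host for n in itinerary])   # migration order for the Mobile-Agent
```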

An Approach of Scalable SHIF Ontology Reasoning using Spark Framework (Spark 프레임워크를 적용한 대용량 SHIF 온톨로지 추론 기법)

  • Kim, Je-Min;Park, Young-Tack
    • Journal of KIISE / v.42 no.10 / pp.1195-1206 / 2015
  • Managing a knowledge system requires systems that automatically infer and manage scalable knowledge. Most such systems use ontologies in order to exchange knowledge between machines and infer new knowledge; therefore, approaches are needed that infer new knowledge over scalable ontologies. In this paper, we propose an approach to rule-based reasoning for scalable SHIF ontologies in the Spark framework, which works similarly to MapReduce over the distributed memory of a cluster. To perform efficient reasoning in distributed memory, we focus on three areas. First, we define a data structure for splitting the scalable ontology triples into small sets according to each reasoning rule and for loading these triple sets into distributed memory. Second, we define a rule execution order and iteration conditions based on the dependencies and correlations among the SHIF rules. Finally, we explain the operations adapted to execute the rules, which are based on the reasoning algorithms. To evaluate the suggested methods, we perform an experiment against WebPIE, a representative cluster-based ontology reasoner, using the LUBM set, a standard benchmark for evaluating ontology inference and search speed. Consequently, the proposed approach improves throughput by 28,400% (157k triples/sec) over WebPIE (553 triples/sec) on LUBM.
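
A minimal PySpark sketch of one rule executed in this map/join style: subClassOf transitivity as an RDD self-join, iterated until no new pairs appear. This illustrates the general Spark pattern only, not the paper's SHIF rule set, execution order, or data structures, and it assumes a local pyspark installation.

```python
from pyspark import SparkContext

sc = SparkContext("local[*]", "shif-sketch")
sub = sc.parallelize([("Student", "Person"), ("Person", "Agent")])

while True:
    # Key by object, join with pairs keyed by subject: (A<B, B<C) => (A<C).
    derived = (sub.map(lambda p: (p[1], p[0]))
                  .join(sub)                       # (B, (A, C))
                  .map(lambda kv: (kv[1][0], kv[1][1]))
                  .subtract(sub))                  # keep only new pairs
    if derived.isEmpty():
        break                                      # fixpoint reached
    sub = sub.union(derived).distinct().cache()

print(sorted(sub.collect()))
sc.stop()
```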