• Title/Summary/Keyword: Automatic Information Extraction (자동정보 추출)

Syllable-based Korean POS Tagging Based on Combining a Pre-analyzed Dictionary with Machine Learning (기분석사전과 기계학습 방법을 결합한 음절 단위 한국어 품사 태깅)

  • Lee, Chung-Hee;Lim, Joon-Ho;Lim, Soojong;Kim, Hyun-Ki
    • Journal of KIISE / v.43 no.3 / pp.362-369 / 2016
  • This study is directed toward the design of a hybrid algorithm for syllable-based Korean POS tagging. Previous syllable-based work on Korean POS tagging has relied on sequence labeling and has mostly used only a machine learning method. We present a new algorithm that integrates a machine learning method with a pre-analyzed dictionary. We used the Sejong tagged corpus for training and evaluation. While the machine learning engine achieved an eojeol precision of 0.964, the proposed hybrid engine achieved an eojeol precision of 0.990. In a Quiz-domain test, the machine learning engine and the proposed hybrid engine obtained 0.961 and 0.972, respectively. These results indicate that our method is effective for Korean POS tagging.
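
The following is a minimal sketch of the hybrid lookup-then-tag idea described in the abstract above: consult a pre-analyzed dictionary first and fall back to a machine-learned tagger for unseen eojeols. The dictionary entries, tag labels, and the stand-in ml_tagger function are illustrative assumptions, not the paper's actual resources or model.

```python
# Hybrid tagging sketch: dictionary lookup first, ML fallback second.
# All entries and tags below are toy examples, not the Sejong resources.

PRE_ANALYZED = {
    # eojeol -> per-syllable POS tags (toy entries)
    "학교에": ["NNG", "NNG", "JKB"],
    "갔다": ["VV+EP", "EF"],
}

def ml_tagger(eojeol):
    """Stand-in for a trained sequence-labeling model (e.g. a CRF over syllables)."""
    return ["UNK"] * len(eojeol)

def hybrid_tag(sentence):
    tagged = []
    for eojeol in sentence.split():
        tags = PRE_ANALYZED.get(eojeol)   # 1) exact match in the pre-analyzed dictionary
        if tags is None:
            tags = ml_tagger(eojeol)      # 2) fall back to the machine learning engine
        tagged.append((eojeol, tags))
    return tagged

if __name__ == "__main__":
    print(hybrid_tag("학교에 갔다"))
```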

Reconstruction of Partially Damaged face for Improving a Face Recognition Rate (얼굴 인식률 향상을 위한 손상된 얼굴 영역의 복원)

  • 최재영;황승호;김낙빈
    • Journal of Korea Multimedia Society / v.7 no.3 / pp.308-318 / 2004
  • Recognizing damaged facial images is becoming an important issue in the commercialization of automatic face recognition. Methods for recognizing a face in a damaged image fall into two types: recognizing the remaining region after removing the damaged information, or recognizing the whole face after recovering the damaged information. In this paper, we present a reconstruction method that analyzes principal components after extracting the damaged region with a Kohonen network. The proposed algorithm estimates feature vectors of the damaged region using PCA eigenfaces and then reconstructs the damaged image; this also allows reconstruction of untrained images. In tests on artificial images in which the eyes and mouth, which strongly affect face recognition, were damaged, the recognition rate of the proposed method was similar to that of the Kohonen-network-based method and about 11.8% higher than that of the symmetry-based method. For untrained images, our results were about 14% higher than those of the Kohonen method and about 7% higher than those of the symmetry-based method.
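
A small sketch of eigenface-based reconstruction of a damaged region, in the spirit of the abstract above: damaged pixels are repeatedly replaced by their PCA (eigenface) reconstruction while observed pixels are kept. The random training faces, mask, component count, and iteration count are placeholder assumptions, and the Kohonen-network damage detection step is not shown.

```python
# Eigenface inpainting sketch (placeholder data, not the paper's setup).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
train = rng.random((100, 32 * 32))           # stand-in training faces (flattened)
pca = PCA(n_components=20).fit(train)

def reconstruct(damaged, mask, n_iter=10):
    """Iteratively fill masked pixels with their eigenface reconstruction."""
    x = damaged.copy()
    x[mask] = pca.mean_[mask]                # initialise damaged pixels with the mean face
    for _ in range(n_iter):
        proj = pca.inverse_transform(pca.transform(x[None, :]))[0]
        x[mask] = proj[mask]                 # keep observed pixels, update damaged ones
    return x

face = train[0]
mask = np.zeros(32 * 32, dtype=bool)
mask[200:400] = True                         # pretend this band (e.g. the eyes) is damaged
restored = reconstruct(face, mask)
print(float(np.abs(restored[mask] - face[mask]).mean()))
```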

Inter-Process Testing of Parallel Programs based on Message Sequence Charts Specifications (MSC 명세에 기반한 병렬 프로그램의 프로세스 간 테스팅)

  • Bae, Hyun-Seop;Chung, In-Sang;Kim, Hyeon-Soo;Kwon, Yong-Rae;Chung, Young-Sik;Lee, Byung-Sun
    • Journal of KIISE: Software and Applications / v.27 no.2 / pp.108-119 / 2000
  • Most prior work on testing parallel programs has concentrated on guaranteeing reproducibility by employing event traces exercised during executions of a program. Consequently, little work has been done on generating meaningful event sequences, especially from specifications. This paper describes techniques for deriving event sequences from Message Sequence Charts (MSCs), which are widely used in the telecommunication area for their simplicity in specifying the behavior of a program. To derive event sequences from MSCs, we have to uncover the causality relations among events that are embedded implicitly in the MSCs. To attain this goal, we adapt vector time stamping, which has previously been used to determine the ordering of events taking place during an execution of interacting processes. Valid event sequences satisfying the causality relations are then generated according to the interleaving rules suggested in this paper. The feasibility of our testing technique was investigated using a phone conversation example. In addition, we discuss the experimental results gained from the example and how to combine various test criteria into our testing environment.
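
A sketch of the vector time stamping step mentioned above, assuming the MSC events are given in one valid interleaving as (process, kind, message) triples; each process ticks its own clock component on every event and merges the sender's stamp on a receive, so the happened-before (causality) relation can be read off the stamps. The trace and helper names are illustrative, not the paper's notation.

```python
# Vector clocks over a toy event trace (illustrative, not the paper's MSC format).

def vector_stamps(events, pids):
    idx = {p: i for i, p in enumerate(pids)}
    clock = {p: [0] * len(pids) for p in pids}
    sent = {}                                # msg_id -> stamp taken at send time
    stamps = []
    for pid, kind, msg in events:
        if kind == "recv":                   # merge the sender's stamp first
            clock[pid] = [max(a, b) for a, b in zip(clock[pid], sent[msg])]
        clock[pid][idx[pid]] += 1            # every event ticks the local component
        stamp = tuple(clock[pid])
        if kind == "send":
            sent[msg] = stamp
        stamps.append((pid, kind, msg, stamp))
    return stamps

def happened_before(s1, s2):
    """True iff the event stamped s1 causally precedes the event stamped s2."""
    return all(a <= b for a, b in zip(s1, s2)) and s1 != s2

trace = [("P1", "send", "m1"), ("P2", "recv", "m1"),
         ("P2", "send", "m2"), ("P1", "recv", "m2")]
for e in vector_stamps(trace, ["P1", "P2"]):
    print(e)
print(happened_before((1, 0), (1, 2)))       # send of m1 precedes send of m2
```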

Decreasing Parameter Decision in Edge Strength Hough Transform (경계선 강도 허프 변환에서 감쇄 파라미터의 결정)

  • Woo, Young-Woon;Heo, Gyeong-Yong;Kim, Kwang-Baek
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2007.06a / pp.728-731 / 2007
  • Though the Hough transform is a well-known method for detecting analytical shapes represented by a number of free parameters, its basic property, the one-to-many mapping from an image space to a Hough space, causes an innate problem: sensitivity to noise. To remedy this problem, the Edge Strength Hough Transform (ESHT) was proposed and shown to reduce the noise sensitivity. However, the performance of ESHT depends on the sizes of the Hough space and the image, as well as on some other parameters that play an important role in ESHT and have had to be decided experimentally. In this paper, we derive a formula for deciding the decreasing parameter. Using the derived formula, the decreasing parameter can be decided only from pre-determined values, the sizes of the Hough space and the image, which makes it possible to decide it automatically.
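
The paper's derived formula for the decreasing parameter is not given in the abstract and is not reproduced here. The sketch below only illustrates the edge-strength-weighted line voting that ESHT is generally understood to build on, so that the roles of the Hough-space size and image size are visible; gradient magnitude stands in for edge strength and all sizes are placeholders.

```python
# Edge-strength-weighted Hough voting for lines (illustrative sizes only).
import numpy as np

def weighted_hough_lines(edge_strength, n_theta=180, n_rho=200):
    h, w = edge_strength.shape
    diag = np.hypot(h, w)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta))
    ys, xs = np.nonzero(edge_strength)
    for y, x in zip(ys, xs):
        rho = x * np.cos(thetas) + y * np.sin(thetas)             # rho for every theta
        rho_idx = np.round((rho + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        acc[rho_idx, np.arange(n_theta)] += edge_strength[y, x]   # weighted vote
    return acc, thetas

img = np.zeros((64, 64))
img[20, 10:50] = 1.0                          # a horizontal edge with unit strength
acc, thetas = weighted_hough_lines(img)
rho_i, theta_i = np.unravel_index(acc.argmax(), acc.shape)
print("strongest line angle (deg):", round(np.degrees(thetas[theta_i]), 1))
```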

A Best View Selection Method in Videos of Interested Player Captured by Multiple Cameras (다중 카메라로 관심선수를 촬영한 동영상에서 베스트 뷰 추출방법)

  • Hong, Hotak;Um, Gimun;Nang, Jongho
    • Journal of KIISE / v.44 no.12 / pp.1319-1332 / 2017
  • In recent years, the number of video cameras used to record and broadcast live sporting events has increased, and selecting the shots with the best view from multiple cameras has become an actively researched topic. Existing approaches have assumed that the background of the video is fixed. This paper instead proposes a best view selection method for cases in which the background is not fixed. In our study, an athlete of interest was recorded in motion by multiple cameras. Each frame from all cameras is then analyzed to establish rules for selecting the best view. The frames selected by our system are compared with those that human viewers indicated as the most desirable. For the evaluation, we asked each of 20 non-specialists to pick the best and worst views. The set of views most often selected as best coincided with 54.5% of the frames chosen by our proposed method, while the set of views most often selected as worst coincided with only 9% of the best-view shots chosen by our method, demonstrating the efficacy of the proposed method.
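
A toy sketch of rule-based best-view selection across cameras at one time instant: each camera's frame is scored from simple cues (how large and how centered the detected player is, and whether it is occluded), and the highest-scoring camera wins. The cues, weights, and bounding boxes are illustrative assumptions, not the rules established in the paper.

```python
# Best-view scoring sketch with made-up cues and weights.

def view_score(bbox, frame_w, frame_h, occluded):
    x, y, w, h = bbox
    size = (w * h) / (frame_w * frame_h)                        # bigger player -> better
    cx, cy = x + w / 2, y + h / 2
    center = 1 - (abs(cx - frame_w / 2) / (frame_w / 2) +
                  abs(cy - frame_h / 2) / (frame_h / 2)) / 2    # closer to center -> better
    return 0.6 * size + 0.4 * center - (0.5 if occluded else 0.0)

def best_view(candidates, frame_w=1920, frame_h=1080):
    """candidates: {camera_id: (bbox, occluded)} for the same time instant."""
    return max(candidates,
               key=lambda cam: view_score(candidates[cam][0], frame_w, frame_h,
                                          candidates[cam][1]))

cams = {
    "cam1": ((800, 400, 300, 500), False),   # large, centered, visible
    "cam2": ((100, 100, 120, 200), False),   # small, off-center
    "cam3": ((850, 420, 320, 520), True),    # large but occluded
}
print(best_view(cams))                       # expected: cam1
```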

Functional Test Automation for Android GUI Widgets Using XML (XML을 이용한 안드로이드 GUI 위젯의 기능 테스트 자동화)

  • Ma, Yingzhe;Choi, Eun-Man
    • The KIPS Transactions: Part D / v.19D no.2 / pp.203-210 / 2012
  • Capture-and-replay is a common technique for automated GUI testing. However, applications on the Android platform cannot use capture-and-replay directly, because the testing framework already set up and technically supported by Google lacks a way to automatically link GUI elements to the actions that handle widget events. Without capture-and-replay testing tools, testers must design and implement testing scenarios according to the specification and link every GUI element to its event handling code by hand. This paper proposes an approach to automatically testing Android GUI widgets that improves and optimizes the common capture-and-replay technique. XML is used to extract GUI elements from applications by tracing the actions that handle widget events. After click events are traced through monitoring in the capture phase, test cases are created in the replay phase by communicating the status of the activated widgets through API events.
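
A sketch of the XML step described above: parse an Android layout file, extract the widgets and the click handlers they declare, and turn each pair into a test-case stub. The layout string, handler names, and test-step format are illustrative assumptions, not the paper's tool.

```python
# Extract widgets and click handlers from an Android layout XML (toy layout).
import xml.etree.ElementTree as ET

ANDROID = "{http://schemas.android.com/apk/res/android}"

layout = """<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android">
  <Button android:id="@+id/login"  android:onClick="onLogin"  />
  <Button android:id="@+id/cancel" android:onClick="onCancel" />
  <EditText android:id="@+id/username" />
</LinearLayout>"""

def extract_widgets(xml_text):
    root = ET.fromstring(xml_text)
    for elem in root.iter():
        wid = elem.get(ANDROID + "id")
        handler = elem.get(ANDROID + "onClick")
        if wid:
            yield elem.tag, wid, handler

for tag, wid, handler in extract_widgets(layout):
    step = f"click {wid} -> expect {handler}" if handler else f"inspect {wid}"
    print(f"[{tag}] {step}")
```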

Automatic Detection of Highlights in Soccer videos based on analysis of scene structure (축구 동영상에서의 장면 구조 분석에 기반한 자동적인 하이라이트 장면 검출)

  • Park, Ki-Tae;Moon, Young-Shik
    • The KIPS Transactions: Part B / v.14B no.1 s.111 / pp.1-4 / 2007
  • In this paper, we propose an efficient scheme for automatically detecting highlight scenes in soccer videos. Highlights are defined as shooting scenes and goal scenes. Through analysis of soccer videos, we notice that most highlight scenes occur around the goal post area, and that the TV camera zooms in on a soccer player or on spectators after a highlight scene. Detection of highlight scenes in soccer videos consists of three steps. The first step is the extraction of the playing field using a statistical threshold. The second step is the detection of the goal posts. In the final step, we detect zooming in on a soccer player or spectators by connected component labeling of the non-playing-field region. To evaluate the performance of our method, the precision and recall are computed. Experimental results show the effectiveness of the proposed method, with 95.2% precision and 85.4% recall.
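
A rough sketch of two of the three steps above: mask the green playing field with a simple color threshold, then label the connected non-field regions, where one dominant non-field component is taken as a hint of a zoomed-in player or spectator shot. The color thresholds and zoom ratio are placeholder assumptions, not the paper's statistical threshold.

```python
# Field masking + connected-component zoom hint (placeholder thresholds).
import numpy as np
from scipy import ndimage

def field_mask(frame_rgb):
    r, g, b = frame_rgb[..., 0], frame_rgb[..., 1], frame_rgb[..., 2]
    return (g > 100) & (g > r + 20) & (g > b + 20)      # crude "green grass" test

def looks_zoomed(frame_rgb, ratio=0.5):
    non_field = ~field_mask(frame_rgb)
    labels, n = ndimage.label(non_field)
    if n == 0:
        return False
    sizes = ndimage.sum(non_field, labels, index=range(1, n + 1))
    return sizes.max() / non_field.size > ratio          # one dominant non-field blob

frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[..., 1] = 180                                      # mostly green field
frame[10:110, 20:150] = (200, 150, 120)                  # a big close-up (non-field) region
print(looks_zoomed(frame))
```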

Detection of Music Mood for Context-aware Music Recommendation (상황인지 음악추천을 위한 음악 분위기 검출)

  • Lee, Jong-In;Yeo, Dong-Gyu;Kim, Byeong-Man
    • The KIPS Transactions: Part B / v.17B no.4 / pp.263-274 / 2010
  • To provide a context-aware music recommendation service, we first need to identify the music mood that a user prefers depending on his or her situation or context. Among various music characteristics, music mood has a close relation with people's emotions. Based on this relationship, some researchers have studied music mood detection by manually selecting a representative segment of a piece of music and classifying its mood. Although such approaches show good performance on music mood classification, they are difficult to apply to new music because of the manual intervention. Moreover, mood detection is harder still because the mood usually varies over time. To cope with these problems, this paper presents an automatic method to classify music mood. First, a whole piece of music is segmented into several groups with similar characteristics using structural information. Then the mood of each segment is detected, where each individual's mood preference is modeled by regression based on Thayer's two-dimensional mood model. Experimental results show that the proposed method achieves 80% or higher accuracy.
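
A sketch of the per-segment mood step: a regressor maps segment-level audio features to Thayer's two mood dimensions (arousal and valence), and each segment is then placed in one of the four quadrants. The features, training targets, and quadrant names are stand-in assumptions for real extracted audio features and user ratings.

```python
# Regression onto Thayer's arousal/valence plane (random stand-in data).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X_train = rng.random((200, 8))               # 8 audio features per segment (stand-in)
y_train = rng.uniform(-1, 1, (200, 2))       # (arousal, valence) ratings (stand-in)

model = LinearRegression().fit(X_train, y_train)

def mood_quadrant(arousal, valence):
    if arousal >= 0:
        return "exuberant" if valence >= 0 else "anxious"
    return "contentment" if valence >= 0 else "depression"

segment_features = rng.random((3, 8))        # three segments of one song
for arousal, valence in model.predict(segment_features):
    print(mood_quadrant(arousal, valence))
```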

Improved Image Restoration Algorithm about Vehicle Camera for Corresponding of Harsh Conditions (가혹한 조건에 대응하기 위한 차량용 카메라의 개선된 영상복원 알고리즘)

  • Jang, Young-Min;Cho, Sang-Bock;Lee, Jong-Hwa
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.2 / pp.114-123 / 2014
  • A vehicle black box (Event Data Recorder, EDR) only captures the general surrounding environment of the road. In addition, a general EDR has difficulty recording images under sudden illumination changes, and its lens exhibits severe distortion; as a result, a general EDR may not provide clues about the circumstances of an accident. To solve these problems, we first estimate the Normalized Luminance Descriptor (NLD) and Normalized Contrast Descriptor (NCD) values and correct illumination changes using the Normalized Image Quality (NIQ). Second, we correct the lens distortion using the Field of View (FOV) model, based on the design method of the fisheye lens. As a result, we propose an integrated algorithm that corrects image distortions by applying gamma correction and lens correction in parallel.
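
A compact sketch of the two corrections run side by side: a gamma curve to compensate sudden illumination changes, and the standard FOV (field-of-view) fisheye model to undo radial lens distortion. The gamma value, the FOV angle omega, and the use of the textbook FOV radius formula are assumptions; the paper's NLD/NCD/NIQ estimation and calibrated parameters are not shown.

```python
# Gamma correction + FOV-model radius undistortion (placeholder parameters).
import numpy as np

def gamma_correct(img, gamma=0.6):
    """img in [0, 255]; gamma < 1 brightens dark frames, gamma > 1 darkens bright ones."""
    return (255.0 * (img / 255.0) ** gamma).astype(np.uint8)

def fov_undistort_radius(r_d, omega=1.0):
    """Undistorted radius for a distorted radius r_d under the FOV fisheye model."""
    return np.tan(r_d * omega) / (2.0 * np.tan(omega / 2.0))

dark = np.full((4, 4), 40, dtype=np.uint8)
print(gamma_correct(dark)[0, 0])             # brighter than the original 40
print(round(float(fov_undistort_radius(0.3)), 4))
```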

Enhancing Red Tide Image Recognition using NMF and Image Revision (NMF와 이미지 보정을 이용한 적조 이미지 인식 향상)

  • Park, Sun;Lee, Seong-Ro
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.2 / pp.331-336 / 2012
  • Red tide is a temporary natural phenomenon involving harmful algal blooms (HABs), accompanied by a change of sea color from normal to red or reddish brown, which harms coastal environments and marine ecosystems. HABs have inflicted massive mortality on finfish and shellfish, damaging the economy of fisheries almost every year since 1990 in South Korea. There have been many studies on red tide because of the increasing damage it causes to the fishing and aquaculture industries. However, domestic research on automatic red tide image classification is still insufficient. In particular, extracting matching center features for recognizing algae image objects is difficult, because the more than 200 species of algae in the world differ in size and features, and previous studies used only a few types of red tide algae for image classification. In this paper, we propose a red tide image recognition method that uses NMF and revision of the rotation angle to enhance recognition of red tide algae images.
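
A minimal sketch of the NMF part: learn non-negative basis images from a set of toy algae images, encode a query image over that basis, and match it to the nearest training encoding. The random data, label assignment, and nearest-neighbour matching are stand-in assumptions; the paper's rotation-angle revision step is not shown.

```python
# NMF-based recognition sketch with random stand-in "algae" images.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)
train = rng.random((30, 16 * 16))            # 30 flattened toy images
labels = np.repeat(np.arange(3), 10)         # 3 toy species, 10 images each

nmf = NMF(n_components=8, init="nndsvda", max_iter=500)
codes_train = nmf.fit_transform(train)       # each row: encoding over the learned basis

def recognize(image):
    code = nmf.transform(image[None, :])
    dists = np.linalg.norm(codes_train - code, axis=1)
    return labels[int(dists.argmin())]       # label of the nearest training encoding

query = train[12] + 0.01 * rng.random(16 * 16)
print("predicted species:", recognize(query))
```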