• Title/Abstract/Keyword: search word

Search results: 384 items (processing time: 0.025 s)

Needleman-Wunsch 알고리즘을 이용한 유사예문 검색 (Searching Similar Example-Sentences Using the Needleman-Wunsch Algorithm)

  • 김동주;김한우
    • 한국컴퓨터정보학회논문지
    • /
    • Vol. 11, No. 4
    • /
    • pp.181-188
    • /
    • 2006
  • In this paper, we propose a similar example-sentence retrieval algorithm for a translation support system. Similar example-sentence retrieval finds example sentences that are structurally and semantically similar to a query sentence, and is a core component of a translation support system. The proposed algorithm is based on the Needleman-Wunsch algorithm, used in bioinformatics to measure the similarity between the amino acid sequences of two proteins. If the Needleman-Wunsch algorithm, which uses only surface information, is applied to sentence comparison as-is, it is sensitive to word inflection and is therefore likely to miss semantically similar sentences. We therefore use the lemma information of each word in addition to the surface information, and use part-of-speech information to reflect the degree of structural similarity between sentences. That is, this paper proposes a sentence comparison measure that combines surface, lemma, and part-of-speech information. Using this measure, we retrieve similar sentences and identify the aligned sub-pairs that contribute to the similarity, presenting them as the result. The proposed algorithm showed very good performance on data from the telecommunications domain.

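The Needleman-Wunsch dynamic program that this abstract adapts can be sketched as follows. This is a minimal illustration over word tokens with illustrative match/mismatch/gap scores, not the paper's combined surface/lemma/POS measure:

```python
def needleman_wunsch(a, b, match=2, mismatch=-1, gap=-2):
    """Global alignment score between two token sequences (Needleman-Wunsch)."""
    n, m = len(a), len(b)
    # score[i][j] = best score aligning a[:i] with b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + s,  # align a[i-1] with b[j-1]
                              score[i - 1][j] + gap,    # gap in b
                              score[i][j - 1] + gap)    # gap in a
    return score[n][m]

q = "the quick brown fox".split()
e = "the quick red fox".split()
print(needleman_wunsch(q, e))  # → 5 (three matches, one mismatch)
```

Backtracking through the same table recovers the aligned word pairs, which is how the contributing sub-pairs described in the abstract could be presented.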

질의기반 사용자 프로파일을 이용하는 개인화 웹 검색 (Personalized Web Search using Query based User Profile)

  • 윤성희
    • 한국산학기술학회논문지
    • /
    • Vol. 17, No. 2
    • /
    • pp.690-696
    • /
    • 2016
  • A search engine that retrieves documents by checking the morphological match between the user's query words and the words contained in web documents has difficulty producing results that reflect each user's personal interests. This paper proposes a personalized web search method that identifies a user's individual interests and retrieves documents that match the intent of the query. The performance of personalized search depends on the strategy for building a good user profile that accurately represents the user's personal interests. In this work, the user profile is a database in which the user's recently entered query terms and the topic words appearing in documents clicked during past searches are registered, together with weights reflecting their occurrence frequency. In particular, to determine the exact sense of an ambiguous query word, semantic similarity to the words registered in the profile is computed based on WordNet. An extension consisting of a query-expansion module and a rank-recalculation module was added on the user side of an existing web search system for comparative experiments; personalized web search using the proposed method achieved 92% precision and 82% recall, notably for the top 10 result documents, demonstrating improved performance.
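The profile-based re-ranking step described above can be illustrated with a minimal sketch. The profile contents, weights, and scoring function here are illustrative assumptions, not the paper's actual formulation (which additionally uses WordNet-based sense similarity):

```python
def profile_score(doc_terms, profile):
    """Overlap score between a result document and the user profile:
    the sum of profile weights for distinct terms the document contains."""
    return sum(profile.get(t, 0.0) for t in set(doc_terms))

def rerank(results, profile):
    """Re-rank results by descending profile score; the stable sort keeps
    the engine's original order for tied documents."""
    return sorted(results, key=lambda doc: profile_score(doc, profile), reverse=True)

# Hypothetical profile built from past queries/clicks (term -> weight).
profile = {"python": 3.0, "programming": 2.0, "snake": 0.5}
results = [["snake", "reptile"], ["python", "programming", "tutorial"]]
print(rerank(results, profile))
# → [['python', 'programming', 'tutorial'], ['snake', 'reptile']]
```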

Effective Thematic Words Extraction from a Book using Compound Noun Phrase Synthesis Method

  • Ahn, Hee-Jeong;Kim, Kee-Won;Kim, Seung-Hoon
    • 한국컴퓨터정보학회논문지
    • /
    • Vol. 22, No. 3
    • /
    • pp.107-113
    • /
    • 2017
  • Most online bookstores provide users with bibliographic information about a book rather than concrete information such as thematic words and atmosphere. Thematic words, in particular, help users understand a book and browse widely. In this paper, we propose an efficient method for extracting thematic words from book text by applying a compound noun and noun phrase synthesis method. Compound nouns represent the characteristics of a book in more detail than single nouns. The proposed method extracts thematic words from book text by recognizing two types of noun phrases: single nouns, and compound nouns combined from single nouns. The recognized single nouns, compound nouns, and noun phrases are scored with TF-IDF weights and extracted as thematic words. In addition, this paper suggests counting the frequencies of the subject, object, and other roles separately, rather than simply summing the frequencies of all nouns in the TF-IDF calculation. Experiments were carried out on books in the field of economics and management, and the extracted thematic words were verified through a survey and a book search. Nine out of the ten experimental results used in this study indicate that the thematic words extracted by the proposed method are more effective for understanding the content. It is also confirmed that the thematic words extracted by the proposed method yield better book search results.
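The role-separated frequency counting proposed above can be sketched roughly as follows. The role weights and the +1 smoothing in the IDF term are illustrative choices, not the paper's exact formula:

```python
import math

def role_weighted_tf(term_roles, role_weight):
    """Term frequency where each occurrence is weighted by its grammatical
    role, so subject mentions can count more than object or other mentions."""
    tf = {}
    for term, role in term_roles:
        tf[term] = tf.get(term, 0.0) + role_weight.get(role, 1.0)
    return tf

def tf_idf(tf, df, n_docs):
    """Plain TF-IDF applied on top of the role-weighted frequencies."""
    return {t: w * math.log(n_docs / (1 + df.get(t, 0))) for t, w in tf.items()}

# Hypothetical role weights: a subject mention counts double.
weights = {"subject": 2.0, "object": 1.0, "other": 0.5}
occurrences = [("market", "subject"), ("market", "other"), ("price", "object")]
tf = role_weighted_tf(occurrences, weights)
print(tf)  # → {'market': 2.5, 'price': 1.0}
scores = tf_idf(tf, {"market": 1, "price": 5}, n_docs=10)
```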

Language-Independent Word Acquisition Method Using a State-Transition Model

  • Xu, Bin;Yamagishi, Naohide;Suzuki, Makoto;Goto, Masayuki
    • Industrial Engineering and Management Systems
    • /
    • Vol. 15, No. 3
    • /
    • pp.224-230
    • /
    • 2016
  • The use of new words, spoken-language forms, and abbreviations on the Internet is extensive, which makes it very difficult to automatically acquire words for the purpose of analyzing Internet content. In a previous study, we proposed a method for Japanese word segmentation using character N-grams. That method is based on a simple state-transition model established under the assumption that the input document is described by four states (denoted A, B, C, and D) specified beforehand: state A represents words (nouns, verbs, etc.); state B represents statement separators (punctuation marks, conjunctions, etc.); state C represents postpositions (words that follow nouns); and state D represents prepositions (words that precede nouns). According to this state-transition model, based on the states applied to each pseudo-word, we search the document from beginning to end for an accessible pattern; the transitions detected during this search yield words. In the present paper, we perform experiments with the proposed word acquisition algorithm on Japanese and Chinese newspaper articles, obtained from Japan's Kyoto University and the Chinese People's Daily. The proposed method does not depend on language structure: if text documents are expressed in Unicode, the same algorithm can acquire words in both Japanese and Chinese, neither of which puts spaces between words. Hence, we demonstrate that the proposed method is language independent.
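The four-state idea can be illustrated minimally. Assuming characters have already been mapped to states (the part the paper learns from character N-grams), acquired words are the maximal runs of state A between separator, postposition, or preposition states:

```python
def acquire_words(chars, states):
    """Collect maximal runs of state-A characters as acquired words.
    The character-to-state assignment is assumed as input here."""
    words, current = [], ""
    for ch, st in zip(chars, states):
        if st == "A":          # inside a word: extend the current run
            current += ch
        elif current:          # left state A: flush the finished word
            words.append(current)
            current = ""
    if current:                # flush a word that ends the document
        words.append(current)
    return words

chars  = list("XYZ,PQ")
states = ["A", "A", "A", "B", "A", "A"]
print(acquire_words(chars, states))  # → ['XYZ', 'PQ']
```

Because the procedure only looks at states, not at any particular script, the same loop applies unchanged to Japanese and Chinese text, which is the language-independence claim of the abstract.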

지식베이스를 이용한 임베디드용 연속음성인식의 어휘 적용률 개선 (Vocabulary Coverage Improvement for Embedded Continuous Speech Recognition Using Knowledgebase)

  • 김광호;임민규;김지환
    • 대한음성학회지:말소리
    • /
    • Vol. 68
    • /
    • pp.115-126
    • /
    • 2008
  • In this paper, we propose a vocabulary coverage improvement method for embedded continuous speech recognition (CSR) using a knowledgebase. A vocabulary in CSR is normally derived from a word frequency list, so vocabulary coverage depends on the corpus. In previous research, we presented an improved way of generating vocabularies using a part-of-speech (POS) tagged corpus: we analyzed all words paired with 101 of 152 POS tags and decided on a set of words that must be included in vocabularies of any size. However, for the other 51 POS tags (e.g. nouns, verbs), the inclusion of words paired with those POS tags is still based on word frequencies counted on a corpus. In this paper, we propose a corpus-independent word inclusion method for noun-, verb-, and named entity (NE)-related POS tags using a knowledgebase. For noun-related POS tags, we generate synonym groups and analyze their relative importance using Google search. We then categorize verbs by lemma and analyze the relative importance of each lemma from a pre-analyzed statistic for verbs. We determine the inclusion order of NEs through Google search. The proposed method shows better coverage on the test short message service (SMS) text corpus.

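The coverage measure being improved above is simple to state: the fraction of corpus tokens that appear in a given vocabulary (the out-of-vocabulary rate is its complement). A minimal sketch:

```python
def coverage(vocab, tokens):
    """Fraction of corpus tokens covered by the vocabulary.
    OOV rate = 1 - coverage."""
    if not tokens:
        return 0.0
    covered = sum(1 for t in tokens if t in vocab)
    return covered / len(tokens)

# Toy SMS-style example; vocabulary and tokens are illustrative.
vocab = {"send", "message", "to", "mom"}
tokens = ["send", "a", "message", "to", "mom"]
print(coverage(vocab, tokens))  # → 0.8
```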

CONTINUOUS DIGIT RECOGNITION FOR A REAL-TIME VOICE DIALING SYSTEM USING DISCRETE HIDDEN MARKOV MODELS

  • Choi, S.H.;Hong, H.J.;Lee, S.W.;Kim, H.K.;Oh, K.C.;Kim, K.C.;Lee, H.S.
    • 한국음향학회:학술대회논문집
    • /
    • 한국음향학회 1994 Fifth Western Pacific Regional Acoustics Conference, Seoul, Korea
    • /
    • pp.1027-1032
    • /
    • 1994
  • This paper introduces an interword modeling and Viterbi search method for continuous speech recognition. We also describe the development of a real-time voice dialing system that can recognize around one hundred words and continuous digits in speaker-independent mode. For continuous digit recognition, between-word units have been proposed to provide a more precise representation of word junctures. The best path through the HMM is found by the Viterbi search algorithm, from which the digit sequence is recognized. Simulation results show that interword modeling using context-dependent between-word units provides better recognition rates than pause modeling using a context-independent pause unit. The voice dialing system is implemented on a DSP board with a telephone interface plugged into an IBM PC AT/486.

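The Viterbi search mentioned above is the standard best-path dynamic program over HMM states. A minimal sketch for a discrete-observation HMM, with toy states and probabilities rather than the system's actual digit models:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state sequence for a discrete-observation HMM.
    V[t][s] = (best probability of reaching s at time t, predecessor state)."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        V.append({})
        for s in states:
            best_prev = max(states, key=lambda p: V[-2][p][0] * trans_p[p][s])
            V[-1][s] = (V[-2][best_prev][0] * trans_p[best_prev][s] * emit_p[s][o],
                        best_prev)
    # Backtrack from the best final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

# Toy two-model example with made-up acoustic symbols "lo"/"hi".
states = ["d1", "d2"]
start = {"d1": 0.5, "d2": 0.5}
trans = {"d1": {"d1": 0.7, "d2": 0.3}, "d2": {"d1": 0.3, "d2": 0.7}}
emit  = {"d1": {"lo": 0.9, "hi": 0.1}, "d2": {"lo": 0.2, "hi": 0.8}}
print(viterbi(["lo", "lo", "hi"], states, start, trans, emit))
# → ['d1', 'd1', 'd2']
```

In a real recognizer the products would be computed as log-probability sums to avoid underflow, and the state space would include the between-word juncture units the abstract describes.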

한국어 음성인식 플랫폼 (ECHOS) 개발 (Development of a Korean Speech Recognition Platform (ECHOS))

  • 권오욱;권석봉;장규철;윤성락;김용래;장광동;김회린;유창동;김봉완;이용주
    • 한국음향학회지
    • /
    • Vol. 24, No. 8
    • /
    • pp.498-504
    • /
    • 2005
  • We introduce ECHOS, a Korean speech recognition platform developed for education and research purposes. ECHOS, which provides the basic modules for speech recognition, has a simple and easy-to-understand object-oriented architecture and is implemented in C++ using the Standard Template Library (STL). The input is digital speech data sampled at 8 or 16 kHz; the outputs are the 1-best recognition result, the N-best recognition results, and a word graph. ECHOS consists of MFCC and PLP feature extraction, HMM-based acoustic models, n-gram language models, and search algorithms supporting finite-state networks (FSN) and lexical trees, and it can handle tasks ranging from isolated word recognition to large-vocabulary continuous speech recognition. To verify the operation of the platform, we compare the performance of ECHOS with that of the hidden Markov model toolkit (HTK). On an FSN command recognition task, ECHOS shows a recognition rate nearly identical to HTK's, while recognition time increases by about a factor of two due to the object-oriented implementation. On an 8,000-word continuous speech recognition task, using a lexical tree search algorithm unlike HTK, the word error rate increases by 40% but the recognition time is halved.
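The lexical tree behind the large-vocabulary speedup reported above is essentially a phone-level prefix tree. This sketch, with a made-up toy lexicon, shows the key idea that shared pronunciation prefixes are stored (and searched) once:

```python
def build_lexical_tree(lexicon):
    """Build a prefix (lexical) tree over phone sequences. Words sharing a
    phone prefix share the same path, which shrinks the search space."""
    root = {}
    for word, phones in lexicon.items():
        node = root
        for ph in phones:
            node = node.setdefault(ph, {})
        node["#"] = word  # word-end marker
    return root

# Hypothetical mini-lexicon (word -> phone sequence).
lexicon = {
    "sea":  ["s", "iy"],
    "seed": ["s", "iy", "d"],
    "bee":  ["b", "iy"],
}
tree = build_lexical_tree(lexicon)
print(tree["s"]["iy"]["#"])  # → sea  ("seed" continues down the same branch)
```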

새로운 Ternary CAM을 이용한 고속 허프만 디코더 설계 (A high speed huffman decoder using new ternary CAM)

  • 이광진;김상훈;이주석;박노경;차균현
    • 한국통신학회논문지
    • /
    • Vol. 21, No. 7
    • /
    • pp.1716-1725
    • /
    • 1996
  • In this paper, the Huffman decoder, which is part of the decoder in the JPEG standard, is designed using a new ternary CAM. First, a 256-word × 16-bit all-parallel ternary CAM system is designed and verified using SPICE and CADENCE Verilog-XL, and the verified ternary CAM is then applied to a new Huffman decoder architecture for JPEG, verifying the performance of the designed CAM cell and its block. The new ternary CAM has a wide range of applications because it provides search-data mask and stored-data mask functions, which enable bit-wise search and don't-care state storage. When the CAM is used as the Huffman lookup table in the Huffman decoder, the CAM is partitioned according to decoding symbol frequency. This partitioned-CAM scheme for the Huffman table overcomes the drawbacks of a fully parallel CAM, namely high power consumption and load, so both operation speed and power consumption are improved.

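The table lookup the CAM performs in hardware can be mimicked in software. This sequential sketch consumes bits until the accumulated prefix matches an entry in a hypothetical prefix-free code table, whereas the paper's CAM resolves the match in a single parallel lookup:

```python
def huffman_decode(bits, code_table):
    """Decode a bit string against a prefix-free code table by growing a
    prefix until it matches an entry. (Hardware CAM: one parallel match.)"""
    out, prefix = [], ""
    for b in bits:
        prefix += b
        if prefix in code_table:   # prefix-free codes make this unambiguous
            out.append(code_table[prefix])
            prefix = ""
    return out

# Hypothetical table: shorter codes for more frequent symbols.
table = {"0": "A", "10": "B", "110": "C", "111": "D"}
print("".join(huffman_decode("0101100111", table)))  # → ABCAD
```

Partitioning the table by symbol frequency, as the abstract describes, means the most common symbols are matched in the smallest (and cheapest-to-search) CAM block.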

중학교 1학년 학생들의 농도 문장제 해결력에 대한 분석 (An Analysis of Density Word Problem Solving Ability of Seventh Graders)

  • 박정아;신현용
    • 한국수학교육학회지시리즈A:수학교육
    • /
    • Vol. 44, No. 4
    • /
    • pp.525-534
    • /
    • 2005
  • The purpose of this study is to analyze the difficulties seventh graders experience in solving density word problems and to search for ways to improve their ability to solve them. The results of this study could help teachers diagnose students' difficulties with density word problems and remedy their understanding of the concept of density, algebraic expressions, and algebraic symbols.


Hierarchical Structure in Semantic Networks of Japanese Word Associations

  • Miyake, Maki;Joyce, Terry;Jung, Jae-Young;Akama, Hiroyuki
    • 한국언어정보학회:학술대회논문집
    • /
    • 한국언어정보학회 2007 Annual Conference
    • /
    • pp.321-329
    • /
    • 2007
  • This paper reports on the application of network analysis approaches to investigating the characteristics of graph representations of Japanese word associations. Two semantic networks are constructed from two separate Japanese word association databases. The basic statistical features of the networks indicate that they have scale-free and small-world properties and that they exhibit hierarchical organization. A graph clustering method is also applied to the networks with the objective of generating hierarchical structures within the semantic networks, and the method is shown to be an efficient tool for analyzing large-scale structures within corpora. As applications of the network clustering results, we briefly introduce two web-based systems: the first is a search system that highlights the possible relations between words according to association type, while the second presents the hierarchical architecture of a semantic network. The systems realize dynamic representations of network structures based on the relationships between words and concepts.

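The small-world property reported for these association networks is typically diagnosed via the clustering coefficient (high local clustering alongside short average path lengths). A stdlib-only sketch on a tiny, made-up word-association graph:

```python
def clustering_coefficient(adj, v):
    """Local clustering coefficient of node v: the fraction of v's neighbor
    pairs that are themselves connected."""
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2 * links / (k * (k - 1))

# Hypothetical undirected word-association graph (node -> neighbor set).
adj = {
    "dog":  {"cat", "bone", "pet"},
    "cat":  {"dog", "pet"},
    "pet":  {"dog", "cat"},
    "bone": {"dog"},
}
print(clustering_coefficient(adj, "dog"))  # 1/3 of neighbor pairs are linked
```

Averaging this quantity over all nodes and comparing it with a random graph of the same size and density is the usual small-world test; the hierarchical organization the abstract mentions shows up as the coefficient decreasing with node degree.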