• Title/Abstract/Keywords: word context

353 search results (processing time 0.026 s)

The Guessing Model Revisited: A Case Study of a Korean Young Learner

  • Yim, Su Yon
    • 영어어문교육 / Vol. 17, No. 3 / pp.273-290 / 2011
  • This paper presents a case study of one Korean primary school student and the people around him, exploring the English reading process of a young Korean EFL learner and the social context in which his reading takes place. Six participants were included in the study (one primary school student and five adults). The student was asked to read a text in English and translate what he read into Korean, and the teacher participants were asked to listen to the student's reading. Semi-structured interviews were used to collect data from the student as well as the five adult participants (his private tutor, his parent, his state school teacher, and two other state school teachers). The analysis reveals four characteristics of the way a young EFL learner approaches reading: word-by-word reading, disconnected word recognition, selective use of cues, and lack of awareness of difficulties. These four characteristics of Kilsu's (the student's) reading suggest that reading can become a wild guessing game for young foreign-language learners if they give selective attention to unimportant cues while reading. The pedagogical implications of the study are also discussed to help teachers design reading lessons for young learners.

Deep recurrent neural networks with word embeddings for Urdu named entity recognition

  • Khan, Wahab;Daud, Ali;Alotaibi, Fahd;Aljohani, Naif;Arafat, Sachi
    • ETRI Journal / Vol. 42, No. 1 / pp.90-100 / 2020
  • Named entity recognition (NER) continues to be an important task in natural language processing because it features as a subtask and/or subproblem in information extraction and machine translation, and it is a particularly difficult task for Urdu. This paper proposes several deep recurrent neural network (DRNN) learning models with word embeddings. Experimental results demonstrate that they improve upon current state-of-the-art NER approaches for Urdu. The DRNN models evaluated include forward and bidirectional extensions of the long short-term memory and backpropagation-through-time approaches. The proposed models consider both language-dependent features, such as part-of-speech tags, and language-independent features, such as the "context windows" of words. The effectiveness of the DRNN models with word embeddings for NER in Urdu is demonstrated using three datasets. The results reveal that the proposed approach significantly outperforms previous conditional random field and artificial neural network approaches. The best F-measure values achieved on the three benchmark datasets using the proposed deep learning approaches are 81.1%, 79.94%, and 63.21%, respectively.
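As a rough illustration (not the authors' code), the language-independent "context window" features mentioned above can be sketched as follows, assuming a symmetric window padded with a boundary symbol at sentence edges:

```python
def context_window(tokens, i, size=2, pad="<PAD>"):
    """Return the 2*size+1 tokens centred on position i, padding at the
    sentence boundaries so every token gets a fixed-width window."""
    padded = [pad] * size + tokens + [pad] * size
    return padded[i : i + 2 * size + 1]

sent = ["Karachi", "is", "in", "Pakistan"]
print(context_window(sent, 0))  # ['<PAD>', '<PAD>', 'Karachi', 'is', 'in']
```

In the paper's setting, each window would be mapped to embedding vectors and fed to the recurrent tagger; the window size here is an assumption.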

질의 어휘와의 근접도를 반영한 단어 그래프 기반 질의 확장 (Query Expansion based on Word Graph using Term Proximity)

  • 장계훈;이경순
    • 정보처리학회논문지B / Vol. 19B, No. 1 / pp.37-42 / 2012
  • The pseudo-relevance feedback model assumes that the top-ranked documents of the initial retrieval results are relevant and selects high-frequency terms from those documents as expansion terms. A drawback of frequency-based query expansion is that it treats each term independently, regardless of the proximity between terms within a document. This paper proposes word-graph-based query expansion that reflects term proximity, as an alternative to expansion based on term frequency. Terms occurring near the query terms are represented as nodes, and the proximity between terms is used as the weight of the edges of the word graph. Expansion terms are then selected through iterated computation, improving retrieval performance. To validate the method, experiments on the TREC WT10g web test collection showed a 6.4% improvement in MAP over the language model.
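A minimal sketch of the proximity-weighted word graph described above (illustrative only: the 1/distance edge weight and the simple weight-sum ranking are assumptions, and the paper's iterative scoring step is simplified away):

```python
from collections import defaultdict

def proximity_edges(docs, query_terms, window=3):
    """Accumulate edge weights between each query term and nearby terms;
    closer co-occurrences contribute more (weight 1/distance)."""
    edges = defaultdict(float)
    for doc in docs:
        for i, term in enumerate(doc):
            if term not in query_terms:
                continue
            lo, hi = max(0, i - window), min(len(doc), i + window + 1)
            for j in range(lo, hi):
                if j != i and doc[j] not in query_terms:
                    edges[(term, doc[j])] += 1.0 / abs(i - j)
    return edges

def top_expansion_terms(edges, k=2):
    """Rank candidate expansion terms by their total edge weight."""
    score = defaultdict(float)
    for (_, t), w in edges.items():
        score[t] += w
    return sorted(score, key=score.get, reverse=True)[:k]
```

In the actual model the edge weights feed a repeated graph computation rather than a one-shot sum; this sketch only shows how proximity, not raw frequency, drives the candidate ranking.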

저빈도어를 고려한 개념학습 기반 의미 중의성 해소 (Word Sense Disambiguation based on Concept Learning with a focus on the Lowest Frequency Words)

  • 김동성;최재웅
    • 한국언어정보학회지:언어와정보 / Vol. 10, No. 1 / pp.21-46 / 2006
  • This study proposes a Word Sense Disambiguation (WSD) algorithm based on concept learning, with special emphasis on statistically meaningful lowest-frequency words. Previous work on WSD typically makes use of collocation frequency and its probability. Such probability-based WSD approaches tend to ignore the lowest-frequency words, which can be meaningful in context. In this paper, we show an algorithm that extracts and makes use of meaningful lowest-frequency words in WSD. The learning method is adopted from the Find-Specific algorithm of Mitchell (1997), in which the search proceeds from specific predefined hypothesis spaces to more general ones. In our model, this algorithm is used to find contexts with the most specific classifiers and then move to more general ones. We build a small seed set and apply it to relatively large test data. Following the algorithm in Yarowsky (1995), the classified test data are exhaustively added to the seed data, thus expanding the seed set. However, this can introduce considerable noise into the seed data, so we introduce the maximum a posteriori hypothesis, based on Bayes' assumption, to validate the noise status of the new seed data. We use a Naive Bayes classifier and show that applying the Find-Specific algorithm enhances the accuracy of WSD.
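The Naive Bayes step can be sketched as a generic add-one-smoothed sense classifier over context words (the Find-Specific search and the Yarowsky-style seed-expansion loop are omitted; class and word names are made up):

```python
import math
from collections import Counter, defaultdict

class NaiveBayesWSD:
    """Minimal Naive Bayes sense classifier over bag-of-context-words,
    with add-one smoothing. Seeds could be grown iteratively as in the paper."""
    def __init__(self):
        self.sense_counts = Counter()
        self.word_counts = defaultdict(Counter)
        self.vocab = set()

    def train(self, seeds):
        """seeds: iterable of (context_words, sense) pairs."""
        for words, sense in seeds:
            self.sense_counts[sense] += 1
            for w in words:
                self.word_counts[sense][w] += 1
                self.vocab.add(w)

    def classify(self, words):
        """Return the sense maximising log P(sense) + sum log P(word|sense)."""
        total = sum(self.sense_counts.values())
        best, best_lp = None, float("-inf")
        for sense, n in self.sense_counts.items():
            lp = math.log(n / total)
            denom = sum(self.word_counts[sense].values()) + len(self.vocab)
            for w in words:
                lp += math.log((self.word_counts[sense][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = sense, lp
        return best
```

The noise check the paper adds would compare the posterior of a newly self-labelled seed against a threshold before admitting it to the seed set.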

A Study of Efficiency Information Filtering System using One-Hot Long Short-Term Memory

  • Kim, Hee sook;Lee, Min Hi
    • International Journal of Advanced Culture Technology / Vol. 5, No. 1 / pp.83-89 / 2017
  • In this paper, we propose an extended one-hot Long Short-Term Memory (LSTM) method and evaluate its performance on a spam filtering task. Most traditional methods proposed for spam filtering use word occurrences to represent spam or non-spam messages, ignoring all syntactic and semantic information. A major issue arises when spam and non-spam messages share many common words and noise words, making it challenging for the system to assign the correct labels. Unlike previous studies on information filtering, instead of using only word occurrence and word context as in probabilistic models, we apply a neural-network-based approach to train the filter for better performance. In addition to the one-hot representation, using term weights with an attention mechanism allows the classifier to focus on the words most likely to appear in the spam and non-spam collections. As a result, we obtained improvements over the performance of previous methods. We find that using region embedding and pooling features on top of the LSTM, along with the attention mechanism, allows the system to learn a better document representation for filtering tasks in general.
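The attention-over-LSTM-outputs idea can be illustrated with a tiny pooling function (a hand-rolled sketch: in the real model the per-token scores come from a learned layer over LSTM states, whereas the vectors and scores here are placeholders):

```python
import math

def attention_pool(states, scores):
    """Softmax the per-token scores, then combine the token state vectors
    as a weighted average -- tokens with higher scores dominate the pooled
    document representation."""
    m = max(scores)                               # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(states[0])
    return [sum(w * st[d] for w, st in zip(weights, states)) for d in range(dim)]
```

With uniform scores this reduces to mean pooling; a large score on one token makes the pooled vector approach that token's state, which is how attention lets the classifier focus on spam-indicative words.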

미관측문맥 모델링을 위한 다중단어카테고리 결정 (Determining Multiple Word Category Membership for Modeling Unseen Context)

  • 한명수;정민화
    • 한국음향학회:학술대회논문집 / Proceedings of the 2000 Summer Meeting of the Acoustical Society of Korea, Vol. 19, No. 1 / pp.23-26 / 2000
  • This paper proposes a multiple word category membership method so that the language model used in continuous speech recognition can produce reliable probabilities for contexts that do not appear in the training corpus. The proposed method preserves the ability of conventional category-based language models to handle unseen contexts while preventing excessive generalization of probabilities for homographs. To measure the performance of a language model using the proposed method, N-best rescoring was performed on recognition sentences containing 31% unseen contexts; word accuracy improved by 3.2% over the 1-best sentences and by 0.8% over the conventional category-based language model.
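The multiple-membership idea can be sketched as a category-based bigram that sums over every category each word belongs to (an illustrative simplification: the probability tables below are made up, and the paper's estimation and homograph-specific weighting are richer):

```python
def category_bigram_prob(prev_word, word, word2cats, cat_bigram, word_given_cat):
    """P(word | prev_word) under a category bigram model, summing over all
    category memberships of both words and averaging over the previous
    word's categories."""
    p = 0.0
    for cp in word2cats[prev_word]:
        for c in word2cats[word]:
            p += cat_bigram.get((cp, c), 0.0) * word_given_cat.get((word, c), 0.0)
    return p / len(word2cats[prev_word])
```

Because a homograph contributes through each of its categories separately, an unseen word pair still gets a nonzero probability whenever their categories co-occurred in training, without collapsing all senses into one over-general category.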

한국 영아의 초기 의사소통 : 몸짓의 발달 (The Development of Gesture in the Early Communication of Korean Infants)

  • 장유경;최윤영;김소연
    • 아동학회지 / Vol. 26, No. 1 / pp.155-167 / 2005
  • Korean infants' use of gesture was examined with 45 10- to 17-month-olds. The mothers were asked to check each word in the MacArthur Communicative Development Inventory-Korean (MCDI-K) vocabulary checklist for which their infant had a gesture, and to indicate what kind of early communicative behavior the infant showed in 5 different situations. The results show that the infants in this study had 11 gestures, many of which were learned within the context of routines or games. Referential gestures were rarely reported. There was no positive correlation between the number of gestures and the number of expressive words. However, more qualitative measures of early communicative behavior showed a positive correlation between "frequent use of gestures" and "trying to communicate by verbal means".

Comparison of Neural Network Techniques for Text Data Analysis

  • Kim, Munhee;Kang, Kee-Hoon
    • International Journal of Advanced Culture Technology / Vol. 8, No. 2 / pp.231-238 / 2020
  • Generally, sequential data refers to data having continuity. Text data, a representative type of unstructured data, is also sequential in that the meaning of a word or of the wider context depends on the meaning of the preceding words. Many techniques for analyzing sequential data such as text have been proposed. In this paper, four neural-network techniques (1d-CNN, LSTM, BiLSTM, and C-LSTM) are introduced. These techniques were then used to classify IMDb movie review data into two classes, and their performance was compared in terms of accuracy and analysis time.

Feed-Forward Neural Network를 이용한 문맥의존 철자오류 교정 (Context-sensitive Spelling Error Correction using Feed-Forward Neural Network)

  • 황현선;이창기
    • 한국정보과학회 언어공학연구회:학술대회논문집(한글 및 한국어 정보처리) / Proceedings of the 27th Hangul and Korean Information Processing Conference / pp.124-128 / 2015
  • A context-sensitive spelling error is a word that is not an error when viewed in isolation but is erroneous in its context. Solving this problem requires contextual information, which is difficult to obtain at the morphological analysis stage. This paper proposes a system that uses pretrained word embeddings as context for spelling correction based only on morphological analysis information, and applies deep learning, known to outperform conventional machine learning algorithms. In experiments, the system achieved F1-measures of 91.61-98.05%, higher than the conventional machine learning algorithm Structural SVM.
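As a toy stand-in for the embedding-based scorer described above, one can rank correction candidates by how well their pretrained vectors match the averaged context vectors (the feed-forward network itself and the Korean morphological features are omitted, and the vectors below are made up):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def best_candidate(context_vecs, candidates):
    """Pick the candidate word whose embedding best matches the mean of the
    surrounding context word embeddings."""
    dim = len(context_vecs[0])
    mean = [sum(v[d] for v in context_vecs) / len(context_vecs) for d in range(dim)]
    return max(candidates, key=lambda w: cosine(mean, candidates[w]))
```

The actual system feeds the context embeddings through a trained feed-forward network rather than comparing raw vectors; this sketch only shows why pretrained embeddings supply the contextual signal that plain morphological analysis lacks.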

참여정부 대통령기록 연구 대통령 행사기록을 중심으로 (A Study on the Presidential Records of the Participatory Government : Focusing on the Records of Presidential Events)

  • 이경용
    • 기록학연구 / No. 71 / pp.131-167 / 2022
  • This paper analyzes the records surrounding the production process of the "collected remarks" (말씀록) produced by the Records Management Secretariat in connection with presidential events, among the records of the 16th presidency. Based on a proper understanding of the production context of the Participatory Government's presidential event records transferred to the Presidential Archives, it proposes ways to link, organize, and actively use the related records.