• Title/Summary/Keyword: Scientific Terms (과학 용어)

Search results: 602 (processing time: 0.028 seconds)

Levels and Patterns of Main Terms' Interrelationships in Student Teachers' Notable Questions about the Contents of the Elementary Science Textbooks (초등 과학교과서 내용에 대한 예비교사들의 주요 질문에 나타나는 용어의 상호 관련성 수준과 유형)

  • Lee, Myeong-Je
    • Journal of the Korean Earth Science Society
    • /
    • v.27 no.1
    • /
    • pp.20-31
    • /
    • 2006
  • This study analysed student teachers' notable questions about the earth science contents of the elementary science textbooks. The contents of notable questions were classified as 'notable question contents 1' and 'notable question contents 2': contents for which the number of questions is, respectively, more than three times, and from two to three times, the mean number of questions per page of each unit. The results are as follows. First, question contents 1 include 'cloud observation', 'geological strata formation', and so on; question contents 2 include 'rainfall measurement', 'the moon's movement during one night', and so on. Second, the number of interrelationships of main terms increased in each question of question contents 1, but four-term patterns were found more often in question contents 2 than in question contents 1. Third, the high-interrelationship patterns of terms in question contents 1 are 'coal and petroleum-generation', 'metamorphism-heat and pressure', 'metamorphism-heat and pressure-metamorphic rocks', and 'planet-sun-comet-revolution'; in question contents 2 they are 'constellation plate-use', 'dry-and-wet-bulb hygrometer-principle', 'seismograph-principle-earthquake', 'earth's rotation axis-tilt-occurrence', 'dry-and-wet-bulb hygrometer-principle-humidity', and so on. The sources of the questions analysed in this study are presumed to be either the content construction system of the textbooks or students' general questions about earth science contents. If the former, the problems in the texts and illustrations of the textbooks should be articulated and resolved; if the latter, the elementary science curriculum has to be reconsidered in view of scientific literacy in earth science.

Determining the Specificity of Terms using Compositional and Contextual Information (구성정보와 문맥정보를 이용한 전문용어의 전문성 측정 방법)

  • Ryu, Pum-Mo; Bae, Sun-Mee; Choi, Key-Sun
    • Journal of KIISE:Software and Applications
    • /
    • v.33 no.7
    • /
    • pp.636-645
    • /
    • 2006
  • A term with more domain-specific information has a higher level of term specificity. We propose new specificity calculation methods for terms based on information-theoretic measures using compositional and contextual information. The specificity of terms is a kind of necessary condition in the term hierarchy construction task. The compositional information includes frequency, $tf{\cdot}idf$, bigrams, and the internal structure of the terms; the contextual information of a term includes the probabilistic distribution of its modifiers. The proposed methods can be applied to other domains without extra procedures. Experiments showed a very promising result, with a precision of 82.0% when applied to the terms in the MeSH thesaurus.
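
    The compositional part of such a specificity score can be sketched with a simple tf·idf computation. This is a minimal illustration under assumed toy data; the paper's full method also combines bigram, internal-structure, and modifier-distribution (contextual) information, which are omitted here.

    ```python
    import math

    # Sketch of a tf-idf-style compositional specificity score.
    # The documents below are illustrative assumptions, not the paper's data.
    def tf_idf_specificity(term, docs):
        """Score a term by tf * idf over a small tokenized document collection."""
        tf = sum(doc.count(term) for doc in docs)   # total term frequency
        df = sum(1 for doc in docs if term in doc)  # document frequency
        if df == 0:
            return 0.0
        return tf * math.log(len(docs) / df)        # tf * idf

    docs = [
        ["metamorphic", "rock", "heat", "pressure"],
        ["rock", "formation", "strata"],
        ["metamorphic", "rock", "specimen"],
    ]

    # The domain-specific term is concentrated in fewer documents than the
    # generic one, so its idf (and thus its specificity score) is higher.
    print(tf_idf_specificity("metamorphic", docs) > tf_idf_specificity("rock", docs))
    # → True
    ```

    The intuition matches the abstract: a more domain-specific term carries more information, which a tf·idf-style weighting captures by penalizing terms spread evenly across documents.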

A Study on the Use of Stopword Corpus for Cleansing Unstructured Text Data (비정형 텍스트 데이터 정제를 위한 불용어 코퍼스의 활용에 관한 연구)

  • Lee, Won-Jo
    • The Journal of the Convergence on Culture Technology
    • /
    • v.8 no.6
    • /
    • pp.891-897
    • /
    • 2022
  • In big data analysis, raw text data mostly exists in various unstructured forms, so it becomes a structured, analyzable form only after undergoing heuristic pre-processing and computerized post-processing cleansing. In this study, unnecessary elements are therefore removed through pre-processing of the collected raw data in order to apply the wordcloud function of the R program, one of the text data analysis techniques, and stopwords are removed in the post-processing step. A case study of wordcloud analysis was then conducted, which calculates the frequency of occurrence of words and presents high-frequency words as key issues. To improve on the problems of the existing stopword processing method, the "nested stopword source code" method, this study proposes the use of a "general stopword corpus" and a "user-defined stopword corpus" with R's word cloud technique and conducts a case analysis. The advantages and disadvantages of the proposed "unstructured data cleansing process model" are comparatively verified and presented, along with a practical application of word cloud visualization analysis using the proposed external-corpus cleansing technique.
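
    The two-corpus cleansing step described above can be sketched as follows. The study itself uses R's wordcloud; this is a minimal Python illustration of the same idea, and the corpus and token contents below are illustrative assumptions, not taken from the paper.

    ```python
    from collections import Counter

    # Sketch of the proposed two-corpus cleansing: tokens found in either a
    # "general stopword corpus" or a "user-defined stopword corpus" are removed
    # before word frequencies are counted for visualization.
    general_stopwords = {"the", "a", "of", "and", "is"}   # language-wide noise
    user_stopwords = {"study", "data"}                    # domain noise chosen by the analyst

    raw_tokens = ["the", "study", "of", "unstructured", "data", "and",
                  "text", "data", "cleansing", "is", "text", "analysis"]

    cleaned = [t for t in raw_tokens if t not in general_stopwords | user_stopwords]
    freq = Counter(cleaned)  # high-frequency words become the key issues
    print(freq.most_common(1))
    # → [('text', 2)]
    ```

    Keeping the two corpora separate is the point of the proposal: the general corpus is reusable across analyses, while the user-defined corpus is swapped per domain without touching source code.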

Easy-to-Understand Scientific Terms (알기쉬운 과학용어)

  • Korea Institute of Science and Technology Information
    • Journal of Scientific & Technological Knowledge Infrastructure
    • /
    • s.17
    • /
    • pp.66-67
    • /
    • 2005