• Title/Summary/Keyword: 벡터 공간 (vector space)

Search Results: 1,214

The pattern of movement and stress distribution during retraction of maxillary incisors using a 3-D finite element method (상악 전치부 후방 견인 시 이동 양상과 응력 분포에 관한 삼차원 유한요소법적 연구)

  • Chung, Ae-Jin; Kim, Un-Su; Lee, Soo-Haeng; Kang, Seong-Soo; Choi, Hee-In; Jo, Jin-Hyung; Kim, Sang-Cheol
    • The Korean Journal of Orthodontics / v.37 no.2 s.121 / pp.98-113 / 2007
  • Objective: The purpose of this study was to evaluate the displacement pattern and the stress distribution shown on a three-dimensional finite element model of a dry human skull, reconstructed from CT data, during the retraction of the upper anterior teeth. Methods: Eight experimental groups were defined according to corticotomy, anchorage (buccal: a mini implant between the maxillary second premolar and first molar, or the second premolar reinforced with a mini implant; palatal: a mini implant between the maxillary first and second molars, or a mini implant on the midpalatal suture) and force application point (with or without a power arm). Results: When the anterior teeth were retracted by a conventional T-loop arch wire, the anterior teeth tipped more postero-inferiorly and the posterior teeth moved slightly in the mesial direction. When the anterior teeth were retracted with corticotomy, the stress in the anterior bone segment was distributed widely, and the anterior teeth showed a smaller degree of tipping but a greater amount of displacement. When the anterior teeth were retracted from the buccal side with force applied from the mini implant placed between the maxillary second premolar and first molar to the canine power arm, a smaller degree of tipping was generated than when force was applied from the second premolar reinforced with a mini implant to the canine bracket. When the anterior teeth were retracted from the palatal side with force applied to the mini implant on the midpalatal suture, a greater degree of tipping resulted than when force was applied to the mini implant between the maxillary first and second molars. Conclusion: The results of this study verify the effects of corticotomies and of controlling orthodontic force vectors during tooth movement.

Korean Word Sense Disambiguation using Dictionary and Corpus (사전과 말뭉치를 이용한 한국어 단어 중의성 해소)

  • Jeong, Hanjo; Park, Byeonghwa
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.1-13 / 2015
  • As opinion mining in big data applications has gained attention, a great deal of research on unstructured data has been carried out. Social media on the Internet generate unstructured or semi-structured data every second, and these data are usually written in the natural languages we use in daily life. Many words in human languages have multiple meanings or senses. As a result, it is very difficult for computers to extract useful information from these datasets. Traditional web search engines are usually based on keyword search, which often produces results far from users' intentions. Even though much progress has been made over the years in improving search engines so that they return more appropriate results, there is still much room for improvement. Word sense disambiguation plays a very important role in natural language processing and is considered one of the most difficult problems in this area. Major approaches to word sense disambiguation can be classified as knowledge-based, supervised corpus-based, and unsupervised corpus-based. This paper presents a method that automatically generates a corpus for word sense disambiguation by taking advantage of the examples in existing dictionaries, thereby avoiding expensive sense-tagging processes. The effectiveness of the method is evaluated with a Naïve Bayes model, a supervised learning algorithm, using the Korean standard unabridged dictionary and the Sejong Corpus. The Korean standard unabridged dictionary contains approximately 57,000 sentences; the Sejong Corpus contains about 790,000 sentences tagged with both part-of-speech and sense information. For the experiment, the Korean standard unabridged dictionary and the Sejong Corpus were evaluated both combined and separately, using cross-validation. Only nouns, the target of word sense disambiguation, were selected: 93,522 word senses among 265,655 nouns, together with 56,914 sentences from related proverbs and examples, were combined into the corpus. The Sejong Corpus was easily merged with the Korean standard unabridged dictionary because it is tagged with the sense indices defined by that dictionary. Sense vectors were formed after the merged corpus was created, and the terms used in creating the sense vectors were added to the named-entity dictionary of the Korean morphological analyzer. Using this extended named-entity dictionary, term vectors were extracted from the input sentences. Given an extracted term vector and the sense vector model built in the pre-processing stage, the sense tags of the terms were determined by vector space model based word sense disambiguation. In addition, this study shows the effectiveness of the corpus merged from the examples in the Korean standard unabridged dictionary and the Sejong Corpus: the experiments show that better precision and recall are obtained with the merged corpus. The study suggests that the method can practically enhance the performance of Internet search engines and help capture the meaning of a sentence more accurately in natural language processing applications such as search engines, opinion mining, and text mining. The Naïve Bayes classifier used in this study is a supervised learning algorithm based on Bayes' theorem and assumes that all senses are independent. Even though this assumption is not realistic and ignores correlations between attributes, the Naïve Bayes classifier is widely used because of its simplicity and is known in practice to be very effective in many applications such as text classification and medical diagnosis. However, further research needs to be carried out to consider all possible combinations and/or partial combinations of the senses in a sentence. Also, the effectiveness of word sense disambiguation may be improved if rhetorical structures or morphological dependencies between words are analyzed through syntactic analysis.
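    • Illustration: a minimal Python sketch of the vector space model disambiguation step described above, in which sense vectors built from dictionary example sentences are compared against a term vector for the input sentence by cosine similarity. The toy English "dictionary" and the whitespace tokenizer are stand-ins for illustration only; the study itself built sense vectors from the Korean standard unabridged dictionary and the Sejong Corpus via a Korean morphological analyzer.

      from collections import Counter
      from math import sqrt

      # Minimal sketch of vector-space-model word sense disambiguation:
      # each sense of an ambiguous noun gets a sense vector built from the
      # terms of its dictionary example sentences, an input sentence becomes
      # a term vector, and the sentence is tagged with the sense whose vector
      # has the highest cosine similarity.

      def cosine(a, b):
          """Cosine similarity between two sparse term-frequency vectors."""
          dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
          norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
          return dot / norm if norm else 0.0

      def build_sense_vectors(sense_examples):
          """Merge the example sentences of each sense into one term-frequency vector."""
          return {sense: Counter(t for sent in sents for t in sent.split())
                  for sense, sents in sense_examples.items()}

      def disambiguate(sentence, sense_vectors):
          """Return the sense whose vector is closest to the sentence's term vector."""
          term_vector = Counter(sentence.split())
          return max(sense_vectors, key=lambda s: cosine(term_vector, sense_vectors[s]))

      # Toy senses of the noun "bank" stand in for real dictionary entries.
      examples = {
          "bank#1 (finance)": ["deposit money at the bank", "the bank approved the loan"],
          "bank#2 (river)": ["fishing on the river bank", "the bank eroded after the flood"],
      }
      vectors = build_sense_vectors(examples)
      print(disambiguate("she opened an account at the bank", vectors))  # -> bank#1 (finance)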

Numerical modeling of secondary flow behavior in a meandering channel with submerged vanes (잠긴수제가 설치된 만곡수로에서의 이차류 거동 수치모의)

  • Lee, Jung Seop; Park, Sang Deog; Choi, Cheol Hee; Paik, Joongcheol
    • Journal of Korea Water Resources Association / v.52 no.10 / pp.743-752 / 2019
  • Flow in a meandering channel is characterized by the spiral motion of secondary currents, which typically cause erosion along the outer bank. Hydraulic structures, such as spur dikes and groynes, are commonly installed on the channel bottom near the outer bank to mitigate the strength of these secondary currents. This study investigates the effects of submerged vanes installed in a 90° meandering channel on the development of secondary currents through three-dimensional numerical modeling, using a hybrid RANS/LES method for turbulence and the volume-of-fluid method for capturing the free surface, based on the OpenFOAM open-source toolbox, at a Froude number of 0.43. We employ second-order-accurate finite volume methods in space and time and compare the numerical results with experimental measurements to evaluate the predictions. The results show that the simulations reproduce the experimental measurements well in terms of the time-averaged streamwise velocity and the secondary velocity vector fields in the bend with submerged vanes. The computed flow fields reveal that the streamwise velocity near the bed along the outer bank at the end section of the bend decreases dramatically, by one third of the mean velocity, after the installation of the vanes, which supports the view that submerged vanes mitigate the strength of the primary secondary flow and help stabilize the channel along the outer bank. The flow between the top of the vanes and the free surface accelerates, and the maximum free-surface velocity near the flow impingement along the outer bank increases by about 20% due to the installation of the submerged vanes. The numerical solutions also show the formation of horseshoe vortices in front of the vanes and lee wakes behind them, which are responsible for strong local scour around the vanes. Additional study on the shape and arrangement of the vanes is required to mitigate this local scour.
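    • Illustration: the Froude number quoted above is the ratio of the bulk flow velocity to the shallow-water wave speed, Fr = U / sqrt(g h). The short Python sketch below shows the calculation with hypothetical depth and velocity values (not reported in the abstract), chosen only so that Fr comes out near 0.43.

      from math import sqrt

      # Bulk Froude number Fr = U / sqrt(g * h).
      # Depth and mean velocity are assumed, illustrative values, not data from the paper.
      g = 9.81   # gravitational acceleration, m/s^2
      h = 0.15   # assumed flow depth, m
      U = 0.52   # assumed bulk mean velocity, m/s

      Fr = U / sqrt(g * h)
      print(f"Fr = {Fr:.2f}")   # ~0.43, i.e. subcritical flow (Fr < 1)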

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin; Rho, Sang-Kyu; Yun, Ji-Young Agnes; Park, Jin-Soo
    • Asia Pacific Journal of Information Systems / v.21 no.1 / pp.103-122 / 2011
  • Recently, numerous documents have become available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents, and in this situation it is virtually impossible for users to examine every document to determine whether it might be useful. For this reason, some online documents are accompanied by a list of keywords specified by the authors, which guides users by facilitating the filtering process. A set of keywords is thus often considered a condensed version of the whole document and plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents, including Web pages, email messages, news reports, magazine articles, and business papers, do not yet benefit from the use of keywords. Although the potential benefit is large, the implementation itself is the obstacle: manually assigning keywords to all documents is a daunting, even impractical, task, as it is extremely tedious and time-consuming and requires a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are mainly two approaches to achieving this aim: keyword assignment and keyword extraction. Both approaches use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, there is a given vocabulary, and the aim is to match its terms to the texts; in other words, the keyword assignment approach seeks to select, from a controlled vocabulary, the words that best describe a document. Although this approach is domain dependent and not easy to transfer and extend, it can generate implicit keywords that do not appear in a document. In the latter approach, on the other hand, the aim is to extract keywords according to their relevance in the text, without a prior vocabulary. Automatic keyword generation is then treated as a classification task, and keywords are commonly extracted using supervised learning techniques: keyword extraction algorithms classify candidate keywords in a document as positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as its keywords, so keyword extraction is limited to terms that appear in the document and cannot generate implicit keywords that are not included in it. According to the experimental results of Turney, about 64% to 90% of the keywords assigned by authors can be found in the full text of an article; conversely, 10% to 36% of author-assigned keywords do not appear in the article and therefore cannot be generated by keyword extraction algorithms. Our preliminary experiment also shows that 37% of author-assigned keywords are not included in the full text. This is why we have decided to adopt the keyword assignment approach. In this paper, we propose a new approach to automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculate the vector length of each keyword set based on the keyword weights; (2) preprocess and parse a target document that does not yet have keywords; (3) calculate the vector length of the target document based on term frequencies; (4) measure the cosine similarity between each keyword set and the target document; and (5) generate the keywords with the highest similarity scores. Two keyword generation systems were implemented using IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. The first is embedded in a community service for sharing knowledge and opinions on current topics such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers and has been tested on a number of papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. In our experiments, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. Both systems perform much better than baseline systems that generate keywords based on simple probability, and IVSM shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As the number of electronic documents grows, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
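    • Illustration: a minimal Python sketch of the five-step IVSM keyword assignment procedure described above. The keyword-set vectors, term weights, and tokenizer below are hypothetical stand-ins; the abstract does not specify the actual weighting scheme or preprocessing, so this only illustrates the cosine-similarity matching idea, not the authors' implementation.

      import re
      from collections import Counter
      from math import sqrt

      def vector_length(vec):
          # Steps (1) and (3): vector length from keyword weights / term frequencies.
          return sqrt(sum(w * w for w in vec.values()))

      def parse(document):
          # Step (2): naive preprocessing, lowercase and tokenize on letter runs.
          return Counter(re.findall(r"[a-z]+", document.lower()))

      def assign_keywords(document, keyword_sets, top_n=3):
          doc_vec = parse(document)
          doc_len = vector_length(doc_vec)
          scores = {}
          for keyword, kw_vec in keyword_sets.items():
              # Step (4): cosine similarity between keyword-set vector and document vector.
              dot = sum(w * doc_vec.get(t, 0) for t, w in kw_vec.items())
              denom = vector_length(kw_vec) * doc_len
              scores[keyword] = dot / denom if denom else 0.0
          # Step (5): return the keywords with the highest similarity scores.
          return sorted(scores, key=scores.get, reverse=True)[:top_n]

      # Hypothetical keyword sets: each candidate keyword is represented by a
      # weighted vector of terms drawn from documents already tagged with it.
      keyword_sets = {
          "information retrieval": {"query": 2.0, "document": 3.0, "ranking": 1.0},
          "text mining": {"cluster": 2.0, "corpus": 2.0, "document": 1.0},
          "logistics": {"shipping": 3.0, "port": 2.0, "cargo": 2.0},
      }
      doc = "A ranking model scores each document against the user query."
      print(assign_keywords(doc, keyword_sets, top_n=2))  # -> ['information retrieval', 'text mining']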