• Title/Summary/Keyword: Token Frequency

Search Results: 23

Gradient Reduction of $C_1$ in /pk/ Sequences

  • Son, Min-Jung
    • Speech Sciences
    • /
    • v.15 no.4
    • /
    • pp.43-60
    • /
    • 2008
  • Instrumental studies (e.g., aerodynamic, EPG, and EMMA) have shown that the first of two stops in a sequence can sometimes be articulatorily reduced in time and space, either gradiently or categorically. The current EMMA study examines possible factors, both linguistic (e.g., speech rate, word boundary, and prosodic boundary) and paralinguistic (e.g., natural context and repetition), that may induce gradient reduction of $C_1$ in /pk/ cluster sequences. EMMA data were collected from five Seoul-Korean speakers. The results show that gradient reduction of lip aperture seldom occurs, being quite restricted in both speaker frequency and token frequency. The results also suggest that place assimilation is not a lexical process, implying that speakers have not fully phonologized it at the abstract level.


Adjusting Weights of Single-word and Multi-word Terms for Keyphrase Extraction from Article Text

  • Kang, In-Su
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.8
    • /
    • pp.47-54
    • /
    • 2021
  • Given a document, keyphrase extraction automatically extracts words or phrases that topically represent its content. In unsupervised keyphrase extraction, candidate words or phrases are first extracted from the input document, scores are calculated for the candidates, and the final keyphrases are selected based on those scores. For the score computation, this study proposes a method of adjusting the scores of keyphrase candidates according to their type: word-type or phrase-type. To this end, the type-token ratios of word-type and phrase-type candidates, as well as the information content of high-frequency word-type and phrase-type candidates, are collected from the input document, and these values are used to adjust the candidate scores. In experiments on four keyphrase extraction evaluation datasets constructed from full-text English articles, the proposed method outperformed a baseline and comparison methods on three of the datasets.
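
The type-token-ratio weighting described in this abstract can be pictured with a minimal sketch (hypothetical weighting only; the paper's actual adjustment also uses the information content of high-frequency candidates, which is omitted here):

```python
def type_token_ratio(candidates):
    """Distinct candidate forms divided by total candidate occurrences."""
    return len(set(candidates)) / len(candidates) if candidates else 0.0

def adjust_scores(scores, word_candidates, phrase_candidates):
    """Scale word-type and phrase-type candidate scores by the type-token
    ratio of their own candidate class (illustrative weighting only)."""
    w_ttr = type_token_ratio(word_candidates)
    p_ttr = type_token_ratio(phrase_candidates)
    return {cand: score * (p_ttr if " " in cand else w_ttr)
            for cand, score in scores.items()}

# toy usage
scores = {"keyphrase": 0.42, "keyphrase extraction": 0.38, "corpus": 0.21}
words = ["keyphrase", "corpus", "corpus", "score"]
phrases = ["keyphrase extraction", "keyphrase extraction", "topic model"]
print(adjust_scores(scores, words, phrases))
```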

NFT (Non-Fungible Token) Patent Trend Analysis using Topic Modeling

  • Sin-Nyum Choi;Woong Kim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.12
    • /
    • pp.41-48
    • /
    • 2023
  • In this paper, we analyze recent trends in the NFT (Non-Fungible Token) industry using topic modeling, focusing on NFTs' universal applicability across industrial fields. Patent data were used to capture industry trends: we collected 371 domestic and 454 international NFT-related patents registered in the patent information search service KIPRIS from 2017, when the first NFT standard was introduced, to October 2023. In the preprocessing stage, stopwords were removed, words were lemmatized, and only nouns were retained. For the analysis, the top 50 words by frequency were listed and their TF-IDF values examined to derive the key terms of industry trends. Next, using the LDA algorithm, we identified four major latent topics within the patent data, both domestic and international, analyzed these topics, and presented findings on NFT industry trends supported by real-world industry cases. While previous reviews presented trends from an academic perspective using paper data, this study is significant in that it provides practical trend information based on data rooted in field practice. It is expected to be a useful reference for professionals in the NFT industry in understanding market conditions and generating new items.
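
The TF-IDF plus LDA pipeline described in this abstract can be sketched with scikit-learn; the toy documents below stand in for the preprocessed KIPRIS patent abstracts, and the four-topic setting follows the study:

```python
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "non fungible token blockchain ownership certificate",
    "digital asset metadata smart contract marketplace",
    "token authentication wallet transaction ledger",
    "digital collectible token trading platform",
]

# Inspect top terms by TF-IDF (the study listed the 50 most frequent words).
tfidf = TfidfVectorizer(max_features=50)
tfidf.fit(docs)

# Fit LDA on raw term counts and print the top words of each latent topic.
counts = CountVectorizer()
X = counts.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=4, random_state=0).fit(X)

terms = counts.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    print(f"topic {k}:", [terms[i] for i in topic.argsort()[::-1][:5]])
```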

Shapes of Vowel F0 Contours Influenced by Preceding Obstruents of Different Types - Automatic Analyses Using Tilt Parameters -

  • Jang, Tae-Yeoub
    • Speech Sciences
    • /
    • v.11 no.1
    • /
    • pp.105-116
    • /
    • 2004
  • The fundamental frequency of a vowel is known to be affected by the identity of the preceding consonant. The general agreement is that strong consonants trigger higher F0 than weak consonants. However, there has been disagreement on the shape of these segmentally affected F0 contours: some studies report that contour shapes are differentiated by consonant type, while others regard this observation as misleading. This research attempts to resolve the controversy by investigating the shapes and slopes of F0 contours in Korean word-level speech data produced by four male speakers. Instead of relying entirely on traditional human intuition and judgment, I employ an automatic F0 contour analysis technique known as tilt parameterisation (Taylor 2000). After the necessary manipulation of the F0 contour of each data token, the various parameters are collapsed into a single tilt value that directly indicates the shape of the contour. The result, in terms of statistical inference, is that it is not viable to conclude that consonant type is significantly related to the shape of the F0 contour. A supplementary measurement was also made to see whether the slope of each contour carries meaningful information. Unlike the shapes themselves, the slopes appear to be more useful in practice for consonantal differentiation, although confirmation is required through further refined experiments.
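
The tilt value referred to in this abstract collapses the rise and fall of an F0 event into a single number; a minimal sketch following the usual definition in Taylor's (2000) tilt model (the paper's exact per-contour manipulation is not reproduced here):

```python
def tilt(a_rise, a_fall, d_rise, d_fall):
    """Combine the amplitudes and durations of the rise and fall of an F0
    event into one value: +1 is a pure rise, -1 a pure fall, and 0 a
    symmetric rise-fall (after Taylor 2000)."""
    amp = (abs(a_rise) - abs(a_fall)) / (abs(a_rise) + abs(a_fall))
    dur = (d_rise - d_fall) / (d_rise + d_fall)
    return 0.5 * (amp + dur)

# e.g., a contour rising 10 Hz over 40 ms and then falling 30 Hz over 120 ms
print(tilt(a_rise=10.0, a_fall=30.0, d_rise=0.04, d_fall=0.12))  # -0.5
```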


Generation of OCL on XML Schema Meta Model (XML 스키마 메타모델에서 OCL 생성)

  • Lee Don-Yang;Choi Han-Yong
    • The Journal of the Korea Contents Association
    • /
    • v.6 no.6
    • /
    • pp.42-49
    • /
    • 2006
  • XML is widely used on the internet as a meta-language for representing and transmitting information, and XML Schema is frequently used to specify its varied data types. This thesis designs a Simple Type meta model of XML Schema using UML. However, because the structure of XML Schema is complex and supports a variety of data types, users find it difficult to understand and apply the model properties expressed in UML. To address this, this study specifies the XML Schema meta model in a clear, structured way by applying the OCL specification and, on that foundation, presents a detailed design method for parse-tree and token generation for lexical and semantic analysis in the compilation step.
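
The token-generation step for lexical analysis mentioned in this abstract can be pictured with a small sketch; the token classes below are hypothetical, since the paper defines its own token set for XML Schema simple types:

```python
import re

# Hypothetical token classes for a lexical-analysis step.
TOKEN_SPEC = [
    ("NAME",   r"[A-Za-z_][A-Za-z0-9_]*"),
    ("NUMBER", r"\d+"),
    ("OP",     r"[=<>]"),
    ("SKIP",   r"\s+"),
]
TOKEN_RE = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

def tokenize(text):
    """Yield (kind, value) tokens to be consumed by a later parse-tree builder."""
    for m in TOKEN_RE.finditer(text):
        if m.lastgroup != "SKIP":
            yield m.lastgroup, m.group()

print(list(tokenize("maxLength = 255")))
```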


An Analysis on the Vocabulary in the English-Translation Version of Donguibogam Using the Corpus-based Analysis (코퍼스 분석방법을 이용한 『동의보감(東醫寶鑑)』 영역본의 어휘 분석)

  • Jung, Ji-Hun;Kim, Dong-Ryul;Kim, Do-Hoon
    • The Journal of Korean Medical History
    • /
    • v.28 no.2
    • /
    • pp.37-45
    • /
    • 2015
  • Objectives : A quantitative analysis of the vocabulary in the English translation of Donguibogam. Methods : This study quantitatively analyzed the English-translated text of Donguibogam using corpus-based analysis and compared the results with a corresponding analysis of the original Donguibogam text. Results : The corpus analysis of the English translation found about 1,207,376 running words (tokens), about 20,495 word types, and a TTR (type/token ratio) of 1.69%. The cumulative coverage of the top 1,000 words was 83.54%, and that of the top 2,000 words was 90.82%. Among the highest-frequency words, function words such as 'the', 'and', 'of', and 'is' predominated, while frequent content words included 'radix', 'qi', 'rhizoma', and 'water'. Compared with the corpus analysis of the original version, the TTR was higher in the English translation. The composition of high-frequency function and content words was similar between the two versions, and in both, the statements in the 'Remedies' and 'Acupuncture' parts showed a higher proportion of content words than of function words. Conclusions : The vocabulary of the English translation shows that the book both keeps complete sentence form and remains a Korean medical text. At the same time, the translation has some problems, such as inconsistent vocabulary resulting from multiple translators and incomplete transfer of word meanings from the Chinese-character cultural sphere to the English-speaking one; these issues should be considered when translating old Korean medical books into English.
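
The token count, type count, TTR, and cumulative-coverage figures reported in this abstract come from straightforward corpus arithmetic; a minimal sketch (the word list here is a tiny stand-in for the English Donguibogam text):

```python
from collections import Counter

def corpus_stats(words, top_n):
    """Return token count, type count, TTR (%), and cumulative coverage (%)
    of the top_n most frequent words."""
    freq = Counter(words)
    tokens = sum(freq.values())
    types = len(freq)
    ttr = types / tokens * 100
    coverage = sum(c for _, c in freq.most_common(top_n)) / tokens * 100
    return tokens, types, ttr, coverage

words = "the qi of the water is the radix and the rhizoma".split()
tokens, types, ttr, cov = corpus_stats(words, top_n=5)
print(f"tokens={tokens} types={types} TTR={ttr:.2f}% top-5 coverage={cov:.1f}%")
```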

Inverse Document Frequency-Based Word Embedding of Unseen Words for Question Answering Systems (질의응답 시스템에서 처음 보는 단어의 역문헌빈도 기반 단어 임베딩 기법)

  • Lee, Wooin;Song, Gwangho;Shim, Kyuseok
    • Journal of KIISE
    • /
    • v.43 no.8
    • /
    • pp.902-909
    • /
    • 2016
  • A question answering (QA) system finds the actual answer to a question posed by a user, whereas a typical search engine only returns links to relevant documents. Recent work on open-domain QA systems has been receiving much attention in natural language processing, artificial intelligence, and data mining. However, prior QA systems simply replace all words that are not in the training data with a single token, even though such unseen words are likely to play a crucial role in distinguishing candidate answers from actual answers. In this paper, we propose a method to compute vectors for such unseen words by taking into account the contexts in which the words occur. We also propose a model that uses inverse document frequencies (IDF) to process unseen words efficiently by expanding the system's vocabulary. Finally, we validate through experiments that the proposed method and model improve the performance of a QA system.
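
The paper's exact embedding model is not reproduced here, but the general idea of building a vector for an unseen word from its context, with IDF weighting so that rarer context words count more, can be sketched as follows (hypothetical function and toy vectors):

```python
import numpy as np

def unseen_word_vector(context_words, embeddings, idf):
    """Approximate an unseen word's vector by an IDF-weighted average of
    the vectors of its context words (illustrative sketch only)."""
    vecs = [embeddings[w] for w in context_words if w in embeddings]
    weights = [idf.get(w, 1.0) for w in context_words if w in embeddings]
    if not vecs:
        return None
    return np.average(np.stack(vecs), axis=0, weights=np.array(weights))

# toy usage
embeddings = {"question": np.array([1.0, 0.0]), "answer": np.array([0.0, 1.0])}
idf = {"question": 1.2, "answer": 3.0}
print(unseen_word_vector(["question", "answer"], embeddings, idf))
```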

Digital Image Watermarking Scheme in the Singular Vector Domain (특이 벡터 영역에서 디지털 영상 워터마킹 방법)

  • Lee, Juck Sik
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.16 no.4
    • /
    • pp.122-128
    • /
    • 2015
  • As multimedia information spreads over networks, problems such as protecting the legal rights of information owners and proving original ownership have recently arisen. Various image transforms, such as the DCT, DFT, and DWT, have been used to embed a watermark as a token of ownership, and the SVD, long used in numerical analysis, has recently been applied to watermarking as well. This paper proposes a watermarking method that uses the Gabor cosine and sine transform (GCST) together with the SVD to embed and extract watermarks in digital images. After applying attacks such as noise addition, spatial transformation, filtering, and compression to the watermarked images, the watermark is extracted with the proposed GCST-SVD method. Normalized correlation between the embedded and extracted watermarks is calculated as the performance index, and the extracted watermark images are also inspected visually. Watermark images are inserted into the lowest vertical AC frequency band. The experimental results show that the proposed method, which uses the singular vectors of the SVD, yields correlation values of 0.9 or higher and preserves the visual features of the embedded watermark under various attacks.
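
A schematic sketch of SVD-domain embedding and the normalized-correlation index referred to in this abstract; the actual method also works in a Gabor cosine/sine transform (GCST) domain and embeds in the lowest vertical AC band, which is not reproduced here:

```python
import numpy as np

def embed_svd(block, watermark, alpha=0.05):
    """Perturb the singular values of an image block with the watermark
    (schematic; the paper first applies the GCST to the image)."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    return U @ np.diag(s + alpha * watermark) @ Vt

def normalized_correlation(w, w_hat):
    """Similarity between the embedded and the extracted watermark."""
    return float(np.dot(w, w_hat) / (np.linalg.norm(w) * np.linalg.norm(w_hat)))

rng = np.random.default_rng(0)
block = rng.random((8, 8))            # stand-in for an image block
w = rng.choice([-1.0, 1.0], size=8)   # bipolar watermark sequence
marked = embed_svd(block, w)

# A watermark recovered after an attack is a noisy version of w; the paper
# reports normalized correlations of 0.9 or more for the attacks it tested.
w_recovered = w + 0.2 * rng.standard_normal(8)
print(normalized_correlation(w, w_recovered))
```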

A Self-Timed Ring based Lightweight TRNG with Feedback Structure (피드백 구조를 갖는 Self-Timed Ring 기반의 경량 TRNG)

  • Choe, Jun-Yeong;Shin, Kyung-Wook
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.2
    • /
    • pp.268-275
    • /
    • 2020
  • A lightweight hardware design of a self-timed ring based true random number generator (TRNG) suitable for information security applications is described. To reduce the hardware complexity of the TRNG, an entropy extractor with a feedback structure is proposed, which minimizes the number of ring stages. The number of ring stages of the FSTR-TRNG was set to a multiple of eleven, taking into account the operating clock frequency and the entropy extraction circuit, and the ratio of tokens to bubbles was chosen so that the ring operates in evenly-spaced mode. The hardware operation of the FSTR-TRNG was verified by FPGA implementation. The statistical randomness tests defined in NIST SP 800-22 were applied to 20 million bits generated by the FSTR-TRNG, and all fifteen test items met the criteria. The FSTR-TRNG occupied 46 slices of a Spartan-6 FPGA device and was implemented with about 2,500 gate equivalents (GEs) when synthesized with a 180 nm CMOS standard cell library.
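
One of the fifteen NIST SP 800-22 items mentioned in this abstract, the frequency (monobit) test, can be sketched as follows; a real evaluation would feed it the 20 million bits captured from the TRNG rather than the toy pseudo-random sequence used here:

```python
import math
import random

def monobit_frequency_test(bits):
    """NIST SP 800-22 frequency (monobit) test: checks whether ones and
    zeros occur in roughly equal numbers. Returns the p-value; the usual
    pass criterion is p >= 0.01."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)   # map 1 -> +1, 0 -> -1
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

random.seed(0)
bits = [random.getrandbits(1) for _ in range(10_000)]
print(monobit_frequency_test(bits))
```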

A Corpus-based English Syntax Academic Word List Building and its Lexical Profile Analysis (코퍼스 기반 영어 통사론 학술 어휘목록 구축 및 어휘 분포 분석)

  • Lee, Hye-Jin;Lee, Je-Young
    • The Journal of the Korea Contents Association
    • /
    • v.21 no.12
    • /
    • pp.132-139
    • /
    • 2021
  • This corpus-driven research describes the compilation of the most frequently occurring academic words in the domain of syntax and compares the extracted word list with the Academic Word List (AWL) of Coxhead (2000) and the General Service List (GSL) of West (1953) to examine their distribution and coverage within the syntax corpus. A specialized 546,074-token corpus, composed of widely used, must-read syntax textbooks for English education majors, was loaded into and analyzed with AntWordProfiler 1.4.1. Based on lexical frequency, the analysis identified 288 (50.5%) AWL word forms that appeared 16 times or more, as well as 218 (38.2%) AWL items that occurred 15 times or fewer. The analysis also indicated that the AWL and the GSL accounted for 9.19% and 78.92% of the corpus respectively, and that the GSL and AWL combined covered 88.11% of all tokens. Given that the AWL can be instrumental in serving broad disciplinary needs, this study highlights the need to compile domain-specific academic word lists as a lexical repertoire to promote academic literacy and competence.
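
The coverage percentages reported in this abstract (the share of running tokens accounted for by the GSL, the AWL, or both) follow from a simple calculation; a minimal sketch with tiny stand-in word lists rather than the actual GSL/AWL entries:

```python
from collections import Counter

def coverage(tokens, wordlist):
    """Percentage of running tokens whose lower-cased form is in wordlist."""
    freq = Counter(t.lower() for t in tokens)
    covered = sum(c for w, c in freq.items() if w in wordlist)
    return covered / sum(freq.values()) * 100

tokens = "the verb raises to the head of the phrase".split()
gsl = {"the", "of", "to", "head"}
awl = {"phrase", "verb"}
print(coverage(tokens, gsl), coverage(tokens, gsl | awl))
```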