• Title/Abstract/Keyword: source text

Search results: 267 (processing time: 0.026 s)

UNIQUENESS AND MULTIPLICITY OF SOLUTIONS FOR THE NONLINEAR ELLIPTIC SYSTEM

  • Jung, Tacksun;Choi, Q-Heung
    • 충청수학회지 / Vol. 21, No. 1 / pp.139-146 / 2008
  • We investigate the uniqueness and multiplicity of solutions for the nonlinear elliptic system with Dirichlet boundary condition $$\begin{cases}-\Delta u+g_1(u,v)=f_1(x) & \text{in } \Omega,\\ -\Delta v+g_2(u,v)=f_2(x) & \text{in } \Omega,\end{cases}$$ where $\Omega$ is a bounded set in $R^n$ with smooth boundary $\partial\Omega$. Here $g_1$, $g_2$ are nonlinear functions of $u$, $v$ and $f_1$, $f_2$ are source terms.


Objective Material analysis to the device with IoT Framework System

  • Lee, KyuTae;Ki, Jang Geun
    • International Journal of Advanced Culture Technology / Vol. 8, No. 2 / pp.289-296 / 2020
  • Software copyright works are written as text documents and stored as files, so they are easily exposed to illegal copying. The IoT framework configuration and service environment are likewise evaluated at the level of software structure and are exposed to replication. Illegal copies can easily be created by intelligently modifying the program code in the framework system. This paper deals with similarity comparison to determine whether illegal copying is suspected. In general, the original source code of both parties should be provided for similarity comparison; recently, however, suspected developers have refused to provide their source code, and comparative evaluation has to be performed with executable code only. This study addresses how to analyze similarity using the executable code together with the circuit configuration and interface state of the system when the original source code is unavailable. We propose a method of analyzing the target's data without source code and verify the similarity-comparison results through evaluation examples.
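
The abstract does not state which similarity metric is used; purely as an illustration of source-free comparison, the sketch below computes a byte n-gram Jaccard similarity between two executables (the file names and the 4-byte window are placeholder assumptions, not the paper's method).

```python
# Hypothetical sketch: byte n-gram Jaccard similarity between two executables.
# This is not the paper's method; it only illustrates comparing binaries when
# no source code is available. File names below are placeholders.

def byte_ngrams(path: str, n: int = 4) -> set:
    """Return the set of n-byte substrings occurring in a binary file."""
    with open(path, "rb") as f:
        data = f.read()
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def jaccard_similarity(path_a: str, path_b: str, n: int = 4) -> float:
    """Jaccard index of the two executables' byte n-gram sets (0.0 to 1.0)."""
    a, b = byte_ngrams(path_a, n), byte_ngrams(path_b, n)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

if __name__ == "__main__":
    score = jaccard_similarity("original_firmware.bin", "suspect_firmware.bin")
    print(f"byte 4-gram Jaccard similarity: {score:.3f}")
```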

대용량 소스코드 시각화기법 연구 (Visualization Techniques for Massive Source Code)

  • 서동수
    • 컴퓨터교육학회논문지 / Vol. 18, No. 4 / pp.63-70 / 2015
  • Program source code is text-based information and, at the same time, a complex collection of syntax that embodies logical structure. In particular, when source code reaches tens of thousands of lines, its structural and logical complexity makes it hard to apply existing big-data visualization techniques. This paper proposes a procedure for visualizing the structural characteristics of source code. To this end, it works on the abstract syntax tree produced by parsing, defining data types to represent the program's structural features and expressing the call relationships between functions. Based on this information, it presents a method for visualizing control information in network form so that the structural characteristics of a module can be surveyed at a glance. The results of this study can be used as an effective means of understanding the structural characteristics of large-scale software and of managing changes to it.
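
The abstract describes parsing to an abstract syntax tree and extracting inter-function call relations for network visualization; as a rough illustration only (not the paper's tool), the Python sketch below extracts a function-level call graph from one source file with the standard ast module and prints it as an edge list that a network-visualization tool could consume.

```python
# Illustrative sketch (not the paper's tool): extract a function-level call
# graph from a single Python source file and print it as a network edge list.
import ast
import sys

def call_edges(source: str):
    """Yield (caller, callee) pairs found inside the module's function definitions."""
    tree = ast.parse(source)
    for func in ast.walk(tree):
        if isinstance(func, (ast.FunctionDef, ast.AsyncFunctionDef)):
            for node in ast.walk(func):
                if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                    yield func.name, node.func.id

if __name__ == "__main__":
    # Usage: python callgraph.py target_module.py  (the file name is a placeholder)
    with open(sys.argv[1], encoding="utf-8") as f:
        for caller, callee in sorted(set(call_edges(f.read()))):
            print(f"{caller} -> {callee}")  # edge list for a graph/network viewer
```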

Urdu News Classification using Application of Machine Learning Algorithms on News Headline

  • Khan, Muhammad Badruddin
    • International Journal of Computer Science & Network Security / Vol. 21, No. 2 / pp.229-237 / 2021
  • Our modern 'information-hungry' age demands delivery of information at unprecedentedly fast rates. Timely delivery of noteworthy information about recent events can help people from different segments of life in a number of ways. As the world has become a global village, the flow of news, in terms of both volume and speed, demands the involvement of machines to help humans handle the enormous amount of data. News is presented to the public in the form of video, audio, images, and text. News text available on the internet is a source of knowledge for billions of internet users. The Urdu language is spoken and understood by millions of people from the Indian subcontinent, and the availability of online Urdu news enables them to improve their understanding of the world and inform their decisions. This paper uses available online Urdu news data to train machines to automatically categorize news. Various machine learning algorithms were trained on news headlines, and the results demonstrate that the Bernoulli Naïve Bayes (Bernoulli NB) and Multinomial Naïve Bayes (Multinomial NB) algorithms outperformed the other algorithms on all performance parameters. The maximum accuracy achieved on the dataset was 94.278% by the Multinomial NB classifier, followed by the Bernoulli NB classifier with 94.274%, when Urdu stop words were removed from the dataset. The results suggest that the short text of news headlines can be used as input for text categorization.
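
The abstract does not detail the feature pipeline; as an illustration of headline classification with the two best-performing classifiers it names, here is a minimal scikit-learn sketch (the CSV path, column names, and the tiny stop-word list are hypothetical placeholders).

```python
# Illustrative sketch only: Urdu headline classification with Multinomial and
# Bernoulli Naive Bayes. File path, column names, and stop words are placeholders.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB, MultinomialNB

URDU_STOP_WORDS = ["کا", "کی", "کے", "میں", "اور"]  # tiny example list, not exhaustive

df = pd.read_csv("urdu_headlines.csv")  # expects columns: headline, category
X_train, X_test, y_train, y_test = train_test_split(
    df["headline"], df["category"], test_size=0.2, random_state=42)

vectorizer = CountVectorizer(stop_words=URDU_STOP_WORDS)
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

for model in (MultinomialNB(), BernoulliNB()):
    model.fit(X_train_vec, y_train)
    acc = accuracy_score(y_test, model.predict(X_test_vec))
    print(f"{type(model).__name__}: accuracy = {acc:.3f}")
```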

Weibo Disaster Rumor Recognition Method Based on Adversarial Training and Stacked Structure

  • Diao, Lei;Tang, Zhan;Guo, Xuchao;Bai, Zhao;Lu, Shuhan;Li, Lin
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 10 / pp.3211-3229 / 2022
  • To address problems in Weibo disaster-rumor recognition, such as the lack of a corpus, poor text standardization, difficulty in learning semantic information, and the relatively simple semantic features of disaster-rumor text, this paper takes Sina Weibo as the data source, constructs a dataset for Weibo disaster-rumor recognition, and proposes a deep learning model, BERT_AT_Stacked LSTM, for the task. First, adversarial perturbations are added to the embedding vector of each word to generate adversarial samples that enrich the features of rumor text, and adversarial training is carried out to compensate for the relatively simple text features of disaster rumors. Second, the BERT part obtains word-level semantic information for each Weibo text and generates a hidden vector containing sentence-level feature information. Finally, a Stacked Long Short-Term Memory (Stacked LSTM) structure learns the hidden, complex semantic information of poorly standardized Weibo texts. The experimental results show that, compared with the baseline models, the proposed model has clear advantages in recognizing disaster rumors on Weibo, with an F1_Score of 97.48%; it was also tested on an open general-domain dataset, achieving an F1_Score of 94.59%, which indicates good generalization.
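
The paper's exact BERT_AT_Stacked LSTM implementation is not reproduced here; the sketch below only outlines the described idea under stated assumptions (a Chinese BERT encoder, a two-layer bidirectional LSTM head, and an FGM-style perturbation of the word embeddings for adversarial training). The model name, sizes, and perturbation details are assumptions, not the authors' released code.

```python
# Rough sketch under assumptions, not the authors' code: BERT encoder feeding a
# stacked (two-layer) LSTM classifier, plus an FGM-style adversarial perturbation
# applied to BERT's word-embedding weights during training.
import torch
import torch.nn as nn
from transformers import BertModel

class BertStackedLSTM(nn.Module):
    def __init__(self, num_classes: int = 2, hidden: int = 256):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-chinese")  # assumed encoder
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                            num_layers=2, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        hidden_states = self.bert(input_ids=input_ids,
                                  attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.lstm(hidden_states)      # (batch, seq_len, 2*hidden)
        return self.classifier(lstm_out[:, 0, :])   # first-token representation

def fgm_perturb_embeddings(model: BertStackedLSTM, epsilon: float = 1.0) -> torch.Tensor:
    """Add an FGM-style perturbation to the word-embedding weights in place.

    Call after loss.backward(); run a second forward/backward pass on the
    perturbed embeddings, then restore the returned backup. Sketch only.
    """
    emb = model.bert.embeddings.word_embeddings.weight
    backup = emb.data.clone()
    if emb.grad is not None:
        norm = torch.norm(emb.grad)
        if norm > 0 and not torch.isnan(norm):
            emb.data.add_(epsilon * emb.grad / norm)
    return backup
```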

Synthesis of β-Galactooligosaccharide Using Bifidobacterial β-Galactosidase Purified from Recombinant Escherichia coli

  • Oh, So Young;Youn, So Youn;Park, Myung Soo;Kim, Hyoung-Geun;Baek, Nam-In;Li, Zhipeng;Ji, Geun Eog
    • Journal of Microbiology and Biotechnology / Vol. 27, No. 8 / pp.1392-1400 / 2017
  • Galactooligosaccharides (GOSs) are known to be selectively utilized by Bifidobacterium, which can bring about healthy changes in the composition of the intestinal microflora. In this study, β-GOSs were synthesized using a bifidobacterial β-galactosidase (G1) purified from recombinant E. coli, with a high GOS yield, high productivity, and enhanced bifidogenic activity. The purified recombinant G1 showed maximum production of β-GOSs at pH 8.5 and 45°C. Matrix-assisted laser desorption ionization time-of-flight mass spectrometry analysis of the major peaks of the produced β-GOSs showed MWs of 527 and 689, indicating the synthesis of β-GOSs at degrees of polymerization (DP) of 3 and 4, respectively. The trisaccharide was identified as β-D-galactopyranosyl-(1→4)-O-β-D-galactopyranosyl-(1→4)-O-β-D-glucopyranose, and the tetrasaccharide as β-D-galactopyranosyl-(1→4)-O-β-D-galactopyranosyl-(1→4)-O-β-D-galactopyranosyl-(1→4)-O-β-D-glucopyranose. The maximal production yield of GOSs was as high as 25.3% (w/v) using the purified recombinant β-galactosidase and 36% (w/v) lactose as a substrate at pH 8.5 and 45°C. After 140 min of reaction under these conditions, 268.3 g/l of GOSs was obtained. With regard to the prebiotic effect, all of the tested Bifidobacterium strains except B. breve grew well in BHI medium containing β-GOS as the sole carbon source, whereas lactobacilli and Streptococcus thermophilus scarcely grew in the same medium. Among the 17 pathogens tested, only Bacteroides fragilis, Clostridium ramosum, and Enterobacter cloacae grew in BHI medium containing β-GOS as the sole carbon source; the remaining pathogens did not grow. Consequently, β-GOSs are expected to contribute to a beneficial change in the intestinal microbial flora.

Data Dictionary 기반의 R Programming을 통한 비정형 Text Mining Algorithm 연구 (A study on unstructured text mining algorithm through R programming based on data dictionary)

  • 이종화;이현규
    • 한국산업정보학회논문지 / Vol. 20, No. 2 / pp.113-124 / 2015
  • Unlike structured data collected and stored using predeclared structures, the analysis of unstructured data written in the natural language that ordinary users employ every day in the Web 2.0 era has a far wider range of applications than before. Text mining, the subject of this study, is a big-data analysis technique for extracting meaningful information from text that not only grows explosively in volume but also carries human sentiment as directly expressed. For this research, the open-source statistical analysis software R was used, and algorithms were implemented for collecting, storing, and preprocessing unstructured text documents in a web environment and for the analysis and visualization tasks (frequency analysis, cluster analysis, word cloud, social network analysis). In particular, to focus more sharply on the researcher's own research domain, a keyword-extraction technique referencing a data dictionary was used. Applying R to an actual case showed that it is highly useful as statistical analysis software, running on various operating systems and supporting interfaces with common languages.
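
The pipeline above is implemented in R; purely to illustrate the dictionary-filtered frequency-analysis step in a language-neutral way, here is a tiny Python sketch (the dictionary entries and sample text are hypothetical placeholders, not the paper's data).

```python
# Illustration only: the paper uses R; this sketch mimics just the step of
# counting term frequencies restricted to a data dictionary of domain keywords.
import re
from collections import Counter

DATA_DICTIONARY = {"mining", "cluster", "network", "frequency"}  # placeholder terms

def dictionary_term_frequencies(text: str, dictionary: set) -> Counter:
    """Count only the (lowercased) tokens that appear in the data dictionary."""
    tokens = re.findall(r"[A-Za-z가-힣]+", text.lower())
    return Counter(tok for tok in tokens if tok in dictionary)

if __name__ == "__main__":
    sample = "Text mining computes term frequency and draws a cluster network."
    print(dictionary_term_frequencies(sample, DATA_DICTIONARY))
```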

Text-to-speech 시스템에서의 화자 변환 기능 구현 (Implementation of the Voice Conversion in the Text-to-speech System)

  • 황철규;김형순
    • 한국음향학회:학술대회논문집 / Proceedings of the 1999 Conference, Vol. 18, No. 2 / pp.33-36 / 1999
  • In this paper, to overcome the limitation that conventional text-to-speech (TTS) synthesis produces monotonous synthetic speech from a single predetermined speaker, we implemented a voice conversion function that can express the timbre of an arbitrary speaker. The implemented method models a speaker's acoustic space with a Gaussian Mixture Model (GMM), enabling voice conversion based on continuous probability distributions. Using the joint density function of the feature vectors of the source and target speakers, we derived a conversion function that minimizes the squared error between the target speaker's acoustic-space feature vectors and the converted vectors, and used it to transform the spectral envelope by vector mapping. For prosody conversion, the speech signal was modeled with a sinusoidal model, and the analyzed prosodic information (pitch, duration) was converted with its mean values taken into account. For performance evaluation, a VQ-mapping method was also implemented, and the normalized cepstral distances of the two approaches were computed and compared. At synthesis time, an ABS-OLA-based sinusoidal modeling scheme was adopted, producing natural-sounding synthetic speech.
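
The conversion function described (a joint-density GMM mapping that minimizes the squared error between converted and target spectral vectors) usually takes the following form; the notation below is assumed for illustration, and the paper's exact parameterization may differ:

$$F(\mathbf{x}) = \sum_{i=1}^{M} p_i(\mathbf{x})\left[\boldsymbol{\mu}_i^{y} + \boldsymbol{\Sigma}_i^{yx}\left(\boldsymbol{\Sigma}_i^{xx}\right)^{-1}\left(\mathbf{x}-\boldsymbol{\mu}_i^{x}\right)\right], \qquad p_i(\mathbf{x}) = \frac{w_i\,\mathcal{N}\!\left(\mathbf{x};\boldsymbol{\mu}_i^{x},\boldsymbol{\Sigma}_i^{xx}\right)}{\sum_{j=1}^{M} w_j\,\mathcal{N}\!\left(\mathbf{x};\boldsymbol{\mu}_j^{x},\boldsymbol{\Sigma}_j^{xx}\right)}$$

where $\mathbf{x}$ is a source-speaker spectral feature vector, the GMM with $M$ mixtures and weights $w_i$ is trained on joint source-target vectors, $\boldsymbol{\mu}_i^{x}, \boldsymbol{\mu}_i^{y}$ are the source and target mean subvectors, and $\boldsymbol{\Sigma}_i^{xx}, \boldsymbol{\Sigma}_i^{yx}$ are the corresponding blocks of the $i$-th covariance matrix.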


A Study of Main Contents Extraction from Web News Pages based on XPath Analysis

  • Sun, Bok-Keun
    • 한국컴퓨터정보학회논문지 / Vol. 20, No. 7 / pp.1-7 / 2015
  • Data on the internet can be used in various fields, for example as a data source for information retrieval (IR), data mining, and knowledge-based information services, but it also contains a great deal of unnecessary information. Removing this unnecessary data is a problem that must be solved before studying knowledge-based information services built on web-page data, and in this paper we address it through the implementation of XTractor (XPath Extractor). Since XPath is used to navigate the data elements and attribute data in an XML document, the XPath analysis is carried out through XTractor. XTractor extracts the main text by parsing the HTML, grouping XPaths, and detecting the XPath that contains the main data. As a result, the recognition and precision rates were 97.9% and 93.9%, respectively, except for a few cases in a large amount of experimental data, confirming that the main text of news pages can be extracted properly.
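
XTractor itself is not publicly specified in this abstract; as an illustration of the XPath-grouping idea only, the Python/lxml sketch below groups text nodes by the XPath of their parent element and keeps the group with the most text (the URL is a placeholder, and real news pages need more filtering).

```python
# Illustration only (not XTractor): group text by parent-element XPath and
# return the group with the most accumulated text as the presumed main body.
from collections import defaultdict

import requests
from lxml import html

def main_text_by_xpath(url: str) -> str:
    page = requests.get(url, timeout=10)
    tree = html.fromstring(page.content)
    root = tree.getroottree()

    text_by_path = defaultdict(list)
    for element in tree.iter():
        if element.tag in ("script", "style"):
            continue
        text = (element.text or "").strip()
        parent = element.getparent()
        if text and parent is not None:
            text_by_path[root.getpath(parent)].append(text)

    # Assume the XPath whose children carry the most text is the article body.
    best_path = max(text_by_path, key=lambda p: sum(len(t) for t in text_by_path[p]))
    return "\n".join(text_by_path[best_path])

if __name__ == "__main__":
    print(main_text_by_xpath("https://example.com/news/article"))  # placeholder URL
```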