• Title/Abstract/Keyword: Corpus-based


Korean Red Ginseng extract ameliorates demyelination by inhibiting infiltration and activation of immune cells in cuprizone-administrated mice

  • Min Jung Lee;Jong Hee Choi;Tae Woo Kwon;Hyo-Sung Jo;Yujeong Ha;Seung-Yeol Nah;Ik-Hyun Cho
    • Journal of Ginseng Research
    • /
    • v.47 no.5
    • /
    • pp.672-680
    • /
    • 2023
  • Background: Korean Red Ginseng (KRG), the steamed root of Panax ginseng, has pharmacological activities against immunological and neurodegenerative disorders. However, the role of KRG extract (KRGE) in multiple sclerosis (MS) remains unclear. Purpose: To determine whether KRGE could inhibit demyelination in the corpus callosum (CC) of the cuprizone (CPZ)-induced murine model of MS. Methods: Adult male mice were fed a standard chow diet or a chow diet supplemented with 0.2% (w/w) CPZ ad libitum for six weeks to induce demyelination, while simultaneously receiving distilled water (DW) alone or KRGE-DW (0.004%, 0.02%, and 0.1% KRGE) as drinking water. Results: Administration of KRGE-DW alleviated demyelination and oligodendrocyte degeneration, in association with inhibition of infiltration and activation of resident microglia and monocyte-derived macrophages as well as downregulation of proinflammatory mediators in the CC of CPZ-fed mice. KRGE-DW also attenuated infiltration of Th1 and Th17 cells, in line with inhibited mRNA expression of IFN-γ and IL-17, respectively, in the CC. These positive effects of KRGE-DW mitigated behavioral dysfunction, as assessed by the elevated plus maze and rotarod tests. Conclusion: The results strongly suggest that KRGE-DW inhibits CPZ-induced demyelination through its oligodendroglial protective and anti-inflammatory activities, by inhibiting infiltration and activation of immune cells. Thus, KRGE may have potential as a therapeutic intervention for MS.

Linac Based Radiosurgery for Cerebral Arteriovenous Malformations (선형가속기 방사선 수술을 이용한 뇌동정맥기형의 치료)

  • Lee, Sung Yeal;Son, Eun Ik;Kim, Ok Bae;Choi, Tae Jin;Kim, Dong Won;Yim, Man Bin;Kim, In Hong
    • Journal of Korean Neurosurgical Society
    • /
    • v.29 no.8
    • /
    • pp.1030-1036
    • /
    • 2000
  • Objective : The aim of this study was to retrospectively analyze the safety and efficacy of the Linac-based Photon Knife Radiosurgery System (PKRS) for the treatment of cerebral arteriovenous malformations (AVMs). Patients and Methods : The authors analyzed the clinical methods and results of the ten patients who were followed up for more than two years, among the 18 patients who underwent radiosurgery for arteriovenous malformation from June 1992 to December 1997 with the Linac-based PKRS developed in our hospital. Results : The average age of the patients was 30.4 years (range 13-49); seven were male and three were female. The initial clinical symptoms were headache in five patients, seizure in three, hemiparesis in one, and vomiting in one. Before radiosurgery, computed tomography, MRI, and cerebral angiography were performed. The arteriovenous malformation was located in the cerebral hemisphere in six patients, the thalamus in two, the brainstem in one, and the corpus callosum in one. The nidus was smaller than 3 cm in seven patients and larger than 3 cm in three. Computed tomography, MRI, and cerebral angiography were repeated at six months, one year, and two years after PKRS radiosurgery to assess completeness of obliteration. Complete obliteration was observed in six of the ten cases and partial obliteration in four; notably, complete obliteration was achieved in six of the seven small AVMs (smaller than 3 cm), a complete obliteration rate of 85.7%. All patients tolerated the treatment, and no significant complications were seen. Conclusion : In this study, Linac-based radiosurgery using the PKRS for arteriovenous malformations showed excellent results; therefore, the authors believe it is an ideal method for small or deep-seated AVMs.


Three-Phase English Syntactic Analysis for Improving the Parsing Efficiency (영어 구문 분석의 효율 개선을 위한 3단계 구문 분석)

  • Kim, Sung-Dong
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.1
    • /
    • pp.21-28
    • /
    • 2016
  • The performance of an English-Korean machine translation system depends heavily on its English parser. The parser in this paper is part of a rule-based English-Korean MT system; it includes many syntactic rules and performs chart-based parsing. Because of the large number of syntactic rules, the parser generates too many structures, so much time and memory are required. The rule-based parser also has difficulty analyzing and translating long sentences containing commas, because they cause high parsing complexity. In this paper, we propose a 3-phase parsing method with sentence segmentation to efficiently translate the long sentences that commonly appear in practice. Each phase of the syntactic analysis applies its own independent syntactic rules in order to reduce parsing complexity. For this purpose, we classify the syntactic rules into three classes and design a 3-phase parsing algorithm. In particular, the syntactic rules in the third class cover sentence structures built around commas. We also present an automatic method for acquiring third-class rules from syntactic analyses of a corpus, with which we aim to continuously improve the coverage of the parser. The experimental results show that the proposed 3-phase parsing method is superior to the previous parsing method, which uses only intra-sentence segmentation, in terms of parsing speed and memory efficiency while maintaining translation quality.
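As a rough illustration of the phased idea described in this abstract, the sketch below applies a different rule class in each phase and joins comma-delimited segments only in the last phase. The toy grammar, tags, and greedy reduction loop are hypothetical stand-ins, not the paper's chart parser or its actual rule classes.

```python
# A toy illustration of 3-phase parsing with sentence segmentation.
# Rule classes, tags, and the reduction loop are hypothetical stand-ins.

PHASE1_RULES = {("Det", "N"): "NP"}                    # class 1: basic constituents
PHASE2_RULES = {("V", "NP"): "VP", ("NP", "VP"): "S"}  # class 2: larger constituents
PHASE3_RULES = {("S", ",", "S"): "S"}                  # class 3: comma-joined structures

def parse_phase(tags, rules):
    """Greedy bottom-up reduction using only one phase's rule class."""
    changed = True
    while changed:
        changed = False
        for size in (3, 2):
            for i in range(len(tags) - size + 1):
                key = tuple(tags[i:i + size])
                if key in rules:
                    tags = tags[:i] + [rules[key]] + tags[i + size:]
                    changed = True
                    break
            if changed:
                break
    return tags

def three_phase_parse(tagged_sentence):
    # Split the tag sequence at commas; phases 1 and 2 work per segment.
    segments, current = [], []
    for tag in tagged_sentence:
        if tag == ",":
            segments.append(current)
            current = []
        else:
            current.append(tag)
    segments.append(current)

    parsed = []
    for seg in segments:
        seg = parse_phase(seg, PHASE1_RULES)
        seg = parse_phase(seg, PHASE2_RULES)
        parsed.extend(seg + [","])
    parsed = parsed[:-1]  # drop the trailing comma marker

    # Phase 3: combine the segment-level structures around the commas.
    return parse_phase(parsed, PHASE3_RULES)

# "The dog chased the cat, the bird watched the hawk" (toy POS tags)
print(three_phase_parse(["Det", "N", "V", "Det", "N", ",",
                         "Det", "N", "V", "Det", "N"]))   # -> ['S']
```

Because each phase sees only its own rule class, the chart in any single pass stays small, which is the source of the speed and memory savings the abstract claims.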

A Deep Learning-based Depression Trend Analysis of Korean on Social Media (딥러닝 기반 소셜미디어 한글 텍스트 우울 경향 분석)

  • Park, Seojeong;Lee, Soobin;Kim, Woo Jung;Song, Min
    • Journal of the Korean Society for Information Management
    • /
    • v.39 no.1
    • /
    • pp.91-117
    • /
    • 2022
  • The number of depressed patients in Korea and around the world is rapidly increasing every year. However, most mentally ill patients are unaware that they are suffering from the disorder, so adequate treatment is not provided. If depressive symptoms are neglected, they can lead to suicide, anxiety, and other psychological problems. Therefore, early detection and treatment of depression are very important for improving mental health. To address this problem, this study presents a deep learning-based depression tendency classification model using Korean social media text. After collecting data from Naver KnowledgeiN, Naver Blog, Hidoc, and Twitter, DSM-5 major depressive disorder diagnostic criteria were used to classify and annotate classes according to the number of depressive symptoms. Afterwards, TF-IDF analysis and word co-occurrence analysis were performed to examine the characteristics of each class of the constructed corpus. In addition, word embedding, dictionary-based sentiment analysis, and LDA topic modeling were performed to generate a depression tendency classification model using various text features. Through this, the embedded text, sentiment score, and topic number for each document were calculated and used as text features. As a result, the highest accuracy of 83.28% was achieved when depression tendency was classified with the KorBERT-based model combining the embedded text with both the sentiment score and the topic of each document. This study is significant in that it establishes a classification model for Korean depression tendency with improved performance using various text features and enables early detection of potentially depressed users among Korean online community users, allowing rapid treatment and prevention and thereby helping to promote the mental health of Korean society.
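A minimal sketch of the feature-combination step described above, with stand-ins for every component: TF-IDF vectors in place of KorBERT embeddings, a toy negative-word lexicon in place of the sentiment dictionary, and hand-assigned topic ids in place of LDA topics. It only illustrates how embedded text, sentiment score, and topic number can be concatenated into one feature matrix for a classifier.

```python
# Simplified stand-in for "embedded text + sentiment score + topic" features.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = [
    "I feel hopeless and cannot sleep at night",
    "Had a great day hiking with friends",
    "Nothing matters anymore, I am so tired of everything",
    "Looking forward to the concert this weekend",
]
labels = [1, 0, 1, 0]            # 1 = depressive tendency (toy annotation)
topics = [2, 0, 2, 1]            # stand-in for LDA topic assignments
NEG_WORDS = {"hopeless", "tired", "nothing", "cannot"}   # toy sentiment lexicon

def sentiment_score(text):
    tokens = text.lower().split()
    return -sum(tok in NEG_WORDS for tok in tokens) / max(len(tokens), 1)

vectorizer = TfidfVectorizer()
X_text = vectorizer.fit_transform(docs).toarray()        # stands in for embeddings
X_extra = np.array([[sentiment_score(d), t] for d, t in zip(docs, topics)])
X = np.hstack([X_text, X_extra])                         # text + sentiment + topic

clf = LogisticRegression().fit(X, labels)
test = "I am exhausted and feel hopeless"
x = np.hstack([vectorizer.transform([test]).toarray()[0],
               [sentiment_score(test), 2]])
print(clf.predict([x]))          # likely [1] on this toy data
```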

Restoring Omitted Sentence Constituents in Encyclopedia Documents Using Structural SVM (Structural SVM을 이용한 백과사전 문서 내 생략 문장성분 복원)

  • Hwang, Min-Kook;Kim, Youngtae;Ra, Dongyul;Lim, Soojong;Kim, Hyunki
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.131-150
    • /
    • 2015
  • Omission of noun phrases for obligatory cases is a common phenomenon in Korean and Japanese sentences that is not observed in English. In encyclopedia texts, when an argument of a predicate can be filled with a noun phrase co-referential with the title, the argument is omitted especially easily. The omitted noun phrase is called a zero anaphor or zero pronoun. Encyclopedias like Wikipedia are a major source for information extraction by intelligent application systems such as information retrieval and question answering systems. However, omission of noun phrases degrades the quality of information extraction. This paper deals with the problem of developing a system that can restore omitted noun phrases in encyclopedia documents. The problem our system deals with is closely related to zero anaphora resolution, one of the important problems in natural language processing. A noun phrase existing in the text that can be used for restoration is called an antecedent; an antecedent must be co-referential with the zero anaphor. While the candidates for the antecedent are only noun phrases in the same text in conventional zero anaphora resolution, the title is also a candidate in our problem. In our system, the first stage is in charge of detecting the zero anaphor. In the second stage, antecedent search is carried out over the candidates. If antecedent search fails, an attempt is made, in the third stage, to use the title as the antecedent. The main characteristic of our system is the use of a structural SVM for finding the antecedent. The noun phrases in the text that appear before the position of the zero anaphor comprise the search space. The main technique used in previous research is to perform binary classification over all the noun phrases in the search space; the noun phrase classified as an antecedent with the highest confidence is then selected. In this paper, however, we propose to view antecedent search as the problem of assigning antecedent-indicator labels to a sequence of noun phrases. In other words, sequence labeling is employed for antecedent search in the text; we are the first to suggest this idea. To perform sequence labeling, we use a structural SVM that receives a sequence of noun phrases as input and returns a sequence of labels as output. An output label takes one of two values: one indicating that the corresponding noun phrase is the antecedent and the other indicating that it is not. The structural SVM we used is based on a modified Pegasos algorithm, which exploits a subgradient descent methodology for optimization. To train and test our system, we selected a set of Wikipedia texts and constructed an annotated corpus in which gold-standard answers, such as zero anaphors and their possible antecedents, are provided. Training examples were prepared from the annotated corpus and used to train the SVMs and test the system. For zero anaphor detection, sentences are parsed by a syntactic analyzer and omitted subject or object cases are identified; thus the performance of our system depends on that of the syntactic analyzer, which is a limitation of our system. When an antecedent is not found in the text, our system tries to use the title to restore the zero anaphor; this is based on binary classification using a regular SVM. The experiment showed that our system achieves F1 = 68.58%, which means that a state-of-the-art system can be developed with our technique. It is expected that future work enabling the system to utilize semantic information can lead to a significant performance improvement.
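The Pegasos-style subgradient optimization mentioned above can be illustrated with a plain binary SVM. The sketch below is that simpler case on synthetic "is this noun phrase the antecedent?" features; it is not the authors' modified structural (sequence-labeling) SVM, and the data are not from their Wikipedia corpus.

```python
# A minimal Pegasos-style subgradient SVM, shown only to illustrate the
# optimization style the paper builds on. Features and labels are synthetic.
import numpy as np

def pegasos_train(X, y, lam=0.1, epochs=50, seed=0):
    """y in {-1, +1}; returns a weight vector for hinge loss + L2 penalty."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)                  # decreasing step size
            margin = y[i] * (w @ X[i])
            if margin < 1:                          # hinge loss is active
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:
                w = (1 - eta * lam) * w
    return w

# Toy candidate features: [distance to zero anaphor, case-match indicator]
X = np.array([[0.1, 1.0], [0.9, 0.0], [0.2, 1.0], [0.8, 0.0]])
y = np.array([+1, -1, +1, -1])       # +1 = antecedent, -1 = not (toy labels)
w = pegasos_train(X, y)
print(np.sign(X @ w))                # should recover the labels on this toy set
```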

A Semantic Text Model with Wikipedia-based Concept Space (위키피디어 기반 개념 공간을 가지는 시멘틱 텍스트 모델)

  • Kim, Han-Joon;Chang, Jae-Young
    • The Journal of Society for e-Business Studies
    • /
    • v.19 no.3
    • /
    • pp.107-123
    • /
    • 2014
  • Current text mining techniques suffer from the problem that conventional text representation models cannot express the semantic or conceptual information of textual documents written in natural language. Conventional text models, which include the vector space model, the Boolean model, statistical models, and the tensor space model, represent documents as bags of words. These models express documents only with term literals for indexing and frequency-based weights for the corresponding terms; that is, they ignore semantic information, word-order information, and structural information of terms. Most text mining techniques have been developed under the assumption that documents are represented with such 'bag-of-words' based text models. However, in the big data era, a new paradigm of text representation is required that can analyze huge amounts of textual documents more precisely. Our text model regards the 'concept' as an independent space, on a par with the 'term' and 'document' spaces used in the vector space model, and it expresses the relatedness among the three spaces. To develop the concept space, we use Wikipedia articles, each of which defines a single concept. Consequently, a document collection is represented as a third-order tensor with semantic information, and the proposed model is therefore called the text cuboid model in this paper. Through experiments using the popular 20 Newsgroups document corpus, we demonstrate the superiority of the proposed text model in terms of document clustering and concept clustering.
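A toy construction of a term × document × concept tensor in the spirit of the text cuboid idea, assuming a hand-made term-to-concept dictionary as a stand-in for Wikipedia concept pages; the documents and mappings are illustrative only.

```python
# Toy term x document x concept tensor: cell (t, d, c) counts how often term t
# occurs in document d while being associated with concept c. The concept
# dictionary is a hand-made stand-in for Wikipedia concept pages.
import numpy as np

docs = ["apple released a new phone", "the orchard grows apple and pear trees"]
concept_of_term = {                      # assumed term -> concept mapping
    "apple": ["Apple_Inc.", "Apple_(fruit)"],
    "phone": ["Smartphone"],
    "orchard": ["Orchard"],
    "pear": ["Pear"],
}

terms = sorted({t for d in docs for t in d.split() if t in concept_of_term})
concepts = sorted({c for cs in concept_of_term.values() for c in cs})
t_idx = {t: i for i, t in enumerate(terms)}
c_idx = {c: i for i, c in enumerate(concepts)}

cuboid = np.zeros((len(terms), len(docs), len(concepts)))
for d, doc in enumerate(docs):
    for tok in doc.split():
        if tok in concept_of_term:
            for c in concept_of_term[tok]:
                cuboid[t_idx[tok], d, c_idx[c]] += 1   # raw co-occurrence count

print(cuboid.shape)            # (terms, documents, concepts)
print(cuboid[t_idx["apple"]])  # how "apple" relates to each concept per document
```

In the real model the concept axis would come from Wikipedia and the weights would not be raw counts, but the third-order structure shown here is the point of the representation.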

Relationship between the Conception Rate after Estrus Induction using $PGF_2{\alpha}$ and Other Parameters in Holstein Dairy Cows (젖소에서 $PGF_2{\alpha}$ 투여에 의한 발정 유도 후 수태율과 다른 인자와의 관계)

  • Park, Chul-Ho;Lim, Won-Ho;Suh, Guk-Hyun;Oh, Ki-Seok;Son, Chang-Ho
    • Journal of Embryo Transfer
    • /
    • v.25 no.3
    • /
    • pp.133-139
    • /
    • 2010
  • The purpose of this study was to determine the relationship between conception rate and other parameters (body condition score, BCS; progesterone concentration; and follicle size) before estrus induction with $PGF_2{\alpha}$. The conception rate, regardless of AI (artificial insemination) time, was 47.5%, 67.5%, and 48.5% in cows with a BCS below 2.75, between 2.75 and 3.25, and above 3.25 at $PGF_2{\alpha}$ injection, respectively. The conception rate regardless of BCS was 59.0% in cows inseminated based on detected estrus and 46.2% in cows inseminated at 72 to 80 hours (timed artificial insemination, TAI) after $PGF_2{\alpha}$ injection. The conception rate regardless of AI time was 43.0% in cows with low progesterone concentrations (less than 1.0 ng/ml) and 67.5% in cows with high progesterone concentrations (more than 1.0 ng/ml) at $PGF_2{\alpha}$ injection. The conception rate regardless of progesterone concentration was 59.9% in cows inseminated based on detected estrus and 48.1% in cows receiving TAI after $PGF_2{\alpha}$ injection. The conception rate regardless of AI time was 36.0% in cows with small dominant follicles (less than 5 mm), 56.0% in cows with follicles of 5 to 10 mm, and 65.5% in cows with large dominant follicles (more than 10 mm) at $PGF_2{\alpha}$ injection. The conception rate regardless of follicle size was 57.3% in cows inseminated based on detected estrus and 47.6% in cows receiving TAI after $PGF_2{\alpha}$ injection. These results indicate that if cows with a BCS of 2.75 to 3.25, an active corpus luteum, and/or a large dominant follicle (more than 10 mm) are used for estrus induction, the conception rate will be higher.

Trends in Incidence of Common Cancers in Iran

  • Enayatrad, Mostafa;Mirzaei, Maryam;Salehiniya, Hamid;Karimirad, Mohammad Reza;Vaziri, Siavash;Mansouri, Fiezollah;Moudi, Asieh
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.17 no.sup3
    • /
    • pp.39-42
    • /
    • 2016
  • Cancer is a major public health problem in Iran. The aim of this study was to evaluate trends in the incidence of the ten most common cancers in Iran, based on the national cancer registry reports from 2004 to 2009. This epidemiological study was carried out using existing age-standardized cancer incidence estimates from the national cancer registry reports of the Ministry of Health in Iran. The data were analyzed with a test for linear trend, and $P{\geq}0.05$ was taken as the significance level. Totals of 41,169 and 32,898 cases of cancer were registered in males and females, respectively, during these years. Overall age-standardized incidence rates (ASRs) per 100,000 population according to primary site were 125.6 and 113.4 in males and females, respectively. Between 2004 and 2009, the ten most common cancers (excluding skin cancer) in males were stomach (16.2), bladder (12.6), prostate (11), colon-rectum (10.14), hematopoietic system (7.1), lung (6.1), esophagus (6.4), brain (3.2), lymph node (3.8), and larynx (3.4); in females they were breast (27.4), colon-rectum (9.3), stomach (7.6), esophagus (6.4), hematopoietic system (4.9), thyroid (3.9), ovary (3.6), corpus uteri (2.9), bladder (3.2), and lung (2.6). Moreover, the results showed that skin cancer was estimated to be the most common cancer in both sexes. The lowest and the highest incidences in females and males were reported in 2004 and 2009, respectively. Over this period, the incidence of cancer in both sexes increased significantly (p<0.01). As in other less developed and epidemiologically transitioning countries, the age-standardized incidence rate of cancer in Iran is rising. Given this increasing trend, the future burden of cancer in Iran is expected to become acute as the population ages. Determining and controlling potential risk factors of cancer should lead to a decrease in its burden.

Semi-supervised domain adaptation using unlabeled data for end-to-end speech recognition (라벨이 없는 데이터를 사용한 종단간 음성인식기의 준교사 방식 도메인 적응)

  • Jeong, Hyeonjae;Goo, Jahyun;Kim, Hoirin
    • Phonetics and Speech Sciences
    • /
    • v.12 no.2
    • /
    • pp.29-37
    • /
    • 2020
  • Recently, neural network-based deep learning algorithms have dramatically improved performance compared to classical Gaussian mixture model-hidden Markov model (GMM-HMM) automatic speech recognition (ASR) systems. In addition, research on end-to-end (E2E) speech recognition systems, which integrate the language modeling and decoding processes, has been actively conducted to better exploit the advantages of deep learning techniques. In general, E2E ASR systems consist of multiple layers in an encoder-decoder structure with attention. Therefore, E2E ASR systems require a large amount of speech-text paired data in order to achieve good performance. Obtaining speech-text paired data requires a great deal of human labor and time, and is a high barrier to building an E2E ASR system. Previous studies have therefore tried to improve the performance of E2E ASR systems using a relatively small amount of speech-text paired data, but most of them used only speech-only data or only text-only data. In this study, we propose a semi-supervised training method that enables an E2E ASR system to perform well on corpora from different domains by using both speech-only and text-only data. The proposed method adapts effectively to different domains, showing good performance in the target domain without degrading much in the source domain.
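One common semi-supervised adaptation recipe that fits this description is pseudo-labeling: decode unlabeled target-domain speech with the source-domain model and fine-tune on the resulting pairs. The schematic below uses a dummy model object and hypothetical file names, and it is not necessarily the exact training scheme of this paper.

```python
# Schematic pseudo-labeling loop for domain adaptation of an E2E ASR system.
# DummyASR and the file names are toy stand-ins, not a real ASR stack.

class DummyASR:
    """Placeholder for an attention-based encoder-decoder ASR model."""
    def transcribe(self, audio):
        return "pseudo transcript for " + audio          # stand-in decoding
    def fine_tune(self, pairs):
        print(f"fine-tuning on {len(pairs)} speech-text pairs")

def adapt_to_target_domain(model, paired_source, unpaired_target_audio):
    # 1) Decode unlabeled target-domain speech with the source-domain model.
    pseudo_pairs = [(a, model.transcribe(a)) for a in unpaired_target_audio]
    # 2) Fine-tune on source pairs plus pseudo-labeled target pairs, so the
    #    model adapts to the target domain without forgetting the source.
    model.fine_tune(paired_source + pseudo_pairs)
    return model

model = adapt_to_target_domain(
    DummyASR(),
    paired_source=[("src_utt_001.wav", "hello world")],
    unpaired_target_audio=["tgt_utt_001.wav", "tgt_utt_002.wav"],
)
```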

An Intelligent Marking System based on Semantic Kernel and Korean WordNet (의미커널과 한글 워드넷에 기반한 지능형 채점 시스템)

  • Cho Woojin;Oh Jungseok;Lee Jaeyoung;Kim Yu-Seop
    • The KIPS Transactions: Part A
    • /
    • v.12A no.6 s.96
    • /
    • pp.539-546
    • /
    • 2005
  • Recently, as the number of Internet users has grown explosively, e-learning has spread widely, along with remote evaluation of intellectual ability. However, only multiple-choice and/or objective tests have been applied to e-learning because of the difficulty of natural language processing. For rapid and fair intelligent marking of short-essay answers, this work utilizes heterogeneous linguistic knowledge. First, we construct a semantic kernel from an untagged corpus. Then the answer papers of students and instructors are transformed into vector form. Finally, we evaluate the similarity between the papers using the semantic kernel and decide whether an answer paper is correct or not based on the similarity values. For the construction of the semantic kernel, we used latent semantic analysis based on the vector space model. Furthermore, we try to alleviate the problem of information shortage by integrating Korean WordNet. For the construction of the semantic kernel, we collected 38,727 newspaper articles and extracted 75,175 indexed terms. In the experiments, a correlation coefficient of about 0.894 was obtained between the marking results of this system and those of human instructors.
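A compact sketch of the LSA-based semantic kernel idea, assuming a toy corpus, toy answers, and an arbitrary decision threshold; truncated SVD stands in for the paper's latent semantic analysis, and the Korean WordNet integration is omitted.

```python
# LSA-style semantic similarity for answer marking: factorize a term-document
# matrix, project answer vectors into the latent space, and threshold their
# cosine similarity. Corpus, answers, and threshold are toy stand-ins.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

corpus = [
    "photosynthesis converts light energy into chemical energy",
    "plants absorb carbon dioxide and release oxygen",
    "cells store chemical energy in glucose molecules",
]
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(corpus)              # term-document counts

svd = TruncatedSVD(n_components=2, random_state=0)
svd.fit(X)                                         # latent semantic space

def semantic_vector(text):
    return svd.transform(vectorizer.transform([text]))[0]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

reference = "photosynthesis stores light energy as chemical energy in glucose"
student = "light energy is converted into chemical energy stored in glucose"
similarity = cosine(semantic_vector(reference), semantic_vector(student))
print(round(similarity, 3), "-> correct" if similarity > 0.7 else "-> incorrect")
```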