• Title/Summary/Keyword: GLO

Search Results: 146

Effects of Dietary Xanthophylls and Seaweed By-Products on Growth Performance, Color and Antioxidant Properties in Broiler Chicks (Xanthophylls과 해조 부산물 첨가 급여가 육계의 사양성적, 육색 및 항산화 특성에 미치는 영향)

  • 김창혁;이성기;이규호
    • Food Science of Animal Resources
    • /
    • v.24 no.2
    • /
    • pp.128-134
    • /
    • 2004
  • This experiment was conducted to investigate the effects of dietary pigment sources on the growth performance, meat color, and antioxidant properties of broiler chicks. Experimental diets were formulated to be isocaloric and isonitrogenous throughout the experimental period, with a total xanthophyll content of 30 ppm. The trial ran for five weeks with six treatment groups: T1 (control), T2 (Oro Glo, natural yellow pigment), T3 (Kem Glo, natural red pigment), T4 (canthaxanthin, synthetic red pigment), T5 (astaxanthin, natural red pigment), and T6 (seaweed by-products). Body weight gain and feed intake were significantly lower (p<0.05) in T6 than in the other treatments. Mortality was lower in T2, T3, and T4 than in the control, but higher (p<0.05) in T5 and T6. The pigment sources had no effect on dressed carcass or abdominal fat pad (p>0.05). Gizzard weight was significantly lower in T6 (p<0.05) than in the other groups. Pigmentation of leg skin was significantly lower (p<0.05) in the control and T6. The effect of the dietary pigments was greater for red pigments than for yellow pigments, and greater for natural pigments than for synthetic ones. The peroxide value (POV), thiobarbituric acid reactive substances (TBARS), and pH of the chicken meat increased (p<0.05) in all treatments after 12 days of storage, and were higher (p<0.05) in the pigment-supplemented groups. No differences in CIE L* (lightness) or b* (yellowness) were found across storage days or xanthophyll sources. The a* (redness) after 12 days of storage decreased significantly (p<0.05) in all treatments, but remained higher in T4 and T5 than in the others. These results show that feeding xanthophyll sources to chicks can improve the color intensity and inhibit lipid oxidation of leg meat.

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.105-122
    • /
    • 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. For dimensionality reduction, the density of the data must be considered, since it has a significant influence on sentence classification performance. High-dimensional data require heavy computation, which in turn leads to high computational cost and overfitting in the model; a dimension reduction step is therefore necessary to improve model performance. Diverse methods have been proposed, ranging from simply reducing noise in the data, such as misspellings and informal text, to incorporating semantic and syntactic information. In addition, how text features are represented and selected affects classifier performance for sentence classification, one of the core tasks of Natural Language Processing. The common goal of dimensionality reduction is to find a latent space that is representative of the raw data in the observation space. Existing methods use various algorithms, such as feature extraction and feature selection, and also employ word embeddings, which learn low-dimensional vector representations of words that capture semantic and syntactic information. To improve performance, recent studies have suggested modifying the word dictionary according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations: once a feature selection algorithm marks certain words as unimportant, words similar to them are assumed to have little impact on sentence classification as well. This study proposes two ways to achieve more accurate classification, namely eliminating words selectively under specific rules and constructing word embeddings based on Word2Vec. To select words of low importance from the text, information gain is used to measure importance and cosine similarity is used to find similar words. First, words with comparatively low information gain values are removed from the raw text and a word embedding is trained. Second, words similar to the low-information-gain words are additionally removed and another embedding is trained. The filtered text and word embeddings are then fed into two deep learning models, a Convolutional Neural Network and an attention-based bidirectional LSTM. This study uses customer reviews on Kindle in Amazon.com, IMDB, and Yelp as datasets and classifies each with the deep learning models. Reviews that received more than five helpful votes and whose ratio of helpful votes exceeded 70% were classified as helpful; since Yelp only provides the number of helpful votes, 100,000 reviews with more than five helpful votes were extracted by random sampling from 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters, was applied to each dataset. To evaluate the proposed methods, their performance was compared with Word2Vec and GloVe embeddings trained on all words, and one of the proposed methods outperformed the embeddings that used all words. Removing unimportant words improves performance, but removing too many words lowers it.
For future research, diverse preprocessing schemes and an in-depth analysis of word co-occurrence for measuring word similarity should be considered. The proposed method was applied only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo can be combined with it, making it possible to identify effective combinations of word embedding and elimination methods.
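
A minimal sketch of the selective-elimination idea described in the abstract above, not the authors' code: information gain marks unimportant words, cosine similarity over a Word2Vec model expands that set, and the embedding is retrained on the filtered corpus. The toy corpus, thresholds, and hyperparameters are illustrative assumptions.

```python
# Sketch: IG-based word elimination plus cosine-similarity expansion (assumed thresholds).
import math
from collections import Counter
from gensim.models import Word2Vec

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values() if c)

def information_gain(doc_sets, labels, word):
    """IG of a word's presence/absence with respect to the class labels."""
    h_y = entropy(labels)
    with_w = [y for d, y in zip(doc_sets, labels) if word in d]
    without_w = [y for d, y in zip(doc_sets, labels) if word not in d]
    n = len(labels)
    h_cond = (len(with_w) / n) * entropy(with_w) + (len(without_w) / n) * entropy(without_w)
    return h_y - h_cond

# toy tokenized reviews with helpful(1)/unhelpful(0) labels (illustrative only)
docs = [["great", "battery", "life"], ["poor", "screen"], ["great", "screen"], ["poor", "battery"]]
labels = [1, 0, 1, 0]

vocab = {w for d in docs for w in d}
doc_sets = [set(d) for d in docs]
ig = {w: information_gain(doc_sets, labels, w) for w in vocab}

# 1) drop words whose information gain falls below an (assumed) threshold
low_ig = {w for w, g in ig.items() if g < 0.1}

# 2) also drop words similar to the low-IG words, via Word2Vec cosine similarity
w2v = Word2Vec(docs, vector_size=50, min_count=1, window=2, epochs=50)
similar = {s for w in low_ig if w in w2v.wv
           for s, sim in w2v.wv.most_similar(w, topn=3) if sim > 0.7}

removed = low_ig | similar
filtered_docs = [[w for w in d if w not in removed] for d in docs]

# retrain the embedding on the filtered corpus before feeding a CNN / BiLSTM classifier
if any(filtered_docs):
    w2v_filtered = Word2Vec(filtered_docs, vector_size=50, min_count=1, window=2, epochs=50)
print(sorted(removed))
```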

A Prediction of Northeast Asian Summer Precipitation Using Teleconnection (원격상관을 이용한 북동아시아 여름철 강수량 예측)

  • Lee, Kang-Jin;Kwon, MinHo
    • Atmosphere
    • /
    • v.25 no.1
    • /
    • pp.179-183
    • /
    • 2015
  • Even though state-of-the-art general circulation models have improved step by step, the seasonal predictability of the East Asian summer monsoon remains poor. In contrast, the seasonal predictability of the western North Pacific and Indian monsoon regions in dynamical models is relatively high. This study builds a canonical correlation analysis model for seasonal prediction using wind fields over the western North Pacific and the Indian Ocean from the Global Seasonal Forecasting System version 5 (GloSea5), and then assesses the predictability of this so-called hybrid model. In addition, we suggest a method for improving forecast skill by introducing a lagged ensemble technique.
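
A minimal sketch of a hybrid statistical-dynamical prediction along the lines described above. The arrays are random placeholders standing in for GloSea5 wind-field predictors and observed Northeast Asian summer precipitation, and the EOF truncation and number of CCA modes are assumptions, not the paper's settings.

```python
# Sketch: leave-one-out CCA hindcast with EOF-truncated wind predictors (placeholder data).
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_years, n_wind_grid, n_precip_grid = 20, 300, 15

X = rng.standard_normal((n_years, n_wind_grid))    # GloSea5 wind field, WNP + Indian Ocean (placeholder)
Y = rng.standard_normal((n_years, n_precip_grid))  # observed NE Asian summer precipitation (placeholder)

hindcast = np.empty_like(Y)
for t in range(n_years):
    train = np.arange(n_years) != t
    pca = PCA(n_components=5).fit(X[train])                     # EOF truncation of the wind field
    cca = CCA(n_components=3).fit(pca.transform(X[train]), Y[train])
    hindcast[t] = cca.predict(pca.transform(X[t:t + 1]))

# anomaly correlation at each precipitation grid point as a simple skill measure
skill = np.array([np.corrcoef(Y[:, j], hindcast[:, j])[0, 1] for j in range(n_precip_grid)])
print("mean anomaly correlation:", round(float(skill.mean()), 3))
```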

Word Embedding using word position information (단어의 위치정보를 이용한 Word Embedding)

  • Hwang, Hyunsun;Lee, Changki;Jang, HyunKi;Kang, Dongho
    • Annual Conference on Human and Language Technology
    • /
    • 2017.10a
    • /
    • pp.60-63
    • /
    • 2017
  • Word embedding, used to apply deep learning to natural language processing, represents words in a vector space; besides its dimensionality-reduction effect, it has the advantage that words with similar meanings have similar vector values. Because word embeddings need to be trained on large corpora to perform well, the widely used word2vec model simplifies the model for large-corpus training and focuses mainly on word co-occurrence frequencies, with the drawback that it does not use word position information. In this paper, we modify the existing word embedding training model so that it can be trained with word position information. Experimental results show that training word embeddings with word position information greatly improves syntactic performance on word-analogy tasks, with especially large gains for Korean, where word order can vary.
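
The abstract does not spell out the exact modification, so the following is only an illustrative sketch of one common way to expose word position to skip-gram-style training: tagging each context word with its relative offset (as in the structured skip-gram idea), so that separate context representations can be learned per position. The function name and toy corpus are hypothetical.

```python
# Illustrative sketch only: position-tagged context pairs, so that "단어@+1" and
# "단어@-2" become distinct context targets. Not necessarily the paper's method.
def position_tagged_pairs(sentence, window=2):
    """Yield (center, context@offset) training pairs with relative-position tags."""
    for i, center in enumerate(sentence):
        for off in range(-window, window + 1):
            j = i + off
            if off != 0 and 0 <= j < len(sentence):
                yield center, f"{sentence[j]}@{off:+d}"

corpus = [["나는", "학교에", "간다"], ["학교에", "나는", "간다"]]  # toy corpus
print(list(position_tagged_pairs(corpus[0])))
```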

Simulation Study on Ad Hoc Routing in Bluetooth Piconets (블루투스 피코넷에서의 Ad Hoc 라우팅 시뮬레이션)

  • Jeong, Kyeong-In;Jeong, Young-Sam;Lee, Hyuk-Joon;Chung, Kwang-Sue
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2001.10b
    • /
    • pp.1593-1596
    • /
    • 2001
  • Bluetooth can form a piconet consisting of one master and up to seven slaves, as well as a scatternet consisting of multiple piconets, and research on ad hoc PAN networking based on these structures has begun. This paper presents simulation results obtained by applying ad hoc routing algorithms to a piconet using a GloMoSim-based simulator for ad hoc networking over Bluetooth piconets. AODV and DSR were used as the ad hoc routing algorithms, and data transfer performance between master-slave and slave-slave pairs was evaluated while varying the number of slaves in the piconet from one to seven.

Derivation of Optimal Distribution for the Frequency Analysis of Extreme Flood using LH-Moments (LH-모멘트에 의한 극치홍수량의 빈도분석을 위한 적정분포형 유도)

  • Maeng, Sung-Jin;Lee, Soon-Hyuk
    • Proceedings of the Korean Society of Agricultural Engineers Conference
    • /
    • 2002.10a
    • /
    • pp.229-232
    • /
    • 2002
  • This study was conducted to estimate design floods by determining the best-fitting order of LH-moments for the annual maximum series at six watersheds in Korea and nine in Australia. The adequacy of the flood flow data was confirmed by tests of independence, homogeneity, and outliers. Gumbel (GUM), Generalized Extreme Value (GEV), Generalized Pareto (GPA), and Generalized Logistic (GLO) distributions were applied to find the best-fitting frequency distribution for the flood flow data. Theoretical bases of the L, L1, L2, L3, and L4-moments were derived to estimate the parameters of the four distributions, and L, L1, L2, L3, and L4-moment ratio diagrams (LH-moment ratio diagrams) were developed in this study.
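
As background for the moment estimates referred to above, the sketch below computes ordinary sample L-moments from probability-weighted moments; the LH-moments L1–L4 in the paper generalize these to higher-order statistics and are not reproduced here. The flood series is illustrative, and the code is a plain textbook version rather than the authors' implementation.

```python
# Sketch: sample L-moments from unbiased probability-weighted moments (PWMs).
import numpy as np

def sample_l_moments(x):
    """Return (l1, l2, t3, t4): mean, L-scale, L-skewness, L-kurtosis."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    # unbiased PWM estimators b0..b3 on the ordered sample
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) / ((n - 1) * (n - 2) * (n - 3)) * x) / n
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3 / l2, l4 / l2

# annual maximum flood series (illustrative values, m^3/s)
q = [812, 1450, 980, 2230, 1110, 1705, 940, 1320, 2600, 1180,
     870, 1995, 1530, 1040, 2880, 1260, 1610, 930, 2110, 1390]
l1, l2, t3, t4 = sample_l_moments(q)
print(f"l1={l1:.1f}  l2={l2:.1f}  L-skew={t3:.3f}  L-kurt={t4:.3f}")
# (t3, t4) can be placed on an L-moment ratio diagram to choose among
# the GUM, GEV, GPA and GLO candidate distributions.
```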

Flood Frequency Analysis using L, L1 and L2-Moment Methods (L, L1 및 L2-모멘트법에 의한 홍수빈도분석)

  • Lee, Soon-Hyuk;Maeng, Sung-Jin;Ryoo, Kyong-Sik;Jee, Ho-Keun
    • Proceedings of the Korean Society of Agricultural Engineers Conference
    • /
    • 2001.10a
    • /
    • pp.310-313
    • /
    • 2001
  • This study was conducted to derive optimal design floods using the Gumbel, GEV, GLO, and GPA distributions for the annual maximum series at sixteen watersheds. The adequacy of the flood data used in this study was established by tests of independence and homogeneity and by detection of outliers. Parameters were estimated by the methods of L, L1, and L2-moments. Design floods obtained by the methods of L, L1, and L2-moments, using the Gringorten method for plotting positions in the GEV distribution, were compared in terms of relative mean errors and relative absolute errors.
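
A small sketch of the comparison step mentioned above, reduced to the Gumbel case for brevity: fit Gumbel parameters from the first two sample L-moments, evaluate the fitted quantiles at Gringorten plotting positions, and summarize the discrepancy with simple relative-error measures (one common definition each). The flood values and the restriction to Gumbel are illustrative assumptions, not the paper's setup.

```python
# Sketch: Gumbel fit by L-moments vs. Gringorten plotting positions (illustrative data).
import numpy as np

EULER_GAMMA = 0.5772156649

def gumbel_lmom_fit(x):
    """Gumbel parameters (location xi, scale alpha) from the first two L-moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    l1, l2 = b0, 2 * b1 - b0
    alpha = l2 / np.log(2)
    xi = l1 - EULER_GAMMA * alpha
    return xi, alpha

q = np.sort(np.array([640., 815., 900., 1020., 1180., 1290., 1445., 1610.,
                      1820., 2040., 2310., 2650., 3100., 3720., 4500., 5600.]))
xi, alpha = gumbel_lmom_fit(q)

n = len(q)
i = np.arange(1, n + 1)
F = (i - 0.44) / (n + 0.12)                 # Gringorten plotting positions
q_fit = xi - alpha * np.log(-np.log(F))     # Gumbel quantiles at those positions

rme = np.sqrt(np.mean(((q - q_fit) / q) ** 2))   # relative mean error (one common form)
rae = np.mean(np.abs(q - q_fit) / q)             # relative absolute error
print(f"xi={xi:.1f}, alpha={alpha:.1f}, RME={rme:.3f}, RAE={rae:.3f}")
```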

  • PDF

Korean Semantic Similarity Measures for the Vector Space Models

  • Lee, Young-In;Lee, Hyun-jung;Koo, Myoung-Wan;Cho, Sook Whan
    • Phonetics and Speech Sciences
    • /
    • v.7 no.4
    • /
    • pp.49-55
    • /
    • 2015
  • It is argued in this paper that, in determining semantic similarity, Korean words should be recategorized with a focus on their semantic relation to an ontology, in light of cross-linguistic morphological variation. It is proposed, in particular, that Korean semantic similarity should be measured on three tracks: a human judgement track, a relatedness track, and a cross-part-of-speech relations track. As demonstrated in Yang et al. (2015), GloVe, an unsupervised learning model for semantic similarity, is applicable to Korean, with its performance compared against human judgement results. Based on this compatibility, it was further expected that the model's performance would vary with the specific kinds of relations found in different languages. An attempt was made to analyze these in terms of two major Korean-specific categories involved in lexical and cross-POS relations. It is concluded that languages must be analyzed with varying methods so that semantic components across languages can be assigned appropriate semantic distances in vector space models.
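
A minimal sketch of the kind of evaluation described above: rank-correlating cosine similarities from pre-trained word vectors against human similarity ratings. The vectors, word pairs, and ratings below are tiny placeholders standing in for pre-trained Korean GloVe vectors and a real human-judgement dataset.

```python
# Sketch: Spearman correlation between embedding similarities and human ratings (placeholder data).
import numpy as np
from scipy.stats import spearmanr

# placeholder word vectors (in practice, load pre-trained Korean GloVe vectors)
vectors = {
    "학교": np.array([0.8, 0.1, 0.3]),
    "대학": np.array([0.7, 0.2, 0.4]),
    "바다": np.array([0.1, 0.9, 0.2]),
    "파도": np.array([0.2, 0.8, 0.1]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# (word1, word2, human similarity rating) -- illustrative values
pairs = [("학교", "대학", 8.5), ("바다", "파도", 7.9), ("학교", "바다", 1.2), ("대학", "파도", 1.5)]

model_sims = [cosine(vectors[w1], vectors[w2]) for w1, w2, _ in pairs]
human_sims = [r for _, _, r in pairs]
rho, p = spearmanr(model_sims, human_sims)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```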

Estimation of Design Rainfall Using 3 Parameter Probability Distributions (3변수 확률분포에 의한 설계강우량 추정)

  • Lee, Soon Hyuk;Maeng, Sung Jin;Ryoo, Kyong Sik
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2004.05b
    • /
    • pp.595-598
    • /
    • 2004
  • This research seeks to derive design rainfalls using the L-moment method, with tests of homogeneity, independence, and outliers applied to annual maximum daily rainfall data at 38 rainfall stations in Korea. To select the appropriate distribution for the annual maximum daily rainfall data at each station, the Generalized Extreme Value (GEV), Generalized Logistic (GLO), Generalized Pareto (GPA), Generalized Normal (GNO), and Pearson Type 3 (PT3) probability distributions were applied, and their aptness was judged using an L-moment ratio diagram and the Kolmogorov-Smirnov (K-S) test. Parameters of the appropriate distributions were estimated from the observed and simulated annual maximum daily rainfall using Monte Carlo techniques. Design rainfalls were finally derived from the GEV distribution, which proved more appropriate than the other distributions.
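
A hedged sketch of the selection-and-derivation step described above, reduced to the GEV case with scipy: fit the distribution, check it with a K-S test, and read off design rainfalls for several return periods. The rainfall values are illustrative, and scipy's maximum-likelihood fit stands in for the paper's L-moment estimation.

```python
# Sketch: GEV fit, K-S goodness-of-fit check, and T-year design rainfall (illustrative data).
import numpy as np
from scipy import stats

# annual maximum daily rainfall at one station (mm) -- illustrative values
rain = np.array([112., 98., 156., 201., 134., 87., 178., 145., 232., 120.,
                 167., 109., 189., 141., 255., 131., 96., 173., 150., 210.])

c, loc, scale = stats.genextreme.fit(rain)                       # GEV parameters (MLE)
ks_stat, p_value = stats.kstest(rain, "genextreme", args=(c, loc, scale))
print(f"K-S D = {ks_stat:.3f}, p = {p_value:.3f}")

# design rainfall = quantile at non-exceedance probability 1 - 1/T
for T in (10, 50, 100, 200):
    design = stats.genextreme.ppf(1 - 1 / T, c, loc, scale)
    print(f"{T:>4}-year design rainfall: {design:.1f} mm")
```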
