• Title/Summary/Keyword: 기계부분 (machine parts)

Search Result 1,339, Processing Time 0.029 seconds

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.135-149
    • /
    • 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are being actively developed, compensating for the weaknesses of traditional asset allocation methods and replacing the parts that are difficult for them. A robo-advisor makes automated investment decisions with artificial intelligence algorithms and is used with various asset allocation models, such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets. Because it avoids investment risk structurally, it is stable in the management of large funds and has been widely used in the financial field. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It can process billions of examples in limited-memory environments, learns much faster than traditional boosting methods, and is frequently used across many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk to the covariance estimation process. Because an optimized asset allocation model estimates investment proportions from historical data, estimation errors arise between the estimation period and the actual investment period, and these errors adversely affect the optimized portfolio's performance. This study aims to improve the stability and performance of the model by predicting the volatility of the next investment period, thereby reducing the estimation errors of the optimized asset allocation model. As a result, it narrows the gap between theory and practice and proposes a more advanced asset allocation model. 
In this study, we used Korean stock market price data over a total of 17 years, from 2003 to 2019, for the empirical test of the suggested model. The data sets comprise the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results, and analyzed portfolio performance in terms of cumulative rate of return over this long period. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative yield and reduction of estimation errors. The total cumulative return is 45.748%, about 5 percentage points higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easy to apply in practical investment. Many financial and asset allocation models are limited in practical investment because of the fundamental question of whether the past characteristics of assets will persist in a changing financial market. This study, however, not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting the risks of assets with a recent algorithm. While various studies have examined parametric estimation methods to reduce estimation errors in portfolio optimization, we suggest a new method that uses machine learning to reduce the estimation errors of an optimized asset allocation model. 
This study is therefore meaningful in that it proposes an advanced artificial-intelligence asset allocation model for fast-developing financial markets.
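The risk parity step described above can be illustrated with a naive inverse-volatility sketch. This is a deliberate simplification: the paper applies XGBoost-predicted risks inside a full covariance estimation, whereas this sketch assumes a diagonal covariance, and the volatility numbers are hypothetical.

```python
import numpy as np

def risk_parity_weights(predicted_vol):
    """Naive risk parity: weight each asset inversely to its predicted
    volatility so that, under a diagonal covariance, every asset
    contributes equal risk to the portfolio."""
    inv = 1.0 / np.asarray(predicted_vol, dtype=float)
    return inv / inv.sum()

# hypothetical volatilities for three sector indexes, e.g. produced by
# an XGBoost regressor trained on historical realized volatility
pred_vol = np.array([0.10, 0.20, 0.40])
weights = risk_parity_weights(pred_vol)
print(weights)             # -> [0.57142857 0.28571429 0.14285714]
# under the diagonal assumption, each asset's risk contribution is equal
print(weights * pred_vol)  # every entry is the same
```

Replacing `pred_vol` with realized historical volatility recovers the traditional risk parity baseline the study compares against.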

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.1-21
    • /
    • 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from the unstructured text data that constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, text mining has been employed to discover new market and technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting a company's business strategy, and there has been continuous demand in various fields for market information at the level of specific products. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making it difficult to obtain specific, appropriate figures. We therefore propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural-network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows: first, the data related to product information is collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data is embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales figures of the extracted products are summed to estimate the market size of each product group. As experimental data, product-name text from Statistics Korea's microdata (345,103 cases) was mapped into a multidimensional vector space by Word2Vec training. 
We optimized the training parameters and used a vector dimension of 300 and a window size of 15 in the subsequent experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as a product-name dataset to cluster product groups more efficiently: product names similar to the KSIC indexes were extracted based on cosine similarity, and the market size of each extracted product category was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional methods that rely on sampling or on multiple assumptions. In addition, the level of market category can be adjusted easily and efficiently according to the purpose of the information by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application, since it can meet unmet needs for detailed market size information in the public and private sectors: it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports published by private firms. The limitation of our study is that the presented model needs improvement in accuracy and reliability. The semantic word embedding module could be advanced by imposing a proper order on the preprocessed dataset or by combining another measure, such as Jaccard similarity, with Word2Vec. 
Also, the product-group clustering could be replaced with another type of unsupervised machine learning algorithm. Our group is currently working on subsequent studies, and we expect them to further improve the performance of the basic model proposed conceptually in this study.
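The grouping-and-summation step can be sketched as follows. Everything here is a minimal stand-in: the 2-D vectors, product names, sales figures, and the 0.8 threshold are hypothetical, whereas the real pipeline uses 300-dimensional Word2Vec embeddings and Statistics Korea microdata.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# hypothetical product-name embeddings (real ones come from Word2Vec)
embeddings = {"bolt": [0.90, 0.10], "nut": [0.95, 0.05], "apple": [0.0, 1.0]}
sales = {"bolt": 120, "nut": 80, "apple": 500}
ksic_index_vec = [1.0, 0.0]   # vector of a KSIC index word, e.g. "fastener"

# products whose names are similar to the KSIC index form one product group
group = [p for p, v in embeddings.items() if cosine(ksic_index_vec, v) >= 0.8]
market_size = sum(sales[p] for p in group)
print(group, market_size)  # -> ['bolt', 'nut'] 200
```

Raising or lowering the similarity threshold widens or narrows the product group, which is how the abstract's adjustable market-category level works.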

Literature Analysis of Radiotherapy in Uterine Cervix Cancer for the Processing of the Patterns of Care Study in Korea (한국에서 자궁경부암 방사선치료의 Patterns of Care Study 진행을 위한 문헌 비교 연구)

  • Choi Doo Ho;Kim Eun Seog;Kim Yong Ho;Kim Jin Hee;Yang Dae Sik;Kang Seung Hee;Wu Hong Gyun;Kim Il Han
    • Radiation Oncology Journal
    • /
    • v.23 no.2
    • /
    • pp.61-70
    • /
    • 2005
  • Purpose: Uterine cervix cancer is one of the most prevalent cancers among women in Korea. We analyzed papers published in Korea, comparing them with the Patterns of Care Study (PCS) articles of the United States and Japan, for the purpose of developing and processing a Korean PCS. Materials and Methods: We searched PCS-related foreign papers on the PCS homepage (212 articles and abstracts) and in PubMed to identify the structure and process of the PCS. To compare these studies with Korean papers, we used the 'Korean PubMed' internet site to search 99 articles regarding uterine cervix cancer and radiation therapy. We analyzed the Korean papers by comparing them with selected PCS papers regarding structure, process, and outcome, and compared their items between the 1980s and the 1990s. Results: The evaluable papers treating cervix PCS items numbered 28 from the United States, 10 from Japan, and 73 from Korea. PCS papers from the United States and Japan commonly stratified facilities into 3-4 categories based on facility scale and the numbers of patients and doctors, and researchers restricted eligible patients strictly. For the process of the study, they analyzed factors regarding pretreatment staging in chronological order, treatment-related factors, factors in addition to FIGO staging, and the treatment machine. Papers from the United States dealt with racial and socioeconomic characteristics of the patients, tumor size (6), and bilaterality of parametrial or pelvic side wall invasion (5), whereas papers from Japan dealt with tumor markers. The common trend in the staging work-up was decreased use of lymphangiogram and barium enema and increased use of CT and MRI over time. Recent subjects in the Korean papers dealt with concurrent chemoradiotherapy (9 papers), treatment duration (4), tumor markers (B), and unconventional fractionation. 
Conclusion: By comparing papers among the three nations, we collected items for a Korean uterine cervix cancer PCS. Through consensus meetings and close communication, survey items for the cervix cancer PCS were developed to measure the structure, process, and outcome of radiation treatment of cervix cancer. Subsequent research will focus on the use of brachytherapy and its impact on outcome, including complications. These findings and future PCS studies will direct the development of educational programs aimed at correcting identified deficits in care.

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.63-83
    • /
    • 2019
  • Investors prefer to look for trading points based on the shapes shown in charts rather than on complex analysis such as corporate intrinsic value analysis or technical auxiliary indexes. However, pattern analysis is difficult and has been computerized less than users need. In recent years, there have been many studies of stock price patterns using various machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, the development of IT has made it easier to analyze huge amounts of chart data to find patterns that can predict stock prices. Although the short-term forecasting power of prices has improved, long-term forecasting power remains limited, so such models are used in short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that past technology could not recognize, but this can be vulnerable in practice, because whether the patterns found are suitable for trading is a separate matter. When these studies find a meaningful pattern, they find a point that matches it and measure performance after n days, assuming a purchase at that point in time. Since this approach calculates virtual revenues, it can diverge considerably from reality. Existing research tries to find patterns with stock price predictive power; this study instead proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M & W wave patterns published by Merrill (1980) are simple because each can be distinguished by five turning points. Despite reports that some patterns have price predictability, there have been no performance reports from the actual market. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern recognition accuracy. 
In this study, the 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented in a system, and only the one pattern with the highest success rate per group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The measurement reflects a real situation because it assumes that both the buy and the sell have been executed. We tested three ways to calculate the turning points. The first method, the minimum-change-rate zig-zag, removes price movements below a certain percentage and then calculates the vertices. In the second method, the high-low-line zig-zag, a high price that meets the n-day high-price line is taken as a peak, and a low price that meets the n-day low-price line is taken as a valley. In the third method, the swing wave method, a central high price higher than the n high prices on its left and right is taken as a peak, and a central low price lower than the n low prices on its left and right is taken as a valley. The swing wave method was superior to the other methods in our tests; we interpret this to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is unfinished. Because the number of cases was far too large to search exhaustively in this simulation, genetic algorithms (GA) were the most suitable way to find patterns with high success rates. We also performed the simulation using the walk-forward analysis (WFA) method, which tests the training section and the application section separately, so we were able to respond appropriately to market changes. In this study, we optimize over a stock portfolio, because optimizing the variables for each individual stock risks over-optimization. 
We therefore selected 20 constituent stocks to increase the effect of diversified investment while avoiding over-optimization, and tested the KOSPI market divided into six categories. In the results, the small-cap portfolio was the most successful and the high-volatility portfolio was the second best. This shows that prices need some volatility for patterns to take shape, but the highest volatility is not necessarily best.
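The first turning-point method, the minimum-change-rate zig-zag, can be sketched as below. This is a simplified reading of the description: the 5% threshold and the price series are illustrative, and the GA-optimized trading variables are omitted.

```python
def zigzag(prices, pct=0.05):
    """Minimum-change-rate zig-zag: a new turning point is recorded only
    when price reverses by more than `pct` from the last extreme."""
    pivots = []
    last, last_idx, direction = prices[0], 0, 0  # direction: +1 up, -1 down
    for i in range(1, len(prices)):
        change = (prices[i] - last) / last
        if direction >= 0 and change <= -pct:      # reversal downward
            pivots.append(last_idx)                # previous extreme was a peak
            last, last_idx, direction = prices[i], i, -1
        elif direction <= 0 and change >= pct:     # reversal upward
            pivots.append(last_idx)                # previous extreme was a valley
            last, last_idx, direction = prices[i], i, 1
        elif (direction >= 0 and prices[i] > last) or \
             (direction <= 0 and prices[i] < last):
            last, last_idx = prices[i], i          # extend the current swing
    return pivots + [last_idx]

# indices of the detected turning points; a run of five consecutive
# turning points is what gets matched against the M & W templates
print(zigzag([100, 110, 104, 96, 101, 108]))  # -> [0, 1, 3, 5]
```

Small intra-swing wiggles (under 5% here) are absorbed into the current swing, which is exactly the noise-removal role the abstract assigns to this method.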

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.173-198
    • /
    • 2020
  • For a long time, many academic studies have been conducted on predicting the success of campaigns targeted at customers, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded in various ways with the rapid growth of online business, companies carry out campaigns of many types at a level that cannot be compared to the past. However, customers increasingly perceive campaigns as spam as fatigue from duplicate exposure grows. From a corporate standpoint, the effectiveness of campaigns is also decreasing: the cost of investing in campaigns keeps rising while the actual success rate stays low. Accordingly, various studies are under way to improve campaign effectiveness in practice. A campaign system's ultimate purpose is to increase the success rate of campaigns by collecting and analyzing various customer-related data and using it for campaigns; in particular, recent attempts have been made to predict campaign responses using machine learning. Because campaign data has many features, selecting appropriate ones is very important. If all input data is used when classifying a large amount of data, learning time grows as the classification task expands, so a minimal input data set must be extracted from the entire data. In addition, when a model is trained with too many features, prediction accuracy may be degraded by overfitting or by correlation between features. Therefore, to improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary step in analyzing a high-dimensional data set. 
Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used as traditional feature selection techniques, but when the data has many features, their classification performance is poor and they take a long time to learn. Therefore, in this study we propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the existing SFFS sequential method in the process of searching for the feature subsets that underlie machine learning model performance, using the statistical characteristics of the data processed in the campaign system. Features that strongly influence performance are derived first, features with a negative effect are removed, and the sequential method is then applied, increasing search efficiency and enabling generalized prediction. We confirmed that the proposed model showed better search and prediction performance than the traditional greedy algorithm: compared with the original data set, the greedy algorithm, the genetic algorithm (GA), and recursive feature elimination (RFE), campaign success prediction was higher. In addition, the improved feature selection algorithm was found to help analyze and interpret prediction results by providing the importance of the derived features. These include features such as age, customer rating, and sales, which were already known statistically to be important. 
Unlike what previous campaign planners used, features such as the combined product name, the average three-month data consumption rate, and the last three months' wireless data usage, which were rarely used to select campaign targets, were unexpectedly selected as important features for campaign response. We confirmed that base attributes can also be very important features depending on the type of campaign, which makes it possible to analyze and understand the important characteristics of each campaign type.
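As a reference point, the greedy Sequential Forward Selection baseline mentioned above can be sketched in a few lines. The scoring function here is a hypothetical additive stand-in for cross-validated model accuracy on the campaign data, and the feature names are illustrative.

```python
def sfs(features, score, k):
    """Sequential Forward Selection: greedily add, one at a time, the
    feature that most improves the score of the selected subset."""
    selected = []
    while len(selected) < k:
        candidates = [f for f in features if f not in selected]
        best = max(candidates, key=lambda f: score(selected + [f]))
        selected.append(best)
    return selected

# toy additive score standing in for cross-validation accuracy
value = {"age": 3.0, "customer_rating": 2.0, "sales": 1.0, "noise": 0.1}
score = lambda subset: sum(value[f] for f in subset)
print(sfs(list(value), score, 2))  # -> ['age', 'customer_rating']
```

The SFFS variant that the study improves additionally allows a conditional backward step after each forward step, removing a previously selected feature when doing so raises the score.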

A Comparison of Chemical Abstracts and CA Condensates for Literature Searching (문헌검색에 있어서 Chemical Abstracts와 CA Condensates의 비교)

  • Robert, B.E.
    • Journal of Information Management
    • /
    • v.9 no.1
    • /
    • pp.21-25
    • /
    • 1976
  • In March 1975, Chemical Abstracts indexes covering four and a half years were compared with the online-searchable CA Condensates. Searching both databases together is the most efficient method, but as the examples show, searching CA Condensates alone is often more practical. CHEMCON and CHEM7071, the online forms of CA Condensates mounted at System Development Corp. (SDC), were compared with the Chemical Abstracts indexes. Most Chemical Abstracts users are familiar with the Chemical Abstracts issues and cumulative indexes, but probably much less so with CA Condensates. CA Condensates contains bibliographic data in machine-readable form and is indexed according to Chemical Abstracts, so it takes the form most familiar to us, like the indexes at the back of each weekly Chemical Abstracts issue. Although Chemical Abstracts is the database currently in use, this paper defines both the Index and Condensates as databases. When Condensates became commercially available from the Chemical Abstracts Service in the United States, many information centers began SDI services that processed user profiles in batch mode and retrieved current information from the weekly magnetic tapes. Some centers collected back tapes and offered retrospective literature searching, likewise in batch mode. Except for those who handle the tapes directly, most people still know little about Condensates. Retrospective searching is rather expensive, and haphazard browsing through the literature is impractical. Combining two or more concepts or substances against the weekly indexes is difficult and impractical; it is often easier to scan all the citations under a given term and judge their relevance from the abstracts. The availability of Condensates for interactive online searching has changed much of this. It is now possible to retrieve only the documents needed, and every field can be fully indexed. The awkward time lag of batch systems, hours to days between submitting a search and receiving results, can now usually be reduced to a few minutes, and unlike in batch systems, an imprecise or insufficient search strategy can be corrected immediately. Ad hoc online searching has clear and definite advantages over sequential batch searching. As CA Condensates came into frequent use, its true value was debated. Its indexing is certainly less systematic and less thorough than that of the CA issues or cumulative indexes. Moreover, since the two databases overlap heavily, one must decide whether duplicate searching is worthwhile. 
Several papers comparing CA Condensates with other databases have appeared, and CA Condensates has generally come out as the inferior database. Buckley cited an example in which the Chemical Abstracts indexes yielded better documents (on the preparation of Terramycin) than CA Condensates. The Search Center at the University of Georgia found CA Condensates less capable than the CA Integrated Subject File. Different forms of CA Condensates have also been compared: Michaels published a comparison of online searches of CA Condensates with manual searches of the weekly Chemical Abstracts issue indexes, and Prewitt compared two commercially mounted online versions of CA Condensates. The Amoco Research Center also compared search results from CA Condensates and the Chemical Abstracts indexes and found examples of the advantages of each, as well as cases where they were effectively equivalent. In March 1975, at least four years of CA Condensates and indexes (Vols. 72-79, 1970-1973) were compared; author and general-subject searches were compared using Vol. 80 (Jan-June 1974). CA Condensates is usually inconvenient for searching specific compounds; the example given by Buckley is typical. However, other kinds of searches (corporate authors, patent assignees, personal authors, general and specific compounds, and reaction types) illustrate the advantages of CA Condensates for practical searching. In the following examples, CHEMCON and CHEM7071 are the online versions of CA Condensates.

Changes of Biomass of Green Manure and Rice Growth and Yield using Leguminous Crops and Barley Mixtures by Cutting Heights at Paddy (두과 녹비작물과 보리 혼파 이용 시 예취 높이에 따른 Biomass와 벼 생육 및 수량 변화)

  • Jeon, Weon-Tai;Seong, Ki-Yeong;Oh, Gye-Jeong;Kim, Min-Tae;Lee, Yong-Hwan;Kang, Ui-Gum;Lee, Hyun-Bok;Kang, Hang-Won
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.45 no.2
    • /
    • pp.192-197
    • /
    • 2012
  • Competition between green manure and forage crops frequently occurs in agricultural fields because of soil fertility and livestock feeding selection. These experiments were carried out to evaluate the effects of cutting height on shoot and residue biomass for green manure and forage production by leguminous crop and barley mixtures in paddy fields. Field experiments were conducted on paddy soil from 2008 to 2009. Treatments consisted of mixtures and inter-seeding of barley and leguminous crops (hairy vetch and crimson clover), each divided into cutting heights of 8 and 25 cm for simultaneous green manure and forage use. The residue biomass at the 25 cm cutting height was higher than at 8 cm, with no significant difference between mixture and inter-seeding; however, legume crop residues were significantly higher with inter-seeding than with the mixture. For forage use, the shoot biomass at the 8 cm cutting height was higher than at 25 cm, and legume production was high in the barley and hairy vetch seeding plot. The mixture of hairy vetch and barley cut at 25 cm showed the best shoot and residue biomass for combined green manure and forage use, and this treatment made rice cultivation possible without fertilization. Therefore, we suggest that cutting a hairy vetch and barley mixture at 25 cm can supply green manure and forage at the same time in a rice-based cropping system.

An Observation Study on the Health Benefits of Fermented Milk in Relation to Gastrointestinal Diseases Prevention in Korean (유산균발효유(乳酸菌醱酵乳)의 음용(飮用)이 소화기질환(消化器疾患) 예방(豫防)에 미친 효과(效果)에 관한 조사연구(調査硏究))

  • Lee, Won-Chang;Yoon, Yoh-Chang
    • Journal of Dairy Science and Biotechnology
    • /
    • v.14 no.1
    • /
    • pp.57-69
    • /
    • 1996
  • This study was designed as an observational study of the cognitive level of the health benefits of fermented milk and, subsequently, of its role in the prevention of gastrointestinal diseases among consumers in Korea. Data used in this study were collected from two sources: 1) 987 university students living in Seoul, selected randomly and interviewed individually from May 25 to September 30, 1994, to investigate their awareness as consumers of fermented milk; and 2) data on the health benefits of fermented milk with respect to the prevention of gastrointestinal diseases such as typhoid fever, paratyphoid fever, and bacillary dysentery in Korea, taken from the Yearbook of Health and Social Statistics of the Ministry of Health and Social Affairs (1976-1995) and the Ministry of Agriculture, Forestry & Fisheries (1976-1995), Republic of Korea. The results of this study were as follows. The effects of fermented milk on health appeared to be attributed to its care of the gastrointestinal environment with respect to diarrhea, constipation, and digestion: 75.1% of male and 71.1% of female university students in Seoul answered positively. The correlation coefficient between per-capita consumption of fermented milk and the incidence rates of gastrointestinal diseases from 1975 to 1994 was r = -0.308 (p<0.05) for the whole country; for Seoul, a major consuming area, and Kangwon province, a minor consuming area, the coefficients were r = -0.704 (p<0.01) and r = +0.262 (n.s.), respectively. In conclusion, the results obtained in this study suggest that fermented milk exerts an effect on the gastrointestinal tract. Furthermore, the results warrant further study in preventive medicine and health science with respect to the efficacy of fermented milk consumption.

Clinical and Arthroscopic Findings of Medial Meniscus Posterior Horn Insertion Tear (내측 반월상 연골판 후각 기시부 파열의 특징 및 관절경 소견)

  • Lee, Jun-Young;Kim, Dong-Hui;Ha, Sang-Ho;Lee, Sang-Hong;Gang, Joung-Hun
    • Journal of Korean Orthopaedic Sports Medicine
    • /
    • v.8 no.1
    • /
    • pp.33-38
    • /
    • 2009
  • Purpose: We report the clinical characteristics and arthroscopic findings of radial tears at the medial meniscus posterior horn insertion, which commonly occur in patients over middle age, with a review of the literature. Materials and Methods: A retrospective study using hospital records was done on 40 cases in 40 patients who visited our hospital and underwent knee arthroscopic surgery for medial meniscus posterior horn insertion tears between January 2005 and April 2007. Seven cases were male and 33 were female, with a mean age of 61 (range, 47-80). Trauma history, stage of arthritis, interval between onset of pain and operation, MRI findings, clinical symptoms, and operation methods were evaluated. Results: Six cases had a trauma history while 34 did not. On simple x-ray, using the Kellgren-Lawrence classification, 31 cases were between stage 0 and II while 9 cases were stage III. On arthroscopic examination, there were 17 cases of Outerbridge grade IV, 4 of grade III, 9 of grade II, and 9 of grade I. The mean duration of pain was 5.3 months. On MRI, at least one finding of a cleft in the axial or coronal view or a ghost sign in the sagittal view was present in all cases. The meniscus tears were blunt in 18 cases, transverse in 12, and degenerative in 10. Subtotal meniscectomy was performed in 16 cases, partial meniscectomy in 10, and meniscal repair in 14. Conclusion: Medial meniscus posterior horn insertion tears in patients over middle age are rarely related to trauma history but cause painful mechanical symptoms and usually accompany arthritis. Meniscectomy can be done for treatment, but repair can be considered in some cases. Further study of treatment results will be needed.

Target Advertisement Service using a Viewer's Profile Reasoning (시청자 프로파일 추론 기법을 이용한 표적 광고 서비스)

  • Kim Munjo;Im Jeongyeon;Kang Sanggil;Kim Munchrul;Kang Kyungok
    • Journal of Broadcast Engineering
    • /
    • v.10 no.1 s.26
    • /
    • pp.43-56
    • /
    • 2005
  • In the existing broadcasting environment, it is not easy to provide bi-directional service between a broadcasting server and a TV audience. In uni-directional broadcasting environments, most TV programs are scheduled according to viewers' popular watching times, and the advertisements within those programs are mainly arranged by popularity and by the ages of the audience. Audiences make an effort to sort and select their favorite programs, yet the advertisements that support those programs are not served to the appropriate audiences efficiently. Such randomly provided advertising can lead to audience indifference and avoidance. In this paper, we propose a target advertisement service for the appropriate distribution of advertising content. The proposed service estimates an audience member's profile without requiring any private information and provides target-advertised content using the estimated profile. For the experiments, we used real audiences' TV usage history, such as the ages of viewers and the genre and time of the programs, from AC Nielsen Korea. We show the accuracy of the proposed target advertisement algorithm using NDS (Normalized Distance Sum) and the vector correlation method, and describe the implementation of our target advertisement service system.
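The profile-reasoning idea can be sketched as nearest-reference classification over viewing-time vectors. Everything here is hypothetical: the reference vectors, the two age groups, the ad mapping, and the plain Euclidean distance standing in for the paper's NDS measure.

```python
import math

# hypothetical average viewing-time distributions (drama, news, sports)
# per age group, e.g. aggregated from panel data
reference = {"20s": [0.60, 0.30, 0.10], "40s": [0.20, 0.30, 0.50]}
ads = {"20s": "game console ad", "40s": "sedan ad"}

def infer_group(history):
    """Assign the group whose reference vector is closest to the
    viewer's own viewing-time distribution (no private data needed)."""
    return min(reference, key=lambda g: math.dist(reference[g], history))

viewer_history = [0.55, 0.35, 0.10]   # derived from TV usage history only
group = infer_group(viewer_history)
print(group, "->", ads[group])        # -> 20s -> game console ad
```

The targeted ad is then chosen from the inferred group, which mirrors the abstract's claim that profiles are estimated from usage history rather than disclosed personal information.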