A Study on the Clustering Method of Row and Multiplex Housing in Seoul Using K-Means Clustering Algorithm and Hedonic Model (K-Means Clustering 알고리즘과 헤도닉 모형을 활용한 서울시 연립·다세대 군집분류 방법에 관한 연구)

  • Kwon, Soonjae;Kim, Seonghyeon;Tak, Onsik;Jeong, Hyeonhee
    • Journal of Intelligence and Information Systems, v.23 no.3, pp.95-118, 2017
  • Recently, transactions of row houses and multiplex houses have become active, centered on downtown areas, and platform services such as Zigbang and Dabang are growing. Row and multiplex housing nevertheless remains a blind spot for real estate information, and changes in market size and the information asymmetry caused by shifting demand have become a social problem. In addition, the 5 or 25 districts used by the Seoul Metropolitan Government and the Korean Appraisal Board (hereafter, KAB) were drawn along administrative boundaries and have been used in existing real estate studies; because they were zoned for urban planning, however, they are not a district classification suited to real estate research. Building on existing studies, this study found that the spatial structure of Seoul needs to be redefined when estimating future housing prices. It therefore attempts to classify areas without spatial heterogeneity by reflecting the price characteristics of row and multiplex housing. In other words, simple division by the existing administrative districts has caused inefficiencies, so this study clusters Seoul into new areas for more efficient real estate analysis. A hedonic model was applied to actual transaction price data for row and multiplex housing, and the K-means clustering algorithm was used to cluster the spatial structure of Seoul. The data cover actual transaction prices of row and multiplex housing in Seoul from January 2014 to December 2016, together with the official land value of 2016, provided by the Ministry of Land, Infrastructure and Transport (hereafter, MOLIT). Data preprocessing consisted of removing underground transactions, standardizing prices per unit area, and removing extreme transaction cases (above 5 and below -5); it reduced the data from 132,707 to 126,759 cases. The R program was used as the data analysis tool. After preprocessing, the data model was constructed: K-means clustering was performed first, a regression analysis was conducted using the hedonic model, and a cosine similarity analysis was carried out. Based on the constructed data model, Seoul was clustered by longitude and latitude and compared with the existing districts. The results indicate that the goodness of fit of the model was above 75% and that the variables used in the hedonic model were significant. In other words, the existing 5 or 25 administrative districts are divided into 16 clusters. This study thus derives a clustering method for row and multiplex housing in Seoul that uses the K-means clustering algorithm and a hedonic model to reflect price characteristics; a sketch of the core clustering step appears after the abstract. The study also presents academic and practical implications, its limitations, and directions for future research. The academic implications are that clustering by price characteristics improves on the areas used by the Seoul Metropolitan Government, KAB, and existing real estate research, and that, whereas existing research has focused mainly on apartments, this study proposes a method for classifying areas in Seoul using public information (i.e., actual transaction data from MOLIT) under Government 3.0. The practical implications are that the results can serve as basic data for real estate research on row and multiplex housing, that they are expected to stimulate research on this housing type, and that they are expected to increase the accuracy of models based on actual transactions. Future research should conduct further analyses to overcome the limitations of this study.
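The following sketch illustrates the kind of pipeline the abstract describes: K-means clustering of transactions on longitude/latitude followed by a hedonic regression. It is not the authors' code; the file name, column names, and all hyperparameters other than the 16 clusters reported in the abstract are assumptions.

```python
# Illustrative sketch only: cluster Seoul row/multiplex transactions by location,
# then fit a simple hedonic price model. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.cluster import KMeans

df = pd.read_csv("seoul_row_multiplex_transactions.csv")  # hypothetical file

# Preprocessing steps named in the abstract: drop underground units,
# standardize price per area, remove extreme cases.
df = df[df["floor"] >= 1]
df["price_per_m2"] = df["price"] / df["floor_area"]
z = (df["price_per_m2"] - df["price_per_m2"].mean()) / df["price_per_m2"].std()
df = df[z.abs() <= 5]

# K-means on longitude/latitude; the abstract reports 16 clusters for Seoul.
km = KMeans(n_clusters=16, n_init=10, random_state=0)
df["cluster"] = km.fit_predict(df[["longitude", "latitude"]])

# Hedonic regression of standardized price on structural attributes and cluster.
model = smf.ols("price_per_m2 ~ floor_area + building_age + C(cluster)", data=df).fit()
print(model.rsquared)  # the abstract reports goodness of fit above 75%
```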

The Classification System and Information Service for Establishing a National Collaborative R&D Strategy in Infectious Diseases: Focusing on the Classification Model for Overseas Coronavirus R&D Projects (국가 감염병 공동R&D전략 수립을 위한 분류체계 및 정보서비스에 대한 연구: 해외 코로나바이러스 R&D과제의 분류모델을 중심으로)

  • Lee, Doyeon;Lee, Jae-Seong;Jun, Seung-pyo;Kim, Keun-Hwan
    • Journal of Intelligence and Information Systems, v.26 no.3, pp.127-147, 2020
  • The world is suffering enormous human and economic losses due to the novel coronavirus infection (COVID-19). The Korean government has established a strategy to overcome this national infectious disease crisis through research and development. It is difficult to identify the distinctive features of, and changes in, a specific R&D field using the existing technical classification or the standard science and technology classification. Recently, a few studies have sought to establish a classification system that provides information on Korea's investment in infectious disease research through comparative analysis of government-funded research projects. However, these studies did not provide the information needed to establish cooperative research strategies among countries in infectious diseases, which is required as an execution plan for achieving national health security and fostering new growth industries. A study of information services based on a classification system and classification model is therefore needed for establishing a national collaborative R&D strategy. A classification system with seven categories - Diagnosis_biomarker, Drug_discovery, Epidemiology, Evaluation_validation, Mechanism_signaling pathway, Prediction, and Vaccine_therapeutic antibody - was derived by reviewing South Korea's government-funded research projects related to infectious diseases. A classification model was trained by combining Scopus data with a bidirectional RNN model; the classification performance of the final model was robust, with an accuracy of over 90%. (A sketch of this kind of classifier is given after the abstract.) For the empirical study, the infectious disease classification system was applied to the coronavirus-related R&D projects of major countries, drawn from STAR Metrics (National Institutes of Health) and NSF (National Science Foundation) of the United States (US), CORDIS (Community Research & Development Information Service) of the European Union (EU), and KAKEN (Database of Grants-in-Aid for Scientific Research) of Japan. The coronavirus R&D of these major countries is mostly concentrated in the Prediction category, which deals with predicting success in clinical trials at the new drug development stage or predicting toxicity that causes side effects. Intriguingly, for all of these nations the share of national investment in Vaccine_therapeutic antibody, the category aimed at developing vaccines and treatments, was very small (5.1%), which indirectly explains the slow development of vaccines and treatments. Comparative analysis of coronavirus-related research investment by country shows that the US and Japan invest relatively evenly across all infectious disease research areas, while Europe concentrates relatively large investments in specific areas such as Diagnosis_biomarker. Moreover, the classification system provides information on the major coronavirus-related research organizations in each country, thereby supporting the establishment of international collaborative R&D projects.
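The bidirectional RNN classifier described above could be sketched as follows. This is not the authors' model; the vocabulary size, layer sizes, and training call are assumptions, and only the seven output categories come from the abstract.

```python
# Illustrative sketch only: a bidirectional RNN text classifier for the seven
# infectious-disease R&D categories. Hyperparameters are assumed.
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 7     # Diagnosis_biomarker, Drug_discovery, Epidemiology, ...
VOCAB_SIZE = 20000  # assumed size of the tokenized abstract vocabulary

model = tf.keras.Sequential([
    layers.Embedding(VOCAB_SIZE, 128),                # token embeddings
    layers.Bidirectional(layers.LSTM(64)),            # bidirectional recurrent layer
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # one probability per category
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, validation_split=0.1, epochs=5)  # x_train: padded token ids
```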

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems, v.18 no.2, pp.29-45, 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. Statistical techniques, including multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis, have traditionally been used in bond rating. One major drawback, however, is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited their application to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). SVM in particular is recognized as a promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories; it is simple enough to be analyzed mathematically and achieves high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound of the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many training samples, since it builds prediction models using only representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for binary classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class problems as much as SVM does in binary classification. Second, approximation algorithms (e.g., decomposition methods, the sequential minimal optimization algorithm) can be used to reduce the computation time of multi-class problems, but they can deteriorate classification performance. Third, multi-class prediction suffers from the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another; such data sets often skew the decision boundary toward a default classifier and thus reduce classification accuracy. SVM ensemble learning is one machine learning approach for coping with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the most widely used ensemble learning techniques: it constructs a composite classifier by sequentially training classifiers while increasing the weights of misclassified observations across iterations. Observations that are incorrectly predicted by previous classifiers are chosen more often than those that are correctly predicted. Boosting thus attempts to produce new classifiers that better predict the examples on which the current ensemble performs poorly, and in this way it reinforces the training of misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process considers geometric mean-based accuracy and errors across classes. The study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Ten-fold cross validation was performed three times with different random seeds to ensure that the comparison among the three classifiers did not happen by chance. For each 10-fold cross validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine sets; that is, the cross-validated folds are tested independently for each algorithm. Through these steps, results were obtained for each classifier on each of the 30 experiments. In terms of arithmetic mean-based prediction accuracy, MGM-Boost (52.95%) shows higher accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test was used to examine whether the performance of each classifier over the 30 folds differs significantly; the results indicate that the performance of MGM-Boost is significantly different from that of the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating. A rough sketch of the boosting idea appears after the abstract.
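As a rough illustration of the idea, the following sketch boosts an SVM base learner with standard AdaBoost and evaluates it with both arithmetic accuracy and the geometric mean of per-class recalls, the quantity MGM-Boost builds into the booster itself. It is not MGM-Boost and does not use the paper's bond rating data; the wine dataset stands in as a small multi-class example.

```python
# Illustrative sketch only: AdaBoost over an SVM base learner on a multi-class task,
# scored with arithmetic accuracy and a geometric-mean-based accuracy.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)  # stand-in for the Korean bond rating data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

# SVC supports per-sample weights, which boosting uses to emphasize misclassified cases.
clf = AdaBoostClassifier(estimator=SVC(kernel="linear", probability=True),
                         n_estimators=20, random_state=0)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)

arithmetic_acc = accuracy_score(y_te, pred)
per_class_recall = recall_score(y_te, pred, average=None)
geometric_mean_acc = np.prod(per_class_recall) ** (1.0 / len(per_class_recall))
print(arithmetic_acc, geometric_mean_acc)
```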

A Study on the Yousang-Dae Goksuro(Curve-Waterway) in Gangneung, Yungok-Myun, Yoodung Ri (강릉 연곡면 유등리 '유상대(流觴臺)' 곡수로(曲水路)의 조명(照明))

  • Rho, Jae-Hyun;Shin, Sang-Sup;Lee, Jung-Han;Huh, Jun;Park, Joo-Sung
    • Journal of the Korean Institute of Traditional Landscape Architecture, v.30 no.1, pp.14-21, 2012
  • The objects of this study, Yousang-Dae(流觴臺) and the Go board text engraved on the flat rock in Baemgol, Yoodung-ri, Yungok-myun, Gangneung-si, reveal that the place was used for appreciating arts such as Yusang Goksu and Taoist hermits' games. Three detailed reconnaissance surveys produced the following results. The text Manwolsan(滿月山) Baegundongcheon(白雲洞天) is engraved on a rock at Baegunsa(白雲寺), which was built by Doun in the first year of King Hungang of Unified Shilla (875), fell into ruins in the middle of the Joseon period, and was rebuilt in 1954. The text is invaluable evidence that the tradition of Taoist hermits and Sunbee (classical scholars) culture took shape in Baemgol Valley. The second volume of Donghoseungram(東湖勝覽), the chronicle of Gangneung published by Choi Baeksoon in 1934, records that 'Baegunsa in Namjeonhyeon is the classroom where famous teachers such as Yulgok Lee Yi and Seongje Choi Ok taught', which verifies the historic character of the place. In addition, the management of Nujeong(樓亭) and Dongcheon can be traced through Baegunjeong(白雲亭), constructed by Kim Yoonkyung(金潤卿) in the Muo year, the 9th year of Cheoljong (1858), according to Donghoseungram and the complete edition of Jeungboyimyoungji(增補臨瀛誌). Also, Baegundongdongcheon(白雲亭洞天), the text engraved on the standing stone across the stream from the Yousang-Dae stone, was created three years after the construction of Baegunjeong, in the 12th year of Cheoljong (1861), and is a symbolic sign closely related to Yousang-Dae. On this premise, a careful study of the remains of the 'Yusang-dae' Goksuro identified the Sebun-seok(細分石), which controlled the amount and speed of the moving water, and the remains of the furrows of Keumbae-seok(擒盃石) and Yubae-gong(留盃孔), which carried the stream of water bearing cups through the mountain stream and rocks around Yusang-Dae. In addition, 21 people's names engraved under the inscription 'Oh-Seong(午星)' were discovered at the bottom of the rock, which clearly confirms that the place was one of the main cultural footholds for appreciating the arts in the manner of Yu-Sang-Gok-Su-Yeon(流觴曲水宴) until the middle of the 20th century. This implies that the Sunbees' culture of appreciating the arts was handed down around Yusang-dae in this particular place until the middle of the 20th century. The place deserves in-depth study because it is a historic and distinctive cultural site where Confucianism, Buddhism, and Zen were combined. Based on the results of the study, the identities of the 23 people, as well as the writer of the Yusang-Dae text, should be carefully investigated in light of the character of the place by gathering data on the appreciation of arts such as Yusanggoksu. Likewise, efforts should be made to locate the chess board engraved on the rock described in the documents, and plans should be considered to recover the original shape of the place, for example by breaking up the cement pavement of the road, conducting additional excavation, and changing the existing route.

The Results and Prognostic Factors of Chemo-radiation Therapy in the Management of Small Cell Lung Cancer (항암화학요법과 방사선 치료를 시행한 소세포폐암 환자의 치료 성적 -생존율과 예후인자, 실패양상-)

  • Kim Eun-Seog;Choi Doo-Ho;Won Jong-Ho;Uh Soo-Taek;Hong Dae-Sik;Park Choon-Sik;Park Hee-Sook;Youm Wook
    • Radiation Oncology Journal, v.16 no.4, pp.433-440, 1998
  • Purpose: Although small cell lung cancer (SCLC) has a high response rate to chemotherapy and radiotherapy (RT), the prognosis is dismal. The authors evaluated survival and failure patterns according to prognostic factors in SCLC patients who received thoracic radiation therapy with chemotherapy. Materials and Methods: One hundred and twenty-nine patients with SCLC received thoracic radiation therapy from August 1985 to December 1990. Of the 87 patients who completed RT, 77 accessible patients were evaluated retrospectively. The median follow-up period was 14 months (2-87 months). Results: The two-year survival rate was 13% with a median survival time of 14 months. The two-year survival rates for limited disease and extensive disease were 20% and 8%, respectively, with median survival times of 14 months and 9 months. Twenty-two patients (88%) with limited disease showed complete response (CR) and 3 patients (12%) showed partial response (PR). The two-year survival rates of the CR and PR groups were 24% and 0%, with median survival times of 14 months and 5 months, respectively (p=0.005). No patient whose serum sodium was lower than 135 mmol/L survived 2 years, and their median survival time was 7 months (p=0.002). Patients whose alkaline phosphatase was lower than 130 IU/L showed a 26% two-year survival rate and a median survival time of 14 months, whereas those with alkaline phosphatase higher than 130 IU/L showed no two-year survival and a median survival time of 5 months (p=0.019). No statistical differences were found according to age, sex, or performance status. Among the patients with extensive disease, the two-year survival rates according to metastatic site were 14%, 0%, and 7% for brain, liver, and other metastatic sites, respectively, with median survival times of 9 months, 9 months, and 8 months (p>0.05). The two-year survival rates of the CR and PR groups were 15% and 4%, respectively, with median survival times of 11 months and 7 months (p=0.01). Conclusion: For SCLC, complete response after chemoradiotherapy was the most significant prognostic factor. To achieve this goal, there should be further investigation of hyperfractionation, dose escalation, and compatible chemo-radiation schedules such as concurrent chemo-radiation and early radiation therapy with chemotherapy. An illustrative sketch of this type of survival comparison appears after the abstract.
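The survival comparisons reported above (e.g., CR vs. PR, p=0.005) are the kind of analysis sketched below with the lifelines package. The abstract does not name the statistical test; a Kaplan-Meier estimate with a log-rank comparison is assumed here, and all numbers in the example are hypothetical.

```python
# Illustrative sketch only: Kaplan-Meier median survival and a log-rank test
# comparing hypothetical complete-response (CR) and partial-response (PR) groups.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up times in months and event indicators (1 = death observed).
cr_months = np.array([14, 20, 9, 30, 14, 25, 7, 40])
cr_event = np.array([1, 1, 1, 0, 1, 1, 1, 0])
pr_months = np.array([5, 4, 7, 6, 3])
pr_event = np.array([1, 1, 1, 1, 1])

kmf = KaplanMeierFitter()
kmf.fit(cr_months, event_observed=cr_event, label="CR")
print("CR median survival (months):", kmf.median_survival_time_)

result = logrank_test(cr_months, pr_months,
                      event_observed_A=cr_event, event_observed_B=pr_event)
print("log-rank p-value:", result.p_value)
```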

An Empirical Study on Motivation Factors and Reward Structure for User's Creative Contents Generation: Focusing on the Mediating Effect of Commitment (창의적인 UCC 제작에 영향을 미치는 동기 및 보상 체계에 대한 연구: 몰입에 매개 효과를 중심으로)

  • Kim, Jin-Woo;Yang, Seung-Hwa;Lim, Seong-Taek;Lee, In-Seong
    • Asia pacific journal of information systems, v.20 no.1, pp.141-170, 2010
  • User created content (UCC) is created and shared by ordinary users online. From the user's perspective, the increase in UCC has expanded alternative means of communication, while from the business perspective UCC has formed an environment in which an abundance of new content can be produced. Despite this outward quantitative growth, however, much UCC does not meet the quality expectations of general users, as can be observed in pirated and user-copied content. The purpose of this research is to investigate effective methods for fostering the production of creative user-generated content. The study proposes two core elements believed to enhance content creativity, namely reward and motivation, together with a mediating factor, users' commitment, which is expected to bridge increased motivation and content creativity. From this perspective, the research examines how to construct the dimensions of reward and motivation in UCC services for creative content production, identified in three phases. First, three dimensions of rewards are proposed: the task dimension, the social dimension, and the organizational dimension. Task-dimension rewards are related to the inherent characteristics of a task, such as writing blog articles and posting photos. Four concrete ways of providing task-related rewards in UCC environments are suggested: skill variety, task significance, task identity, and autonomy. Social-dimension rewards are related to the connected relationships among users. The organizational dimension consists of monetary payoff and recognition from others. Second, two types of motivation are suggested to be affected by the various reward schemes: intrinsic motivation and extrinsic motivation. Intrinsic motivation occurs when people create new UCC content for its own sake, whereas extrinsic motivation occurs when people create new content for other purposes such as fame and money. Third, commitment is suggested to work as an important mediating variable between motivation and content creativity. Commitment is believed to be especially important in online environments because it has been found to exert a stronger impact on Internet users than other relevant factors. Two types of commitment are suggested: emotional commitment and continuity commitment. Finally, content creativity is the dependent variable of the study. A systematic method is provided to measure the creativity of UCC content based on prior studies of creativity measurement; the method includes expert evaluation of blog pages posted by Internet users. To test the theoretical model, 133 active blog users were recruited to participate in a group discussion as well as a survey. They were asked to fill out a questionnaire on their commitment, motivation, and rewards for creating UCC content. At the same time, their creativity was measured by independent experts using the Torrance Tests of Creative Thinking. Finally, two independent raters visited the participants' blog pages and evaluated their content creativity using the Creative Products Semantic Scale. All the data were compiled and analyzed through structural equation modeling. A confirmatory factor analysis was first conducted to validate the measurement model, and the measures used in the study satisfied the requirements of reliability, convergent validity, and discriminant validity. Given that the measurement model was valid and reliable, a structural model analysis was conducted. The results indicated that all the variables in the model had sufficient explanatory power in terms of R-square values. The results identified several important reward schemes. First of all, skill variety, task significance, task identity, and autonomy were all found to have significant influences on the intrinsic motivation to create UCC content. The relationship with other users was found to have strong influences on both intrinsic and extrinsic motivation. The opportunity to receive recognition for UCC work was found to have a significant impact on extrinsic motivation. However, contrary to expectation, monetary compensation was found not to have a significant impact on extrinsic motivation. Commitment was also found to be an important mediating factor between motivation and content creativity in the UCC environment: a fully mediated model had the highest explanatory power compared with non-mediated or partially mediated models. The paper ends with implications of the results. First, from a theoretical perspective, the study proposes and empirically validates commitment as an important mediating factor between motivation and content creativity, reflecting the characteristics of online environments in which UCC creation occurs voluntarily. Second, from a practical perspective, the study proposes several concrete reward factors germane to the UCC environment and estimates their effectiveness for content creativity. In addition to the quantitative results on the relative importance of the reward factors, the study proposes concrete ways to provide the rewards in the UCC environment based on the FGI data collected after the participants finished answering the survey questions. Finally, from a methodological perspective, the study suggests and implements a way to measure UCC content creativity independently of the content generators' creativity, which can be used by future research on UCC creativity. In sum, this study proposes and validates important reward features and their relations to motivation, commitment, and content creativity in the UCC environment, believed to be among the most important factors for the success of UCC and Web 2.0. As such, it provides significant theoretical and practical bases for fostering creativity in UCC content. A simplified sketch of the mediation check appears after the abstract.
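The mediation finding above (a fully mediated model fitting best) was obtained with structural equation modeling. The following is not the authors' SEM but a simplified, Baron-Kenny style regression check of the same idea, with hypothetical file and column names.

```python
# Illustrative sketch only: regression-based check of whether commitment mediates
# the effect of intrinsic motivation on content creativity. Columns are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ucc_survey.csv")  # hypothetical: intrinsic_motivation, commitment, creativity

total = smf.ols("creativity ~ intrinsic_motivation", data=df).fit()                # total effect c
path_a = smf.ols("commitment ~ intrinsic_motivation", data=df).fit()               # path a
direct = smf.ols("creativity ~ intrinsic_motivation + commitment", data=df).fit()  # paths c' and b

# Full mediation is suggested when the direct effect c' becomes non-significant
# while the indirect effect a*b remains substantial.
print("c :", total.params["intrinsic_motivation"])
print("c':", direct.params["intrinsic_motivation"], "p =", direct.pvalues["intrinsic_motivation"])
print("a*b:", path_a.params["intrinsic_motivation"] * direct.params["commitment"])
```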

A Study on the 'Zhe Zhong Pai'(折衷派) of the Traditional Medicine of Japan (일본(日本) 의학醫學의 '절충파(折衷派)'에 관(關)한 연구(硏究))

  • Park, Hyun-Kuk;Kim, Ki-Wook
    • Journal of Korean Medical classics, v.20 no.3, pp.121-141, 2007
  • The outline and characteristics of the important doctors of the 'Zhe Zhong Pai'(折衷派) are as follows. Part 1. In the late Edo(江戶) period, the 'Zhe Zhong Pai' appeared, seeking to take the theory and clinical treatment of the 'Hou Shi Pai'(後世派) and the 'Gu Fang Pai'(古方派) and draw on their strong points to perfect treatment. Their tenet was 'the ancient methods are the main part, and the later prescriptions are to be used'(以古法爲主, 後世方爲用); the "Shang Han Lun(傷寒論)" was revered for its treatments, but in actual practice they did not confine themselves to it. As mentioned above, the 'Zhe Zhong Pai' viewed treatment as the base, which was the view of most doctors in the Edo period. However, the reason the 'Zhe Zhong Pai' is not valued as highly as the 'Gu Fang Pai' in Japanese medical history books is that it lacked the substantiation and uniqueness of the 'Gu Fang Pai', and its attitude of 'gather as well as store up' was the same as that of the 'Kao Zheng Pai'. Moreover, the 'compromise'(折衷) point of view also came from taking in both Chinese and Western medical knowledge systems(漢蘭折衷). Generally, the pioneer of the 'Zhe Zhong Pai' is seen as Mochizuki Rokumon(望月鹿門), followed by Fukui Futei(福井楓亭), Wadato Kaku(和田東郭), Yamada Seichin(山田正珍) and Taki Motohiro(多紀元簡). Part 2. The lives of Wada Tokaku(和田東郭), Nakagame Kinkei(中神琴溪), and Nei Teng Xi Zhe(內藤希哲), important doctors of the 'Zhe Zhong Pai', are as follows. First, Wada Tokaku(和田東郭, 1743-1803) was born when the 'Hou Shi Pai' was already declining and the 'Gu Fang Pai' was flourishing, and learned medicine from a 'Hou Shi Pai' doctor, Hu Tian Xu Shan(戶田旭山), and a 'Gu Fang Pai' doctor, Yoshimasu Todo(吉益東洞). He was not bound by 'the old ways'(古方) and did not lean toward 'the new ways'(後世方), but formed a way of compromise that 'regarded hardness and softness as working together'(剛柔相摩) by setting 'the cure of the disease' as the base; he said that to cure diseases 'the old way' must be used, but 'the new way' is necessary to supplement its shortcomings. His works include "Dao Shui Suo Yan", "Jiao Chiang Fang Yi Je" and "Yi Xue Sho(醫學說)". Second, Nakagame Kinkei(中神琴溪, 1744-1833) was famous for leaving Yoshimasu Todo(吉益東洞) and changing to the 'Zhe Zhong Pai'; in his early years he used qing fen(輕粉) to cure geisha(妓女) of syphilis. His arguments were that "the "Shang Han Lun" must be revered but needs to be adapted", "Zhongjing can be made into a follower, but I cannot become his follower", and "later medical texts such as "Ru Men Shi Qin(儒門事親)" should be used only for their prescriptions and not their theories". His works include "Shang Han Lun Yue Yan(傷寒論約言)". Third, Nei Teng Xi Zhe(內藤希哲, 1701-1735) learned medicine from Qing Shui Xian Sheng(淸水先生) and went out to Edo. In his book "Yi Jing Jie Huo Lun(醫經解惑論)" he tells how he went from 'learning'(學) to 'skepticism'(惑) and how skepticism made him learn, in 'the six skepticisms'(六惑). In his later years Xi Zhe(希哲) combined the "Shen Nong Ben Cao Jing(神農本草經)", the main text of herbal medicine, the "Ming Tang Jing(明堂經)" of acupuncture, the basic theory texts "Huang Di Nei Jing(黃帝內經)" and "Nan Jing(難經)", and the "Shang Han Za Bing Lun", a book that the 'Gu Fang Pai' regarded as opposed to the rest, and became 'an expert of the five scriptures'(五經一貫). Part 3. Asada Showhaku(淺田宗伯, 1815-1894) began medicine under Zhong Cun Zhong(中村中倧), learned 'the old way'(古方) from Yoshimasu Todo, gained experience through Chuan Yue(川越) and Fu Jing(福井), and received teachings in texts, history and Wang Yangming's principles(陽明學) from famous teachers. Showhaku(宗伯) met a medical official of the bakufu(幕府), Ben Kang Zong Yuan(本康宗圓), received help from the three great doctors of the Edo period, Taki Motokato(多紀元堅), Xiao Dao Xue Gu(小島學古) and Xi Duo Cun Kao Chuang, and further developed his art. At 47 he diagnosed the shogun Jia Mao(家茂) with 'heart failure from beriberi'(脚氣衝心) and became a Zheng Shi(徵I); at 51 he cured a minister from France and received a present from Napoleon; at 65 he became the court physician and saved Ming Gong(明宮) Jia Ren Qin Wang(嘉仁親王, later the 大正天皇) from bodily convulsions, becoming 'the vassal of merit who saved the national polity(國體)'. In the 7th year of Meiji(明治) he became the second owner of Wen Zhi She(溫知社) and took part in the 'kampo continuation movement'. In his later years he saw 14,000 patients a year, from which we can estimate the quality and quantity of his clinical skills. Showhaku(宗伯) wrote over 80 books, including the "Ju Chuang Shu Ying(橘窓書影)", "Wu Wu Yao Shi Fang Han(勿誤藥室方函)", "Shang Han Biang Shu(傷寒辨術)", "Jing Qi Shen Lun(精氣神論)", "Huang Guo Ming Yi Chuan(皇國名醫傳)" and the "Xian Jhe Yi Hua(先哲醫話)". In particular, in the "Ju Chuang Shu Ying(橘窓書影)" he states that "the old methods are the main, and the new prescriptions are to be used"(以古法爲主, 後世方爲用), expressing the 'Zhe Zhong Pai' way of thinking. In the first volume of "Shang Han Biang Shu(傷寒辨術)" and in the 'Zong Ping'(總評) of "Za Bing Lun Shi(雜病論識)", he discerns the parts that are not Zhang Zhongjing's writing and emphasizes his theories and their practical uses.

Subject-Balanced Intelligent Text Summarization Scheme (주제 균형 지능형 텍스트 요약 기법)

  • Yun, Yeoil;Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.25 no.2, pp.141-166, 2019
  • Recently, channels such as social media and SNS create enormous amounts of data, and among all kinds of data the portion of unstructured data represented as text has increased geometrically. Because it is difficult to examine all text data, it is important to access the data rapidly and grasp the key points of the text. To meet this need for efficient understanding, many studies on text summarization for handling and using tremendous amounts of text data have been proposed. In particular, many summarization methods using machine learning and artificial intelligence algorithms have recently been proposed to generate summaries objectively and effectively, so-called "automatic summarization". However, almost all text summarization methods proposed to date construct the summary based on the frequency of contents in the original documents. Such summaries have a limitation in covering low-weight subjects that are mentioned less often in the original text. If a summary includes only the contents of the major subjects, bias occurs and information is lost, making it hard to ascertain every subject a document contains. To avoid this bias, it is possible to summarize with a balance between the topics of a document so that all subjects can be ascertained, but an unbalanced distribution between those subjects still remains. To retain a balance of subjects in the summary, it is necessary to consider the proportion of every subject the documents originally have and to allocate the portions of subjects equally, so that even sentences of minor subjects are sufficiently included in the summary. In this study, we propose a "subject-balanced" text summarization method that secures balance between all subjects and minimizes the omission of low-frequency subjects. For the subject-balanced summary, we use two summary evaluation criteria, "completeness" and "succinctness". Completeness means that the summary should fully include the contents of the original documents, and succinctness means that the summary has minimal duplication within itself. The proposed method has three phases. The first phase constructs the subject term dictionaries. Topic modeling is used to calculate topic-term weights, which indicate the degree to which each term is related to each topic. From the derived weights, highly related terms for every topic can be identified, and the subjects of the documents can be found from topics composed of terms with similar meanings. A few terms that represent each subject well are then selected; these are called "seed terms". However, these terms are too few to explain each subject sufficiently, so additional terms similar to the seed terms are needed for a well-constructed subject dictionary. Word2Vec is used for word expansion, finding terms similar to the seed terms. Word vectors are created by Word2Vec modeling, and from those vectors the similarity between all terms can be derived using cosine similarity: the higher the cosine similarity between two terms, the stronger the relationship between them is taken to be. Terms with high similarity to the seed terms of each subject are therefore selected, and after filtering these expanded terms the subject dictionary is finally constructed. The next phase allocates subjects to every sentence of the original documents. To grasp the contents of all sentences, a frequency analysis is first conducted on the terms composing the subject dictionaries. The TF-IDF weight of each subject is calculated after the frequency analysis, which makes it possible to figure out how much each sentence explains each subject. However, TF-IDF weights can increase without bound, so the TF-IDF weights of the subjects in every sentence are normalized to values between 0 and 1. Each sentence is then allocated to the subject with the maximum TF-IDF weight, and sentence groups are finally constructed for each subject. The last phase is summary generation. Sen2Vec is used to compute the similarity between the sentences of each subject, forming a similarity matrix. By repeatedly selecting sentences, it is possible to generate a summary that fully includes the contents of the original documents while minimizing duplication within the summary itself. For evaluation of the proposed method, 50,000 TripAdvisor reviews were used to construct the subject dictionaries and 23,087 reviews were used to generate summaries. A comparison between the summary from the proposed method and a frequency-based summary was also performed, and the results verify that the summary from the proposed method better retains the balance of all the subjects the documents originally have. A brief sketch of the dictionary construction and subject allocation steps appears after the abstract.
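The first two phases described above (seed-term expansion with Word2Vec and TF-IDF-based subject allocation) might look roughly like the sketch below. It is not the authors' code; the toy corpus, seed terms, and model parameters are assumptions made for illustration.

```python
# Illustrative sketch only: expand seed terms into subject dictionaries with Word2Vec,
# then assign each sentence to the subject with the highest TF-IDF weight.
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "the room was clean and the bed was comfortable",
    "breakfast buffet had great variety and fresh fruit",
    "staff at the front desk were friendly and helpful",
]
tokenized = [doc.split() for doc in corpus]

# Phase 1: subject dictionaries from seed terms expanded by embedding similarity.
w2v = Word2Vec(tokenized, vector_size=50, window=3, min_count=1, seed=0)
seeds = {"room": ["room", "bed"], "food": ["breakfast"], "service": ["staff"]}
subject_dict = {
    subj: set(terms) | {w for t in terms for w, _ in w2v.wv.most_similar(t, topn=3)}
    for subj, terms in seeds.items()
}

# Phase 2: allocate each sentence to the subject whose dictionary terms carry
# the highest total TF-IDF weight in that sentence.
vec = TfidfVectorizer()
tfidf = vec.fit_transform(corpus)
vocab = vec.vocabulary_
for i, doc in enumerate(corpus):
    scores = {subj: sum(tfidf[i, vocab[t]] for t in terms if t in vocab)
              for subj, terms in subject_dict.items()}
    print(doc, "->", max(scores, key=scores.get))
```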

An Empirical Study on Statistical Optimization Model for the Portfolio Construction of Sponsored Search Advertising(SSA) (키워드검색광고 포트폴리오 구성을 위한 통계적 최적화 모델에 대한 실증분석)

  • Yang, Hognkyu;Hong, Juneseok;Kim, Wooju
    • Journal of Intelligence and Information Systems, v.25 no.2, pp.167-194, 2019
  • This research starts from four basic concepts that confront decision making in keyword bidding: incentive incompatibility, limited information, myopia, and the decision variable. To make these concepts concrete, four framework approaches are designed: a strategic approach for incentive incompatibility, a statistical approach for limited information, alternative optimization for myopia, and a new model approach for the decision variable. The purpose of this research is to propose, through empirical tests, a statistical optimization model for constructing the portfolio of Sponsored Search Advertising (SSA) from the sponsor's perspective, which can be used in portfolio decision making. Previous research has formulated CTR estimation models using CPC, Rank, Impression, CVR, etc., individually or collectively as independent variables. However, many of these variables are not controllable in keyword bidding; only CPC and Rank can be used as decision variables in the bidding system. The classical SSA model is built on the basic assumption that CPC is the decision variable and CTR is the response variable, but this classical model faces many hurdles in the estimation of CTR. The main problem is the uncertainty between CPC and Rank: in keyword bidding, CPC fluctuates continuously even at the same Rank. This uncertainty raises questions about the credibility of CTR estimates, along with practical management problems. Sponsors make decisions in keyword bidding under limited information, and a strategic portfolio approach based on statistical models is necessary. To solve the problem of the classical SSA model, a new SSA model framework is designed on the basic assumption that Rank is the decision variable. Rank has been proposed as the best decision variable for predicting CTR in many papers, and most search engine platforms provide options and algorithms that make it possible to bid by Rank, so sponsors can participate in keyword bidding with Rank. This paper therefore tests the validity of the new SSA model and its applicability to constructing the optimal portfolio in keyword bidding. The research process is as follows. To perform the optimization analysis for constructing the keyword portfolio under the new SSA model, this study proposes criteria for categorizing keywords, selects representative keywords for each category, shows the non-linear relationships, screens scenarios for CTR and CPC estimation, selects the best-fit model through goodness-of-fit (GOF) tests, formulates the optimization models, confirms the spillover effects, and suggests a modified optimization model reflecting spillover together with some strategic recommendations. Optimization models using these CTR/CPC estimation models are tested empirically with the objective functions of (1) maximizing CTR (the CTR optimization model) and (2) maximizing expected profit reflecting CVR (the CVR optimization model). Both the CTR and CVR optimization tests show significant improvements, confirming that the suggested SSA model is valid for constructing the keyword portfolio with the CTR/CPC estimation models proposed in this study. However, one critical problem is found in the CVR optimization model: important keywords are excluded from the keyword portfolio because of the myopia of immediately low profit at present. To solve this problem, a Markov chain analysis is carried out and the concepts of Core Transit Keyword (CTK) and Expected Opportunity Profit (EOP) are introduced. A revised CVR optimization model is proposed, tested, and shown to be valid for constructing the portfolio. The strategic guidelines and insights are as follows. Brand keywords are usually dominant in almost every aspect - CTR, CVR, expected profit, etc. It is found, however, that generic keywords are the CTK and have spillover potential that may increase consumers' awareness and lead them to brand keywords; that is why generic keywords should be the focus of keyword bidding. The contributions of the thesis are to propose the novel SSA model with Rank as the decision variable, to propose managing the keyword portfolio by categories according to the characteristics of keywords, to propose statistical modelling and management based on Rank in constructing the keyword portfolio, and, through empirical tests, to propose new strategic guidelines focusing on the CTK and a modified CVR optimization objective function that reflects the spillover effect instead of the previous expected profit models. A small sketch of a Rank-based portfolio optimization appears after the abstract.
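A Rank-based portfolio optimization of the kind formulated above can be sketched as a small integer program: choose one target rank per keyword to maximize expected clicks under a budget. This is not the paper's exact formulation; the CTR/CPC estimates, impression counts, and budget below are hypothetical, and PuLP is used only as a convenient solver.

```python
# Illustrative sketch only: pick one rank per keyword to maximize expected clicks
# subject to a daily budget, with Rank as the decision variable.
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum, value

# (keyword, rank) -> (estimated CTR, estimated CPC in KRW); all values assumed.
est = {
    ("brand_kw", 1): (0.080, 300), ("brand_kw", 3): (0.050, 180),
    ("generic_kw", 1): (0.030, 700), ("generic_kw", 3): (0.018, 400),
}
impressions = {"brand_kw": 10000, "generic_kw": 30000}
budget = 400000  # KRW per day, assumed

prob = LpProblem("ssa_portfolio", LpMaximize)
x = {kr: LpVariable(f"x_{kr[0]}_{kr[1]}", cat=LpBinary) for kr in est}

# Objective: expected clicks = impressions * CTR at the chosen rank.
prob += lpSum(impressions[k] * est[(k, r)][0] * x[(k, r)] for (k, r) in est)
# Budget constraint: expected spend = clicks * CPC.
prob += lpSum(impressions[k] * est[(k, r)][0] * est[(k, r)][1] * x[(k, r)]
              for (k, r) in est) <= budget
# Choose at most one rank per keyword.
for k in impressions:
    prob += lpSum(x[(k, r)] for (kk, r) in est if kk == k) <= 1

prob.solve()
print({kr: x[kr].value() for kr in est}, "expected clicks:", value(prob.objective))
```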

Documentation of Intangible Cultural Heritage Using Motion Capture Technology: Focusing on the Documentation of Seungmu, Salpuri and Taepyeongmu (부록 3. 모션캡쳐를 이용한 무형문화재의 기록작성 - 국가지정 중요무형문화재 승무·살풀이·태평무를 중심으로 -)

  • Park, Weonmo;Go, Jungil;Kim, Yongsuk
    • Korean Journal of Heritage: History & Science, v.39, pp.351-378, 2006
  • With the development of media, the methods for documenting intangible cultural heritage have also developed and diversified. In addition to the previous analogue methods of documentation, new multimedia technologies focusing on digital pictures, sound sources, movies, etc. have recently been applied. Among the new technologies, the documentation of intangible cultural heritage using 'motion capture' has proved especially valuable in fields that require three-dimensional documentation, such as dance and performance. Motion capture refers to a documentation technology that records the signals of the time-varying positions derived from sensors attached to the surface of an object. It converts the signals from the sensors into digital data that can be plotted as points in the virtual coordinates of a computer and records the movement of those points over a period of time as the object moves. By rendering digital data that represent the virtual motion of a holder of an intangible cultural heritage, it produces scientific data for the preservation of that heritage. The National Research Institute of Cultural Properties (NRICP) has been working on the development of a new documentation method for the Important Intangible Cultural Heritage designated by the Korean government, using motion capture equipment that is also widely used for computer graphics in the movie and game industries. The project is designed to apply motion capture technology over three years, from 2005 to 2007, to 11 performances from 7 traditional dances whose body gestures have considerable value among the Important Intangible Cultural Heritage performances, supported by lottery funds. In 2005, the first year of the project, data were accumulated for solo dances that are relatively easy in terms of performing skills: Seungmu (monk's dance), Salpuri (a solo dance for spiritual cleansing), and Taepyeongmu (dance of peace). In 2006, group dances such as Jinju Geommu (Jinju sword dance), Seungjeonmu (dance for victory), and Cheoyongmu (dance of Lord Cheoyong) will be documented. In the last year of the project, 2007, an education programme for the comparative study, analysis, and transmission of intangible cultural heritage, together with three-dimensional contents for public service, will be devised based on the accumulated data, in addition to the documentation of Hakyeonhwadae Habseolmu (crane dance combined with the lotus blossom dance). By describing the processes and results of the motion capture documentation of Salpuri (Lee Mae-bang), Taepyeongmu (Kang Seon-young) and Seungmu (Lee Mae-bang, Lee Ae-ju and Jung Jae-man) conducted in 2005, this report introduces a new approach to the documentation of intangible cultural heritage. During the first year of the project, two questions were raised. First, how can the motions of a holder (dancer) be captured without cutoffs during a long performance? After many tests, the motion capture system proved stable, producing continuous results. Second, how can the motion be reproduced accurately without the re-targeting process? For the first time in Korea, the project derived digital data of the dancers' body shapes before the motion capture process and thereby re-created the dancers' gestures most accurately. The accurate three-dimensional body models of the four holders obtained by body scanning enhanced the accuracy of the motion capture of the dances. A minimal sketch of how captured marker data can be represented appears after the abstract.
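As a minimal illustration of the data structure the passage describes, captured marker positions can be stored as a (frames x markers x 3) array of 3D coordinates sampled over time; the example below builds such an array with random stand-in values and computes one marker's speed. The capture rate, marker count, and data are assumptions, not values from the NRICP project.

```python
# Illustrative sketch only: motion-capture output as a (frames, markers, 3) array.
import numpy as np

FPS = 120                      # assumed capture rate (frames per second)
n_frames, n_markers = 600, 41  # 5 seconds of movement, 41 body markers (assumed)

rng = np.random.default_rng(0)
positions = rng.normal(size=(n_frames, n_markers, 3))  # stand-in for recorded data

# Speed of marker 0: frame-to-frame displacement times the capture rate.
disp = np.linalg.norm(np.diff(positions[:, 0, :], axis=0), axis=1)
print("mean speed of marker 0:", (disp * FPS).mean())
```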