• Title/Summary/Keyword: Traditional Information


Perception of Korean Residential Gardens and Gardening in the 1920~30s (1920~30년대 한국 주택정원 인식과 정원가꾸기 양상)

  • Gil, Jihye; Park, Hee-Soung
    • Journal of the Korean Institute of Landscape Architecture / v.50 no.2 / pp.138-148 / 2022
  • The 1920s and 1930s were a period when new trends became prominent in Korean housing architecture. This study began with a curiosity about the appearance of residential gardens during this transition period, when housing types were changing. Since gardens are living spaces that constantly evolve, it is not easy to give a clear picture of their development. However, through popular magazines and newspaper articles published in the 1920s and 1930s, this study investigated how people perceived gardens socially and how they engaged in gardening. First, the study of Gyeongseong's urbanization process revealed that people perceived gardens as a way to bring natural beauty into the urban environment; therefore, the creation of residential gardens was strongly encouraged. Second, the housing improvement movement, which architects actively discussed during this period, emphasized that a garden can improve the quality of the residential environment in terms of hygiene and landscape aesthetics. Third, as the media provided information on gardening, it was confirmed that the number of people engaged in gardening as a hobby increased. As designers and gardeners who had received a modern education became more active, the concept of "designed gardens" took shape. Lastly, although houses were divided into various types, the shapes of gardens did not differ significantly according to architectural type; they tended to embrace the ideal garden design and style of the time. Therefore, even in a traditional hanok, Western-style gardens were naturally harmonized into the overall architecture, and exotic plant species could be found. Although the gardens found in media images were limited to the homes of the intelligentsia, they can be regarded as representative, considering the popularity and reach of the media. This study therefore contributes to the literature by identifying the ideal gardens and gardening practices of the 1920s and 1930s.

Social Backgrounds and Clan Politics of Kazakhstan Elites: Focusing on Elites from Junior Zhuz (카자흐스탄 엘리트의 사회배경과 씨족 정치: 소주즈(Zhuz) 출신 엘리트를 중심으로)

  • Bang, Ilkwon
    • Journal of International Area Studies (JIAS) / v.14 no.1 / pp.77-106 / 2010
  • Regarding the patron-client (guardianship-benefit) networks that lie at the heart of discussions of power elites and clan politics in Kazakhstan, it has often been maintained that they are formed along the regional and descent-based framework called Zhuz, or that they are at least heavily influenced by it. However, the Zhuz argument suffers from many limitations in explaining the composition of power elites amid recent political changes and the rearrangement of power relations. Accordingly, this paper takes a closer look at the social backgrounds of elites from the Junior Zhuz, who are generally thought to have been relatively marginalized in advancing into central power. It finds that clan and tribal origins within the Zhuz provide no solid basis for explaining how individuals grew into power elites. The Kazakhstani elite phenomenon is a legacy of concrete historical circumstances: the considerations relevant to analyzing the emergence of elites in a nomadic, traditional society can hardly serve as an invariable framework for analyzing modern elites since independence. Since 2000, Kazakhstan has experienced economic changes including privatization, driven by the absolute strengthening of presidential influence, which became the foundation of a new authoritarian system, the rearrangement of the inner circle of power, and its decisions. These changes have had profound effects on the character of power elites. The fact that clandestine connections have surfaced intertwined with various factors, particularly in the economic field where the Junior Zhuz has been heavily represented, suggests that elite organization in Kazakhstan has always been the product of political and economic change. In reality, the behavior of elites has continuously reflected the environments surrounding them, and those environments lie in a complicated, multi-layered web of connections. Therefore, attending to elites' social backgrounds and maintaining detailed information on them will be a more useful approach to analyzing elite society in the future, since such backgrounds reveal the various networks that reflect real situations.

Biochemical Assessment of Deer Velvet Antler Extract and its Cytotoxic Effect including Acute Oral Toxicity using an ICR Mice Model (ICR 마우스 모델을 이용한 녹용 추출물의 생화학적 평가 및 급성 경구 독성을 포함한 세포 독성 효과)

  • Ramakrishna Chilakala; Hyeon Jeong Moon; Hwan Lee; Dong-Sung Lee; Sun Hee Cheong
    • Journal of Food Hygiene and Safety / v.38 no.6 / pp.430-441 / 2023
  • Velvet antler is widely used as a traditional medicine, and numerous studies have demonstrated its substantial nutritional and medicinal value, including immunity-enhancing effects. This study investigated different deer velvet extracts (Sample 1: raw extract, Sample 2: dried extract, and Sample 3: freeze-dried extract) for proximate composition, uronic acid, sulfated glycosaminoglycan, sialic acid, and collagen levels, as well as chemical components, using ultra-performance liquid chromatography-quadrupole-time-of-flight mass spectrometry. In addition, we evaluated the cytotoxic effect of the deer velvet extracts on BV2 microglia, HT22 hippocampal cells, HaCaT keratinocytes, and RAW264.7 macrophages using the MTT cell viability assay. Furthermore, we evaluated the acute toxicity of the deer velvet extracts at different doses (0, 500, 1000, and 2000 mg/kg) administered orally to both male and female ICR mice for 14 d (five mice per group). After treatment, we evaluated general toxicity, survival rate, body weight changes, mortality, clinical signs, and necropsy findings in the experimental mice based on OECD guidelines. The results suggested that in vitro treatment with the evaluated extracts had no cytotoxic effect on HaCaT keratinocytes, whereas Sample 2 was cytotoxic at 500 and 1000 µg/mL to HT22 hippocampal cells and RAW264.7 macrophages. Sample 3 was also cytotoxic at 500 and 1000 µg/mL to RAW264.7 macrophages and BV2 microglial cells. However, mice treated in vivo with the velvet extracts at doses of 500-2000 mg/kg BW showed no clinical signs, mortality, or abnormal necropsy findings, indicating that the LD50 is higher than this dosage. These findings indicate that there were no toxicological abnormalities associated with the deer velvet extract treatment in mice. However, further human and animal studies are needed before sufficient safety information is available to justify its use in humans.

A Study on the Prediction Model of Stock Price Index Trend based on GA-MSVM that Simultaneously Optimizes Feature and Instance Selection (입력변수 및 학습사례 선정을 동시에 최적화하는 GA-MSVM 기반 주가지수 추세 예측 모형에 관한 연구)

  • Lee, Jong-sik; Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.23 no.4 / pp.147-168 / 2017
  • Accurate stock market forecasting has long been studied in academia, and various forecasting models using diverse techniques now exist. Recently, many attempts have been made to predict stock indices using machine learning methods, including deep learning. Although both fundamental analysis and technical analysis are used in traditional stock investment, technical analysis is more suitable for short-term prediction and for the application of statistical and mathematical techniques. Most studies using technical indicators have modeled stock price prediction as a binary classification of future market movement (usually the next trading day) into rising or falling. However, such binary classification has many drawbacks for predicting trends, identifying trading signals, or signaling portfolio rebalancing. In this study, we predict the stock index by extending the existing binary scheme to a multi-class scheme of stock index trends (upward trend, boxed, downward trend). To solve this multi-class classification problem, instead of techniques such as Multinomial Logistic Regression (MLOGIT), Multiple Discriminant Analysis (MDA), or Artificial Neural Networks (ANN), we propose an optimization model that uses a Genetic Algorithm as a wrapper to improve the performance of Multi-class Support Vector Machines (MSVM), which have proved superior in prediction performance. In particular, the proposed model, named GA-MSVM, is designed to maximize model performance by optimizing not only the kernel function parameters of the MSVM but also the selection of input variables (feature selection) and of training instances (instance selection). To verify the performance of the proposed model, we applied it to real data. The results show that the proposed method is more effective than the conventional MSVM, which has been known to show the best prediction performance to date, as well as existing artificial intelligence and data mining techniques such as MDA, MLOGIT, and CBR. In particular, instance selection was confirmed to play a very important role in predicting the stock index trend, contributing more to the model's improvement than the other factors. To verify the usefulness of GA-MSVM, we applied it to forecasting the trend of Korea's KOSPI200 stock index. Our research is primarily aimed at predicting trend segments in order to capture signals or short-term trend transition points. The experimental data set includes technical indicators of the KOSPI200 index, such as price and volatility measures (2004-2017), and macroeconomic data (interest rate, exchange rate, S&P 500, etc.). Using various statistical methods including one-way ANOVA and stepwise MDA, 15 indicators were selected as candidate independent variables. The dependent variable, trend class, took three states: 1 (upward trend), 0 (boxed), and -1 (downward trend). For each class, 70% of the data was used for training and the remaining 30% for validation. To verify the performance of the proposed model, comparative experiments with MDA, MLOGIT, CBR, ANN, and MSVM were conducted. The MSVM adopted the One-Against-One (OAO) approach, which is known as the most accurate among the various MSVM approaches. Although there are some limitations, the final experimental results demonstrate that the proposed GA-MSVM model performs at a significantly higher level than all comparative models.
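
The abstract above describes a GA wrapper that jointly searches feature subsets, training instances, and kernel parameters. The sketch below is a hypothetical, minimal illustration of that idea, not the authors' implementation: a chromosome concatenates a binary feature mask, a binary instance mask, and two real-valued genes for the RBF parameters C and gamma, and fitness is the validation accuracy of a one-against-one multi-class SVM. `X`, `y`, and all GA settings are assumptions.

```python
# Minimal GA wrapper sketch for joint feature/instance/parameter selection (assumed data X, y).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def evaluate(chromosome, X_tr, y_tr, X_val, y_val, n_features):
    """Decode a chromosome into feature mask, instance mask, and RBF parameters."""
    f_mask = chromosome[:n_features].astype(bool)
    i_mask = chromosome[n_features:-2].astype(bool)
    if f_mask.sum() == 0 or i_mask.sum() < 10:          # reject degenerate selections
        return 0.0
    C = 10 ** (chromosome[-2] * 4 - 2)                   # C in [1e-2, 1e2]
    gamma = 10 ** (chromosome[-1] * 4 - 3)               # gamma in [1e-3, 1e1]
    clf = SVC(C=C, gamma=gamma, kernel="rbf")            # SVC is one-vs-one (OAO) for multi-class
    clf.fit(X_tr[np.ix_(i_mask, f_mask)], y_tr[i_mask])
    return clf.score(X_val[:, f_mask], y_val)

def ga_msvm(X, y, pop_size=30, generations=20):
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
    n_feat, n_inst = X_tr.shape[1], X_tr.shape[0]
    n_genes = n_feat + n_inst + 2                        # feature bits + instance bits + (C, gamma)
    pop = rng.random((pop_size, n_genes))
    pop[:, :-2] = (pop[:, :-2] > 0.5)                    # binary part of the chromosome
    for _ in range(generations):
        fitness = np.array([evaluate(ind, X_tr, y_tr, X_val, y_val, n_feat) for ind in pop])
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]        # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_genes - 1)                     # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_genes) < 0.01                      # mutation
            child[:-2] = np.where(flip[:-2], 1 - child[:-2], child[:-2])
            child[-2:] = np.clip(child[-2:] + flip[-2:] * rng.normal(0, 0.1, 2), 0, 1)
            children.append(child)
        pop = np.vstack([parents, children])
    fitness = np.array([evaluate(ind, X_tr, y_tr, X_val, y_val, n_feat) for ind in pop])
    return pop[fitness.argmax()], fitness.max()
```

A single held-out validation split is used here only to keep the sketch short; a cross-validated fitness function would be closer to the evaluation style the abstract describes.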

A Contemplation on Measures to Advance Logistics Centers (물류센터 선진화를 위한 발전 방안에 대한 소고)

  • Sun, Il-Suck; Lee, Won-Dong
    • Journal of Distribution Science / v.9 no.1 / pp.17-27 / 2011
  • As the world becomes more globalized, business competition grows fiercer, while consumers' demand for less expensive, high-quality products increases. Businesses strive to secure a competitive edge in costs and services, and the logistics industry, that is, the industry that stores and transports goods, once regarded merely as an expense, is beginning to be seen as a third source of profit and new income. Logistics centers are central to storage, loading and unloading of deliveries, packaging operations, and the distribution of information about goods. As hubs for various deliveries, they also serve as core infrastructure that smoothly coordinates manufacturing and selling, using varied information and operation systems. Logistics centers are growing into centers of business supply activities, moving beyond their previous role of primarily storing goods. They are no longer just facilities; they have become logistics strongholds that encompass features ranging from demand forecasting to the regulation of supply, manufacturing, and sales by realizing SCM, taking into account marketability and the operation of services and products. However, despite these changes in logistics operations, some centers have been unable to shed their past role as warehouses. For the continuous development of logistics centers, various measures are needed, including a revision of current supporting policies, the formulation of effective management plans, and the establishment of systematic standards for founding, managing, and controlling logistics centers. To this end, this research explored previous studies on the use and effectiveness of logistics centers. From a theoretical perspective, an evaluation of the overall introduction, purposes, and transitions in the use of logistics centers identified issues to consider and suggested measures to promote and further advance logistics centers. First, a fact-finding survey to support demand forecasting and standardization is needed. As logistics newspapers predicted that after 2012 supply would exceed demand, causing rents to fall, the business environment for logistics centers has faltered. However, since fact-finding surveys on actual demand for domestic logistics centers are scarce, it is hard to predict what the future holds for this industry. Accordingly, the first priority should be to grasp the current market situation by conducting accurate domestic and international fact-finding surveys. Based on these, management and evaluation indicators should be developed to build the foundation for the consistent advancement of logistics centers. Second, many policies for logistics centers should be revised or developed. Above all, a guideline for fair trade between shippers and commercial logistics centers should be enacted. Since there are no standards for fair trade between them, unfair trade practices have spread according to market custom, disrupting market order, and the logistics industry now faces its own difficulties. Therefore, cases of unfair trade that currently plague logistics centers should be collected by the industry, and fair trade guidelines should be established and implemented. In addition, restrictive employment regulations for foreign workers should be eased, and logistics centers should be charged industrial rates for electricity. Third, various measures should be taken to improve the management environment. First, we need to find ways to activate value-added logistics. Because the traditional purpose of logistics centers was storage and loading/unloading of goods, their profitability had a limit, and the need arose to find a new angle for creating value-added services. Logistics centers have been perceived as support for a company's storage, manufacturing, and sales needs, not as creators of profit; their role in a company's economics has been to lower costs. However, as the management environment for logistics has changed, developing profit creation as a new function, alongside the storage purpose, is a desirable goal, and to achieve it, value-added logistics should be promoted. Logistics centers can also be improved through cost estimation. They have made some strides in facility development but have fallen behind in other areas, particularly in management. Lax management has been widespread because the industry has not developed a concept of cost estimation. The centers have made efforts toward unification, standardization, and informatization while pursuing cost reductions by establishing systems for effective management, but it has been hard to produce profits. Thus, there is an urgent need to estimate costs by determining a basic cost range for each division of work at logistics centers. This undertaking can be the first step toward improving the ineffective aspects of their operation. Ongoing research and constant effort have been devoted to improving effectiveness in the manufacturing industry, but studies on resource management in logistics centers are hardly sufficient. Thus, a plan to calculate the optimal level of resources necessary to operate a logistics center should be developed and implemented in management practice, for example, by standardizing hours of operation. If logistics centers, shippers, related trade groups, academics, and other experts launched a committee to work with the government and maintained an ongoing relationship, the mutual restraint and cooperation among members would help lead to coherent development plans for logistics centers. If the government continues its efforts to provide financial support, nurture professional workers, and maintain safety management, the continuous advancement of logistics centers can be anticipated.


A study on the second edition of Koryo Dae-Jang-Mock-Lock (고려재조대장목록고)

  • Jeong, Pil-mo
    • Journal of the Korean Society for Library and Information Science / v.17 / pp.11-47 / 1989
  • This study examines the background and procedure of the carving of the tablets of the second edition of Dae-Jang-Mock-Lock (재조대장목록), the time and route by which the tablets were moved into Haein-sa, and their contents and system. The study is mainly based on the second edition of Dae-Jang-Mock-Lock, but other closely related materials are also analyzed and closely examined, such as the restored first edition of Dae-Jang-Mock-Lock, Koryo Sin-Jo-Dae-Jang-Gyo-Jeong-Byeol-Lock (고려신조대장교정별록), Kae-Won-Seok-Kyo-Lock (개원석교록), Sok-Kae-Won-Seok-Kyo-Lock (속개원석교록), Jeong-Won-Sin-Jeong-Seok-Kyo-Lock (정원신정석교록), Sok-Jeong-Won-Seok-Kyo-Lock (속정원석교록), Dae-Jung-Sang-Bu-Beob-Bo-Lock (대중상부법보록), and Kyeong-Woo-Sin-Su-Beob-Bo-Lock (경우신수법보록). The results of this study can be summarized as follows: 1. The second edition of the Tripitaka Koreana (고려대장경) was carved for the purpose of defending the country from Mongolia with the power of Buddhism, after the tablets of the first edition at Buin-sa (부인사) were destroyed by fire. 2. In 1236, Dae-Jang-Do-Gam (대장도감) was established, and preparations for recarving the tablets began, including comparison of the contents of the first edition of the Tripitaka Koreana, Gae-Bo-Chik-Pan-Dae-Jang-Kyeong, and the Kitan Dae-Jang-Kyeong, transcription of the original copy, and preparation of the wood. 3. In 1237, after the announcement of Dae-Jang-Gyeong-Gak-Pan-Gun-Sin-Gi-Go-Mun (대장경각판군신기고문), the carving started on a full scale; seven years later (1243), Bun-Sa-Dae-Jang-Do-Gam (분사대장도감) was established in the southern area to expand and hasten the work, and a large number of the tablets were carved there. 4. It took 16 years to carve the main text and the supplements of the second edition of the Tripitaka Koreana, the main text being carved from 1237 to 1248 and the supplements from 1244 to 1251. 5. It can be supposed that the tablets of the second edition of the Tripitaka Koreana, stored at Seon-Won-Sa (선원사), Kang-Wha (강화), for about 140 years, were moved to Ji-Cheon-Sa (지천사), Yong-San (용산), and then to Hae-In-Sa (해인사), by way of the west and south seas and Jang-Gyeong-Po (장경포), Go-Ryeong (고령), in the autumn of the same year. 6. The second edition of the Tripitaka Koreana was carved mainly on the basis of the first edition, in comparison with Gae-Bo-Chik-Pan-Dae-Jang-Kyeong (개보판대장경) and the Kitan Dae-Jang-Kyeong (계단대장경); the second edition of Dae-Jang-Mock-Lock was likewise compiled mainly on the basis of the first edition, with reference to Kae-Won-Seok-Kyo-Lock and Sok-Jeong-Won-Seok-Kyo-Lock. 7. Compared with the first edition of Dae-Jang-Mock-Lock, in the second edition 7 items in 9 volumes of Kitan texts such as Weol-Deung-Sam-Mae-Gyeong-Ron (월등삼매경론) are added, 3 items in 60 volumes such as Dae-Jong-Ji-Hyeon-Mun-Ron (대종지현문론) are substituted by others from the Cheon chest (천함) to the Kaeng chest (경함), and 92 items in 601 volumes such as Beob-Won-Ju-Rim-Jeon (법원주림전) are added after the Kaeng chest; 4 items in 50 volumes such as Yuk-Ja-Sin-Ju-Wang-Kyeong (육자신주왕경) are omitted in the second edition. 8. Compared with Kae-Won-Seok-Kyo-Lock, the Cheon chest to the Young chest (영함) of the second edition is compiled according to the Ib-Jang-Lock (입장록) of Kae-Won-Seok-Kyo-Lock, but 15 items in 43 volumes such as Bul-Seol-Ban-Ju-Sam-Mae-Kyeong (불설반주삼매경) are added and 7 items in 35 volumes such as Dae-Bang-Deung-Dae-Jib-Il-Jang-Kyeong (대방등대집일장경) are omitted. 9. Compared with Sok-Jeong-Won-Seok-Kyo-Lock, 3 items in 47 volumes (or 49 volumes) are omitted and 4 items in 96 volumes are added in the Caek chest (책함) to the Mil chest (밀함) of the second edition, but the items are arranged in the same order. 10. Compared with Dae-Jung-Sang-Bu-Beob-Bo-Lock, the arrangement of the second edition is entirely different, but 170 items in 329 volumes are included in the Doo chest (두함) to the Kyeong chest (경함) of the second edition, and 53 items in 125 volumes in the Jun chest (존함) to the Jeong chest (정함); 10 items in 108 volumes in the last part of Dae-Jung-Sang-Bu-Beob-Bo-Lock are omitted, and 3 items in 131 volumes such as Beob-Won-Ju-Rim-Jeon (법원주림전) are added in the second edition. 11. Compared with Kyeong-Woo-Sin-Su-Beob-Bo-Lock, all of its items (21 items in 161 volumes) are included in the second edition, without any classificatory system, and 22 items in 172 volumes in the Seong-Hyeon-Jib-Jeon (성현집전) part, such as Myo-Gak-Bi-Jeon (묘각비전), are omitted. 12. The last part of the second edition, the Joo chest (주함) to the Dong chest (동함), includes 14 items in 237 volumes; these items cannot be found in any earlier Buddhist catalog, so they may be supposed to be Kitan texts. 13. Besides including almost all items in Kae-Won-Seok-Kyo-Lock and all items in Sok-Jeong-Won-Seok-Kyo-Lock, Dae-Jung-Sang-Bu-Beob-Bo-Lock, and Kyeong-Woo-Sin-Su-Beob-Bo-Lock, the second edition of Dae-Jang-Mock-Lock includes at least 20 items in about 300 volumes of the Kitan Tripitaka and 15 items in 43 volumes of traditional Korean Tripitaka that cannot be found anywhere else. Therefore, the Tripitaka Koreana can be said to be a comprehensive Tripitaka covering all items of the Tripitakas translated into Chinese characters.


Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.69-94 / 2017
  • Recently, growing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration rate of smart devices are producing a large amount of data. Accordingly, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis have been continuously increasing. This means that big data analysis will become more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each requester of the analysis. However, increased interest in big data analysis has activated computer programming education and the development of many programs for data analysis. Accordingly, the entry barriers to big data analysis are gradually lowering, and data analysis technology is spreading; as a result, big data analysis is expected to be performed by the requesters of analysis themselves. Along with this, interest in various kinds of unstructured data is continually increasing, and particular attention is focused on using text data. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis have been utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques utilized for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large set of documents, identifies the documents that correspond to each issue, and provides the identified documents as a cluster. It is regarded as a very useful technique in that it reflects the semantic elements of documents. Traditional topic modeling is based on the distribution of key terms across the entire document collection; thus, it is essential to analyze the entire collection at once to identify the topic of each document. This makes the analysis time-consuming when topic modeling is applied to a large number of documents, and it causes a scalability problem: processing time increases exponentially with the number of analysis objects. This problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: dividing a large number of documents into sub-units and deriving topics by repeating topic modeling on each unit. This method makes topic modeling on a large number of documents feasible with limited system resources and can improve processing speed. It can also significantly reduce analysis time and cost by analyzing documents in each location without combining the documents under analysis. However, despite these advantages, the method has two major problems. First, the relationship between local topics derived from each unit and global topics derived from the entire collection is unclear; in each unit, local topics can be identified, but global topics cannot. Second, a method for measuring the accuracy of such an approach needs to be established; that is, assuming that the global topics are the ideal answer, the deviation of the local topics from the global topics needs to be measured. Because of these difficulties, this approach has not been studied sufficiently compared with other topic modeling studies. In this paper, we propose a topic modeling approach that addresses the above two problems. First of all, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced global set (RGS) that consists of delegate documents extracted from each local set. We try to solve the first problem by mapping RGS topics and local topics. Along with this, we verify the accuracy of the proposed methodology by checking whether each document is assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirm that the proposed methodology can provide results similar to topic modeling on the entire collection, and we also propose a reasonable method for comparing the results of the two approaches.
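
As a rough illustration of the mapping idea described above, the following sketch (an assumption-laden simplification, not the paper's method) runs LDA on each local sub-cluster and maps every local topic to its nearest global topic by cosine similarity of topic-word distributions. For brevity it uses the full corpus in place of the reduced global set of delegate documents; `docs`, the number of topics, and the vectorizer settings are hypothetical.

```python
# Divide-and-conquer topic modeling sketch: local LDA runs mapped onto global topics.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

def fit_lda(doc_term_matrix, n_topics, seed=0):
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=seed)
    lda.fit(doc_term_matrix)
    # Row-normalize so each topic is a probability distribution over words.
    return lda.components_ / lda.components_.sum(axis=1, keepdims=True)

def map_local_to_global(docs, n_topics=10, n_splits=4):
    vectorizer = CountVectorizer(stop_words="english", max_features=5000)
    dtm = vectorizer.fit_transform(docs)          # shared vocabulary across all sets
    global_topics = fit_lda(dtm, n_topics)        # stand-in for the global/RGS topics
    mapping = {}
    splits = np.array_split(np.arange(dtm.shape[0]), n_splits)
    for s, idx in enumerate(splits):              # one LDA run per local sub-cluster
        local_topics = fit_lda(dtm[idx], n_topics)
        sim = cosine_similarity(local_topics, global_topics)
        mapping[s] = sim.argmax(axis=1)           # local topic -> nearest global topic
    return mapping
```

Checking whether a document receives the same mapped topic locally and globally then gives a simple accuracy measure in the spirit of the abstract's second problem.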

Product Community Analysis Using Opinion Mining and Network Analysis: Movie Performance Prediction Case (오피니언 마이닝과 네트워크 분석을 활용한 상품 커뮤니티 분석: 영화 흥행성과 예측 사례)

  • Jin, Yu; Kim, Jungsoo; Kim, Jongwoo
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.49-65 / 2014
  • Word of Mouth (WOM) is a behavior by which consumers transfer or communicate their product or service experience to other consumers. Due to the popularity of social media such as Facebook, Twitter, blogs, and online communities, electronic WOM (e-WOM) has become important to the success of products or services. As a result, most enterprises pay close attention to e-WOM for their products or services. This is especially important for movies, as these are experiential products. This paper aims to identify the network factors of an online movie community that impact box office revenue using social network analysis. In addition to traditional WOM factors (volume and valence of WOM), network centrality measures of the online community are included as influential factors in box office revenue. Based on previous research results, we develop five hypotheses on the relationships between potential influential factors (WOM volume, WOM valence, degree centrality, betweenness centrality, closeness centrality) and box office revenue. The first hypothesis is that the accumulated volume of WOM in online product communities is positively related to the total revenue of movies. The second hypothesis is that the accumulated valence of WOM in online product communities is positively related to the total revenue of movies. The third hypothesis is that the average degree centrality of reviewers in online product communities is positively related to the total revenue of movies. The fourth hypothesis is that the average betweenness centrality of reviewers in online product communities is positively related to the total revenue of movies. The fifth hypothesis is that the average closeness centrality of reviewers in online product communities is positively related to the total revenue of movies. To verify our research model, we collect movie review data from the Internet Movie Database (IMDb), which is a representative online movie community, and movie revenue data from the Box-Office-Mojo website. The movies in this analysis include the weekly top-10 movies from September 1, 2012, to September 1, 2013. We collect movie metadata such as screening periods and user ratings, and community data in IMDb including reviewer identification, review content, review times, responder identification, reply content, reply times, and reply relationships. For the same period, the revenue data from Box-Office-Mojo is collected on a weekly basis. Movie community networks are constructed based on reply relationships between reviewers. Using a social network analysis tool, NodeXL, we calculate the averages of three centralities, degree, betweenness, and closeness centrality, for each movie. Correlation analysis of the focal variables and the dependent variable (final revenue) shows that the three centrality measures are highly correlated, prompting us to perform multiple regressions separately with each centrality measure. Consistent with previous research results, our regression analysis results show that the volume and valence of WOM are positively related to the final box office revenue of movies. Moreover, the averages of betweenness centralities from initial community networks impact the final movie revenues. However, neither the averages of degree centralities nor those of closeness centralities influence final movie performance. Based on the regression results, three hypotheses, 1, 2, and 4, are accepted, and two hypotheses, 3 and 5, are rejected. This study tries to link the network structure of e-WOM in online product communities with the product's performance. Based on the analysis of a real online movie community, the results show that online community network structures can work as a predictor of movie performance. In particular, the betweenness centralities of the reviewer community are critical for the prediction of movie performance, whereas degree centralities and closeness centralities do not influence it. As future research topics, similar analyses are required for other product categories, such as electronic goods and online content, to generalize the study results.
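
A hedged sketch of the analysis pipeline described above: building a reply network per movie, averaging reviewer centralities, and regressing revenue on WOM volume, WOM valence, and one centrality measure at a time (since the three centralities are highly correlated). The record layout, field names, and data are assumptions, not the study's actual dataset.

```python
# Reply-network centrality features plus a simple revenue regression (assumed data layout).
import networkx as nx
import numpy as np
from sklearn.linear_model import LinearRegression

def movie_network_features(reply_pairs):
    """reply_pairs: list of (reviewer, responder) reply edges for one movie."""
    g = nx.Graph()
    g.add_edges_from(reply_pairs)
    return {
        "avg_degree": np.mean(list(nx.degree_centrality(g).values())),
        "avg_betweenness": np.mean(list(nx.betweenness_centrality(g).values())),
        "avg_closeness": np.mean(list(nx.closeness_centrality(g).values())),
    }

def fit_revenue_model(movies):
    """movies: list of dicts with wom_volume, wom_valence, reply_pairs, revenue."""
    X, y = [], []
    for m in movies:
        feats = movie_network_features(m["reply_pairs"])
        # One centrality per regression model, mirroring the separate regressions above.
        X.append([m["wom_volume"], m["wom_valence"], feats["avg_betweenness"]])
        y.append(m["revenue"])
    model = LinearRegression().fit(np.array(X), np.array(y))
    return model.coef_, model.intercept_
```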

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.29-45 / 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. Statistical techniques, including multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis, have traditionally been used in bond rating. However, one major drawback is that they rest on strict assumptions, including linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions of traditional statistics have limited their application to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories. SVM is simple enough to be analyzed mathematically and leads to high performance in practical applications. It implements the structural risk minimization principle and searches to minimize an upper bound of the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many data samples for training, since it builds prediction models using only representative samples near the boundary, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for solving binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not perform as well on multi-class problems as SVM does on binary-class problems. Second, approximation algorithms (e.g., decomposition methods, the sequential minimal optimization algorithm) can be used for efficient multi-class computation to reduce computation time, but they can deteriorate classification performance. Third, a difficulty in multi-class prediction problems is the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers the number of instances in another class. Such data sets often cause a default classifier to be built due to a skewed boundary, reducing classification accuracy. SVM ensemble learning is one machine learning method for coping with the above drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight of misclassified observations through iterations. Observations that are incorrectly predicted by previous classifiers are chosen more often than examples that are correctly predicted. Thus, boosting attempts to produce new classifiers that are better able to predict examples for which the current ensemble's performance is poor. In this way, it can reinforce the training of the misclassified observations of the minority class. This paper proposes a multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, it can perform the learning process considering the geometric mean-based accuracy and errors of multiple classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. 10-fold cross-validation is performed three times with different random seeds to ensure that the comparison among the three classifiers does not happen by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine. That is, the cross-validated folds are tested independently for each algorithm. Through these steps, we obtained results for the classifiers in each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy between individual classifiers, MGM-Boost (52.95%) shows higher prediction accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher prediction accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of the classifiers across the 30 folds differs significantly. The results indicate that the performance of MGM-Boost is significantly different from that of the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
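
The evaluation protocol described above, three repetitions of 10-fold cross-validation with different seeds scored by both arithmetic accuracy and the geometric mean of per-class recalls, can be sketched as follows; the classifiers, `X`, and `y` are assumptions, and MGM-Boost itself is not reproduced here.

```python
# Repeated stratified 10-fold CV with arithmetic and geometric mean-based accuracy.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, recall_score
from sklearn.svm import SVC

def geometric_mean_accuracy(y_true, y_pred):
    recalls = recall_score(y_true, y_pred, average=None)   # one recall per class
    return float(np.prod(recalls) ** (1.0 / len(recalls)))

def repeated_cv(clf, X, y, seeds=(0, 1, 2), n_splits=10):
    arith, geo = [], []
    for seed in seeds:                                      # three repetitions with different seeds
        skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
        for train_idx, test_idx in skf.split(X, y):
            clf.fit(X[train_idx], y[train_idx])
            pred = clf.predict(X[test_idx])
            arith.append(accuracy_score(y[test_idx], pred))
            geo.append(geometric_mean_accuracy(y[test_idx], pred))
    return np.mean(arith), np.mean(geo)

# Usage (assumed numpy arrays X, y of rating classes):
# svm_arith, svm_geo = repeated_cv(SVC(kernel="rbf", C=1.0), X, y)
```

The geometric mean penalizes classifiers that ignore minority rating classes, which is the motivation the abstract gives for introducing it into boosting.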

A Study on the Improvement of Recommendation Accuracy by Using Category Association Rule Mining (카테고리 연관 규칙 마이닝을 활용한 추천 정확도 향상 기법)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.27-42 / 2020
  • Traditional companies with offline stores were unable to secure large display space due to cost. This limitation inevitably meant that only limited kinds of products could be displayed on the shelves, depriving consumers of the opportunity to experience various items. Taking advantage of the virtual space of the Internet, online shopping goes beyond the physical limitations of offline shopping and can display numerous products on web pages, satisfying consumers with a variety of needs. Paradoxically, however, this can also cause consumers to experience difficulty in comparing and evaluating too many alternatives in their purchase decision-making process. As an effort to address this side effect, various kinds of purchase decision support systems for consumers have been studied, such as keyword-based item search services and recommender systems. These systems can reduce search time for items, prevent consumers from leaving while browsing, and contribute to the seller's increased sales. Among these systems, recommender systems based on association rule mining techniques can effectively detect interrelated products from transaction data such as orders. The association between products obtained by statistical analysis provides clues for predicting how interested consumers will be in another product. However, since the algorithm is based on transaction counts, products that have not yet sold enough in the early days after launch may not be included in the list of recommendations even though they are highly likely to sell. Such missing items may not get sufficient exposure to consumers to record sufficient sales, and then fall into a vicious cycle of declining sales and omission from the recommendation list. This is an inevitable outcome when recommendations are based on past transaction histories rather than on potential future sales. This study started from the idea that indirectly capturing this potential would help select products worth recommending. In light of the fact that the attributes of a product affect consumers' purchasing decisions, this study was conducted to reflect them in the recommender system. In other words, consumers who visit a product page have shown interest in the attributes of that product and would also be interested in other products with the same attributes. On this assumption, the recommender system can use these attributes to select recommendations that achieve a higher acceptance rate. Given that a category is one of the main attributes of a product, it can be a good indicator not only of direct associations between two items but also of potential associations that have yet to be revealed. Based on this idea, the study devised a recommender system that reflects not only associations between products but also categories. Through regression analysis, the two kinds of associations were combined to form a model that predicts the hit rate of recommendations. To evaluate the performance of the proposed model, another regression model was also developed based only on associations between products. Comparative experiments were designed to be similar to the environment in which products are actually recommended in online shopping malls. First, the association rules for all possible combinations of antecedent and consequent items were generated from the order data. Then, hit rates for each of the association rules were predicted from the support and confidence calculated by each of the models. The comparative experiments using order data collected from an online shopping mall show that recommendation accuracy can be improved by reflecting not only the associations between products but also their categories when recommending related products. The proposed model showed a 2 to 3 percent improvement in hit rates compared to the existing model. From a practical point of view, it is expected to have a positive effect on improving consumers' purchasing satisfaction and increasing sellers' sales.
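
As a rough, hypothetical sketch of combining item-level and category-level association strength into a hit-rate prediction model, the code below counts pairwise co-occurrences at both levels from order data and feeds the two confidence values into a linear regression; `orders`, `category_of`, and the treatment of same-category pairs are assumptions rather than the paper's exact formulation.

```python
# Item-level and category-level confidence features for a hit-rate regression (assumed data).
from collections import Counter
from itertools import combinations
import numpy as np
from sklearn.linear_model import LinearRegression

def pair_confidences(orders, category_of):
    """orders: iterable of item-ID collections; category_of: dict mapping item -> category."""
    item_count, cat_count = Counter(), Counter()
    item_pair, cat_pair = Counter(), Counter()
    for order in orders:
        items = set(order)
        cats = {category_of[i] for i in items}
        item_count.update(items)
        cat_count.update(cats)
        item_pair.update(frozenset(p) for p in combinations(sorted(items), 2))
        cat_pair.update(frozenset(p) for p in combinations(sorted(cats), 2))

    def conf(a, b):
        # confidence(a -> b) at the item level and at the category level
        ic = item_pair[frozenset((a, b))] / item_count[a] if item_count[a] else 0.0
        ca, cb = category_of[a], category_of[b]
        if ca == cb:
            cc = 1.0   # same-category pairs treated as maximally associated (an assumption)
        else:
            cc = cat_pair[frozenset((ca, cb))] / cat_count[ca] if cat_count[ca] else 0.0
        return ic, cc

    return conf

def fit_hit_rate_model(rule_features, hit_rates):
    """rule_features: rows of [item_confidence, category_confidence]; hit_rates: observed values."""
    return LinearRegression().fit(np.array(rule_features), np.array(hit_rates))
```

The regression weights then indicate how much category-level association adds beyond item-level association, which is the comparison the abstract's experiments are built around.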