• Title/Summary/Keyword: Leading Journal


Impact of Shortly Acquired IPO Firms on ICT Industry Concentration (ICT 산업분야 신생기업의 IPO 이후 인수합병과 산업 집중도에 관한 연구)

  • Chang, YoungBong;Kwon, YoungOk
    • Journal of Intelligence and Information Systems / v.26 no.3 / pp.51-69 / 2020
  • It is now a stylized fact that a small number of technology firms such as Apple, Alphabet, Microsoft, Amazon, Facebook and a few others have become larger and dominant players in their industries. Coupled with the rise of these leading firms, we have also observed that a large number of young firms have become acquisition targets in their early post-IPO stages. This has resulted in a sharp decline in the number of new entries on public exchanges, even though a series of policy reforms have been promulgated to foster competition by increasing new entries. Given the observed industry trend in recent decades, a number of studies have reported increased concentration in most developed countries. However, it is less well understood what caused the increase in industry concentration. In this paper, we uncover the mechanisms by which industries have become concentrated over the last decades by tracing the changes in industry concentration associated with a firm's status change in its early post-IPO stages. To this end, we put emphasis on the case in which firms are acquired shortly after they go public. In particular, with the transition to digital-based economies, it is imperative for incumbent firms to adapt to and keep pace with new ICT and related intelligent systems. For instance, after acquiring a young firm equipped with AI-based solutions, an incumbent firm may respond better to changes in customer tastes and preferences by integrating the acquired AI solutions and analytics skills into multiple business processes. Accordingly, it is not unusual for young ICT firms to become attractive acquisition targets. To examine the role of M&As involving young firms in reshaping the level of industry concentration, we identify a firm's status in its early post-IPO stages over the sample period spanning 1990 to 2016 as follows: i) being delisted, ii) remaining a standalone firm, and iii) being acquired. According to our analysis, firms that have conducted an IPO since the 2000s have been acquired by incumbent firms more quickly than those that went public in earlier decades. We also show a greater acquisition rate for IPO firms in the ICT sector compared with their counterparts in other sectors. Our results based on multinomial logit models suggest that a large number of IPO firms have been acquired in their early post-IPO lives despite their financial soundness. Specifically, we show that IPO firms are more likely to be acquired, rather than delisted due to financial distress, in their early post-IPO stages when they are more profitable, more mature, or less leveraged. IPO firms with venture capital backing have also become acquisition targets more frequently. As a larger number of firms are acquired shortly after their IPO, our results show increased concentration. While providing limited evidence on the impact of large incumbent firms in explaining the change in industry concentration, our results show that the large firms' effect on industry concentration is pronounced in the ICT sector. This result possibly captures the current trend in which a few tech giants such as Alphabet, Apple and Facebook continue to increase their market share. In addition, compared with the acquisition of non-ICT firms, the concentration impact of early-stage IPO firms becomes larger when ICT firms are acquired as targets. Our study makes new contributions. To the best of our knowledge, this is one of only a few studies that link a firm's post-IPO status to associated changes in industry concentration.
Although some studies have addressed concentration issues, their primary focus was on market power or proprietary software. In contrast to earlier studies, we are able to uncover the mechanism by which industries have become concentrated by placing emphasis on M&As involving young IPO firms. Interestingly, the concentration impact of IPO firm acquisitions is magnified when a large incumbent firm is involved as the acquirer. This leads us to infer the underlying reasons why industries have become more concentrated in favor of large firms in recent decades. Overall, our study sheds new light on the literature by providing a plausible explanation as to why industries have become concentrated.
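
For readers unfamiliar with the estimation approach mentioned above, the following minimal sketch (Python with pandas and statsmodels) shows how a multinomial logit of early post-IPO status on profitability, maturity, leverage, and venture-capital backing could be set up. The variable names and synthetic data are illustrative assumptions, not the study's actual sample or specification.

```python
# Hypothetical sketch of a multinomial logit of early post-IPO status.
# The data below are synthetic; the paper's actual variables and sample differ.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
ipo = pd.DataFrame({
    "profitability": rng.normal(0.05, 0.10, n),   # e.g., return on assets
    "firm_age":      rng.integers(1, 30, n),      # proxy for maturity (years)
    "leverage":      rng.uniform(0.0, 0.9, n),    # debt ratio
    "vc_backed":     rng.integers(0, 2, n),       # 1 if venture-capital backed
})
# Outcome: 0 = delisted, 1 = standalone, 2 = acquired (synthetic labels)
ipo["post_ipo_status"] = rng.integers(0, 3, n)

X = sm.add_constant(ipo[["profitability", "firm_age", "leverage", "vc_backed"]])
result = sm.MNLogit(ipo["post_ipo_status"], X).fit(disp=False)
print(result.summary())  # coefficients of each status relative to the base category
```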

Analysis of Success Cases of InsurTech and Digital Insurance Platform Based on Artificial Intelligence Technologies: Focused on Ping An Insurance Group Ltd. in China (인공지능 기술 기반 인슈어테크와 디지털보험플랫폼 성공사례 분석: 중국 평안보험그룹을 중심으로)

  • Lee, JaeWon;Oh, SangJin
    • Journal of Intelligence and Information Systems / v.26 no.3 / pp.71-90 / 2020
  • Recently, the global insurance industry has been rapidly pursuing digital transformation through the use of artificial intelligence technologies such as machine learning, natural language processing, and deep learning. As a result, more and more foreign insurers have achieved success with AI technology-based InsurTech and platform businesses, and Ping An Insurance Group Ltd., China's largest private company, is leading China's fourth industrial revolution with remarkable achievements in InsurTech and digital platforms as a result of its constant innovation, using 'finance and technology' and 'finance and ecosystem' as its corporate keywords. Against this background, this study analyzed the InsurTech and platform business activities of Ping An Insurance Group Ltd. through the ser-M analysis model to provide strategic implications for revitalizing AI technology-based businesses of domestic insurers. The ser-M analysis model is a framework in which the vision and leadership of the CEO, the historical environment of the enterprise, the utilization of various resources, and their unique mechanism relationships can be interpreted in an integrated manner in terms of subject, environment, resource, and mechanism. The case analysis shows that Ping An Insurance Group Ltd. has achieved cost reduction and improved customer service by digitally innovating its entire business, including sales, underwriting, claims, and loan services, using core artificial intelligence technologies such as face, voice, and facial expression recognition. In addition, "online data in China" and "the vast offline data and insights accumulated by the company" were combined with new technologies such as artificial intelligence and big data analysis to build a digital platform that integrates financial services and digital service businesses. Ping An Insurance Group Ltd. committed itself to constant innovation, and as of 2019, sales reached $155 billion, ranking seventh among all companies in the Global 2000 list compiled by Forbes Magazine. Analyzing the background of the success of Ping An Insurance Group Ltd. from the ser-M perspective, founder Ma Mingzhe quickly grasped the development of digital technology, market competition, and changes in population structure in the era of the fourth industrial revolution, established a new vision, and displayed agile, digital technology-focused leadership. Based on the founder's strong leadership in response to environmental changes, the company has successfully led InsurTech and platform businesses by innovating internal resources such as investment in artificial intelligence technology, securing excellent professionals, and strengthening big data capabilities, combined with external absorptive capacity and strategic alliances across various industries. Through this success story of Ping An Insurance Group Ltd., the following implications can be drawn for domestic insurance companies preparing for digital transformation. First, CEOs of domestic companies also need to recognize the paradigm shift in the industry brought about by changes in digital technology and quickly arm themselves with digital technology-oriented leadership to spearhead the digital transformation of their enterprises.
Second, the Korean government should urgently overhaul related laws and systems to further promote the use of data across different industries and provide bold support such as deregulation, tax benefits, and platform provision to help the domestic insurance industry secure global competitiveness. Third, Korean companies also need to make bolder investments in the development of artificial intelligence technology so that the systematic acquisition of internal and external data, the training of technical personnel, and patent applications can be expanded, and digital platforms should be established quickly so that diverse customer experiences can be integrated through trained artificial intelligence technology. Finally, since there may be limits to generalizing from a single case of an overseas insurance company, it is hoped that future research will examine various management strategies related to artificial intelligence technology more extensively by analyzing cases from multiple industries or multiple companies, or by conducting empirical research.

Study of East Asia Climate Change for the Last Glacial Maximum Using Numerical Model (수치모델을 이용한 Last Glacial Maximum의 동아시아 기후변화 연구)

  • Kim, Seong-Joong;Park, Yoo-Min;Lee, Bang-Yong;Choi, Tae-Jin;Yoon, Young-Jun;Suk, Bong-Chool
    • The Korean Journal of Quaternary Research / v.20 no.1 s.26 / pp.51-66 / 2006
  • The climate of the last glacial maximum (LGM) in northeast Asia is simulated with the NCAR CCM3 atmospheric general circulation model at spectral truncation T170, corresponding to a grid cell size of roughly 75 km. The modern climate is simulated with prescribed sea surface temperatures and sea ice provided by NCAR, together with contemporary atmospheric CO2, topography, and orbital parameters, while the LGM simulation is forced with reconstructed CLIMAP sea surface temperatures, sea ice distribution, ice sheet topography, reduced CO2, and LGM orbital parameters. Under LGM conditions, surface temperature is markedly reduced in winter, by more than 18°C over the Korean west sea and the continental margin of the Korean east sea, where the sea floor was exposed as land in the LGM, whereas in these areas surface temperature is warmer than present in summer by up to 2°C. This is due to the difference in heat capacity between ocean and land. Overall, in the LGM the surface cools by 4~6°C over northeast Asian land and by 7.1°C over the entire area. An analysis of surface heat fluxes shows that the surface cooling is due to the increase in outgoing longwave radiation associated with the reduced CO2 concentration. The reduction in surface temperature leads to a weakening of the hydrological cycle. In winter, precipitation decreases most strongly over southeastern Asia, by about 1~4 mm/day, while in summer a larger reduction is found over China. Overall, annual-mean precipitation decreases by about 50% in the LGM. In northeast Asia, evaporation is also reduced overall in the LGM, but the reduction in precipitation is larger, eventually leading to a drier climate. The drier LGM climate simulated in this study is consistent with proxy evidence compiled in other areas. Overall, the high-resolution model captures the climate features reasonably well over the global domain.


Risk Factor Analysis for Operative Death and Brain Injury after Surgery of Stanford Type A Aortic Dissection (스탠포드 A형 대동맥 박리증 수술 후 수술 사망과 뇌손상의 위험인자 분석)

  • Kim Jae-Hyun;Oh Sam-Sae;Lee Chang-Ha;Baek Man-Jong;Hwang Seong-Wook;Lee Cheul;Lim Hong-Gook;Na Chan-Young
    • Journal of Chest Surgery / v.39 no.4 s.261 / pp.289-297 / 2006
  • Background: Surgery for Stanford type A aortic dissection carries a high operative mortality rate and frequent postoperative brain injury. This study was designed to identify the risk factors leading to operative mortality and brain injury after surgical repair in patients with type A aortic dissection. Material and Method: One hundred and eleven patients with type A aortic dissection who underwent surgical repair between February 1995 and January 2005 were reviewed retrospectively. There were 99 acute dissections and 12 chronic dissections. Univariate and multivariate analyses were performed to identify risk factors for operative mortality and brain injury. Result: Hospital mortality occurred in 6 patients (5.4%). Permanent neurologic deficit occurred in 8 patients (7.2%) and transient neurologic deficit in 4 (3.6%). Overall 1-, 5-, and 7-year survival rates were 94.4%, 86.3%, and 81.5%, respectively. Univariate analysis revealed 4 risk factors to be statistically significant predictors of mortality: previous chronic type III dissection, emergency operation, intimal tear in the aortic arch, and deep hypothermic circulatory arrest (DHCA) for more than 45 minutes. Multivariate analysis revealed previous chronic type III aortic dissection (odds ratio (OR) 52.2) and DHCA for more than 45 minutes (OR 12.0) as risk factors for operative mortality. Pathological obesity (OR 12.9) and total arch replacement (OR 8.5) were statistically significant risk factors for brain injury in multivariate analysis. Conclusion: The results of surgical repair for Stanford type A aortic dissection were good in terms of the mortality rate, the incidence of neurologic injury, and the long-term survival rate. Surgery for type A aortic dissection in patients with a history of chronic type III dissection may carry an increased risk of operative mortality. Special care should be taken, and efforts to reduce the hypothermic circulatory arrest time should always be kept in mind. Surgeons planning to operate on patients with pathological obesity, or to perform total arch replacement, should seriously consider the higher risk of brain injury.

Effect of Cellulose Derivatives to Reduce the Oil Uptake of Deep Fat Fried Batter of Pork Cutlet (셀룰로오스 유도체가 돈가스 튀김옷의 흡유량 감소에 미치는 영향)

  • Kim, Byung-Sook;Lee, Young-Eun
    • Korean Journal of Food and Cookery Science / v.25 no.4 / pp.488-495 / 2009
  • Pork cutlet is a favorite deep-fat fried food among Korean children and an excellent protein-containing food, as well as a simple and economical dish. However, the frying process adds a significant amount of calories. We added MC (methylcellulose) and HPMC (hydroxypropyl methylcellulose) to the batter in an effort to reduce oil uptake in prepared pork cutlets. After adding MC and HPMC at concentrations of 0.5, 1, and 1.5%, respectively, we assessed the viscosity of the batter, the color after frying, the changes in moisture retention and oil uptake, and the sensory characteristics, comparing each quality. The viscosity of the batter with 0.5% HPMC added (w/w) was similar to that of the control, but the viscosity of all batters with added MC was so much higher that it was difficult to use them for coating at the same temperature, making it impossible even to prepare a sample. After frying, the batter with added HPMC showed significantly less oil uptake and more moisture retention than the batter to which MC was added. Additionally, with regard to color and sensory characteristics, the pork cutlet with 0.5% added HPMC was superior to the other samples. Based on these results, we conclude that when cellulose derivatives are added in order to reduce oil uptake and raise the moisture retention of pork cutlet batter, HPMC is more useful than MC. Additionally, the batter with 0.5% HPMC added appears to be the best of the tested options, for three reasons: first, the viscosity of the batter is similar to that of the control; second, the taste is not greasy after frying, owing to the reduced oil uptake and higher moisture retention; and third, the sensory characteristics of this sample, such as color, crispiness, and hardness, were the best among the samples.

A study on the Success Factors and Strategy of Information Technology Investment Based on Intelligent Economic Simulation Modeling (지능형 시뮬레이션 모형을 기반으로 한 정보기술 투자 성과 요인 및 전략 도출에 관한 연구)

  • Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.19 no.1 / pp.35-55 / 2013
  • Information technology is a critical resource for any company hoping to support and realize its strategic goals, which contribute to growth promotion and sustainable development. The selection of information technology and its strategic use are imperative for enhancing the performance of every aspect of company management, leading a wide range of companies to invest continuously in information technology. Despite researchers, managers, and policy makers' keen interest in how information technology contributes to organizational performance, there is uncertainty and debate about the returns to information technology investment. In other words, researchers and managers cannot easily identify the independent factors that can affect the investment performance of information technology. This is mainly because many factors, ranging from a company's internal components and strategies to its external customers, are interconnected with the investment performance of information technology. Using an agent-based simulation technique, this research extracts factors expected to affect investment performance on information technology, simplifies the analysis of their relationships with economic modeling, and examines how performance depends on changes in these factors. In terms of economic modeling, I extend the model that highlights the way in which product quality moderates the relationship between information technology investments and economic performance (Thatcher and Pingry, 2004) by considering the cost of information technology investment and the demand creation resulting from product quality enhancement. For quality enhancement and its consequences for demand creation, I apply the concept of information quality and decision-maker quality (Raghunathan, 1999). This concept implies that investment in information technology improves the quality of information, which, in turn, improves decision quality and performance, thus enhancing the level of product or service quality. Additionally, I consider the effect of word of mouth among consumers, which creates new demand for a product or service through the information diffusion effect. This demand creation is analyzed with an agent-based simulation model of the kind widely used for network analyses. Results show that investment in information technology enhances the quality of a company's product or service, which indirectly affects the economic performance of that company, particularly with regard to factors such as consumer surplus, company profit, and company productivity. Specifically, when a company makes its initial investment in information technology, the resultant increase in the quality of its product or service immediately has a positive effect on consumer surplus, but the investment cost has a negative effect on company productivity and profit. As time goes by, the enhancement of product or service quality creates new consumer demand through the information diffusion effect. Finally, the new demand positively affects the company's profit and productivity. In terms of the investment strategy for information technology, this study's results also reveal that the selection of information technology needs to be based on an analysis of the service and the network effect of customers, and demonstrate that information technology implementation should fit the company's business strategy.
Specifically, if a company seeks a short-term enhancement of performance, it needs a one-shot strategy (making a large investment at one time). On the other hand, if a company seeks a long-term sustainable profit structure, it needs a split strategy (making several small investments at different times). The findings from this study make several contributions to the literature. In terms of methodology, the study integrates economic modeling and simulation techniques in order to overcome the limitations of each methodology. It also indicates the mediating effect of product quality on the relationship between information technology and the performance of a company. Finally, it analyzes the effect of information technology investment strategies and information diffusion among consumers on the investment performance of information technology.
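
To make the word-of-mouth demand-creation mechanism concrete, here is a minimal agent-based sketch (Python with networkx). The network size, seeding, and adoption probability are illustrative assumptions rather than the paper's calibrated economic model.

```python
# Minimal word-of-mouth diffusion sketch on a random consumer network.
# Parameters (network size, adoption probability, seed count) are illustrative
# assumptions and not taken from the paper's model.
import random
import networkx as nx

random.seed(42)
G = nx.watts_strogatz_graph(n=500, k=6, p=0.1)   # consumer network
adopters = set(random.sample(list(G.nodes), 5))  # initial customers won by the IT-driven quality gain

p_adopt = 0.05  # per-contact probability that word of mouth converts a neighbor

for period in range(1, 21):
    new_adopters = set()
    for customer in adopters:
        for neighbor in G.neighbors(customer):
            if neighbor not in adopters and random.random() < p_adopt:
                new_adopters.add(neighbor)
    adopters |= new_adopters
    print(f"period {period:2d}: cumulative demand = {len(adopters)}")
```

In this toy setting, demand grows slowly at first and then accelerates as awareness spreads through the network, which mirrors the delayed profit effect of quality-enhancing IT investment described above.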

Compilation of Books on Military Arts and Science and Ideology of Military Science in the late Joseon Dynasty (조선(朝鮮) 후기(後期)의 병서(兵書) 편찬(編纂)과 병학(兵學) 사상(思想))

  • Yun, Muhak
    • The Journal of Korean Philosophical History / no.36 / pp.101-133 / 2013
  • In this paper, the writer investigates the thought on military art and science, focusing on the representative books on military art and science published in the latter period of Joseon and the discussions of literati at that time. Joseon had enjoyed peaceful times for about 200 years after the establishment of the dynasty. However, having had to go through two major wars, the Joseon Dynasty, led by its scholarly class, awakened to the limits of Joseon's military art and science. It can be said that the countermeasures against Japanese pirates reflected in the "Jingbirok" (懲毖錄 - Records of the 1592 Japanese Invasion), written by Yu Seong-ryong, together with the experiences of war, formed the basis of the thought on military art and science in the latter period. Regrettably, the books on military art and science from the early period of the Joseon Dynasty contained no suggestions or proposals for countermeasures against Japanese raiders. Meanwhile, as with the arguments about battle formations in the early period of Joseon, the process of establishing military science did not go smoothly in the latter period of Joseon. Right after the Japanese invasion of 1592, the "Gihyo-Sinseo" (紀效新書 - New Text of Practical Tactics, written by Cheok Gye-gwang) was brought into the country by the army of the Ming (明) Dynasty. At first, it was used in its original edition, or in an abridged version, for military drill; later, it was published under the title "Byeonghak-jinam" (兵學指南 - Military Training Manual of Action Rules by Combat Situation). This book, just as in Zhejiang (浙江) province in China, proved effective in countering the Japanese raiders in Korea. However, these military tactics conflicted with the "Owi Jinbeop" (Rules of Deployment of the Five Military Commands), which had been handed down since the early period of the Joseon Dynasty, and, at the same time, it was pointed out that those tactics could not be applied uniformly, since Korea and China were geographically different. Furthermore, having gone through the Manchu Invasion of 1636 (丙子胡亂, Byeongja horan), Joseon used the "Yeonbyeongsilgi" (練兵實記 - Actual Records of Training the Army), which had been compiled in China on the basis of experiences of wars against nomadic peoples, including the Mongols. This became a standard training manual together with the "Byeonghak-jinam". King Yeong Jo and King Jeong Jo of the Joseon Dynasty tried to establish uniformity in military training by publishing the books on military science representative of the latter period of Joseon, such as the "Sokbyeongjangdoseol" (續兵將圖說 - Revision of the Illustrated Manual of Military Training and Tactics), "Byeonghaktong" (兵學通 - Book on Military Art and Science), "Byeonghakjinamyeonui" (兵學指南演義 - Commentary on the 'Byeonghak-jinam'), and "Muyedobotongji" (武藝圖譜通志 - Comprehensive Illustrated Manual of Martial Arts). King Jeong Jo actively participated in the arguments of those days, and so the arguments that had continued for about 200 years, ever since King Seon Jo, were brought to an end. To sum up the distinctive features of military art and science in the former and latter periods of the Joseon Dynasty: in the former period of Joseon, a speculative military science proceeded under the initiative of civil officials, based on the "Mugyeongchilseo" (武經七書 - the Seven Military Classics).
However, in the latter period of Joseon, the "Gihyo-Sinseo" (紀效新書 - New Text of Practical Tactics, written by Cheok Gye-gwang) served as a turning point, and a comparatively large number of military officials also participated in the arguments, which turned military science toward practical theory. Meanwhile, King Sejo and King Jeong Jo played leading roles in the process of establishing Joseon's theory of military science; what they had in common was that their successions to the throne were not smooth. This reminds us of Clausewitz's thesis that "war is an extension of politics."

Converting Ieodo Ocean Research Station Wind Speed Observations to Reference Height Data for Real-Time Operational Use (이어도 해양과학기지 풍속 자료의 실시간 운용을 위한 기준 고도 변환 과정)

  • BYUN, DO-SEONG;KIM, HYOWON;LEE, JOOYOUNG;LEE, EUNIL;PARK, KYUNG-AE;WOO, HYE-JIN
    • The Sea: Journal of the Korean Society of Oceanography / v.23 no.4 / pp.153-178 / 2018
  • Most operational uses of wind speed data require measurements at, or estimates generated for, the reference height of 10 m above mean sea level (AMSL). On the Ieodo Ocean Research Station (IORS), wind speed is measured by instruments installed on the lighthouse tower of the roof deck at 42.3 m AMSL. This preliminary study indicates how these data can best be converted into synthetic 10 m wind speed data for operational use via the Korea Hydrographic and Oceanographic Agency (KHOA) website. We tested three well-known conventional empirical neutral wind profile formulas (a power law (PL); a drag-coefficient-based logarithmic law (DCLL); and a roughness-height-based logarithmic law (RHLL)), and compared their results to those generated using a well-known, highly tested and validated logarithmic model (LMS) with a stability function (ψν), to assess the potential of each method for accurately synthesizing reference-level wind speeds. From these experiments, we conclude that the reliable LMS technique and the RHLL technique are both useful for generating reference wind speed data from IORS observations, since these methods produced very similar results: comparisons between the RHLL and LMS results showed relatively small bias values (-0.001 m/s) and root mean square deviations (RMSD, 0.122 m/s). We also compared the synthetic wind speed data generated using each of the four neutral wind profile formulas under examination with Advanced SCATterometer (ASCAT) data. The comparisons revealed that the 'LMS without ψν' produced the best results, with only 0.191 m/s of bias and 1.111 m/s of RMSD. As well as comparing these four approaches, we also explored potential refinements that could be applied within or through each approach. First, we tested the effect of tidal variations in sea level height on wind speed calculations, by comparing results generated with and without the adjustment of sea level heights for tidal effects. Tidal adjustment of the sea levels used in the reference wind speed calculations resulted in remarkably small bias (< 0.0001 m/s) and RMSD (< 0.012 m/s) values compared to calculations performed without adjustment, indicating that this tidal effect can be ignored for the purposes of IORS reference wind speed estimates. We also estimated surface roughness heights (z0) based on the RHLL and LMS calculations in order to explore the best parameterization of this factor, with the results leading to our recommendation of a new z0 parameterization derived from observed wind speed data. Lastly, we suggest the necessity of including a suitable, experimentally derived surface drag coefficient and z0 formula within conventional wind profile formulas for situations characterized by strong wind (≥ 33 m/s) conditions, since without this inclusion the wind adjustment approaches used in this study are only optimal for wind speeds ≤ 25 m/s.
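
As a simple numerical illustration of the neutral wind-profile conversions compared above, the sketch below converts a wind speed observed at the IORS sensor height (42.3 m) to the 10 m reference height using a power law and a roughness-height-based logarithmic law. The exponent and roughness height used here are generic open-sea textbook values, not the parameterizations recommended in the paper.

```python
# Hedged sketch: convert a 42.3 m wind speed to the 10 m reference height with
# two neutral-profile formulas. alpha and z0 below are generic open-sea values,
# not the z0 parameterization derived from observed winds in the paper.
import math

z_obs, z_ref = 42.3, 10.0   # sensor height and reference height (m AMSL)
u_obs = 15.0                # example observed wind speed (m/s)

# Power law: u(z_ref) = u(z_obs) * (z_ref / z_obs) ** alpha
alpha = 0.11                # typical open-ocean exponent (assumption)
u_pl = u_obs * (z_ref / z_obs) ** alpha

# Roughness-height logarithmic law: u(z) proportional to ln(z / z0) under neutral stability
z0 = 1.5e-4                 # assumed sea-surface roughness height (m)
u_rhll = u_obs * math.log(z_ref / z0) / math.log(z_obs / z0)

print(f"power law:        {u_pl:.2f} m/s")
print(f"log (roughness):  {u_rhll:.2f} m/s")
```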

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.1-32 / 2018
  • Corporate defaults have a ripple effect on the local and national economy, in addition to affecting stakeholders such as the managers, employees, creditors, and investors of bankrupt companies. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model, rather than developing a variety of corporate default models. As a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even after that, the analysis of past corporate defaults focused on specific variables, and when the government restructured firms immediately after the global financial crisis, it concentrated only on certain main variables such as the 'debt ratio'. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, in which total collapse occurs in a single moment. The key variables used in corporate defaults vary over time. Comparing Beaver's (1967, 1968) and Altman's (1968) analyses with Deakin's (1972) study confirms that the major factors affecting corporate failure have changed. Grice (2001) also found, through Zmijewski's (1984) and Ohlson's (1980) models, that the importance of predictive variables changes over time. However, past studies have used static models, and most of them do not consider the changes that occur over the course of time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for the time-dependent bias by means of a time series analysis algorithm that reflects dynamic change. Focusing on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years, respectively. In order to construct a consistent bankruptcy model across the flow of time, we first train a time series deep learning model using the data before the financial crisis (2000~2006). The parameter tuning of the existing models and the deep learning time series algorithm is conducted with validation data including the financial crisis period (2007~2008). As a result, we construct a model that shows a pattern similar to the results on the training data and exhibits excellent prediction power. After that, each bankruptcy prediction model is retrained by integrating the training data and validation data (2000~2008), applying the optimal parameters found in the previous validation. Finally, each corporate default prediction model is evaluated and compared using the test data (2009), based on the models trained over the nine years. In this way, the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression analysis to the existing variable-selection methods (multiple discriminant analysis, logit model), we show that the deep learning time series algorithm model based on the three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups.
The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and the deep learning time series algorithms are then compared. Corporate data involve limitations such as 'nonlinear variables', 'multicollinearity' among variables, and 'lack of data'. The logit model handles nonlinearity, the Lasso regression model alleviates the multicollinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology, a leading technology for the future, is moving from simple human analysis to automated AI analysis, and finally towards interwoven AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis at corporate default prediction modeling and is also more effective in terms of prediction power. In the course of the Fourth Industrial Revolution, the Korean government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry is still insufficient. This is an initial study of deep learning time series algorithm analysis of corporate defaults. It is therefore hoped that it will serve as comparative analysis material for non-specialists beginning studies that combine financial data with deep learning time series algorithms.
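
As a minimal sketch of the kind of deep learning time-series classifier described above (Python with TensorFlow/Keras), the example below trains an LSTM on synthetic sequences of annual financial ratios. The architecture, sequence length, feature set, and data are illustrative assumptions, not the authors' tuned model or sample.

```python
# Minimal LSTM default-prediction sketch. Architecture, sequence length, and
# feature count are illustrative assumptions, not the paper's tuned model.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

n_firms, n_years, n_features = 1000, 7, 10   # e.g., 7 annual observations of 10 financial ratios
X = np.random.rand(n_firms, n_years, n_features).astype("float32")  # synthetic placeholder data
y = (np.random.rand(n_firms) < 0.05).astype("float32")              # ~5% default rate (synthetic)

model = models.Sequential([
    tf.keras.Input(shape=(n_years, n_features)),
    layers.LSTM(32),                        # summarizes each firm's time series
    layers.Dense(1, activation="sigmoid"),  # probability of default
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))      # [loss, AUC] on the synthetic data
```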

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.1-19 / 2019
  • Artificial intelligence (AI) is one of the main driving forces leading the Fourth Industrial Revolution. The technologies associated with AI have already shown abilities equal to or better than those of people in many fields, including image and speech recognition. In particular, many efforts have been actively made to identify current technology trends and analyze their development directions, because AI technologies can be utilized in a wide range of fields including medicine, finance, manufacturing, service, and education. Major platforms that can develop complex AI algorithms for learning, reasoning, and recognition have been opened to the public as open source projects. As a result, technologies and services that utilize them have increased rapidly, and this has been identified as one of the major reasons for the fast development of AI technologies. Additionally, the spread of the technology owes much to open source software developed by major global companies that supports natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify the practical trend of AI technology development by analyzing OSS projects associated with AI, which have been developed through the online collaboration of many parties. This study searched and collected a list of major projects related to AI that were created from 2000 to July 2018 on Github. It confirmed the development trends of major technologies in detail by applying a text mining technique to the topic information, which indicates the characteristics and technical fields of the collected projects. The results of the analysis showed that the number of software development projects per year was less than 100 until 2013. However, it increased to 229 projects in 2014 and 597 projects in 2015. In particular, the number of open source projects related to AI increased rapidly in 2016 (2,559 OSS projects). The number of projects initiated in 2017 was 14,213, almost four times the total number of projects created from 2009 to 2016 (3,555 projects). The number of projects initiated from January to July 2018 was 8,737. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, with the appearance frequency of topics indicating the technology trends of AI-related OSS projects. The results showed that natural language processing has remained at the top in all years, implying that its OSS has been developed continuously. Until 2015, the programming languages Python, C++, and Java were listed among the top ten most frequent topics. However, after 2016, programming languages other than Python disappeared from the top ten topics. In their place, platforms supporting the development of AI algorithms, such as TensorFlow and Keras, show high appearance frequencies. Additionally, reinforcement learning algorithms and convolutional neural networks, which have been used in various fields, appeared frequently as topics. The results of the topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency. The main difference was that visualization and medical imaging topics were found at the top of the list, although they were not at the top from 2009 to 2012. This indicates that OSS was developed in the medical field in order to utilize AI technology.
Moreover, although computer vision was in the top 10 of the appearance frequency list from 2013 to 2015, it was not in the top 10 by degree centrality. The topics at the top of the degree centrality list were otherwise similar to those at the top of the appearance frequency list, with the ranks of convolutional neural networks and reinforcement learning changing only slightly. The trend of technology development was examined using the appearance frequency of topics and their degree centrality. The results showed that machine learning had the highest frequency and the highest degree centrality in all years. It is also noteworthy that, although the deep learning topic showed low frequency and low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015, and in recent years both machine learning and deep learning have shown high appearance frequency and degree centrality. TensorFlow first appeared during the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, placing it at the top of the lists after deep learning and Python. Computer vision and reinforcement learning did not show an abrupt increase or decrease, and they had relatively low appearance frequency and degree centrality compared with the above-mentioned topics. Based on these analysis results, it is possible to identify the fields in which AI technologies are being actively developed. The results of this study can be used as a baseline dataset for more empirical analyses of future technology trends and technology convergence.
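
To illustrate how topic appearance frequency and degree centrality can be computed from project topic labels, the toy sketch below (Python with networkx; the sample topic lists are invented, not Github data) builds a topic co-occurrence network and ranks topics by degree centrality.

```python
# Toy sketch of topic frequency and degree centrality from project topic lists.
# The sample topic lists are invented for illustration, not data from Github.
from collections import Counter
from itertools import combinations
import networkx as nx

projects = [
    ["machine-learning", "deep-learning", "tensorflow", "python"],
    ["machine-learning", "computer-vision", "convolutional-neural-networks"],
    ["natural-language-processing", "deep-learning", "python"],
    ["reinforcement-learning", "deep-learning", "keras"],
]

# Appearance frequency: how often each topic is attached to a project.
freq = Counter(topic for topics in projects for topic in topics)

# Co-occurrence network: topics are nodes; an edge links topics that
# appear together on at least one project.
G = nx.Graph()
for topics in projects:
    G.add_edges_from(combinations(sorted(set(topics)), 2))

centrality = nx.degree_centrality(G)
for topic, score in sorted(centrality.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{topic:35s} freq={freq[topic]:2d} degree_centrality={score:.2f}")
```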