• Title/Summary/Keyword: Flow System


Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung;Yang, Jin Yong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.35-48
    • /
    • 2014
  • According to the 2013 construction market outlook report, the liquidation of construction companies is expected to continue due to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact than those in other industries. However, because of differences in capital structure and debt-to-equity ratio, construction companies' bankruptcies are more difficult to forecast than those of companies in other industries. The construction industry operates on greater leverage, with high debt-to-equity ratios and project cash flow concentrated in the second half of a project. The economic cycle strongly influences construction companies, so downturns tend to rapidly increase their bankruptcy rates. High leverage, coupled with increased bankruptcy rates, could place a greater burden on banks that lend to construction companies. Nevertheless, bankruptcy prediction research has concentrated mainly on financial institutions, and construction-specific studies are rare. Bankruptcy prediction models based on corporate financial data have been studied for some time in various ways. However, these models are intended for companies in general and may not be appropriate for forecasting the bankruptcies of construction companies, which typically carry high liquidity risks. The construction industry is capital-intensive, operates on long timelines with large-scale investment projects, and has comparatively longer payback periods than other industries. Given this unique capital structure, it can be difficult to apply a model designed to judge the financial risk of companies in general to those in the construction industry.
The Altman Z-score, first published in 1968, is a commonly used bankruptcy forecasting model. It forecasts the likelihood of a company going bankrupt with a simple formula, classifying the result into three categories and evaluating the corporate status as dangerous, moderate, or safe. A company in the "dangerous" category has a high likelihood of bankruptcy within two years, while those in the "safe" category have a low likelihood of bankruptcy; for companies in the "moderate" category, the risk is difficult to forecast. Many of the construction firms in this study fell into the "moderate" category, which made it difficult to forecast their risk. With the development of machine learning, recent studies of corporate bankruptcy forecasting have adopted this technology. Pattern recognition, a representative application area of machine learning, is applied to bankruptcy forecasting: patterns are analyzed from a company's financial information and then judged as belonging to either the bankruptcy risk group or the safe group. The representative machine learning models previously used in bankruptcy forecasting are Artificial Neural Networks, Adaptive Boosting (AdaBoost), and the Support Vector Machine (SVM), and there are also many hybrid studies combining these models.
Existing studies, whether using the traditional Z-score technique or machine learning for bankruptcy prediction, focus on companies in non-specific industries, so industry-specific characteristics are not considered. In this paper, we confirm that adaptive boosting (AdaBoost) is the most appropriate forecasting model for construction companies when analyzed by company size. We classified construction companies into three groups (large, medium, and small) based on capital and analyzed AdaBoost's predictive ability for each group. The experimental results showed that AdaBoost has higher predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
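The core technique the paper relies on, discrete AdaBoost over decision stumps applied to financial ratios, can be sketched from scratch as below. This is a minimal illustration only: the two "financial ratio" features and the toy data-generating rule are assumptions for demonstration, not the paper's actual dataset or variable set.

```python
# Minimal from-scratch sketch of discrete AdaBoost with decision stumps.
# The toy "financial ratio" data is an illustrative assumption, not the
# paper's dataset.
import math
import random

random.seed(0)

def make_data(n):
    """Toy data: one leverage-like feature drives 'bankruptcy' (y = +1)."""
    data = []
    for _ in range(n):
        x = [random.gauss(0, 1), random.gauss(0, 1)]  # two pseudo financial ratios
        y = 1 if x[0] + 0.3 * random.gauss(0, 1) > 0.5 else -1
        data.append((x, y))
    return data

def train_stump(data, w):
    """Find the (error, feature, threshold, polarity) stump minimizing weighted error."""
    best = None
    for f in range(2):
        for (xi, _) in data:
            t = xi[f]
            for pol in (1, -1):
                err = sum(wi for (x, y), wi in zip(data, w)
                          if (pol if x[f] > t else -pol) != y)
                if best is None or err < best[0]:
                    best = (err, f, t, pol)
    return best

def adaboost(data, rounds=20):
    n = len(data)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        err, f, t, pol = train_stump(data, w)
        err = max(err, 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, f, t, pol))
        # Reweight: boost the weight of misclassified examples
        w = [wi * math.exp(-alpha * y * (pol if x[f] > t else -pol))
             for (x, y), wi in zip(data, w)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(alpha * (pol if x[f] > t else -pol)
                for alpha, f, t, pol in ensemble)
    return 1 if score > 0 else -1

train, test = make_data(200), make_data(100)
model = adaboost(train)
acc = sum(predict(model, x) == y for x, y in test) / len(test)
print(f"test accuracy: {acc:.2f}")
```

In the paper's setting, this classifier would be trained separately per capital-size group to compare predictive ability across large, medium, and small firms.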

Mid-term Results of Intracardiac Lateral Tunnel Fontan Procedure in the Treatment of Patients with a Functional Single Ventricle (기능적 단심실 환자에 대한 심장내 외측통로 폰탄술식의 중기 수술성적)

  • 이정렬;김용진;노준량
    • Journal of Chest Surgery
    • /
    • v.31 no.5
    • /
    • pp.472-480
    • /
    • 1998
  • We reviewed the surgical results of the intracardiac lateral tunnel Fontan procedure for the repair of functional single ventricles. Between 1990 and 1996, 104 patients underwent total cavopulmonary anastomosis. Patients' age and body weight averaged 35.9 (range 10 to 173) months and 12.8 (range 6.5 to 37.8) kg. Preoperative diagnoses included 18 tricuspid atresias, 53 double inlet ventricles with univentricular atrioventricular connection, and 33 other complex lesions. Previous palliative operations had been performed in 50 of these patients, including 37 systemic to pulmonary artery shunts, 13 pulmonary artery bandings, 15 surgical atrial septectomies, 2 arterial switch procedures, 2 resections of subaortic conus, 2 repairs of total anomalous pulmonary venous connection, and 1 Damus-Stansel-Kaye procedure. In 19 patients a bidirectional cavopulmonary shunt operation was performed before the Fontan procedure, and in 1 patient a Kawashima procedure was required. Preoperative hemodynamics revealed a mean pulmonary artery pressure of 14.6 (range 5 to 28) mmHg, a mean pulmonary vascular resistance of 2.2 (range 0.4 to 6.9) Wood units, a mean pulmonary to systemic flow ratio of 0.9 (range 0.3 to 3.0), a mean ventricular end-diastolic pressure of 9.0 (range 3.0 to 21.0) mmHg, and a mean arterial oxygen saturation of 76.0 (range 45.6 to 88.0)%. The operative procedure consisted of a longitudinal right atriotomy 2 cm lateral to the terminal crest up to the right atrial auricle, followed by the creation of a lateral tunnel connecting the orifices of either the superior caval vein or the right atrial auricle to the inferior caval vein, using a Gore-Tex vascular graft with or without a fenestration. Concomitant procedures at the time of the Fontan procedure included 22 pulmonary artery angioplasties, 21 atrial septectomies, 4 atrioventricular valve replacements or repairs, 4 corrections of anomalous pulmonary venous connection, and 3 permanent pacemaker implantations.
In 31 patients a fenestration was created, and in 1 an adjustable communication was made in the lateral tunnel pathway. One lateral tunnel conversion was performed in a patient with recurrent intractable tachyarrhythmia 4 years after the initial atriopulmonary connection. Post-extubation hemodynamic data revealed a mean pulmonary artery pressure of 12.7 (range 8 to 21) mmHg, a mean ventricular end-diastolic pressure of 7.6 (range 4 to 12) mmHg, and a mean room-air arterial oxygen saturation of 89.9 (range 68 to 100)%. The follow-up duration was, on average, 27 (range 1 to 85) months. Post-Fontan complications included 11 prolonged pleural effusions, 8 arrhythmias, 9 chylothoraces, 5 cases of central nervous system damage, 5 infectious complications, and 4 cases of acute renal failure. Seven early (6.7%) and 5 late (4.8%) deaths occurred. These results show that the lateral tunnel Fontan procedure provided excellent hemodynamic improvement with acceptable mortality and morbidity for hearts with various types of functional single ventricle.


A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets users' interests and needs from the overflowing volume of content is becoming ever more important. In this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating the information request as a simple string. Large IT companies such as Google and Microsoft also focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful and promising, because new information is generated constantly and the earlier the information, the more valuable it is. Automatic knowledge extraction can therefore be effective in areas such as the financial sector, where the information flow is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is difficult to build corpora from different fields with the same algorithm, and hard to extract high-quality triples. Second, producing labeled text data by hand becomes harder as the extent and scope of knowledge grow and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual nature of knowledge. To overcome the limits described above and to improve the semantic performance of stock-related information search, this study attempts to extract knowledge entities using a neural tensor network and to evaluate the resulting performance. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items.
Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the model's effectiveness. From these processes, this study makes three contributions. First, it presents a practical and simple automatic knowledge extraction method that can be applied in practice. Second, it demonstrates the possibility of performance evaluation through a simple problem definition. Finally, it increases the expressiveness of the knowledge by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study, experts' reports on 30 individual stocks (the top 30 items by publication frequency from May 30, 2017 to May 21, 2018) are used. The total number of reports is 5,600; 3,074 reports (about 55% of the total) are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, one score function per stock is trained. Thus, when a new entity from the testing set appears, its score can be calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the presented model, we measure its predictive power and check whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set.
As a result of the empirical study, the presented model shows 69.3% hit accuracy on the testing set of 2,526 reports. This hit ratio is meaningfully high despite some research constraints. Looking at the model's prediction performance for each stock, only three stocks (LG ELECTRONICS, KiaMtr, and Mando) show markedly lower performance than average; this may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, needed to search for related information in accordance with the user's investment intention. Graph data is generated using only the named entity recognition tool and applied to the neural tensor network without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain: most notably, the markedly poor performance on only some stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.
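The scoring-and-prediction mechanics described in this abstract (one score function per stock over one-hot entity vectors, with the highest-scoring function determining the related stock) might be sketched as below. The dimensions, the random parameter initialization, the entity-pair framing, and the stock names are all illustrative assumptions; the paper's trained weights and exact architecture are not reproduced here.

```python
# Sketch of per-stock neural tensor network scoring over one-hot entities.
# Parameters are randomly initialized only to show the mechanics; in the
# paper, one parameter set per stock is trained on report-derived graph data.
import numpy as np

rng = np.random.default_rng(42)
V = 100   # one-hot dimension: top-100 entities per stock
K = 4     # number of tensor slices (an assumed hyperparameter)
STOCKS = ["StockA", "StockB", "StockC"]  # hypothetical stock names

def ntn_score(e1, e2, W, L, b, u):
    """NTN score u . tanh(e1' W[1..K] e2 + L [e1; e2] + b) for one entity pair."""
    bilinear = np.array([e1 @ W[k] @ e2 for k in range(K)])
    linear = L @ np.concatenate([e1, e2])
    return float(u @ np.tanh(bilinear + linear + b))

# One parameter set (one score function) per stock
params = {
    s: (rng.normal(scale=0.1, size=(K, V, V)),   # W: bilinear tensor slices
        rng.normal(scale=0.1, size=(K, 2 * V)),  # L: linear layer
        rng.normal(scale=0.1, size=K),           # b: bias
        rng.normal(scale=0.1, size=K))           # u: output weights
    for s in STOCKS
}

def predict_stock(e1, e2):
    """Return the stock whose score function rates this entity pair highest."""
    scores = {s: ntn_score(e1, e2, *p) for s, p in params.items()}
    return max(scores, key=scores.get)

e1, e2 = np.eye(V)[3], np.eye(V)[17]   # two one-hot entity vectors
print(predict_stock(e1, e2))
```

With trained parameters, the hit ratio in the abstract corresponds to how often `predict_stock` returns the stock a test report actually discusses.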

Analysis of Metadata Standards of Record Management for Metadata Interoperability: From the Viewpoint of the Task Model and 5W1H (메타데이터 상호운용성을 위한 기록관리 메타데이터 표준 분석 5W1H와 태스크 모델의 관점에서)

  • Baek, Jae-Eun;Sugimoto, Shigeo
    • The Korean Journal of Archival Studies
    • /
    • no.32
    • /
    • pp.127-176
    • /
    • 2012
  • Metadata is well recognized as one of the foundational factors in archiving and long-term preservation of digital resources. There are several metadata standards for records management, archives, and preservation, e.g. ISAD(G), EAD, AGRkMS, PREMIS, and OAIS. Careful consideration is important when selecting appropriate metadata standards in order to design a metadata schema that meets the requirements of a particular archival system, and interoperability with other systems should also be considered in schema design. In our previous research, we presented a feature analysis of metadata standards by identifying the primary resource lifecycle stages where each standard is applied, and clarified that no single metadata standard can cover the whole records lifecycle for archiving and preservation. Through this feature analysis, we analyzed the features of metadata across the whole records lifecycle and clarified the relationships between the metadata standards and the stages of the lifecycle; more detailed analysis was left for future study. This paper proposes to analyze the metadata schemas from the viewpoint of the tasks performed in the lifecycle. Metadata schemas are primarily defined to describe properties of a resource in accordance with the purposes of description, e.g. finding aids, records management, preservation, and so forth. In other words, the metadata standards are resource- and purpose-centric, and the resource lifecycle is not explicitly reflected in the standards. There are no systematic methods for mapping between different metadata standards in accordance with the lifecycle. This paper therefore proposes a method for mapping between metadata standards based on the tasks contained in the resource lifecycle. We first propose a Task Model to clarify the tasks applied to resources in each stage of the lifecycle.
This model is created as a task-centric model to identify features of metadata standards and to create mappings among elements of those standards. Categorizing the elements is important in order to limit the semantic scope of mapping among elements and to decrease the number of element combinations to map. This paper proposes to use the 5W1H (Who, What, Why, When, Where, How) model to categorize the elements. 5W1H categories are generally used for describing events, e.g. in news articles. Since performing a task on a resource causes an event in which metadata elements are used, we consider the 5W1H categories adequate for categorizing the elements. Using these categories, we determine the features of every element of the metadata standards AGLS, AGRkMS, PREMIS, EAD, and OAIS, plus an attribute set extracted from the DPC decision flow. We then perform element mapping between the standards and find the relationships between them. In this study, we defined a set of terms for each 5W1H category, terms that typically appear in the definition of an element, and used those terms to categorize the elements. For example, if the definition of an element includes terms such as "person" and "organization", which denote an agent that contributes to creating or modifying a resource, the element is categorized into the Who category. A single element can be categorized into one or more 5W1H categories. Thus, we categorized every element of the metadata standards using the 5W1H model and then carried out mapping among the elements in each category. We conclude that the Task Model provides a new viewpoint on metadata schemas and helps us understand the features of metadata standards for records management and archives. The 5W1H model, defined on the basis of the Task Model, provides a core set of categories for semantically classifying metadata elements from the viewpoint of an event caused by a task.
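The term-based categorization described above can be sketched as a simple keyword lookup: each 5W1H category carries a set of indicator terms, and an element is assigned to every category whose terms appear in its definition. The indicator-term lists and sample element definitions below are hypothetical stand-ins, not the paper's actual term sets.

```python
# Sketch of 5W1H categorization of metadata elements by indicator terms.
# Term lists and sample definitions are illustrative assumptions.
CATEGORY_TERMS = {
    "Who":   {"person", "organization", "agent", "creator"},
    "What":  {"content", "title", "format", "object"},
    "Why":   {"purpose", "function", "reason"},
    "When":  {"date", "time", "event"},
    "Where": {"place", "location", "repository"},
    "How":   {"method", "process", "procedure"},
}

def categorize(definition: str) -> set:
    """Return every 5W1H category whose indicator terms appear in the definition."""
    words = set(definition.lower().split())
    return {cat for cat, terms in CATEGORY_TERMS.items() if words & terms}

# Hypothetical element definitions in the style of EAD / PREMIS
print(categorize("The person or organization responsible for creating the record"))
# A single element can fall into more than one category:
print(categorize("Date and place of the event associated with the object"))
```

Mapping between standards then only needs to compare elements within the same category, which limits the semantic scope and the number of candidate pairs, as the abstract notes.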

The Process of Establishing a Japanese-style Garden and Embodying Identity in Modern Japan (일본 근대 시기 일본풍 정원의 확립과정과 정체성 구현)

  • An, Joon-Young;Jun, Da-Seul
    • Journal of the Korean Institute of Traditional Landscape Architecture
    • /
    • v.41 no.3
    • /
    • pp.59-66
    • /
    • 2023
  • This study examines the process by which the Japanese-style garden was established in the modern period, through the perspectives of garden designers, spatial composition, spatial components, and the materials used in their works, and aims to use the findings as data for embodying the identity of Korean gardens. The results are as follows. First, in incorporating elements associated with Koreanness into modern garden culture, there are differences in location, presence, and subjectivity compared to Japan; this reflects Japan's relatively seamless cultural continuity, in contrast to Korea's cultural disconnection during the modern period. Second, prior to the modern period, Japan's garden culture spread and developed throughout the country without significant interruption. During the modern period, however, the Meiji government promoted the policy of 'civilization and enlightenment' (Bunmei-kaika, 文明開化) and introduced European and American civilization, leading to the popularity of Western-style architectural techniques. The rapid introduction of Western culture caused traditional Japanese culture to be overshadowed. In 1879, the British architect Josiah Conder guided Japanese architects and brought atelier training and traditional Japanese garden designs into design practice. The garden style of Ogawa Jihei VII, a garden designer in Kyoto during the Meiji and Taisho periods, was embraced by influential political and business leaders who sought to preserve Japan's traditional culture, and a garden protection system was established through various laws and regulations. Third, a comprehensive analysis of Japanese modern gardens, covering garden designers, Japanese components, materials, elements, and the Japanese style, showed that Yamagata Aritomo, Ogawa Jihei VII, and Mirei Shigemori were representative garden designers who preserved the Japanese style in their gardens.
They introduced features such as the Daejicheon (大池泉) garden, which places a large pond on spacious grounds, as well as the naturalistic borrowed-scenery method and flowing water. Key components of Japanese-style gardens include the use of turf, winding garden paths, and variation of plant species. Fourth, an analysis of the Japanese-style elements in the target sites revealed that, among the individual elements of spatial composition, the use of flowing water had the highest occurrence at 47.06%; Daejicheon and naturalistic borrowed scenery also appeared. The use of turf and of winding paths appeared at 65.88% and 78.82%, respectively, while the variation of tree species, at 28.24%, was relatively less common than turf or winding paths. Fifth, it is essential to discover more gardens from the modern period and to meticulously document their creators or owners, spatial composition, spatial components, and materials; this information will be invaluable in uncovering the identity of our own gardens. This study was conducted by analyzing the process of establishing the Japanese style during Japan's modern period, using examples of garden designers and gardens. While it has limitations, such as the absence of more in-depth case studies and analysis of specific techniques, it sets the stage for future exploration.