• Title/Summary/Keyword: Adaptive Structure

Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung; Yang, Jin Yong / Journal of Intelligence and Information Systems / v.20 no.1 / pp.35-48 / 2014
  • According to the 2013 construction market outlook report, the liquidation of construction companies is expected to continue due to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact than those in other industries. However, because of the industry's distinctive capital structure and debt-to-equity ratio, it is more difficult to forecast construction companies' bankruptcies than those of companies in other industries. The construction industry operates on greater leverage, with high debt-to-equity ratios and project cash flow concentrated in the second half of a project. The economic cycle strongly influences construction companies, so downturns tend to rapidly increase their bankruptcy rates. High leverage, coupled with increased bankruptcy rates, places a greater burden on banks providing loans to construction companies. Nevertheless, bankruptcy prediction research has concentrated mainly on financial institutions, and construction-specific studies are rare. Bankruptcy prediction models based on corporate financial data have been studied for some time in various ways, but these models are intended for companies in general and may not be appropriate for forecasting the bankruptcies of construction companies, which typically carry disproportionately large liquidity risks. The construction industry is capital-intensive, operates on long timelines with large-scale investment projects, and has comparatively longer payback periods than other industries. Because of this unique capital structure, criteria used to judge the financial risk of companies in general cannot be applied directly to construction firms. The Altman Z-score, first published in 1968, is commonly used as a bankruptcy forecasting model. It forecasts the likelihood of a company going bankrupt using a simple formula and classifies the result into three categories, evaluating the corporate status as dangerous, moderate, or safe. A company in the "dangerous" category has a high likelihood of bankruptcy within two years, while one in the "safe" category has a low likelihood of bankruptcy; for companies in the "moderate" category, the risk is difficult to forecast. Many of the construction firms in this study fell into the "moderate" category, which made it difficult to forecast their risk. Along with the development of machine learning, recent studies of corporate bankruptcy forecasting have used this technology. Pattern recognition, a representative application area of machine learning, is applied to forecasting corporate bankruptcy: patterns are analyzed based on a company's financial information and then judged as to whether they belong to the bankruptcy risk group or the safe group. The representative machine learning models previously used in bankruptcy forecasting are Artificial Neural Networks, Adaptive Boosting (AdaBoost), and the Support Vector Machine (SVM), and there are also many hybrid studies combining these models. Existing studies using the traditional Z-score technique or machine learning focus on companies in non-specific industries, so industry-specific characteristics are not considered. In this paper, we confirm that AdaBoost is the most appropriate forecasting model for construction companies and examine its performance by company size. We classified construction companies into three groups - large, medium, and small - based on capital, and analyzed the predictive ability of AdaBoost for each group. The experimental results showed that AdaBoost has greater predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
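
The abstract does not reproduce the model configuration or data; as a rough, non-authoritative sketch of the size-grouped AdaBoost approach it describes, something along the following lines could be used (the financial-ratio features, group sizes, and labels here are entirely hypothetical):

```python
# Minimal sketch of size-grouped AdaBoost bankruptcy prediction.
# All feature names, thresholds, and data are hypothetical illustrations,
# not the paper's actual dataset or configuration.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def make_group(n):
    """Generate synthetic financial ratios and a bankruptcy label (1 = bankrupt)."""
    X = rng.normal(size=(n, 4))           # e.g. debt ratio, liquidity, ROA, cash flow
    y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=n) > 0.8).astype(int)
    return X, y

# Firms split by capital into three size groups, as the paper does.
groups = {"large (> 50bn won)": make_group(300),
          "medium": make_group(300),
          "small": make_group(300)}

for name, (X, y) in groups.items():
    model = AdaBoostClassifier(n_estimators=100, random_state=0)
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```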

4-Dimensional dose evaluation using deformable image registration in respiratory gated radiotherapy for lung cancer (폐암의 호흡동조방사선치료 시 변형영상정합을 이용한 4차원 선량평가)

  • Um, Ki Cheon; Yoo, Soon Mi; Yoon, In Ha; Back, Geum Mun / The Journal of Korean Society for Radiation Therapy / v.30 no.1_2 / pp.83-95 / 2018
  • Purpose: After planning respiratory gated radiotherapy for lung cancer, the movement and volume change of spared normal structures near the target are often not considered during dose evaluation. This study carried out a 4-D dose evaluation that reflects the movement of normal structures at each phase of respiratory gated radiotherapy, using deformable image registration, a technique widely used for adaptive radiotherapy. The study also discusses the need for such analysis and establishes recommendations regarding the movement and volume change of normal structures due to the patient's breathing pattern during the evaluation of treatment plans. Materials and methods: The subjects were 10 lung cancer patients who received respiratory gated radiotherapy. Using Eclipse (Ver 13.6, Varian, USA), the structures seen in the top-phase CT image were set identically on each phase via the Propagation or Segmentation Wizard menu, and structure movement and volume were analyzed by the center-to-center method. The image and dose distribution of each phase were deformed onto the top-phase CT image for 4-dimensional dose evaluation using the VELOCITY program. In addition, the 4-D dose distribution was verified for the 4-D gamma pass rate using a QUASAR™ phantom (Modus Medical Devices) and GAFCHROMIC™ EBT3 film (Ashland, USA). Result: The movement between the inspiration and expiration phases was greatest in the axial direction of the right lung, at 0.989 ± 0.34 cm, and smallest in the lateral direction of the spinal cord, at -0.001 cm. The volume of the right lung showed the greatest rate of change, 33.5 %. The maximal and minimal differences in PTV Conformity Index and Homogeneity Index between 3-dimensional and 4-dimensional dose evaluation were 0.076 and 0.021, and 0.011 and 0.0, respectively. Differences of 0.0045~2.76 % were observed in normal structures using 4-D dose evaluation. The 4-D gamma pass rate of every patient exceeded the 95 % reference. Conclusion: The PTV Conformity Index showed a significant difference in all patients between the two dose evaluations, while no significant difference was observed for the Homogeneity Index. The 4-D dose distribution was more homogeneous than the 3-D distribution because it accounts for the breathing motion that helps fill the PTV margin area. There was a difference of 0.004~2.76 % in the 4-D evaluation of normal structures, with a significant difference between the two evaluation methods for all normal structures except the spinal cord. This study shows that doses to normal structures can be underestimated by 3-D dose evaluation. Therefore, 4-D dose evaluation with deformable image registration should be considered when a dose change is expected in normal structures due to the patient's breathing pattern; it is a more realistic dose evaluation method because it reflects the movement of normal structures with breathing.
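
The 4-D gamma pass rate reported above follows the usual gamma-index idea of combining dose difference with distance-to-agreement; the abstract does not state the criteria used, so the sketch below assumes a common 3 %/3 mm global setting and purely synthetic dose grids:

```python
# Illustrative 2-D gamma index (Low et al.-style), not the authors' verification code.
# The criteria (3 %/3 mm) and the dose grids are assumed for demonstration only.
import numpy as np

def gamma_pass_rate(ref, eval_, spacing_mm=1.0, dd=0.03, dta_mm=3.0):
    """Fraction of reference points with gamma <= 1 (global dose normalization)."""
    ny, nx = ref.shape
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float) * spacing_mm
    norm = ref.max()
    gammas = np.empty_like(ref)
    for i in range(ny):
        for j in range(nx):
            dist2 = (yy - yy[i, j]) ** 2 + (xx - xx[i, j]) ** 2
            dose2 = ((eval_ - ref[i, j]) / (dd * norm)) ** 2
            gammas[i, j] = np.sqrt(np.min(dist2 / dta_mm ** 2 + dose2))
    return float(np.mean(gammas <= 1.0))

ref = np.fromfunction(lambda y, x: np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 200), (64, 64))
shifted = np.roll(ref, 1, axis=1)          # evaluated dose with a 1-pixel shift
print(f"gamma pass rate: {gamma_pass_rate(ref, shifted):.3f}")
```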

Influence of substituted phenylcarbamoyl group on the fungicidal activites of a new 5,6-dihydro-2-trifluoromethyl-1,4-oxathiincarboxanilide derivatives (새로운 5,6-dihydro-2-trifluoromethyl-1,4-oxathiincarboxanilide 유도체의 항균활성에 미치는 치환-phenylcarbamoyl group의 영향)

  • Sung, Nack-Do; Yu, Seong-Jae; Nam, Kee-Dal; Chang, Kee-Hyuk; Hahn, Hoh-Gyu / The Korean Journal of Pesticide Science / v.2 no.3 / pp.64-69 / 1998
  • Thirty new derivatives of 5,6-dihydro-2-trifluoromethyl-1,4-oxathiin carboxanilide as substrates (S) were synthesized, and their in vivo fungicidal activities against rice sheath blight (Rhizoctonia solani) and wheat leaf rust (Puccinia recondita) were examined. The structure-activity relationships (SAR) between the activities (pI50) and the physicochemical parameters of the substituents (X) at the phenylcarbamoyl group were analyzed using the adaptive regression analysis method. The 3-methoxy (11), 3-isopropyloxy (13), and 3-isopropyl (25) substituents as X on the phenylcarbamoyl group exhibited the highest fungicidal activity against the two fungi. The fungicidal potency of (S) against Puccinia recondita was higher than against Rhizoctonia solani. In the case of Rhizoctonia solani, the molecular hydrophobicity (π > 0) and the resonance effect (R < 0) of electron-donating meta-alkyl substituents were important factors in determining fungicidal activity. The HOMO energy (HOMO > 0), ABSQ (the sum of the absolute values of the atomic charges on each atom), and specific polarizability (Sp.Pol < 0) of (S) significantly influenced the fungicidal activity against Puccinia recondita. Based on the SAR studies, the interaction between (S) as an agonist and the receptor proceeds through a charge-controlled reaction; the conditions for higher activity are also discussed.
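
The abstract does not reproduce the regression equations; as a minimal illustration of a Hansch-type SAR fit of pI50 against substituent descriptors such as π, R, HOMO, ABSQ, and Sp.Pol, a plain least-squares sketch (with placeholder descriptor values, not the paper's data) might look like this:

```python
# Sketch of a Hansch-type SAR regression of pI50 on substituent descriptors.
# Descriptor values and activities are placeholders, not the paper's data; a plain
# least-squares fit stands in for the "adaptive regression analysis" used there.
import numpy as np

# Columns: hydrophobicity (pi), resonance effect (R), HOMO energy, ABSQ, Sp.Pol
X = np.array([[0.00,  0.00, -9.30, 2.40,  9.8],
              [0.56, -0.27, -9.10, 2.31, 10.2],
              [0.70,  0.10, -9.25, 2.38, 10.0],
              [0.86, -0.10, -9.20, 2.35, 10.5],
              [1.02, -0.18, -9.05, 2.28, 10.7],
              [1.53, -0.19, -9.00, 2.25, 11.0],
              [1.70, -0.22, -8.95, 2.23, 11.2],
              [1.98, -0.15, -8.90, 2.20, 11.4]])
pI50 = np.array([4.1, 4.8, 4.5, 5.0, 5.2, 5.6, 5.7, 5.9])   # hypothetical activities

A = np.column_stack([X, np.ones(len(X))])        # add intercept term
coeffs, *_ = np.linalg.lstsq(A, pI50, rcond=None)
pred = A @ coeffs
r2 = 1 - np.sum((pI50 - pred) ** 2) / np.sum((pI50 - pI50.mean()) ** 2)
print("coefficients (pi, R, HOMO, ABSQ, Sp.Pol, intercept):", np.round(coeffs, 3))
print("R^2 =", round(r2, 3))
```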

A Study on the Army Tactical C4I System Information Security Plan for Future Information Warfare (미래 정보전에 대비한 육군전술지휘정보체계(C4I) 정보보호대책 연구)

  • Woo, Hee-Choul / Journal of Digital Convergence / v.10 no.9 / pp.1-13 / 2012
  • This study aims to analyze the actual conditions of present national defense information network operation: the structure and management of the system, communication lines and their security equipment, the management of network and software, stored and transferred data, and the general vulnerable factors of the army tactical C4I system. By carrying out an extensive analysis of the army tactical C4I system, which is likely to be the core of future information warfare, this study suggests plans for better information security based on the vulnerable factors identified. Firstly, by suggesting various information security technologies, such as VPN (virtual private network), IPDS (intrusion prevention and detection system), and firewall systems against viruses and malicious software, as well as security operating systems and validation programs, this study provides plans to improve the network, hardware (computer security), and communication lines (communication security). Secondly, to prepare against hacking warfare, which has recently become a social issue, this study suggests countermeasures that increase the efficiency of the army tactical C4I system by investigating possible threats through an analysis of hacking techniques. Thirdly, to establish a more rational and efficient national defense information security system, this study provides a foundation by suggesting several priority factors, such as information security-related institutions and regulations and organizational alignment and supplementation. On the basis of these results, the study reaches the following conclusion: to establish a successful information security system, it is essential to compose and operate an efficient 'Integrated Security System' that can detect and promptly cope with intrusion behaviors in real time through various security systems of different types, and that maintains the relevant information properly by analyzing intrusion-related information.

Postprocessing of Inter-Frame Coded Images Based on Convex Projection and Regularization (POCS와 정규화를 기반으로한 프레임간 압축 영상의 후처리)

  • Kim, Seong-Jin; Jeong, Si-Chang; Hwang, In-Gyeong; Baek, Jun-Gi / Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.3 / pp.58-65 / 2002
  • In order to reduce blocking artifacts in inter-frame coded images, we propose a new image restoration algorithm that directly processes differential images before reconstruction. We note that blocking artifacts in inter-frame coded images are caused by both the 8×8 DCT and 16×16 macroblock-based motion compensation, while those of intra-coded images are caused by the 8×8 DCT only. Based on this observation, we propose a new degradation model for differential images and a corresponding restoration algorithm that utilizes additional constraints and convex sets for discontinuities inside blocks. The proposed restoration algorithm is a modified version of standard regularization that incorporates spatially adaptive lowpass filtering, with edge directions taken into account by utilizing a subset of the DCT coefficients. Most video coding standards adopt a hybrid structure of block-based motion compensation and the block discrete cosine transform (BDCT). For this reason, blocking artifacts occur both on block boundaries and in block interiors. For more complete removal of both kinds of blocking artifacts, the restored differential image must satisfy two constraints, namely directional discontinuities on block boundaries and in block interiors. These constraints are used to define convex sets for restoring differential images.
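
The paper's exact degradation model and adaptive filter are not given in the abstract; the following heavily simplified sketch only illustrates the general POCS pattern it describes, alternating a lowpass step with a projection onto the quantization constraint set of 8×8 DCT blocks (the filter, iteration count, and uniform quantizer are assumptions):

```python
# Highly simplified POCS-style deblocking sketch for an 8x8 block-coded difference image.
# It alternates a smoothing step with a projection back onto the quantization constraint
# set of each block's DCT coefficients. The filter choice, iteration count, and uniform
# quantization step are illustrative assumptions, not the paper's model.
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import uniform_filter

B, Q = 8, 16                                   # block size, assumed quantization step

def quantize_blocks(img):
    out = np.empty_like(img)
    for i in range(0, img.shape[0], B):
        for j in range(0, img.shape[1], B):
            c = dctn(img[i:i+B, j:j+B], norm='ortho')
            out[i:i+B, j:j+B] = idctn(np.round(c / Q) * Q, norm='ortho')
    return out

def project_onto_quant_set(img, coded):
    """Clip each block's DCT coefficients into the quantization interval of the coded image."""
    out = np.empty_like(img)
    for i in range(0, img.shape[0], B):
        for j in range(0, img.shape[1], B):
            c = dctn(img[i:i+B, j:j+B], norm='ortho')
            c0 = dctn(coded[i:i+B, j:j+B], norm='ortho')
            out[i:i+B, j:j+B] = idctn(np.clip(c, c0 - Q / 2, c0 + Q / 2), norm='ortho')
    return out

rng = np.random.default_rng(0)
original = uniform_filter(rng.normal(size=(64, 64)), 5)   # smooth synthetic "difference image"
coded = quantize_blocks(original)                          # blocky coded version

restored = coded.copy()
for _ in range(10):
    restored = uniform_filter(restored, 3)                 # stand-in for the adaptive lowpass step
    restored = project_onto_quant_set(restored, coded)     # convex-set projection

print("MSE coded   :", np.mean((coded - original) ** 2))
print("MSE restored:", np.mean((restored - original) ** 2))
```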

Adaptive Design Techniques for High-speed Toggle 2.0 NAND Flash Interface Considering Dynamic Internal Voltage Fluctuations (고속 Toggle 2.0 낸드 플래시 인터페이스에서 동적 전압 변동성을 고려한 설계 방법)

  • Yi, Hyun Ju; Han, Tae Hee / Journal of the Institute of Electronics and Information Engineers / v.49 no.9 / pp.251-258 / 2012
  • Recently, NAND flash memory has been evolving from SDR (Single Data Rate) to high-speed DDR (Double Data Rate) interfaces to meet the high performance requirements of SSDs and SSSs. Accordingly, latching valid data stably and minimizing data skew between pins by using PHY (physical layer) circuit techniques have become new issues. The rapid growth of NAND flash speed also increases the operating frequency and power consumption of the NAND flash controller. The internal voltage variation margin of the controller narrows as geometries shrink and internal operating voltages fall below 1.5 V, and the resulting increase in power budget deviation limits the normal operating range of the internal circuits. On-chip variation (OCV) aggravates the voltage variation problem and can cause internal logic errors, which are very hard to debug because they are not functional faults. In this paper, we propose a new architecture that maintains the valid timing window in a cost-effective way under sudden power fluctuations. Simulation results show that the proposed technique reduces data skew by 379 % and area by 20 % compared to using PHY circuits.
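
The abstract gives results but not the timing model; the toy calculation below merely illustrates how supply droop can erode the valid data window of a DDR-style interface, using invented numbers rather than anything from the paper or a NAND datasheet:

```python
# Back-of-the-envelope look at how supply droop erodes the valid data window of a
# DDR (Toggle-mode) interface. All numbers are illustrative assumptions only.
half_cycle_ns = 2.5           # data on both clock edges of an assumed 5 ns cycle
setup_ns, hold_ns = 0.6, 0.6  # receiver setup/hold requirements (assumed)
nominal_skew_ns = 0.3         # pin-to-pin skew at nominal supply (assumed)
delay_sensitivity = 0.4       # extra skew per volt of droop, ns/V (assumed)

def valid_window(droop_v):
    """Remaining valid data window after skew grows with supply droop."""
    skew = nominal_skew_ns + delay_sensitivity * droop_v
    return half_cycle_ns - setup_ns - hold_ns - skew

for droop in (0.0, 0.1, 0.2, 0.3):
    print(f"droop {droop:.1f} V -> valid window {valid_window(droop):.2f} ns")
```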

An Hybrid Clustering Using Meta-Data Scheme in Ubiquitous Sensor Network (유비쿼터스 센서 네트워크에서 메타 데이터 구조를 이용한 하이브리드 클러스터링)

  • Nam, Do-Hyun; Min, Hong-Ki / Journal of the Institute of Convergence Signal Processing / v.9 no.4 / pp.313-320 / 2008
  • The dynamic clustering technique has some problems regarding energy consumption. In terms of cluster configuration, the cluster structure must be modified every time the head nodes are re-selected, resulting in high energy consumption. There is also excessive energy consumption when a cluster head node receives identical data from adjacent source nodes in the cluster. This paper proposes a solution to these problems from the energy-efficiency perspective. The round-robin cluster header (RRCH) technique, which fixes the initially structured cluster and selects cluster head nodes sequentially, is suggested to solve the energy consumption problem caused by repetitive cluster construction. Furthermore, the issue of redundant data arriving at the cluster head node is addressed by broadcasting metadata of the initially received data, so that sensor nodes holding identical data do not transmit it again. A simulation experiment was performed to verify the validity of the proposed approach. The results were compared with two of the most widely used conventional techniques, the LEACH (Low Energy Adaptive Clustering Hierarchy) and HEED (Hybrid Energy-Efficient Distributed clustering) algorithms, in terms of energy consumption, remaining energy per node, and uniformity of distribution. The evaluation confirmed that, in terms of energy consumption, the proposed technique was 29.3 % and 21.2 % more efficient than LEACH and HEED, respectively.
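
As a toy illustration of the two mechanisms described above, fixed clusters with a round-robin head and metadata-based suppression of duplicate readings, the following sketch uses invented cluster sizes, energy costs, and sensor readings:

```python
# Toy sketch of the RRCH idea: a fixed cluster with a round-robin head, plus metadata
# broadcast so members whose data the head already has stay silent. Cluster layout,
# energy costs, and the "metadata" (a hash of the reading) are assumptions.
import hashlib

TX_COST, RX_COST = 2.0, 1.0                      # hypothetical energy units per message

class Node:
    def __init__(self, nid):
        self.nid, self.energy = nid, 100.0
    def send(self):
        self.energy -= TX_COST
    def receive(self):
        self.energy -= RX_COST

def run_round(cluster, rnd, readings):
    head = cluster[rnd % len(cluster)]           # round-robin head selection (RRCH)
    broadcast_meta = set()                       # metadata the head has announced this round
    for node in cluster:
        if node is head:
            continue
        meta = hashlib.md5(str(readings[node.nid]).encode()).hexdigest()
        if meta in broadcast_meta:               # identical data already held by the head:
            continue                             # this node does not transmit again
        node.send()
        head.receive()
        broadcast_meta.add(meta)

cluster = [Node(i) for i in range(5)]
for rnd in range(20):
    readings = {n.nid: 20 + (rnd % 3) for n in cluster}   # many identical readings
    run_round(cluster, rnd, readings)

for n in cluster:
    print(f"node {n.nid}: remaining energy {n.energy:.1f}")
```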

A Neuro-Fuzzy System Modeling using Gaussian Mixture Model and Clustering Method (GMM과 클러스터링 기법에 의한 뉴로-퍼지 시스템 모델링)

  • Kim, Sung-Suk; Kwak, Keun-Chang; Ryu, Jeong-Woong; Chun, Myung-Geun / Journal of the Korean Institute of Intelligent Systems / v.12 no.6 / pp.571-576 / 2002
  • There have been many efforts to improve the performance of neuro-fuzzy systems. Studies on neuro-fuzzy modeling have largely been devoted to two approaches: improving the performance index of the system and reducing the structure size. In spite of their satisfactory results, these approaches are difficult to extend to high-dimensional inputs or to larger numbers of membership functions. We propose a novel neuro-fuzzy system based on an efficient clustering method for initializing the parameters of the premise part. It is a useful method that maintains a small number of rules while improving performance, and it combines several algorithms to do so. The Expectation-Maximization algorithm for the Gaussian mixture model is an efficient method for estimating the unknown parameters of the mixture model, and the obtained parameters are used for fuzzy clustering. The proposed method satisfies the two requirements above by using the Gaussian mixture model together with neuro-fuzzy modeling. Experimental results indicate that the proposed method is capable of giving reliable performance.
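
As a minimal sketch of the initialization idea, fitting a Gaussian mixture by EM and reusing each component's mean and standard deviation as the center and width of a Gaussian membership function in the premise part, one could write (with synthetic data and an assumed number of rules):

```python
# Sketch: EM-fitted GMM parameters reused to initialize Gaussian membership functions
# of the premise (IF) part. The data and the number of rules are assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 0.4, 200),
                    rng.normal( 1.5, 0.6, 200)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=1).fit(x)   # EM estimation
centers = gmm.means_.ravel()
widths = np.sqrt(gmm.covariances_.ravel())

def gaussian_mf(value, c, s):
    """Gaussian membership function for one fuzzy rule's premise."""
    return np.exp(-0.5 * ((value - c) / s) ** 2)

for k, (c, s) in enumerate(zip(centers, widths)):
    print(f"rule {k}: center={c:.2f}, width={s:.2f}, mu(0.0)={gaussian_mf(0.0, c, s):.3f}")
```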

Adaptive Strategy Game Engine Using Non-monotonic Reasoning and Inductive Machine Learning (비단조 추론과 귀납적 기계학습 기반 적응형 전략 게임 엔진)

  • Kim, Je-Min; Park, Young-Tack / The KIPS Transactions:PartB / v.11B no.1 / pp.83-90 / 2004
  • Strategy games these days lack the special qualities of the genre. Game engines neither reason about the behaviors of computer objects nor have the learning ability to prepare countermeasures against the user's varied strategic commands. This paper suggests a strategy game engine that applies non-monotonic reasoning and inductive machine learning. The engine emphasizes three components: a "user behavior monitor" to abstract the behavior of the user's objects, a "learning engine" to learn the user's strategy, and a "behavior display handler" to reflect the abstracted behavior of computer objects in the game. In particular, this paper proposes a two-layered structure that applies non-monotonic reasoning and inductive learning so that computer objects learn the strategic behaviors of the user's objects exactly and respond to them. The engine decides the actions and strategies of computer objects using the information created through inductive learning. The main contribution of this paper is that, by applying non-monotonic reasoning and inductive machine learning, computer objects command excellent strategies and differentiate themselves from the behavior of existing computer objects.
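
The abstract does not specify the learning algorithm; the sketch below illustrates only the inductive-learning layer, with a decision tree choosing a counter-strategy from abstracted user-behavior features (the features, labels, and training data are hypothetical, and the non-monotonic reasoning layer is not shown):

```python
# Sketch of the inductive-learning layer: a decision tree trained on abstracted features
# of the user's object behavior picks a counter-strategy for the computer objects.
# Feature names, strategy labels, and training data are hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Abstracted user behavior: [attack_ratio, expansion_ratio, defense_ratio]
X = [[0.7, 0.1, 0.2],
     [0.2, 0.6, 0.2],
     [0.1, 0.2, 0.7],
     [0.6, 0.3, 0.1],
     [0.3, 0.5, 0.2]]
y = ["defend", "raid", "expand", "defend", "raid"]   # counter-strategies chosen by a designer

engine = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

observed = [[0.65, 0.15, 0.20]]                      # behavior monitored this game tick
print("computer strategy:", engine.predict(observed)[0])
```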

A study of Vertical Handover between LTE and Wireless LAN Systems using Adaptive Fuzzy Logic Control and Policy based Multiple Criteria Decision Making Method (LTE/WLAN 이종망 환경에서 퍼지제어와 정책적 다기준 의사결정법을 이용한 적응적 VHO 방안 연구)

  • Lee, In-Hwan; Kim, Tae-Sub; Cho, Sung-Ho / The KIPS Transactions:PartC / v.17C no.3 / pp.271-280 / 2010
  • For the next-generation mobile communication system, diverse wireless network technologies such as beyond-3G LTE, WiMAX/WiBro, and next-generation WLAN are converging onto an All-IP core network. Accordingly, a beyond-3G system integrating heterogeneous wireless access technologies must support vertical handover and the combined use of several radio networks; however, unified management is required because each network is serviced individually. To solve this problem, this study introduces Common Radio Resource Management (CRRM) based on a Generic Link Layer (GLL). We design the structure and functions needed to support vertical handover and propose a vertical handover algorithm between LTE and WLAN systems, composed of policy-based and MCDM (multiple criteria decision making) components and using the GLL. Finally, simulation results are presented to show the improved performance in terms of data throughput, handover success rate, system service cost, and number of handover attempts.
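
The abstract does not give the decision formulation; as a rough sketch of a policy-weighted multiple criteria decision between LTE and WLAN, a simple additive weighting over normalized criteria might look like this (the criteria, weights, and values are assumptions, and the fuzzy controller is not reproduced):

```python
# Sketch of a policy-weighted simple-additive-weighting (SAW) decision between LTE and WLAN.
# Criteria, normalization directions, policy weights, and measurements are all assumed
# for illustration; the paper's actual MCDM formulation is not reproduced.
import numpy as np

criteria = ["throughput_mbps", "rssi_dbm", "cost_per_mb", "load"]
benefit = np.array([True, True, False, False])        # higher is better / lower is better
candidates = {"LTE":  np.array([30.0, -85.0, 0.05, 0.60]),
              "WLAN": np.array([45.0, -70.0, 0.01, 0.80])}
weights = np.array([0.35, 0.25, 0.25, 0.15])          # policy-dependent weighting

matrix = np.vstack(list(candidates.values()))
lo, hi = matrix.min(axis=0), matrix.max(axis=0)
span = np.where(hi > lo, hi - lo, 1.0)
norm = np.where(benefit, (matrix - lo) / span, (hi - matrix) / span)   # min-max normalize

scores = norm @ weights
for name, score in zip(candidates, scores):
    print(f"{name}: score {score:.3f}")
print("handover target:", max(zip(scores, candidates))[1])
```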