• Title/Summary/Keyword: dynamic parameters

Search Result 3,972, Processing Time 0.031 seconds

Thyrotropin-Binding Inhibiting Immunoglobulin(TBII) in Patients with Autoimmune Thyroid Diseases (자가면역성 갑상선질환에서의 혈청 Thyrotropin-Binding Inhibiting Immunoglobulin치)

  • Jang, Dae-Sung;Ahn, Byeong-Cheol;Sohn, Sang-Kyun;Lee, Jae-Tae;Lee, Kyu-Bo
    • The Korean Journal of Nuclear Medicine
    • /
    • v.30 no.1
    • /
    • pp.65-76
    • /
    • 1996
  • In order to evaluate the significance of thyrotropin-binding inhibiting immunoglobulin (TBII) in patients with autoimmune thyroid diseases, the authors investigated 402 cases of Graves' disease and 230 cases of Hashimoto's thyroiditis, compared with 30 normal healthy adults, at Kyung Pook University Hospital from February 1993 to August 1994. TBII was measured by radioimmunoassay and assessed with respect to its dynamic change over the disease course, thyroid functional parameters, and other thyroid autoantibodies: antithyroglobulin antibody (ATAb) and antimicrosomal antibody (AMAb), as well as thyroglobulin. The serum level of TBII was $40.82{\pm}21.651(mean{\pm}SD)%$ in hyperthyroid Graves' disease and $8.89{\pm}14.522%$ in Hashimoto's thyroiditis; both differed significantly from the normal control value of $3.21{\pm}2.571%$. The frequency of abnormally increased TBII levels was 92.2% in hyperthyroid Graves' disease, 46.7% in euthyroid Graves' disease or hyperthyroidism in remission, and 23.9% in Hashimoto's thyroiditis. The increased serum TBII levels in Graves' disease were positively correlated with RAIU and serum T3, T4, and FT4, but negatively correlated with serum TSH (each P<0.001). TBII in Graves' disease had a significant positive correlation with serum thyroglobulin and AMAb, but no significant correlation with ATAb. In Hashimoto's thyroiditis, the serum levels of TBII were positively correlated with RAIU, serum T3, TSH and AMAb, but not significantly correlated with serum T4, FT4, thyroglobulin and ATAb. Therefore, the serum level of TBII seems to be a useful means of assessing the degree of hyperthyroidism in Graves' disease and correlates well with thyroidal stimulation. The serum level of TBII in Hashimoto's thyroiditis is meaningful for the degree of both the functional abnormality, reflecting either hyperfunction or hypofunction, and the immunologic abnormality.


Synthetic Application of Seismic Piezo-cone Penetration Test for Evaluating Shear Wave Velocity in Korean Soil Deposits (국내 퇴적 지반의 전단파 속도 평가를 위한 탄성파 피에조콘 관입 시험의 종합적 활용)

  • Sun, Chang-Guk;Kim, Hong-Jong;Jung, Jong-Hong;Jung, Gyung-Ja
    • Geophysics and Geophysical Exploration
    • /
    • v.9 no.3
    • /
    • pp.207-224
    • /
    • 2006
  • It has been widely known that the seismic piezo-cone penetration test (SCPTu) is one of the most useful techniques for investigating geotechnical characteristics such as static and dynamic soil properties. As practical applications in Korea, SCPTu was carried out at two sites in Busan and four sites in Incheon, which are mainly composed of alluvial or marine soil deposits. From the SCPTu waveform data obtained at the testing sites, the first arrival times of shear waves and the corresponding time differences with depth were determined using the cross-over method, and the shear wave velocity $(V_S)$ profiles with depth were derived by the refracted ray path method based on Snell's law. Comparing the determined $V_S$ profile with the cone tip resistance $(q_t)$ profile, the trends of both profiles with depth were similar. For the application of conventional CPTu to earthquake engineering practice, correlations between $V_S$ and CPTu data were deduced based on the SCPTu results. For the empirical evaluation of $V_S$ for all soils, together with clays and sands classified unambiguously in this study by the soil behavior type classification index $(I_C)$, the authors suggested $V_S-CPTu$ data correlations expressed as a function of four parameters, $q_t,\;f_s,\;\sigma'_{v0}$ and $B_q$, determined by multiple statistical regression modeling. Despite the incompatible strain levels of the downhole seismic test during SCPTu and conventional CPTu, it is shown that the $V_S-CPTu$ data correlations for all soils, clays and sands suggested in this study are applicable to the preliminary estimation of $V_S$ for soil deposits in parts of Korea and are more reliable than previous correlations proposed by other researchers.
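The abstract gives only the functional form of the suggested correlations, not the fitted coefficients. As a hypothetical sketch, a power-law $V_S$-CPTu correlation of the kind described (a function of $q_t$, $f_s$, $\sigma'_{v0}$ and $B_q$, fitted by regression in log space) might look like:

```python
def vs_from_cptu(qt_kpa, fs_kpa, sigma_v0_kpa, bq,
                 a=7.5, b1=0.35, b2=0.10, b3=0.15, c=0.4):
    """Hypothetical power-law correlation
    Vs = a * qt^b1 * fs^b2 * sigma'_v0^b3 * (1 + Bq)^c.
    All coefficients are illustrative placeholders, NOT the paper's fitted
    values. Stresses in kPa; Bq is dimensionless (must be > -1 here)."""
    return a * qt_kpa**b1 * fs_kpa**b2 * sigma_v0_kpa**b3 * (1.0 + bq)**c

# A stiffer deposit (higher tip resistance) should yield a higher Vs estimate.
soft = vs_from_cptu(qt_kpa=800.0, fs_kpa=20.0, sigma_v0_kpa=80.0, bq=0.3)
stiff = vs_from_cptu(qt_kpa=8000.0, fs_kpa=60.0, sigma_v0_kpa=120.0, bq=0.05)
```

Taking logarithms turns this form into an ordinary least-squares problem, which is how multiple statistical regression of such correlations is typically carried out.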

A Study on the Behaviour of Prebored and Precast Steel Pipe Piles from Full-Scale Field Tests and Class-A and C1 Type Numerical Analyses (현장시험과 Class-A 및 C1 type 수치해석을 통한 강관매입말뚝의 거동에 대한 연구)

  • Kim, Sung-Hee;Jung, Gyoung-Ja;Jeong, Sang-Seom;Jeon, Young-Jin;Kim, Jeong-Sub;Lee, Cheol-Ju
    • Journal of the Korean GEO-environmental Society
    • /
    • v.18 no.7
    • /
    • pp.37-47
    • /
    • 2017
  • In this study, a series of full-scale field tests on prebored and precast steel pipe piles and the corresponding numerical analyses have been conducted in order to study the characteristics of pile load-settlement relations and shear stress transfer at the pile-soil interface. Dynamic pile load tests (EOID and restrike) were performed on the piles, and the design pile loads estimated from the EOID and restrike tests were analysed. The design loads estimated from the Class-A type numerical analyses conducted prior to the pile load tests were 56~105%, 65~121% and 38~142%, respectively, of those obtained from the static load tests. In addition, design loads estimated from the restrike tests indicate increases of 12~60% compared to those estimated from the EOID tests. The EOID tests show large end bearing capacity, while the restrike tests demonstrate increased skin friction. When impact energy is insufficient during the restrike tests, the end bearing capacity may be underestimated. It has been found that total pile capacity would be reasonably estimated if skin friction from the restrike tests and end bearing capacity from the EOID tests are combined. The load-settlement relation measured from the static pile load tests and estimated from the numerical modelling are in general agreement until yielding occurs, after which the results from the numerical analyses deviate substantially from those obtained from the static load tests. The pile behaviour measured from the static load tests resembles that of a perfectly elastic-plastic material after yielding, with only a small increase in the pile load, while the numerical analyses demonstrate a gradual increase in the pile load associated with strain hardening approaching the ultimate pile load. It is concluded that the load-settlement relation mainly depends upon the stiffness of the ground, whilst the shear transfer mechanism depends on the shear strength parameters.

$^{99m}Tc-DTPA$ Galactosyl Human Serum Albumin Scintigraphy in Mushroom Poisoning Patients: Comparison with Liver Ultrasonography (버섯 중독 환자에서의 $^{99m}Tc-galactosyl$ human serum albumin (GSA) scintigraphy 소견 : 간초음파 소견과의 비교)

  • Jeong, Shin-Young;Lee, Jea-Tae;Bae, Jin-Ho;Chun, Kyung-Ah;Ahn, Byeong-Cheol;Kang, Young-Mo;Jeong, Jae-Min;Lee, Kyu-Bo
    • The Korean Journal of Nuclear Medicine
    • /
    • v.37 no.4
    • /
    • pp.254-259
    • /
    • 2003
  • $^{99m}Tc-galactosyl$ human serum albumin (Tc-GSA) is a radiopharmaceutical that binds to asialoglycoprotein receptors, which are specifically present in the hepatocyte membrane. Because these receptors are decreased in hepatic parenchymal damage, the degree of Tc-GSA accumulation in the liver correlates with the findings of liver function tests. Hepatic imaging was performed with Tc-GSA in patients with acute hepatic dysfunction caused by Amanita subjunquillea poisoning, and the findings were compared with those of liver ultrasonography (USG). Tc-GSA (185 MBq, 3 mg of GSA) was injected intravenously, and dynamic images were recorded for 30 minutes. Time-activity curves for the heart and liver were generated from regions of interest over the whole liver and the precordium. The degree of hepatic uptake and the clearance rate of Tc-GSA were assessed by visual interpretation and by semiquantitative analysis parameters (receptor index: LHL15 and index of blood clearance: HH15). Visual assessment of GSA scintigraphy revealed mildly decreased liver uptake in all subjects. The mean LHL15 and HH15 were 0.886 and 0.621, graded as mild dysfunction in two subjects and mild-to-moderate dysfunction in one. In contrast, liver USG showed no remarkable changes in the hepatic parenchyma. Tc-GSA scintigraphy was considered a useful imaging modality in the assessment of hepatic dysfunction.
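The receptor index and blood-clearance index mentioned here follow the usual Tc-GSA convention: HH15 is the heart ROI count at 15 minutes divided by that at 3 minutes, and LHL15 is the liver count divided by the liver-plus-heart count at 15 minutes. A minimal sketch, assuming time-activity curves sampled at known minutes (the exact sampling in the paper is not stated):

```python
def gsa_indices(heart_counts, liver_counts, times_min):
    """Compute (HH15, LHL15) from heart/liver ROI time-activity curves
    sampled at the minutes listed in times_min.
      HH15  = heart counts at 15 min / heart counts at 3 min   (blood clearance)
      LHL15 = liver / (liver + heart) counts at 15 min         (receptor index)
    """
    i3 = times_min.index(3)
    i15 = times_min.index(15)
    hh15 = heart_counts[i15] / heart_counts[i3]
    lhl15 = liver_counts[i15] / (liver_counts[i15] + heart_counts[i15])
    return hh15, lhl15

# Illustrative counts chosen to reproduce the abstract's mean indices.
hh15, lhl15 = gsa_indices(heart_counts=[1000.0, 621.0],
                          liver_counts=[3000.0, 4826.0],
                          times_min=[3, 15])
```

Lower LHL15 and higher HH15 both indicate fewer functioning asialoglycoprotein receptors, i.e. worse parenchymal function.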

A Dynamic Prefetch Filtering Schemes to Enhance Usefulness Of Cache Memory (캐시 메모리의 유용성을 높이는 동적 선인출 필터링 기법)

  • Chon Young-Suk;Lee Byung-Kwon;Lee Chun-Hee;Kim Suk-Il;Jeon Joong-Nam
    • The KIPS Transactions:PartA
    • /
    • v.13A no.2 s.99
    • /
    • pp.123-136
    • /
    • 2006
  • Prefetching is an effective way to reduce the latency caused by memory accesses. However, excessively aggressive prefetching not only causes cache pollution, cancelling out the benefits of prefetching, but also increases bus traffic, leading to overall performance degradation. In this thesis, a prefetch filtering scheme is proposed that dynamically decides whether to commence prefetching by consulting a filtering table, in order to reduce the cache pollution due to unnecessary prefetches. First, a prefetch hashing table 1-bit state filtering scheme (PHT1bSC) is presented to analyze the lock problem of the conventional scheme; like the conventional scheme it uses N:1 mapping, but each entry holds a two-state 1-bit value. A complete block address table filtering scheme (CBAT) is introduced as a reference for the comparative study. A prefetch block address lookup table scheme (PBALT), the main idea of this paper, is then proposed, which exhibits the most exact filtering performance. This scheme has a table of the same length as the PHT1bSC scheme, each entry has the same fields as in the CBAT scheme, and a recently never-referenced data block address is mapped 1:1 to an entry of the filter table. Simulations were run on commonly used prefetch schemes with general benchmarks and multimedia programs while varying the cache parameters. Compared with no filtering, the PBALT scheme showed the greatest enhancement, 22%, and its improved filtering accuracy decreased the cache miss ratio by 7.9% compared with the conventional PHT2bSC. The MADT of the proposed PBALT scheme decreased by 6.1% compared with conventional schemes, reducing total execution time.
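The thesis's filter tables are hardware structures; as a loose software model of the underlying idea (suppress future prefetches of block addresses that were previously prefetched but evicted without ever being referenced), one might write the following sketch. The table size, eviction policy, and method names here are illustrative, not the thesis design:

```python
class PrefetchFilter:
    """Simplified dynamic prefetch filter: remembers block addresses whose
    prefetches turned out to be useless (cache pollution) and vetoes
    prefetching them again. FIFO replacement stands in for the real
    table-indexing schemes (PHT1bSC/CBAT/PBALT)."""

    def __init__(self, size=64):
        self.size = size
        self.polluters = []          # FIFO of useless-prefetch block addresses

    def record_useless(self, addr):
        """Called when a prefetched block is evicted without being referenced."""
        if addr in self.polluters:
            return
        if len(self.polluters) >= self.size:
            self.polluters.pop(0)    # evict oldest entry
        self.polluters.append(addr)

    def record_useful(self, addr):
        """Called when a prefetched block is actually referenced."""
        if addr in self.polluters:
            self.polluters.remove(addr)

    def allow_prefetch(self, addr):
        """Filtering decision consulted before issuing a prefetch."""
        return addr not in self.polluters
```

In hardware, the lookup would be a tag match on a small table rather than a list scan; the point is only the decision logic.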

Comparative Assessment of Linear Regression and Machine Learning for Analyzing the Spatial Distribution of Ground-level NO2 Concentrations: A Case Study for Seoul, Korea (서울 지역 지상 NO2 농도 공간 분포 분석을 위한 회귀 모델 및 기계학습 기법 비교)

  • Kang, Eunjin;Yoo, Cheolhee;Shin, Yeji;Cho, Dongjin;Im, Jungho
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.6_1
    • /
    • pp.1739-1756
    • /
    • 2021
  • Atmospheric nitrogen dioxide (NO2) is mainly caused by anthropogenic emissions. It contributes to the formation of secondary pollutants and ozone through chemical reactions, and adversely affects human health. Although ground stations that monitor NO2 concentrations in real time are operated in Korea, they have the limitation that it is difficult to analyze the spatial distribution of NO2 concentrations, especially over areas with no stations. Therefore, this study conducted a comparative experiment on spatial interpolation of NO2 concentrations based on two linear-regression methods (i.e., multiple linear regression (MLR) and regression kriging (RK)) and two machine learning approaches (i.e., random forest (RF) and support vector regression (SVR)) for the year 2020. The four approaches were compared using leave-one-out cross validation (LOOCV). The daily LOOCV results showed that MLR, RK, and SVR produced an average daily index of agreement (IOA) of 0.57, which was higher than that of RF (0.50). The average daily normalized root mean square error of RK was 0.9483%, which was slightly lower than those of the other models. MLR, RK and SVR showed similar seasonal distribution patterns, and the dynamic range of the resultant NO2 concentrations from these three models was similar, while that from RF was relatively small. The multivariate linear regression approaches are expected to be a promising method for spatial interpolation of ground-level NO2 concentrations and other parameters in urban areas.
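The index of agreement used for the daily comparison is Willmott's d statistic, and LOOCV simply refits the model once per held-out observation (here, per station). A compact sketch with a stand-in one-predictor linear model (the study's MLR/RK/RF/SVR models are of course richer):

```python
def ioa(obs, pred):
    """Willmott's index of agreement: 1 - SSE / potential error (1 = perfect)."""
    ob = sum(obs) / len(obs)
    sse = sum((p - o) ** 2 for o, p in zip(obs, pred))
    pot = sum((abs(p - ob) + abs(o - ob)) ** 2 for o, p in zip(obs, pred))
    return 1.0 - sse / pot if pot else 1.0

def fit_slr(x, y):
    """Closed-form simple linear regression; a stand-in for the real models."""
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    b = (sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
         / sum((xi - xm) ** 2 for xi in x))
    a = ym - b * xm
    return lambda xi: a + b * xi

def loocv_ioa(x, y):
    """Leave-one-out CV: refit without point i, predict point i, score by IOA."""
    preds = []
    for i in range(len(x)):
        xtr, ytr = x[:i] + x[i + 1:], y[:i] + y[i + 1:]
        preds.append(fit_slr(xtr, ytr)(x[i]))
    return ioa(y, preds)
```

For perfectly linear data every held-out prediction is exact, so the LOOCV IOA is 1.0.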

Contrast Media in Abdominal Computed Tomography: Optimization of Delivery Methods

  • Joon Koo Han;Byung Ihn Choi;Ah Young Kim;Soo Jung Kim
    • Korean Journal of Radiology
    • /
    • v.2 no.1
    • /
    • pp.28-36
    • /
    • 2001
  • Objective: To provide a systematic overview of the effects of various parameters on contrast enhancement within the same population, an animal experiment as well as a computer-aided simulation study was performed. Materials and Methods: In the animal experiment, single-level dynamic CT through the liver was performed at 5-second intervals for 3 minutes, starting just after the injection of contrast medium. Combinations of three different amounts (1, 2, 3 mL/kg), concentrations (150, 200, 300 mgI/mL), and injection rates (0.5, 1, 2 mL/sec) were used. The CT number of the aorta (A), portal vein (P) and liver (L) was measured in each image, and time-attenuation curves for A, P and L were thus obtained. The degree of maximum enhancement (Imax) and time to reach peak enhancement (Tmax) of A, P and L were determined, and times to equilibrium (Teq) were analyzed. In the computer-aided simulation model, a program based on the amount, flow, and diffusion coefficient of body fluid in various compartments of the human body was designed. The input variables were the concentrations, volumes and injection rates of the contrast media used. The program generated the time-attenuation curves of A, P and L, as well as liver-to-hepatocellular carcinoma (HCC) contrast curves. On each curve, we calculated and plotted the optimal temporal window (the time period above the lower threshold, which in this experiment was 10 Hounsfield units), the total area under the curve above the lower threshold, and the area within the optimal range. Results: A. Animal Experiment: At a given concentration and injection rate, an increased volume of contrast medium led to increases in Imax A, P and L. In addition, Tmax A, P, L and Teq were prolonged in parallel with increases in injection time, and the time-attenuation curve shifted upward and to the right.
For a given volume and injection rate, an increased concentration of contrast medium increased the degree of aortic, portal and hepatic enhancement, though Tmax A, P and L remained the same; the time-attenuation curve shifted upward. For a given volume and concentration of contrast medium, changes in the injection rate had a prominent effect on aortic enhancement, while enhancement of the portal vein and hepatic parenchyma also showed some increase, though the effect was less prominent. An increase in the rate of contrast injection shifted the time-enhancement curve to the left and upward. B. Computer Simulation: At a faster injection rate, there was minimal change in the degree of hepatic attenuation, though the duration of the optimal temporal window decreased. The area between 10 and 30 HU was greatest when contrast medium was delivered at a rate of 2-3 mL/sec. Although the total area under the curve increased in proportion to the injection rate, most of this increase was above the upper threshold, and thus the temporal window was narrow and the optimal area decreased. Conclusion: Increases in volume, concentration and injection rate all resulted in improved arterial enhancement. If cost is disregarded, increasing the injection volume is the most reliable way of obtaining good-quality enhancement. The optimal way of delivering a given amount of contrast medium can be calculated using a computer-based mathematical model.
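The "optimal temporal window" defined in the simulation (the period during which enhancement stays above the 10-HU lower threshold, together with the area above that threshold) can be computed from a sampled time-attenuation curve. A sketch using linear interpolation at the threshold crossings (the paper's own numerics are not described, so this is only illustrative):

```python
def window_above(times, hu, thr=10.0):
    """Return (duration, area above thr) for the portion of a sampled
    time-attenuation curve that exceeds the threshold. Segments crossing
    the threshold are clipped by linear interpolation; area is trapezoidal."""
    dur = area = 0.0
    for t0, t1, e0, e1 in zip(times, times[1:], hu, hu[1:]):
        if e0 < thr and e1 < thr:
            continue                       # whole segment below threshold
        a0, a1, v0, v1 = t0, t1, e0, e1
        if e0 < thr:                       # curve rises through the threshold
            a0 = t0 + (thr - e0) / (e1 - e0) * (t1 - t0)
            v0 = thr
        elif e1 < thr:                     # curve falls through the threshold
            a1 = t0 + (thr - e0) / (e1 - e0) * (t1 - t0)
            v1 = thr
        dur += a1 - a0
        area += ((v0 - thr) + (v1 - thr)) / 2.0 * (a1 - a0)
    return dur, area

# Trapezoid-shaped enhancement curve: rises above 10 HU at t=5, falls at t=25.
dur, area = window_above([0, 10, 20, 30], [0, 20, 20, 0], thr=10.0)
```

The same routine, run with an upper threshold as well, would give the "area within the optimal range" (10-30 HU) discussed in the simulation results.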


Evaluation of Liver Function Using $^{99m}Tc-Lactosylated$ Serum Albumin Liver Scintigraphy in Rats with Acute Hepatic Injury Induced by Dimethylnitrosamine (Dimethylnitrosamine 유발 급성 간 손상 흰쥐에서 $^{99m}Tc-Lactosylated$ Serum Albumin을 이용한 간 기능의 평가)

  • Jeong, Shin-Young;Seo, Myung-Rang;Yoo, Jeong-Ah;Bae, Jin-Ho;Ahn, Byeong-Cheol;Hwang, Jae-Seok;Jeong, Jae-Min;Ha, Jeong-Hee;Lee, Kyu-Bo;Lee, Jae-Tae
    • The Korean Journal of Nuclear Medicine
    • /
    • v.37 no.6
    • /
    • pp.418-427
    • /
    • 2003
  • Objectives: $^{99m}Tc-lactosylated$ human serum albumin (LSA) is a newly synthesized radiopharmaceutical that binds to asialoglycoprotein receptors, which are specifically present on the hepatocyte membrane. Hepatic uptake and blood clearance of LSA were evaluated in rats with acute hepatic injury induced by dimethylnitrosamine (DMN), and the results were compared with the corresponding liver enzyme profiles and histologic changes. Materials and Methods: DMN (27 mg/kg) was injected intraperitoneally into Sprague-Dawley rats to induce acute hepatic injury. At 3 (DMN-3), 8 (DMN-8), and 21 (DMN-21) days after injection of DMN, LSA was injected intravenously, and dynamic images of the liver and heart were recorded for 30 minutes. Time-activity curves of the heart and liver were generated from regions of interest drawn over the liver and heart areas. The degree of hepatic uptake and blood clearance of LSA were evaluated by visual interpretation and by semiquantitative analysis using two parameters (receptor index: LHL3 and index of blood clearance: HH3); analysis of the time-activity curves was also performed with curve fitting using the Prism program. Results: Visual assessment of LSA images revealed decreased hepatic uptake in DMN-treated rats compared to the control group. In semiquantitative analysis, LHL3 was significantly lower in the DMN-treated groups than in the control group (DMN-3: 0.842, DMN-8: 0.898, DMN-21: 0.91, control: 0.96, p<0.05), whereas HH3 was significantly higher (DMN-3: 0.731, DMN-8: 0.654, DMN-21: 0.604, control: 0.473, p<0.05). AST and ALT were significantly higher in the DMN-3 group than in the control group. Centrilobular necrosis and infiltration of inflammatory cells were most prominent in the DMN-3 group and decreased over time. Conclusion: The degree of hepatic uptake of LSA was inversely correlated with liver transaminases and with the degree of histologic liver injury in rats with acute hepatic injury.

Measuring Consumer-Brand Relationship Quality (소비자-브랜드 관계 품질 측정에 관한 연구)

  • Kang, Myung-Soo;Kim, Byoung-Jai;Shin, Jong-Chil
    • Journal of Global Scholars of Marketing Science
    • /
    • v.17 no.2
    • /
    • pp.111-131
    • /
    • 2007
  • As a brand becomes a core asset in creating a corporation's value, brand marketing has become one of the core strategies that corporations pursue. Recently, customer relationship management centered on the brand, rather than on mere possession and consumption of goods, has developed. The main reason for the increased interest in the relationship between the brand and the consumer is the acquisition of individual consumers and the development of relationships with those consumers. Through such relationships, a corporation is able to establish long-term bonds with its customers, which become a competitive advantage and a strategic asset for the corporation. The importance of, and increased interest in, brands has also become a major academic issue. Brand equity, brand extension, brand identity, brand relationship, and brand community are the research streams derived from this interest. More specifically, in marketing, the study of brands has led to the study of the factors involved in building powerful brands and of the brand-building process. Recently, studies have concentrated primarily on the consumer-brand relationship, because brand loyalty cannot explain the dynamic qualitative aspects of loyalty, the consumer-brand relationship building process, and especially the interactions between brands and consumers. In studies of the consumer-brand relationship, a brand is not limited to an object of possession or consumption, but rather is conceptualized as a partner. Most past studies concentrated on qualitative analysis of the consumer-brand relationship to show the depth and breadth of its performance. Studies in Korea have followed the same pattern.
Recently, studies of the consumer-brand relationship have started to concentrate on quantitative rather than qualitative analysis, or have gone further with quantitative analysis to identify the factors affecting the consumer-brand relationship. These new quantitative approaches show the possibility of using their results as a new way of viewing the consumer-brand relationship and of applying these concepts in marketing. Quantitative studies of the consumer-brand relationship already exist, but none of them provide theoretical justification for measuring the sub-dimensions of the consumer-brand relationship as a single construct. In other words, most studies simply add up or average the sub-dimensions of the consumer-brand relationship; however, such procedures presuppose that the sub-dimensions form an identical construct. Therefore, most past studies do not meet the condition that the sub-dimensions constitute a one-dimensional construct, which calls their validity into question and marks their limits. The main purpose of this paper is to overcome these limits by making practical use of previous studies that treat sub-dimensions as a one-dimensional construct (Narver & Slater, 1990; Cronin & Taylor, 1992; Chang & Chen, 1998). In this study, two arbitrary groups were formed to evaluate the reliability of the measurements, and reliability analyses were conducted on each group. For convergent validity, correlations, Cronbach's alpha, and one-factor exploratory factor analysis were used. For discriminant validity, the correlation of the consumer-brand relationship was compared with that of involvement, a concept similar to the consumer-brand relationship; dependent-correlation tests following Cohen and Cohen (1975, p.35) showed that involvement is a construct distinct from the six sub-dimensions of the consumer-brand relationship.
Through the results of the studies mentioned above, we were able to conclude that the sub-dimensions of the consumer-brand relationship can be viewed as a one-dimensional construct, and that this one-dimensional construct can be measured with reliability and validity. The result of this research is theoretically meaningful in that it treats the consumer-brand relationship as a one-dimensional construct and provides a basis for the methodologies previously performed. This research also opens the possibility of new research on the consumer-brand relationship, since it establishes that a one-dimensional construct consisting of the consumer-brand relationship can be operationalized. Previous research classified the consumer-brand relationship into several types on the basis of its components, and a number of studies were performed with priority given to those types. However, as a one-dimensional construct can be operationalized through this research, it is expected that future studies will make practical use of the level or strength of the consumer-brand relationship, rather than focusing on separate types. Additionally, we now have a theoretical basis for operationalizing the consumer-brand relationship as a one-dimensional construct, and it is anticipated that studies will use this construct as a dependent variable, parameter, mediator, and so on.
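Cronbach's alpha, used here for reliability and convergent validity, is computed from the item variances and the variance of the summed scale: alpha = k/(k-1) * (1 - sum(item variances) / variance of total score). A minimal sketch:

```python
def cronbach_alpha(items):
    """items: list of k per-item score lists, each over the same respondents
    in the same order. Uses sample (n-1) variances throughout."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))
```

Two perfectly parallel items give alpha = 1; noisier, less correlated items pull alpha below 1.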


Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • In addition to affecting stakeholders such as managers, employees, creditors, and investors of bankrupt companies, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models; as a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even after that, analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it focused only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, in which everything collapses in a single moment. The key variables used in predicting corporate defaults vary over time: comparing Beaver's (1967, 1968) and Altman's (1968) analyses with Deakin's (1972) study shows that the major factors affecting corporate failure have changed. In Grice's (2001) study, shifts in the importance of predictive variables were likewise found for Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models; most of them do not consider changes that occur over time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for time-dependent bias by means of a time series analysis algorithm that reflects dynamic change. Based on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years, respectively.
In order to construct a consistent bankruptcy model across the flow of time, we first train a time series deep learning model using the data before the financial crisis (2000~2006). Parameter tuning of the existing models and of the deep learning time series algorithm is conducted with validation data covering the financial crisis period (2007~2008). As a result, we construct a model that shows a pattern similar to the results on the training data and exhibits excellent predictive power. After that, each bankruptcy prediction model is retrained on the combined training and validation data (2000~2008), applying the optimal parameters found during validation. Finally, each corporate default prediction model is evaluated and compared using the test data (2009), based on the models trained over nine years, and the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression to the existing variable-selection methods (multiple discriminant analysis, logit model), it is shown that the deep learning time series model based on the three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and the deep learning time series algorithms are compared. In the case of corporate data, there are limitations of nonlinear variables, multi-collinearity among variables, and lack of data.
While the logit model handles nonlinearity, the Lasso regression model mitigates the multi-collinearity problem, and the deep learning time series algorithm, combined with a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis to automated AI analysis, and finally towards intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis at corporate default prediction modeling and is also more effective in predictive power. Through the Fourth Industrial Revolution, the current government and governments overseas are working to integrate such systems into the everyday life of their nations and societies, yet research on deep learning time series methods for the financial industry is still insufficient. This is an initial study on deep learning time series analysis of corporate defaults, and it is hoped that it will serve as comparative material for non-specialists who begin studies combining financial data and deep learning time series algorithms.
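The 7/2/1-year scheme described above (train on 2000~2006, tune on the crisis years 2007~2008, test on 2009) is a plain temporal partition. A sketch, with `records` as hypothetical `(year, features, label)` tuples (the study's actual data layout is not specified):

```python
def temporal_split(records, train_end=2006, valid_end=2008, test_end=2009):
    """Split (year, features, label) records into the train/validation/test
    scheme of the abstract: 2000-2006 / 2007-2008 / 2009. Splitting by time,
    never randomly, is what avoids look-ahead bias in default prediction."""
    train = [r for r in records if r[0] <= train_end]
    valid = [r for r in records if train_end < r[0] <= valid_end]
    test = [r for r in records if valid_end < r[0] <= test_end]
    return train, valid, test
```

After tuning on the validation years, train and validation would be merged (2000~2008) for the final fit, exactly as the abstract describes, before scoring once on 2009.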