• Title/Summary/Keyword: non-linear problem

Search Results: 677

Rice Yield Estimation Using Sentinel-2 Satellite Imagery, Rainfall and Soil Data (Sentinel-2 위성영상과 강우 및 토양자료를 활용한 벼 수량 추정)

  • KIM, Kyoung-Seop;CHOUNG, Yun-Jae;JUN, Byong-Woon
    • Journal of the Korean Association of Geographic Information Studies / v.25 no.1 / pp.133-149 / 2022
  • Existing domestic studies on estimating rice yield were mainly implemented at the city and county level across the entire nation using MODIS satellite images with low spatial resolution. Unlike previous studies, this study estimated rice yield at the eup-myon-dong level in Gimje-si, Jeollabuk-do using Sentinel-2 satellite images with medium spatial resolution together with rainfall and soil data, and then evaluated its accuracy. Five vegetation indices (NDVI, LAI, EVI2, MCARI1, and MCARI2) derived from Sentinel-2 images of August 1, 2018 for Gimje-si, along with rainfall and paddy soil-type data, were aggregated at the eup-myon-dong level, and rice yield was then estimated with a gamma generalized linear model, an extension of multivariate regression analysis that addresses the non-normality of the dependent variable. In the final rice yield model, EVI2, the number of rainfall days in September, and the saline-soil ratio were the significant independent variables. The coefficient of determination representing the model fit was 0.68 and the RMSE representing the model accuracy was 62.29 kg/10a. The model estimated the total rice production of Gimje-si in 2018 at 96,914.6 M/T, which was very close to the actual amount of 94,470.3 M/T specified in the Statistical Yearbook, with an error of 0.46%. The estimated rice production per unit area of Gimje-si was 552 kg/10a, almost identical to the 550 kg/10a in the statistical data. These results are similar to those of previous studies and demonstrate that rice yield can be estimated using Sentinel-2 satellite images at the level of cities and counties or smaller districts in Korea.
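
The gamma GLM described above can be sketched in a few lines. The snippet below is a minimal illustration, assuming a pandas DataFrame with hypothetical column names for the three significant predictors (EVI2, September rainfall days, saline-soil ratio) and the observed yield; it is not the authors' exact specification, and the log link is a common default rather than a detail taken from the paper.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical file of eup-myon-dong level aggregates (not from the paper).
df = pd.read_csv("gimje_emd_2018.csv")

# Predictors reported as significant in the abstract; column names are assumed.
X = sm.add_constant(df[["evi2", "rain_days_sep", "saline_soil_ratio"]])
y = df["yield_kg_per_10a"]  # strictly positive, right-skewed response

# The Gamma family addresses the non-normal (positive, skewed) dependent variable.
model = sm.GLM(y, X, family=sm.families.Gamma(link=sm.families.links.Log()))
print(model.fit().summary())
```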

A Study of Factors Associated with Software Developers' Job Turnover (데이터마이닝을 활용한 소프트웨어 개발인력의 업무 지속수행의도 결정요인 분석)

  • Jeon, In-Ho;Park, Sun W.;Park, Yoon-Joo
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.191-204 / 2015
  • According to the '2013 Performance Assessment Report on the Financial Program' from the National Assembly Budget Office, the unfilled recruitment ratio of software (SW) developers in South Korea was 25% in the 2012 fiscal year. Moreover, the unfilled recruitment ratio of highly qualified SW developers reaches almost 80%. This phenomenon is intensified in small and medium enterprises with fewer than 300 employees. Young job-seekers in South Korea increasingly avoid becoming SW developers, and even current SW developers want to change careers, which hinders the national development of the IT industry. The Korean government has recently recognized the problem and implemented policies to foster young SW developers. Thanks to this effort, it has become easier to find beginning-level SW developers. However, it is still hard for many IT companies to recruit highly qualified SW developers, because becoming a SW development expert requires long-term experience. Thus, improving the job continuity intentions of current SW developers is more important than fostering new ones. This study therefore surveyed the job continuity intentions of SW developers and analyzed the factors associated with them. We carried out a survey from September 2014 to October 2014 targeting 130 SW developers working in the IT industry in South Korea. We gathered the respondents' demographic information and characteristics, the work environment of the SW industry, and the social position of SW developers. Afterward, regression analysis and a decision tree method were performed on the data; these two widely used data mining techniques have explanatory power and are mutually complementary. We first performed linear regression to find the important factors associated with the job continuity intention of SW developers. The result showed that the 'expected age' up to which one can work as a SW developer was the most significant factor associated with the job continuity intention. We suppose that the major cause of this phenomenon is a structural problem of the IT industry in South Korea, which requires SW developers to move from development to management as they are promoted. The 'motivation' to become a SW developer and the 'personality (introverted tendency)' of a SW developer are also highly important factors associated with the job continuity intention. Next, the decision tree method was performed, using the well-known C4.5 algorithm, to extract the characteristics of developers with high and low continuity intentions. The results showed that 'motivation', 'personality', and 'expected age' were again important factors influencing the job continuity intention, consistent with the regression analysis. In addition, the 'ability to learn' new technology was a crucial factor in the decision rules: a person with a high ability to learn new technology tends to work as a SW developer for a longer period of time. The decision rules also showed that the 'social position' of SW developers and the 'prospects' of the SW industry were minor factors influencing the job continuity intention. On the other hand, the 'type of employment (regular/non-regular position)' and the 'type of company (ordering company/service-providing company)' did not affect the job continuity intention in either method.
In this research, we surveyed the job continuity intentions of SW developers actually working at IT companies in South Korea and analyzed the factors associated with them. These results can be used for human resource management in IT companies when recruiting or fostering highly qualified SW experts. They can also help in building SW developer fostering policies and in solving the problem of unfilled SW developer recruitment in South Korea.
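
As a rough illustration of the decision-tree step, the sketch below trains a tree on a hypothetical survey table. Note that scikit-learn implements CART rather than the C4.5 algorithm used in the study, and all column names and encodings here are assumptions.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("sw_developer_survey.csv")  # hypothetical survey data

# Factors highlighted in the abstract; names and numeric encodings are illustrative.
features = ["motivation", "introversion", "expected_age", "learning_ability"]
X = df[features]
y = df["high_continuity_intention"]  # binary label: high vs. low intention

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Extract human-readable decision rules, analogous to those discussed above.
print(export_text(tree, feature_names=features))
```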

Development of New Device to Improve Success Rate of Maze Procedure with Radiofrequency Energy (고주파에너지를 이용한 미로술식의 성적향상을 위한 새로운 기구의 개발)

  • 박남희;유양기;이재원
    • Journal of Chest Surgery / v.37 no.6 / pp.467-473 / 2004
  • Background: The sinus conversion rate after the maze procedure for chronic atrial fibrillation using radiofrequency energy is lower than with either the conventional 'cut and sew' technique or cryothermia. The creation of incomplete transmural lesions due to poor tissue-catheter contact is thought to be the main cause. To address this problem, the current study aimed to evaluate the effectiveness of a specially constructed compression device designed to enhance tissue-catheter contact during unipolar radiofrequency catheter ablation. Material and Method: Circumferential right auricular epicardial lesions were created with a linear radiofrequency catheter in 10 anesthetized pigs. A device specially designed to increase contact by compressing the catheter against the atrial wall was used in 5 pigs (study group); it was not used in the control group (5 pigs). Conduction block across the right auricular lesion was assessed by pacing, and the transmurality of the lesions was confirmed by microscopic examination. Result: Conduction block was observed in a total of 8 pigs: 5 in the study group and 3 in the control group. Transmural injury was confirmed microscopically by the accumulation of acute inflammatory cells and the loss of elastic fibers in the endocardium. In the two pigs without conduction block, the endocardium appeared normal on microscopic examination. Conclusion: Failed radiofrequency ablation is strongly related to non-transmural energy delivery. The specially constructed compression device in the current study succeeded in creating firm tissue-catheter contact and thereby generating transmural lesions during unipolar radiofrequency ablation.

Interactive analysis tools for the wide-angle seismic data for crustal structure study (Technical Report) (지각 구조 연구에서 광각 탄성파 자료를 위한 대화식 분석 방법들)

  • Fujie, Gou;Kasahara, Junzo;Murase, Kei;Mochizuki, Kimihiro;Kaneda, Yoshiyuki
    • Geophysics and Geophysical Exploration / v.11 no.1 / pp.26-33 / 2008
  • The analysis of wide-angle seismic reflection and refraction data plays an important role in lithospheric-scale crustal structure studies. However, because crustal structure analysis is an intrinsically non-linear problem, it is extremely difficult to derive an appropriate velocity structure model directly from the observed data, and the structure model must be improved step by step. Wide-angle crustal structure modelling involves several subjective processes, such as phase identification and trial-and-error forward modelling. Because these subjective processes reduce the uniqueness and credibility of the resulting models, it is important to reduce subjectivity in the analysis procedure. From this point of view, we describe two software tools, PASTEUP and MODELING, for developing crustal structure models. PASTEUP is an interactive application that facilitates plotting record sections, analysing wide-angle seismic data, and picking phases. It is equipped with various filters and analysis functions to enhance the signal-to-noise ratio and to aid phase identification. MODELING is an interactive application for editing velocity models and for ray tracing. Synthetic traveltimes computed by MODELING can be compared directly with the observed waveforms in PASTEUP. This reduces subjectivity in crustal structure modelling because traveltime picking, one of the most subjective processes in crustal structure analysis, is not required. MODELING can also convert an editable layered structure model into two-way traveltimes that can be compared with time sections of multi-channel seismic (MCS) reflection data. Direct comparison of the wide-angle structure model with the reflection data lends the model further credibility. In addition, both PASTEUP and MODELING handle large datasets efficiently. These software tools help us develop more plausible lithospheric-scale structure models from wide-angle seismic data.
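
As a toy illustration of the model-to-traveltime conversion mentioned above, the sketch below turns a 1-D layered velocity model into vertical two-way traveltimes of the kind that could be compared with an MCS time section. This is not PASTEUP/MODELING code, and the layer values are invented.

```python
def two_way_times(layers):
    """layers: list of (thickness_km, velocity_km_per_s), top to bottom.
    Returns the cumulative vertical two-way traveltime (s) at each layer base."""
    times, total = [], 0.0
    for thickness, velocity in layers:
        total += 2.0 * thickness / velocity  # down-and-back vertical leg
        times.append(total)
    return times

# Hypothetical model: water column, sediments, upper crust.
model = [(4.0, 1.5), (1.0, 2.0), (5.0, 6.0)]
print(two_way_times(model))  # ~[5.33, 6.33, 8.0] seconds
```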

Corporate Bond Rating Using Various Multiclass Support Vector Machines (다양한 다분류 SVM을 적용한 기업채권평가)

  • Ahn, Hyun-Chul;Kim, Kyoung-Jae
    • Asia Pacific Journal of Information Systems / v.19 no.2 / pp.157-178 / 2009
  • Corporate credit rating is a very important factor in the market for corporate debt. Information concerning corporate operations is often disseminated to market participants through changes in the credit ratings published by professional rating agencies, such as Standard and Poor's (S&P) and Moody's Investor Service. Since these agencies generally charge a large fee for the service, and the periodically provided ratings sometimes do not reflect the default risk of the company at the time, it may be advantageous for bond-market participants to be able to classify credit ratings before the agencies actually publish them. As a result, it is very important for companies (especially financial companies) to develop a proper credit rating model. From a technical perspective, credit rating constitutes a typical multiclass classification problem, because rating agencies generally have ten or more rating categories; for example, S&P's ratings range from AAA for the highest-quality bonds to D for the lowest-quality bonds. The professional rating agencies emphasize the importance of analysts' subjective judgment in determining credit ratings. In practice, however, a mathematical model using companies' financial variables plays an important role in determining credit ratings, since it is convenient to apply and cost-efficient. These financial variables include ratios representing a company's leverage, liquidity, and profitability. Several statistical and artificial intelligence (AI) techniques have been applied as tools for predicting credit ratings. Among them, artificial neural networks are the most prevalent in finance because of their broad applicability to many business problems and their preeminent ability to adapt. However, artificial neural networks also have many shortcomings, including the difficulty of determining the values of the control parameters and the number of processing elements in each layer, as well as the risk of over-fitting. More recently, because of their robustness and high accuracy, support vector machines (SVMs) have become popular for problems requiring accurate prediction. An SVM's solution may be globally optimal because SVMs seek to minimize structural risk, whereas artificial neural network models tend to find locally optimal solutions because they seek to minimize empirical risk. In addition, no parameters need to be tuned in SVMs, barring the upper bound for non-separable cases in linear SVMs. Since SVMs were originally devised for binary classification, however, they are not intrinsically geared to multiclass classification tasks such as credit rating. Thus, researchers have tried to extend the original SVM to multiclass classification, and a variety of techniques for extending standard SVMs to multiclass SVMs (MSVMs) has been proposed in the literature. However, only a few types of MSVM have been tested in prior studies applying MSVMs to credit rating. In this study, we examined six different MSVM techniques: (1) One-Against-One, (2) One-Against-All, (3) DAGSVM, (4) ECOC, (5) the method of Weston and Watkins, and (6) the method of Crammer and Singer. In addition, we examined the prediction accuracy of some modified versions of conventional MSVM techniques. To find the most appropriate MSVM technique for corporate bond rating, we applied all of these techniques to a real-world case of credit rating in Korea.
The best application is corporate bond rating, which is the most frequently studied area of credit rating for specific debt issues or other financial obligations. For our study, the research data were collected from National Information and Credit Evaluation, Inc., a major bond-rating company in Korea. The data set consists of the bond ratings for the year 2002 and various financial variables for 1,295 companies from the manufacturing industry in Korea. We compared the results of these techniques with one another and with those of traditional methods for credit rating, such as multiple discriminant analysis (MDA), multinomial logistic regression (MLOGIT), and artificial neural networks (ANNs). We found that DAGSVM with an ordered list was the best approach for predicting bond ratings, and that the modified version of the ECOC approach can yield higher prediction accuracy for cases showing clear patterns.
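
Two of the decomposition schemes named above (one-against-one and one-against-all) are available off the shelf in scikit-learn. The sketch below compares them on synthetic data standing in for the proprietary bond-rating set; it reproduces the general approach, not the paper's experiments.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

# Synthetic stand-in for financial ratios (features) and rating grades (labels).
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One-against-one trains a binary SVM per class pair; one-against-all trains
# one binary SVM per class against the rest.
for name, clf in [("one-against-one", OneVsOneClassifier(SVC())),
                  ("one-against-all", OneVsRestClassifier(SVC()))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", round(clf.score(X_te, y_te), 3))
```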

A Servicism Model of the New Legal System (서비스주의 법제도 구조와 운용 연구)

  • Hyunsoo Kim
    • Journal of Service Research and Studies / v.11 no.4 / pp.1-20 / 2021
  • This study was conducted to derive a model of the legal system that underpins the realization of the service economy, political administration, and social education systems. Drawing on humanity's roughly 5,000 years of experience operating legal systems, it establishes a legal system model intended to make future human society sustainable. The problems of the current legal system were analyzed at a fundamental level, the root causes of injustice and unfairness were identified, and a new legal system was designed. Through the legal systems of the various national societies attempted throughout human history, a legal system structure desirable for modern society was designed. Human society, having experienced how even good legal systems have been and continue to be abused by human irrationality, needs to change the legal-system paradigm itself by learning from these failures. This study derives the basis for a legal system that can realize justice and a fair society in the long term, and proposes a model for improving the legal system so that human society can remain content over the long run. To this end, the fundamental role of the legal system was analyzed at the ideological level and the problems of the current legal system were presented. In addition, the problem of fundamental assumptions about human nature was analyzed and improved assumptions were presented. The structure of the current legal system was analyzed, a new structure was proposed, and a plan for operating a new legal system based on that structure was suggested. The new legal system is named the servicism system, because it is not a simple linear one-dimensional legal system but a multidimensional one centered on thorough checks and balances among all opposing parties, and because it clearly recognizes both human reason and human desire. The new system reflects the confrontation between the rule of law and non-legal rule, and between those in power and the general public. A follow-up study is needed on a concrete plan for transitioning from the current legal system to the new one.

Multi-Variate Tabular Data Processing and Visualization Scheme for Machine Learning based Analysis: A Case Study using Titanic Dataset (기계 학습 기반 분석을 위한 다변량 정형 데이터 처리 및 시각화 방법: Titanic 데이터셋 적용 사례 연구)

  • Juhyoung Sung;Kiwon Kwon;Kyoungwon Park;Byoungchul Song
    • Journal of Internet Computing and Services / v.25 no.4 / pp.121-130 / 2024
  • As information and communication technology (ICT) improves exponentially, the types and amount of available data also increase. Although data analysis, including statistics, is essential to exploiting this large amount of data, there are inherent limits to processing diverse and complex data in conventional ways. Meanwhile, machine learning (ML) is being applied in various fields, driven by improvements in computational performance and growing demand for autonomous systems. In particular, processing the data for model input and designing the model for the objective function are critical to achieving good model performance. Many studies have presented data processing methods according to data type and properties, and ML performance varies greatly depending on the method chosen. Nevertheless, deciding which data processing method to use is difficult because the types and characteristics of data have become more diverse. Specifically, multivariate data processing is essential for solving non-linear problems with ML. In this paper, we present a multivariate tabular data processing scheme for ML-aided data analysis, using the Titanic dataset from Kaggle, which includes various kinds of data. We present methods such as input-variable filtering based on statistical analysis and normalization according to data properties. In addition, we analyze the data structure using visualization. Lastly, we design an ML model and train it by applying the proposed multivariate data processing. We then analyze the trained model's performance in predicting passenger survival. We expect that the proposed multivariate data processing and visualization can be extended to various environments for ML-based analysis.
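
The sketch below illustrates the kind of pipeline described in the abstract, using the public Kaggle Titanic schema. The filtering, encoding, and normalization choices are illustrative, and a random forest stands in for the unspecified ML model; none of this is the authors' exact procedure.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("train.csv")  # Kaggle Titanic training split

# Input-variable filtering and simple missing-value handling.
features = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare"]
df = df.dropna(subset=["Age", "Fare"])
df["Sex"] = (df["Sex"] == "female").astype(int)  # encode the categorical input

# Normalize numeric inputs so differing scales do not dominate training.
X = StandardScaler().fit_transform(df[features])
y = df["Survived"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("survival prediction accuracy:", model.score(X_te, y_te))
```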