• Title/Summary/Keyword: learning-based (학습기반)

Water resources monitoring technique using multi-source satellite image data fusion (다종 위성영상 자료 융합 기반 수자원 모니터링 기술 개발)

  • Lee, Seulchan;Kim, Wanyub;Cho, Seongkeun;Jeon, Hyunho;Choi, Minhae
    • Journal of Korea Water Resources Association / v.56 no.8 / pp.497-508 / 2023
  • Agricultural reservoirs are crucial structures for water resources monitoring, especially in Korea, where water resources are seasonally unevenly distributed. Optical and Synthetic Aperture Radar (SAR) satellites, both used as tools for monitoring the reservoirs, have distinct limitations: optical sensors are sensitive to weather conditions, while SAR sensors are sensitive to noise and multiple scattering over dense vegetation. In this study, we tried to improve water body detection accuracy through optical-SAR data fusion and to quantitatively analyze the complementary effects. We first detected water bodies at the Edong and Cheontae reservoirs by applying K-means clustering to the Normalized Difference Water Index (NDWI) derived from the Compact Advanced Satellite 500 (CAS500), Kompsat-3/3A, and Sentinel-2, and to the SAR backscattering coefficient from Sentinel-1. After that, the improvements in accuracy were analyzed by applying K-means clustering to the 2-D grid space consisting of NDWI and SAR values. Kompsat-3/3A was found to have the best accuracy (0.98 at both reservoirs), followed by Sentinel-2 (0.83 at Edong, 0.97 at Cheontae), Sentinel-1 (0.93 at both), and CAS500 (0.69 and 0.78, respectively). By applying K-means clustering to the 2-D space at Cheontae reservoir, the accuracy of CAS500 was improved by around 22% (resulting accuracy: 0.95), with an 85% improvement in precision and a 14% degradation in recall. The precision of Kompsat-3A (Sentinel-2) was improved by 3% (5%), while recall was degraded by 4% (7%). More precise water resources monitoring is expected to become possible with the development of high-resolution SAR satellites, including CAS500-5, and of image fusion and water body detection techniques.
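The fusion step above is ordinary K-means clustering applied to a joint 2-D feature space rather than to NDWI or the SAR backscatter alone. A minimal sketch, assuming two clusters and illustrative array names (not the authors' code):

```python
# K-means water body detection on the joint (NDWI, SAR backscatter) space.
import numpy as np
from sklearn.cluster import KMeans

def detect_water(ndwi: np.ndarray, sigma0_db: np.ndarray) -> np.ndarray:
    """Return a boolean water mask for co-registered NDWI and SAR arrays."""
    features = np.column_stack([ndwi.ravel(), sigma0_db.ravel()])
    # Standardize so both bands contribute comparably to the distance metric.
    features = (features - features.mean(axis=0)) / features.std(axis=0)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
    # Heuristic: the cluster with the higher mean NDWI is taken as water.
    water = np.argmax([ndwi.ravel()[labels == k].mean() for k in (0, 1)])
    return (labels == water).reshape(ndwi.shape)
```

Clustering the two bands jointly lets a pixel that is ambiguous in one band (e.g., vegetation-corrupted SAR) be resolved by the other, which is the complementary effect the paper quantifies.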

Derivation of Inherent Optical Properties Based on Deep Neural Network (심층신경망 기반의 해수 고유광특성 도출)

  • Hyeong-Tak Lee;Hey-Min Choi;Min-Kyu Kim;Suk Yoon;Kwang-Seok Kim;Jeong-Eon Moon;Hee-Jeong Han;Young-Je Park
    • Korean Journal of Remote Sensing / v.39 no.5_1 / pp.695-713 / 2023
  • In coastal waters, phytoplankton, suspended particulate matter, and dissolved organic matter intricately and nonlinearly alter the reflectance of seawater. Neural network technology, which has been advancing rapidly in recent years, offers the advantage of effectively representing complex nonlinear relationships. Previous studies constructed a three-stage neural network to extract the inherent optical properties of each component; this study instead proposes an algorithm that directly employs a single deep neural network. The dataset consists of synthetic data provided by the International Ocean Colour Coordinating Group, with the input data comprising above-surface remote-sensing reflectance at nine wavelengths. We derived inherent optical properties from this dataset using a deep neural network. To evaluate performance, we compared it with a quasi-analytical algorithm and analyzed how log transformation of the data affects the performance of the deep neural network algorithm in relation to the data distribution. As a result, we found that the deep neural network algorithm accurately estimated the inherent optical properties (R² ≥ 0.9), except for the absorption coefficient of suspended particulate matter, and successfully separated the combined absorption coefficient of suspended particulate matter and dissolved organic matter into its two components. We also observed that the algorithm showed little difference in performance when applied directly, without log transformation of the data. To apply the findings of this study to ocean color data processing effectively, further research is needed to train on field data and additional datasets from various marine regions, to compare and analyze empirical and semi-analytical methods, and to assess the strengths and weaknesses of each algorithm appropriately.
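As a rough illustration of the direct approach described above, the network is a plain multilayer regression from the nine-band reflectance to the inherent optical properties. The sketch below assumes layer sizes, activations, and four output IOPs for illustration; the paper's exact architecture is not reproduced:

```python
# Deep neural network mapping 9-band remote-sensing reflectance to IOPs.
import numpy as np
from tensorflow import keras

n_bands, n_iops = 9, 4  # nine Rrs wavelengths -> e.g., a_ph, a_p, a_g, b_bp

model = keras.Sequential([
    keras.layers.Input(shape=(n_bands,)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(n_iops),  # linear regression head
])
model.compile(optimizer="adam", loss="mse")

# Optional log transform of the strictly positive targets; the paper reports
# little performance difference when the transform is omitted.
def to_log(y):   return np.log10(y)
def from_log(z): return 10.0 ** z
```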

Improvement of Mid-Wave Infrared Image Visibility Using Edge Information of KOMPSAT-3A Panchromatic Image (KOMPSAT-3A 전정색 영상의 윤곽 정보를 이용한 중적외선 영상 시인성 개선)

  • Jinmin Lee;Taeheon Kim;Hanul Kim;Hongtak Lee;Youkyung Han
    • Korean Journal of Remote Sensing / v.39 no.6_1 / pp.1283-1297 / 2023
  • Mid-wave infrared (MWIR) imagery, with its ability to capture the temperature of land cover and objects, serves as a crucial data source in various fields, including environmental monitoring and defense. The KOMPSAT-3A satellite acquires MWIR imagery with high spatial resolution compared to other satellites. However, the limited spatial resolution of MWIR imagery in comparison to electro-optical (EO) imagery constrains the optimal utilization of KOMPSAT-3A data. This study aims to create a highly visible MWIR fusion image by leveraging the edge information of the KOMPSAT-3A panchromatic (PAN) image. Preprocessing is implemented to mitigate the relative geometric errors between the PAN and MWIR images. Subsequently, we employ a pre-trained pixel difference network (PiDiNet), a deep learning-based edge extraction technique, to extract the boundaries of objects from the preprocessed PAN images. The MWIR fusion imagery is then generated by emphasizing the brightness values corresponding to the edge information of the PAN image. To evaluate the proposed method, MWIR fusion images were generated at three different sites. As a result, the boundaries of terrain and objects in the MWIR fusion images were emphasized, providing detailed thermal information for the areas of interest. In particular, the MWIR fusion images provided thermal information for objects such as airplanes and ships that are hard to detect in the original MWIR images. This study demonstrates that the proposed method can generate a single image combining the visible details of an EO image with the thermal information of an MWIR image, which contributes to increasing the usability of MWIR imagery.
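The fusion rule itself is simple: brightness in the co-registered MWIR image is boosted where the PAN edge response is strong. A minimal sketch, where `edge_map` stands in for a PiDiNet output scaled to [0, 1] and the gain `alpha` is an illustrative assumption:

```python
# Emphasize MWIR brightness along PAN-derived edges.
import numpy as np

def fuse_mwir_with_edges(mwir: np.ndarray, edge_map: np.ndarray,
                         alpha: float = 0.5) -> np.ndarray:
    """Overlay PAN edge information onto an 8-bit MWIR visualization."""
    mwir_n = (mwir - mwir.min()) / (mwir.max() - mwir.min())  # scale to [0, 1]
    fused = np.clip(mwir_n + alpha * edge_map, 0.0, 1.0)      # boost edge pixels
    return (fused * 255).astype(np.uint8)
```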

An Exploratory Study on the Success Factors of Silicon Valley Platform Business Ecosystem: Focusing on IPA Analysis and Qualitative Analysis (실리콘밸리 플랫폼 기업생태계의 성공요인에 관한 탐색적 연구: IPA 분석과 질적 분석을 중심으로)

  • Jung, Yeonsung;Lee, Seong Ho
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.18 no.1 / pp.203-223 / 2023
  • Recently, the platform industry has been growing rapidly in the global market, and competition is intensifying at the same time. For domestic platform companies to be globally competitive in the platform market, research on the platform business ecosystem and its success factors is therefore necessary. However, most recent platform-related studies have been theoretical, covering analyses of the platform business landscape, the platform economy, and the indirect network externalities of platforms. This study therefore comprehensively analyzed the success factors of Silicon Valley's business ecosystem proposed in previous studies and, at the same time, analyzed success factors elicited from stakeholders in the actual Silicon Valley platform business ecosystem. Based on these factors, an Importance-Performance Analysis (IPA) was conducted as a way to propose a success plan to stakeholders in the platform business ecosystem. Among the success factors collected through previous studies, manpower, capital, and a culture of challenge were identified as factors that are relatively well maintained in Silicon Valley in terms of both importance and satisfaction. Ultimately, the creation of an environment and culture in which companies can take on challenges, backed by excellent human resources and abundant capital, contributes the most to the success of Silicon Valley's platform business. On the other hand, the factors of high importance to Silicon Valley's platform corporate ecosystem that show relatively low satisfaction among stakeholders are 'learning and benchmarking among active companies' and 'strong ties and cooperation between members'; attention and effort are needed to strengthen these factors in the future. Finally, the systems and policies necessary for autonomous market competition, the 'business support service industry', 'name value', and 'spin-off start-ups' were important factors in the literature, but the importance and satisfaction of these factors have declined with changes in the times and the environment. This study has academic implications in that it comprehensively analyzes the success factors of Silicon Valley's business ecosystem proposed in previous studies while also analyzing success factors elicited from stakeholders in the actual Silicon Valley platform business ecosystem, and in that importance and satisfaction were examined simultaneously through IPA based on these extracted factors. As for practical implications, the study contributes to the formation of the domestic platform ecosystem by providing the government and companies with concrete information on the success factors of the platform business ecosystem and theoretical grounds for the growth of domestic platform businesses.
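For reference, the IPA step reduces to placing each factor in one of four quadrants by comparing its mean importance and satisfaction scores against the grand means. A minimal sketch with the standard quadrant labels (the factor examples in the comments echo the findings above; no scores from the paper are reproduced):

```python
# Importance-Performance Analysis quadrant assignment.
def ipa_quadrant(importance: float, satisfaction: float,
                 imp_mean: float, sat_mean: float) -> str:
    if importance >= imp_mean and satisfaction >= sat_mean:
        return "Keep up the good work"   # e.g., manpower, capital, challenge culture
    if importance >= imp_mean:
        return "Concentrate here"        # e.g., inter-firm learning, strong ties
    if satisfaction >= sat_mean:
        return "Possible overkill"
    return "Low priority"                # e.g., factors dated by environmental change
```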

From a Defecation Alert System to a Smart Bottle: Understanding Lean Startup Methodology from the Case of Startup "L" (배변알리미에서 스마트바틀 출시까지: 스타트업 L사 사례로 본 린 스타트업 실천방안)

  • Sunkyung Park;Ju-Young Park
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.18 no.5 / pp.91-107 / 2023
  • Lean startup is a concept that combines "lean," meaning an efficient way of running a business, and "startup," meaning a new business. It is often cited as a strategy for minimizing failure in early-stage businesses, especially software-based startups. By scrutinizing the case of startup L, this study suggests that the lean startup methodology (LSM) can also be useful for hardware and manufacturing companies, and identifies ways for early-stage startups to implement LSM successfully. To this end, the study explains the core of LSM, including the hypothesis-driven approach, the build-measure-learn (BML) feedback loop, the minimum viable product (MVP), and the pivot. Five criteria for evaluating the successful implementation of LSM were derived from these core concepts and applied to the case of startup L. The early startup L pivoted its main business model from a defecation alert system for patients with limited mobility, to one for infants and toddlers, and finally to a smart bottle for infants. Analyzed from LSM's perspective, in developing the former two products company L neither established a specific customer value proposition for its startup idea nor verified it through MVP experiments, and thus failed to create a BML feedback loop. However, through two rounds of pivots, startup L discovered new target customers and customer needs, and was able to establish a successful business model by repeatedly experimenting with MVPs with minimal effort and time. In other words, company L's case shows that it is essential to go through the customer-market validation stage at the beginning of the business, and that this should be done through an MVP approach that does not waste the startup's time and resources. It also shows that it is necessary to abandon and pivot away from a product or service that customers do not want, even if it is technically superior and functionally complete. Lastly, the study shows that the lean startup methodology is not limited to the software industry but can also be applied to the technology-based hardware industry. The findings can be used as guidelines and methodologies for early-stage companies to minimize failure and to accelerate the process of establishing a business model, scaling up, and going global.

Construction of a Standard Dataset for Liver Tumors for Testing the Performance and Safety of Artificial Intelligence-Based Clinical Decision Support Systems (인공지능 기반 임상의학 결정 지원 시스템 의료기기의 성능 및 안전성 검증을 위한 간 종양 표준 데이터셋 구축)

  • Seung-seob Kim;Dong Ho Lee;Min Woo Lee;So Yeon Kim;Jaeseung Shin;Jin-Young Choi;Byoung Wook Choi
    • Journal of the Korean Society of Radiology / v.82 no.5 / pp.1196-1206 / 2021
  • Purpose: To construct a standard dataset of contrast-enhanced CT images of liver tumors for testing the performance and safety of artificial intelligence (AI)-based algorithms for clinical decision support systems (CDSSs). Materials and Methods: A consensus group of medical experts in gastrointestinal radiology from four national tertiary institutions discussed the conditions to be included in a standard dataset. Seventy-five cases of hepatocellular carcinoma, 75 cases of metastasis, and 30-50 cases of benign lesions were retrieved from each institution, and the final dataset consisted of 300 cases of hepatocellular carcinoma, 300 cases of metastasis, and 183 cases of benign lesions. Only pathologically confirmed cases of hepatocellular carcinoma and metastasis were enrolled. The medical experts reviewed the medical records of the patients and manually labeled the CT images, which were saved as Digital Imaging and Communications in Medicine (DICOM) files. Results: The medical experts in gastrointestinal radiology constructed a standard dataset of contrast-enhanced CT images for 783 cases of liver tumors. The performance and safety of an AI algorithm can be evaluated by calculating its sensitivity and specificity for detecting and characterizing the lesions. Conclusion: The constructed standard dataset can be utilized for evaluating machine-learning-based AI algorithms for CDSSs.
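The evaluation this dataset enables is a standard confusion-matrix computation. A minimal sketch, assuming per-lesion boolean detection labels (illustrative inputs, not part of the dataset itself):

```python
# Sensitivity and specificity of a candidate CDSS against expert labels.
import numpy as np

def sensitivity_specificity(y_true: np.ndarray, y_pred: np.ndarray):
    tp = np.sum( y_true &  y_pred)   # lesion present, detected
    fn = np.sum( y_true & ~y_pred)   # lesion present, missed
    tn = np.sum(~y_true & ~y_pred)   # lesion absent, correctly rejected
    fp = np.sum(~y_true &  y_pred)   # lesion absent, false alarm
    return tp / (tp + fn), tn / (tn + fp)
```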

Analysis of Success Cases of InsurTech and Digital Insurance Platform Based on Artificial Intelligence Technologies: Focused on Ping An Insurance Group Ltd. in China (인공지능 기술 기반 인슈어테크와 디지털보험플랫폼 성공사례 분석: 중국 평안보험그룹을 중심으로)

  • Lee, JaeWon;Oh, SangJin
    • Journal of Intelligence and Information Systems / v.26 no.3 / pp.71-90 / 2020
  • Recently, the global insurance industry has been rapidly pursuing digital transformation through artificial intelligence technologies such as machine learning, natural language processing, and deep learning. As a result, more and more foreign insurers have succeeded with AI technology-based InsurTech and platform businesses, and Ping An Insurance Group Ltd., China's largest private company, is leading China's fourth industrial revolution with remarkable achievements in InsurTech and digital platforms as a result of constant innovation, with 'finance and technology' and 'finance and ecosystem' as its corporate keywords. This study analyzed the InsurTech and platform business activities of Ping An Insurance Group Ltd. through the ser-M analysis model to provide strategic implications for revitalizing the AI technology-based businesses of domestic insurers. The ser-M model is a frame through which the vision and leadership of the CEO, the historical environment of the enterprise, the utilization of various resources, and their unique mechanism relationships can be interpreted in an integrated manner in terms of subject, environment, resource, and mechanism. The case analysis shows that Ping An Insurance Group Ltd. has achieved cost reduction and customer service development by digitally innovating its entire business, including sales, underwriting, claims, and loan services, using core AI technologies such as face, voice, and facial expression recognition. In addition, online data in China and the vast offline data and insights accumulated by the company were combined with new technologies such as AI and big data analysis to build a digital platform that integrates financial services and digital service businesses. Ping An Insurance Group Ltd. pursued constant innovation; as of 2019, its sales reached $155 billion, ranking seventh among all companies in the Global 2000 list selected by Forbes magazine. Analyzing the background of this success from the ser-M perspective, founder Ma Mingzhe quickly grasped the development of digital technology, market competition, and changes in population structure in the era of the fourth industrial revolution, established a new vision, and displayed agile, digital technology-focused leadership. Based on this strong leadership in response to environmental changes, the company successfully led InsurTech and platform businesses through innovation of internal resources, such as investment in AI technology, securing excellent professionals, and strengthening big data capabilities, combined with external absorptive capacity and strategic alliances across various industries. This success story offers the following implications for domestic insurance companies preparing for digital transformation. First, CEOs of domestic companies also need to recognize the paradigm shift in the industry driven by digital technology and quickly arm themselves with digital technology-oriented leadership to spearhead the digital transformation of their enterprises. Second, the Korean government should urgently overhaul related laws and systems to further promote the use of data across industries and provide drastic support, such as deregulation, tax benefits, and platform provision, to help the domestic insurance industry secure global competitiveness. Third, Korean companies also need to invest more boldly in AI technology development so that the systematic acquisition of internal and external data, the training of technical personnel, and patent applications can be expanded, and they should quickly establish digital platforms so that diverse customer experiences can be integrated through trained AI technology. Finally, since generalization from a single case of an overseas insurer has its limits, we hope that future research will examine various management strategies related to AI technology by analyzing cases from multiple industries or companies, or through empirical research.

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.25-38 / 2019
  • As content generation continues to grow, selecting high-quality information that meets users' interests and needs from the overflowing contents is becoming more important. In this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating an information request as a simple string, and large IT companies such as Google and Microsoft focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance, in particular, is one of the fields expected to benefit from text data analysis because it constantly generates new information, and the earlier that information is obtained, the more valuable it is. Automatic knowledge extraction can be effective in areas such as finance, where the information flow is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is difficult to build corpora from different fields with the same algorithm and to extract good-quality triples. Second, producing labeled text data manually becomes harder as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome the limits described above and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates their quality. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the model's effectiveness. This gives the study three significances. First, it presents a practical and simple automatic knowledge extraction method. Second, it demonstrates the possibility of performance evaluation through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. The empirical study uses experts' reports about 30 individual stocks, the top 30 items by frequency of publication from May 30, 2017 to May 21, 2018. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. After that, one score function per stock is trained using the neural tensor network. Thus, when a new entity from the testing set appears, its score can be calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the presented model, we measure its predictive power and determine whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set. As a result of the empirical study, the presented model shows 69.3% hit accuracy on the testing set of 2,526 reports, which is meaningfully high despite some constraints on the research. Looking at the prediction performance for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show performance far below average; this may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, needed to search related information in accordance with the user's investment intention. Graph data is generated using only the named entity recognition tool and applied to the neural tensor network, without learning a corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, limits remain: most notably, the especially poor performance on a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.
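For orientation, one per-stock score function of a neural tensor network combines a bilinear tensor term with a standard linear layer, score = u·tanh(eᵀW s + V[e; s] + b). A minimal PyTorch sketch; the learned stock vector, the dimensions, and the slice count k are illustrative assumptions rather than the paper's exact setup:

```python
# One neural tensor network score function (one instance per stock).
import torch
import torch.nn as nn

class NTNScore(nn.Module):
    def __init__(self, dim: int = 100, k: int = 4):
        super().__init__()
        self.stock = nn.Parameter(torch.randn(dim))      # learned stock vector
        self.W = nn.Parameter(torch.randn(k, dim, dim))  # bilinear tensor slices
        self.V = nn.Linear(2 * dim, k)                   # standard linear term
        self.u = nn.Linear(k, 1, bias=False)

    def forward(self, e: torch.Tensor) -> torch.Tensor:  # e: (dim,) one-hot entity
        bilinear = torch.einsum("d,kde,e->k", e, self.W, self.stock)
        return self.u(torch.tanh(bilinear + self.V(torch.cat([e, self.stock]))))
```

At test time, a new entity vector is scored by all 30 per-stock functions and assigned to the stock whose function returns the highest score; the hit ratio reported above counts how often this assignment matches the report's actual stock.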

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.141-154 / 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representations in a variety of applications. Sentiment analysis is an important technology that can distinguish low- from high-quality content through the text data of products, and it has proliferated with the growth of text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative, and has been studied in various directions in terms of accuracy, from simple rule-based to dictionary-based approaches using predefined labels. Sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Real online reviews are not only easy to collect openly but also affect business: in marketing, real-world information from customers is gathered from websites rather than surveys, and depending on whether a website's posts are positive or negative, the customer response is reflected in sales. However, many reviews on a website are not always good and are difficult to identify. Earlier studies in this research area used review data from the Amazon.com shopping mall, while more recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, and so on. However, a lack of accuracy is recognized because sentiment calculations change according to the subject, paragraph, sentiment lexicon direction, and sentence strength. This study aims to classify sentiment polarity into positive and negative categories and to increase the prediction accuracy of polarity analysis using the IMDB review dataset. First, as text classification algorithms for sentiment analysis, popular machine learning algorithms such as NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting are adopted as comparative models. Second, deep learning can extract complex, discriminative features from data; representative algorithms are CNNs (convolutional neural networks), RNNs (recurrent neural networks), and LSTM (long short-term memory). A CNN can be used similarly to bag-of-words (BoW) when processing a sentence in vector format, but does not consider the sequential attributes of the data. An RNN handles ordered data well because it takes the temporal information of the data into account, but suffers from the long-term dependency problem; LSTM is used to solve this problem. For comparison, CNN and LSTM were chosen as simple deep learning models, and in addition to the classical machine learning algorithms, CNN, LSTM, and the integrated model were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and precision to find the optimal combination, and tried to figure out how well the models work for sentiment analysis and how they work. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features in text analysis. The reasons for combining these two algorithms are as follows. A CNN can extract features for classification automatically by applying convolution layers and massively parallel processing, whereas LSTM is not capable of highly parallel processing. Like faucets, the LSTM has input, output, and forget gates that can be opened and closed at the desired time; these gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can compensate for the CNN's inability to model long-term dependencies. Furthermore, when LSTM is applied after the CNN's pooling layer, the model has an end-to-end structure, so that spatial and temporal features can be modeled simultaneously. The combined CNN-LSTM achieved 90.33% accuracy, slower than CNN alone but faster than LSTM alone, and more accurate than the other models. In addition, each word embedding layer can be improved when training the kernel step by step. CNN-LSTM can compensate for the weaknesses of each model, and has the advantage of improving learning layer by layer using the end-to-end structure. For these reasons, this study tries to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
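The integrated architecture described above stacks a 1-D convolution (local n-gram features), pooling (sequence shortening), and an LSTM (remaining temporal structure) before a sigmoid output. A minimal Keras sketch; vocabulary size, dimensions, and filter counts are illustrative assumptions, not the paper's tuned values:

```python
# Integrated CNN-LSTM sentiment classifier for movie reviews.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Embedding(input_dim=20000, output_dim=128),    # word embeddings
    layers.Conv1D(64, kernel_size=5, activation="relu"),  # local feature maps
    layers.MaxPooling1D(pool_size=4),                     # shorten the sequence
    layers.LSTM(64),                                      # long-range dependencies
    layers.Dense(1, activation="sigmoid"),                # positive / negative
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```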

A Study of Factors Associated with Software Developers' Job Turnover (데이터마이닝을 활용한 소프트웨어 개발인력의 업무 지속수행의도 결정요인 분석)

  • Jeon, In-Ho;Park, Sun W.;Park, Yoon-Joo
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.191-204 / 2015
  • According to the '2013 Performance Assessment Report on the Financial Program' from the National Assembly Budget Office, the unfilled recruitment ratio for software (SW) developers in South Korea was 25% in the 2012 fiscal year, and for highly qualified SW developers it reached almost 80%. This phenomenon is intensified in small and medium enterprises with fewer than 300 employees. Young job-seekers in South Korea increasingly avoid becoming SW developers, and even current SW developers want to change careers, which hinders the national development of the IT industry. The Korean government has recently recognized the problem and implemented policies to foster young SW developers. Thanks to this effort, it has become easier to find young beginning-level SW developers; however, it is still hard for many IT companies to recruit highly qualified SW developers, because becoming a SW development expert requires long-term experience. Thus, improving the job continuity intentions of current SW developers is more important than fostering new ones. This study therefore surveyed the job continuity intentions of SW developers and analyzed the factors associated with them. We carried out a survey from September 2014 to October 2014 targeting 130 SW developers working in the IT industry in South Korea, gathering the respondents' demographic information and characteristics, the work environment of the SW industry, and the social position of SW developers. Afterward, a regression analysis and a decision tree method were performed on the data; these two widely used data mining techniques have explanatory power and are mutually complementary. We first performed a linear regression to find the important factors associated with the job continuity intention of SW developers. The result showed that the 'expected age' up to which one can work as a SW developer was the factor most significantly associated with the job continuity intention. We suppose the major cause of this phenomenon is the structural problem of the IT industry in South Korea, which requires SW developers to move from development to management as they are promoted. The 'motivation' for becoming a SW developer and the 'personality (introverted tendency)' of a SW developer are also highly important factors associated with the job continuity intention. Next, the decision tree method was performed to extract the characteristics of developers with high and low continuity intentions, using the well-known C4.5 algorithm. The results showed that 'motivation', 'personality', and 'expected age' were again important factors influencing the job continuity intention, similar to the results of the regression analysis. In addition, the 'ability to learn' new technology was a crucial factor in the decision rules for job continuity: a person with a high ability to learn new technology tends to work as a SW developer for a longer period of time. The decision rules also showed that the 'social position' of SW developers and the 'prospects' of the SW industry were minor factors influencing the job continuity intention. On the other hand, the 'type of employment (regular/non-regular position)' and 'type of company (ordering company/service-providing company)' did not affect the job continuity intention in either method. In this research, we investigated the job continuity intentions of SW developers actually working at IT companies in South Korea and analyzed the factors associated with them. These results can be used for human resource management in IT companies when recruiting or fostering highly qualified SW experts, and can also help in building SW developer fostering policies and solving the problem of unfilled SW developer recruitment in South Korea.
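As a rough illustration of the second analysis step: scikit-learn implements CART rather than C4.5, so an entropy-criterion tree is used below as a stand-in, with hypothetical column names mirroring the factors discussed above:

```python
# Decision tree over (hypothetical) survey columns on job continuity intention.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

FEATURES = ["expected_age", "motivation", "personality_introverted",
            "ability_to_learn", "social_position", "industry_prospect"]

def fit_continuity_tree(survey: pd.DataFrame) -> DecisionTreeClassifier:
    """Fit a tree and print its decision rules for inspection."""
    tree = DecisionTreeClassifier(criterion="entropy", max_depth=4, random_state=0)
    tree.fit(survey[FEATURES], survey["high_continuity_intention"])
    print(export_text(tree, feature_names=FEATURES))
    return tree
```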