• Title/Summary/Keyword: methodology(方法論)


An Inquiry of the Perception of Death in School Age (학령기 아동의 죽음인식에 관한 탐색적 연구)

  • Joun, Young-Ran
    • Korean Journal of Hospice Care
    • /
    • v.8 no.1
    • /
    • pp.13-28
    • /
    • 2008
  • Purpose: This paper examines the subjective structures and types of school-age children's perception of death through an exploratory study, in order to provide basic material for understanding death and for developing and carrying out an effective death education program. Methods: The study used Q methodology, which can investigate the subjective structures and types of school-age children's perception of death. For the Q-population, 20 school-age children took part in neutral interviews and open surveys, and together with documentary research a total of 132 statements were collected. From these, 23 Q-sample statements were derived through a non-structural method. The P-samples were 31 school-age children (8-13 years old), Q-sorting was carried out using Q-cards, and the collected data were analyzed using the PC QUANL program. Results: Children's perception of death was divided into five types. The first, the functional type, was characterized by prominent subjective perception of the elements of death, such as irreversibility, universality, non-functionality, and causality. The second, the after-life type, was characterized by a strong focus on life after death, and it included children with a Christian background and those who had experienced death in their immediate family. The third, the religious type, was characterized by a strong belief in being able to still watch over one's family and friends after death, resulting in a positive faith in the after-life. The fourth, the fearful type, was characterized by a deeper fear of death than the other types. The fifth, the realistic type, was characterized by strong, positive assent to the perception of a good death. Conclusion: The significance of these results for nursing is as follows.
In nursing practice, the results expand our understanding of the subjectivity of school-age children's perception of death and of the compositional elements of death emphasized in the existing literature; they allow us to gauge children's level of perception regarding the definition of death, the after-life, and a good death; they can be used as material for developing an effective death education program tailored to each type; and they can lay the groundwork for helping children live a proper life and for preventing the tendency to make light of death, and the spread of suicide, that occur in adolescence. In nursing theory, the description of the subjective structures and the characteristics of the different types of school-age children's perception of death can be used to build a model of that perception and, further, to teach respect for life. In nursing research, the results can contribute to studies describing the effects of nursing intervention strategies and to developing tools for psychosocial nursing that give school-age children a positive perception of death according to their type, as well as respect for life.
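As a rough illustration of the analysis step, the type extraction performed by the PC QUANL program can be approximated by factor-analyzing the person-by-person correlation matrix of the Q-sorts. The sketch below uses principal components rather than QUANL's centroid method and rotation, so it is only a simplified stand-in; the shapes (sorters × 23 statements) follow the abstract, but all values are hypothetical.

```python
import numpy as np

def q_factor_types(sorts, n_factors):
    """Assign each Q-sorter to a type via principal components of the
    person-by-person correlation matrix -- a simplified stand-in for the
    centroid factor analysis and rotation performed by PC QUANL.
    sorts: array of shape (n_persons, n_statements)."""
    r = np.corrcoef(sorts)                       # rows = persons
    vals, vecs = np.linalg.eigh(r)               # ascending eigenvalues
    top = np.argsort(vals)[::-1][:n_factors]     # largest factors first
    loadings = vecs[:, top] * np.sqrt(np.abs(vals[top]))
    return np.abs(loadings).argmax(axis=1)       # type = largest |loading|
```

Children whose sorts correlate strongly load on the same factor and are grouped into one perception type.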


A study on the utilization of drones and aerial photographs for searching ruins with a focus on topographic analysis (유적탐색을 위한 드론과 항공사진의 활용방안 연구)

  • Heo, Ui-Haeng;Lee, Wal-Yeong
    • Korean Journal of Heritage: History & Science
    • /
    • v.51 no.2
    • /
    • pp.22-37
    • /
    • 2018
  • Unmanned aerial vehicles (UAVs) have attracted considerable attention both at home and abroad. A UAV equipped with a camera can shoot images, which is an advantage in areas inaccessible to archaeological investigation. Moreover, three-dimensional spatial image information can be acquired by modeling the terrain through aerial photography, allowing a more specific interpretation of the terrain of the survey area. In addition, understanding how the terrain has changed, through comparison with past aerial photographs, is very helpful in establishing whether ruins exist. Terrain modeling for searching for such remains can be divided into two parts. First, we acquire aerial photographs of the current terrain using a drone. Then, using image registration and post-processing, we complete image-joining and terrain modeling from past aerial photographs. The completed terrain models can be used to derive several analytical results. On the present-day model, terrain analyses such as DSM, DTM, and altitude analysis can be performed to roughly grasp changes in form, quality, and micro-topography. Terrain modeling of past aerial photographs allows us to understand the shape of landforms and the micro-topography of wetlands. When the models of each period are overlaid and verified against actual findings, changes in hill shapes and buried microforms can be identified, which is helpful in low-altitude applications. Thus, modeling data built from aerial photographs are useful for identifying why archaeological surveys could not be carried out, for establishing the terrain and the existence of ruins over a wide area, and for discussing the preservation process of the ruins. Furthermore, various thematic maps, such as cadastral maps and land-use maps, can be provided through comparison of past and present topographical data.
In any case, it is certain that this approach will function as a new investigative methodology for exploring ruins and discovering archaeological cultural properties.
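The two raster analyses described above, comparing terrain models across time and separating surface objects from bare earth, reduce to simple grid arithmetic once the models are co-registered. The grid values and the 0.5 m change threshold below are hypothetical, not from the study:

```python
import numpy as np

def terrain_change(dtm_past, dtm_now, threshold=0.5):
    """Cells whose elevation changed by more than `threshold` metres
    between two co-registered terrain models; such anomalies may hint
    at buried features or landform change."""
    return np.abs(dtm_now - dtm_past) > threshold

def normalized_surface(dsm, dtm):
    """DSM minus DTM: the height of vegetation and structures standing
    above the bare-earth terrain."""
    return dsm - dtm
```

In practice both models would come from the photogrammetric processing of the drone and archival aerial photographs described above.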

1H Solid-state NMR Methodology Study for the Quantification of Water Content of Amorphous Silica Nanoparticles Depending on Relative Humidity (상대습도에 따른 비정질 규산염 나노입자의 함수량 정량 분석을 위한 1H 고상 핵자기 공명 분광분석 방법론 연구)

  • Oh, Sol Bi;Kim, Hyun Na
    • Korean Journal of Mineralogy and Petrology
    • /
    • v.34 no.1
    • /
    • pp.31-40
    • /
    • 2021
  • Hydrogen in nominally anhydrous minerals is known to be associated with lattice defects, but it can also exist in the form of water and hydroxyl groups on the large surface of nanoscale particles. In this study, we investigate the effectiveness of 1H solid-state nuclear magnetic resonance (NMR) spectroscopy as a robust experimental method for quantifying the hydrogen atomic environments of amorphous silica nanoparticles under varying relative humidity. Amorphous silica nanoparticles were packed into NMR rotors in a temperature- and humidity-controlled glove box, then stored under different atmospheric conditions at 25% and 70% relative humidity for 2-10 days until the 1H NMR experiments, and only a slight difference was observed in the 1H NMR spectra. These results indicate that the amount of hydrous species in a sample packed in an NMR rotor is hardly changed by the external atmosphere. The hydrogen content, especially the amount of physisorbed water, may vary within a range of ~10% because of the temporal and spatial inhomogeneity of relative humidity in the glove box. Quantitative analysis of the 1H NMR spectra shows that the hydrogen content of amorphous silica nanoparticles increases linearly with relative humidity. These results imply that the sealing capability of the NMR rotor is sufficient to preserve the hydrous environment of the sample and is suitable for quantitative measurement of the water content of ultrafine nominally anhydrous minerals as a function of atmospheric relative humidity. We expect the 1H solid-state NMR method to be suitable for systematically investigating the effect of surface area and crystallinity on the water content of diverse nano-sized nominally anhydrous minerals under varying relative humidity.
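The reported linear increase of hydrogen content with relative humidity amounts to fitting a line to the quantified 1H NMR signal. A minimal sketch; the numbers used in the example are purely illustrative, since the abstract does not give the measured values:

```python
import numpy as np

def fit_water_vs_humidity(rh_percent, water_wt_percent):
    """Least-squares line water = slope * RH + intercept, as one would
    fit to water content quantified from 1H NMR peak areas against
    relative humidity."""
    slope, intercept = np.polyfit(rh_percent, water_wt_percent, 1)
    return slope, intercept
```

With the fitted line, the water content at an unmeasured humidity between the measured points can be interpolated.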

Trends in QA/QC of Phytoplankton Data for Marine Ecosystem Monitoring (해양생태계 모니터링을 위한 식물플랑크톤 자료의 정도 관리 동향)

  • YIH, WONHO;PARK, JONG WOO;SEONG, KYEONG AH;PARK, JONG-GYU;YOO, YEONG DU;KIM, HYUNG SEOP
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.26 no.3
    • /
    • pp.220-237
    • /
    • 2021
  • Since the functional importance of marine phytoplankton was first advocated in the early 1880s, massive data on species composition and abundance have been produced by classical microscopic observation and by advanced auto-imaging technologies. More recently, pigment composition obtained from direct chemical analysis of phytoplankton samples, or indirectly from remote sensing, has been used for group-specific quantification, leading to more diversified data production methods and improved spatiotemporal access to target data-gathering points. Quite a few long-term marine ecosystem monitoring programs include phytoplankton species composition and abundance as a basic monitoring item, and these phytoplankton data can serve as crucial evidence of long-term changes in phytoplankton community structure and ecological functioning at the monitoring stations. The usability of the data, however, is sometimes restricted by changes of data producer over the monitoring period: methods of sample treatment, analysis, and species identification can be inconsistent among different data producers and monitoring years. In-depth study to determine precise quantitative values of phytoplankton species composition and abundance may be said to have begun with Victor Hensen in the late 1880s. International discussion on quality assurance of marine phytoplankton data began in 1969 with SCOR Working Group 33 of ICSU. The Working Group's final report of 1974 (UNESCO Technical Papers in Marine Science 18) was later revised and published as UNESCO Monographs on Oceanographic Methodology 6. The BEQUALM project, the predecessor of the IPI (International Phytoplankton Intercomparison) for marine phytoplankton data QA/QC under the ISO standard, was initiated in the late 1990s.
The IPI promotes international collaboration so that all participating countries can apply the QA/QC standard established through twenty years of experience and practice. In Korea, however, no such QA/QC standard for marine phytoplankton species composition and abundance data has been established by law, whereas a standard for marine chemical measurement and analysis data has already been set up and is being managed. The first priority should be to establish a QA/QC standard system for species composition and abundance data of marine phytoplankton, which could then be extended to other functional groups at higher consumer levels of marine food webs.

Abnormal Water Temperature Prediction Model Near the Korean Peninsula Using LSTM (LSTM을 이용한 한반도 근해 이상수온 예측모델)

  • Choi, Hey Min;Kim, Min-Kyu;Yang, Hyun
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.3
    • /
    • pp.265-282
    • /
    • 2022
  • Sea surface temperature (SST) is a factor that greatly influences ocean circulation and ecosystems in the Earth system. As global warming changes the SST near the Korean Peninsula, abnormal water temperature phenomena (high and low water temperatures) occur, causing continuing damage to the marine ecosystem and the fishery industry. This study therefore proposes a methodology to predict the SST near the Korean Peninsula and to prevent damage by predicting abnormal water temperature phenomena. The study area was set near the Korean Peninsula, and ERA5 data from the European Centre for Medium-Range Weather Forecasts (ECMWF) were used to obtain SST data for the same period. Considering the time-series characteristics of SST data, the Long Short-Term Memory (LSTM) algorithm, a deep learning model specialized for time-series prediction, was used. The model predicts the SST near the Korean Peninsula 1 to 7 days ahead and flags high or low water temperature phenomena. To evaluate the accuracy of SST prediction, the coefficient of determination (R2), root mean squared error (RMSE), and mean absolute percentage error (MAPE) were used. For summer (JAS), the 1-day prediction results were R2=0.996, RMSE=0.119℃, and MAPE=0.352%; for winter (JFM), they were R2=0.999, RMSE=0.063℃, and MAPE=0.646%. Using the predicted SST, the accuracy of abnormal water temperature prediction was evaluated with the F1 score (F1=0.98 for high water temperature prediction in summer (2021/08/05); F1=1.0 for low water temperature prediction in winter (2021/02/19)). As the prediction period increased, the model tended to underestimate the SST, which also reduced the accuracy of abnormal water temperature prediction.
Future work should therefore analyze the cause of this underestimation and improve the prediction accuracy.
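The three accuracy indicators reported above are standard and can be computed directly from predicted and observed SST series. A small sketch; the example arrays are illustrative, not the study's data:

```python
import numpy as np

def r2(y_true, y_pred):
    """Coefficient of determination."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    """Root mean squared error (same unit as the data, here degrees C)."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)
```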

A study on solar radiation prediction using medium-range weather forecasts (중기예보를 이용한 태양광 일사량 예측 연구)

  • Sujin Park;Hyojeoung Kim;Sahm Kim
    • The Korean Journal of Applied Statistics
    • /
    • v.36 no.1
    • /
    • pp.49-62
    • /
    • 2023
  • Solar energy, whose share of generation is rapidly increasing, is the subject of continuous development and investment. With the Green New Deal renewable-energy policy and the growing installation of home solar panels, the supply of solar energy in Korea is gradually expanding, and research on accurate prediction of power generation demand is actively under way. Solar radiation prediction is important because it is the factor that most influences power generation demand prediction. The main novelty of this study is that it attempts to predict solar radiation using medium-range forecast weather data, which previous studies have not used. In this paper, we combined multiple linear regression, KNN, random forest, and SVR models with the K-means clustering technique to predict hourly solar radiation, calculating a probability density function for each cluster. Before applying the medium-range forecast data, mean absolute error (MAE) and root mean squared error (RMSE) were used as indicators to compare model predictions. The data, covering March 1, 2017 to February 28, 2022, were converted into daily data to match the medium-range forecast format. Comparing predictive performance, the best-performing method predicted daily solar radiation with a random forest, classified dates with similar climate factors, and calculated the probability density function of solar radiation for each cluster. When the fitted model was applied to the medium-range forecast data, however, the prediction error increased with forecast date, apparently because of prediction error in the medium-range forecast weather data itself.
Future studies should add exogenous variables available in the medium-range forecast data, such as precipitation, or apply time-series clustering techniques.
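The clustering step of the best-performing method above, grouping dates by similar climate factors and then characterizing solar radiation within each group, can be sketched as follows. This is a minimal numpy-only k-means with a simple deterministic initialization, not the study's implementation, and all data are hypothetical:

```python
import numpy as np

def kmeans(x, k, iters=50):
    """Minimal k-means. x: (n_days, n_weather_factors)."""
    # simple deterministic init: k points spread across the index range
    idx = np.linspace(0, len(x) - 1, k).astype(int)
    centers = x[idx].astype(float).copy()
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        d = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)              # nearest center per day
        for j in range(k):
            if np.any(labels == j):            # skip empty clusters
                centers[j] = x[labels == j].mean(axis=0)
    return labels, centers

def radiation_stats_by_cluster(labels, radiation, k):
    """Per-cluster mean and std of observed radiation, from which a
    probability density (e.g. a normal) could be assumed per cluster."""
    return [(float(radiation[labels == j].mean()),
             float(radiation[labels == j].std())) for j in range(k)]
```

A new forecast day would be assigned to its nearest cluster and its radiation described by that cluster's density.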

An Empirical Analysis of Accelerator Investment Determinants: A Longitudinal Study on Investment Determinants and Investment Performance (액셀러레이터 투자결정요인 실증 분석: 투자결정요인과 투자성과에 대한 종단 연구)

  • Jin Young Joo;Jeong Min Nam
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.18 no.4
    • /
    • pp.1-20
    • /
    • 2023
  • This study sought to identify the relationship between accelerators' investment determinants and investment performance through empirical analysis. Through a literature review, four dimensions and 12 measurement items were extracted for the investment determinants (the independent variables), and investment performance was operationalized, following previous studies, as the cumulative amount of subsequent investment attracted. Performance data were collected from 594 companies selected by TIPS from 2017 to 2019, for which data are relatively reliable and easy to secure, and the hypotheses were tested through multiple regression analysis with the cumulative amount of subsequent investment attracted three years after the investment as the dependent variable. Among founder characteristics, 'years of industry experience' had a significant positive (+) effect, as did 'market size', 'market growth', and 'competitive strength' among market characteristics and 'number of patents' among product and service characteristics. The strongest influence on the dependent variable was competitive strength among the market characteristics, followed by years of industry experience, number of patents, market size, and market growth. This differs from previous studies, which relied mainly on qualitative methods and mostly found founder characteristics to be most important; the empirical results instead point to market characteristics, and the sub-factor competitive strength, ranked low in previous studies, had the greatest influence here.
The academic significance of this study is that it presents a specific methodology for collecting and building 594 empirical samples in a field lacking empirical research on accelerator investment determinants, and it opens the way to extending the theoretical discussion of investment determinants through causal research. In practice, a systematic model of experience-dependent investment determinants can help accelerators make effective investment decisions despite the information asymmetry and uncertainty of startups.
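The multiple regression at the core of the analysis can be sketched with ordinary least squares. The determinant values in the example are hypothetical, not from the TIPS sample:

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares for y = b0 + b1*x1 + b2*x2 + ...;
    returns [intercept, coef1, coef2, ...]."""
    A = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta
```

In the study's setting, y would be the cumulative subsequent investment attracted and the columns of X the measured determinant items; the fitted coefficients indicate each determinant's direction and relative influence.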


Study on the Effects of Shop Choice Properties on Brand Attitudes: Focus on Six Major Coffee Shop Brands (점포선택속성이 브랜드 태도에 미치는 영향에 관한 연구: 6개 메이저 브랜드 커피전문점을 중심으로)

  • Yi, Weon-Ho;Kim, Su-Ok;Lee, Sang-Youn;Youn, Myoung-Kil
    • Journal of Distribution Science
    • /
    • v.10 no.3
    • /
    • pp.51-61
    • /
    • 2012
  • This study seeks to understand how the choice of a coffee shop is related to customer loyalty and which characteristics of a shop influence this choice. It considers large coffee shop brands whose market scale has gradually grown. Users' choice of shop is determined by price, employee service, shop location, and shop atmosphere, and the study investigated the effects of these four properties on brand attitudes toward coffee shops. The effects were found to vary depending on users' characteristics; the properties with the largest influence were shop atmosphere and shop location. The purpose of the study was therefore to examine the properties that could help coffee shops win loyal customers and the choice properties that could satisfy consumers' desires. The study examined consumers' perceptions of shop properties when selecting a coffee shop, and the perceptual differences between coffee brands, in order to investigate customers' desires and needs and to suggest ways of supplying suitable products and service. The research methodology consisted of two parts, normative and empirical; in this study, a statistical analysis of the empirical research was carried out. The study confirmed the shop choice properties theoretically by reviewing previous studies and performed an empirical analysis, including cross tabulation, based on secondary material. The findings were as follows. First, coffee shop choice properties varied by gender. Price advantage influenced the choice of both men and women; men preferred nearby coffee shops, where they could buy coffee easily and conveniently, more than women did. Shop atmosphere had the greatest influence on both men and women, and was also the most important property across age groups. In the past, customers selected coffee shops solely to drink coffee;
now they select a coffee shop for its interior, menu variety, and atmosphere, owing to the improved quality and service of coffee shop brands. Second, prices did not vary much between brands because the coffee shops were similarly priced, and service quality had been raised to a similar level, so price, employee service, and other such properties did not greatly influence shop choice. However, those working in the farming, forestry, fishery, and livestock industries were more concerned with price than with shop atmosphere, and college and graduate students were also attracted by inexpensive prices. Third, shop choice properties varied with income: shop location and shop atmosphere had a greater influence on shop choice. Customers with incomes below 2 million won selected low-price coffee shops more than those earning 6 million won or more; otherwise, price advantage showed no relation to income differences, and the higher income group was not affected by employee service. Fourth, shop choice properties varied by place. Customers in Ulsan were the most affected by price, and those in Busan the least. Shop location had the greatest influence among all the properties, with Gwangju showing the least influence among the places surveyed. The alternative use of coffee shop space was thought to be important in all the cities under consideration. Customers in Ulsan were not affected by employee service, and selected coffee shops according to quality and their preference of shop atmosphere. Lastly, the price factor was somewhat higher than the other factors when customers frequently selected brands according to shop properties: customers in Gwangju reacted to discounts more than those in other cities did, and gave less priority to the quality and taste of the coffee. Brand preference varied depending on coffee shop location;
customers in Busan selected brands according to location, and those in Ulsan were not influenced by employee kindness or specialty. The implication of this study is that franchise coffee shop businesses should focus on customers rather than on aggressive marketing strategies that simply increase the number of coffee shops: they should create a good atmosphere and set up coffee shops in places that customers can access easily. This study has some limitations. First, the respondents were concentrated in metropolitan areas: the secondary data contained far more respondents in Seoul than in Gyeonggi-do, and far more in Gyeonggi-do than in the six major cities, so the regional sample was not sufficiently representative of the population. Second, respondents' ratios were used as the measurement scale to test perception of shop choice properties and brand preference, which made it difficult to examine the relation between these properties and brand preference and to understand differences between groups. Future research should therefore address these shortcomings: as coffee shops expand into local areas, a questionnaire survey of consumers in small local cities should be conducted to collect primary material. In particular, the survey variables should be measured on Likert scales covering perception of shop choice properties, brand preference, and repurchase, so that correlation analysis, multiple regression, and ANOVA can be used to investigate consumers' attitudes and behavior in detail.


Automatic Quality Evaluation with Completeness and Succinctness for Text Summarization (완전성과 간결성을 고려한 텍스트 요약 품질의 자동 평가 기법)

  • Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.125-148
    • /
    • 2018
  • Recently, as demand for big data analysis increases, cases of analyzing unstructured data and using the results are also increasing. Among the various types of unstructured data, text is used as a means of communicating information in almost all fields, and it interests many analysts because it exists in very large amounts and is relatively easy to collect compared with other unstructured and structured data. Among the various text analysis applications, active research topics include document classification, which assigns documents to predetermined categories; topic modeling, which extracts major topics from a large number of documents; sentiment analysis or opinion mining, which identifies emotions or opinions contained in texts; and text summarization, which condenses the main contents of one or several documents. The text summarization technique in particular is actively applied in business, for example in news summary services and privacy policy summary services. Much academic research has also been done on the extraction approach, which selectively presents the main elements of a document, and the abstraction approach, which extracts elements of a document and composes new sentences by combining them. However, techniques for evaluating the quality of automatically summarized documents have not progressed as much as automatic text summarization itself. Most existing studies of summarization quality evaluation manually summarized documents, used these as reference documents, and measured the similarity between the automatic summary and the reference document: automatic summarization is performed on the full text using various techniques, and the result is compared with the reference document, taken as an ideal summary, to measure its quality.
Reference documents are provided in two major ways. The most common is manual summarization, in which a person creates an ideal summary by hand. Since this requires human intervention, it takes much time and cost to write the summary, and the evaluation result may differ depending on who writes it. To overcome these limitations, attempts have been made to measure summary quality without human intervention. A representative recent attempt reduces the size of the full text and measures the similarity between the reduced full text and the automatic summary; in this method, the more that frequent terms from the full text appear in the summary, the better its quality is judged to be. However, since summarization essentially means condensing a large amount of content while minimizing omissions, a summary judged "good" on frequency alone is not necessarily a good summary in this essential sense. To overcome the limitations of these previous evaluation studies, this study proposes an automatic quality evaluation method for text summarization based on the essential meaning of summarization. Specifically, succinctness is defined as an element indicating how little content is duplicated among the sentences of the summary, and completeness as an element indicating how little of the original content is left out of the summary, and we propose a method for automatic quality evaluation of text summarization based on these two concepts.
To evaluate the practical applicability of the proposed methodology, we extracted 29,671 sentences from TripAdvisor hotel reviews, summarized the reviews for each hotel, and present the results of experiments evaluating the quality of the summaries according to the proposed methodology. We also provide a way to integrate completeness and succinctness, which stand in a trade-off relationship, into an F-score, and propose a method to perform optimal summarization by varying the sentence-similarity threshold.
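The abstract does not give the exact similarity measure or formulas, so the following is only one plausible reading of completeness, succinctness, and their F-score combination, using Jaccard overlap between word sets; every definition and threshold here is an assumption for illustration, not the paper's method.

```python
def tokens(sentence):
    """Lower-cased word set of a sentence."""
    return set(sentence.lower().split())

def jaccard(a, b):
    """Jaccard similarity of two token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def succinctness(summary):
    """1 minus the mean pairwise similarity among summary sentences:
    high when the summary contains little duplicated content."""
    ts = [tokens(s) for s in summary]
    pairs = [(i, j) for i in range(len(ts)) for j in range(i + 1, len(ts))]
    if not pairs:
        return 1.0
    return 1.0 - sum(jaccard(ts[i], ts[j]) for i, j in pairs) / len(pairs)

def completeness(source, summary, threshold=0.3):
    """Fraction of source sentences matched by at least one summary
    sentence: high when little source content is left out."""
    ss = [tokens(s) for s in summary]
    covered = sum(any(jaccard(tokens(s), t) >= threshold for t in ss)
                  for s in source)
    return covered / len(source)

def f_score(source, summary, threshold=0.3):
    """Harmonic mean of completeness and succinctness."""
    c = completeness(source, summary, threshold)
    s = succinctness(summary)
    return 2 * c * s / (c + s) if c + s else 0.0
```

Sweeping the similarity threshold and keeping the summary with the highest F-score would correspond to the optimal-summarization step mentioned above.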

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.27-65
    • /
    • 2020
  • Many information and communication technology companies have released their self-developed AI technologies to the public, for example Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software, a company can strengthen its relationship with the developer community and the artificial intelligence (AI) ecosystem, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although open source software has been analyzed in various ways, there is a lack of studies that help industry develop or use deep learning open source software. This study therefore attempts to derive an adoption strategy through case studies of deep learning open source frameworks. Based on the technology-organization-environment (TOE) framework and a literature review on open source software adoption, we employed a case study framework comprising technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We analyzed three companies' adoption cases (two successes and one failure) and found that seven of the eight TOE factors, along with several factors concerning company, team, and resources, are significant for the adoption of a deep learning open source framework.
Organizing the case study results, we identified five important success factors for adopting a deep learning framework: the knowledge and expertise of the developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. For an organization to successfully adopt a deep learning open source framework at the usage stage, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, the use of deep learning frameworks by research developers should be supported by collecting and managing data inside and outside the company through a data enterprise cooperation system. Third, deep learning research expertise should be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three conditions at the usage stage, companies increase the number of deep learning research developers, the ability to use the framework, and the supply of GPU resources. In the proliferation stage, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of the developers, for example by automatically optimizing the hardware (GPU) environment. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by running developer retraining and seminars.
To implement the five identified success factors, a step-by-step enterprise procedure for adopting a deep learning framework is proposed: defining the project problem, confirming that deep learning is the right methodology, confirming that a deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework across the enterprise. The first three steps are pre-considerations for adopting a deep learning open source framework; once they are clear, the last two steps can proceed. In the fourth step, the knowledge and expertise of the developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, all five factors come into play for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.