• Title/Summary/Keyword: Machine-being


Design Evaluation Model Based on Consumer Values: Three-step Approach from Product Attributes, Perceived Attributes, to Consumer Values (소비자 가치기반 디자인 평가 모형: 제품 속성, 인지 속성, 소비자 가치의 3단계 접근)

  • Kim, Keon-Woo; Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.23 no.4 / pp.57-76 / 2017
  • Recently, consumer needs are diversifying as information technologies evolve rapidly. Many IT devices such as smartphones and tablet PCs are being launched following this trend. While IT devices competed on technical advances and improvements a few years ago, the situation has changed: there is now little difference in functional aspects, so companies are trying to differentiate IT devices through appearance design. Consumers, too, consider design an increasingly important factor in smartphone purchase decisions; smartphones have become fashion items that reveal consumers' own characteristics and personality. As design and appearance become more important, it is necessary to examine the consumer values created by the design and appearance of IT devices, to clarify the mechanism of consumers' design evaluation, and to develop a design evaluation model based on that mechanism. Since the influence of design keeps growing, many studies related to design have been carried out, and they can be classified into three main streams. The first focuses on the role of design from the perspective of marketing and communication. The second seeks effective and appealing designs from the perspective of industrial design. The third examines the consumer values created by a product design, that is, what consumers perceive or feel when they look at and handle the product. These studies have dealt with consumer values to some extent, but they either omit product attributes or do not cover the whole process from product attributes to consumer values. In this study, we develop a holistic design evaluation model based on consumer values using a three-step approach from product attributes, through perceived attributes, to consumer values. Product attributes are the real, physical characteristics of each smartphone: bezel, length, width, thickness, weight, and curvature. Perceived attributes are derived from consumers' perception of the product attributes: perceived size of device, perceived size of display, perceived thickness, perceived weight, perceived bezel (top-bottom / left-right side), perceived curvature of edge, perceived curvature of back side, gap of each part, perceived gloss, and perceived screen ratio. These are factorized into six clusters named 'Size,' 'Slimness,' 'No-Frame,' 'Roundness,' 'Screen Ratio,' and 'Looseness.' We conducted qualitative research to find the consumer values, which fall into two categories: look values and feel values. In terms of look values, we identified 'Silhouette,' 'Neatness,' 'Attractiveness,' 'Polishing,' 'Innovativeness,' 'Professionalism,' 'Intellectualness,' 'Individuality,' and 'Distinctiveness'; in terms of feel values, 'Stability,' 'Comfortableness,' 'Grip,' 'Solidity,' 'Non-fragility,' and 'Smoothness.' These are factorized into five key values: 'Sleek Value,' 'Professional Value,' 'Unique Value,' 'Comfortable Value,' and 'Solid Value.' Finally, we developed the holistic design evaluation model by analyzing each relationship from product attributes, through perceived attributes, to consumer values. This study makes several theoretical and practical contributions. First, we identified consumer values relevant to design evaluation and the implicit chain from objective, physical characteristics to subjective, mental evaluation; that is, the model explains the mechanism of design evaluation in consumers' minds. Second, we suggest a general design evaluation process from product attributes and perceived attributes to consumer values, a methodology adaptable not only to smartphones but also to other IT products. Practically, the model can support decision-making when companies initiate new product development and can help product designers focus their limited resources. Moreover, if the model is combined with machine learning over consumers' purchasing data, most-preferred values, sales data, and so on, it could evolve into an intelligent design decision support system. A toy sketch of the three-step estimation is given below.
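As an illustration of the three-step chain above, the following minimal sketch stacks two regressions: one from simulated physical attributes to perceived-attribute factor scores, and one from those factors to consumer-value scores. The column counts, simulated data, and the choice of plain linear regression are assumptions for illustration; the paper does not publish its estimation code.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200

# Step 1: physical product attributes (e.g. width, thickness, weight, curvature).
X_product = rng.normal(size=(n, 4))

# Step 2: perceived-attribute factor scores ('Size', 'Slimness', ... in the paper),
# simulated here as noisy linear functions of the physical attributes.
X_perceived = X_product @ rng.normal(size=(4, 6)) + rng.normal(scale=0.3, size=(n, 6))

# Step 3: consumer-value scores ('Sleek', 'Professional', ...), driven by perception.
y_values = X_perceived @ rng.normal(size=(6, 5)) + rng.normal(scale=0.3, size=(n, 5))

stage1 = LinearRegression().fit(X_product, X_perceived)   # attributes -> perception
stage2 = LinearRegression().fit(X_perceived, y_values)    # perception -> values

# Chained prediction: estimate consumer values directly from physical specs.
predicted_values = stage2.predict(stage1.predict(X_product))
print(predicted_values.shape)  # (200, 5)
```

Inspecting the coefficients of each stage then plays the role of the paper's relationship analysis between the three levels.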

Effect of Heat Treatment in Dried Lavers and Modified Processing (마른김에 대한 열처리 효과와 제조 공정 개선 시험)

  • Lee, Tae-Seek; Lee, Hee-Jung; Byun, Han-Seok; Kim, Ji-Hoe; Park, Mi-Jung; Park, Hi-Yun; Jung, Kyoo-Jin
    • Korean Journal of Fisheries and Aquatic Sciences / v.33 no.6 / pp.529-532 / 2000
  • To establish the food safety of dried laver, the effect of heat treatment on the bacterial density of dried lavers was investigated, and an experiment to develop a modified process for dried laver products using a closed-type drying oven was carried out. Little difference in bacterial density on the dried laver products was found before and after heat treatment at $90^{\circ}C$ for 6 hrs (the so-called Hwaip treatment, long used for long-term storage). Direct or indirect heat treatment of dried lavers using a gas burner and frying pan reduced the viable cell count by about 1 to 3 log cycles, from $10^8$ CFU/g to $10^5$ CFU/g. Heat treatment by the direct surface-contact cooking machines used in the marketplace for cooked dried laver products reduced the viable cell count on the laver products from $2.2\times10^5$~$5.2\times10^7$ CFU/g to $7.0\times10^2$~$5.0\times10^5$ CFU/g. Ultraviolet irradiation (20 W, 30 cm) of one or both sides of the dried laver products reduced the viable cell count from $2.2\times10^6$ CFU/g to $8.0\times10^5$ CFU/g and $2.0\times10^5$ CFU/g, respectively. The viable cell count of the dried laver products produced by the modified process using a closed-type dryer was about $10^3$ CFU/g, roughly 3 log cycles lower than that of products collected in the marketplace and made with an open-type dryer. The log-cycle arithmetic used here is illustrated below.
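For readers unfamiliar with the "log cycle" unit used above, this snippet shows the arithmetic behind the reported reduction from $10^8$ to $10^5$ CFU/g; the counts come from the abstract, and the snippet itself is purely illustrative.

```python
import math

before, after = 1e8, 1e5  # viable cell counts (CFU/g) before and after treatment
log_reduction = math.log10(before) - math.log10(after)
print(f"{log_reduction:.0f} log cycles")  # 3 log cycles, i.e. a 1000-fold reduction
```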


A Study on the RFID's Application Environment and Application Measure for Security (RFID의 보안업무 적용환경과 적용방안에 관한 연구)

  • Chung, Tae-Hwang
    • Korean Security Journal / no.21 / pp.155-175 / 2009
  • RFID, which provides automatic identification by reading a tag attached to an item through radio frequency without direct contact, has characteristics such as rapid identification, long-distance identification, and penetration, so it is used in distribution, transportation, and safety at frequencies of 125 kHz, 134 kHz, 13.56 MHz, 433.92 MHz, 900 MHz, and 2.45 GHz. It is also a core element of ubiquitous computing, which means being connected to a network anytime, anywhere. RFID is expected to be a new growth industry worldwide, so the Korean government regards it as a promising field and promotes research projects and exhibition programs to link it effectively with industry. RFID can be used for access control of people and vehicles by section, and for personal certification with a password. Because RFID provides more reliable security than magnetic cards, it can be used to prevent forgery of registration cards, passports, and other documents. Active RFID, with its long-distance data transmission, can be combined with a positioning system to protect operational services. RFID's identification and tracking functions enable effective visitor management through visitor registration, personal identification, and position checks, and can keep visitors from moving through secure areas without approval. RFID also makes possible the efficient management, and the prevention of loss, of carried equipment. Applied to a copying machine, RFID can manage and control users and copy volumes, and can provide functions such as monitoring copied content and controlling user access. An RFID tag attached to a small storage device can prevent items from being carried out, using the position-tracking function, and can control the carrying in and out of material efficiently. Magnetic cards and smart cards have served well for the identification and control of people, but RFID can perform all of the functions above. RFID is a very useful technology, but the protection of privacy should be considered during its application.


Clinical Study of Acute and Chronic Pain by the Application of Magnetic Resonance Analyser I™ (자기공명분석기를 이용한 통증관리)

  • Park, Wook; Jin, Hee-Cheol; Cho, Myun-Hyun; Yoon, Suk-Jun; Lee, Jin-Seung; Lee, Jeong-Seok; Choi, Surk-Hwan; Kim, Sung-Yell
    • The Korean Journal of Pain / v.6 no.2 / pp.192-198 / 1993
  • In 1984, a magnetic resonance spectrometer (magnetic resonance analyser, MRA I™) was developed by Sigrid Lipsett and Ronald J. Weinstock in the USA. Biomedical applications of the spectrometer have been examined by Dr. Hoang Van Duc (pathologist, USC) and by Nakamura et al. (Japan). In their theoretical view, the biophysical functions of this machine are to analyse and synthesize healthy tissue and organ resonance patterns and to detect and correct abnormal tissue and organ resonance patterns, all of these functions being based on quantum physics. Healthy tissue and organ resonance patterns are predetermined as standard magnetic resonance patterns by digitizing values based on peak resonance emissions (response levels, or high-pitched echo-sounds amplified via the human body). In clinical practice, a counter or neutralizing resonance pattern calculated by the spectrometer can correct the phase-shifted resonance pattern (response levels, or low-pitched echo-sounds) of a diseased tissue or organ. By administering the counter resonance pattern at the site of pain or a trigger point, it is possible to readjust the phase-shifted resonance pattern and then to alleviate pain through regulation of the neurotransmitter function of the nervous system. To assess the clinical effectiveness of pain relief with MRA I™, this study was designed to rate pain intensity by the patient's subjective verbal rating scale (VRS: no pain, mild, moderate, or severe) before application, to evaluate the amount of pain relief after application by the patient's subjective visual analogue scale (VAS, 0~100%), and then to observe the duration of pain relief following application, for the management of acute and chronic pain in 102 patients over an 8-month period beginning March 1993. The spectrometer was applied for 15 to 30 minutes daily, at or near the site of pain or a trigger point, whenever the patient wanted treatment. The subjects were 54 males and 48 females, aged 23~40 years in 29 cases, 41~60 years in 48 cases, and 61~76 years in 25 cases (Table 1). The diagnoses and main sites of pain, the duration of pain before application, and the frequency of application are recorded in Tables 2, 3, and 4. Acute and chronic pain were distinguished by pain lasting less than versus more than 3 months. The results of applying the spectrometer were as follows. In the 51 cases of acute pain, pain intensity before application was mild in 10 cases, moderate in 15, and severe in 26. Pain relief was 30~50% in 9 cases, 51~70% in 13, and 71~95% in 29. Relief lasted 6~24 hours in two cases, 2~5 days in 10, 6~14 days in 4, and 15 days in one, and 34 cases were completely relieved of pain (Tables 5~7). In the 51 cases of chronic pain, pain intensity before application was mild in 12 cases, moderate in 18, and severe in 21. Pain relief was 0~50% in 10 cases, 51~70% in 27, and 71~90% in 14. In two cases the application had no effect; the effective duration was 6~12 hours in two cases, 2~5 days in 11, 6~14 days in 14, and 15~60 days in 9, and 13 cases were completely relieved of pain (Tables 5~7). There were no complications except mild reddening and a tingling sensation of the skin while the spectrometer was being applied. Overall pain relief across all subjects was rated poor or fair in 19 cases (18.6%), good in 40 (39.2%), and excellent in 43 (42.2%). The clinical effectiveness of MRA I™ by the patients' own assessment ranged from no improvement to complete relief of pain. In conclusion, we suggest that MRA I™ may provide immediate and continued pain relief, but several treatments are still required for continued relief, and it may become gradually more effective as it is applied repeatedly.


A Comparative Analysis According to a Presence or Absence of Metal Artifacts when a Dose Change and QAC Technique are Applied in PET/CT Tests (PET/CT 검사에서 선량변화와 QAC기법 적용 시 Metal Artifact 유무에 따른 SUV 비교분석)

  • Yun, Sun-Hee; Kim, Yang-Jung; Kang, Young-Jik; Park, Su-Young; Kim, Ho-Sin; Ryu, Hyoung-Ki
    • The Korean Journal of Nuclear Medicine Technology / v.19 no.1 / pp.51-56 / 2015
  • Purpose: As medical radiation exposure of patients has become a social issue, interest in reducing patient exposure is increasing. Moreover, many patients undergoing PET/CT tests have metal implants in their bodies. This phantom study investigates, through changes in the standardized uptake value (SUV), how the presence or absence of metal artifacts affects results when dose reduction or CT attenuation correction is applied to relieve radiation exposure. Materials and Methods: A GE Discovery 710 was used as the PET/CT scanner, with NEMA IEC body phantoms. For the metal artifacts, we used a titanium screw and mesh cage of the kind used in real clinical practice. Two experiments were conducted: repeated SUV measurements of the differences in CT attenuation correction according to dose changes, and the same procedure for the presence and absence of the metal artifacts. We injected $^{18}$F-FDG into the NEMA IEC body phantoms at a TBR ratio of 4:1 and then placed the metal material into the transformation phantoms. Once the metal-artifact scans were done, we removed the metal and performed the non-metal scans. For each experiment, we scanned repeatedly at CT settings of 140, 120, 100, and 80 kVp and 120, 80, 40, 20, and 10 mA. The PET data were reconstructed with both the standard AC (STD) technique and the quantitation achieved consistently (QAC) technique among the CT attenuation correction methods. We comparatively analyzed the average values and fluctuations obtained by repeated SUV measurement of the region 1, 2, and 3 spheres under each non-metal/metal scan condition. Results: At each kVp, the 120, 80, and 40 mA scans, with and without metal (screw, mesh cage), rarely showed fluctuation rates above 2%; at 20 and 10 mA, fluctuation rates above 2% appeared frequently. Comparing the fluctuation rates of the STD and QAC techniques in the non-metal/metal tests, the QAC technique differed from the STD technique by about 1-10% across conditions, and the metal type did not significantly affect the fluctuation rates. Conclusion: We confirmed that SUV fluctuation rates increase for both the STD and QAC techniques as the dose is lowered, and that the SUV of the PET data was maintained more steadily at low dose with the QAC technique than with the STD technique. Hence, when low doses are used to relieve radiation exposure, the QAC technique may be helpful, and this applies equally to patients with metal implants. A small sketch of the fluctuation-rate arithmetic is given below.
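The abstract does not define the fluctuation rate exactly; assuming it means the percent deviation of each repeated SUV measurement from the mean of that condition, a minimal computation might look like this (the SUV numbers are invented for illustration):

```python
import numpy as np

# Five repeated SUV measurements of one sphere under one (kVp, mA) condition.
suv_runs = np.array([4.02, 3.98, 4.10, 3.91, 4.05])

# Percent deviation of each run from the condition mean.
fluctuation = 100 * np.abs(suv_runs - suv_runs.mean()) / suv_runs.mean()
print(f"max fluctuation rate: {fluctuation.max():.2f}%")  # flag if above 2%
```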


Host-Based Intrusion Detection Model Using Few-Shot Learning (Few-Shot Learning을 사용한 호스트 기반 침입 탐지 모델)

  • Park, DaeKyeong; Shin, DongIl; Shin, DongKyoo; Kim, Sangsoo
    • KIPS Transactions on Software and Data Engineering / v.10 no.7 / pp.271-278 / 2021
  • As cyber attacks become more intelligent, existing intrusion detection systems have difficulty detecting intelligent attacks that deviate from stored patterns. To address this, deep-learning-based intrusion detection models that learn the patterns of intelligent attacks from data have emerged. Intrusion detection systems are divided into host-based and network-based systems depending on the installation location. Unlike network-based systems, host-based intrusion detection systems have the disadvantage of having to observe the whole inside and outside of the system, but the advantage of detecting intrusions that a network-based system cannot. Therefore, this study focuses on a host-based intrusion detection system. To evaluate and improve the performance of the model, we used the host-based Leipzig Intrusion Detection Data Set (LID-DS) published in 2018. For the performance evaluation on this data set, the 1D vector data are converted into 3D image data so that the similarity of samples can be assessed and each sample identified as normal or abnormal. Deep learning models also have the drawback of having to re-learn every time a new cyber attack method appears, which is inefficient because learning from a large amount of data takes a long time. To solve this problem, this paper proposes a Siamese convolutional neural network (Siamese-CNN) using the Few-Shot Learning method, which performs well when learning from small amounts of data. Siamese-CNN determines whether attacks are of the same type from the similarity score between samples of cyber attacks converted into images. Accuracy was calculated using the Few-Shot Learning technique, and the performance of a vanilla convolutional neural network (Vanilla-CNN) and Siamese-CNN was compared. Measuring accuracy, precision, recall, and F1-score confirmed that the recall of the proposed Siamese-CNN model was about 6% higher than that of the Vanilla-CNN model. A minimal sketch of the Siamese similarity idea follows.
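Here is a minimal PyTorch sketch of the Siamese-CNN idea: two inputs share one CNN encoder, and a similarity score between the embeddings decides whether two attack samples are of the same type. The layer sizes, the 32x32 three-channel "image" shape, and the use of cosine similarity are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder applied to both inputs (weight sharing is the
        # defining property of a Siamese network).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 64),
        )

    def forward(self, x1, x2):
        z1, z2 = self.encoder(x1), self.encoder(x2)
        # Similarity score in [-1, 1]; thresholding it gives a
        # same-type / different-type decision for few-shot matching.
        return F.cosine_similarity(z1, z2)

model = SiameseCNN()
a = torch.randn(4, 3, 32, 32)  # batch of attack samples rendered as images
b = torch.randn(4, 3, 32, 32)
print(model(a, b))             # four similarity scores
```

Because only a similarity function is learned, a new attack type needs just a few reference samples to compare against, rather than full retraining.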

Predicting stock movements based on financial news with systematic group identification (시스템적인 군집 확인과 뉴스를 이용한 주가 예측)

  • Seong, NohYoon; Nam, Kihwan
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.1-17 / 2019
  • Because stock price forecasting is an important issue both academically and practically, research on stock price prediction has been actively conducted. Such research is divided into work using structured data and work using unstructured data. With structured data such as historical prices and financial statements, past studies usually took technical-analysis and fundamental-analysis approaches. In the big data era, the amount of information has rapidly increased, and artificial intelligence methodologies that can extract meaning by quantifying text, the unstructured data that carries much of this information, have developed rapidly. With these developments, many attempts are being made to predict stock prices from online news by applying text mining. The methodology adopted in many papers is to forecast a stock's price from news about the target company. According to previous research, however, not only a target company's own news but also news about related companies can affect its stock price. Yet finding highly relevant companies is not easy because of market-wide effects and random signals, so existing studies have identified related companies mainly through pre-determined international industry classification standards. Recent research shows that homogeneity within the sectors of the global industry classification standard varies, so forecasting stock prices by taking whole sectors together, without restricting attention to truly relevant companies, can hurt predictive performance. To overcome this limitation, we apply random matrix theory together with text mining to stock prediction for the first time. When the dimension of the data is large, the classical limit theorems no longer apply and statistical efficiency is reduced, so a simple correlation analysis in the financial market does not reveal the true correlation. We therefore adopt random matrix theory, mainly used in econophysics, to remove market-wide effects and random signals and find the true correlation between companies. With the true correlation, we perform cluster analysis to find relevant companies. Based on the clustering, we use a multiple kernel learning algorithm, an ensemble of support vector machines, to incorporate the effects of the target firm and its relevant firms simultaneously; each kernel predicts stock prices from features of the financial news of the target firm or one of its relevant firms. The results of this study are as follows. (1) Following the existing research flow, we confirmed that forecasting stock prices using news from relevant companies is effective. (2) When looking for relevant companies, looking in the wrong way can lower AI prediction performance. (3) The proposed approach outperforms previous studies when cluster analysis is performed on the true correlation obtained by removing market-wide effects and random signals with random matrix theory. The contributions of this study are as follows. First, it shows that random matrix theory, used mainly in econophysics, can be combined with artificial intelligence to produce good methodologies, suggesting that it is important not only to develop AI algorithms but also to adopt physical theory; this extends existing research that integrated artificial intelligence with complex-system theory through transfer entropy. Second, this study stresses that finding the right companies in the stock market is an important issue, suggesting that how input values are theoretically chosen matters as much as the algorithms themselves. Third, we confirmed that firms grouped by the Global Industry Classification Standard (GICS) may have low relevance to one another, and suggest that relevance should be defined theoretically rather than simply read off the GICS. A toy sketch of the random-matrix filtering step is given below.
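A sketch of the random-matrix-theory filtering step on toy data: eigenvalues of the empirical correlation matrix that fall inside the Marchenko-Pastur noise band are discarded, the largest eigenvalue (the market-wide mode) is also removed, and firms are clustered on the remaining "true" correlation. The planted-group toy data and the hierarchical-clustering choice are assumptions; the abstract does not specify the paper's exact clustering algorithm.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(1)
T, N = 1000, 50                          # trading days x firms (toy data)
group = np.repeat(np.arange(5), N // 5)  # 5 planted groups of related firms
market = rng.normal(size=(T, 1))         # market-wide mode affecting all firms
factors = rng.normal(size=(T, 5))        # one common factor per group
returns = market + factors[:, group] + 2 * rng.normal(size=(T, N))

corr = np.corrcoef(returns, rowvar=False)
vals, vecs = np.linalg.eigh(corr)

# Marchenko-Pastur upper edge: eigenvalues below it are treated as pure noise.
lam_max = (1 + np.sqrt(N / T)) ** 2
keep = vals > lam_max
keep[np.argmax(vals)] = False            # also drop the market-wide mode

# Reconstruct the "true" correlation from the surviving eigencomponents.
denoised = (vecs[:, keep] * vals[keep]) @ vecs[:, keep].T

# Cluster firms on the filtered correlation to find "relevant companies".
dist = np.sqrt(np.clip(2 * (1 - denoised), 0, None))
labels = fcluster(linkage(dist[np.triu_indices(N, 1)], method="average"),
                  t=5, criterion="maxclust")
print(labels)  # firms in the same planted group tend to share a label
```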

A Recidivism Prediction Model Based on XGBoost Considering Asymmetric Error Costs (비대칭 오류 비용을 고려한 XGBoost 기반 재범 예측 모델)

  • Won, Ha-Ram; Shim, Jae-Seung; Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.127-137 / 2019
  • Recidivism prediction has been a subject of constant research by experts since the early 1970s, and it has become more important as crimes committed by recidivists steadily increase. In particular, after the US and Canada adopted the 'Recidivism Risk Assessment Report' as a decisive criterion in trials and parole screening in the 1990s, research on recidivism prediction became more active, and in the same period empirical studies on recidivism factors began in Korea as well. Although most recidivism prediction studies have so far focused on the factors of recidivism or the accuracy of prediction, it is also important to minimize the misclassification cost, because recidivism prediction has an asymmetric error cost structure: in general, the cost of misclassifying a non-recidivist as a recidivist is lower than the cost of misclassifying a future recidivist as a non-recidivist, since the former only adds monitoring costs while the latter incurs social and economic costs. Therefore, in this paper we propose an XGBoost (eXtreme Gradient Boosting; XGB) based recidivism prediction model that considers asymmetric error costs. In the first step of the model, XGB, recognized as a high-performance ensemble method in the field of data mining, is applied, and its results are compared with various prediction models such as logistic regression (LOGIT), decision trees (DT), artificial neural networks (ANN), and support vector machines (SVM). In the next step, the classification threshold is optimized to minimize the total misclassification cost, the weighted average of the false negative error (FNE) and the false positive error (FPE). To verify its usefulness, the model was applied to a real recidivism prediction dataset. The XGB model not only showed better prediction accuracy than the other models but also reduced the misclassification cost most effectively. A sketch of the threshold search is given below.
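A minimal sketch of the threshold-optimization step: after fitting an XGBoost classifier, the decision threshold is swept and the value minimizing the weighted misclassification cost is kept. The 5:1 cost ratio and the synthetic data are assumptions; the paper's dataset and cost weights are not given in the abstract.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Imbalanced toy data standing in for the (non-public) recidivism dataset.
X, y = make_classification(n_samples=2000, weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = XGBClassifier(n_estimators=200, eval_metric="logloss").fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]   # predicted recidivism probability

# Assumed asymmetry: missing a recidivist (FN) costs 5x a false alarm (FP).
C_FN, C_FP = 5.0, 1.0
best_t, best_cost = 0.5, np.inf
for t in np.linspace(0.05, 0.95, 91):
    pred = (proba >= t).astype(int)
    cost = (C_FN * np.sum((pred == 0) & (y_te == 1))
            + C_FP * np.sum((pred == 1) & (y_te == 0)))
    if cost < best_cost:
        best_t, best_cost = t, cost
print(f"optimal threshold {best_t:.2f}, total cost {best_cost:.0f}")
```

With costs this asymmetric, the optimal threshold lands well below the default 0.5, trading extra false alarms for fewer missed recidivists.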

A Study on Improvement of Collaborative Filtering Based on Implicit User Feedback Using RFM Multidimensional Analysis (RFM 다차원 분석 기법을 활용한 암시적 사용자 피드백 기반 협업 필터링 개선 연구)

  • Lee, Jae-Seong; Kim, Jaeyoung; Kang, Byeongwook
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.139-161 / 2019
  • Using the e-commerce market has become part of everyday life, and knowing where and how to make reasonable purchases of good-quality products has become important for customers. This change in purchase psychology tends to make purchasing decisions difficult amid vast amounts of information. Here a recommendation system can reduce the cost of information retrieval and improve satisfaction by analyzing the customer's purchasing behavior. Amazon and Netflix are well-known examples of sales marketing using recommendation systems: at Amazon, 60% of recommendations are reported to lead to purchases and a 35% increase in sales was achieved, while Netflix found that 75% of the movies watched were chosen through its recommendation service. This personalization technique is considered one of the key strategies of one-to-one marketing, useful in online markets where salespeople do not exist. The recommendation techniques mainly used today are collaborative filtering and content-based filtering; hybrid techniques and association rules that combine them are also used in various fields. Of these, collaborative filtering is the most popular. Collaborative filtering recommends products preferred by neighbors with similar preferences or purchasing behavior, on the assumption that users who showed similar tendencies in purchasing or evaluating products in the past will show similar tendencies toward other products. However, most existing systems recommend only within the same category of products, such as books or movies, because the recommendation system estimates the expected satisfaction with an item never bought before from the customer's ratings of similar commodities in the transaction data. There is also a problem with the reliability of the purchase ratings used by recommendation systems. In particular, a 'compensatory review' is a customer rating intentionally manipulated through company intervention. Amazon has in fact cracked down hard on such compensatory reviews since 2016 and has worked to reduce false information and increase credibility. Surveys show that the average rating of products with compensatory reviews is higher than that of products without them, and that compensatory reviews are about 12 times less likely to give the lowest rating and about 4 times less likely to leave a critical opinion. Customer purchase ratings are thus full of noise, a problem directly tied to the performance of recommendation systems aimed at maximizing profit by attracting highly satisfied customers in e-commerce. In this study, to solve this series of problems, we propose new indicators that can objectively substitute for existing customer purchase ratings, using the RFM multidimensional analysis technique. RFM multidimensional analysis is the most widely used analytical method in customer relationship management (CRM) marketing, a data analysis method for selecting customers who are likely to purchase goods; a small sketch of the RFM scoring is given below. Verifying against actual purchase-history data with the proposed index, the accuracy was about 55%. Since this came from recommending a total of 4,386 different kinds of products that had never been bought before, the result represents relatively high accuracy and practical value. This study also suggests the possibility of a general recommendation system applicable to various offline product data; if additional data are acquired in the future, the accuracy of the proposed system can be improved.
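A small sketch of turning purchase history into RFM-based implicit ratings, one common way the technique is computed; the rank-based scoring and the toy transaction table are assumptions, not the paper's exact index.

```python
import pandas as pd

# Toy purchase history (three customers, hypothetical dates and amounts).
tx = pd.DataFrame({
    "user":   ["a", "a", "b", "b", "b", "c"],
    "date":   pd.to_datetime(["2019-01-03", "2019-02-10", "2019-01-20",
                              "2019-02-25", "2019-03-01", "2018-11-30"]),
    "amount": [30, 50, 20, 40, 10, 90],
})
now = tx["date"].max()

# Recency (days since last purchase), Frequency, Monetary per user.
rfm = tx.groupby("user").agg(
    recency=("date", lambda d: (now - d.max()).days),
    frequency=("date", "count"),
    monetary=("amount", "sum"),
)

# Rank-based scores (1..3 here; typically 1..5 with more users).
rfm["R"] = pd.qcut(-rfm["recency"], 3, labels=False, duplicates="drop") + 1
rfm["F"] = rfm["frequency"].rank(method="dense").astype(int)
rfm["M"] = rfm["monetary"].rank(method="dense").astype(int)

# Combined score usable as an implicit rating in place of explicit stars.
rfm["implicit_rating"] = rfm[["R", "F", "M"]].mean(axis=1)
print(rfm)
```

Because the score is computed from observed behavior rather than self-reported stars, it is immune to compensated-review manipulation by construction.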

Landslide Susceptibility Mapping Using Deep Neural Network and Convolutional Neural Network (Deep Neural Network와 Convolutional Neural Network 모델을 이용한 산사태 취약성 매핑)

  • Gong, Sung-Hyun; Baek, Won-Kyung; Jung, Hyung-Sup
    • Korean Journal of Remote Sensing / v.38 no.6_2 / pp.1723-1735 / 2022
  • Landslides are among the most prevalent natural disasters, threatening both people and property, and they can cause damage at the national level, so effective prediction and prevention are essential. Research aiming to produce highly accurate landslide susceptibility maps is steadily being conducted, and various models have been applied to landslide susceptibility analysis, mainly pixel-based machine learning models such as frequency ratio models, logistic regression models, ensemble models, and artificial neural networks. Recent studies have shown that the kernel-based convolutional neural network (CNN) technique is effective and that the spatial characteristics of the input data significantly affect the accuracy of landslide susceptibility mapping. For this reason, the purpose of this study is to analyze landslide susceptibility using a pixel-based deep neural network model and a patch-based convolutional neural network model. The study area was set in Gangwon-do, including Inje, Gangneung, and Pyeongchang, where landslides have occurred frequently and caused damage. The landslide-related factors used were slope, curvature, stream power index (SPI), topographic wetness index (TWI), topographic position index (TPI), timber diameter, timber age, lithology, land use, soil depth, soil parent material, lineament density, fault density, normalized difference vegetation index (NDVI), and normalized difference water index (NDWI). These factors were built into a spatial database through preprocessing, and landslide susceptibility maps were predicted using deep neural network (DNN) and CNN models. The models and maps were verified through average precision (AP) and root mean square error (RMSE); the patch-based CNN model showed 3.4% better performance than the pixel-based DNN model. The results of this study can be used to predict landslides and are expected to serve as a scientific basis for establishing land use and landslide management policies. A sketch contrasting the two model families is given below.
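A compact PyTorch sketch contrasting the two model families compared above: a pixel-based DNN that sees one cell's 15 factor values, and a patch-based CNN that sees a neighborhood of those factors and can exploit spatial context. The patch size, layer widths, and all hyperparameters are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

n_factors = 15  # slope, curvature, SPI, TWI, TPI, ..., NDVI, NDWI

# Pixel-based DNN: input is a single cell's factor vector.
dnn = nn.Sequential(
    nn.Linear(n_factors, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),      # susceptibility score in [0, 1]
)

# Patch-based CNN: input is a 15x15 neighborhood of the same factor layers,
# so spatial patterns around the cell can inform the prediction.
cnn = nn.Sequential(
    nn.Conv2d(n_factors, 32, 3), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 1), nn.Sigmoid(),
)

pixel = torch.randn(8, n_factors)          # 8 cells, 15 factors each
patch = torch.randn(8, n_factors, 15, 15)  # the same 8 cells with neighborhoods
print(dnn(pixel).shape, cnn(patch).shape)  # both (8, 1)
```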