• Title/Summary/Keyword: Machine Learning Model (기계학습 모델)

Prediction of Air Temperature and Relative Humidity in Greenhouse via a Multilayer Perceptron Using Environmental Factors (환경요인을 이용한 다층 퍼셉트론 기반 온실 내 기온 및 상대습도 예측)

  • Choi, Hayoung;Moon, Taewon;Jung, Dae Ho;Son, Jung Eek
    • Journal of Bio-Environment Control / v.28 no.2 / pp.95-103 / 2019
  • Temperature and relative humidity are important factors in crop cultivation and should be properly controlled to improve crop yield and quality. To control the greenhouse environment accurately, we need to predict how it will change in the future. The objective of this study was to predict air temperature and relative humidity at a future time by using a multilayer perceptron (MLP). The data required to train the MLP were collected every 10 min from Oct. 1, 2016 to Feb. 28, 2018 in an eight-span greenhouse ($1,032m^2$) cultivating mango (Mangifera indica cv. Irwin). The inputs for the MLP were the environmental data inside and outside the greenhouse and the set-up and operating values of the environmental control devices. Using these data, the MLP was trained to predict the air temperature and relative humidity 10 to 120 min ahead. Considering the four typical seasons in Korea, three days of data from each season were used as test data. The MLP was optimized with four hidden layers and 128 nodes for air temperature ($R^2=0.988$) and with four hidden layers and 64 nodes for relative humidity ($R^2=0.990$). Due to the characteristics of the MLP, accuracy decreased as the prediction horizon became longer. However, air temperature and relative humidity were properly predicted regardless of the environmental changes that varied from season to season. For specific events such as spray irrigation, however, the number of training samples was too small, resulting in poor predictive accuracy. In this study, air temperature and relative humidity were appropriately predicted through optimization of the MLP, but the results were limited to the experimental greenhouse. Therefore, it is necessary to collect more data from greenhouses in various places and to modify the structure of the neural network for generalization.
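
As a concrete illustration of the setup described above, here is a minimal sketch of an MLP with four hidden layers of 128 nodes, using scikit-learn; the synthetic inputs stand in for the paper's greenhouse sensor and control-device data, and none of the variable names come from the original study.

```python
# A minimal sketch of the four-hidden-layer, 128-node MLP configuration,
# trained on placeholder data (not the authors' greenhouse dataset).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical inputs: inside/outside temperature, humidity, radiation,
# and device set-points, sampled every 10 min.
X = rng.normal(size=(5000, 8))
y = rng.normal(size=5000)          # air temperature 10-120 min ahead

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlp = MLPRegressor(hidden_layer_sizes=(128, 128, 128, 128),
                   max_iter=500, random_state=0)
mlp.fit(X_train, y_train)
print("R^2 on held-out data:", mlp.score(X_test, y_test))
```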

Selection of Optimal Band Combination for Machine Learning-based Water Body Extraction using SAR Satellite Images (SAR 위성 영상을 이용한 수계탐지의 최적 머신러닝 밴드 조합 연구)

  • Jeon, Hyungyun;Kim, Duk-jin;Kim, Junwoo;Vadivel, Suresh Krishnan Palanisamy;Kim, JaeEon;Kim, Taecin;Jeong, SeungHwan
    • Journal of the Korean Association of Geographic Information Studies / v.23 no.3 / pp.120-131 / 2020
  • Water body detection by machine interpretation of satellite images is efficient for managing water resources and for drought and flood monitoring. In this study, machine learning-based water body detection was performed on SAR satellite imagery. However, non-water areas can be misclassified as water because of shadow effects or objects, such as roads, whose scattering characteristics are similar to those of water. To reduce such misclassification, eight combinations of a morphological-opening filtered band, a DEM band, a curvature band, and a Cosmo-SkyMed SAR image band over the Mokpo region were each used to train semantic segmentation models. For each of the eight models, the global accuracy on the final test set was computed, along with the concordance rate with the land cover data of the Mokpo region. The combination of the SAR image, the morphological-opening filtered band, the DEM band, and the curvature band showed the best results: a global accuracy of 95.07% and a concordance rate of 89.93% with the land cover data.
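
The band stacking described above can be sketched as follows; the random rasters stand in for the Cosmo-SkyMed scene and the DEM, and the Laplacian-based curvature is one common approximation, not necessarily the paper's exact definition.

```python
# Assembling one of the band combinations: SAR backscatter plus a
# morphological-opening band, a DEM band, and a curvature band stacked
# as channels for a semantic-segmentation model. All arrays are
# placeholders for the real rasters.
import numpy as np
from scipy.ndimage import grey_opening

H, W = 256, 256
sar = np.random.rand(H, W).astype(np.float32)   # SAR backscatter (placeholder)
dem = np.random.rand(H, W).astype(np.float32)   # elevation (placeholder)

opened = grey_opening(sar, size=(5, 5))         # suppress small bright structures

# Curvature approximated as the Laplacian of the DEM (an assumption here).
gy, gx = np.gradient(dem)
curvature = np.gradient(gy, axis=0) + np.gradient(gx, axis=1)

# Channel-last stack ready for a segmentation network.
x = np.stack([sar, opened, dem, curvature], axis=-1)
print(x.shape)  # (256, 256, 4)
```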

Apartment Price Prediction Using Deep Learning and Machine Learning (딥러닝과 머신러닝을 이용한 아파트 실거래가 예측)

  • Hakhyun Kim;Hwankyu Yoo;Hayoung Oh
    • KIPS Transactions on Software and Data Engineering / v.12 no.2 / pp.59-76 / 2023
  • Since the COVID-19 era, the rise in apartment prices has been unconventional, and in this uncertain real estate market, price prediction research is very important. In this paper, a model is created to predict the actual transaction prices of future apartments, after building a data set of 870,000 transactions from 2015 to 2020 by collecting and crawling as many variables as possible from various real estate sites. This study first solved the multicollinearity problem by removing and combining variables. Five variable selection algorithms were then used to extract meaningful independent variables: forward selection, backward elimination, stepwise selection, L1 regularization, and principal component analysis (PCA). In addition, four machine learning and deep learning algorithms, a deep neural network (DNN), XGBoost, CatBoost, and linear regression, were used to train models after hyperparameter optimization, and their predictive power was compared. In an additional experiment, the numbers of nodes and layers of the DNN were varied to find the most appropriate configuration. Finally, the best-performing model was used to predict actual apartment transaction prices in 2021, and the predictions were compared with the actual 2021 data. Through this, we are confident that machine learning and deep learning can help investors make the right decisions when purchasing homes in various economic situations.
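
A minimal sketch of the model-comparison step is given below, with scikit-learn stand-ins (an MLP for the DNN, GradientBoostingRegressor in place of XGBoost/CatBoost) and synthetic data replacing the 870,000-transaction data set.

```python
# Fitting several regressors on the selected variables and comparing
# held-out R^2, as in the paper's comparison step; models and data here
# are illustrative stand-ins.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))                 # selected variables (toy)
y = X @ rng.normal(size=10) + rng.normal(scale=0.1, size=2000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "Linear Regression": LinearRegression(),
    "Gradient Boosting": GradientBoostingRegressor(random_state=0),
    "DNN (MLP)": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                              random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: R^2 = {model.score(X_te, y_te):.3f}")
```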

IPC Multi-label Classification based on Functional Characteristics of Fields in Patent Documents (특허문서 필드의 기능적 특성을 활용한 IPC 다중 레이블 분류)

  • Lim, Sora;Kwon, YongJin
    • Journal of Internet Computing and Services / v.18 no.1 / pp.77-88 / 2017
  • Recently, with the advent of the knowledge-based society, in which information and knowledge create value, patents, the representative form of intellectual property, have become important, and their number keeps growing. It is therefore necessary to classify patents according to the technological topic of the invention in order to use the vast amount of patent information effectively. The IPC (International Patent Classification) is widely used for this purpose. Research on automatic IPC classification has applied data mining and machine learning algorithms to improve the current practice of categorizing patent documents by hand. However, most previous research has focused on applying various existing machine learning methods to patent documents rather than on the characteristics of the data or the structure of the documents. In this paper, we therefore propose to use two structural fields, the technical field and the background, which are considered to affect patent classification; the two fields were selected by analyzing the characteristics of patent documents and the roles of the structural fields. We also construct a multi-label classification model to reflect the fact that a patent document can have multiple IPC codes. Furthermore, we propose a method to classify patent documents at the IPC subclass level, which comprises 630 categories, in order to investigate the applicability of the IPC multi-label classification model in practice. The effect of the structural fields was examined using 564,793 patents registered in Korea, and a precision of 87.2% was obtained when using the title, abstract, claims, technical field, and background. These results verify that the technical field and background play an important role in improving the precision of IPC multi-label classification at the IPC subclass level.
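
The multi-label formulation can be sketched with a TF-IDF one-vs-rest classifier as below; the toy documents, IPC subclasses, and classifier choice are illustrative assumptions, not the authors' 564,793-patent setup.

```python
# Multi-label classification over concatenated structural fields
# (e.g., title + abstract + technical field + background), with one
# binary classifier per IPC subclass.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

docs = [
    "battery electrode coating method",        # fields concatenated per patent
    "wireless channel estimation apparatus",
    "battery cell temperature control circuit",
]
labels = [{"H01M"}, {"H04L"}, {"H01M", "H05K"}]  # IPC subclasses (toy)

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)                    # label indicator matrix

clf = make_pipeline(TfidfVectorizer(),
                    OneVsRestClassifier(LogisticRegression(max_iter=1000)))
clf.fit(docs, Y)
pred = clf.predict(["electrode coating for battery cells"])
print(mlb.inverse_transform(pred))               # predicted IPC subclass set
```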

Damage of Whole Crop Maize in Abnormal Climate Using Machine Learning (이상기상 시 사일리지용 옥수수의 기계학습을 이용한 피해량 산출)

  • Kim, Ji Yung;Choi, Jae Seong;Jo, Hyun Wook;Kim, Moon Ju;Kim, Byong Wan;Sung, Kyung Il
    • Journal of The Korean Society of Grassland and Forage Science / v.42 no.2 / pp.127-136 / 2022
  • This study was conducted to estimate the damage to whole crop maize (WCM) under abnormal climate using machine learning and to present the damage through mapping. A total of 3,232 WCM records were collected, and the climate data were obtained from the Korea Meteorological Administration's open meteorological data portal. Deep Crossing was used as the machine learning model. The damage was calculated by the model using climate data from the Automated Synoptic Observing System (ASOS, 95 sites), as the difference between the normal dry matter yield (DMY_normal) and the abnormal one (DMY_abnormal). Normal climate was defined as the 40 years of climate data according to the year of each WCM record (1978-2017), and the level of abnormal climate was set as a multiple of the standard deviation, following the World Meteorological Organization (WMO) standard. DMY_normal ranged from 13,845 to 19,347 kg/ha. The damage to WCM differed by region and by level of abnormal climate, ranging from -305 to 310, -54 to 89, and -610 to 813 kg/ha for abnormal temperature, precipitation, and wind speed, respectively. The maximum damage was 310 kg/ha when the abnormal temperature was at level +2 (+1.42 ℃), 89 kg/ha when the abnormal precipitation was at level -2 (-0.12 mm), and 813 kg/ha when the abnormal wind speed was at level -2 (-1.60 m/s). The damage calculated through the WMO method was presented as a map using QGIS. Some areas were left blank because no data were available; to calculate the damage in these blank areas, the Automatic Weather System (AWS), which provides data from more sites than the ASOS, could be used.
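
The damage calculation reduces to a small computation once a yield model is available; in the sketch below, predict_dmy is a hypothetical stand-in for the trained Deep Crossing model.

```python
# Damage = DMY_normal - DMY_abnormal, where the abnormal climate level
# is a multiple of the standard deviation around the 40-year normal
# (WMO-style levels). The yield model here is a placeholder.
import numpy as np

def predict_dmy(temperature_c: float) -> float:
    """Placeholder for the trained yield model (kg/ha)."""
    return 16_000.0 - 150.0 * abs(temperature_c - 22.0)

normal_temps = np.random.default_rng(0).normal(22.0, 0.71, size=40)  # 40-yr series
mean, std = normal_temps.mean(), normal_temps.std()

level = +2                                   # abnormality level in std-dev units
abnormal_temp = mean + level * std

dmy_normal = predict_dmy(mean)
dmy_abnormal = predict_dmy(abnormal_temp)
damage = dmy_normal - dmy_abnormal           # kg/ha
print(f"damage at level {level:+d}: {damage:.1f} kg/ha")
```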

Estimation of High Resolution Sea Surface Salinity Using Multi Satellite Data and Machine Learning (다종 위성자료와 기계학습을 이용한 고해상도 표층 염분 추정)

  • Sung, Taejun;Sim, Seongmun;Jang, Eunna;Im, Jungho
    • Korean Journal of Remote Sensing / v.38 no.5_2 / pp.747-763 / 2022
  • Ocean salinity affects ocean circulation on a global scale, and low-salinity water around coastal areas often has an impact on aquaculture and fisheries. Microwave satellite sensors (e.g., Soil Moisture Active Passive [SMAP]) have provided sea surface salinity (SSS) based on the dielectric characteristics of water associated with SSS and sea surface temperature (SST). In this study, a Light Gradient Boosting Machine (LGBM)-based model for generating high-resolution SSS from Geostationary Ocean Color Imager (GOCI) data was proposed, using the machine learning-based improved SMAP SSS of Jang et al. (2022) as reference data (SMAP SSS (Jang)). Three schemes with different input variables were tested, and scheme 3, with all variables including Multi-scale Ultra-high Resolution SST, yielded the best performance (coefficient of determination = 0.60, root mean square error = 0.91 psu). The proposed LGBM-based GOCI SSS had a spatiotemporal pattern similar to that of SMAP SSS (Jang), with much higher spatial resolution, even in coastal areas where SMAP SSS (Jang) was not available. In addition, when tested on the great flood that occurred in southern China in August 2020, the GOCI SSS simulated the spatial and temporal change of the Changjiang Diluted Water well. This research demonstrated the potential of optical satellite data to generate high-resolution SSS in association with improved microwave-based SSS, especially in coastal areas.
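
A minimal sketch of the LGBM regression step, assuming the lightgbm package; the six synthetic predictors stand in for the GOCI-derived variables and SST, and the reference salinities are random placeholders for SMAP SSS (Jang).

```python
# GOCI-derived predictors regressed against reference SSS with LightGBM;
# variable names and data are placeholders, not the paper's inputs.
import numpy as np
from lightgbm import LGBMRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 6))               # e.g., GOCI bands/products + SST
y = 32.0 + X @ rng.normal(scale=0.2, size=6)   # SSS in psu (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LGBMRegressor(n_estimators=500, learning_rate=0.05, random_state=0)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
rmse = float(np.sqrt(np.mean((pred - y_te) ** 2)))
print(f"RMSE: {rmse:.3f} psu, R^2: {model.score(X_te, y_te):.3f}")
```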

Sentiment Analysis of Movie Reviews Using an Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.141-154 / 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representations in a variety of applications. Sentiment analysis is an important technology that can distinguish low-quality from high-quality content through the text data of products, and it has proliferated within text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. It has been studied from various directions in terms of accuracy, from simple rule-based approaches to dictionary-based approaches using predefined labels, and it is one of the most active research topics in natural language processing and text mining. Because real online reviews are openly available, they are easy to collect and can directly affect a business: in marketing, real-world customer information is gathered from websites rather than surveys, and whether a website's posts are positive or negative is reflected in sales. However, many reviews on a website are not well written and are difficult to classify. Earlier studies in this research area used review data from the Amazon.com shopping mall, whereas recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, etc. Nevertheless, a lack of accuracy is recognized because sentiment scores change according to the subject, paragraph, sentiment lexicon direction, and sentence strength. This study aims to classify the polarity of sentiment as positive or negative and to increase the prediction accuracy of polarity analysis using the pretrained IMDB review data set. First, popular machine learning algorithms for text classification, naive Bayes (NB), support vector machines (SVM), XGBoost, random forests (RF), and gradient boosting, were adopted as comparative models. Second, deep learning has demonstrated the ability to extract complex, discriminative features from data; representative algorithms are convolutional neural networks (CNN), recurrent neural networks (RNN), and long short-term memory (LSTM). A CNN can be used similarly to a bag of words when processing a sentence in vector format, but it does not consider the sequential nature of the data. An RNN handles word order well because it takes the time information of the data into account, but it suffers from the long-term dependency problem, which LSTM was designed to solve. For comparison, CNN and LSTM were chosen as the simple deep learning models, and in addition to the classical machine learning algorithms, CNN, LSTM, and the integrated model were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and precision to find the optimal combination, and tried to understand how these models work for sentiment analysis. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features of text. The reasons for combining the two algorithms are as follows. A CNN can extract classification features automatically through its convolution layers and massively parallel processing, whereas an LSTM is not capable of highly parallel processing.
Like faucets, the LSTM's input, output, and forget gates can be opened and closed at the desired time, which has the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can compensate for the CNN's inability to model long-term dependencies. Furthermore, when an LSTM is attached after the CNN's pooling layer, the model has an end-to-end structure in which spatial and temporal features can be designed simultaneously. The combined CNN-LSTM achieved 90.33% accuracy; it is slower than the CNN but faster than the LSTM, and it was more accurate than the other models. In addition, the word embedding layer can be improved by training the kernels step by step. CNN-LSTM compensates for the weaknesses of each model and improves learning layer by layer through the end-to-end structure. Based on these reasons, this study enhances the classification accuracy of movie reviews using the integrated CNN-LSTM model.
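
The integrated architecture can be sketched in Keras as a convolution and pooling stage feeding an LSTM; the vocabulary size, sequence length, and layer widths below are illustrative choices, not the paper's tuned hyperparameters.

```python
# A convolution + pooling stage feeding an LSTM, ending in a binary
# sentiment output, as in the CNN-LSTM combination described above.
from tensorflow.keras import layers, models

VOCAB, MAXLEN = 20_000, 200

model = models.Sequential([
    layers.Input(shape=(MAXLEN,)),              # padded word-index sequences
    layers.Embedding(VOCAB, 128),               # word embeddings
    layers.Conv1D(64, 5, activation="relu"),    # local n-gram features
    layers.MaxPooling1D(4),                     # shorten the sequence
    layers.LSTM(64),                            # order-aware summary
    layers.Dense(1, activation="sigmoid"),      # positive / negative
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```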

E-Discovery Process Model and Alternative Technologies for an Effective Litigation Response of the Company (기업의 효과적인 소송 대응을 위한 전자증거개시 절차 모델과 대체 기술)

  • Lee, Tae-Rim;Shin, Sang-Uk
    • Journal of Digital Convergence
    • /
    • v.10 no.8
    • /
    • pp.287-297
    • /
    • 2012
  • In order to prepare for the introduction of the E-Discovery system from the United States and to cope with possible changes to legal systems, we propose a general E-Discovery process and the essential tasks of each phase. The proposed process model is designed based on an analysis of well-known projects such as EDRM and The Sedona Conference, which represent advanced work on standardizing E-Discovery procedures and supplying guidelines to practitioners. In addition, machine learning algorithms, open-source information retrieval libraries, and Hadoop-based distributed processing technologies for big data are introduced, and methods for applying them to E-Discovery work scenarios are proposed. This information will be useful to vendors and developers of E-Discovery service solutions, helpful to company owners who intend to rebuild their business processes, and it enables those facing a major lawsuit to handle the situation effectively.

Oil Price Forecasting Based on Machine Learning Techniques (기계학습기법에 기반한 국제 유가 예측 모델)

  • Park, Kang-Hee;Hou, Tianya;Shin, Hyun-Jung
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.37 no.1
    • /
    • pp.64-73
    • /
    • 2011
  • Oil price prediction is an important issue for government regulators and related industries. When employing time series techniques for prediction, however, the problem becomes difficult and challenging, since the behavior of oil price series is dominated by quantitatively unexplained irregular external factors, e.g., supply- or demand-side shocks, political conflicts specific to events in the Middle East, and direct or indirect influences from other global economic indices. Identifying and quantifying the relationship between oil prices and those external factors may provide more relevant predictions than attempting to uncover the underlying structure of the series itself. Technically, this implies that the prediction should be based on vector data describing the degrees of those relationships rather than on the series data. This paper proposes a novel method for time series prediction using semi-supervised learning, which was originally designed only for vector-type data. First, several time series of oil prices and other economic indices are transformed into multidimensional vectors by various types of technical indicators and diverse combinations of indicator-specific hyper-parameters. Then, to avoid the curse of dimensionality and redundancy among the dimensions, the well-known feature extraction techniques PCA and NLPCA are employed. With the extracted features, a timepoint-specific similarity matrix of oil prices and other economic indices is built, and finally, semi-supervised learning generates a one-timepoint-ahead prediction. The series of West Texas Intermediate (WTI) crude oil prices was used to verify the proposed method, and the experiments showed promising results: an average AUC of 0.86.
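
A rough sketch of the pipeline, under stated simplifications: hand-rolled technical indicators, PCA only (no NLPCA), and scikit-learn's LabelSpreading as the graph-based semi-supervised learner, predicting next-step direction on a synthetic series rather than the WTI data.

```python
# Price series -> technical-indicator vectors -> PCA -> graph-based
# semi-supervised learning over a similarity structure, predicting
# next-step up/down direction for the unlabeled most-recent points.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)
price = np.cumsum(rng.normal(size=600)) + 100.0       # synthetic series

def indicators(p, t, w=10):
    window = p[t - w:t]
    return [window.mean(), window.std(),              # moving average / volatility
            p[t - 1] - p[t - w],                      # momentum
            (p[t - 1] - window.min()) / (window.max() - window.min() + 1e-9)]

T = range(10, len(price) - 1)
X = np.array([indicators(price, t) for t in T])
y = np.array([1 if price[t + 1] > price[t] else 0 for t in T])

X = PCA(n_components=3).fit_transform(X)              # dimensionality reduction
y_ssl = y.copy()
y_ssl[-50:] = -1                                      # last 50 points unlabeled

model = LabelSpreading(kernel="rbf").fit(X, y_ssl)
acc = (model.transduction_[-50:] == y[-50:]).mean()
print(f"directional accuracy on unlabeled points: {acc:.2f}")
```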

Extraction of Relationships between Scientific Terms based on Composite Kernels (혼합 커널을 활용한 과학기술분야 용어간 관계 추출)

  • Choi, Sung-Pil;Choi, Yun-Soo;Jeong, Chang-Hoo;Myaeng, Sung-Hyon
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.15 no.12
    • /
    • pp.988-992
    • /
    • 2009
  • In this paper, we attempted to extract binary relations between terms using composite kernels consisting of convolution parse tree kernels and WordNet verb synset vector kernels, which explain the semantic relationship between two entities in a sentence. To evaluate the performance of our system, we used three domain-specific test collections. The experimental results demonstrate the superiority of our system on all the target collections. In particular, the increase in effectiveness on KREC 2008, 8% in F1, shows that the core contexts around the entities play an important role in boosting the overall performance of relation extraction.
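
The composite-kernel idea can be sketched as a weighted sum of two Gram matrices fed to an SVM as a precomputed kernel; the random features below are placeholders for real parse-tree and verb-synset representations, and the 0.7/0.3 weights are arbitrary. A weighted sum of valid kernels is itself a valid kernel, which is what makes this combination admissible.

```python
# Two kernel matrices (standing in for a convolution parse-tree kernel
# and a WordNet verb-synset vector kernel) combined as a weighted sum
# and used as a precomputed SVM kernel.
import numpy as np
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
tree_feats = rng.normal(size=(100, 20))    # stand-in for the tree-kernel space
synset_vecs = rng.normal(size=(100, 10))   # stand-in for verb-synset vectors
y = rng.integers(0, 2, size=100)           # relation / no relation (toy labels)

K = 0.7 * linear_kernel(tree_feats) + 0.3 * rbf_kernel(synset_vecs)

clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", clf.score(K, y))
```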