• Title/Summary/Keyword: Learning Algorithm (학습알고리즘)

Search Results: 3,936

The Agriculture Decision-making System(ADS) based on Deep Learning for improving crop productivity (농산물 생산성 향상을 위한 딥러닝 기반 농업 의사결정시스템)

  • Park, Jinuk;Ahn, Heuihak;Lee, ByungKwan
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.11 no.5 / pp.521-530 / 2018
  • This paper proposes the Agriculture Decision-making System (ADS), a deep-learning based system for improving crop productivity that collects location-based weather information in support of precision agriculture, predicts the current crop condition from the collected information together with real-time crop data, and notifies the farmer of the result. The system works as follows. The ICM (Information Collection Module) collects location-based weather information in support of precision agriculture. The DRCM (Deep learning based Risk Calculation Module) predicts whether the C, H, N, and moisture content of the soil are appropriate for growing specific crops under the current weather. The RNM (Risk Notification Module) notifies the farmer of the DRCM's prediction result. The proposed system improves stability because it reduces the rate at which accuracy degrades as the amount of data increases and, unlike existing systems, applies unsupervised learning at the analysis stage. Simulation results show that the ADS improved the success rate of data analysis by about 6%. The ADS predicts the current crop growth condition accurately, prevents crop diseases in advance in various environments, and provides optimized conditions for growing crops.
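As a rough illustration of the ICM → DRCM → RNM flow described in this abstract, here is a minimal sketch in Python. The feature columns (C, H, N, soil moisture), the synthetic data, and the use of KMeans clustering as the unsupervised analysis step are assumptions for illustration only, not the authors' implementation.

```python
# Hypothetical sketch of the ADS pipeline; module names mirror the paper,
# everything else (features, data, KMeans) is an illustrative assumption.
import numpy as np
from sklearn.cluster import KMeans

def collect_information(n_samples: int = 100) -> np.ndarray:
    """ICM stand-in: location-based weather/soil readings."""
    rng = np.random.default_rng(0)
    # columns: carbon, hydrogen, nitrogen, soil moisture (synthetic)
    return rng.normal(loc=[2.0, 1.0, 0.5, 30.0],
                      scale=[0.3, 0.2, 0.1, 5.0],
                      size=(n_samples, 4))

def calculate_risk(history: np.ndarray, current: np.ndarray) -> bool:
    """DRCM stand-in: flag the current reading if it falls outside
    the majority cluster of past readings (unsupervised analysis)."""
    model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(history)
    normal_cluster = np.bincount(model.labels_).argmax()
    return model.predict(current.reshape(1, -1))[0] != normal_cluster

def notify(risky: bool) -> None:
    """RNM stand-in: push the prediction result to the farmer."""
    print("WARNING: soil condition unsuitable" if risky else "Condition OK")

history = collect_information()
notify(calculate_risk(history, np.array([2.0, 1.0, 0.1, 12.0])))
```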

Modeling Nutrient Uptake of Cucumber Plant Based on EC and Nutrient Solution Uptake in Closed Perlite Culture (순환식 펄라이트재배에서 EC와 양액흡수량을 이용한 오이 양분흡수 모델링)

  • 김형준;우영회;김완순;조삼증;남윤일
    • Proceedings of the Korean Society for Bio-Environment Control Conference / 2001.04b / pp.75-76 / 2001
  • To develop a nutrient-uptake model for the reuse of drainage in closed perlite culture, five EC treatments (1.5, 1.8, 2.1, 2.4, and 2.7 dS·m-1) were applied. Nutrient-solution uptake did not differ among EC levels until the middle of the growth period, but thereafter uptake decreased as EC increased (Fig. 1). Uptake of NO3-N, P, and K maintained differences among treatments throughout the growth period; N and K remained at constant levels after mid-growth, whereas P tended to increase slightly over the growth period. S uptake dropped sharply in all treatments after mid-growth and showed no differences among treatments late in the growth period (Fig. 2). As with the mineral-ion uptake rates of cucumber, uptake amounts also differed among EC levels, suggesting that EC can be used as a factor for estimating mineral-ion uptake. Mineral-ion uptake showed no differences among EC treatments early in growth and clear differences after mid-growth, with the differences somewhat reduced at high concentrations late in growth. Regression equations predicting cucumber ion uptake were derived with nutrient-solution uptake per unit solar radiation and EC as the main variables. The correlation coefficients of the estimation equations were high for all ions except S, and especially high for N, P, K, and Ca. Although the correlation coefficient for S was low at 0.47, the regression equations for all ions were significant at the 1% level, so the model equations can be used to estimate mineral-ion uptake in closed hydroponic culture (Table 1). Comparison with measured values showed a high positive correlation within the 1% confidence interval, indicating that practical application is feasible (Fig. 3). (An illustrative regression sketch follows below.)

  • PDF
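To make the regression approach in the abstract above concrete, the sketch below fits a multiple linear regression of nitrogen uptake on nutrient-solution uptake per unit solar radiation and EC. The synthetic data and coefficients are illustrative assumptions; the paper's actual fitted equations and correlation coefficients are those reported in its Table 1.

```python
# Illustrative regression: ion uptake ~ solution uptake per unit radiation + EC.
# The data below are synthetic stand-ins, not the study's measurements.
import numpy as np

rng = np.random.default_rng(1)
n = 50
solution_uptake = rng.uniform(0.5, 3.0, n)      # uptake per unit solar radiation
ec = rng.choice([1.5, 1.8, 2.1, 2.4, 2.7], n)   # dS/m treatments from the study
# synthetic "observed" N uptake with noise, for demonstration only
n_uptake = 4.0 * solution_uptake + 1.2 * ec + rng.normal(0, 0.3, n)

# design matrix [solution uptake, EC, intercept]; ordinary least squares
X = np.column_stack([solution_uptake, ec, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, n_uptake, rcond=None)

pred = X @ coef
r = np.corrcoef(pred, n_uptake)[0, 1]  # cf. the per-ion correlations in Table 1
print(f"fitted coefficients: {coef}, correlation r = {r:.3f}")
```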

A Study on the Control System of Maximum Demand Power Using Neural Network and Fuzzy Logic (신경망과 퍼지논리를 이용한 최대수요전력 제어시스템에 관한연구)

  • 조성원
    • Journal of the Korean Institute of Intelligent Systems / v.9 no.4 / pp.420-425 / 1999
  • The maximum demand controller is an electrical device installed on the consumer side of a power system to monitor the electrical energy consumed during each integrating period and to prevent the target maximum demand (MD) from being exceeded by disconnecting sheddable loads. By avoiding peak loads and spreading the energy requirement, the controller helps maximize the utility factor of the generating systems. This not only saves energy but also reduces the budget for constructing infrastructure by keeping the number of generating plants to a minimum. Conventional MD controllers often produce a large number of control actions during each integrating period and/or undesirable load-disconnecting operations during the beginning of the integrating period, which makes users avoid them. In this paper, a fuzzy control technique is used to overcome the disadvantages of conventional MD control systems. The proposed MD controller consists of a predictor module and a fuzzy MD control module. The proposed forecasting method uses the SOFM neural network model rather than time-series analysis, and thus has the inherent advantages of neural networks such as parallel processing, generalization, and robustness. The fuzzy MD controller determines the sensitivity of the control action based on how close the time is to the end of the integrating period and on the urgency of the load-interrupting action as the predicted demand approaches the target. Experimental results show that the proposed method has more accurate forecasting/control performance than previous methods. (An illustrative sketch of the fuzzy control logic follows below.)

  • PDF
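As a rough sketch of the fuzzy MD control idea above, the code below derives a control sensitivity from two membership values: closeness to the end of the integrating period and the predicted overshoot of the target demand. The membership shapes, the 0.5 threshold, and the stubbed demand predictor (standing in for the SOFM forecaster) are assumptions for illustration, not the paper's actual rules.

```python
# Illustrative fuzzy-style MD control decision; all shapes/thresholds assumed.
def predicted_demand(elapsed_min: float) -> float:
    """Stub for the SOFM forecaster: demand expected at period end (kW)."""
    return 950.0 + 8.0 * elapsed_min  # placeholder trajectory

def shed_load(target_kw: float, elapsed_min: float,
              period_min: float = 15.0) -> bool:
    demand = predicted_demand(elapsed_min)
    # fuzzy-style memberships in [0, 1]
    time_urgency = min(1.0, elapsed_min / period_min)                # "late in period"
    overshoot = min(1.0, max(0.0, demand / target_kw - 1.0) * 20.0)  # "over target"
    sensitivity = time_urgency * overshoot                           # rule: AND
    return sensitivity > 0.5  # disconnect sheddable loads above threshold

for t in (3, 8, 13):  # minutes into a 15-minute integrating period
    print(f"t={t} min -> shed load: {shed_load(target_kw=1000.0, elapsed_min=t)}")
```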

Relational Database SQL Test Auto-scoring System

  • Hur, Tai-Sung
    • Journal of the Korea Society of Computer and Information / v.24 no.11 / pp.127-133 / 2019
  • SQL is the most common language in data processing, so most colleges offer SQL in their curricula. In this research, an auto-scoring SQL test system is proposed for efficient SQL education. The system scores automatically with algorithms instead of an expensive DBMS (Database Management System), and it produced satisfactory results. A test question bank was established covering 'personnel management' and 'academic management', providing users with a different test set each time. Scoring divides queries into two groups: those that do not change the table (SELECT) and those that actually change it (UPDATE, INSERT, DELETE). For a search, the answer query and the student's response are both executed and their results compared, so the user's answer is evaluated by comparing its result table with that of the correct answer. Since modification, insertion, and deletion actually change the data table, the data are restored using the ROLLBACK command. The system was implemented and tested with 772 runs by 88 students in the Computer Information Division of our college. The results show that the average scoring time for a test of 10 questions is 0.052 seconds; considering that a human grader cannot process multiple responses at the same time, this performance is distinctive. In the near future, we plan to develop a question system that takes the difficulty of each question into account.
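A minimal sketch of the scoring logic described in this abstract, using Python's built-in sqlite3 in place of a commercial DBMS. The schema and queries are hypothetical; comparing result sets for SELECT, and comparing table state and then restoring it with ROLLBACK for UPDATE/INSERT/DELETE, follows the description above.

```python
# Hypothetical SQL auto-scoring sketch with sqlite3; schema/queries assumed.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE personnel (id INTEGER, name TEXT, dept TEXT)")
con.executemany("INSERT INTO personnel VALUES (?, ?, ?)",
                [(1, "Kim", "HR"), (2, "Lee", "IT")])
con.commit()

def score_select(answer_sql: str, response_sql: str) -> bool:
    """Compare the two result sets (order-insensitive)."""
    expected = sorted(con.execute(answer_sql).fetchall())
    actual = sorted(con.execute(response_sql).fetchall())
    return expected == actual

def score_dml(answer_sql: str, response_sql: str, table: str) -> bool:
    """Run each statement, snapshot the table, then ROLLBACK to restore it."""
    states = []
    for sql in (answer_sql, response_sql):
        con.execute(sql)
        states.append(sorted(con.execute(f"SELECT * FROM {table}").fetchall()))
        con.rollback()  # restore the original table, as described in the paper
    return states[0] == states[1]

print(score_select("SELECT name FROM personnel WHERE dept='IT'",
                   "SELECT name FROM personnel WHERE dept LIKE 'IT'"))
print(score_dml("UPDATE personnel SET dept='OPS' WHERE id=1",
                "UPDATE personnel SET dept='OPS' WHERE name='Kim'",
                "personnel"))
```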

Development of artificial intelligence-based river flood level prediction model capable of independent self-warning (독립적 자체경보가 가능한 인공지능기반 하천홍수위예측 모형개발)

  • Kim, Sooyoung;Kim, Hyung-Jun;Yoon, Kwang Seok
    • Journal of Korea Water Resources Association / v.54 no.12 / pp.1285-1294 / 2021
  • In recent years, as rainfall has become concentrated and rainfall intensity has increased worldwide due to climate change, the scale of flood damage is increasing. Rainfall of previously unobserved magnitude falls, and rainy seasons last for record lengths. These damages are concentrated in ASEAN countries, where at least 20 million people are affected by frequent flooding due to recent sea-level rise, typhoons, and torrential rain. Korea has been transferring its domestic flood warning system to ASEAN countries through various ODA projects, but because the communication networks there are unstable, a centrally controlled approach alone has limits. Therefore, in this study, an artificial-intelligence-based flood prediction model was developed for an observation station that can measure water level and rainfall and, on its own, predict floods and issue warnings. Training, validation, and testing were carried out for lead times of 0.5, 1, 2, 3, and 6 hours using rainfall and water-level observations recorded at 10-minute intervals from 2009 to 2020 at the Junjukbi-bridge station on Seolma stream. LSTM was adopted as the artificial-intelligence algorithm. The model showed excellent fit and low error for all lead times. For a stream like Seolma stream, where the arrival time is short because the watershed is small and steep, a lead time of 1 hour gives very good prediction results, and longer lead times are expected to be possible depending on the size and slope of the watershed.
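As a concrete illustration of the setup described above, here is a minimal Keras sketch of an LSTM that maps a window of 10-minute rainfall and water-level readings to the water level several steps ahead. The window length, network size, and synthetic data are illustrative assumptions, not the paper's configuration.

```python
# Illustrative lead-time forecasting with an LSTM; all sizes/data assumed.
import numpy as np
import tensorflow as tf

WINDOW, LEAD = 36, 6  # 6 h of 10-minute inputs, 1-hour (6-step) lead time

# synthetic rainfall/water-level series standing in for the 2009-2020
# 10-minute record at the Junjukbi-bridge station
t = np.arange(2000, dtype=np.float32)
rain = np.maximum(0.0, np.sin(t / 50.0)) + 0.1
stage = np.convolve(rain, np.ones(12) / 12, mode="same")  # lagged response
series = np.stack([rain, stage], axis=1).astype("float32")

# windows of WINDOW past readings -> water level LEAD steps after the window
X = np.stack([series[i:i + WINDOW]
              for i in range(len(series) - WINDOW - LEAD + 1)])
y = series[WINDOW + LEAD - 1:, 1:]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, 2)),  # rainfall + water level
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),           # level at t + LEAD
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, batch_size=64, verbose=0)
print("one-hour-ahead level:", model.predict(X[:1], verbose=0)[0, 0])
```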

A Study on the Air Pollution Monitoring Network Algorithm Using Deep Learning (심층신경망 모델을 이용한 대기오염망 자료확정 알고리즘 연구)

  • Lee, Seon-Woo;Yang, Ho-Jun;Lee, Mun-Hyung;Choi, Jung-Moo;Yun, Se-Hwan;Kwon, Jang-Woo;Park, Ji-Hoon;Jung, Dong-Hee;Shin, Hye-Jung
    • Journal of Convergence for Information Technology / v.11 no.11 / pp.57-65 / 2021
  • We propose a novel method for detecting abnormal data with specific symptoms in an air pollution measurement system using deep learning. Existing methods generally detect abnormal data by classifying data that show unusual patterns relative to the existing time series, but such approaches have limitations in detecting specific symptoms. In this paper, we use the DeepLab V3+ model, which is mainly used for foreground segmentation of images, with its structure modified to handle one-dimensional data. Instead of images, the model receives time-series data from multiple sensors and can detect data showing specific symptoms. In addition, we improve the model's performance by reducing the complexity of noisy time-series data using piecewise aggregation approximation. The experimental results confirm that anomaly detection can be performed successfully.
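The piecewise aggregation approximation (PAA) step mentioned above is easy to illustrate: a long noisy series is compressed into the means of equal-length segments before being fed to the 1-D model. The sketch below shows only this step, with a synthetic signal; the DeepLab V3+ adaptation itself is not reproduced here.

```python
# PAA: replace each of n_segments equal slices of a series with its mean.
import numpy as np

def paa(series: np.ndarray, n_segments: int) -> np.ndarray:
    """Compress a 1-D series to n_segments segment means."""
    trimmed = series[: len(series) // n_segments * n_segments]
    return trimmed.reshape(n_segments, -1).mean(axis=1)

rng = np.random.default_rng(2)
signal = np.sin(np.linspace(0, 6 * np.pi, 600)) + rng.normal(0, 0.4, 600)
compressed = paa(signal, n_segments=60)  # 600 noisy points -> 60 means
print(signal.shape, "->", compressed.shape)
```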

Comparative Analysis of CNN Deep Learning Model Performance Based on Quantization Application for High-Speed Marine Object Classification (고속 해상 객체 분류를 위한 양자화 적용 기반 CNN 딥러닝 모델 성능 비교 분석)

  • Lee, Seong-Ju;Lee, Hyo-Chan;Song, Hyun-Hak;Jeon, Ho-Seok;Im, Tae-ho
    • Journal of Internet Computing and Services / v.22 no.2 / pp.59-68 / 2021
  • As artificial intelligence (AI) technologies, which have grown rapidly in recent years, began to be applied to marine environments such as ships, research on applying CNN-based models specialized for digital video has become active. In the E-Navigation service, which combines various technologies to detect floating objects posing collision risks, reduce human error, and prevent fires inside ships, real-time processing is of great importance. Added functions, however, demand high-performance processors, which raises prices and imposes a cost burden on shipowners. This study therefore proposes a method of processing information at a high rate while maintaining accuracy by applying quantization techniques to a deep learning model. First, videos were pre-processed for the detection of floating objects at sea to ensure the efficient delivery of video data to the deep learning model's input. Second, quantization, one of the lightweighting techniques for deep learning models, was applied to reduce memory usage and increase processing speed. Finally, the proposed deep learning model, with video pre-processing and quantization applied, was deployed on various embedded boards to measure its accuracy and processing speed and test its performance. The proposed method reduced memory usage by a factor of four and improved the processing speed about four to five times while maintaining the original recognition accuracy.
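As an illustration of quantization as a lightweighting step in the spirit of this abstract, the sketch below applies TensorFlow Lite's default post-training optimization to a tiny stand-in CNN. The paper's exact model, quantization scheme, and embedded targets are not specified here; this simply shows the mechanism by which float32 weights become int8 and the model shrinks roughly four-fold.

```python
# Post-training weight quantization of a stand-in CNN via TensorFlow Lite.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),  # e.g. 4 object classes
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # quantize weights
tflite_bytes = converter.convert()

# float32 -> int8 weights shrink the stored model roughly 4x
print(f"quantized model size: {len(tflite_bytes)} bytes")
```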

Dimensionality Reduction of Feature Set for API Call based Android Malware Classification

  • Hwang, Hee-Jin;Lee, Soojin
    • Journal of the Korea Society of Computer and Information / v.26 no.11 / pp.41-49 / 2021
  • All application programs, including malware, call the Application Programming Interface (API) upon execution. Using this characteristic, attempts to detect and classify malware based on API call information have recently been actively studied. However, datasets containing API call information require a large amount of computation and processing time, and information that does not significantly affect classification may hurt the learning model's accuracy. Therefore, in this paper, we propose a method of extracting an essential feature set by reducing the dimensionality of API call information with various feature selection methods. We used CICAndMal2020, a recently released Android malware dataset, for the experiment. After extracting essential feature sets through the various feature selection methods, Android malware classification was conducted using a CNN (Convolutional Neural Network) and the results were analyzed. The results showed that the selected feature set and the weight priorities vary with the feature selection method. In binary classification, malware was classified with 97% accuracy even when the feature set was reduced to 15% of its original size; in multiclass classification, an average accuracy of 83% was achieved with the feature set reduced to 8% of its original size.
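As a concrete illustration of the feature-selection step described above, the sketch below ranks API-call features with chi-squared scores and keeps 15% of them, mirroring the binary-classification setting. SelectKBest/chi2 is one plausible choice among the "various feature selection methods" the paper compares; the random counts stand in for CICAndMal2020.

```python
# Illustrative dimensionality reduction of API-call features; data assumed.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

rng = np.random.default_rng(3)
X = rng.integers(0, 20, size=(500, 200)).astype(float)  # 200 API-call counts
y = rng.integers(0, 2, size=500)                        # benign / malware

selector = SelectKBest(chi2, k=30)  # keep 15% of the 200 features
X_small = selector.fit_transform(X, y)
print(X.shape, "->", X_small.shape)
print("selected feature indices:", selector.get_support(indices=True)[:10])
```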

Development of a deep-learning based automatic tracking of moving vehicles and incident detection processes on tunnels (딥러닝 기반 터널 내 이동체 자동 추적 및 유고상황 자동 감지 프로세스 개발)

  • Lee, Kyu Beom;Shin, Hyu Soung;Kim, Dong Gyu
    • Journal of Korean Tunnelling and Underground Space Association / v.20 no.6 / pp.1161-1175 / 2018
  • In road tunnels, an unexpected event can easily be followed by a large secondary accident because drivers' sight is limited. Automated incident detection systems have therefore been in operation, but they show very low detection rates due to the very low image quality of tunnel CCTVs. To overcome that limit, a deep-learning-based tunnel incident detection system was developed, which already showed high detection rates in November 2017. However, since the object detection process dealt only with still images, the moving direction and speed of vehicles could not be identified, and it was hard to detect the stopping and reverse-driving status of moving vehicles. Therefore, in addition to object detection, an object tracking method was introduced and combined with the detection algorithm to track moving vehicles, and a stopping/reverse discrimination algorithm was proposed and implemented in the combined incident detection process. Detection performance for stopping, reverse driving, and fire incidents was evaluated, each showing a 100% detection rate, but detection of the 'person' object showed a relatively low success rate of 78.5%. Nevertheless, it is believed that richer image big data could dramatically enhance the detection capacity of the automatic incident detection system.
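The stopping/reverse discrimination idea described above can be sketched simply: once a detected vehicle is tracked across frames, the sign and magnitude of its net displacement along the traffic direction classify its state. The thresholds and the assumption that the traffic direction maps to the image x-axis are illustrative, not the paper's algorithm.

```python
# Illustrative stopping/reverse classification from tracked centroids.
from typing import List, Tuple

def classify_motion(track: List[Tuple[float, float]],
                    stop_eps: float = 2.0) -> str:
    """track: centroid (x, y) per frame; x grows in the traffic direction."""
    dx = track[-1][0] - track[0][0]  # net displacement over the window
    if abs(dx) < stop_eps:
        return "stopped"
    return "forward" if dx > 0 else "reverse"

print(classify_motion([(100, 50), (100.5, 50), (101, 50)]))  # stopped
print(classify_motion([(100, 50), (80, 50), (60, 50)]))      # reverse
```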

Prediction of Traffic Congestion in Seoul by Deep Neural Network (심층인공신경망(DNN)과 다각도 상황 정보 기반의 서울시 도로 링크별 교통 혼잡도 예측)

  • Kim, Dong Hyun;Hwang, Kee Yeon;Yoon, Young
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.18 no.4 / pp.44-57 / 2019
  • Various studies have been conducted to relieve traffic congestion in metropolitan cities through accurate traffic flow prediction. Most studies are based on the assumption that past traffic patterns repeat in the future, and models built on that assumption fall short when irregular traffic patterns occur abruptly. Instead, approaches that predict traffic patterns through big-data analytics and artificial intelligence have emerged. In particular, deep learning algorithms such as RNNs have been prevalent for predicting temporal traffic flow as a time series, but they do not perform well at long-term prediction. In this paper, we take into account various external factors that may affect traffic flow and model the correlation between multi-dimensional context information and temporal traffic speed patterns using deep neural networks. Our model, trained with traffic data from the TOPIS system operated by the city of Seoul, Korea, can predict traffic speed on a specific date with accuracy reaching nearly 90%. We expect that the accuracy can be improved further by taking into account additional factors such as accidents and construction.
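As a rough sketch of the modeling idea above, the code below trains a small dense network that maps multi-dimensional context (e.g., time of day, day of week, rainfall, holiday flag) plus recent link speeds to a future link speed. The feature choices, layer sizes, and random data are illustrative assumptions, not the TOPIS-trained model.

```python
# Illustrative DNN mapping context features + recent speeds to future speed.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(4)
# columns: hour/24, day-of-week/7, rain(mm), holiday(0/1), 4 recent speeds
X = rng.random((1000, 8)).astype("float32")
y = (60 * X[:, 4:].mean(axis=1) + 5 * rng.random(1000)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),  # predicted link speed (km/h)
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print("sample prediction:", model.predict(X[:1], verbose=0)[0, 0])
```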