• Title/Summary/Keyword: Prediction performance improvement (예측성능 개선)


Sorghum Panicle Detection using YOLOv5 based on RGB Image Acquired by UAV System (무인기로 취득한 RGB 영상과 YOLOv5를 이용한 수수 이삭 탐지)

  • Min-Jun, Park;Chan-Seok, Ryu;Ye-Seong, Kang;Hye-Young, Song;Hyun-Chan, Baek;Ki-Su, Park;Eun-Ri, Kim;Jin-Ki, Park;Si-Hyeong, Jang
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.24 no.4
    • /
    • pp.295-304
    • /
    • 2022
  • The purpose of this study is to detect sorghum panicles using YOLOv5 based on RGB images acquired by an unmanned aerial vehicle (UAV) system. The high-resolution images acquired with the RGB camera mounted on the UAV on September 2, 2022 were split into 512×512 tiles for YOLOv5 analysis. Sorghum panicles were labeled as bounding boxes in the split images. 2,000 images of 512×512 size were divided at a ratio of 6:2:2 and used to train, validate, and test the YOLOv5 model, respectively. When trained with YOLOv5s, which has the fewest parameters among the YOLOv5 models, sorghum panicles were detected with mAP@50 = 0.845. With YOLOv5m, which has more parameters, sorghum panicles were detected with mAP@50 = 0.844. Although the performance of the two models is similar, YOLOv5s (4 hours 35 minutes) has a shorter training time than YOLOv5m (5 hours 15 minutes). Therefore, in terms of time cost, developing the YOLOv5s model was considered more efficient for detecting sorghum panicles. As an important step toward predicting sorghum yield, a technique for detecting sorghum panicles using high-resolution RGB images and the YOLOv5 model was presented.
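A minimal sketch of the tiling step described in this abstract, assuming a single high-resolution orthomosaic image; the file names, output folder, and the training command in the trailing comment are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch: split a high-resolution UAV RGB image into 512x512 tiles for YOLOv5.
# Requires Pillow; "field_rgb.png" and "tiles/" are hypothetical names.
from pathlib import Path
from PIL import Image

TILE = 512

def split_into_tiles(image_path, out_dir):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    img = Image.open(image_path)
    w, h = img.size
    for top in range(0, h - TILE + 1, TILE):
        for left in range(0, w - TILE + 1, TILE):
            tile = img.crop((left, top, left + TILE, top + TILE))
            tile.save(out / f"tile_{top}_{left}.png")

split_into_tiles("field_rgb.png", "tiles/")

# After labeling panicles as bounding boxes and splitting the tiles 6:2:2,
# training with the official ultralytics/yolov5 repository would look roughly like:
#   python train.py --img 512 --data sorghum.yaml --weights yolov5s.pt --epochs 300
```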

A Study on the Improvement of Recommendation Accuracy by Using Category Association Rule Mining (카테고리 연관 규칙 마이닝을 활용한 추천 정확도 향상 기법)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.27-42
    • /
    • 2020
  • Traditional companies with offline stores were unable to secure large display space because of cost. This limitation inevitably meant that only a limited range of products could be displayed on the shelves, which deprived consumers of the opportunity to experience various items. Taking advantage of the virtual space of the Internet, online shopping overcomes the physical space limitations of offline shopping and can display numerous products on web pages, satisfying consumers with a wide variety of needs. Paradoxically, however, this can also make it difficult for consumers to compare and evaluate too many alternatives in their purchase decision-making process. As an effort to address this side effect, various kinds of consumer purchase decision support systems have been studied, such as keyword-based item search services and recommender systems. These systems can reduce search time for items, prevent consumers from leaving while browsing, and contribute to increased sales for the seller. Among these systems, recommender systems based on association rule mining techniques can effectively detect interrelated products from transaction data such as orders. The associations between products obtained by statistical analysis provide clues for predicting how interested consumers will be in other products. However, since the algorithm is based on the number of transactions, products that have not yet sold enough in the early days of launch may not be included in the list of recommendations even though they are highly likely to sell. Such missing items may not have sufficient opportunities to be exposed to consumers and to record sufficient sales, and then fall into a vicious cycle of declining sales and omission from the recommendation list. This is an inevitable outcome when recommendations are made based only on past transaction histories rather than on potential future sales. This study started with the idea that indirectly reflecting this potential would help in selecting products to recommend. Given that the attributes of a product affect consumers' purchasing decisions, this study was conducted to reflect them in the recommender system. In other words, consumers who visit a product page have shown interest in the attributes of that product and would also be interested in other products with the same attributes. On this assumption, the recommender system can select, based on these attributes, recommended products that achieve a higher acceptance rate. Given that a category is one of the main attributes of a product, it can be a good indicator of not only direct associations between two items but also potential associations that have yet to be revealed. Based on this idea, the study devised a recommender system that reflects not only associations between products but also associations between categories. Through regression analysis, the two kinds of associations were combined into a model that predicts the hit rate of a recommendation. To evaluate the performance of the proposed model, another regression model was also developed based only on associations between products. Comparative experiments were designed to resemble the environment in which products are actually recommended in online shopping malls.
First, the association rules for all possible combinations of antecedent and consequent items were generated from the order data. Then, the hit rate of each association rule was predicted from the support and confidence calculated by each of the models. The comparative experiments using order data collected from an online shopping mall show that recommendation accuracy can be improved by reflecting not only the associations between products but also those between categories when recommending related products. The proposed model showed a 2 to 3 percent improvement in hit rate compared to the existing model. From a practical point of view, it is expected to have a positive effect on improving consumers' purchasing satisfaction and increasing sellers' sales.
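The core idea above is to predict the hit rate of a recommendation rule from both item-level and category-level association strength. A minimal sketch of that idea follows; the regression form, feature layout, and all numeric values are illustrative placeholders, not the authors' exact specification.

```python
# Sketch: predict the hit rate of a rule (item A -> item B) from item-level and
# category-level support/confidence, as the abstract describes.
# scikit-learn's LinearRegression stands in for the paper's regression model.
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row: [item_support, item_confidence, category_support, category_confidence]
X_train = np.array([
    [0.010, 0.25, 0.08, 0.40],
    [0.002, 0.10, 0.05, 0.30],
    [0.020, 0.35, 0.12, 0.55],
])
y_train = np.array([0.22, 0.09, 0.31])  # observed hit rates (placeholder values)

model = LinearRegression().fit(X_train, y_train)

# A newly launched item may have near-zero item-level support, but the category-level
# association still carries signal, so its predicted hit rate is not forced to zero.
new_rule = np.array([[0.0005, 0.05, 0.10, 0.45]])
print(model.predict(new_rule))
```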

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.71-88
    • /
    • 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model the correlation between input units efficiently since they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) models and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependency between objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can only generate vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that comprises Korean text. We constructed language models using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm and more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted with Old Testament texts using the deep learning package Keras based on Theano. After pre-processing the texts, the dataset included 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters with the following 21st character as the output. In total, 1,023,411 input-output vector pairs were included in the dataset, and we divided them into training, validation, and test sets with proportions 70:15:15. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss function evaluated on the validation set, the perplexity evaluated on the test set, and the time taken to train each model. As a result, all the optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, which were clearly superior to those of the stochastic gradient algorithm. The stochastic gradient algorithm also took the longest time to train for both the 3- and 4-layer LSTM models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model, but its validation loss and perplexity were not improved significantly and even became worse under specific conditions. On the other hand, when comparing the automatically generated sentences, the 4-LSTM-layer model tended to generate sentences closer to natural language than the 3-LSTM model.
Although there were slight differences in the completeness of the generated sentences between the models, the sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost grammatically perfect. The results of this study are expected to be widely used for processing the Korean language in the fields of language processing and speech recognition, which form the basis of artificial intelligence systems.
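A compact sketch of the model family described above: a 20-character (phoneme) one-hot input window predicting the 21st character with stacked LSTM layers. The layer widths, the toy corpus, and the choice of Adam as the optimizer are assumptions for illustration; the paper trains on decomposed Old Testament text and compares several optimizers.

```python
# Sketch: character(phoneme)-level LSTM language model in Keras,
# 20-step input window predicting the 21st symbol, as in the abstract.
# Layer sizes and the toy corpus are illustrative assumptions.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

text = "example corpus goes here ..."          # placeholder for the decomposed phoneme text
chars = sorted(set(text))                       # the paper reports 74 unique symbols
char_to_idx = {c: i for i, c in enumerate(chars)}
V, SEQ = len(chars), 20

X, y = [], []
for i in range(len(text) - SEQ):
    X.append([char_to_idx[c] for c in text[i:i + SEQ]])
    y.append(char_to_idx[text[i + SEQ]])
X = np.eye(V)[np.array(X)]                      # one-hot inputs: (samples, 20, V)
y = np.eye(V)[np.array(y)]                      # one-hot targets: (samples, V)

model = Sequential([
    LSTM(256, return_sequences=True, input_shape=(SEQ, V)),
    LSTM(256, return_sequences=True),
    LSTM(256),                                   # three stacked LSTM layers (the paper also tries four)
    Dense(V, activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam")  # Adam is one of the compared optimizers
model.fit(X, y, batch_size=128, epochs=10, validation_split=0.15)
```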

Context Prediction Using Right and Wrong Patterns to Improve Sequential Matching Performance for More Accurate Dynamic Context-Aware Recommendation (보다 정확한 동적 상황인식 추천을 위해 정확 및 오류 패턴을 활용하여 순차적 매칭 성능이 개선된 상황 예측 방법)

  • Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • v.19 no.3
    • /
    • pp.51-67
    • /
    • 2009
  • Developing an agile recommender system for nomadic users has been regarded as a promising application in mobile and ubiquitous settings. To increase the quality of personalized recommendation in terms of accuracy and elapsed time, estimating the user's future context correctly is crucial. Traditionally, time series analysis and Markovian processes have been adopted for such forecasting. However, these methods are not adequate for predicting context data, because most context data are represented on a nominal scale. To resolve these limitations, the alignment-prediction algorithm has been suggested for context prediction, especially for predicting future context from low-level context. Recently, an ontological approach has been proposed for guided context prediction without context history. However, due to the variety of context information, acquiring sufficient context prediction knowledge a priori is not easy in most service domains. Hence, the purpose of this paper is to propose a novel context prediction methodology that does not require a priori knowledge, and to increase accuracy and decrease the elapsed time for service response. To do so, we have developed a new pattern-based context prediction approach. First of all, a set of individual rules is derived from each context attribute using the context history. Then a pattern, consisting of the results of reasoning with the individual rules, is developed for pattern learning. If the reasoned result matches the actual context for at least one context property (labeled R), the pattern is regarded as right. If the pattern is new, it is added as a right pattern, the values of the mismatched properties are set to 0, the frequency is set to 1, and w(R, 1) is assigned; otherwise, the frequency of the matched right pattern is increased by 1 and w(R, freq) is assigned. After training, if the frequency is greater than a threshold value, the right pattern is saved in the knowledge base. On the other hand, if a context property fails to match (labeled W), the pattern is regarded as wrong. If the pattern is new, the result is marked as a wrong answer, the pattern is added, the frequency is set to 1, and w(W, 1) is assigned; otherwise, the frequency of the matched wrong pattern is increased by 1 and w(W, freq) is assigned. After training, if the frequency is greater than a threshold level, the wrong pattern is saved in the knowledge base. Context prediction is then performed with combinatorial rules as follows: first, identify the current context. Second, find matched patterns among the right patterns. If no pattern matches, find a matching pattern among the wrong patterns. If a matching pattern is still not found, choose the context property whose predictability is higher than that of any other property. To show the feasibility of the proposed methodology, we collected actual context history from travelers who had visited the largest amusement park in Korea. As a result, 400 context records were collected in 2009. We then randomly selected 70% of the records as training data; the rest were used as test data. To examine the performance of the methodology, prediction accuracy and elapsed time were chosen as measures. We compared the performance with case-based reasoning (CBR) and voting methods. Through a simulation test, we conclude that our methodology is clearly better than the CBR and voting methods in terms of accuracy and elapsed time, which shows that the methodology is relatively valid and scalable. In a second round of experiments, we compared a full model to a partial model.
A full model means that both right and wrong patterns are used to reason about the future context, whereas a partial model means that reasoning is performed only with right patterns, as is generally done in the legacy alignment-prediction method. It turned out that the full model is better than the partial model in terms of accuracy, while the partial model is better in terms of elapsed time. As a last experiment, we took into consideration potential privacy problems that might arise among users. To mitigate such concerns, we excluded context properties such as the date of the tour and user profile attributes such as gender and age. The outcome shows that preserving privacy in this way is tolerable. The contributions of this paper are as follows. First, academically, we have improved sequential matching methods in terms of prediction accuracy and service time by considering individual rules for each context property and by learning from wrong patterns. Second, the proposed method is found to be quite effective for privacy-preserving applications, which are frequently required by B2C context-aware services; a privacy-preserving system applying the proposed method can also decrease elapsed time. Hence, the method is very practical for establishing privacy-preserving context-aware services. Future research issues, taking into account some limitations of this paper, can be summarized as follows. First, user acceptance and usability will be tested with actual users in order to prove the value of the prototype system. Second, we will apply the proposed method to more general application domains, as this paper focused on tourism in an amusement park.
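A rough sketch of the pattern knowledge base described above: a pattern is the tuple of outcomes produced by the per-attribute rules, each stored pattern carries a right/wrong label and a frequency weight, and only patterns observed more often than a threshold are kept. The data structures, the threshold value, and the fallback step are assumptions made for illustration, not the paper's exact algorithm.

```python
# Sketch: frequency-weighted right/wrong pattern store for context prediction.
# A "pattern" is the tuple of results from the individual per-attribute rules;
# it is labeled right ("R") or wrong ("W") against the observed next context and
# kept only if its training frequency exceeds a threshold. All names are illustrative.
from collections import defaultdict

THRESHOLD = 3
pattern_store = defaultdict(lambda: {"freq": 0})

def train(pattern, predicted_context, actual_context):
    label = "R" if predicted_context == actual_context else "W"
    pattern_store[(pattern, label)]["freq"] += 1   # w(R, freq) / w(W, freq) in the paper's notation

def knowledge_base():
    # keep only patterns observed often enough during training
    return {k: v for k, v in pattern_store.items() if v["freq"] >= THRESHOLD}

def predict(current_pattern, kb):
    # prefer a matching right pattern, then a matching wrong pattern (used to adjust the answer);
    # the final fallback to the single most predictable context property is omitted here
    for label in ("R", "W"):
        key = (current_pattern, label)
        if key in kb:
            return label, kb[key]["freq"]
    return None, 0
```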

LCD R&D Trends (LCD 연구 개발 동향)

  • 이종천
    • The Magazine of the IEIE
    • /
    • v.29 no.6
    • /
    • pp.76-80
    • /
    • 2002
  • The phase transition and optical anisotropy of liquid crystals were first reported by F. Reinitzer and O. Lehmann in 1888 and 1889 in Monatsch. Chem. and Z. Physikal. Chem., respectively, yet until the 1950s, after the end of World War II, liquid crystals were treated only as a subject of basic laboratory research. In 1963, Williams filed the first patent for a liquid crystal device, and in 1968, Heilmeier and colleagues at RCA invented the first DSM (Dynamic Scattering Mode) LCD (Liquid Crystal Display), exploiting the "dynamic scattering" phenomenon in which a transparent nematic liquid crystal turns turbid when a low-frequency voltage is applied. Although commercialization failed because of the high drive voltage of over 150 V and excessive power consumption, the Guest-Host effect and the memory effect were discovered in the process. In the 1970s, liquid crystal materials stable at room temperature were synthesized (MBBA by H. Kelker, cyano-biphenyl liquid crystals by G. Gray), and the invention of the CMOS transistor together with advances in peripheral technologies such as transparent conductive films (ITO) and mercury batteries enabled full-scale commercialization of LCDs. In 1971, M. Schadt, W. Helfrich, and J. L. Fergason invented the TN (Twisted Nematic) LCD, which was applied to electronic calculators and wristwatches, and in the late 1970s Sharp released a portable computer with a dot-matrix display. Because such simple-drive TN LCDs were limited in quality for displaying graphic information, research on a-Si TFT (amorphous Silicon Thin Film Transistor) LCDs was begun by Le Comber in the UK in 1979, the STN (Super Twisted Nematic) LCD was devised by T. J. Scheffer, J. Nehring, and G. Waters in 1983, and ferroelectric LCDs, introduced by N. Clark and S. Lagerwall in 1980 and by K. Yoshino in 1983, contributed greatly to increasing the amount of information LCDs could display. Color displays were commercialized through A. G. Fischer's 1972 approach of attaching RGB (Red, Green, Blue) filters outside the cell and T. Uchida's 1981 method of attaching RGB filters inside the cell. In 1985, J. L. Fergason invented the polymer-dispersed LCD; by the mid-1980s, prototypes of a-Si TFT LCDs capable of displaying moving images had been developed, and full-scale mass production began in 1990. In the early 1990s, driven by the colorization, enlargement, and quality improvement of STN LCDs, LCDs were adopted in earnest for notebook PCs, and by the late 1990s TFT LCDs dominated the notebook PC market thanks to their price competitiveness relative to display quality. Since then, enlarging TFT LCDs has become a key issue: in 1995 Samsung Electronics developed what was then the world's largest 22-inch TFT LCD. Poly-Si TFT LCDs were developed for higher resolution, the commercialization of digitizer-integrated LCDs broadened the range of applications, and for larger sizes Canon developed 14.8-inch and 21-inch FLCDs in 1994. Tiled LCD technology has also been developed as a route to large sizes: in 1995 Sharp exhibited a 28-inch TFT LCD made by joining two 21-inch panels, in 1996 development of a 40-inch-class display joining four 21-inch panels was attempted, and recently Samsung Electronics developed a single-panel 40-inch TFT LCD on the basis of improved LCD characteristics, better production equipment, and stable process control. For projection displays, poly-Si TFT LCDs have been used in rear- and front-projection systems ranging from 25 to 100 inches, leading the large-TV market. With the arrival of the digital broadcasting era in the 21st century, home electronics makers worldwide are concentrating on mass production to capture the next-generation ultra-thin "wall-hanging TV" market, spanning plasma display panel (PDP) TVs, LCD TVs, and ferroelectric liquid crystal (FLCD) TVs, which is expected to reach about 15 million units in 2005. Even when the wall-hanging TV market takes off in earnest, PDP TVs and LCD TVs are unlikely to compete directly, since the prevailing view is that once the digital TV market opens, LCD TVs will lead the mid-size market below 40 inches while PDP TVs will lead the large-screen market above 40 inches. However, such direct-view large displays are still so expensive that replacing today's CRT TVs is expected to take considerable time. A promising alternative is high-resolution projection TV, which can deliver high-quality digital images at relatively low cost; DMD (Digital Micro-mirror Display), poly-Si TFT LCD, and LCOS (Liquid Crystal on Silicon) devices are being commercialized for this purpose. With the development of the Internet and telecommunications, the market for portable displays is growing faster than expected, and the required display quality is expanding from simple text to high-resolution graphic video, color, and even 3D images. As shown in Table 1, the LCD market is expected to keep growing in each application area, and the growth of new application markets can also be forecast to some extent. Accordingly, LCD R&D can be divided into two broad directions: first, reducing cost and improving display quality to strengthen the competitiveness of LCD products currently in mass production; and second, developing new types of LCDs that replace existing products or create new markets. From this perspective, current LCD technology development can be classified as follows: 1) cost reduction, 2) performance improvement, and 3) development of new types of LCDs.


A study on the derivation and evaluation of flow duration curve (FDC) using deep learning with a long short-term memory (LSTM) networks and soil water assessment tool (SWAT) (LSTM Networks 딥러닝 기법과 SWAT을 이용한 유량지속곡선 도출 및 평가)

  • Choi, Jung-Ryel;An, Sung-Wook;Choi, Jin-Young;Kim, Byung-Sik
    • Journal of Korea Water Resources Association
    • /
    • v.54 no.spc1
    • /
    • pp.1107-1118
    • /
    • 2021
  • Climate change brought on by global warming has increased the frequency of floods and droughts on the Korean Peninsula, along with the resulting casualties and physical damage. Preparation for and response to these water disasters require national-level planning for water resource management. In addition, watershed-level management of water resources requires flow duration curves (FDC) derived from continuous data based on long-term observations. Traditionally, physical rainfall-runoff models have been widely used in water resource studies to generate duration curves. However, a number of recent studies have explored the use of data-based deep learning techniques for runoff prediction. Physical models produce hydraulically and hydrologically reliable results, but they require a high level of understanding and may take longer to operate. On the other hand, data-based deep learning techniques offer the benefits of requiring less input data and a shorter operation time. However, the relationship between input and output data is processed in a black box, making it impossible to consider hydraulic and hydrological characteristics. This study chose one model from each category. For the physical model, this study calculated long-term data without missing values using parameter calibration of the Soil Water Assessment Tool (SWAT), a physical model tested for its applicability in Korea and other countries. These data were used as training data for the Long Short-Term Memory (LSTM) data-based deep learning technique. An analysis of the time-series data found that, during the calibration period (2017-18), the Nash-Sutcliffe Efficiency (NSE) and the coefficient of determination used for fit comparison were higher for SWAT by 0.04 and 0.03, respectively, indicating that the SWAT results are superior to the LSTM results. In addition, the annual time-series data from the models were sorted in descending order, and the resulting flow duration curves were compared with the duration curves based on the observed flow; the NSE values for the SWAT and LSTM models were 0.95 and 0.91, respectively, and the coefficients of determination were 0.96 and 0.92, respectively. The findings indicate that both models yield good performance. Even though the LSTM requires improved simulation accuracy in the low-flow sections, it appears to be widely applicable to calculating flow duration curves for large basins, which require more time for model development and operation due to the vast data input, and for ungauged basins with insufficient input data.
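A small sketch of how a flow duration curve is derived by sorting a simulated flow series in descending order and how the NSE reported above can be computed against the observed curve; the flow arrays below are random placeholders standing in for SWAT/LSTM output and observations.

```python
# Sketch: derive a flow duration curve (FDC) from a daily flow series and score it
# against an observed FDC with the Nash-Sutcliffe Efficiency (NSE).
import numpy as np

def flow_duration_curve(flows):
    """Sort flows in descending order and return (exceedance probability %, flow)."""
    q = np.sort(np.asarray(flows))[::-1]
    prob = 100.0 * np.arange(1, len(q) + 1) / (len(q) + 1)   # Weibull plotting position
    return prob, q

def nse(obs, sim):
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

observed = np.random.gamma(2.0, 5.0, 365)                 # placeholder daily flows (m^3/s)
simulated = observed * np.random.normal(1.0, 0.1, 365)    # placeholder model output

_, fdc_obs = flow_duration_curve(observed)
_, fdc_sim = flow_duration_curve(simulated)
print("NSE of simulated FDC vs. observed FDC:", round(nse(fdc_obs, fdc_sim), 3))
```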

Estimation of Fresh Weight and Leaf Area Index of Soybean (Glycine max) Using Multi-year Spectral Data (다년도 분광 데이터를 이용한 콩의 생체중, 엽면적 지수 추정)

  • Jang, Si-Hyeong;Ryu, Chan-Seok;Kang, Ye-Seong;Park, Jun-Woo;Kim, Tae-Yang;Kang, Kyung-Suk;Park, Min-Jun;Baek, Hyun-Chan;Park, Yu-hyeon;Kang, Dong-woo;Zou, Kunyan;Kim, Min-Cheol;Kwon, Yeon-Ju;Han, Seung-ah;Jun, Tae-Hwan
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.23 no.4
    • /
    • pp.329-339
    • /
    • 2021
  • Soybean (Glycine max), one of the major upland crops, requires precise management of environmental conditions such as temperature, water, and soil during cultivation, since it is sensitive to environmental changes. Spectral technologies that remotely measure the physiological state of crops have great potential for improving the quality and productivity of soybean by estimating yields, physiological stresses, and diseases. In this study, we developed and validated a soybean growth prediction model using multispectral imagery. We conducted a linear regression analysis between vegetation indices and soybean growth data (fresh weight and LAI) obtained at fields in Miryang. The linear regression model was validated at fields in Goesan. The model based on the green ratio vegetation index (GRVI) showed the best performance in predicting fresh weight at the calibration stage (R2=0.74, RMSE=246 g/m2, RE=34.2%). In the validation stage, the RMSE and RE of the model were 392 g/m2 and 32%, respectively. The errors of the model differed by cropping system. For example, the RMSE and RE of the model in single-crop fields were 315 g/m2 and 26%, respectively, whereas the model had greater RMSE (381 g/m2) and RE (31%) in double-crop fields. When separate fresh weight prediction models were developed for the two years (2018 and 2020) with similar accumulated temperature (AT) and for the single year (2019) whose AT differed, the single-year model performed better than the two-year model. Consequently, compared with the models divided by AT and with the three-year model, the RMSE in single-crop fields improved by about 29.1%; however, that in double-crop fields decreased by about 19.6%. When environmental factors are used along with spectral data, reliable soybean growth prediction can be achieved under various environmental conditions.
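A minimal sketch of the calibration step described above: relating a vegetation index to measured fresh weight with simple linear regression and reporting RMSE and relative error. GRVI is computed here as NIR/Green, one common definition of the green ratio vegetation index; the exact band formulation, the reflectance values, and the fresh weight measurements are illustrative assumptions.

```python
# Sketch: relate a vegetation index to soybean fresh weight with linear regression.
# GRVI is taken here as NIR / Green; the arrays below are placeholder plot-level data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

green = np.array([0.12, 0.10, 0.14, 0.11, 0.13])              # plot-mean reflectance (placeholder)
nir   = np.array([0.45, 0.38, 0.55, 0.40, 0.50])
fresh_weight = np.array([620.0, 480.0, 810.0, 510.0, 700.0])  # g/m^2 (placeholder)

grvi = (nir / green).reshape(-1, 1)
model = LinearRegression().fit(grvi, fresh_weight)

pred = model.predict(grvi)
rmse = float(np.sqrt(mean_squared_error(fresh_weight, pred)))
re = 100.0 * rmse / fresh_weight.mean()                       # relative error (%), as reported in the paper
print(f"R2={model.score(grvi, fresh_weight):.2f}, RMSE={rmse:.0f} g/m2, RE={re:.1f}%")
```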