• Title/Abstract/Keyword: point source model

Search results: 588

Accuracy Evaluation of Open-air Compost Volume Calculation Using Unmanned Aerial Vehicle (무인항공기를 이용한 야적퇴비 적재량 산정 정확도 평가)

  • Kim, Heung-Min;Bak, Su-Ho;Yoon, Hong-Joo;Jang, Seon-Woong
    • The Journal of the Korea institute of electronic communication sciences / v.16 no.3 / pp.541-550 / 2021
  • While open-air compost has value as a source of nutrients for crops on agricultural land, it acts as a pollutant that adversely affects the environment during rainfall, so its management is required. This study analyzed the accuracy of calculating open-air compost volume using a fixed-wing UAV (unmanned aerial vehicle) capable of acquiring images over a wide area through automated path flights, and examined its potential for practical use. To evaluate the accuracy of the volume calculation for three open-air compost piles, ground LiDAR surveys and precision surveys using a rotary-wing UAV were performed and compared with the volumes obtained from the fixed-wing UAV. Taking the ground LiDAR result as the reference, the error rate of the rotary-wing UAV was about ±5%, and that of the fixed-wing UAV was -15 to -4%. One of the three compost piles calculated by the fixed-wing UAV was underestimated by about -15%, but the deviation in volume was 2.9 m³, which was not significant. In addition, periodic monitoring with the fixed-wing UAV confirmed changes in compost volume over time. These results suggest that efficient, wide-area monitoring of open-air compost and non-point source pollutants in agricultural areas is possible using fixed-wing UAVs.
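
A minimal sketch (not from the paper) of the kind of cut-fill arithmetic behind a volume comparison like the one above: a pile volume is summed from a surface model relative to a base surface and compared against a reference volume. The grid, cell size, and reference value below are hypothetical.

```python
# Hypothetical sketch: pile volume from a UAV-derived DSM grid relative to a base
# surface, plus the signed error rate against a ground-LiDAR reference volume.
import numpy as np

def pile_volume(dsm, base, cell_size_m):
    """Sum of (surface - base) heights over all cells, times the cell area."""
    heights = np.clip(dsm - base, 0.0, None)   # ignore cells below the base plane
    return float(heights.sum() * cell_size_m ** 2)

def error_rate(volume, reference_volume):
    """Signed error (%) of a UAV-derived volume relative to the reference."""
    return 100.0 * (volume - reference_volume) / reference_volume

# Toy 1 m grid: a small mound on a flat base (values are made up)
dsm = np.array([[0.0, 0.2, 0.0],
                [0.3, 1.1, 0.4],
                [0.0, 0.2, 0.0]])
base = np.zeros_like(dsm)

v_uav = pile_volume(dsm, base, cell_size_m=1.0)
print(f"UAV volume: {v_uav:.1f} m^3, error vs. 2.4 m^3 reference: {error_rate(v_uav, 2.4):+.1f}%")
```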

A Study on the Optimal Location Selection for Hydrogen Refueling Stations on a Highway using Machine Learning (머신러닝 기반 고속도로 내 수소충전소 최적입지 선정 연구)

  • Jo, Jae-Hyeok;Kim, Sungsu
    • Journal of Cadastre & Land InformatiX / v.51 no.2 / pp.83-106 / 2021
  • Interest in clean fuels has been soaring because of environmental problems such as air pollution and global warming. Unlike fossil fuels, hydrogen attracts public attention as an eco-friendly energy source because it releases only water when burned. Various policy efforts have been made to establish a hydrogen-based transportation network. Stations that supply hydrogen to hydrogen-powered trucks are essential for building a hydrogen-based logistics system, so determining the optimal locations of refueling stations is an important topic in the network. Although previous studies have mostly applied optimization-based methodologies, this paper adopts machine learning to review the spatial attributes of candidate locations when selecting the optimal positions of refueling stations. Machine learning shows outstanding performance in various fields, but it has not yet been applied to the optimal location selection problem for hydrogen refueling stations. Therefore, several machine learning models are applied and compared in performance, using variables relevant to the locations of highway rest areas and random points on a highway. The results show that the Random Forest model is superior in terms of F1-score. We believe this work can be a starting point for utilizing machine learning based methods as a preliminary review of optimal station sites before optimization is applied.
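
A hedged sketch (not from the paper) of the comparison workflow the abstract describes: several classifiers are trained on candidate-site features and ranked by F1-score. The synthetic features below stand in for the study's spatial attributes of rest areas and random highway points.

```python
# Hypothetical sketch: comparing classifiers by F1-score on synthetic "candidate site"
# features. The data and model list are illustrative, not the study's.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Synthetic stand-in for spatial attributes of candidate locations
X, y = make_classification(n_samples=1000, n_features=8, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "svm": SVC(),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "F1 =", round(f1_score(y_te, model.predict(X_te)), 3))
```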

Evaluation of Near Subsurface 2D Vs Distribution Map using SPT-Uphole Tomography Method (SPT-업홀 토모그래피 기법을 이용한 지반의 2차원 전단파 속도 분포의 도출)

  • Bang, Eun-Seok;Kim, Jong-Tae;Kim, Dong-Soo
    • KSCE Journal of Civil and Environmental Engineering Research / v.26 no.3C / pp.143-155 / 2006
  • The SPT-Uphole tomography method was introduced for evaluating the near-surface shear wave velocity (Vs) distribution. In the SPT-Uphole method, the SPT (Standard Penetration Test), which is common in geotechnical site investigation, was used as the source, and several surface geophones placed in a line were used as receivers. A Vs distribution map, which has a triangular shape around the boring point, can be developed by tomographic inversion. To obtain exact travel time information for the shear wave component, a procedure using the magnitude summation of the vertical and horizontal components was applied, based on the evaluation of particle motion at the surface. A numerical study using an FEM (Finite Element Method) model verified that the proposed method could give a reliable Vs distribution map. Finally, the SPT-Uphole tomography method was performed at a weathered soil site where several borings with SPT-N values are available, and the feasibility of the proposed method was verified in the field.
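
A hypothetical sketch (not from the paper) of the idea of combining vertical and horizontal records into one magnitude trace before picking a travel time. The synthetic traces, noise level, and threshold rule are invented; the paper's actual picking is based on evaluating particle motion at the surface.

```python
# Hypothetical sketch: magnitude summation of two geophone components and a simple
# threshold-based first-arrival pick on a synthetic shear-wave record.
import numpy as np

dt = 0.0005                          # sample interval [s]
t = np.arange(0, 0.2, dt)
rng = np.random.default_rng(0)

# Synthetic shear-wave arrival at 60 ms, split across two components plus noise
arrival = 0.060
wavelet = np.exp(-((t - arrival) / 0.005) ** 2) * np.sin(2 * np.pi * 80 * (t - arrival))
vertical   = 0.6 * wavelet + 0.02 * rng.standard_normal(t.size)
horizontal = 0.8 * wavelet + 0.02 * rng.standard_normal(t.size)

magnitude = np.sqrt(vertical ** 2 + horizontal ** 2)   # magnitude summation of components

threshold = 5.0 * np.median(np.abs(magnitude))         # crude noise-based threshold
first_idx = int(np.argmax(magnitude > threshold))      # first sample above threshold
print(f"picked travel time: {t[first_idx] * 1000:.1f} ms")
```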

Protein Requirements of the Korean Rockfish Sebastes schlegeli (조피볼락 Sebastes schlegeli의 단백질 요구량)

  • LEE Jong Yun;KANG Yong Jin;LEE Sang-Min;KIM In-Bae
    • Journal of Aquaculture / v.6 no.1 / pp.13-27 / 1993
  • In order to determine the protein requirements of the Korean rockfish Sebastes schlegeli, six isocaloric diets containing crude protein levels from 20% to 60% were fed to two groups of fish, small and large, with initial average body weights of 8 g and 220 g, respectively. White fish meal was used as the sole protein source. Daily weight gain, daily protein retention, daily energy retention, feed efficiency, protein retention efficiency and energy retention efficiency were significantly affected by the dietary protein content (p < 0.05). The growth parameters (daily weight gain, daily protein retention and daily energy retention) increased up to the 44% protein level, with no additional response above this point. The protein requirements were determined from daily weight gain using two different mathematical models. Second-order polynomial regression analysis showed that maximum daily weight gain occurred at the 56.7% and 50.6% protein levels for the small and large size groups, respectively. However, the protein requirements determined by the broken-line model appeared to be about 40% for both groups. Nutrient utilization also suggested that the protein requirements of both groups were close to 40%. When daily protein intake was considered, the daily protein requirements per 100 g of fish, estimated by the broken-line model, were 0.99 g and 0.35 g for the small and large size groups, respectively. Based on these results, a 40% dietary crude protein level can be recommended for optimum growth and efficient nutrient utilization of Korean rockfish weighing between 8 g and 300 g.
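
A hedged sketch (not from the paper) of the two requirement-estimation models named above: a second-order polynomial whose vertex gives the optimum protein level, and a broken-line (plateau) model whose breakpoint is read as the requirement. The dose-response data points below are invented for illustration.

```python
# Hypothetical sketch: quadratic vs. broken-line fits to made-up weight-gain data.
import numpy as np
from scipy.optimize import curve_fit

protein = np.array([20, 28, 36, 44, 52, 60], dtype=float)   # dietary protein (%)
gain    = np.array([0.8, 1.4, 1.9, 2.2, 2.25, 2.2])         # daily weight gain (toy data)

# Second-order polynomial: requirement taken as the vertex of the parabola
a, b, c = np.polyfit(protein, gain, 2)
print("polynomial optimum: %.1f%%" % (-b / (2 * a)))

# Broken-line model: linear increase up to a breakpoint, then a plateau
def broken_line(x, breakpoint, slope, plateau):
    return np.where(x < breakpoint, plateau + slope * (x - breakpoint), plateau)

(bp, slope, plateau), _ = curve_fit(broken_line, protein, gain, p0=[40.0, 0.05, 2.2])
print("broken-line breakpoint: %.1f%%" % bp)
```

As in the paper's result, the polynomial vertex tends to land higher than the broken-line breakpoint because the quadratic keeps curving after the response has effectively plateaued.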


Occurrence and Behavior Analysis of Soil Erosion by Applying Coefficient and Exponent of MUSLE Runoff Factor Depending on Land Use (국내 토지이용별 MUSLE 유출인자의 계수 및 지수 적용을 통한 토양유실 발생 및 거동 분석)

  • Lee, Seoro;Lee, Gwanjae;Yang, Dongseok;Choi, Yujin;Lim, Kyoung Jae;Jang, Won Seok
    • Journal of Wetlands Research / v.21 no.spc / pp.98-106 / 2019
  • The coefficient and exponent of the MUSLE (Modified Universal Soil Loss Equation) runoff factor in the SWAT (Soil and Water Assessment Tool) model are 11.8 and 0.56, respectively, and they are applied equally to the estimation of soil erosion regardless of land use. This can lead to overestimation or underestimation of soil erosion, causing problems in the selection of soil erosion-vulnerable areas and the evaluation of reduction measures. However, there are no studies on estimating the coefficient and exponent of the MUSLE runoff factor by land use and on their applicability to the SWAT model. Thus, in order to predict soil erosion and sediment behavior accurately with the SWAT model, it is necessary to estimate the coefficient and exponent of the MUSLE runoff factor by land use and to evaluate their applicability. In this study, the coefficient and exponent of the MUSLE runoff factor by land use were estimated for the Gaa-cheon Watershed, and the resulting differences in soil erosion and sediment from the SWAT model were analyzed. The coefficient and exponent estimated in this study reflected well the characteristics of soil erosion in a domestic highland watershed. Therefore, in order to apply MUSLE, which was developed from observed data of US agricultural basins, to domestic watersheds, the coefficient and exponent of the MUSLE runoff factor should be sufficiently modified and supplemented according to land use. The results of this study can be used as basic data for selecting soil erosion-vulnerable areas in non-point source management areas and for establishing and evaluating soil erosion reduction measures.
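
A hedged sketch (not from the paper) of the MUSLE sediment-yield form as used in SWAT, written so that the runoff-factor coefficient (default 11.8) and exponent (default 0.56) can be swapped for land-use-specific values such as those the study estimates. The input values and the "calibrated" pair below are placeholders, not the study's data.

```python
# Hedged sketch of the SWAT-style MUSLE relation:
#   sed = coeff * (Q_surf * q_peak * area)^exponent * K * C * P * LS * CFRG
def musle_sediment(q_surf_mm, q_peak_m3s, area_ha, k, c, p, ls, cfrg=1.0,
                   coeff=11.8, exponent=0.56):
    """Sediment yield (metric tons) from the MUSLE runoff factor and USLE factors."""
    runoff_factor = coeff * (q_surf_mm * q_peak_m3s * area_ha) ** exponent
    return runoff_factor * k * c * p * ls * cfrg

# Default vs. a hypothetical land-use-specific coefficient/exponent pair
default_sed = musle_sediment(25.0, 0.8, 12.0, k=0.3, c=0.2, p=1.0, ls=1.5)
upland_sed  = musle_sediment(25.0, 0.8, 12.0, k=0.3, c=0.2, p=1.0, ls=1.5,
                             coeff=9.0, exponent=0.62)   # made-up calibrated values
print(f"default: {default_sed:.1f} t, land-use-specific: {upland_sed:.1f} t")
```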

Comparing Farming Methods in Pollutant runoff loads from Paddy Fields using the CREAMS-PADDY Model (영농방법에 따른 논에서의 배출부하량 모의)

  • Song, Jung-Hun;Kang, Moon-Seong;Song, In-Hong;Jang, Jeong-Ryeol
    • Korean Journal of Environmental Agriculture / v.31 no.4 / pp.318-327 / 2012
  • BACKGROUND: For Non-Point Source (NPS) load reduction, pollutant loads need to be quantified for major farming methods. The objective of this study was to evaluate the impacts of farming methods on NPS pollutant loads from a paddy rice field during the growing season. METHODS AND RESULTS: The height of the drainage outlet, the amount of fertilizer, and irrigation water quality were considered as farming factors for scenario development. The control was derived from conventional farming methods, and four different scenarios were developed from combinations of the farming factors. A field-scale model, CREAMS-PADDY (Chemicals, Runoff, and Erosion from Agricultural Management Systems for PADDY), was used to calculate pollutant nutrient loads. Data collected from an experimental plot located downstream of the Idong reservoir were used for model calibration and validation, and the simulation results agreed well with observed values during the calibration and validation periods. The calibrated model was then used to evaluate the farming scenarios in terms of NPS loads. Pollutant loads for T-N and T-P were reduced by 5~62% and 8~37%, respectively, when the height of the drainage outlet was increased from 100 mm. When the amount of fertilizer was changed from the standard to the conventional level, T-N and T-P pollutant loads were reduced by 0~22% and 0~24%. Irrigation water with quality below water criteria IV of the reservoir increased T-N by 9~65% and T-P by 9~47% in comparison with the conventional case. CONCLUSION(S): The results indicated that raising the height of the drainage outlet after midsummer drainage, applying the standard fertilization level during non-rainy seasons, and managing irrigation water quality relative to water criteria IV of the reservoir are effective farming methods to reduce NPS pollutant loads from paddies in Korea.
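
A hypothetical sketch (not from the paper) of how scenario results such as those above reduce to percent changes in load relative to the conventional-practice control. The load values are placeholders, not CREAMS-PADDY output.

```python
# Hypothetical sketch: percent change of NPS loads for each scenario vs. the control.
control = {"T-N": 42.0, "T-P": 3.1}            # kg/ha per growing season (made up)
scenarios = {
    "raised drainage outlet": {"T-N": 28.5, "T-P": 2.4},
    "standard fertilization": {"T-N": 36.0, "T-P": 2.6},
}

for name, loads in scenarios.items():
    changes = {pollutant: 100.0 * (loads[pollutant] - control[pollutant]) / control[pollutant]
               for pollutant in control}
    print(name, {k: f"{v:+.0f}%" for k, v in changes.items()})
```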

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung;Song, Min-chae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.59-83 / 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In deep learning sentiment analysis of English texts, natural language sentences included in the training and test datasets are usually converted into sequences of word vectors before being entered into the deep learning models. In this case, word vectors generally refer to vector representations of words obtained by splitting a sentence at space characters. There are several ways to derive word vectors, one of which is Word2Vec, used for producing the 300-dimensional Google word vectors from about 100 billion words of Google News data. These have been widely used in studies of sentiment analysis of reviews from various fields such as restaurants, movies, laptops, cameras, etc. Unlike in English, the morpheme plays an essential role in sentiment analysis and sentence structure analysis in Korean, which is a typical agglutinative language with developed postpositions and endings. A morpheme can be defined as the smallest meaningful unit of a language, and a word consists of one or more morphemes. For example, for the word '예쁘고', the morphemes are '예쁘' (adjective) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit in Korean sentiment analysis. Therefore, in this study, we use 'morpheme vectors' as the input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word vector derivation mechanism to sentences divided into their constituent morphemes. At this point, several questions arise. What is the desirable range of POS (Part-Of-Speech) tags when deriving morpheme vectors for improving the classification accuracy of a deep learning model? Is it proper to apply a typical word vector model, which relies primarily on the form of words, to Korean, which has a high homonym ratio? Will text preprocessing such as correcting spelling or spacing errors affect classification accuracy, especially when drawing morpheme vectors from Korean product reviews containing many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which may be encountered first when applying various deep learning models to Korean texts. As a starting point, we summarized these issues as three central research questions. First, which is more effective as the initial input of a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean with respect to the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can we reach a satisfactory level of classification accuracy when applying deep learning to Korean sentiment analysis? To approach these research questions, we generate various types of morpheme vectors reflecting them and then compare the classification accuracy through a non-static CNN (Convolutional Neural Network) model that takes the morpheme vectors as input. As training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used. To derive morpheme vectors, we use data both from the same domain as the target and from another domain: about 2 million cosmetics product reviews from Naver Shopping and 520,000 Naver News articles, arguably corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ in terms of the following three criteria. First, they come from two types of data source: Naver News, with high grammatical correctness, and Naver Shopping's cosmetics product reviews, with low grammatical correctness. Second, they are distinguished by the degree of data preprocessing, namely, only splitting sentences or additionally applying spelling and spacing corrections after sentence separation. Third, they vary with respect to the form of input fed into the word vector model: whether the morphemes themselves are entered or the morphemes with their POS tags attached. The morpheme vectors further vary depending on the consideration range of POS tags, the minimum frequency of morphemes included, and the random initialization range. All morpheme vectors are derived through the CBOW (Continuous Bag-Of-Words) model with a context window of 5 and a vector dimension of 300. The results suggest that utilizing same-domain text even with a lower degree of grammatical correctness, performing spelling and spacing corrections as well as sentence splitting, and incorporating morphemes of all POS tags, including the incomprehensible category, lead to better classification accuracy. The POS tag attachment, which was devised for the high proportion of homonyms in Korean, and the minimum frequency threshold for a morpheme to be included seem not to have any definite influence on the classification accuracy.
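
A hedged sketch (not from the paper) of deriving morpheme vectors with a CBOW Word2Vec model using the settings the abstract states (context window 5, 300 dimensions). The tiny tokenized corpus is a stand-in for the study's morpheme-analyzed reviews, and the gensim 4.x API is assumed.

```python
# Hedged sketch: CBOW morpheme vectors from morpheme-tokenized sentences.
from gensim.models import Word2Vec

# In the study, sentences are split into morphemes (optionally with POS tags attached),
# e.g. "예쁘고" -> ["예쁘", "고"].  Here we use a tiny toy corpus.
tokenized_reviews = [
    ["배송", "빠르", "고", "제품", "좋", "아요"],
    ["향", "이", "예쁘", "고", "촉촉", "하", "다"],
    ["가격", "대비", "별로", "이", "다"],
]

model = Word2Vec(
    sentences=tokenized_reviews,
    vector_size=300,   # vector dimension used in the study
    window=5,          # context window used in the study
    sg=0,              # sg=0 selects CBOW
    min_count=1,       # the study also varies this minimum-frequency threshold
)

morpheme_vector = model.wv["예쁘"]   # 300-dim vector, ready to feed a (non-static) CNN
print(morpheme_vector.shape)
```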

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.1-17
    • /
    • 2017
  • A deep learning framework is software designed to help develop deep learning models. Some of its important functions include "automatic differentiation" and "utilization of GPU". The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal), and recently Microsoft's deep learning framework, Microsoft Cognitive Toolkit, was released under an open-source license, following Google's Tensorflow a year earlier. The early deep learning frameworks were developed mainly for research at universities. Beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook have started to join the competition in framework development. Given the trend, Google and other companies are expected to continue investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is something of a predecessor of the other two. The most common and important function of deep learning frameworks is the ability to perform automatic differentiation. Basically, all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained, and with these partial derivatives the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First of all, the convenience of coding is in the order of CNTK, Tensorflow, and Theano. The criterion is simply the length of the code; the learning curve and the ease of coding are not the main concern. According to this criterion, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. The reason CNTK and Tensorflow are easier to implement with is that these frameworks provide more abstraction than Theano. We note, however, that low-level coding is not always bad: it gives flexibility, and with low-level coding such as in Theano we can implement and test any new deep learning model or search method we can think of. Our assessment of the execution speed of each framework is that there is no meaningful difference. According to the experiment, the execution speeds of Theano and Tensorflow are very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept the same: the CNTK code had to be run in a PC environment without a GPU, where code executes as much as 50 times slower than with a GPU. We concluded, however, that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate them. Some of the important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNN, RNN, DBN, etc. If a user implements a large-scale deep learning model, support for multiple GPUs or multiple servers is also important. Likewise, for someone learning deep learning, the availability of sufficient examples and references matters.
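
A hedged, didactic sketch (not framework code) of the mechanism the abstract emphasizes: reverse-mode automatic differentiation over a computational graph. Each node stores local partial derivatives on its incoming edges, and the chain rule accumulates the derivative of the output with respect to any variable. The class and operators below are illustrative inventions, not any framework's API.

```python
# Toy reverse-mode autodiff on a tiny computational graph.
class Node:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents      # list of (parent_node, local_partial_derivative)
        self.grad = 0.0

    def backward(self, seed=1.0):
        self.grad += seed
        for parent, local_grad in self.parents:
            parent.backward(seed * local_grad)   # chain rule along each edge

def add(a, b):
    return Node(a.value + b.value, [(a, 1.0), (b, 1.0)])

def mul(a, b):
    return Node(a.value * b.value, [(a, b.value), (b, a.value)])

# f(x, w, b) = x*w + b, a single "neuron" without activation
x, w, b = Node(2.0), Node(3.0), Node(0.5)
y = add(mul(x, w), b)
y.backward()
print(y.value, w.grad, x.grad)   # 6.5, df/dw = 2.0, df/dx = 3.0
```

Real frameworks build essentially the same graph, but record the operations once and reuse the structure for batched tensors on a GPU.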

Evaluation of Runoff‧Peak Rate Runoff and Sediment Yield under Various Rainfall Intensities and Patterns Using WEPP Watershed Model (다양한 강우강도 및 패턴에 따른 WEPP 모형의 유출‧첨두유출‧토양유실량 평가)

  • Choi, Jae-Wan;Ryu, Ji-Chul;Kim, Ik-Jae;Lim, Kyoung-Jae
    • Journal of Korea Water Resources Association / v.45 no.8 / pp.795-804 / 2012
  • Recently, changes in rainfall intensity and patterns have been causing increasing soil loss worldwide. As a result, the aquatic ecosystem deteriorates and crop yields are reduced by soil loss and the nutrient loss that accompanies it. Many studies have been conducted to estimate runoff and soil loss in order to predict or decrease non-point source pollution. Although the USLE has been used for many years to estimate soil loss, it cannot reflect the effects of changes in rainfall intensity and patterns on soil loss. The WEPP, a physically based model, is capable of predicting soil loss and runoff under various rainfall intensities. In this study, the WEPP model was run for sediment yield, runoff and peak runoff using rainfall data at 5-, 10-, 30-, and 60-minute intervals, Huff's method, and design rainfall. When the rainfall interval changed from 5 minutes to 60 minutes, the sediment and runoff values decreased by 24% and 19%, respectively. The peak rate runoff values decreased by 16% when the rainfall interval changed from 5 minutes to 60 minutes, indicating that peak rate runoff is affected by rainfall intensity to some degree. When simulating with Huff's method, all values (sediment yield, runoff, peak runoff) were found to be greatest at the third quartile. According to the analysis under various design rainfall conditions (2, 3, 5, 10, 20, 30, 50, 100, 200, 300 year frequency), sediment yield, runoff, and peak runoff increased by 906.2%, 249.4%, and 183.9%, respectively, from the 2-year to the 300-year frequency rainfall.
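
A hypothetical sketch (not from the paper) of why coarser rainfall intervals smooth out intensity: five-minute depths are aggregated to a sixty-minute total, and the peak intensity seen by a model drops accordingly. The rainfall series below is invented for illustration.

```python
# Hypothetical sketch: aggregating 5-minute rainfall to a 60-minute interval and
# comparing the peak intensity (mm/hr) available to an erosion/runoff model.
import numpy as np

rain_5min = np.array([0, 0, 1, 4, 9, 12, 6, 2, 1, 0, 0, 0], dtype=float)  # mm per 5 min

peak_5min_intensity = rain_5min.max() * 12          # mm per 5 min -> mm/hr
rain_60min = rain_5min.reshape(-1, 12).sum(axis=1)  # aggregate to hourly depth
peak_60min_intensity = rain_60min.max()             # already mm/hr by definition

print(f"peak intensity at 5-min resolution:  {peak_5min_intensity:.0f} mm/hr")
print(f"peak intensity at 60-min resolution: {peak_60min_intensity:.0f} mm/hr")
```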

Cloning and Transcription Analysis of Sporulation Gene (spo5) in Schizosaccharomyces pombe (Schizosaccharomyces pombe 포자형성 유전자(spo5)의 Cloning 및 전사조절)

  • 김동주
    • The Korean Journal of Food And Nutrition / v.15 no.2 / pp.112-118 / 2002
  • Sporulation in the fission yeast Schizosaccharomyces pombe has been regarded as an important model of cellular development and differentiation. S. pombe cells proliferate by mitosis and binary fission on growth medium. Deprivation of nutrients, especially nitrogen sources, causes the cessation of mitosis and initiates sexual reproduction by mating between two sexually compatible cell types. Meiosis then follows in the diploid cell in the absence of a nitrogen source. A DNA fragment complementing mutations of the sporulation gene was isolated from the S. pombe gene library constructed in the vector pDB248' and designated pDB(spo5)1. We further analyzed six recombinant plasmids, pDB(spo5)2, pDB(spo5)3, pDB(spo5)4, pDB(spo5)5, pDB(spo5)6, and pDB(spo5)7, and found that each of these plasmids is able to rescue the spo5-2, spo5-3, spo5-4, spo5-5, spo5-6, and spo5-7 mutations, respectively. Mapping of the integrated plasmids into the homologous sites of the S. pombe chromosomes demonstrated that pDB(spo5)1 and pDB(spo5)R1 contained the spo5 gene. Transcripts of the spo5 gene were analyzed by Northern hybridization. Two transcripts of 3.2 kb and 2.5 kb were detected with a 5 kb HindIII fragment containing part of the spo5 gene as a probe. The small mRNA (2.5 kb) appeared only when a wild-type strain was cultured in the absence of a nitrogen source, a condition in which the large mRNA (3.2 kb) was produced constitutively. Appearance of the 2.5 kb spo5 mRNA depends upon the function of the mei1, mei2 and mei3 genes.