• Title/Summary/Keyword: real experiments


Estimation for Red Pepper(Capsicum annum L.) Biomass by Reflectance Indices with Ground-Based Remote Sensor (지상부 원격탐사 센서의 반사율지수에 의한 고추 생체량 추정)

  • Kim, Hyun-Gu; Kang, Seong-Soo; Hong, Soon-Dal
    • Korean Journal of Soil Science and Fertilizer / v.42 no.2 / pp.79-87 / 2009
  • Pot experiments using sand culture were conducted in 2004 under greenhouse conditions to evaluate the effect of nitrogen deficiency on red pepper biomass. Nitrogen stress was imposed by supplying six levels (40% to 140%) of N in Hoagland's nutrient solution for red pepper. Canopy reflectance measurements were made with hand-held spectral sensors, including the GreenSeeker™, Crop Circle™, and Field Scout™ chlorophyll meter, as well as a spectroradiometer and a Minolta SPAD-502 chlorophyll meter. Canopy reflectance and dry weight of red pepper were measured at five growth stages: the 30th, 40th, 50th, 80th, and 120th day after planting (DAT). Dry weight of red pepper as affected by nitrogen stress showed a large difference between the maximum and minimum values at the 120th DAT, ranging from 48.2 to 196.6 g plant⁻¹. Several reflectance indices obtained from the GreenSeeker™, Crop Circle™, and spectroradiometer, together with chlorophyll readings, were compared for evaluating red pepper biomass. The reflectance indices rNDVI, aNDVI, and gNDVI from the Crop Circle™ sensor showed the highest correlation coefficients with dry weight of red pepper at the 40th, 50th, and 80th DAT, respectively. These reflectance indices at the same growth stages were also closely correlated with dry weight, yield, and nitrogen uptake of red pepper at the 120th DAT, with the best correlation coefficients at the 80th DAT. From these results, the aNDVI at the 80th DAT can significantly explain both the dry weight of red pepper at the 120th DAT and the application level of nitrogen fertilizer. Consequently, ground-based remote sensing, as a non-destructive real-time assessment of plant nitrogen status, appears to be a useful tool for in-season nitrogen management of red pepper, providing both spatial and temporal information.
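
Each index in the NDVI family named above is the same normalized-difference formula applied to a different visible band: red for rNDVI, green for gNDVI, and amber for aNDVI. Below is a minimal sketch of computing the indices and correlating them with dry weight; the reflectance values and dry weights are hypothetical placeholders, not the paper's data:

```python
import numpy as np
from scipy.stats import pearsonr

def nd_index(nir, vis):
    """Normalized difference index: (NIR - VIS) / (NIR + VIS)."""
    nir, vis = np.asarray(nir, float), np.asarray(vis, float)
    return (nir - vis) / (nir + vis)

# Hypothetical canopy reflectance (fractions) for six N levels, 40% to 140%.
nir   = np.array([0.30, 0.35, 0.40, 0.44, 0.48, 0.52])
red   = np.array([0.20, 0.16, 0.13, 0.10, 0.08, 0.06])
green = np.array([0.22, 0.19, 0.17, 0.15, 0.14, 0.12])
amber = np.array([0.21, 0.18, 0.15, 0.13, 0.11, 0.09])

# Hypothetical dry weight per plant (g), spanning the reported 48.2-196.6 range.
dry_weight = np.array([48.2, 80.3, 110.8, 141.5, 170.2, 196.6])

for name, vis in [("rNDVI", red), ("gNDVI", green), ("aNDVI", amber)]:
    r, p = pearsonr(nd_index(nir, vis), dry_weight)
    print(f"{name}: r = {r:.3f} (p = {p:.3g})")
```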

The pH Reduction of the Recycled Aggregate Originated from the Waste Concrete by the scCO2 Treatment (초임계 이산화탄소를 이용한 폐콘크리트 순환골재의 중성화)

  • Chung, Chul-woo; Lee, Minhee; Kim, Seon-ok; Kim, Jihyun
    • Economic and Environmental Geology / v.50 no.4 / pp.257-266 / 2017
  • Batch experiments were performed to develop a method for reducing the pH of recycled aggregate using scCO₂ (supercritical CO₂), keeping the pH of extraction water below 9.8. Three aggregate types from a domestic company were used in the scCO₂-water-recycled aggregate reaction to investigate whether the aggregate maintains a low pH during the reaction. Thirty-five grams of recycled aggregate were mixed with 70 mL of distilled water in a Teflon beaker, which was fixed inside a pressurized stainless steel cell (150 mL capacity). The inside of the cell was pressurized to 100 bar, each cell was kept in an oven at 50 °C for 50 days, and the pH and ion concentrations of the water in the cell were measured at set reaction time intervals. XRD and SEM-EDS analyses of the aggregate before and after the reaction were performed to identify mineralogical changes during the reaction. An extraction experiment was also conducted to investigate the pH change of extracted water after the scCO₂ treatment. The pH of the recycled aggregate without the scCO₂ treatment remained above 12, but it dropped below 7 after 1 hour of reaction and remained below 8 over the 50-day reaction. Concentrations of Ca²⁺, Si⁴⁺, Mg²⁺, and Na⁺ in the water increased due to the scCO₂-water-recycled aggregate reaction, and many secondary precipitates such as calcite, amorphous silicate, and hydroxide minerals were identified by the XRD and SEM-EDS analyses. The pH of water extracted from recycled aggregate without the scCO₂ treatment remained above 12, whereas with the scCO₂ treatment it stayed below 9 for both the 50-day and the 1-day treatments, suggesting that recycled aggregate given the scCO₂ treatment can be reused at real construction sites.
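
The mechanism behind the pH drop is textbook carbonation chemistry, consistent with the calcite observed among the secondary precipitates: dissolved CO₂ forms carbonic acid, which neutralizes the hydroxide leached from cement phases and precipitates calcium carbonate. Sketched as reactions (standard chemistry, not equations from the paper):

```latex
\begin{aligned}
\mathrm{CO_2(aq)} + \mathrm{H_2O} &\rightleftharpoons \mathrm{H_2CO_3} \rightleftharpoons \mathrm{H^+} + \mathrm{HCO_3^-}\\
\mathrm{Ca(OH)_2} + \mathrm{H_2CO_3} &\rightarrow \mathrm{CaCO_3}\!\downarrow + 2\,\mathrm{H_2O}
\end{aligned}
```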

The Neutralization Treatment of Waste Mortar and Recycled Aggregate by Using the scCO2-Water-Aggregate Reaction (초임계이산화탄소-물-골재 반응을 이용한 폐모르타르와 순환골재의 중성화 처리)

  • Kim, Taehyoung; Lee, Jinkyun; Chung, Chul-woo; Kim, Jihyun; Lee, Minhee; Kim, Seon-ok
    • Economic and Environmental Geology / v.51 no.4 / pp.359-370 / 2018
  • Batch and column experiments were performed to overcome the limitation of the neutralization process using the scCO₂-water-recycled aggregate reaction, reducing its treatment time to 3 hours. Waste cement mortar and two kinds of recycled aggregate were used for the experiments. In the extraction batch experiment, three types of waste mortar were reacted with water and scCO₂ for 1-24 hours, and the pH of the solution extracted from the treated waste mortar was measured to determine the minimum reaction time that keeps the pH below 9.8. A continuous column experiment was also performed to identify the pH reduction effect of the neutralization process on bulk recycled aggregate, considering the non-equilibrium reaction in the field. As the neutralization process, thirty-five grams of waste mortar were mixed with 70 mL of distilled water in a pressurized stainless steel cell at 100 bar and 50 °C for 1-24 hours. The dried waste mortar was then mixed with water at 150 rpm for 10 min, and the pH of the water was measured for 15 days. XRD and TG/DTA analyses of the waste mortar before and after the reaction were performed to identify mineralogical changes during the neutralization process. An acrylic column (16 cm in diameter, 1 m in length) was packed with 3-hour-treated (or untreated) recycled aggregate, and 220 liters of distilled water was flushed through the column. The pH and Ca²⁺ concentration of the effluent from the column were measured at set time intervals. The pH of water extracted from 3-hour-treated waste mortar (10-13 mm in diameter) remained below 9.8 (the legal limit). From the XRD and TG/DTA analyses, the amount of portlandite in the waste mortar decreased after the neutralization process, while calcite was created as a secondary mineral. In the column experiment, the pH of the effluent from the column packed with 3-hour-treated recycled aggregate stayed below 9.8 regardless of aggregate size, confirming that recycled aggregate given the 3-hour scCO₂ treatment can be reused at real construction sites.
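
The portlandite decrease seen in TG/DTA can be quantified with a standard thermogravimetric calculation (generic TGA practice, not a formula from the paper): portlandite dehydroxylates at roughly 400-500 °C, so the mass loss over that window converts to portlandite content via the molar-mass ratio:

```latex
\mathrm{Ca(OH)_2} \xrightarrow{\ \approx 400\text{--}500\,^{\circ}\mathrm{C}\ } \mathrm{CaO} + \mathrm{H_2O}\uparrow,
\qquad
w_{\mathrm{Ca(OH)_2}} \approx \Delta m_{400\text{--}500} \cdot \frac{M_{\mathrm{Ca(OH)_2}}}{M_{\mathrm{H_2O}}}
= \Delta m \cdot \frac{74.09}{18.02}
```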

Prediction of infectious diseases using multiple web data and LSTM (다중 웹 데이터와 LSTM을 사용한 전염병 예측)

  • Kim, Yeongha; Kim, Inhwan; Jang, Beakcheol
    • Journal of Internet Computing and Services / v.21 no.5 / pp.139-148 / 2020
  • Infectious diseases have long plagued mankind, and predicting and preventing them has been a great challenge. For this reason, various studies have been conducted to predict infectious diseases. Most early studies relied on epidemiological data from the Centers for Disease Control and Prevention (CDC), and the problem was that the data provided by the CDC is updated only once a week, making it difficult to predict the number of disease outbreaks in real time. However, with the emergence of various Internet media due to the recent development of IT technology, studies have been conducted to predict the occurrence of infectious diseases through web data, and most of the studies we surveyed use a single source of web data to predict diseases. However, disease forecasting through a single source of web data has the disadvantage that it is difficult to collect large amounts of training data and to make accurate predictions for recent outbreaks such as "COVID-19". Thus, we demonstrate through experiments that LSTM models that use multiple sources of web data to predict the occurrence of infectious diseases are more accurate than those that use a single source, and we suggest models suitable for predicting infectious diseases. In this experiment, we predicted the occurrence of malaria and epidemic parotitis using single-web-data models and the model we propose. A total of 104 weeks of news, SNS, and search-query data were collected, of which 75 weeks were used as training data and 29 weeks as verification data. When we predicted the verification data with our proposed model and with single-web-data models, the Pearson correlation coefficients of our proposed model's predictions were the highest at 0.94 and 0.86, and its RMSE values were also the lowest at 0.19 and 0.07.
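
A minimal sketch of the multi-source setup described above: weekly news, SNS, and search-query signals are stacked into one input sequence for an LSTM that predicts the next week's case count. All numbers here are synthetic placeholders, the window length is an assumption, and the paper's exact architecture is not specified in the abstract:

```python
import numpy as np
import tensorflow as tf

WINDOW = 4  # weeks of history per prediction (an assumption, not from the paper)

# Synthetic weekly signals scaled to [0, 1]: columns are NEWS, SNS, search queries.
rng = np.random.default_rng(0)
weeks = 104
features = rng.random((weeks, 3))
cases = rng.random(weeks)            # weekly outbreak counts to predict

# Sliding windows: X[i] = WINDOW weeks of all 3 sources, y[i] = the next week.
X = np.stack([features[i:i + WINDOW] for i in range(weeks - WINDOW)])
y = cases[WINDOW:]

# Train on the first 75 windows, verify on the rest (mirroring the 75/29 split).
X_train, X_test, y_train, y_test = X[:75], X[75:], y[:75], y[75:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(WINDOW, 3)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=100, verbose=0)

pred = model.predict(X_test, verbose=0).ravel()
rmse = float(np.sqrt(np.mean((pred - y_test) ** 2)))
pearson_r = float(np.corrcoef(pred, y_test)[0, 1])
print(f"Pearson r = {pearson_r:.2f}, RMSE = {rmse:.2f}")
```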

Association of β-Catenin with Fat Accumulation in 3T3-L1 Adipocytes and Human Population (β-catenin 유전자의 3T3-L1 지방세포 및 인체에서의 지방축적 연관성 연구)

  • Bae, Sung-Min; Lee, Hae-Yong; Chae, Soo-Ahn; Oh, Dong-Jin; Park, Suk-Won; Yoon, Yoo-Sik
    • Journal of Life Science / v.21 no.9 / pp.1301-1309 / 2011
  • The major function of adipocytes is to store fat in the form of triglycerides. One of the signaling pathways known to affect adipogenesis, i.e., fat formation, is the WNT/β-catenin pathway, which inhibits the expression and activity of key regulators of adipogenesis. The purpose of this research is to find genes in the WNT/β-catenin pathway that regulate adipogenesis by using small interfering (si) RNA, and to examine the association of single nucleotide polymorphisms (SNPs) of the gene with serum triglyceride levels in a human population. To elucidate the effects of β-catenin siRNA on the key adipogenic factors PPARγ and C/EBPα, we performed real-time PCR and western blotting for the analyses of mRNA and protein levels. We found that siRNA-mediated knockdown of β-catenin upregulates these key adipogenic factors. However, knockdown of upstream regulators of the WNT/β-catenin pathway, such as DVL2 and LRP6, had no significant effects compared to β-catenin. These results indicate that β-catenin is a candidate gene for human fat accumulation. In general, serum triglyceride level is a good indicator of fat accumulation in humans. According to statistical analyses of the association between serum triglyceride levels and SNPs of β-catenin, the -10,288 C>T SNP (rs7630377) in the promoter region was significantly associated with serum triglyceride levels (p<0.05) in 290 Korean subjects. On the other hand, serum cholesterol levels were not significantly associated with SNPs of the β-catenin gene. The results of this study show that β-catenin is associated with fat accumulation both in vitro and in a human population.
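
One common way to test this kind of quantitative-trait/SNP association is a one-way ANOVA of triglyceride levels across the three rs7630377 genotype groups; the abstract does not name the exact test used, and the numbers below are hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical serum triglyceride levels (mg/dL) grouped by rs7630377 genotype.
cc = np.array([110.0, 132.0, 145.0, 120.0, 150.0])
ct = np.array([140.0, 155.0, 162.0, 149.0, 171.0])
tt = np.array([168.0, 181.0, 175.0, 190.0, 186.0])

# One-way ANOVA across the three genotype groups.
f_stat, p_value = stats.f_oneway(cc, ct, tt)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 suggests association
```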

A Template-based Interactive University Timetabling Support System (템플릿 기반의 상호대화형 전공강의시간표 작성지원시스템)

  • Chang, Yong-Sik; Jeong, Ye-Won
    • Journal of Intelligence and Information Systems / v.16 no.3 / pp.121-145 / 2010
  • University timetabling, which depends on the educational environment of each university, is an NP-hard problem: the amount of computation required to find solutions increases exponentially with problem size. For many years, there have been many studies on university timetabling, motivated by the need for automatic timetable generation for students' convenience and effective lessons, and for the effective allocation of subjects, lecturers, and classrooms. Timetables are classified into course timetables and examination timetables; this study focuses on the former. In general, a course timetable for liberal arts is scheduled by the office of academic affairs, and a course timetable for major subjects is scheduled by each department of a university. We found several problems from an analysis of current course timetabling in departments. First, it is time-consuming and inefficient for each department to do the routine and repetitive timetabling work manually. Second, many classes are concentrated into a few time slots in a timetable, which decreases the effectiveness of students' classes. Third, several major subjects may overlap required liberal-arts subjects in the same time slots, in which case students must choose only one of the overlapping subjects. Fourth, many subjects are lectured by the same lecturers every year, and most lecturers prefer the same time slots for their subjects as in the previous year, which means it is helpful for departments to reuse previous timetables. To solve these problems and support effective course timetabling in each department, this study proposes a university timetabling support system based on two phases. In the first phase, each department generates a timetable template from the most similar previous timetable case, based on case-based reasoning. In the second phase, the department schedules a timetable with the help of an interactive user interface under the timetabling criteria, based on a rule-based approach. This study provides illustrations from Hanshin University. We classified timetabling criteria into intrinsic and extrinsic criteria. The intrinsic criteria are three criteria related to lecturer, class, and classroom, which are all hard constraints. The extrinsic criteria are four criteria related to 'the numbers of lesson hours' by the lecturer, 'prohibition of lecture allocation to specific day-hours' for committee members, 'the number of subjects in the same day-hour,' and 'the use of common classrooms.' 'The numbers of lesson hours' by the lecturer comprises three criteria: 'minimum number of lesson hours per week,' 'maximum number of lesson hours per week,' and 'maximum number of lesson hours per day.' The extrinsic criteria are all hard constraints except for 'minimum number of lesson hours per week,' which is treated as a soft constraint. In addition, we propose two indices: one for measuring the similarity between subjects of the current semester and subjects in previous timetables, and one for evaluating the distribution degree of a scheduled timetable. Similarity is measured by comparing two attributes, subject name and lecturer, between the current semester and a previous semester. The distribution degree index, based on information entropy, indicates how subjects are distributed in the timetable. To show this study's viability, we implemented a prototype system and performed experiments with real data from Hanshin University. The average similarity of the most similar cases over all departments was estimated at 41.72%, which suggests that a timetable template generated from the most similar case is helpful. Through sensitivity analysis, the results show that the distribution degree increases if we set 'the number of subjects in the same day-hour' criterion to more than 90%.
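
Here is a sketch of an entropy-based distribution degree of the kind described above: subjects spread evenly over day-hour slots score near 1, while subjects piled into a few slots score lower. This is an illustrative formulation under those assumptions; the paper's exact index may differ:

```python
import math
from collections import Counter

def distribution_degree(slot_of_subject):
    """Entropy of the subjects-per-slot distribution, normalized to [0, 1]."""
    counts = Counter(slot_of_subject.values())   # subjects per day-hour slot
    total = sum(counts.values())
    if total <= 1:
        return 1.0
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return entropy / math.log2(total)            # 1.0 = perfectly spread out

# Hypothetical timetable: two subjects collide in Mon-3, two are spread out.
timetable = {"DB": "Mon-3", "AI": "Mon-3", "OS": "Tue-1", "SE": "Wed-2"}
print(f"distribution degree = {distribution_degree(timetable):.2f}")  # 0.75
```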

Object Tracking Based on Exactly Reweighted Online Total-Error-Rate Minimization (정확히 재가중되는 온라인 전체 에러율 최소화 기반의 객체 추적)

  • JANG, Se-In; PARK, Choong-Shik
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.53-65 / 2019
  • Object tracking is one of the important steps in building video-based surveillance systems, and is considered an essential task alongside object detection and recognition. To perform object tracking, various machine learning methods (e.g., least squares, perceptron, and support vector machine) can be applied in different designs of tracking systems. Traditionally, generative methods (e.g., principal component analysis) were utilized for their simplicity and effectiveness. However, generative methods focus only on modeling the target object. Due to this limitation, discriminative methods (e.g., binary classification) were adopted to distinguish the target object from the background. Among the machine learning methods for binary classification, total error rate minimization is one of the successful approaches. Total error rate minimization can achieve a global minimum thanks to a quadratic approximation of the step function, while other methods (e.g., support vector machine) seek local minima using nonlinear functions (e.g., the hinge loss function). Due to this quadratic approximation, total error rate minimization obtains appropriate properties in solving optimization problems for binary classification. However, total error rate minimization was originally formulated in a batch-mode setting, which restricts it to offline learning; with limited computing resources, offline learning cannot handle large-scale data sets, since it must store all training samples. Compared to offline learning, online learning can update its solution without storing all training samples during the learning process, and with the growth of large-scale data sets it has become essential for many applications. Since object tracking needs to handle data samples in real time, online learning based total error rate minimization methods are needed to address object tracking problems efficiently. To meet this need, an online learning based total error rate minimization method was previously developed, but it relied on an approximately reweighted technique. Although this online version of total error rate minimization achieved good performance in biometric applications, it assumes that total error rate minimization is achieved only asymptotically, as the number of training samples goes to infinity. Because of this approximation, learning errors accumulate continuously as training samples arrive, so the approximated online solution can drift to a wrong solution, and a wrong solution can cause significant errors when applied to surveillance systems. In this paper, we propose an exactly reweighted technique to recursively update the solution of total error rate minimization in an online learning manner. In contrast to the approximately reweighted version, an exactly reweighted online total error rate minimization is achieved. The proposed exact online learning method based on total error rate minimization is then applied to object tracking problems. In our object tracking system, particle filtering is adopted; our observation model consists of both generative and discriminative methods to leverage the advantages of both properties. In our experiments, our proposed object tracking system achieves promising performance on 8 public video sequences compared with competing object tracking systems, and a paired t-test is reported to evaluate the quality of the results. Our proposed online learning method can be extended to deep learning architectures covering both shallow and deep networks. Moreover, online learning methods that need an exact reweighting process can use our proposed reweighting technique, and beyond object tracking, the proposed method can easily be applied to object detection and recognition. Therefore, our proposed methods can contribute to the online learning community as well as the object tracking, detection, and recognition communities.
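
For intuition about exact recursive updates of a quadratic objective, here is the generic recursive weighted least-squares building block (via the Sherman-Morrison identity). It is only a sketch of the kind of exact online update involved; the paper's reweighting of the total-error-rate objective itself is more involved:

```python
import numpy as np

class RecursiveWeightedLS:
    """Exact online update of a weighted least-squares solution.

    Maintains P = inverse of the regularized weighted Gram matrix, so each
    new sample updates the exact batch solution without re-solving from scratch.
    """

    def __init__(self, dim, reg=1e-3):
        self.P = np.eye(dim) / reg
        self.w = np.zeros(dim)

    def update(self, x, y, weight=1.0):
        x = np.asarray(x, float)
        Px = self.P @ x
        gain = weight * Px / (1.0 + weight * x @ Px)   # Sherman-Morrison step
        self.w += gain * (y - x @ self.w)
        self.P -= np.outer(gain, Px)

# Toy usage: stream binary-labeled samples, as a tracker would per frame.
rng = np.random.default_rng(1)
true_w = np.array([0.5, -1.0, 2.0])
model = RecursiveWeightedLS(dim=3)
for _ in range(200):
    x = rng.normal(size=3)
    model.update(x, 1.0 if x @ true_w > 0 else -1.0)
print(model.w)   # aligned with the direction of true_w
```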

An Efficient Algorithm for Streaming Time-Series Matching that Supports Normalization Transform (정규화 변환을 지원하는 스트리밍 시계열 매칭 알고리즘)

  • Loh, Woong-Kee; Moon, Yang-Sae; Kim, Young-Kuk
    • Journal of KIISE:Databases / v.33 no.6 / pp.600-619 / 2006
  • With recent technical advances in sensors and mobile devices, processing of the data streams generated by such devices is becoming an important research issue. A data stream of real values obtained at continuous time points is called a streaming time-series. Due to the unique features of streaming time-series, which differ from those of traditional time-series, the similarity matching problem on streaming time-series needs to be solved in a new way. In this paper, we propose an efficient algorithm for the streaming time-series matching problem that supports the normalization transform. While existing algorithms compare streaming time-series without any transform, the algorithm proposed in this paper compares them after they are normalization-transformed. The normalization transform is useful for finding time-series that have similar fluctuation trends even though they consist of distant element values. The major contributions of this paper are as follows. (1) Using a theorem presented in the context of subsequence matching that supports the normalization transform [4], we propose a simple algorithm for solving the problem. (2) To improve search performance, we extend the simple algorithm to use k (≥ 1) indexes. (3) For a given k, to achieve optimal search performance of the extended algorithm, we present an approximation method for choosing the k window sizes used to construct the k indexes. (4) Based on the notion of continuity [8] on streaming time-series, we further extend our algorithm so that it can simultaneously obtain the search results for m (≥ 1) time points, from the present time point t₀ to a near-future time point (t₀+m-1), by retrieving the index only once. (5) Through a series of experiments, we compare the search performance of the algorithms proposed in this paper and show their performance trends according to the k and m values. To the best of our knowledge, there has been no algorithm that solves the same problem presented in this paper, so we compare the search performance of our algorithms with that of the sequential scan algorithm. The experimental results show that our algorithms outperform the sequential scan algorithm by up to 13.2 times, and their performance improves further as k increases.
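
To make concrete what the normalization transform buys, here is the sequential-scan baseline (the comparison method in the experiments) with z-normalization applied per window: a scaled and shifted copy of a pattern still matches. The data and threshold are illustrative:

```python
import numpy as np

def znorm(x):
    """Normalization transform: zero mean and unit standard deviation."""
    x = np.asarray(x, float)
    s = x.std()
    return (x - x.mean()) / s if s > 0 else np.zeros_like(x)

def sequential_scan(stream, query, eps):
    """Report offsets whose z-normalized window is within eps of the query."""
    w, q = len(query), znorm(query)
    return [i for i in range(len(stream) - w + 1)
            if np.linalg.norm(znorm(stream[i:i + w]) - q) <= eps]

stream = np.cumsum(np.random.default_rng(2).normal(size=500))  # random walk
query = stream[100:132] * 3.0 + 50.0   # same trend, distant element values
print(sequential_scan(stream, query, eps=1e-6))   # -> [100]
```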

A New Item Recommendation Procedure Using Preference Boundary

  • Kim, Hyea-Kyeong; Jang, Moon-Kyoung; Kim, Jae-Kyeong; Cho, Yoon-Ho
    • Asia pacific journal of information systems / v.20 no.1 / pp.81-99 / 2010
  • Lately, the number of new items in consumer markets is increasing at an overwhelming rate, while consumers have limited access to information about those new products when trying to make sensible, well-informed purchases. Therefore, item providers and customers need a system that recommends the right items to the right customers. Whenever new items are released, a recommender system specializing in new items can also help item providers locate and identify potential customers. Currently, new items are added to existing systems without being specially noted to consumers, making it difficult for consumers to identify and evaluate new products introduced to the markets. Most previous approaches for recommender systems rely on customers' usage history. For new items, such a content-based (CB) approach is simply unavailable for recommending those new items to potential consumers. Although the collaborative filtering (CF) approach is not directly applicable to the new item problem, it is a good idea to use the basic principle of CF, which identifies similar customers, i.e., neighbors, and recommends items to customers who have liked similar items in the past. This research suggests a hybrid recommendation procedure based on the preference boundary of a target customer in the feature space, for recommending new items only. The basic principle is that if a new item falls within the preference boundary of a target customer, it is evaluated as preferred by that customer. Customers' preferences and the characteristics of items, including new items, are represented in a feature space, and the scope, or boundary, of the target customer's preference is extended to those of the neighbors. The new item recommendation procedure consists of three steps. The first step is analyzing the profiles of items, which are represented as k-dimensional feature values. The second step is determining the representative point, the centroid, of the target customer's preference boundary based on a personal information set. To determine the centroid of the preference boundary of a target customer, three algorithms are developed in this research: one uses the centroid of the target customer only (TC), another uses the centroid of a (dummy) big target customer composed of the target customer and his or her neighbors (BC), and the third uses the centroids of the target customer and his or her neighbors (NC). The third step is determining the range of the preference boundary, the radius. The suggested algorithm uses the average distance (AD) between the centroid and all purchased items. We test whether the CF-based approach to determining the centroid of the preference boundary improves recommendation quality. For this purpose, we develop two hybrid algorithms, BC and NC, which use neighbors when deciding the centroid of the preference boundary. To test the validity of the hybrid algorithms BC and NC, we developed a CB algorithm, TC, which uses target customers only. We measured the effectiveness scores of the suggested algorithms and compared them through a series of experiments with a set of real mobile image transaction data. We split the period from 1 June 2004 to 31 July 2004 and the period from 1 August 2004 to 31 August 2004 into a training set and a test set, respectively. The training set is used to construct the preference boundary, and the test set is used to evaluate the performance of the suggested hybrid recommendation procedure. The main aim of this research is to compare the hybrid recommendation algorithms with the CB algorithm. To evaluate the performance of each algorithm, we compare the list of new items purchased in the test period with the list of items recommended by the suggested algorithms, employing the hit ratio as the evaluation metric. The hit ratio is defined as the ratio of the hit set size to the recommended set size, where the hit set size means the number of successful recommendations in our experiment and the test set size means the number of items purchased during the test period. The experimental results show that the hit ratios of BC and NC are higher than that of TC, which means that using neighbors is more effective for recommending new items; that is, a hybrid algorithm using CF is more effective in recommending new items to consumers than an algorithm using CB alone. The reason the hit ratio of BC is lower than that of NC is that BC is defined as a dummy or virtual customer who purchased all items of the target customer and the neighbors; the centroid of BC thus often shifts away from that of TC and tends to reflect skewed characteristics of the target customer. The recommendation algorithm using NC shows the best hit ratio, because NC has sufficient information about the target customer and the neighbors without damaging the information about the target customer.
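
A minimal sketch of the boundary test with a BC-style centroid (pooling the target's and the neighbors' purchased items into one dummy big customer; NC would instead keep per-customer centroids). The feature vectors and the purchased test set are hypothetical:

```python
import numpy as np

# Hypothetical k=2 feature vectors of purchased items (one row per item).
target_items = np.array([[0.2, 0.3], [0.4, 0.1]])
neighbor_items = np.array([[0.3, 0.2], [0.5, 0.3]])

# BC-style centroid: one (dummy) big customer owning all pooled items.
pooled = np.vstack([target_items, neighbor_items])
centroid = pooled.mean(axis=0)

# Radius = average distance (AD) between the centroid and all purchased items.
radius = np.linalg.norm(pooled - centroid, axis=1).mean()

# Recommend a new item iff it falls inside the preference boundary.
new_items = np.array([[0.35, 0.22], [0.90, 0.90]])
recommended = np.flatnonzero(np.linalg.norm(new_items - centroid, axis=1) <= radius)

# Hit ratio = |hit set| / |recommended set|.
purchased_in_test = {0}                       # hypothetical ground truth
hits = purchased_in_test & set(recommended.tolist())
print(len(hits) / len(recommended) if len(recommended) else 0.0)   # -> 1.0
```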

Prefetching based on the Type-Level Access Pattern in Object-Relational DBMSs (객체관계형 DBMS에서 타입수준 액세스 패턴을 이용한 선인출 전략)

  • Han, Wook-Shin; Moon, Yang-Sae; Whang, Kyu-Young
    • Journal of KIISE:Databases / v.28 no.4 / pp.529-544 / 2001
  • Prefetching is an effective method to minimize the number of roundtrips between the client and the server in database management systems. In this paper, we propose the new notions of the type-level access pattern and type-level access locality, and develop an efficient prefetching policy based on these notions. A type-level access pattern is a sequence of attributes that are referenced in accessing objects; type-level access locality is the phenomenon that regular and repetitive type-level access patterns exist. Existing prefetching methods are based on object-level or page-level access patterns, which consist of the object-ids or page-ids of the objects accessed. The drawback of these methods is that they work only when exactly the same objects or pages are accessed repeatedly. In contrast, even when the same objects are not accessed repeatedly, our technique effectively prefetches objects if the same attributes are referenced repeatedly, i.e., if there is type-level access locality. Many navigational applications in Object-Relational Database Management Systems (ORDBMSs) have type-level access locality, so our technique can be employed in ORDBMSs to effectively reduce the number of roundtrips and thereby significantly enhance performance. We have conducted extensive experiments in a prototype ORDBMS to show the effectiveness of our algorithm. Experimental results using the OO7 benchmark and a real GIS application show that our technique provides orders-of-magnitude improvements in the number of roundtrips and several-fold improvements in overall performance over on-demand fetching and context-based prefetching, a state-of-the-art prefetching method. These results indicate that our approach improves performance significantly and is a practical method that can be implemented in commercial ORDBMSs.
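
An illustrative sketch of the core idea, under the assumption that the pattern store is simply keyed by type name: once the same attributes have been seen referenced for a type, they are prefetched for every later object of that type, even though the objects themselves never repeat:

```python
from collections import defaultdict

class TypeLevelPrefetcher:
    """Record which attributes each type references; prefetch them next time."""

    def __init__(self):
        self.pattern = defaultdict(set)       # type name -> attributes referenced

    def record(self, type_name, attribute):
        self.pattern[type_name].add(attribute)

    def prefetch_plan(self, type_name):
        # Attributes to fetch eagerly on the next access to an object of this
        # type, saving one client-server roundtrip per attribute.
        return sorted(self.pattern[type_name])

p = TypeLevelPrefetcher()
for emp in ("emp1", "emp2", "emp3"):          # distinct objects every time...
    p.record("Employee", "name")              # ...but the same attribute pattern
    p.record("Employee", "dept.manager")
print(p.prefetch_plan("Employee"))            # -> ['dept.manager', 'name']
```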
