• Title/Summary/Keyword: linear standard model

432 search results

Preliminary study on the use of near infrared spectroscopy for determination of plasma deuterium oxide in dairy cattle

  • Purnomoadi, Agung; Nonaka, Itoko; Higuchi, Kouji; Enishi, Osamu; Amari, Masahiro; Terada, Fuminori
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference / 2001.06a / pp.4101-4101 / 2001
  • Information on body composition (fat and protein) in living animals is important for determining nutrient requirements. The deuterium oxide (D2O) dilution technique, one of the isotope dilution techniques, has been useful for predicting body composition. However, the determination of D2O concentration is time-consuming and complicated. This study was therefore conducted to develop a new method for predicting the D2O concentration in plasma using near infrared spectroscopy (NIRS). Four dairy cows in early lactation were used. They were fed a total mixed ration containing corn silage, timothy hay, and concentrates formulated to 17.0% CP and 14.0 MJ DE/kg DM. D2O was dosed at weeks 1, 3 and 5 after parturition, and blood was collected from 0 to 72 hours after dosing. Blood samples were then centrifuged at 3,000 rpm for 10 minutes to obtain plasma. The D2O concentration was analyzed by gas chromatograph (deuterium oxide analysis system, HK102, Shokotsusyou) after extraction from plasma by lyophilization. Plasma samples were scanned by NIRS using a Pacific Scientific (Neotec) model 6500 (Perstorp Analytical, Silver Spring, MD) over the wavelength range 1100 to 2500 nm. The calibration equation was developed using multiple linear regression. Samples from one animal (cow #550; n = 74) were used for developing the calibration, while the remaining three animals were used for validating the equation. The range, R and SEC of the calibration set samples were 135-925 ppm, 0.93 and 48.1 ppm, respectively. The calibration equation was validated on the three individual cows, and the average NIR-predicted D2O value at each collection time across the three weekly injections showed a high correlation. The range, r and SEP for plasma from cow #474 were 322-840 ppm, 0.93 and 53.1 ppm; for cow #478, 146-951 ppm, 0.95 and 39.8 ppm; and for cow #942, 313-885 ppm, 0.95 and 37.2 ppm, respectively. Accuracy was judged by the ratio of the standard deviation to the standard error of the validation set samples (RPD), which was 2.2, 4.3 and 3.4 for cows #474, #478 and #942, respectively. The application error due to variation between individuals was considered smaller than the bias from the collection period; this bias can be overcome by correcting against the standard zero-minute blood concentration. The results of this preliminary study on the use of NIRS for the determination of D2O in plasma were very promising, offering a convenient method with satisfactory accuracy. Further study covering various physiological stages of the animal should be done.
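
The figures of merit quoted in this abstract (r, SEC/SEP, RPD) follow standard chemometric definitions. As a rough illustration, here is a minimal Python sketch; the toy reference/predicted values are hypothetical, not the study's data:

```python
import numpy as np

def calibration_stats(reference, predicted, n_terms=1):
    """Standard chemometric figures of merit for an NIRS calibration:
    r (correlation), SEC (standard error of calibration, with degrees
    of freedom corrected for the number of regression terms), and RPD
    (SD of the reference values divided by the prediction error)."""
    reference = np.asarray(reference, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    residuals = predicted - reference
    n = len(reference)
    sec = np.sqrt(np.sum(residuals**2) / (n - n_terms - 1))
    r = np.corrcoef(reference, predicted)[0, 1]
    rpd = np.std(reference, ddof=1) / np.sqrt(np.mean(residuals**2))
    return r, sec, rpd

# Toy reference vs. predicted D2O concentrations (ppm) -- hypothetical
ref = np.array([135.0, 300.0, 500.0, 700.0, 925.0])
pred = ref + np.array([5.0, -4.0, 3.0, -2.0, 1.0])
r, sec, rpd = calibration_stats(ref, pred)
```

An RPD above roughly 3 is conventionally read as adequate for quantitative prediction, which is how the per-cow values of 2.2-4.3 above are interpreted.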


A Study on the Emission and Dispersion of Particulate Matter from a Cement Plant (한 시멘트공장의 분진발생과 대기확산에 관한 조사연구)

  • Chang, Man-Ik; Chung, Yong; Kwon, Sook-Pyo
    • Journal of Preventive Medicine and Public Health / v.16 no.1 / pp.67-77 / 1983
  • To investigate air pollution by particulate matter and its dispersion, a cement plant producing 600,000 tons of portland cement per year and its vicinity were surveyed from October 1980 to April 1983. The survey focused on the main stack emission rate of the plant, the particle size distribution of the dust, and the dustfall and total suspended particulate concentrations in the area by month and by distance from the stack. The results were as follows: 1. The main stack emission rate was surveyed before and after a spray tower was added to the original EP bag filter. Before the spray tower was installed, the main stack emission rate (0.64 g/Nm³) was higher than the emission standard of the Korean Environmental Preservation Law (0.59 g/Nm³, amended to 0.4 g/Nm³ in April 1983), but after installation it decreased markedly, to within the standard (0.43 g/Nm³). 2. Particles of 2-3 μm made up the largest portion (20.8%) of the dust from the main stack, the median of the frequency distribution was 1.5 μm, and most particles were below 10 μm. 3. The spray tower reduced dustfall in the vicinity of the plant to 9.76-37.81 ton/km²/month, compared with 15.45-45.29 ton/km²/month before installation. 4. Mean 24-hour concentrations of total suspended particulate at the various stations were 20.6-200.0 μg/m³; three stations exceeded Harry and William's arithmetic average standard of 130 μg/m³. 5. The linear regression between dustfall [X] and total suspended particulate [Y] was Y = 4.024X + 11.479 (r = 0.91). 6. Throughout the seasons, in the area 100 m from the emission source along the prevailing wind direction, dustfall was estimated at more than 30 ton/km²/month, and the 24-hour average concentration of total suspended particulate exceeded 140 μg/m³ in the same area and direction. 7. Assuming a constant wind direction throughout the day, daily dustfall was estimated at 13.40, 10.79 and 4.55 ton/km²/day at distances of 100 m, 500 m and 1,500 m from the emission source, respectively. 8. In the simulation of dustfall and suspended dust by area, a Gaussian dispersion model modified by the particle size distribution was not applicable, since the dust was emitted from multiple sources other than the stack. From these results, the dispersion of dust from the cement plant can be estimated and regulated for the purpose of environmental protection.
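
The regression in finding 5 can be reproduced with ordinary least squares. The sketch below uses hypothetical paired measurements generated around the reported line Y = 4.024X + 11.479, not the survey data:

```python
import numpy as np

# Hypothetical paired observations: dustfall X (ton/km^2/month) and
# total suspended particulate Y (ug/m^3), generated to follow the
# reported fit Y = 4.024 X + 11.479 with small noise.
rng = np.random.default_rng(0)
x = np.linspace(5.0, 45.0, 20)
y = 4.024 * x + 11.479 + rng.normal(0.0, 5.0, size=x.size)

# Ordinary least squares: degree-1 polynomial fit
slope, intercept = np.polyfit(x, y, 1)
r = np.corrcoef(x, y)[0, 1]
tsp_at_30 = slope * 30.0 + intercept  # predicted TSP at 30 ton/km^2/month
```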


Performance of Investment Strategy using Investor-specific Transaction Information and Machine Learning (투자자별 거래정보와 머신러닝을 활용한 투자전략의 성과)

  • Kim, Kyung Mock; Kim, Sun Woong; Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.65-82 / 2021
  • Stock market investors are generally divided into foreign investors, institutional investors, and individual investors. Compared to individual investors, professional investor groups such as foreign investors have an advantage in information and financial power, and as a result foreign investors are known to show good investment performance among market participants. The purpose of this study is to propose an investment strategy that combines investor-specific transaction information and machine learning, and to analyze the portfolio investment performance of the proposed model using actual stock prices and investor-specific transaction data. The Korea Exchange provides securities firms with daily information on the purchase and sale volumes of each investor type. We developed a data collection program in the C# programming language using an API provided by Daishin Securities Cybosplus, and collected daily opening prices, closing prices and investor-specific net purchase data for 151 of the 200 KOSPI stocks from January 2, 2007 to July 31, 2017. The self-organizing map is an artificial neural network that performs clustering by unsupervised learning; it was introduced by Teuvo Kohonen in 1984. Competition is implemented among the neurons within the map surface, and all connections are non-recursive, feeding forward from bottom to top. It can also be expanded to multiple layers, although a single layer is commonly used. Linear functions are used as the activation functions of the artificial neurons, and the learning rule uses the Instar rule as well as general competitive learning. The backpropagation model is, at its core, an artificial neural network that performs classification by supervised learning. We grouped and transformed the investor-specific transaction volume data through the self-organizing map and used the result to train backpropagation models. Based on the model's predictions on the verification data, the portfolios were rebalanced monthly. For performance analysis, a passive portfolio was designated, and the KOSPI 200 and KOSPI index returns were obtained as proxies for market returns. Performance analysis was conducted using the equally-weighted portfolio return, compound interest rate, annual return, maximum drawdown (MDD), standard deviation, and Sharpe ratio. The buy-and-hold return of the top 10 stocks by market capitalization was designated as a benchmark; buy-and-hold is the best strategy under the efficient market hypothesis. The prediction accuracy of the backpropagation model was very high on the learning data at 96.61%, and relatively high on the verification data at 57.1%. The grouping performance of the self-organizing map can be judged from the backpropagation results: had the grouping been poor, the learning results of the backpropagation model would also have been poor. In this respect, the machine learning model is judged to have learned better than in previous studies. Our portfolio doubled the benchmark return and outperformed the market returns of the KOSPI and KOSPI 200 indexes. In contrast to the benchmark, the MDD and standard deviation as portfolio risk indicators also showed better results, and the Sharpe ratio was higher than those of the benchmark and the stock market indexes. Through this, we presented a direction for portfolio composition programs using machine learning and investor-specific transaction information, and showed that it can be used to develop programs for real stock investment. The reported return results from monthly portfolio composition and rebalancing of assets to equal proportions. Better outcomes are expected if the monthly portfolio is maintained by continuously rebalancing the suggested stocks rather than selling and re-buying them; the strategy therefore appears relevant to real trading.
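
As an illustration of the grouping stage, the following is a minimal one-dimensional self-organizing map sketch on toy two-group data. The network size, learning-rate schedule, and feature construction are assumptions for illustration, not the study's configuration:

```python
import numpy as np

def train_som(data, n_units=4, n_iters=500, lr0=0.5, sigma0=1.0, seed=0):
    """Minimal 1-D self-organizing map (Kohonen). Returns the trained
    unit weight vectors; both learning rate and neighborhood width
    decay linearly over the iterations."""
    rng = np.random.default_rng(seed)
    n, d = data.shape
    weights = rng.normal(size=(n_units, d))
    positions = np.arange(n_units)
    for t in range(n_iters):
        x = data[rng.integers(n)]
        # best-matching unit: closest weight vector to the sample
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
        lr = lr0 * (1.0 - t / n_iters)
        sigma = sigma0 * (1.0 - t / n_iters) + 1e-3
        # Gaussian neighborhood around the winner on the 1-D map
        h = np.exp(-((positions - bmu) ** 2) / (2.0 * sigma**2))
        weights += lr * h[:, None] * (x - weights)
    return weights

def assign(data, weights):
    """Map each sample to its nearest SOM unit (its group label)."""
    d = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2)
    return np.argmin(d, axis=1)

# Toy example: two well-separated groups of daily net-purchase features
rng = np.random.default_rng(1)
a = rng.normal(loc=-3.0, scale=0.3, size=(50, 2))
b = rng.normal(loc=3.0, scale=0.3, size=(50, 2))
data = np.vstack([a, b])
w = train_som(data, n_units=2)
labels = assign(data, w)
```

The group labels produced this way would then serve as the training targets or transformed inputs for the supervised backpropagation stage.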

Quantitative Flood Forecasting Using Remotely-Sensed Data and Neural Networks

  • Kim, Gwangseob
    • Proceedings of the Korea Water Resources Association Conference / 2002.05a / pp.43-50 / 2002
  • Accurate quantitative forecasting of rainfall for basins with a short response time is essential to predict streamflow and flash floods. Previously, neural networks were used to develop a Quantitative Precipitation Forecasting (QPF) model that greatly improved forecasting skill at specific locations in Pennsylvania, using Numerical Weather Prediction (NWP) output together with rainfall and radiosonde data. The objective of this study was to improve the existing artificial neural network model and incorporate the evolving structure and frequency of intense weather systems in the mid-Atlantic region of the United States for improved flood forecasting. Besides radiosonde and rainfall data, the model also used as input the satellite-derived characteristics of storm systems such as tropical cyclones, mesoscale convective complexes and convective cloud clusters. The convective classification and tracking system (CCATS) was used to identify and quantify storm properties such as lifetime, area, eccentricity, and track. As in standard expert prediction systems, the fundamental structure of the neural network model was learned from the hydroclimatology of the relationships between weather systems, rainfall production and streamflow response in the study area. The new Quantitative Flood Forecasting (QFF) model was applied to predict streamflow peaks with lead times of 18 and 24 hours over a five-year period in 4 watersheds on the leeward side of the Appalachian mountains in the mid-Atlantic region. Threat scores consistently above 0.6 and close to 0.8-0.9 were obtained for the 18-hour lead-time forecasts, and skill scores of at least 4% and up to 6% were attained for the 24-hour lead-time forecasts. This work demonstrates that multisensor data cast into an expert information system such as a neural network, if built upon scientific understanding of regional hydrometeorology, can lead to significant gains in the forecast skill of extreme rainfall and associated floods. In particular, this study validates our hypothesis that accurate and extended flood forecast lead times can be attained by taking into consideration the synoptic evolution of atmospheric conditions extracted from the analysis of large-area remotely sensed imagery. While physically-based numerical weather prediction and river routing models cannot accurately depict complex natural non-linear processes, and thus have difficulty simulating extreme events such as heavy rainfall and floods, data-driven approaches should be viewed as a strong alternative in operational hydrology. This is especially pertinent at a time when the diversity of sensors on satellites and in ground-based operational weather monitoring systems provides large volumes of data on a real-time basis.
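
The threat score reported above is the standard critical success index for event forecasts: hits / (hits + misses + false alarms). A minimal sketch with hypothetical forecast/observation pairs:

```python
def threat_score(forecast, observed, threshold):
    """Threat score (critical success index) for exceedance events:
    hits / (hits + misses + false alarms). Correct non-events do not
    enter the score."""
    hits = misses = false_alarms = 0
    for f, o in zip(forecast, observed):
        f_event, o_event = f >= threshold, o >= threshold
        if f_event and o_event:
            hits += 1
        elif o_event:
            misses += 1
        elif f_event:
            false_alarms += 1
    denom = hits + misses + false_alarms
    return hits / denom if denom else 0.0

# Toy streamflow peaks (arbitrary units) with an event threshold of 50:
# 2 hits, 1 miss, 1 false alarm -> TS = 2 / 4 = 0.5
fcst = [60, 55, 20, 70, 10, 45]
obs = [58, 40, 25, 65, 12, 52]
ts = threat_score(fcst, obs, 50.0)
```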


Improving the Accuracy of Early Diagnosis of Thyroid Nodule Type Based on the SCAD Method

  • Shahraki, Hadi Raeisi; Pourahmad, Saeedeh; Paydar, Shahram; Azad, Mohsen
    • Asian Pacific Journal of Cancer Prevention / v.17 no.4 / pp.1861-1864 / 2016
  • Although early diagnosis of thyroid nodule type is very important, the diagnostic accuracy of standard tests is a challenging issue. We aimed to find an optimal combination of factors to improve diagnostic accuracy in distinguishing malignant from benign thyroid nodules before surgery. In a prospective study from 2008 to 2012, 345 patients referred for thyroidectomy were enrolled. The sample was split into training and testing sets in a ratio of 7:3; the former was used for estimation, variable selection, and obtaining a linear combination of factors. We utilized smoothly clipped absolute deviation (SCAD) logistic regression to achieve a sparse optimal combination of factors, and a receiver operating characteristic (ROC) curve was used to evaluate the performance of the estimated model on the testing set. The mean age of the examined patients (66 male and 279 female) was 40.9 ± 13.4 years (range 15-90 years). Some 54.8% of the patients (24.3% male and 75.7% female) had benign and 45.2% (14% male and 86% female) malignant thyroid nodules. In addition to the maximum diameters of the nodules and lobes, their volumes were considered as candidate factors for malignancy prediction (a total of 16 factors). The SCAD method estimated the coefficients of 8 factors to be zero and eliminated them from the model, generating a sparse model that combines the effects of the remaining 8 factors to distinguish malignant from benign thyroid nodules. An optimal cut-off point of the ROC curve for the estimated model was obtained (p = 0.44), and the area under the curve (AUC) was 77% (95% CI: 68%-85%). The sensitivity, specificity, positive predictive value and negative predictive value of this model were 70%, 72%, 71% and 76%, respectively. An increase of 10 percent or more in the accuracy of early diagnosis of thyroid nodule type by the statistical methods (SCAD and ANN) compared with the results of FNA testing showed that statistical modeling is helpful in disease diagnosis. In addition, the factor ranking offered by these methods is valuable in the clinical context.
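
The SCAD penalty that drives this variable selection (Fan and Li's smoothly clipped absolute deviation, conventionally with a = 3.7) is linear near zero, then tapers, then becomes constant; this is what shrinks weak coefficients to exactly zero while leaving strong ones nearly unpenalized. A minimal sketch of the penalty function itself (not the study's fitting code):

```python
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty (Fan & Li, a = 3.7 by convention), piecewise:
    lasso-like (lam*|t|) for |t| <= lam, quadratic taper for
    lam < |t| <= a*lam, and constant lam^2*(a+1)/2 beyond a*lam."""
    t = np.abs(np.asarray(theta, dtype=float))
    return np.where(
        t <= lam,
        lam * t,
        np.where(
            t <= a * lam,
            (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1)),
            lam**2 * (a + 1) / 2,
        ),
    )

lam = 1.0
small = scad_penalty(0.5, lam)   # in the lasso-like region: 1.0 * 0.5
large = scad_penalty(10.0, lam)  # in the flat region: 1^2 * 4.7 / 2
```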

Characteristics of Aerodynamic Damping on Helical-Shaped Super Tall Building (나선형 형상의 초고층건물의 공력감쇠의 특성)

  • Kim, Wonsul; Yi, Jin-Hak; Tamura, Yukio
    • KSCE Journal of Civil and Environmental Engineering Research / v.37 no.1 / pp.9-17 / 2017
  • The characteristics of the aerodynamic damping ratio of a helical 180° model, which shows better aerodynamic behavior in both along-wind and across-wind responses of a super-tall building, were investigated by an aeroelastic model test. The aerodynamic damping ratio was evaluated from the wind-induced responses of the model using the Random Decrement (RD) technique, and various triggering levels for the RD evaluation were also examined. It was found that when at least 2000 segments were used for ensemble averaging, the aerodynamic damping ratio could be obtained consistently, with lower irregular fluctuations; this is in good agreement with previous studies. Another notable observation was that for the square and helical 180° models, the aerodynamic damping ratios in the along-wind direction showed similar linear trends with reduced wind speed, regardless of building shape. On the other hand, the aerodynamic damping ratio of the helical 180° model in the across-wind direction showed quite different trends from that of the square model. In addition, the aerodynamic damping ratios of the helical 180° model showed very similar trends with respect to changes in wind direction, gradually increasing with reduced wind speed with only small fluctuations. Finally, in defining triggering levels for the RD technique, it may be possible to adopt the "standard deviation" or "√2 times the standard deviation" of the response time history if the RD functions have a large number of triggering points; these triggering levels give similar values and distributions over reduced wind speed, and either may be acceptable.
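
The Random Decrement technique referenced above averages response segments that start whenever the time history crosses a chosen triggering level, so the random part cancels and a free-decay signature remains, from which damping can be fitted. A minimal sketch on a simulated lightly damped oscillator (all parameters are illustrative assumptions, not the wind-tunnel settings):

```python
import numpy as np

def rd_signature(x, trigger, seg_len):
    """Random Decrement signature: the ensemble average of all segments
    that start at an upward crossing of the triggering level."""
    starts = [i for i in range(1, len(x) - seg_len)
              if x[i] >= trigger > x[i - 1]]
    segs = np.array([x[i:i + seg_len] for i in starts])
    return segs.mean(axis=0), len(starts)

# Toy response: a 1 Hz oscillator with 2% damping under random forcing,
# integrated with semi-implicit Euler.
rng = np.random.default_rng(0)
dt, zeta, wn, n = 0.01, 0.02, 2 * np.pi, 20000
x = np.zeros(n)
v = 0.0
for k in range(1, n):
    acc = rng.normal() - 2 * zeta * wn * v - wn**2 * x[k - 1]
    v += acc * dt
    x[k] = x[k - 1] + v * dt

# "Standard deviation" triggering level, as discussed in the abstract
sigma = x.std()
sig, n_segs = rd_signature(x, sigma, seg_len=500)
```

With more segments the signature becomes smoother, which matches the abstract's observation that the estimate stabilizes once enough (there, at least 2000) segments are averaged.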

Advantages of Using Artificial Neural Network Calibration Techniques on Near-Infrared Agricultural Data

  • Buchmann, Nils-Bo; Cowe, Ian A.
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference / 2001.06a / pp.1032-1032 / 2001
  • Artificial Neural Network (ANN) calibration techniques have been used commercially for agricultural applications since the mid-nineties. Global models, based on transmission data from 850 to 1050 nm, are used routinely to measure protein and moisture in wheat and barley, and also moisture in triticale, rye, and oats. These models are currently used commercially in approximately 15 countries throughout the world. Results concerning the earlier European ANN models are being published elsewhere; some of the findings from that study will be discussed here. ANN models have also been developed for coarsely ground samples of compound feed and feed ingredients, again measured in transmission mode from 850 to 1050 nm. The performance of the models for pig and poultry feed will be discussed briefly. These models were developed from a very large data set (more than 20,000 records) and cover a very broad range of finished products. The prediction curves are linear over the entire range for protein, fat, moisture, fibre, and starch (the last measured only on poultry feed), and accuracy is in line with the performance of smaller models based on Partial Least Squares (PLS). A simple bias adjustment is sufficient for calibration transfer across instruments. Recently, we have investigated the possible use of ANN with a different type of NIR spectrometer, based on reflectance data from 1100 to 2500 nm. In one study, based on data for protein, fat, and moisture measured on unground compound feed samples, dedicated ANN models for specific product classes (cattle, pig, broiler, and layer feed) gave moderately better Standard Errors of Prediction (SEP) than modified PLS (MPLS). However, when the four product classes were combined into one general calibration model, the performance of the ANN model deteriorated only slightly compared to the class-specific models, while the SEP values of the MPLS predictions doubled. Brix value in molasses is a measure of sugar content; even with a huge dataset, PLS models were not sufficiently accurate for commercial use, whereas an ANN model based on the same data improved the accuracy considerably and straightened out the non-linearity in the prediction plot. The work of David Funk (GIPSA, U.S. Department of Agriculture), who has studied the influence of various types of spectral distortion on ANN and PLS models, thereby providing comparative information on the robustness of these models to instrument differences, will also be discussed. That study was based on data from different classes of North American wheat measured in transmission from 850 to 1050 nm. The distortions studied included absorbance offset, pathlength variation, presence of stray light, bandwidth, and wavelength stretch and offset (individually or combined). It was shown that a global ANN model was much less sensitive to most perturbations than the class-specific GIPSA PLS calibrations. It is concluded that ANN models based on large data sets offer substantial advantages over PLS models with respect to accuracy, the range of materials that can be handled by a single calibration, stability, transferability, and sensitivity to perturbations.
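
The "simple bias adjustment" mentioned for calibration transfer amounts to subtracting the mean prediction error measured on a small transfer set on the second instrument. A minimal sketch with hypothetical protein predictions:

```python
import numpy as np

# Hypothetical calibration-transfer check: the master-instrument model,
# run on a second instrument, shows a near-constant offset in predicted
# protein (%). All values below are illustrative, not measured data.
reference  = np.array([10.2, 11.5, 12.8, 13.1, 14.4])  # lab reference
slave_pred = np.array([10.9, 12.1, 13.5, 13.8, 15.0])  # 2nd instrument

bias = np.mean(slave_pred - reference)  # mean prediction error
adjusted = slave_pred - bias            # bias-adjusted predictions
sep_before = np.sqrt(np.mean((slave_pred - reference) ** 2))
sep_after = np.sqrt(np.mean((adjusted - reference) ** 2))
```

When the between-instrument difference really is a constant offset, this single subtraction removes almost all of the transfer error, which is why no full recalibration is needed.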


Dynamic Behavior Modelling of Augmented Objects with Haptic Interaction (햅틱 상호작용에 의한 증강 객체의 동적 움직임 모델링)

  • Lee, Seonho; Chun, Junchul
    • Journal of Internet Computing and Services / v.15 no.1 / pp.171-178 / 2014
  • This paper presents dynamic modelling of a virtual object in an augmented reality environment when external forces are applied to the object in real time. In order to simulate the natural behavior of the object, we employ Newtonian physics to construct the motion equation of the object according to the varying external forces applied to the AR object. In the dynamic modelling process, physical interaction takes place between the augmented object and a physical object such as a haptic input device, through which the external forces are transferred. The intrinsic properties of the augmented object follow either a rigid or an elastically deformable (non-rigid) model. For a rigid object, the dynamic motion is simulated when the haptic stick collides with the augmented object, by considering linear and angular momentum. For a non-rigid object, a physics-based simulation approach is adopted, since elastically deformable models respond in a natural way to external and internal forces and constraints. Depending on the force applied by the user through the haptic interface and the model's intrinsic properties, the virtual elastic object in AR deforms naturally. In the simulation, we exploit the standard mass-spring-damper differential equation, based on Newton's second law of motion, to model deformable objects. The experiments show that the physically based behavior of virtual objects in AR can be successfully visualized as the haptic device interacts with the rigid or non-rigid virtual object.
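
The mass-spring-damper equation mentioned above, m·a = f_ext − k·x − c·v, can be integrated per node with semi-implicit Euler, a common choice for stable real-time haptic simulation. A minimal one-node sketch (the stiffness, damping, and force values are illustrative, not the paper's parameters):

```python
def step(x, v, f_ext, m=1.0, k=50.0, c=2.0, dt=0.01):
    """One semi-implicit Euler step of the mass-spring-damper node:
    m*a = f_ext - k*x - c*v (Newton's second law). Velocity is updated
    first, then position uses the new velocity (symplectic, stable)."""
    a = (f_ext - k * x - c * v) / m
    v = v + a * dt
    x = x + v * dt
    return x, v

# A haptic "poke": constant external force for 0.2 s, then release.
x, v = 0.0, 0.0
trajectory = []
for i in range(300):
    f = 5.0 if i < 20 else 0.0
    x, v = step(x, v, f)
    trajectory.append(x)
```

After the force is released, the node oscillates about equilibrium and the damping term makes the amplitude decay, which is the natural spring-back behavior described in the abstract.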

Elevation Correction of Multi-Temporal Digital Elevation Model based on Unmanned Aerial Vehicle Images over Agricultural Area (농경지 지역 무인항공기 영상 기반 시계열 수치표고모델 표고 보정)

  • Kim, Taeheon; Park, Jueon; Yun, Yerin; Lee, Won Hee; Han, Youkyung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.3 / pp.223-235 / 2020
  • In this study, we propose an approach for correcting the elevation of a DEM (Digital Elevation Model), one of the key datasets for realizing unmanned aerial vehicle image-based precision agriculture. First, radiometric correction is performed on the orthophoto, and ExG (Excess Green) is generated. The non-vegetation area is extracted based on a threshold estimated by applying the Otsu method to ExG. Subsequently, the elevations of the DEM corresponding to the non-vegetation area are extracted as EIFs (Elevation Invariant Features), the data used for elevation correction. A normalized Z-score is estimated from the differences between the extracted EIFs to eliminate outliers. A linear regression model is then constructed to correct the elevation of the DEM, producing a high-quality DEM without GCPs (Ground Control Points). To verify the proposed method on a total of 10 DEMs, the maximum/minimum values and the average/standard deviation before and after elevation correction were compared and analyzed. In addition, the RMSE (Root Mean Square Error) estimated at selected checkpoints averaged 0.35 m. Overall, it was confirmed that a high-quality DEM can be produced without GCPs.
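
The last two steps of this pipeline (Z-score outlier removal over the EIF elevation differences, then a linear regression mapping the target DEM's elevations onto the reference) can be sketched as below; the elevation samples and thresholds are hypothetical:

```python
import numpy as np

def correct_dem(dem_elev, ref_elev, z_thresh=2.0):
    """Fit a linear elevation correction against reference elevations at
    elevation-invariant features: drop outlier pairs whose normalized
    Z-score of the elevation difference exceeds the threshold, then
    regress reference elevation on target-DEM elevation."""
    dem_elev = np.asarray(dem_elev, dtype=float)
    ref_elev = np.asarray(ref_elev, dtype=float)
    diff = dem_elev - ref_elev
    z = (diff - diff.mean()) / diff.std()
    keep = np.abs(z) < z_thresh
    slope, intercept = np.polyfit(dem_elev[keep], ref_elev[keep], 1)
    return slope, intercept

# Hypothetical EIF samples: the target DEM carries a 1.2 m offset,
# plus one gross outlier (e.g. an object present in only one epoch).
ref = np.array([51.0, 53.5, 56.0, 58.2, 60.4, 62.1, 64.8, 67.3])
dem = ref + 1.2
dem[3] += 6.0
slope, intercept = correct_dem(dem, ref)
corrected = slope * dem + intercept
```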

Design of Data-centroid Radial Basis Function Neural Network with Extended Polynomial Type and Its Optimization (데이터 중심 다항식 확장형 RBF 신경회로망의 설계 및 최적화)

  • Oh, Sung-Kwun; Kim, Young-Hoon; Park, Ho-Sung; Kim, Jeong-Tae
    • The Transactions of The Korean Institute of Electrical Engineers / v.60 no.3 / pp.639-647 / 2011
  • In this paper, we introduce a design methodology for data-centroid Radial Basis Function (RBF) neural networks with extended polynomial functions. The two underlying design mechanisms of such networks are the K-means clustering method and Particle Swarm Optimization (PSO): K-means clustering is used for efficient processing of the data, and the optimization of the model is carried out with PSO. As the connection weights of the RBF neural network, we are able to use four types of polynomials: simplified, linear, quadratic, and modified quadratic. Using K-means clustering, the center values of the Gaussian activation functions are selected. The PSO-based RBF neural network results in a structurally optimized network with a higher level of flexibility than conventional RBF neural networks. The PSO-based design procedure, applied at each node, leads to the selection of preferred parameters with specific local characteristics (such as the number of input variables, a specific set of input variables, and the distribution constant of the activation function) available within the RBF neural network. To evaluate the performance of the proposed data-centroid RBF neural network with extended polynomial functions, the model is experimented with using nonlinear process data (2-dimensional synthetic data and Mackey-Glass time series data) and machine learning datasets (NOx emission data from a gas turbine plant, Automobile Miles per Gallon (MPG) data, and Boston housing data). For the characteristic analysis of the given nonlinear datasets, as well as the efficient construction and evaluation of the dynamic network model, the data are partitioned in two ways: Division I (training and testing datasets) and Division II (training, validation, and testing datasets). A comparative analysis shows that the proposed RBF neural network produces models with higher accuracy and superior predictive capability compared with other intelligent models presented previously.
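
The basic construction (K-means placement of the Gaussian centers, then a linear least-squares fit of the output weights) can be sketched as below. This simplified version uses constant output weights rather than the paper's extended polynomial connection weights, and all sizes are illustrative:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain K-means: place RBF centers at the data centroids."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1),
                           axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def rbf_design(X, centers, gamma):
    """Design matrix of Gaussian activations exp(-gamma * ||x - c||^2)."""
    d2 = ((X[:, None] - centers[None]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Fit y = sin(x) with 10 Gaussian units and linear output weights.
rng = np.random.default_rng(1)
X = rng.uniform(0, 2 * np.pi, size=(200, 1))
y = np.sin(X[:, 0])
centers = kmeans(X, k=10)
Phi = rbf_design(X, centers, gamma=2.0)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
rmse = np.sqrt(np.mean((Phi @ w - y) ** 2))
```

In the paper's extended scheme, each scalar weight here would be replaced by a (simplified, linear, quadratic, or modified quadratic) polynomial of the inputs, and PSO would tune the per-node structural parameters.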