• Title/Summary/Keyword: Linear systems

Search Results: 5,898

Perfluoropolymer Membranes of Tetrafluoroethylene and 2,2,4-Trifluoro-5-Trifluoromethoxy-1,3-Dioxole.

  • Arcella, V.;Colaianna, P.;Brinati, G.;Gordano, A.;Clarizia, G.;Tocci, E.;Drioli, E.
    • Proceedings of the Membrane Society of Korea Conference / 1999.07a / pp.39-42 / 1999
  • Perfluoropolymers offer the ultimate resistance to hostile chemical environments and high service temperatures, attributed to the presence of fluorine in the polymer backbone, i.e. to the high bond energy of the C-F and C-C bonds of fluorocarbons. Copolymers of Tetrafluoroethylene (TFE) and 2,2,4-Trifluoro-5-Trifluoromethoxy-1,3-Dioxole (TTD), commercially known as HYFLON AD, are amorphous perfluoropolymers with a glass transition temperature (Tg) higher than room temperature and a thermal decomposition temperature exceeding $400^{\circ}C$. These polymer systems are highly soluble in fluorinated solvents, with low solution viscosities. This property allows the preparation of self-supported and composite membranes with the desired membrane thickness. Symmetric and asymmetric perfluoropolymer membranes made with HYFLON AD have been prepared and evaluated. Porous and non-porous symmetric membranes have been obtained by solvent evaporation under various processing conditions. Asymmetric membranes have been prepared by the wet phase inversion method. Contact angles to distilled water have been measured; Figure 1 compares the experimental results with those of other commercial membranes. Contact angles of about $120^{\circ}$ for our amorphous perfluoropolymer membranes demonstrate their highly hydrophobic character. Contact angles to hexadecane have also been measured to evaluate the organophobic character; results are reported in Figure 2. The observed strong organophobicity leads to excellent fouling resistance and inertness. Porous membranes with pore sizes between 30 and 80 nanometers have shown no permeation of water at pressures as high as 10 bars, but high permeation of gases such as O2, N2 and CO2 with no selectivity. Considering the porous structure of the membranes, this behavior was expected. In view of the above properties, useful applications in the field of gas-liquid separations are envisaged for these membranes. A particularly promising application is in the field of membrane contactors, equipment in which membranes are used to improve mass transfer coefficients with respect to traditional extraction and absorption processes. Gas permeation properties have been evaluated for asymmetric membranes and composite symmetric ones. Experimental permselectivity values for various single gases, obtained at different pressure differences, are reported in Tables 1, 2 and 3. The experimental data have been compared with literature data obtained with membranes made from different amorphous perfluoropolymer systems, such as copolymers of Perfluoro-2,2-dimethyl-1,3-dioxole (PDD) and Tetrafluoroethylene, commercialized by the Du Pont Company under the trade name Teflon AF. An interesting linear relationship between permeability and the glass transition temperature of the polymer constituting the membrane has been observed (a fitting sketch follows this abstract). The results are discussed in terms of polymer chain structure, which affects the presence of voids at the molecular scale and their size distribution. Molecular Dynamics studies are in progress to support the understanding of these results. A modified Theodorou-Suter method provided by the Amorphous Cell module of InsightII/Discover was used to determine the chain packing; a completely amorphous polymer box of about 3.5 nm was considered. Last but not least, the use of amorphous perfluoropolymer membranes appears to be ideal when separation processes have to be performed in hostile environments, i.e. high temperatures and aggressive non-aqueous media such as chemicals and solvents. In these cases Hyflon AD membranes can exploit the outstanding resistance of perfluoropolymers.
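
The abstract reports a linear relationship between gas permeability and the glass transition temperature of the membrane polymer. A minimal sketch of such a fit; the (Tg, permeability) pairs below are hypothetical placeholders, not the paper's measurements:

```python
import numpy as np

# Hypothetical (Tg [deg C], O2 permeability [Barrer]) pairs for amorphous
# perfluoropolymers; placeholder values only, not the reported data.
tg = np.array([95.0, 125.0, 160.0, 240.0])
perm = np.array([60.0, 130.0, 340.0, 990.0])

# Least-squares line perm = a * tg + b, as suggested by the reported linear trend.
a, b = np.polyfit(tg, perm, 1)
r = np.corrcoef(tg, perm)[0, 1]
print(f"slope = {a:.2f} Barrer/degC, intercept = {b:.1f}, r = {r:.3f}")
```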

Game Theoretic Optimization of Investment Portfolio Considering the Performance of Information Security Countermeasure

  • Lee, Sang-Hoon;Kim, Tae-Sung
    • Journal of Intelligence and Information Systems / v.26 no.3 / pp.37-50 / 2020
  • Information security has become an important issue around the world. Various information and communication technologies, such as the Internet of Things, big data, cloud computing, and artificial intelligence, are developing, and the need for information security is increasing. Although the necessity of information security is expanding with the development of information and communication technology, interest in information security investment remains insufficient. In general, measuring the effect of information security investment is difficult, so appropriate investment is not being practiced, and organizations are decreasing their information security investment. In addition, since the types and specifications of information security measures are diverse, it is difficult to compare and evaluate information security countermeasures objectively, and there is a lack of decision-making methods for information security investment. To develop the organization, policies and decisions related to information security are essential, and measuring the effect of information security investment is necessary. Therefore, this study proposes a method of constructing an investment portfolio for information security measures using game theory and derives an optimal defence probability. Using a two-person game model, the information security manager and the attacker are taken as the players, and the information security countermeasures and information security threats are taken as their respective strategies. A zero-sum game, in which the sum of the players' payoffs is zero, is assumed, and we derive a solution of a mixed strategy game in which a strategy is selected according to a probability distribution over strategies. In the real world, various types of information security threats exist, so multiple information security measures should be considered to maintain an appropriate information security level. We assume that the defence ratio of the information security countermeasures is known, and we derive the optimal solution of the mixed strategy game using linear programming (a minimal sketch of this computation follows the abstract). The contributions of this study are as follows. First, we conduct the analysis using real performance data of information security measures. Information security managers can use the methodology suggested in this study to make practical decisions when establishing an investment portfolio of information security countermeasures. Second, the investment weight of each information security countermeasure is derived. Since we derive the weight of each measure, not just whether it has been invested in, it is easy to construct an information security investment portfolio when investment decisions must consider a number of countermeasures. Finally, it is possible to find the optimal defence probability after constructing the investment portfolio. Information security managers can measure the specific investment effect by selecting the countermeasures that fit the organization's information security budget. Numerical examples are also presented and the computational results are analyzed. Based on the performance of three information security countermeasures, Firewall, IPS, and Antivirus, data related to information security measures are collected to construct the portfolio. The defence ratio of the countermeasures is generated using a uniform distribution, and the coverage of performance is derived from the report of each countermeasure. In the numerical examples, the investment weights of Firewall, IPS, and Antivirus are optimized to 60.74%, 39.26%, and 0%, respectively, and the defence probability of the organization is maximized at 83.87%. When the methodology and examples of this study are used in practice, information security managers can consider various types of information security measures, and the appropriate investment level of each measure can be reflected in the organization's budget.
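
Where the abstract derives the optimal mixed strategy by linear programming, the defender's problem can be stated as: choose investment weights $x_i$ that maximize the guaranteed defence probability $v$ subject to $\sum_i D_{ij}x_i \ge v$ for every threat $j$, $\sum_i x_i = 1$, and $x_i \ge 0$, where $D_{ij}$ is the defence ratio of countermeasure $i$ against threat $j$. A minimal sketch with scipy.optimize.linprog, assuming a hypothetical defence-ratio matrix (the paper's actual data are not reproduced):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical defence-ratio matrix D[i, j]: probability that countermeasure i
# blocks threat j. Values are assumptions for illustration only.
D = np.array([
    [0.90, 0.40, 0.20],   # Firewall
    [0.50, 0.85, 0.30],   # IPS
    [0.10, 0.30, 0.95],   # Antivirus
])
n_measures, n_threats = D.shape

# Decision variables: x_1..x_n (investment weights) and v (guaranteed defence
# probability). Maximize v  ->  minimize -v.
c = np.zeros(n_measures + 1)
c[-1] = -1.0

# For every threat j:  v - sum_i D[i, j] * x_i <= 0
A_ub = np.hstack([-D.T, np.ones((n_threats, 1))])
b_ub = np.zeros(n_threats)

# Investment weights sum to one (v excluded from the equality).
A_eq = np.ones((1, n_measures + 1))
A_eq[0, -1] = 0.0
b_eq = np.array([1.0])

bounds = [(0, None)] * n_measures + [(None, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)

print("investment weights:", res.x[:-1])
print("optimal defence probability:", res.x[-1])
```

Substituting the paper's uniformly generated defence ratios into this formulation would yield weights analogous to the reported Firewall/IPS/Antivirus split.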

A Study of a Non-commercial 3D Planning System, Plunc, for Clinical Applicability

  • Cho, Byung-Chul;Oh, Do-Hoon;Bae, Hoon-Sik
    • Radiation Oncology Journal / v.16 no.1 / pp.71-79 / 1998
  • Purpose : The objective of this study is to introduce our installation of a non-commercial 3D planning system, Plunc, and to confirm its clinical applicability in various treatment situations. Materials and Methods : We obtained the source code of Plunc, offered by the University of North Carolina, and installed it on a Pentium Pro 200MHz PC (128MB RAM, Millennium VGA) running the Linux operating system. To examine the accuracy of dose distributions calculated by Plunc, we input beam data of the 6MV photon beam of our linear accelerator (Siemens MXE 6740), including tissue-maximum ratio, scatter-maximum ratio, attenuation coefficients, and shapes of wedge filters. We then compared the dose distributions calculated by Plunc (percent depth dose, PDD; dose profiles with and without wedge filters; oblique incident beam; and dose distributions under air-gap) with measured values (a percent-difference sketch follows this abstract). Results : Plunc operated in almost real time, except for spending about 10 seconds on full-volume dose distribution and dose-volume histogram (DVH) calculations on the PC described above. Compared with measurements for irradiations at 90-cm SSD and 10-cm depth isocenter, the inaccuracies of the PDD curves calculated by Plunc did not exceed $1\%$ except in the buildup region. For dose profiles with and without wedge filters, the calculated values were accurate within $2\%$ except in the low-dose region outside the irradiated field, where Plunc showed a $5\%$ dose reduction. For the oblique incident beam, there was good agreement except in the low-dose region below $30\%$ of the isocenter dose. In the case of dose distributions under air-gap, there were errors of $5\%$ in the central-axis dose. Conclusion : By comparing photon dose calculations using Plunc with measurements, we confirmed that Plunc shows acceptable accuracies of about $2-5\%$ in typical treatment situations, comparable to commercial planning systems using correction-based algorithms. Plunc does not yet have a function for electron beam planning. However, it is possible to implement electron dose calculation modules, or more accurate photon dose calculations, in the Plunc system. Plunc is shown to be useful for overcoming many limitations of 2D planning systems in clinics where a commercial 3D planning system is not available.
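
The accuracy figures above come from point-by-point comparison of calculated and measured dose curves. A minimal sketch of such a check on hypothetical PDD values sampled at common depths (placeholders, not the measured beam data):

```python
import numpy as np

# Hypothetical PDD values (% of dose maximum) at common depths [cm];
# placeholder numbers, not the measured Siemens MXE 6740 data.
depth = np.array([1.5, 5.0, 10.0, 15.0, 20.0])
measured = np.array([100.0, 86.5, 67.0, 52.0, 40.5])
calculated = np.array([100.0, 86.9, 67.5, 51.6, 40.1])

# Percent difference relative to the measured value at each depth.
pct_diff = 100.0 * (calculated - measured) / measured
for d, e in zip(depth, pct_diff):
    print(f"depth {d:5.1f} cm: {e:+.2f}%")
print(f"max |error|: {np.max(np.abs(pct_diff)):.2f}%")
```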

A Study of Traffic Incident Flow Characteristics on Korean Highway Using Multi-Regime

  • Lee, Seon-Ha;Kang, Hee-Chan
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.4 no.1 s.6 / pp.43-56 / 2005
  • This research examined a time series analysis of hourly traffic information (occupancy, traffic flow, and speed), statistical models of the surveyed data on the traffic fundamental diagram, and the propagation of traffic jams using a multi-regime division of the traffic flow. Based on data detected on the Cheonan-Nonsan highway, events in which road capacity drops dramatically, such as traffic accidents, can be estimated from the change of occupancy immediately after the accident. For congestion-like events, the change in occupancy and mean speed is gentle, so detection by time series analysis of a single traffic index is slow and inaccurate. In stable flow, the relationship between occupancy and flow is linear and shows very high reliability (a two-regime fitting sketch follows this abstract). In contrast, platooning, indicated by a wide deviation around drivers' desired speeds, is difficult to express with a statistical model of the speed-occupancy relationship; in this case the speed drops sharply at 6~8% occupancy. In unstable flow, it is difficult to adopt a single statistical model, because the formation-clearance process of a traffic jam must be analyzed in separate parts. When the formation-clearance process is divided into two regimes, the flow at the time of an accident transfers to a stopped flow and occupancy increases dramatically; as the flow recovers from a slowed flow to free flow, the occupancy that had increased dramatically decreases gradually and the traffic flow then increases, according to the multi-regime time series analysis. Under recurrent congestion, the traffic flow transfers from an impeded free flow to a congested flow and then to a jammed flow, which is more complicated than the accident case, and the gap in traffic volume between traffic conditions at the same occupancy is large. This research demonstrates the need for a multi-regime division when analyzing traffic flow; future work should provide a quantitative division of, and models for, each traffic regime.
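
The stable-flow claim above is that flow is linear in occupancy up to a regime boundary, beyond which the relationship changes. A minimal sketch of a two-regime fit, assuming a hypothetical breakpoint and synthetic detector data (not the Cheonan-Nonsan measurements):

```python
import numpy as np

# Synthetic detector data: occupancy [%] and flow [veh/h]; placeholders only.
occ = np.array([2, 4, 6, 8, 10, 14, 18, 22, 26, 30], dtype=float)
flow = np.array([300, 620, 930, 1210, 1450, 1500, 1380, 1200, 990, 760], dtype=float)

breakpoint_occ = 10.0  # assumed regime boundary near the reported 6~8% speed drop

free = occ <= breakpoint_occ
a1, b1 = np.polyfit(occ[free], flow[free], 1)      # free-flow regime: linear
a2, b2 = np.polyfit(occ[~free], flow[~free], 1)    # congested regime: linear

print(f"free-flow regime:  flow = {a1:.1f}*occ + {b1:.1f}")
print(f"congested regime:  flow = {a2:.1f}*occ + {b2:.1f}")
```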

A Study on Developing a VKOSPI Forecasting Model via GARCH Class Models for Intelligent Volatility Trading Systems

  • Kim, Sun-Woong
    • Journal of Intelligence and Information Systems / v.16 no.2 / pp.19-32 / 2010
  • Volatility plays a central role in both academic and practical applications, especially in pricing financial derivative products and trading volatility strategies. This study presents a novel mechanism based on generalized autoregressive conditional heteroskedasticity (GARCH) models that is able to enhance the performance of intelligent volatility trading systems by predicting Korean stock market volatility more accurately. In particular, we embedded the concept of volatility asymmetry, documented widely in the literature, into our model. The newly developed Korean stock market volatility index of the KOSPI 200, the VKOSPI, is used as a volatility proxy. It is the price of a linear portfolio of KOSPI 200 index options and measures the effect of the expectations of dealers and option traders on stock market volatility over 30 calendar days. The KOSPI 200 index options market started in 1997 and has become the most actively traded market in the world; its trading volume of more than 10 million contracts a day is the highest of all stock index option markets. Therefore, analyzing the VKOSPI has great importance in understanding the volatility inherent in option prices and can offer trading ideas for futures and option dealers. Use of the VKOSPI as a volatility proxy avoids the statistical estimation problems associated with other measures of volatility, since the VKOSPI is the model-free expected volatility of market participants, calculated directly from transacted option prices. This study estimates symmetric and asymmetric GARCH models for the KOSPI 200 index from January 2003 to December 2006 by the maximum likelihood procedure. The asymmetric GARCH models include the GJR-GARCH model of Glosten, Jagannathan and Runkle, the exponential GARCH model of Nelson, and the power autoregressive conditional heteroskedasticity (ARCH) model of Ding, Granger and Engle; the symmetric model is the basic GARCH(1, 1). The next day's forecasted value and change direction of stock market volatility are obtained by recursive GARCH specifications from January 2007 to December 2009 and are compared with the VKOSPI (a fitting sketch follows this abstract). Empirical results indicate that negative unanticipated returns increase volatility more than positive return shocks of equal magnitude decrease it, indicating the existence of volatility asymmetry in the Korean stock market. The point value and change direction of the next day's VKOSPI are estimated and forecasted by the GARCH models. A volatility trading system is developed using the forecasted change direction of the VKOSPI: if the VKOSPI is expected to rise, a long straddle or strangle position is established; a short straddle or strangle position is taken if the VKOSPI is expected to fall. Total profit is calculated as the cumulative sum of the VKOSPI percentage changes: if the forecasted direction is correct, the absolute value of the VKOSPI percentage change is added to the trading profit, and it is subtracted otherwise. For the in-sample period, the power ARCH model fits best on the statistical metric of Mean Squared Prediction Error (MSPE), and the exponential GARCH model shows the highest Mean Correct Prediction (MCP). The power ARCH model also fits best for the out-of-sample period and provides the highest probability of predicting the VKOSPI's change direction for the next day. Generally, the power ARCH model shows the best fit for the VKOSPI. All the GARCH models provide trading profits for the volatility trading system, and the exponential GARCH model shows the best performance, an annual profit of 197.56%, during the in-sample period. The GARCH models also produce trading profits during the out-of-sample period, except for the exponential GARCH model; in that period the power ARCH model shows the largest annual trading profit, 38%. The volatility clustering and asymmetry found in this research are a reflection of volatility non-linearity. This further suggests that combining the asymmetric GARCH models with artificial neural networks could significantly enhance the performance of the suggested volatility trading system, since artificial neural networks have been shown to model nonlinear relationships effectively.
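
Where the abstract estimates symmetric and asymmetric GARCH specifications by maximum likelihood and produces one-step-ahead forecasts, the Python arch package offers equivalent estimators. A minimal sketch on simulated returns (the KOSPI 200/VKOSPI series are not reproduced; the GARCH, GJR-GARCH, and EGARCH names follow the arch API):

```python
import numpy as np
from arch import arch_model

# Simulated daily percent returns standing in for KOSPI 200 (placeholder data).
rng = np.random.default_rng(0)
returns = rng.standard_t(df=8, size=1000)

# Symmetric GARCH(1,1) and two asymmetric specifications, fit by maximum likelihood.
specs = {
    "GARCH(1,1)": dict(vol="GARCH", p=1, q=1),
    "GJR-GARCH": dict(vol="GARCH", p=1, o=1, q=1),
    "EGARCH": dict(vol="EGARCH", p=1, o=1, q=1),
}
for name, kw in specs.items():
    res = arch_model(returns, **kw).fit(disp="off")
    # One-step-ahead conditional variance forecast; its square root is the
    # next-day volatility forecast used to pick a long or short straddle.
    fvar = res.forecast(horizon=1).variance.iloc[-1, 0]
    print(f"{name}: next-day volatility forecast = {np.sqrt(fvar):.3f}")
```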

A study on the prediction of Korean NPL market return

  • Lee, Hyeon Su;Jeong, Seung Hwan;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.123-139 / 2019
  • The Korean NPL market was formed by the government and foreign capital shortly after the 1997 IMF crisis. However, the market has a short history, as bad debt started to increase again only after the global financial crisis of 2009, due to the real economic recession. NPL has become a major investment vehicle in recent years, as domestic capital market investors began to enter the NPL market in earnest. Although the domestic NPL market has received considerable attention due to its recent overheating, research on it has been scarce because the history of capital market investment in the domestic NPL market is short. In addition, decision-making through more scientific and systematic analysis is required due to declining profitability and price fluctuations driven by the real estate market. In this study, we propose a prediction model that can determine whether the benchmark yield is achieved, using NPL market data in line with market demand. To build the model, we used Korean NPL data from December 2013 to December 2017, about four years, comprising 2,291 items in total. As independent variables, only those related to the dependent variable were selected from the 11 variables describing the characteristics of the real estate; one-to-one t-tests, stepwise logistic regression, and decision trees were used for variable selection, yielding seven independent variables: purchase year, SPC (Special Purpose Company), municipality, appraisal value, purchase cost, OPB (Outstanding Principal Balance), and HP (Holding Period). The dependent variable is a binary variable indicating whether the benchmark rate of return is reached. This is because models predicting binary variables are more accurate than models predicting continuous variables, and this accuracy is directly related to the effectiveness of the model. In addition, for a special purpose company, whether or not to purchase the property is the main concern, so knowing whether a certain level of return will be achieved is enough to make a decision. For the dependent variable, we constructed and compared predictive models at different threshold values to ascertain whether 12%, the standard rate of return used in the industry, is a meaningful reference value. As a result, the average hit ratio of the predictive model using the dependent variable defined by the 12% standard rate of return was the best, at 64.60%. To propose an optimal prediction model based on the determined dependent variable and the seven independent variables, we constructed and compared prediction models using five methodologies: discriminant analysis, logistic regression, decision tree, artificial neural network, and a genetic algorithm linear model. To do this, 10 sets of training and testing data were extracted using the 10-fold validation method (a sketch of this procedure follows the abstract). After building the models with these data, the hit ratio of each set was averaged and the performance compared. The average hit ratios of the prediction models built with discriminant analysis, logistic regression, decision tree, artificial neural network, and the genetic algorithm linear model were 64.40%, 65.12%, 63.54%, 67.40%, and 60.51%, respectively. This confirmed that the model using the artificial neural network is the best. This study shows that it is effective to use the seven independent variables and an artificial neural network prediction model in the future NPL market. The proposed model predicts in advance whether the 12% return on new items will be achieved, which will help special purpose companies make investment decisions. Furthermore, we anticipate that the NPL market will become more liquid as transactions proceed at appropriate prices.
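
A minimal sketch of the 10-fold comparison described in the abstract, using scikit-learn classifiers as stand-ins for four of the five methodologies (the genetic algorithm linear model has no direct scikit-learn equivalent and is omitted; the data are synthetic placeholders, not the NPL data set):

```python
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the 2,291-item NPL data: 7 features, binary target
# (benchmark return reached or not). Placeholder data only.
rng = np.random.default_rng(42)
X = rng.normal(size=(2291, 7))
y = (X @ rng.normal(size=7) + rng.normal(size=2291) > 0).astype(int)

models = {
    "discriminant analysis": LinearDiscriminantAnalysis(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "neural network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
}

# 10-fold validation; the mean accuracy plays the role of the averaged hit ratio.
cv = KFold(n_splits=10, shuffle=True, random_state=0)
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: mean hit ratio = {scores.mean():.4f}")
```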

The Effects of the Computer Aided Innovation Capabilities on the R&D Capabilities: Focusing on the SMEs of Korea

  • Shim, Jae Eok;Byeon, Moo Jang;Moon, Hyo Gon;Oh, Jay In
    • Asia pacific journal of information systems / v.23 no.3 / pp.25-53 / 2013
  • This study empirically analyzes the effect of Computer Aided Innovation (CAI) on the improvement of R&D capabilities. A survey was distributed by e-mail and Google Docs, targeting the CTOs of 235 SMEs; 142 surveys were returned (a return rate of 60.4%). Responses from 119 companies (83.8%), the effective sample remaining after excluding non-responses, insincere responses, estimated values, etc., were used for the statistical analysis. In terms of sample traits, companies with less than 50 billion KRW in sales make up 76.5% of the surveyed companies, and companies with fewer than 300 employees make up 83.2%. In terms of business type, companies that work as partners of big companies ('partners with big companies' hereunder) make up 68.1%, and SMEs running their own business ('independent SMEs') make up 31.9%. The status of IT system adoption by business type, partners with big companies versus independent SMEs, is: ERP 18.5% versus 34.5%; QMS 11.8% versus 9.2%; PLM (Product Life-cycle Management) 6.7% versus 2.5%; and 3D CAD 47.1% versus 21%. IT system adoption and application by independent SMEs appear very weak compared with the partners of big companies. The independent variables of this study are the CAI capability factors: IT infra and IT utilization. The dependent variables are the R&D capability factors: organization capability, process capability, HR capability, technology-accumulating capability, and internal/external collaboration capability. The highest average value among the variables was 4.24, for organization capability 2; the lowest was 3.01, for the IT infra item on whether users can easily access and use data and information from other areas when required during new product development. The inferior IT infra environment of SMEs in general seems to be reflected in CAI itself. To check the validity of the measured variables, factor analysis was performed: seven factors with eigenvalues over 1.0 were extracted from the dependent and independent variables, together explaining 71.167% of the total variance. The reliability of each item was then checked with Cronbach's alpha coefficient; all measured factors, with coefficients of at least 0.611, appear reliable (a computation sketch follows this abstract). Next, correlation analysis was performed to examine the relationships between variables. The R&D capability factors arranged as dependent variables (organization capability, process capability, HR capability, technology-accumulating capability, and internal/external collaboration capability) show significant correlations at the 99% confidence level with all the independent variables of IT infra and IT utilization. In addition, the correlation coefficient between each pair of factors is less than 0.8, which supports the validity of the constructs; the pair with the highest coefficient, 0.628, was IT utilization and technology-accumulating capability. A regression model was used, under the hypothesis of a linear relation between the independent and dependent variables, to identify the impact of the CAI capability factors on R&D.
The explanatory power of IT infra among the CAI capability factors for organization capability, process capability, human resources capability, technology-accumulating capability, and collaboration capability is 10.3%, 7%, 11.9%, 30.9%, and 10.5%, respectively. IT utilization likewise shows generally low explanatory power: 12.4%, 5.9%, 11.1%, 38.9%, and 13.4% for the same five factors, respectively. However, both independent variables show relatively very high explanatory power for technology-accumulating capability. The regression formulas comprising the independent and dependent variables are all significant (P<0.005), so the fit of the regression model appears good. In the tests on the dependent and independent variables, all 10 hypothesized effects appear significant in the regression model coefficients (P<0.01). As a result of the linear regression analysis between the two independent variables identified by the influence factor analysis and R&D capability, IT infra and IT utilization, the CAI capability factors, have positive correlations with the dependent R&D capability factors of organization capability, process capability, human resources capability, technology-accumulating capability, and internal/external collaboration capability, and were identified as significant factors affecting R&D capability. However, when moderating variables are considered, a big gap is found compared with the full sample. First, in the case of partner companies of big companies, IT infra as a CAI capability has positive correlations with organization capability, process capability, human resources capability, and technology capability among the R&D capabilities, but collaboration capability appeared insignificant. IT utilization, the other CAI capability factor, has positive relations with organization capability, process capability, human resources capability, and internal/external collaboration capability, just as in the full sample. Next, analyzing the independent SMEs as a moderating variable produced very different results from those of the full sample or the partner companies: all IT infra effects except on technology-accumulating capability were rejected, and all IT utilization effects except on technology-accumulating capability and collaboration capability were rejected. Summarizing across these moderating variables, the following results were drawn. First, for big companies and their partner companies, IT infra and IT utilization positively affect the improvement of R&D capabilities; this is because most big companies encourage innovation by requiring IT utilization and IT infra above a certain level from their partner companies. Second, in all companies, IT infra and IT utilization as CAI capabilities positively affect at least technology-accumulating capability among the R&D capability factors. The explanatory power for most factors is low, at around 10%, but for technology-accumulating capability it is rather high, around 25.6% to 38.4%; CAI capability was found to contribute strongly to technology-accumulating capability.
Companies should not regard IT infra and IT utilization as simple product-development tools for the R&D department. Rather, they should use them as a management innovation strategy tool that drives company-wide management innovation centered on new product development, beyond the improvement of technology-accumulating capability within R&D. This suggests a way to improve technology-accumulating capability in the R&D department and the dynamic capability needed to acquire sustainable competitive advantage.
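
A minimal sketch of the Cronbach's alpha reliability check mentioned above, using the standard formula $\alpha = \frac{k}{k-1}(1 - \sum_i \sigma^2_i / \sigma^2_{total})$ on hypothetical Likert-scale responses (not the survey data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) matrix of scale scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses for a 4-item construct,
# with 119 respondents to mirror the effective sample size.
rng = np.random.default_rng(7)
latent = rng.normal(size=(119, 1))
responses = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(119, 4))), 1, 5)
print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")
```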

Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model

  • Kim, Seongjin;Ryoo, Eunchung;Jung, Min Kyu;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.185-202 / 2012
  • Since the value of information has been realized in the information society, the usage and collection of information have become important. Just as an artistic painting can be described in thousands of words, a facial expression carries a wealth of information. Following this idea, there have recently been a number of attempts to provide customers and companies with intelligent services that perceive human emotions through facial expressions. For example, MIT Media Lab, a leading organization in this research area, has developed a human emotion prediction model and has applied its studies to commercial business. In the academic area, conventional methods such as Multiple Regression Analysis (MRA) and Artificial Neural Networks (ANN) have been applied to predict human emotion in prior studies. However, MRA is generally criticized for its low prediction accuracy; this is inevitable, since MRA can only explain a linear relationship between the dependent variable and the independent variables. To mitigate the limitations of MRA, some studies, like Jung and Kim (2012), have used ANN as the alternative and reported that ANN generated more accurate predictions than statistical methods like MRA. However, ANN has also been criticized for overfitting and the difficulty of network design (e.g. setting the number of layers and the number of nodes in the hidden layers). Against this background, we propose a novel model using Support Vector Regression (SVR) to increase prediction accuracy. SVR is an extended version of the Support Vector Machine (SVM) designed to solve regression problems. The model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data close (within a threshold ${\varepsilon}$) to the model prediction. Using SVR, we tried to build a model that can measure the level of arousal and valence from facial features. To validate the usefulness of the proposed model, we collected data on facial reactions to appropriate visual stimulating contents and extracted features from the data. Next, preprocessing steps were taken to choose statistically significant variables. In total, 297 cases were used for the experiment. As comparative models, we also applied MRA and ANN to the same data set. For SVR, we adopted the '${\varepsilon}$-insensitive loss function' and the 'grid search' technique to find the optimal values of parameters like C, d, ${\sigma}^2$, and ${\varepsilon}$ (a sketch follows this abstract). In the case of ANN, we adopted a standard three-layer backpropagation network with a single hidden layer. The learning rate and momentum rate of the ANN were set to 10%, and we used the sigmoid function as the transfer function of the hidden and output nodes. We performed the experiments repeatedly, varying the number of nodes in the hidden layer over n/2, n, 3n/2, and 2n, where n is the number of input variables. The stopping condition for the ANN was set to 50,000 learning events. We used MAE (Mean Absolute Error) as the measure for performance comparison. From the experiment, we found that SVR achieved the highest prediction accuracy for the hold-out data set compared to MRA and ANN. Regardless of the target variable (the level of arousal, or the level of positive/negative valence), SVR showed the best performance for the hold-out data set. ANN also outperformed MRA; however, it showed considerably lower prediction accuracy than SVR for both target variables. The findings of our research are expected to be useful to researchers or practitioners who wish to build models for recognizing human emotions.
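
A minimal sketch of the ${\varepsilon}$-insensitive SVR with grid search described above, using scikit-learn; the RBF kernel, grid values, and synthetic data are assumptions for illustration, not the paper's 297-case facial-feature set:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_absolute_error

# Synthetic stand-in: facial features -> arousal level (placeholder data).
rng = np.random.default_rng(0)
X = rng.normal(size=(297, 10))
y = X @ rng.normal(size=10) + 0.3 * rng.normal(size=297)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Grid search over C, epsilon, and the RBF width gamma (roughly 1/sigma^2).
pipe = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
grid = {
    "svr__C": [0.1, 1, 10, 100],
    "svr__epsilon": [0.01, 0.1, 0.5],
    "svr__gamma": [0.001, 0.01, 0.1],
}
search = GridSearchCV(pipe, grid, cv=5, scoring="neg_mean_absolute_error")
search.fit(X_tr, y_tr)

# MAE on the hold-out set, matching the paper's comparison metric.
print("best params:", search.best_params_)
print("hold-out MAE:", mean_absolute_error(y_te, search.predict(X_te)))
```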

Spatio-temporal Water Quality Variations at Various Streams of Han-River Watershed and Empirical Models of Serial Impoundment Reservoirs

  • Jeon, Hye-Won;Choi, Ji-Woong;An, Kwang-Guk
    • Korean Journal of Ecology and Environment / v.45 no.4 / pp.378-391 / 2012
  • The objective of this study was to determine temporal patterns and longitudinal gradients of water chemistry at eight artificial reservoirs and ten streams within the Han-River watershed, along the main axis from the headwaters to the downstream reaches, during 2009~2010. We also evaluated the chemical relations and variations among the major trophic variables, total nitrogen (TN), total phosphorus (TP), and chlorophyll-a (CHL-a), and determined the effects of the intense summer monsoon and annual precipitation on algal growth using empirical regression models (a regression sketch follows this abstract). Stream water quality in terms of TN, TP, and other parameters degraded toward the downstream reaches and was especially impacted by point sources from wastewater disposal plants near Jungrang Stream. In contrast, summer river runoff and rainwater improved the stream water quality for TP, TN, and ionic contents, measured as conductivity (EC), in the downstream reach. Empirical linear regression models of log-transformed CHL-a against log-transformed TN, TP, and TN : TP mass ratios in five reservoirs indicated that the variation of TP accounted for 33.8% ($R^2$=0.338, p<0.001, slope=0.710) of the variation in CHL-a, while the variation of TN accounted for only 21.4% ($R^2$=0.214, p<0.001). Overall, our study suggests that primary production, estimated as CHL-a, was determined more by ambient phosphorus loading than by nitrogen in the lentic systems of the artificial reservoirs, and that stream water quality in the lotic ecosystems was influenced more by the point-source locations of tributary streams and intense seasonal rainfall than by the presence of artificial dam reservoirs along the main axis of the watershed.
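
A minimal sketch of the log-log empirical regression of CHL-a on TP described above; the sample values are hypothetical placeholders, so the fitted slope and $R^2$ will not match the reported 0.710 and 0.338:

```python
import numpy as np

# Hypothetical reservoir samples: TP [ug/L] and CHL-a [ug/L]; placeholders.
tp = np.array([8.0, 12.0, 20.0, 35.0, 60.0, 90.0])
chl = np.array([1.8, 2.6, 4.1, 6.5, 11.0, 14.5])

# Empirical model: log10(CHL-a) = slope * log10(TP) + intercept.
x, y = np.log10(tp), np.log10(chl)
slope, intercept = np.polyfit(x, y, 1)
r2 = np.corrcoef(x, y)[0, 1] ** 2
print(f"log10(CHL-a) = {slope:.3f} * log10(TP) + {intercept:.3f}, R^2 = {r2:.3f}")
```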

Prediction on the Quality of Total Mixed Ration for Dairy Cows by Near Infrared Reflectance Spectroscopy

  • Ki, Kwang-Seok;Kim, Sang-Bum;Lee, Hyun-June;Yang, Seung-Hak;Lee, Jae-Sik;Jin, Ze-Lin;Kim, Hyeon-Shup;Jeo, Joon-Mo;Koo, Jae-Yeon;Cho, Jong-Ku
    • Journal of The Korean Society of Grassland and Forage Science / v.29 no.3 / pp.253-262 / 2009
  • The present study was conducted to develop a rapid and accurate method for evaluating the chemical composition of total mixed ration (TMR) for dairy cows using near infrared reflectance spectroscopy (NIRS). A total of 253 TMR samples were collected from TMR manufacturers and dairy farms in Korea. Prior to NIR analysis, the TMR samples were dried at $65^{\circ}C$ for 48 hours and then ground to 2 mm size. The samples were scanned at 2 nm intervals over the wavelength range of 400-2500 nm on a FOSS-NIR Systems Model 6500. The values obtained by NIR analysis and conventional chemical methods were compared. In general, the relationship between chemical analysis and NIR analysis was linear: $R^2$ and the standard error of calibration (SEC) were 0.701 (SEC 0.407), 0.965 (SEC 0.315), 0.796 (SEC 0.406), 0.889 (SEC 0.987), 0.894 (SEC 0.311), 0.933 (SEC 0.885), and 0.889 (SEC 1.490) for moisture, crude protein, ether extract, crude fiber, crude ash, acid detergent fiber (ADF), and neutral detergent fiber (NDF), respectively. In addition, the standard error of prediction (SEP) was 0.371, 0.290, 0.321, 0.380, 0.960, 0.859, and 1.446 for moisture, crude protein, ether extract, crude fiber, crude ash, ADF, and NDF, respectively (a sketch of the SEC/SEP computation follows this abstract). These results show that NIR analysis of unknown TMR samples should be relatively accurate. Use of the developed NIR calibration curves can provide fast and reliable data on the chemical composition of TMR. Collection and analysis of more TMR samples will further increase the accuracy and precision of NIR analysis for TMR.
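
A minimal sketch of the calibration statistics used above: SEC is the residual standard error of the calibration fit, and SEP is the bias-corrected standard error on an independent prediction set. The reference/NIR value pairs are hypothetical, not the 253-sample data:

```python
import numpy as np

def sec(reference: np.ndarray, predicted: np.ndarray, n_params: int = 2) -> float:
    """Standard error of calibration; n_params=2 accounts for slope + intercept."""
    resid = reference - predicted
    return float(np.sqrt(np.sum(resid**2) / (len(resid) - n_params)))

def sep(reference: np.ndarray, predicted: np.ndarray) -> float:
    """Bias-corrected standard error of prediction on a validation set."""
    resid = reference - predicted
    bias = resid.mean()
    return float(np.sqrt(np.sum((resid - bias) ** 2) / (len(resid) - 1)))

# Hypothetical crude-protein values (% DM): lab reference vs NIR prediction.
lab = np.array([14.2, 15.8, 16.4, 17.1, 18.0, 18.9, 19.5])
nir = np.array([14.5, 15.6, 16.7, 16.9, 18.3, 18.6, 19.8])

r2 = np.corrcoef(lab, nir)[0, 1] ** 2
print(f"R^2 = {r2:.3f}, SEC = {sec(lab, nir):.3f}, SEP = {sep(lab, nir):.3f}")
```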