• Title/Summary/Keyword: Variable Input

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper presents a solution for handling membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having triangular or trapezoidal shape, or to pre-defined shapes. These kinds of functions can cover a large spectrum of applications with a limited usage of memory, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation on the medium points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions on such points [3,10,14,15]. Such a solution provides a satisfying computational speed, a very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can result: for each of the given fuzzy sets, many elements of the universe of discourse may have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of the membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (so as to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions, and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a memory word is defined by Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values on any element of the universe of discourse, dm(m) is the number of bits of a membership value, and dm(fm) is the number of bits of the word representing the index of a membership function. In our case, then, Length = 3 × (5 + 3) = 24, and the memory dimension is therefore 128 × 24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of every fuzzy set, giving a word dimension of 8 × 5 bits; the dimension of the memory would then have been 128 × 40 bits. Consistently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets; the elements 32, 64, and 96 of the universe of discourse, for example, are memorized in this compact form. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derives from the rule and is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is in this way reduced, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value on each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of such parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space; weight computations are done by a combinatorial network, and therefore the time performance of the system is equivalent to that of the vectorial method; the number of non-null membership values on any element of the universe of discourse is limited, a constraint that is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
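
The sizing argument above is easy to verify mechanically. Below is a minimal Python sketch, using only the figures quoted in the abstract (128-element universe, 8 fuzzy sets, 32 truth levels, at most 3 non-null memberships per element), that recomputes the word length and compares the proposed sparse memorization with full vectorial storage; the variable names are illustrative, not from the paper.

```python
import math

# Worked check of the memory sizing in the abstract.
U = 128          # elements in the universe of discourse -> memory rows
n_sets = 8       # fuzzy sets in the term set
levels = 32      # discretization levels of the membership values
nfm = 3          # max non-null membership values per element (assumption [3,5,6,7,12,13])

dm_m = math.ceil(math.log2(levels))   # bits per membership value: 5
dm_f = math.ceil(math.log2(n_sets))   # bits per fuzzy-set index: 3

word_sparse = nfm * (dm_m + dm_f)     # Length = nfm * (dm(m) + dm(fm)) = 24
word_full = n_sets * dm_m             # vectorial storage: 8 * 5 = 40

print(f"sparse word: {word_sparse} bits, memory: {U * word_sparse} bits")
print(f"full word:   {word_full} bits, memory: {U * word_full} bits")
```

Running it reproduces the 24-bit versus 40-bit word lengths, and hence the 128 × 24 versus 128 × 40 bit memory dimensions, quoted in the abstract.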

A Joint Application of DRASTIC and Numerical Groundwater Flow Model for The Assessment of Groundwater Vulnerability of Buyeo-Eup Area (DRASTIC 모델 및 지하수 수치모사 연계 적용에 의한 부여읍 일대의 지하수 오염 취약성 평가)

  • Lee, Hyun-Ju;Park, Eun-Gyu;Kim, Kang-Joo;Park, Ki-Hoon
    • Journal of Soil and Groundwater Environment
    • /
    • v.13 no.1
    • /
    • pp.77-91
    • /
    • 2008
  • In this study, we developed a technique that jointly applies DRASTIC, the most widely used tool for estimating groundwater vulnerability to aqueous-phase contaminants infiltrating from the surface, and a groundwater flow model to assess groundwater contamination potential. The technique was then applied to the Buyeo-eup area of Buyeo-gun, Chungcheongnam-do, Korea. The depth-to-water thematic input required by the DRASTIC model is known to be the parameter to which the output is most sensitive, yet generally only a few observations at a few times are available. To overcome this practical shortcoming, both steady-state and transient groundwater level distributions were simulated using a finite difference numerical model, MODFLOW. In the application to the assessment of groundwater vulnerability, we found the vulnerability obtained from numerically simulated groundwater levels to be much more practical than that obtained from cokriging. First, the simulation results enable a practitioner to see temporally comprehensive vulnerabilities. Second, the technique incorporates a wide variety of data, such as field-observed hydrogeologic parameters as well as geographic relief. The depth to water generated through geostatistical methods in the conventional approach cannot incorporate temporally variable data, that is, the seasonal variation of the recharge rate. We found that the vulnerabilities from the geostatistical method and from the steady-state groundwater flow simulation show similar patterns. By applying the transient simulation results to the DRASTIC model, we also found that the vulnerability shows sharp seasonal variation due to the change of groundwater recharge; the change is most pronounced between summer, with the highest recharge rate, and winter, with the lowest. Our research indicates that numerical modeling can be a useful tool for temporal as well as spatial interpolation of the depth to water when the number of observations is inadequate for vulnerability assessment through conventional techniques.
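
For readers unfamiliar with DRASTIC, the index is a weighted sum of seven rated parameters (Depth to water, net Recharge, Aquifer media, Soil media, Topography, Impact of the vadose zone, hydraulic Conductivity). The sketch below illustrates how a simulated water table could drive the depth-to-water rating seasonally, which is the crux of the paper's joint application; the weights follow the standard DRASTIC scheme (Aller et al., 1987), but the rating breakpoints and cell values are illustrative assumptions, not the paper's data.

```python
# Minimal sketch of a seasonal DRASTIC index for one grid cell, with the
# depth-to-water rating driven by simulated (e.g., MODFLOW) heads rather
# than a single interpolated observation.
WEIGHTS = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}

def depth_rating(depth_m: float) -> int:
    """Illustrative depth-to-water rating: shallower water -> higher rating."""
    breaks = [(1.5, 10), (4.6, 9), (9.1, 7), (15.2, 5), (22.9, 3), (30.5, 2)]
    for limit, rating in breaks:
        if depth_m < limit:
            return rating
    return 1

def drastic_index(ratings: dict) -> int:
    """Sum of rating * weight over the seven DRASTIC parameters."""
    return sum(ratings[p] * WEIGHTS[p] for p in WEIGHTS)

# Static ratings for one hypothetical cell; only D changes with the water table.
static = {"R": 8, "A": 6, "S": 5, "T": 9, "I": 4, "C": 2}
for season, depth in [("summer (high recharge)", 2.1), ("winter (low recharge)", 7.8)]:
    idx = drastic_index({**static, "D": depth_rating(depth)})
    print(f"{season}: depth {depth} m -> DRASTIC index {idx}")
```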

Predicting Regional Soybean Yield using Crop Growth Simulation Model (작물 생육 모델을 이용한 지역단위 콩 수량 예측)

  • Ban, Ho-Young;Choi, Doug-Hwan;Ahn, Joong-Bae;Lee, Byun-Woo
    • Korean Journal of Remote Sensing
    • /
    • v.33 no.5_2
    • /
    • pp.699-708
    • /
    • 2017
  • The present study aimed to develop an approach for predicting soybean yield using a crop growth simulation model at the regional level, where detailed, site-specific information on cultivation management practices is not easily accessible for model input. The CROPGRO-Soybean model included in the Decision Support System for Agrotechnology Transfer (DSSAT) was employed, and Illinois, a major soybean production region of the USA, was selected as the study region. As a first step toward predicting the soybean yield of Illinois with CROPGRO-Soybean, genetic coefficients representative of each soybean maturity group (MG I~VI) were estimated through sowing-date experiments using domestic and foreign cultivars of diverse maturity at Seoul National University Farm (37.27°N, 126.99°E) for two years. The model using the representative genetic coefficients simulated the developmental stages of cultivars within each maturity group fairly well. Soybean yields for 10 km × 10 km grids in Illinois were simulated from 2000 to 2011 with weather data under 18 simulation conditions comprising the combinations of three maturity groups, three seeding dates, and two irrigation regimes. Planting dates and maturity groups were assigned differently to three sub-regions divided longitudinally. The yearly state yields estimated by averaging all the grid yields simulated under non-irrigated and fully irrigated conditions differed greatly from the statistical yields and did not explain the annual trend of yield increase due to improved cultivation technologies. Using the observed grain yield data of 9 agricultural districts in Illinois and the yields estimated from the simulated grid yields under the 18 simulation conditions, a multiple regression model was constructed to estimate soybean yield at the agricultural district level. A year variable was added to this model to reflect the yearly yield trend. The model explained the yearly and district yield variation fairly well, with a coefficient of determination of R² = 0.61 (n = 108). Yearly state yields, calculated by weighting the model-estimated yearly average district yields by the cultivation area of each district, corresponded very closely (R² = 0.80) to the yearly statistical state yields. Furthermore, the model predicted the state yield fairly well for 2012, a year whose data were not used for model construction and in which severe yield reduction was recorded due to drought.
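
The district-level regression step can be sketched compactly. The following Python snippet, with synthetic placeholder data rather than the authors' DSSAT outputs, shows the structure of such a model: observed district yields regressed on the 18 scenario-simulated yields plus a year term for the technology trend.

```python
import numpy as np

# Hedged sketch of a district-level multiple regression with a year trend.
# The random data below stand in for the paper's 9 districts x 12 years.
rng = np.random.default_rng(0)
n = 108                                  # observations (paper: n = 108)
year = rng.integers(2000, 2012, n)       # calendar year (trend term)
sim = rng.normal(3.0, 0.4, (n, 18))      # simulated yield per scenario, t/ha

X = np.column_stack([np.ones(n), year - 2000, sim])  # intercept + predictors
y = 2.5 + 0.03 * (year - 2000) + 0.2 * sim.mean(axis=1) + rng.normal(0, 0.1, n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)         # ordinary least squares
y_hat = X @ beta
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R^2 = {r2:.2f}")                 # the paper reports R^2 = 0.61
```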

Analysis of Empirical Multiple Linear Regression Models for the Production of PM2.5 Concentrations (PM2.5농도 산출을 위한 경험적 다중선형 모델 분석)

  • Choo, Gyo-Hwang;Lee, Kyu-Tae;Jeong, Myeong-Jae
    • Journal of the Korean earth science society
    • /
    • v.38 no.4
    • /
    • pp.283-292
    • /
    • 2017
  • In this study, empirical models were established to estimate surface-level PM2.5 concentrations over Seoul, Korea, from 1 January 2012 to 31 December 2013. We used six different multiple linear regression models with aerosol optical thickness (AOT) and Ångström exponent (AE) data from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Terra and Aqua satellites, meteorological data, and planetary boundary layer depth (PBLD) data. The results showed that M6 was the best empirical model; it used AOT, AE, relative humidity (RH), wind speed, wind direction, PBLD, and air temperature as input data. Statistical analysis showed that the observed and estimated PM2.5 concentrations from the M6 model had a correlation of R = 0.62 and a root mean square error of RMSE = 10.70 μg m⁻³. In addition, our study shows that the relation strongly depends on the season, owing to the seasonal observation characteristics of AOT, with relatively better correlations in spring (R = 0.66) and autumn (R = 0.75) than in summer and winter (R of about 0.38 and 0.56, respectively). These results reflect cloud contamination in summer and the influence of snow/ice surfaces in winter compared with the other seasons. The empirical multiple linear regression models used in this study therefore showed that the satellite-retrieved AOT was a dominant variable, and additional weather variables will be needed to improve the PM2.5 estimates. The PM2.5 concentrations calculated with the empirical multiple linear regression model will also be useful as a way to enable monitoring of the atmospheric environment from satellite and ground meteorological data.
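
A minimal sketch of such an empirical model is shown below; it regresses PM2.5 on the seven predictors the abstract lists for M6, using ordinary least squares on synthetic stand-in data (the paper's Seoul collocations are not reproduced here), so the coefficients and scores are illustrative only.

```python
import numpy as np

# Hedged sketch of an M6-style empirical model: PM2.5 ~ AOT + AE + RH +
# wind speed + wind direction + PBL depth + air temperature.
rng = np.random.default_rng(1)
n = 500
aot = rng.uniform(0.05, 1.2, n)          # MODIS aerosol optical thickness
ae = rng.uniform(0.5, 1.8, n)            # Angstrom exponent
rh = rng.uniform(20, 95, n)              # relative humidity, %
ws = rng.uniform(0.5, 8.0, n)            # wind speed, m/s
wd = rng.uniform(0, 360, n)              # wind direction, deg
pbl = rng.uniform(200, 2000, n)          # boundary layer depth, m
temp = rng.uniform(-10, 32, n)           # air temperature, deg C

X = np.column_stack([np.ones(n), aot, ae, rh, ws, wd, pbl, temp])
pm25 = 30 * aot + 5 * ae + 0.1 * rh - 1.5 * ws + rng.normal(0, 8, n)  # toy truth

beta, *_ = np.linalg.lstsq(X, pm25, rcond=None)
pred = X @ beta
r = np.corrcoef(pm25, pred)[0, 1]
rmse = np.sqrt(np.mean((pm25 - pred) ** 2))
print(f"R = {r:.2f}, RMSE = {rmse:.2f} ug m^-3")  # paper: R = 0.62, RMSE = 10.70
```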

A Scalable and Modular Approach to Understanding of Real-time Software: An Architecture-based Software Understanding (ARSU) and the Software Re/reverse-engineering Environment (SRE) (실시간 소프트웨어의 조절적·단위적 이해 방법 : ARSU(Architecture-based Software Understanding)와 SRE(Software Re/reverse-engineering Environment))

  • Lee, Moon-Kun
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.12
    • /
    • pp.3159-3174
    • /
    • 1997
  • This paper reports research to develop a methodology and a tool for understanding very large and complex real-time software. The methodology and the tool, mostly developed by the author, are called Architecture-based Real-time Software Understanding (ARSU) and the Software Re/reverse-engineering Environment (SRE), respectively. Due to size and complexity, such software is commonly very hard to understand during the reengineering process. This research facilitates scalable re/reverse-engineering of real-time software based on its architecture, in three-dimensional perspectives: structural, functional, and behavioral views. Firstly, the structural view reveals the overall architecture, the specification (outline), and the algorithm (detail) views of the software, based on a hierarchically organized parent-child relationship. The basic building block of the architecture is the software unit (SWU), generated by user-defined criteria. The architecture facilitates top-down or bottom-up navigation of the software. It captures the specification and algorithm views at different levels of abstraction, and it also shows the functional and behavioral information at these levels. Secondly, the functional view includes graphs of data/control flow, input/output, definition/use, variable/reference, etc. Each feature of the view represents a different kind of functionality of the software. Thirdly, the behavioral view includes state diagrams, interleaved event lists, etc., showing the dynamic properties of the software at runtime. Besides these views, there are a number of other documents: capabilities, interfaces, comments, code, etc. One of the most powerful characteristics of this approach is the capability of abstracting and exploding this dimensional information in the architecture through navigation. These capabilities establish the foundation for scalable and modular understanding of the software, and the approach allows engineers to extract reusable components from the software during the reengineering process.
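
As a rough illustration of the structure described above (not ARSU's actual schema), a software unit can be modeled as a node carrying the three views plus parent-child links that support top-down navigation:

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hedged sketch of an SWU hierarchy; all field names are illustrative
# assumptions, not the tool's real data model.
@dataclass
class SWU:
    name: str
    structural: Dict[str, str] = field(default_factory=dict)   # spec/algorithm text
    functional: Dict[str, list] = field(default_factory=dict)  # data/control-flow graphs
    behavioral: Dict[str, list] = field(default_factory=dict)  # state diagrams, event lists
    children: List["SWU"] = field(default_factory=list)

    def navigate(self, depth: int = 0) -> None:
        """Top-down traversal: print the architecture at increasing detail."""
        print("  " * depth + self.name)
        for child in self.children:
            child.navigate(depth + 1)

root = SWU("controller", children=[SWU("scheduler"), SWU("io_driver")])
root.navigate()
```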

Incremental Ensemble Learning for The Combination of Multiple Models of Locally Weighted Regression Using Genetic Algorithm (유전 알고리즘을 이용한 국소가중회귀의 다중모델 결합을 위한 점진적 앙상블 학습)

  • Kim, Sang Hun;Chung, Byung Hee;Lee, Gun Ho
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.7 no.9
    • /
    • pp.351-360
    • /
    • 2018
  • The LWR (locally weighted regression) model, traditionally a lazy learning model, obtains a prediction for a given input variable, the query point, from a regression fitted over a short interval, giving higher weights to samples closer to the query point. We study an incremental ensemble learning approach for LWR, a form of lazy, memory-based learning. The proposed method sequentially generates and integrates LWR models over time, using a genetic algorithm to obtain the solution at a specific query point. A weakness of existing LWR approaches is that multiple LWR models can be generated depending on the indicator function and the selected data samples, and the quality of the predictions varies with the model; however, no research has addressed the selection or combination of multiple LWR models. In this study, after generating the initial LWR model according to the indicator function and the sample data set, we iterate an evolutionary learning process to obtain a proper indicator function and assess the LWR models on other sample data sets to overcome data set bias. We adopt an eager learning scheme to gradually generate and store LWR models as data are generated for all sections. To obtain a prediction at a specific point in time, an LWR model is generated from newly generated data within a predetermined interval and then combined with the existing LWR models of that section using a genetic algorithm. The proposed method shows better results than selecting among multiple LWR models with a simple average. The results of this study are compared with predictions from multiple regression analysis on real data, such as hourly traffic volumes in a specific area and hourly sales of a highway rest area.
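
The core LWR step the abstract builds on can be sketched briefly. The snippet below shows a single locally weighted prediction with a Gaussian kernel and weighted least squares; the indicator function, the eager model store, and the paper's GA-based model combination are not reproduced.

```python
import numpy as np

# Minimal sketch of one locally weighted regression (LWR) prediction:
# points near the query receive higher Gaussian kernel weights, and a
# local linear model is fit by weighted least squares.
def lwr_predict(x, y, query, bandwidth=1.0):
    w = np.exp(-((x - query) ** 2) / (2 * bandwidth ** 2))  # kernel weights
    X = np.column_stack([np.ones_like(x), x])               # local linear model
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)        # weighted LSQ
    return beta[0] + beta[1] * query

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 200)
y = np.sin(x) + rng.normal(0, 0.1, 200)
print(lwr_predict(x, y, query=3.0))  # close to sin(3.0)
```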

Analysis of Influential Factors in the Relationship between Innovation Efforts Based on the Company's Environment and Company Performance: Focus on Small and Medium-sized ICT Companies (기업의 환경적 특성에 따른 혁신활동과 기업성과간 영향요인 분석: ICT분야 중소기업을 중심으로)

  • Kim, Eun-jung;Roh, Doo-hwan;Park, Ho-young
    • Journal of Technology Innovation
    • /
    • v.25 no.4
    • /
    • pp.107-143
    • /
    • 2017
  • This study aims to understand the impact of internal and external environments and of innovation efforts on a company's performance. First, the relationships and patterns between variables were determined through an exploratory factor analysis. Afterwards, a cluster analysis was conducted to classify companies according to the influential factors summarized in the factor analysis. Finally, structural equation modeling was used to carry out an empirical analysis of the structural relationship between innovation efforts and company performance in the resulting clusters. Seven factors were derived from the exploratory factor analysis of 40 input variables describing the external and internal environments, and four clusters (n = 1,022) were formed based on the seven factors. Empirical analysis of the four clusters using structural equation modeling showed the following: only independent technology development had a positive impact on performance for Cluster 1, which is characterized by sensitivity to the technological/competitive environment and by innovativeness. Only independent technology development and joint research had positive impacts on performance for Cluster 2, which is characterized by sensitivity to the market environment and internal orientation. Joint research and the mediating variable of government support program utilization had positive impacts, while the introduction of technology had a negative impact, on performance for Cluster 3, which is characterized by sensitivity to the competitive environment, innovativeness, and willingness to cooperate with the government and related institutions. Independent technology development, as well as the mediating variables of network utilization and government support program utilization, had positive impacts on performance for Cluster 4, which is characterized by openness and external cooperation.
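
The first two stages of this pipeline map directly onto standard tooling. Below is a hedged sketch with scikit-learn on synthetic responses; the survey data are not reproduced, and the SEM stage is omitted.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans

# Hedged sketch: reduce 40 environment/innovation items to 7 factors,
# then cluster firms on their factor scores (paper: n = 1,022 firms,
# 4 clusters). Synthetic data replace the survey responses.
rng = np.random.default_rng(3)
responses = rng.normal(size=(1022, 40))          # 40 input variables per firm

factors = FactorAnalysis(n_components=7, random_state=0)
scores = factors.fit_transform(responses)        # 7 factor scores per firm

clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
print(np.bincount(clusters))                     # firm counts per cluster
```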

Semi-daily Variations in Populations of the Dinoflagellates Dinophysis acuminata and Oxyphysis oxytoxoides and a Mixotrophic Ciliate Prey Mesodinium rubrum in Masan Bay (마산만에서 와편모류 Dinophysis acuminata 및 Oxyphysis oxytoxoides와 먹이생물 섬모류인 Mesodinium rubrum의 단주기적 개체군 변동)

  • KIM, SUNJU;YOON, JIHAE;KIM, MIRAN;PARK, MYUNG GIL
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.20 no.3
    • /
    • pp.151-157
    • /
    • 2015
  • Recent laboratory studies have documented that the mixotrophic dinoflagellates Dinophysis spp. and the heterotrophic dinoflagellate Oxyphysis oxytoxoides share a common prey, the mixotrophic ciliate Mesodinium rubrum. Nonetheless, very little is known about the population dynamics and species interactions among these protists in natural environments. To investigate the interactions between the dinoflagellate predators and their ciliate prey in the field, we took samples twice a day from 26 July to 28 August 2011 at a fixed station in Masan Bay and analyzed their abundances. During this study, salinity was highly variable, ranging from 5 to 28, due to periodic rainfall inputs at the sampling station. Water temperature averaged 26.5°C until 20 August and was about 21°C thereafter until the end of the sampling period. The ciliate M. rubrum occurred persistently throughout the sampling period, ranging from 13 to 492 cells mL⁻¹. Cell densities of D. acuminata and O. oxytoxoides ranged from undetectable levels up to 19,833 cells L⁻¹ and 100,333 cells L⁻¹, respectively. High abundances of D. acuminata mostly followed blooms of the ciliate M. rubrum, but D. acuminata often did not peak even during heavy blooms of the prey, probably due to its sensitivity to large salinity fluctuations and presumably to overlapping grazing by other mixotrophic dinoflagellates. O. oxytoxoides was detected only when the water temperature was lower than 24°C, indicating that water temperature is an important environmental factor controlling the population dynamics of this dinoflagellate.

Noise-robust electrocardiogram R-peak detection with adaptive filter and variable threshold (적응형 필터와 가변 임계값을 적용하여 잡음에 강인한 심전도 R-피크 검출)

  • Rahman, MD Saifur;Choi, Chul-Hyung;Kim, Si-Kyung;Park, In-Deok;Kim, Young-Pil
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.12
    • /
    • pp.126-134
    • /
    • 2017
  • There have been numerous studies on extracting the R-peak from electrocardiogram (ECG) signals. However, most detection methods are complicated to implement in a real-time portable electrocardiograph and have the disadvantage of requiring a large amount of computation. R-peak detection requires pre-processing and post-processing to remove baseline drift and commercial power-supply noise from the ECG data. An adaptive filter technique is widely used for R-peak detection, but the R-peak cannot be detected when the input is lower than a threshold value. Moreover, noise can lead to an erroneous threshold value and hence to falsely detecting the P-peak or T-peak. We propose a robust R-peak detection algorithm with low complexity and simple computation to solve these problems. The proposed scheme removes the baseline drift in the ECG signal using an adaptive filter, resolving the problems involved in threshold extraction. We also propose a technique to extract the appropriate threshold value automatically using the minimum and maximum values of the filtered ECG signal, and a threshold neighborhood search technique to detect the R-peak from the ECG signal. Experiments confirmed the improved R-peak detection accuracy of the proposed method, and the reduced amount of computation achieved a detection speed suitable for a mobile system. The experimental results show that the heart-rate detection accuracy and sensitivity were very high (about 100%).
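
A minimal sketch of the thresholding idea is given below: the threshold is derived from the minimum and maximum of the (already baseline-corrected) signal, and local maxima above it are kept subject to a refractory gap. The threshold fraction, sampling rate, and surrogate signal are assumptions; the paper's adaptive filter stage and exact neighborhood-search rule are not reproduced.

```python
import numpy as np

# Sketch of min/max-based adaptive thresholding for R-peak detection.
def detect_r_peaks(ecg, fs=360, frac=0.6, refractory_s=0.25):
    lo, hi = ecg.min(), ecg.max()
    thresh = lo + frac * (hi - lo)          # threshold from signal min/max
    min_gap = int(refractory_s * fs)        # minimum samples between beats
    peaks, last = [], -min_gap
    for i in range(1, len(ecg) - 1):
        if ecg[i] >= thresh and ecg[i] >= ecg[i - 1] and ecg[i] > ecg[i + 1]:
            if i - last >= min_gap:         # reject P/T waves and near-threshold noise
                peaks.append(i)
                last = i
    return np.array(peaks)

t = np.arange(0, 5, 1 / 360)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 63     # crude spiky surrogate ECG
print(detect_r_peaks(ecg) / 360)            # detected peak times in seconds
```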

Evaluation of Methane Generation Rate Constant (k) by Estimating Greenhouse Gas Emission in Small Scale Landfill (소규모 매립지에 대한 메탄발생속도상수(k) 산출 및 온실가스 발생량 평가)

  • Lee, Wonjae;Kang, Byungwook;Cho, Byungyeol;Lee, Sangwoo;Yeon, Ikjun
    • Journal of the Korean GEO-environmental Society
    • /
    • v.15 no.5
    • /
    • pp.5-11
    • /
    • 2014
  • In this study, greenhouse gas emissions from two small-scale landfills (the H and Y landfills) were investigated to deduce site-specific methane generation rate constants (k). To this end, data on the physical composition of the waste were collected, and LFG emissions were calculated using the first-order decay (FOD) method suggested in the 2006 IPCC Guidelines (GL). LFG emissions were also measured directly at the active landfill sites. By comparing the results, the methane generation rate constant (k), used as an input variable in the FOD method of the 2006 IPCC GL, was deduced. The physical composition results showed that the ranges of DOC per year at the H (1997~2011) and Y (1994~2011) landfill sites were 13.16%~23.79% (16.52 ± 3.84%) and 7.24%~34.67% (14.56 ± 7.30%), respectively, differing from the value (18%) suggested in the 2006 IPCC GL. The average methane generation rate constants (k) for the two landfill sites were 0.0413 yr⁻¹ and 0.0117 yr⁻¹, which differ greatly from the 2006 IPCC GL default value (k = 0.09). It was confirmed that calculating greenhouse gas emissions using the default value of the 2006 IPCC GL gives excessive output.
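
To make the role of k concrete, the sketch below implements a simplified first-order decay calculation in the spirit of the 2006 IPCC GL: waste deposited in year i contributes methane in year t in proportion to e^(−k(t−i)) − e^(−k(t−i+1)). The deposit masses and the DOCf, MCF, and F values are illustrative assumptions; DOC = 0.18 and the k values are those cited in the abstract.

```python
import math

# Simplified FOD-style methane estimate; parameter values other than k and
# DOC are illustrative defaults, not the paper's site data.
def ch4_generated(deposits, k, year, DOC=0.18, DOCf=0.5, MCF=1.0, F=0.5):
    """deposits: {deposit_year: tonnes of waste}; returns tonnes CH4 in `year`."""
    total = 0.0
    for i, mass in deposits.items():
        if i > year:
            continue
        ddocm = mass * DOC * DOCf * MCF          # decomposable DOC deposited
        age = year - i
        decayed = math.exp(-k * age) - math.exp(-k * (age + 1))
        total += ddocm * decayed * F * 16.0 / 12.0  # C -> CH4 mass conversion
    return total

deposits = {y: 10_000 for y in range(1997, 2012)}   # hypothetical annual intake
for k in (0.09, 0.0413):                            # IPCC default vs deduced k
    print(f"k = {k}: {ch4_generated(deposits, k, 2011):.1f} t CH4 in 2011")
```

Running the two cases shows why the default k = 0.09 yields a noticeably different, typically higher, near-term methane estimate than the deduced site-specific constants, consistent with the abstract's conclusion.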