• Title/Summary/Keyword: Aggregate Function


A Study on Keynes's Employment and Price Theory (케인즈의 고용·물가이론소고)

  • 박일근
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.8 no.12
    • /
    • pp.65-77
    • /
    • 1985
  • The main points of the General Theory are: 1) the mainspring of economic activity is effective demand, which can expand or contract in relation to supply as a result of spontaneous decisions by consumers or government; 2) changes in effective demand produce changes in output and employment in the same direction; 3) with a given productivity of labour, the price level depends on money wage rates, which are inflexible in the downward direction; 4) changes in the money supply affect the economy through the rate of interest; 5) the only automatic mechanism through which the economy can adjust itself to a deficiency of effective demand is the long process by which unemployment reduces wage rates and, consequently, the demand for money and interest rates. The contents summarized above constitute the framework of the General Theory. The neo-classical macro-general-equilibrium theory, reconstructed subsequent to Keynes's criticism, inherits the classical theories of the labour market and the aggregate production function on the supply side; on the demand side, it introduces Keynes's macro-general-equilibrium theory, and it functions through the flexible movement of prices, wages, and interest. Nowadays, Keynes's General Theory is being developed in a new dimension, i.e., the macro-disequilibrium theory, and the adequacy and appropriateness of the theory and its significant contributions to modern economics are being reinterpreted and substantiated.

A Study constructing a Function-Based Records Classification System for Korean Individual Church (한국 개(個)교회기록물의 기능분류 방안)

  • Ma, Won-jun
    • The Korean Journal of Archival Studies
    • /
    • no.10
    • /
    • pp.145-194
    • /
    • 2004
  • Church archives are evidential instruments for remembering church activity and important aggregates of information that have administrative, legal, financial, historical, and faith-related value as the collective memory of the church community. They must therefore be managed, and the mandate for their management is based on the Bible. Western churches, which correctly understand the importance of church records and this mandate, have made multilateral efforts to create and manage church archives systematically. Korean churches, on the other hand, have no records management systems. Records created in individual churches are therefore mostly managed unsystematically, exist as 'backlogs', and are finally destroyed without reasonable formalities. Given these problems, the purpose of this study is to offer a method of records classification and a disposition instrument, recognizing that records management should begin at the time of creation or before it. As concrete devices, I tried to embody a function-based classification method and a disposal schedule. I prefer function-based classification and a function-based disposal schedule to organization-and-function-based classification in order to present a stable classification and disposal schedule, since the defining feature of the modern organization is that it is multilateral, and churches share this aspect. For this study, I applied the DIRKS (Designing and Implementing Recordkeeping Systems) manual provided by the National Archives of Australia and the guidelines in the ICA/IRMT series to construct a theory of function-based classification for individual churches. Through them, it was possible to present a model for preliminary investigation, analysis of business activity, records survey, and disposal scheduling. I took as an example 'Myong Sung Presbyterian Church', which belongs to 'The Presbyterian Church in Korea'. I explain in detail the process and results of codifying the preliminary investigation of 'Myong Sung Presbyterian Church', the analysis of business activity based on it, and the derivation of the function-based classification and disposal schedule from all those steps. In establishing the disposal schedule, I planned a 'General Disposal Schedule' and an 'Agency Disposal Schedule', which categorize the 'general functions' and 'agency functions' of an agency, following DIRKS and ICA/IRMT. To estimate disposal dates, I made a thorough study of the important records categories presented in the 'Constitution of the General Assembly', conducted interviews to learn the importance of tasks, and added examples of disposal schedules from Western church archives. This study is significant in that it attempts to embody 'function-based classification' and a 'disposal schedule' suitable for the individual church, applying DIRKS and ICA/IRMT in the absence of any prior theory or example of a function-based classification and disposal schedule for individual churches. It is also meaningful in presenting a model by which real records can be classified and disposed of according to function in individual churches that have no recognition of, or method for, records management.
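The DIRKS-style function-activity hierarchy and disposal schedule described above can be sketched as a small data structure. Everything below is hypothetical: the function names, activities, and retention periods are invented for illustration and are not taken from the study or from 'Myong Sung Presbyterian Church'.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Activity:
    name: str
    retention_years: int = 0
    permanent: bool = False

@dataclass
class Function:
    name: str
    kind: str                      # "general" or "agency" function
    activities: List[Activity] = field(default_factory=list)

# Invented example scheme (function > activity, DIRKS-style)
scheme = [
    Function("Worship Administration", "agency", [
        Activity("Service planning records", retention_years=5),
        Activity("Baptism and membership registers", permanent=True),
    ]),
    Function("Financial Management", "general", [
        Activity("Offering receipts", retention_years=7),
        Activity("Annual budgets", permanent=True),
    ]),
]

def disposal_schedule(functions):
    """Flatten the scheme into (kind, function, activity, action) rows."""
    rows = []
    for f in functions:
        for a in f.activities:
            action = ("retain permanently" if a.permanent
                      else f"destroy after {a.retention_years} years")
            rows.append((f.kind, f.name, a.name, action))
    return rows

for row in disposal_schedule(scheme):
    print(row)
```

In a real appraisal project the retention actions would come from the agency's constitution, interviews, and general disposal authorities rather than from hard-coded values.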

Genetically Optimized Self-Organizing Polynomial Neural Networks (진화론적 최적 자기구성 다항식 뉴럴 네트워크)

  • 박호성;박병준;장성환;오성권
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.53 no.1
    • /
    • pp.40-49
    • /
    • 2004
  • In this paper, we propose a new architecture of Genetic Algorithm (GA)-based Self-Organizing Polynomial Neural Networks (SOPNN), discuss a comprehensive design methodology, and carry out a series of numeric experiments. The conventional SOPNN is based on the extended Group Method of Data Handling (GMDH) method and uses a polynomial order (viz. linear, quadratic, or modified quadratic) and a number of node inputs fixed in advance by the designer at the Polynomial Neurons (or nodes) located in each layer, through a growth process of the network. Moreover, it does not guarantee that the SOPNN generated through learning has the optimal network architecture. The proposed GA-based SOPNN, by contrast, enables the architecture to become a structurally more optimized network, and a much more flexible and preferable neural network than the conventional SOPNN. To generate the structurally optimized SOPNN, a GA-based design procedure at each stage (layer) of the SOPNN leads to the selection of preferred nodes (PNs) with optimal parameters (the number of input variables, the input variables themselves, and the order of the polynomial) available within the SOPNN. An aggregate performance index with a weighting factor is proposed in order to achieve a sound balance between the approximation and generalization (predictive) abilities of the model. The design procedure is discussed in detail. To evaluate the performance of the GA-based SOPNN, the model is tested on two time-series data sets (gas furnace data and NOx emission data from a gas-turbine power plant). A comparative analysis shows that the proposed GA-based SOPNN is a model with higher accuracy and superior predictive capability compared with other intelligent models presented previously.
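Two of the ingredients named in the abstract, a least-squares-fitted quadratic polynomial neuron (the GMDH building block) and a weighted aggregate performance index, can be sketched roughly as follows. This is our own illustration under simplifying assumptions (synthetic data, a single PN, MSE as the error measure), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def quad_features(x1, x2):
    """Design matrix of a quadratic PN: 1, x1, x2, x1*x2, x1^2, x2^2."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

def fit_pn(x1, x2, y):
    """Least-squares fit of one quadratic polynomial neuron."""
    coef, *_ = np.linalg.lstsq(quad_features(x1, x2), y, rcond=None)
    return coef

def mse(a, b):
    return float(np.mean((np.asarray(a) - np.asarray(b)) ** 2))

def aggregate_pi(e_train, e_test, theta=0.5):
    """Weighted aggregate performance index: theta trades approximation
    (training) error against generalization (testing) error."""
    return theta * e_train + (1.0 - theta) * e_test

x1, x2 = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
y = 1.0 + 2.0 * x1 + 3.0 * x2**2            # synthetic, exactly representable
coef = fit_pn(x1[:150], x2[:150], y[:150])
pi = aggregate_pi(mse(y[:150], quad_features(x1[:150], x2[:150]) @ coef),
                  mse(y[150:], quad_features(x1[150:], x2[150:]) @ coef))
```

In the full method, many candidate PNs (varying inputs and order) compete per layer and the GA keeps those with the best aggregate index; here a single PN recovers the synthetic target almost exactly, so `pi` is near zero.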

Applications of Artificial Neural Networks for Using High Performance Concrete (고성능 콘크리트의 활용을 위한 신경망의 적용)

  • Yang, Seung-Il;Yoon, Young-Soo;Lee, Seung-Hoon;Kim, Gyu-Dong
    • Journal of the Korean Society of Hazard Mitigation
    • /
    • v.3 no.4 s.11
    • /
    • pp.119-129
    • /
    • 2003
  • Concrete and steel are essential structural materials in construction. But concrete, unlike steel, consists of many materials and is affected by many factors, such as the properties of its constituent materials, site environmental conditions, and the skill of constructors. Concrete has two kinds of properties: immediately measurable ones, such as slump and air content, and time-dependent ones, such as strength. Concrete mixes therefore depend on the experience of experts. However, when using High-Performance Concrete, a new method is needed because of additional ingredients, such as mineral and chemical admixtures, and a lack of data. Artificial Neural Networks (ANNs) are models that mimic the human brain in order to solve complex nonlinear problems. They are powerful pattern recognizers and classifiers, and their computing abilities have been proven in the fields of prediction, estimation, and pattern recognition. Here, the back-propagation network and the radial basis function network are used. The high-performance concrete mixes are composed of eight components (water, cement, fine aggregate, coarse aggregate, fly ash, silica fume, superplasticizer, and air-entrainer). Compressive strength, slump, and air content are measured. The results show that neural networks are proper tools for minimizing the uncertainties of the design of concrete mixtures.
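A radial basis function network of the kind mentioned above can be sketched with plain least squares for the output weights. The data here are synthetic stand-ins for the eight mix components; the "strength" formula, center count, and kernel width are all invented for illustration and carry no relation to the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 8 mix components per row (water, cement,
# fine aggregate, coarse aggregate, fly ash, silica fume,
# superplasticizer, air-entrainer), scaled to [0, 1].
X = rng.uniform(0.0, 1.0, size=(200, 8))
# Invented smooth nonlinear "strength" target, for illustration only.
y = 40.0 + 30.0 * X[:, 1] - 25.0 * X[:, 0] + 10.0 * np.sin(3.0 * X[:, 2])

def rbf_design(X, centers, width):
    """Gaussian RBF design matrix with a leading bias column."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.hstack([np.ones((len(X), 1)), np.exp(-d2 / (2.0 * width**2))])

centers = X[rng.choice(len(X), size=20, replace=False)]   # centers = data subset
Phi = rbf_design(X, centers, width=1.0)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)               # output weights

pred = Phi @ w
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

A back-propagation MLP would instead learn all weights iteratively; the RBF network's appeal is that, with fixed centers and widths, training reduces to one linear least-squares solve.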

A Study on Temporal Data Models and Aggregate Functions (시간지원 데이터 모델 및 집계함수에 관한 연구)

  • Lee, In-Hong;Moon, Hong-Jin;Cho, Dong-Young;Lee, Wan-Kwon;Cho, Hyun-Joon
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.12
    • /
    • pp.2947-2959
    • /
    • 1997
  • A temporal data model is able to handle time-varying information by adding temporal attributes to a conventional data model. Temporal data models are classified into three kinds depending on the time dimensions they support: the valid-time model, which supports valid time; the transaction-time model, which supports transaction time; and the bitemporal data model, which supports both valid time and transaction time. Most temporal data models are designed to process temporal data by extending the relational model. There are two types of temporal data model, tuple timestamping and attribute timestamping, depending on how time is attached. In this research, the concepts of the temporal data model, the time dimensions, the types of data model, and considerations for data model design are discussed. Temporal data models are also compared in terms of their time dimensions. An aggregate function model for the valid-time model is then proposed, and a logical analysis of its computing costs is carried out.
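A valid-time aggregate of the kind proposed above can be illustrated by a minimal COUNT over the constant intervals of the timeline (the usual approach of partitioning time at interval endpoints). This sketch is our own, assuming half-open [start, end) valid intervals, and is not the paper's algorithm.

```python
def temporal_count(tuples):
    """COUNT over the constant intervals of a set of half-open
    [start, end) valid-time intervals: split the timeline at every
    endpoint, then count the tuples covering each sub-interval."""
    points = sorted({t for s, e in tuples for t in (s, e)})
    result = []
    for t1, t2 in zip(points, points[1:]):
        count = sum(1 for s, e in tuples if s <= t1 and e >= t2)
        if count:
            result.append(((t1, t2), count))
    return result

print(temporal_count([(1, 5), (3, 8), (6, 9)]))
```

The cost analysis the paper alludes to follows from this shape: with n tuples there are at most 2n endpoints, hence O(n) constant intervals, and the naive scan above costs O(n^2); sweep-line variants reduce it to O(n log n).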

Analytical Modeling of TCP Dynamics in Infrastructure-Based IEEE 802.11 WLANs

  • Yu, Jeong-Gyun;Choi, Sung-Hyun;Qiao, Daji
    • Journal of Communications and Networks
    • /
    • v.11 no.5
    • /
    • pp.518-528
    • /
    • 2009
  • IEEE 802.11 wireless local area network (WLAN) has become the prevailing solution for wireless Internet access, while the transmission control protocol (TCP) is the dominant transport-layer protocol in the Internet. It is known that, in an infrastructure-based WLAN with multiple stations carrying long-lived TCP flows, the number of TCP stations actively contending to access the wireless channel remains very small. Hence, the aggregate TCP throughput is basically independent of the total number of TCP stations. This phenomenon is due to the closed-loop nature of TCP flow control and the bottleneck downlink (i.e., access point-to-station) transmissions in infrastructure-based WLANs. In this paper, we develop a comprehensive analytical model to study TCP dynamics in infrastructure-based 802.11 WLANs. Using our model, we calculate the average number of active TCP stations and the aggregate TCP throughput for a given total number of TCP stations and maximum TCP receive window size. We find that the default minimum contention window sizes specified in the standards (i.e., 31 and 15 for 802.11b and 802.11a, respectively) are not optimal in terms of TCP throughput maximization. Via ns-2 simulation, we verify the correctness of our analytical model and study the effects of some of the simplifying assumptions employed in the model. Simulation results show that our model is reasonably accurate, particularly when the wireline delay is small and/or the packet loss rate is low.
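The qualitative result above (few active stations regardless of the total) can be reproduced with a toy birth-death chain: the AP and the n currently active stations contend with equal probability, a downlink data transmission activates one station, and an uplink ACK transmission deactivates one. This is a simplified, assumption-laden sketch, not the paper's full model.

```python
def mean_active_stations(n_max=100):
    """Stationary mean of a birth-death chain on the number of active
    TCP stations: 'birth' when the AP wins contention (activating a
    station), 'death' when one of the active stations wins."""
    pi = [1.0]                           # unnormalized stationary probabilities
    for n in range(n_max):
        birth = 1.0 / (n + 1)            # AP wins among AP + n stations
        death = (n + 1) / (n + 2)        # some station wins at state n + 1
        pi.append(pi[-1] * birth / death)
    z = sum(pi)
    return sum(n * p for n, p in enumerate(pi)) / z

print(round(mean_active_stations(), 3))
```

The stationary mean comes out at about 1.5 stations no matter how large `n_max` is, consistent with the abstract's observation that the number of contenders, and hence the aggregate throughput, does not grow with the total number of TCP stations.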

Self-Organizing Polynomial Neural Networks Based on Genetically Optimized Multi-Layer Perceptron Architecture

  • Park, Ho-Sung;Park, Byoung-Jun;Kim, Hyun-Ki;Oh, Sung-Kwun
    • International Journal of Control, Automation, and Systems
    • /
    • v.2 no.4
    • /
    • pp.423-434
    • /
    • 2004
  • In this paper, we introduce a new topology of Self-Organizing Polynomial Neural Networks (SOPNN) based on a genetically optimized Multi-Layer Perceptron (MLP) and discuss its comprehensive design methodology, involving mechanisms of genetic optimization. Recall that the design of the 'conventional' SOPNN uses the extended Group Method of Data Handling (GMDH) technique to exploit polynomials and considers a fixed number of input nodes at the polynomial neurons (or nodes) located in each layer. However, this design process does not guarantee that the conventional SOPNN generated through learning results in an optimal network architecture. The design procedure applied in the construction of each layer of the SOPNN deals with its structural optimization, involving the selection of preferred nodes (PNs) with specific local characteristics (the number of input variables, the order of the polynomials, and the input variables themselves), and addresses specific aspects of parametric optimization. An aggregate performance index with a weighting factor is proposed in order to achieve a sound balance between the approximation and generalization (predictive) abilities of the model. To evaluate the performance of the GA-based SOPNN, the model is tested using pH neutralization process data as well as sewage treatment process data. A comparative analysis indicates that the proposed SOPNN is a model with higher accuracy and superior predictive capability compared with other intelligent models presented previously.
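The genetic optimization of a PN's local characteristics can be caricatured as evolving a bitstring that encodes the polynomial order and the selected input variables. The encoding, the stand-in fitness, and all parameters below are invented for illustration; the paper's actual operators and fitness (the aggregate performance index) differ.

```python
import random

random.seed(0)

ORDERS = ("linear", "quadratic", "modified quadratic")

def fitness(bits):
    """Stand-in objective: pretend a quadratic PN with two selected
    inputs is ideal (a real run would score the aggregate performance
    index on data instead)."""
    order = bits[0] * 2 + bits[1]        # first 2 bits -> order code
    n_sel = sum(bits[2:])                # remaining bits select inputs
    return -(abs(order - 1) + abs(n_sel - 2))

def evolve(length=8, pop_size=20, gens=30, p_mut=0.1):
    """Truncation selection plus bit-flip mutation over PN chromosomes."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                   # keep best half
        children = [[b ^ (random.random() < p_mut) for b in p]
                    for p in parents]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(ORDERS[min(best[0] * 2 + best[1], 2)], fitness(best))
```

In the layered SOPNN setting, one such search runs per layer, and only the winning PNs feed the next layer.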

Assessment of compressive strength of high-performance concrete using soft computing approaches

  • Chukwuemeka Daniel;Jitendra Khatti;Kamaldeep Singh Grover
    • Computers and Concrete
    • /
    • v.33 no.1
    • /
    • pp.55-75
    • /
    • 2024
  • The present study introduces an optimum-performance soft computing model for predicting the compressive strength of high-performance concrete (HPC) by comparing, for the first time on a common database, models based on conventional (kernel-based, covariance function-based, and tree-based), advanced machine (least squares support vector machine, LSSVM, and minimax probability machine regressor, MPMR), and deep (artificial neural network, ANN) learning approaches. A compressive strength database containing the results of 1030 concrete samples was compiled from the literature and preprocessed. For training, testing, and validation of the soft computing models, 803, 101, and 101 data points were selected arbitrarily from the 1005 preprocessed data points. Thirteen performance metrics, including three new metrics (the a20-index, the index of agreement, and the index of scatter), were implemented for each model. The performance comparison reveals that the SVM (kernel-based), ET (tree-based), MPMR (advanced), and ANN (deep) models achieved higher performance in predicting the compressive strength of HPC. From the overall analysis of performance, accuracy, Taylor plot, accuracy metric, regression error characteristic curve, Anderson-Darling test, Wilcoxon test, uncertainty, and reliability, model CS4, based on the ensemble tree, was recognized as the optimum-performance model, with a correlation coefficient of 0.9352, a root mean square error of 5.76 MPa, and a mean absolute error of 4.1069 MPa. The present study also reveals that multicollinearity affects the prediction accuracy of the Gaussian process regression, decision tree, multilinear regression, and adaptive boosting regressor models, a novel observation in compressive strength prediction of HPC. A cosine sensitivity analysis reveals that the predicted compressive strength of HPC is most strongly affected by the cement content, fine aggregate, coarse aggregate, and water content.
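Two of the metrics named above have compact standard forms, sketched here under their usual definitions (a20-index as the fraction of measured/predicted ratios within ±20%, and Willmott's index of agreement); the paper's exact conventions may differ.

```python
import numpy as np

def a20_index(y_true, y_pred):
    """Fraction of samples whose measured/predicted ratio lies in
    [0.8, 1.2] (the common a20-index convention)."""
    r = np.asarray(y_true, float) / np.asarray(y_pred, float)
    return float(np.mean((r >= 0.8) & (r <= 1.2)))

def index_of_agreement(y_true, y_pred):
    """Willmott's index of agreement d in [0, 1]; 1 is perfect."""
    y = np.asarray(y_true, float)
    p = np.asarray(y_pred, float)
    num = np.sum((y - p) ** 2)
    den = np.sum((np.abs(p - y.mean()) + np.abs(y - y.mean())) ** 2)
    return float(1.0 - num / den)
```

Unlike RMSE or MAE, both metrics are scale-free, which is why they are convenient for ranking models trained on strength values spanning a wide range.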

Unit Root Test for Temporally Aggregated Autoregressive Process

  • Shin, Dong-Wan;Kim, Sung-Chul
    • Journal of the Korean Statistical Society
    • /
    • v.22 no.2
    • /
    • pp.271-282
    • /
    • 1993
  • A unit root test for a temporally aggregated first-order autoregressive process is considered. The temporal aggregate of a first-order autoregression is an autoregressive moving average of order (1,1) whose moving average parameter is a function of the autoregressive parameter. One-step Gauss-Newton estimators are proposed and are shown to have the same limiting distribution as the ordinary least squares estimator for a unit root when complete observations are available. A Monte Carlo simulation shows that the temporal aggregation has no effect on the size. The power of the suggested test is nearly the same as the power of the test based on complete observations.
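The setup can be illustrated numerically: aggregate a unit-root AR(1) over non-overlapping blocks, then run an ordinary Dickey-Fuller-style regression on the aggregated series. This sketch uses plain OLS on synthetic data, not the paper's one-step Gauss-Newton estimator.

```python
import numpy as np

rng = np.random.default_rng(42)

def aggregate(y, m):
    """Non-overlapping temporal aggregation over blocks of length m."""
    y = np.asarray(y)
    n = len(y) // m
    return y[: n * m].reshape(n, m).sum(axis=1)

def df_stat(y):
    """OLS t-statistic for rho = 1 in y_t = rho * y_{t-1} + e_t
    (no intercept), i.e., the plain Dickey-Fuller statistic."""
    y0, y1 = y[:-1], y[1:]
    rho = float(y0 @ y1) / float(y0 @ y0)
    resid = y1 - rho * y0
    s2 = float(resid @ resid) / (len(y0) - 1)
    se = (s2 / float(y0 @ y0)) ** 0.5
    return (rho - 1.0) / se

y = np.cumsum(rng.standard_normal(1200))   # AR(1) with a unit root
stat = df_stat(aggregate(y, m=4))
print(round(stat, 2))
```

The aggregated series is ARMA(1,1), so the plain statistic above is only approximate; the paper's contribution is precisely an estimator whose limiting distribution matches the complete-data case despite the induced moving-average term.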

Adsorption of Hydrophobic Organic Compounds from Aqueous Solution with CTAB Coated Silicate (CTAB가 코팅된 Silicate을 이용한 소수성 유기물질의 흡착)

  • 김학성;정영도;한훈석
    • Journal of environmental and Sanitary engineering
    • /
    • v.10 no.3
    • /
    • pp.78-84
    • /
    • 1995
  • Cationic surfactants can be used to modify the surfaces of solids to promote the adsorption of hydrophobic organic compounds. This behavior is due to the surfactant forming aggregate structures on the solid surface. Partition coefficients are commonly used to quantify the distribution of organic pollutants between the aqueous and particulate phases of an aquatic system. The partitioning of hydrophobic compounds to cetyltrimethylammonium bromide (CTAB)-coated silicate has been investigated as a function of surfactant surface coverage at ionic strengths of I = 0 and 0.1. Sorption experiments with toluene, xylene, and TCI demonstrated that the CTAB-coated silicate was able to remove these hydrophobic organic compounds from solution. Hydrophobic organic compounds with higher Kow showed higher removals than those with lower Kow.
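The partition-coefficient bookkeeping behind such removal results can be sketched with a linear partition model and a simple mass balance; the Kp values and sorbent dose below are invented for illustration and are not the study's measurements.

```python
def removal_fraction(kp, dose):
    """Fraction of solute sorbed under a linear partition model:
    with partition coefficient Kp = Cs/Cw (L/kg) and sorbent dose
    m (kg/L), the mass balance C0 = Cw + Kp*Cw*m gives
    f = Kp*m / (1 + Kp*m)."""
    x = kp * dose
    return x / (1.0 + x)

# Illustrative only: higher-Kow compounds tend to get a larger Kp on
# the CTAB coating and hence higher removal (values invented).
for name, kp in [("toluene", 50.0), ("xylene", 150.0)]:
    print(name, round(removal_fraction(kp, 0.01), 3))
```

This is why the abstract's trend (higher Kow, higher removal) follows directly once Kp is observed to increase with Kow: removal is monotone in Kp at fixed sorbent dose.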
