• Title/Summary/Keyword: automatic modeling


Simulation of Dynamic Characteristics of a Trigenerative Climate Control System Based On Peltier Thermoelectric Modules

  • Vasilyev, G.S.;Kuzichkin, O.R.;Surzhik, D.I.
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.6
    • /
    • pp.252-257
    • /
    • 2021
  • The application of the trigeneration principle makes it possible to simultaneously provide electricity for powering electronic devices as well as heat and cold for creating the necessary indoor microclimate, increasing efficiency compared with separate cooling and heating systems. Using Peltier thermoelectric modules (TEM) in trigenerative systems allows smooth and precise control of the temperature regime and offers high manufacturability and reliability due to the absence of moving parts, resistance to shock and vibration, and small weight and size. One promising direction for improving trigenerative systems is their modeling and optimization based on automatic control theory. A block diagram and functional model of an energy-saving trigenerative climate control system based on Peltier modules are developed, and the transfer functions of the open-loop and closed-loop system are obtained. The transient characteristics of the system are simulated while varying the parameters of its components. Directions for improving the quality of transients in the climate control system are determined, as well as the prospects of the proposed methodology for modeling and analyzing control systems operating in substantially nonlinear modes.
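
The transfer-function analysis summarized above can be illustrated numerically. Below is a minimal sketch, assuming a hypothetical first-order thermal plant G(s) = K/(τs + 1) under proportional feedback; the values of K, tau, and Kp are invented for illustration and are not taken from the paper.

```python
# Minimal sketch: step response of a hypothetical first-order thermal
# plant G(s) = K/(tau*s + 1) under proportional feedback control.
# All parameter values (K, tau, Kp) are illustrative assumptions.

def step_response(K=2.0, tau=5.0, Kp=4.0, dt=0.01, t_end=20.0):
    """Euler integration of the closed loop for a unit-step setpoint."""
    y, out = 0.0, []
    for _ in range(int(t_end / dt)):
        u = Kp * (1.0 - y)          # proportional controller, setpoint = 1
        dy = (K * u - y) / tau      # first-order plant dynamics
        y += dy * dt
        out.append(y)
    return out

resp = step_response()
# P control of a first-order plant settles at y_ss = K*Kp / (1 + K*Kp)
print(round(resp[-1], 3))
```

The residual steady-state offset K·Kp/(1 + K·Kp) < 1 visible here is one of the transient-quality issues that such simulations help diagnose.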

Coordinated control of two arms using fuzzy inference

  • Kim, Moon-Ju;Park, Min-Kee;Ji, Seung-Hwan;Kim, Seung-Woo;Park, Mignon
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1994.10a
    • /
    • pp.263-266
    • /
    • 1994
  • Recently, complicated and dexterous tasks involving two or more arms, which cannot be accomplished with a single manipulator, are needed in many robot manipulator applications. In general, when two arms manipulate an object, the dynamics of the arms and the object should be considered simultaneously. To control the force of the arms, various control schemes based on dynamic modeling can be used. However, there are difficulties in solving the inverse dynamics equations; the environment in which a manipulator performs its tasks is usually unknown, and the model cannot be described precisely, for instance, the effect of joint flexibility and the friction between the arm and the object. Therefore, in this paper, we suggest a new force control method employing fuzzy inference that does not require solving the dynamic equations. The fuzzy inference rules and parameters are designed and adjusted with an automatic fuzzy modeling method using the Hough transform and gradient descent.
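
As a generic illustration of the kind of fuzzy inference the paper employs (not its actual Hough-transform-tuned rule base), here is a minimal single-input Mamdani-style controller that maps a force error to a control correction; membership functions and consequents are illustrative assumptions:

```python
# Sketch of fuzzy inference for force control. The rule base, membership
# functions, and consequent values below are invented for illustration.

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

RULES = [  # ((triangle a, b, c) for the error, consequent correction)
    ((-2.0, -1.0, 0.0), -0.8),   # error negative -> decrease force
    ((-1.0,  0.0, 1.0),  0.0),   # error near zero -> hold
    (( 0.0,  1.0, 2.0),  0.8),   # error positive -> increase force
]

def fuzzy_correction(error):
    """Weighted-average (center-of-gravity) defuzzification."""
    w = [tri(error, *mf) for mf, _ in RULES]
    if sum(w) == 0.0:
        return 0.0
    return sum(wi * c for wi, (_, c) in zip(w, RULES)) / sum(w)

print(fuzzy_correction(0.0))   # near-zero error -> no correction
```

In the paper's setting, the triangle parameters and consequents would be what the automatic modeling step (Hough transform plus gradient descent) tunes from data.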


Automation of Building Extraction and Modeling Using Airborne LiDAR Data (항공 라이다 데이터를 이용한 건물 모델링의 자동화)

  • Lim, Sae-Bom;Kim, Jung-Hyun;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.27 no.5
    • /
    • pp.619-628
    • /
    • 2009
  • LiDAR provides rapid data acquisition and useful information for reconstructing the Earth's surface. However, extracting information from LiDAR data is not an easy task, because the data consist of irregularly distributed 3D point clouds and lack semantic and visual information. This paper proposes methods for the automatic extraction of buildings and detailed 3D modeling using airborne LiDAR data. In preprocessing, noise and unnecessary data were removed by iterative surface fitting, and ground and non-ground data were then classified by histogram analysis. Building footprints were extracted by tracing points on the building boundaries, and refined footprints were obtained by regularization based on the building hypothesis. The accuracy of the building footprints was evaluated by comparison with 1:1,000 digital vector maps; the horizontal RMSE was 0.56 m for the test areas. Finally, a method for 3D modeling of roof superstructures was developed. Statistical and geometric information of the LiDAR data on the building roofs was analyzed to segment the data and determine the roof shape. The superstructures on the roof were modeled by 3D analytic functions derived by the least-squares method. The accuracy of the 3D modeling was estimated using simulation data; the RMSEs were 0.91 m, 1.43 m, 1.85 m, and 1.97 m for flat, sloped, arch, and dome shapes, respectively. The methods developed in this study show that the 3D building modeling process can be effectively automated.
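
One step of the pipeline above, fitting an analytic surface to roof points by least squares, can be sketched as follows. The flat-roof (plane) model and the synthetic points are illustrative assumptions; the paper fits several roof-shape functions, not only planes.

```python
# Sketch: least-squares fit of a plane z = a*x + b*y + c to 3D points,
# solved via the 3x3 normal equations with hand-rolled Gaussian
# elimination. The points below are synthetic, not LiDAR data.

def solve3(A, rhs):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [r] for row, r in zip(A, rhs)]   # augmented matrix
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for col in range(i, 4):
                M[r][col] -= f * M[i][col]
    x = [0.0] * 3
    for i in range(2, -1, -1):
        x[i] = (M[i][3] - sum(M[i][c] * x[c] for c in range(i + 1, 3))) / M[i][i]
    return x

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c via the normal equations."""
    n = len(points)
    Sx  = sum(p[0] for p in points);  Sy  = sum(p[1] for p in points)
    Sz  = sum(p[2] for p in points)
    Sxx = sum(p[0] * p[0] for p in points)
    Syy = sum(p[1] * p[1] for p in points)
    Sxy = sum(p[0] * p[1] for p in points)
    Sxz = sum(p[0] * p[2] for p in points)
    Syz = sum(p[1] * p[2] for p in points)
    A = [[Sxx, Sxy, Sx], [Sxy, Syy, Sy], [Sx, Sy, n]]
    return solve3(A, [Sxz, Syz, Sz])

pts = [(0, 0, 3), (1, 0, 5), (0, 1, 2), (1, 1, 4), (2, 3, 4)]  # on z = 2x - y + 3
a, b, c = fit_plane(pts)
print(round(a, 6), round(b, 6), round(c, 6))
```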

Three-Dimensional Resistivity Modeling by Serendipity Element (Serendipity 요소법에 의한 전기비저항 3차원 모델링)

  • Lee, Keun-Soo;Cho, In-Ky;Kang, Hye-Jin
    • Geophysics and Geophysical Exploration
    • /
    • v.15 no.1
    • /
    • pp.33-38
    • /
    • 2012
  • The resistivity method has been applied to a wide range of engineering and environmental problems with the help of automatic and precise data acquisition. Since resistivity monitoring was introduced to quantify subsurface changes with respect to time, more accurate modeling and inversion of time-lapse monitoring data are required. Here, we use the finite element method (FEM) for 3D resistivity modeling, since it readily handles complex topography and arbitrarily shaped anomalous bodies. In the FEM, linear (first-order) elements have the advantages of simple formulation and a narrow bandwidth of the system equations. However, linear elements show poor accuracy and slow convergence of the solution with respect to the number of elements or nodes. To achieve higher accuracy of the finite element solution, higher-order elements are generally used. In this study, we developed a 3D resistivity modeling program using higher-order Serendipity elements. Comparing the Serendipity element solutions for a cube model with the linear element solutions, we confirmed that the Serendipity element solutions are more accurate in 3D resistivity modeling.
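
The serendipity family used in 3D (20-node bricks) has a simpler 2D analogue, the 8-node quadrilateral. The sketch below evaluates its standard shape functions and checks the partition-of-unity property; it is a generic FEM illustration, not code from the paper.

```python
# Shape functions of the 8-node (quadratic) serendipity quadrilateral
# on the reference element [-1, 1] x [-1, 1].

NODES = [(-1, -1), (1, -1), (1, 1), (-1, 1),   # corner nodes
         (0, -1), (1, 0), (0, 1), (-1, 0)]     # midside nodes

def shape8(xi, eta):
    """Evaluate all eight serendipity shape functions at (xi, eta)."""
    N = []
    for xi_i, eta_i in NODES:
        if xi_i != 0 and eta_i != 0:     # corner node
            N.append(0.25 * (1 + xi * xi_i) * (1 + eta * eta_i)
                          * (xi * xi_i + eta * eta_i - 1))
        elif xi_i == 0:                  # midside node on eta = +-1
            N.append(0.5 * (1 - xi * xi) * (1 + eta * eta_i))
        else:                            # midside node on xi = +-1
            N.append(0.5 * (1 - eta * eta) * (1 + xi * xi_i))
    return N

# Partition of unity holds everywhere in the reference element:
print(round(sum(shape8(0.3, -0.2)), 6))   # -> 1.0
```

The 20-node brick used for 3D resistivity modeling is built from the same idea: nodes only on edges, so the element stays quadratic without interior nodes, keeping the system bandwidth small.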

Estimation of deep percolation using field moisture observations and HYDRUS-1D modeling in Haean basin (해안분지의 현장 토양수분 관측과 HYDRUS-1D 모델링을 이용한 지하수 함양 추정)

  • Kim, Jeong Jik;Jeon, Woo-Hyun;Lee, Jin-Yong
    • Journal of the Geological Society of Korea
    • /
    • v.54 no.5
    • /
    • pp.545-556
    • /
    • 2018
  • This study estimated deep percolation using numerical modeling and rainfall-based field observations in the Haean basin. Soil moisture sensors were installed for monitoring at depths of 30, 60, and 90 cm at four sites (YHS1-4), and an automatic weather station was installed near YHS3. Soil moisture and meteorological data were observed from March 25, 2017 to March 25, 2018 and from May 6, 2016 to May 6, 2018, respectively. Numerical analysis was performed for June to August 2017 using HYDRUS-1D. During the monitoring period, average soil moisture content was generally highest at YHS3 (0.300 to $0.334m^3/m^3$) and lowest at YHS1 (0.129 to $0.265m^3/m^3$). The soil moisture flow modeling showed that field observations and modeled values were similar, although peak values were larger in the modeling results. Correlation analysis between the observed and modeled data gave r, $r^2$, and RMSE of 0.88, 0.77, and 0.0096, respectively, indicating high correlation and a low error rate. The total deep percolation at 500 cm depth during the modeling period was 744.2 mm, meaning that 61.3% of the 2017 precipitation (1,214 mm) was recharged; deep percolation was thus high in the study area. This study is expected to provide basic data for estimating groundwater recharge through the unsaturated zone.
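
The agreement statistics quoted above (r, r², RMSE) can be computed as below; the two short series are synthetic stand-ins for the observed and HYDRUS-1D-simulated soil moisture, not the study's data.

```python
# Pearson correlation and RMSE between observed and modeled series.
# The series below are invented for illustration only.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rmse(xs, ys):
    """Root-mean-square error between two equal-length series."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs))

obs   = [0.30, 0.31, 0.33, 0.32, 0.30]   # hypothetical observed moisture
model = [0.29, 0.32, 0.34, 0.31, 0.30]   # hypothetical simulated moisture

r = pearson_r(obs, model)
print(round(r, 3), round(r * r, 3), round(rmse(obs, model), 4))
```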

AutoML and Artificial Neural Network Modeling of Process Dynamics of LNG Regasification Using Seawater (해수 이용 LNG 재기화 공정의 딥러닝과 AutoML을 이용한 동적모델링)

  • Shin, Yongbeom;Yoo, Sangwoo;Kwak, Dongho;Lee, Nagyeong;Shin, Dongil
    • Korean Chemical Engineering Research
    • /
    • v.59 no.2
    • /
    • pp.209-218
    • /
    • 2021
  • First-principles modeling studies have been performed to improve the heat exchange efficiency of the ORV and to optimize its operation, but the heat transfer coefficient of the ORV varies irregularly with time and location, requiring a complex modeling process. In this study, FNN-, LSTM-, and AutoML-based modeling was performed to confirm the effectiveness of data-based modeling for complex systems. In terms of MSE, prediction accuracy ranked LSTM > AutoML > FNN. The performance of AutoML, an automatic design method for machine learning models, was superior to the developed FNN, and the total model development time was 1/15 of that for LSTM, showing the potential of AutoML. Predictions of the NG and seawater discharge temperatures using LSTM and AutoML showed errors of less than 0.5 K. Using the predictive model, real-time optimization of the amount of LNG that can be vaporized by the ORV in winter was performed, confirming that up to 23.5% more LNG can be processed, and an ORV optimal operation guideline based on the developed dynamic prediction model was presented.

Automatic Quality Evaluation with Completeness and Succinctness for Text Summarization (완전성과 간결성을 고려한 텍스트 요약 품질의 자동 평가 기법)

  • Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.125-148
    • /
    • 2018
  • Recently, as the demand for big data analysis increases, cases of analyzing unstructured data and using the results are also increasing. Among the various types of unstructured data, text is used as a means of communicating information in almost all fields, and many analysts are interested in it because the amount of data is very large and it is relatively easy to collect compared with other unstructured and structured data. Among the various text analysis applications, document classification, which assigns documents to predetermined categories; topic modeling, which extracts major topics from a large number of documents; sentiment analysis or opinion mining, which identifies emotions or opinions contained in texts; and text summarization, which condenses the main contents of one or several documents, have been actively studied. In particular, text summarization is actively applied in business through news summary services, privacy policy summary services, etc. In academia, much research has been done on both the extractive approach, which selectively provides the main elements of a document, and the abstractive approach, which extracts the elements of a document and composes new sentences by combining them. However, techniques for evaluating the quality of automatically summarized documents have not progressed as much as automatic text summarization itself. Most existing studies on summarization quality evaluation manually summarized documents, used them as reference documents, and measured the similarity between the automatic summary and the reference document. Specifically, automatic summarization is performed on the full text through various techniques, and the quality of the automatic summary is measured by comparison with the reference document, which is taken to be an ideal summary.
Reference documents are provided in two major ways. The most common is manual summarization, in which a person creates an ideal summary by hand. Since this method requires human intervention, it takes a lot of time and cost, and the evaluation result may differ depending on who writes the summary. To overcome these limitations, attempts have been made to measure the quality of summary documents without human intervention. A representative attempt is a recently devised method that reduces the size of the full text and measures the similarity between the reduced full text and the automatic summary. In this method, the more frequently a term from the full text appears in the summary, the better the quality of the summary is judged to be. However, since summarization essentially means condensing a large amount of content while minimizing omissions, a summary judged "good" on frequency alone is not necessarily a good summary in this essential sense. To overcome the limitations of this previous work, this study proposes an automatic quality evaluation method for text summarization based on the essential meaning of summarization. Specifically, succinctness is defined as an element indicating how little content is duplicated among the sentences of the summary, and completeness as an element indicating how little of the source content is missing from the summary. This paper proposes a method for the automatic quality evaluation of text summarization based on these two concepts.
To evaluate the practical applicability of the proposed methodology, 29,671 sentences were extracted from TripAdvisor hotel reviews, the reviews for each hotel were summarized, and the quality of the summaries was evaluated according to the proposed methodology. The paper also provides a way to integrate completeness and succinctness, which are in a trade-off relationship, into an F-score, and proposes a method for performing optimal summarization by varying the sentence-similarity threshold.
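
A minimal sketch of the proposed measures, using simple Jaccard word overlap as a stand-in for the paper's sentence-similarity model; the threshold, similarity function, and example sentences are illustrative assumptions:

```python
# Completeness: fraction of source sentences covered by the summary.
# Succinctness: fraction of summary sentences that are not redundant.
# Both are combined into an F-score. Similarity is a simple stand-in.

def overlap(a, b):
    """Jaccard word overlap between two sentences (stand-in similarity)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def quality(source, summary, thr=0.5):
    """Return (completeness, succinctness, F-score) of a summary."""
    covered = sum(any(overlap(s, t) >= thr for t in summary) for s in source)
    completeness = covered / len(source)
    dup = sum(any(overlap(summary[i], summary[j]) >= thr
                  for j in range(i)) for i in range(len(summary)))
    succinctness = 1 - dup / len(summary)
    f = (2 * completeness * succinctness / (completeness + succinctness)
         if completeness + succinctness else 0.0)
    return completeness, succinctness, f

src  = ["the room was clean", "staff were friendly", "breakfast was great"]
summ = ["the room was clean", "staff were friendly"]
print(quality(src, summ))
```

A summary that repeats itself loses succinctness; one that drops source sentences loses completeness; the F-score balances the trade-off between the two, as in the paper.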

A New Dynamic Auction Mechanism in the Supply Chain: N-Bilateral Optimized Combinatorial Auction (N-BOCA) (공급사슬에서의 새로운 동적 경매 메커니즘: 다자간 최적화 조합경매 모형)

  • Choi Jin-Ho;Chang Yong-Sik;Han In-Goo
    • Journal of Intelligence and Information Systems
    • /
    • v.12 no.1
    • /
    • pp.139-161
    • /
    • 2006
  • In this paper, we introduce a new combinatorial auction mechanism: the N-Bilateral Optimized Combinatorial Auction (N-BOCA). N-BOCA is a flexible iterative combinatorial auction model that offers optimized trading for multiple suppliers and multiple purchasers in the supply chain. We design the N-BOCA system from the perspectives of architecture, protocol, and trading strategy. Under the given N-BOCA architecture and protocol, auctioneers and bidders have diverse decision strategies for winner determination, which requires flexible modeling environments. Hence, we propose an optimization modeling agent for bid and auctioneer selection. The agent is capable of automatic model formulation for integer programming. Finally, we show the viability of N-BOCA through a prototype and experiments. The results show both higher allocation efficiency and effectiveness compared with 1-to-N general combinatorial auction mechanisms.
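
The winner-determination problem that the paper's agent formulates as an integer program can, for a toy single-auctioneer case, be solved by brute force over bundle bids. The items and bids below are invented for illustration; the paper's agent generates the equivalent integer-programming model automatically.

```python
# Toy combinatorial-auction winner determination: pick a set of
# non-overlapping bundle bids that maximizes revenue. Items and bids
# are hypothetical examples, not from the paper.
from itertools import combinations

ITEMS = {"A", "B", "C"}
BIDS = [  # (bidder, bundle, price)
    ("b1", {"A"}, 4),
    ("b2", {"B", "C"}, 7),
    ("b3", {"A", "B"}, 9),
    ("b4", {"C"}, 3),
]

def winner_determination(bids, items):
    """Exhaustively search for the revenue-maximizing disjoint bid set."""
    best, best_rev = [], 0
    for k in range(1, len(bids) + 1):
        for combo in combinations(bids, k):
            bundles = [b for _, b, _ in combo]
            union = set().union(*bundles)
            # bundles must be pairwise disjoint and drawn from the items
            if len(union) == sum(len(b) for b in bundles) and union <= items:
                rev = sum(p for _, _, p in combo)
                if rev > best_rev:
                    best, best_rev = list(combo), rev
    return best, best_rev

winners, revenue = winner_determination(BIDS, ITEMS)
print([w[0] for w in winners], revenue)   # b3 + b4 maximize revenue (9 + 3)
```

Brute force is exponential in the number of bids, which is exactly why the paper delegates larger instances to an integer-programming formulation.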


Pronunciation Variation Modeling for Korean Point-of-Interest Data Using Prosodic Information (운율 정보를 이용한 한국어 위치 정보 데이타의 발음 모델링)

  • Kim, Sun-He;Park, Jeon-Gue;Na, Min-Soo;Jeon, Je-Hun;Chung, Min-Wha
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.2
    • /
    • pp.104-111
    • /
    • 2007
  • This paper examines how the performance of an automatic speech recognizer for Korean Point-of-Interest (POI) data was improved by modeling pronunciation variation using structural prosodic information such as prosodic words and syllable length. First, multiple pronunciation variants are generated using prosodic words, given that each POI word can be broken down into prosodic words. Then, cross-prosodic-word variations are modeled considering the syllable length of each word. A total of 81 experiments were conducted using 9 test sets (3 baseline, 6 proposed) on 9 training sets (3 baseline, 6 proposed). The results show that: (i) performance improved when the pronunciation lexica were generated using prosodic words; (ii) the best performance was achieved when the maximum number of variants was constrained to 3 based on syllable length; and (iii) compared with the baseline word error rate (WER) of 4.63%, a WER reduction of up to 8.4% was achieved when both prosodic words and syllable length were considered.
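
WER, the metric reported above, is the word-level Levenshtein distance divided by the reference length. A compact implementation follows, with a made-up romanized POI example rather than the paper's test set:

```python
# Word error rate via word-level Levenshtein (edit) distance.

def wer(ref, hyp):
    """WER = (substitutions + deletions + insertions) / len(reference)."""
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[len(r)][len(h)] / len(r)

# Hypothetical POI utterance: one substitution plus one insertion.
print(wer("seoul yeoksam station", "seoul yeok sam station"))
```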

A Case Study on the Air Quality Impact Assessment for the Large Scale Urban Development (대규모 도시개발사업에 대한 대기질 평가 사례 연구)

  • Kim, Sang-Mok;Lee, Sang-Hun;Park, Keun-Hyoung;Woo, Jae-Kyun;Koo, Youn-Seo;Kim, Sung-Tae;Han, Jin-Seok
    • Journal of Environmental Impact Assessment
    • /
    • v.16 no.6
    • /
    • pp.381-391
    • /
    • 2007
  • Air quality modeling was carried out to assess the air quality impact of a large-scale urban development. The site assessed is the Multi-functional Administrative City, located in Yeongi-gun, Chungcheongnam-do, with an estimated 2030 population of 500,000. Two automatic weather monitoring stations were installed to monitor meteorological variables for a year, and upper-air meteorological parameters were measured using radiosondes for 5 days at 4-hour intervals in every season. The concentrations of the standard air pollutants were also measured continuously for 5 days in every season. Wind field analysis based on the site measurements and CALMET modeling showed that valley and mountain winds prevailed when the synoptic wind was weak, and that wind speed and direction were highly variable in space within the site basin. These variable wind characteristics imply that Gaussian dispersion models such as ISC3 and AERMOD are not appropriate and that an unsteady-state Lagrangian model such as CALPUFF is preferable. The CALPUFF model was applied to assess the air quality impact of the new sources, which comprised individual and group heating facilities as well as increased traffic. The results showed that the estimated concentrations of CO and $SO_2$, obtained by summing the impact concentrations of the new sources from the dispersion model and the ambient concentrations from the site measurements, were acceptable, but that PM-10 and $NO_2$ would violate ambient air quality standards at several locations due to high ambient concentrations. It is recommended that emission reductions near the site be enforced to improve ambient air quality.