• Title/Summary/Keyword: engineering optimization

Development of an intelligent IIoT platform for stable data collection (안정적 데이터 수집을 위한 지능형 IIoT 플랫폼 개발)

  • Woojin Cho;Hyungah Lee;Dongju Kim;Jae-hoi Gu
    • The Journal of the Convergence on Culture Technology / v.10 no.4 / pp.687-692 / 2024
  • The energy crisis is emerging as a serious problem worldwide. Korea has strong interest in energy-efficiency research on industrial complexes, which consume more than 53% of the country's total energy and account for more than 45% of its greenhouse gas emissions. One line of research, the virtual energy network plant, saves energy by sharing facilities among factories that use the same utility within an industrial complex and by trading energy between producing and consuming factories. In such energy-saving research, data collection is critical because the data serve many purposes, such as analysis and prediction. Existing systems, however, had several shortcomings in reliably collecting time series data. In this study, we propose an intelligent IIoT platform to address them. The platform includes a preprocessing system that identifies abnormal data and handles it promptly, classifies abnormal and missing data, and applies interpolation techniques to keep the time series stable. Time series data collection is further streamlined through database optimization. This paper contributes to increasing data usability in industrial environments through stable data collection and rapid problem response, and reduces the data-collection burden and monitoring load by introducing a variety of chatbot notification systems.
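The preprocessing step described above, classifying abnormal and missing points and interpolating to keep the series stable, can be sketched simply. This is a minimal illustration, not the platform's actual code; the range thresholds and function names are assumptions:

```python
# Hypothetical sketch of the gap-handling step: flag readings outside a
# plausible range as abnormal, treat them as missing, and fill interior
# gaps by linear interpolation between the nearest valid neighbours.

def clean_series(values, lo, hi):
    """Replace out-of-range readings with None (abnormal -> missing)."""
    return [v if (v is not None and lo <= v <= hi) else None for v in values]

def interpolate_gaps(values):
    """Linearly interpolate interior runs of None between valid samples."""
    out = list(values)
    n = len(out)
    i = 0
    while i < n:
        if out[i] is None:
            j = i
            while j < n and out[j] is None:
                j += 1
            if i > 0 and j < n:  # interior gap: interpolate across it
                step = (out[j] - out[i - 1]) / (j - i + 1)
                for k in range(i, j):
                    out[k] = out[i - 1] + step * (k - i + 1)
            i = j
        else:
            i += 1
    return out

raw = [10.0, 10.2, None, 999.0, 10.8, 11.0]   # 999.0 is an abnormal spike
fixed = interpolate_gaps(clean_series(raw, 0.0, 100.0))
```

Leading or trailing gaps are left as `None` here; a production system would also have to decide how to extrapolate those.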

Status Diagnosis Algorithm for Optimizing Power Generation of PV Power Generation System due to PV Module and Inverter Failure, Leakage and Arc Occurrence (태양광 모듈, 인버터 고장, 누설 및 아크 발생에 따른 태양광발전시스템의 발전량 최적화를 위한 상태진단 알고리즘)

  • Yongho Yoon
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.4 / pp.135-140 / 2024
  • PV power generation systems are said to have a long lifespan compared with other renewable energy sources and to require little maintenance. In practice, however, the performance expected at initial design is often not achieved because of shading, temperature rise, mismatch, contamination or deterioration of PV modules, inverter failure, leakage current, and arc generation. To diagnose such problems, the power output and operating status are inspected qualitatively, or performance is comparatively analyzed using the performance ratio (PR), the standard performance index of a PV system. Because the PR lumps together large losses, however, it is difficult to determine accurately from this index alone whether the system suffers abnormalities such as performance degradation, failure, or defects. In this paper, we study a status diagnosis algorithm covering module shading, inverter failure, leakage, and arcing, aimed at optimizing the power generation of PV systems as the surrounding environment changes. Using the algorithm, we also examine the results of an empirical test of condition diagnosis for each fault area and the resulting optimized power generation.
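For reference, the performance ratio (PR) mentioned above is the standard PV index: the actual AC yield divided by the yield an ideally performing array would give under the measured irradiation. A minimal sketch (the numbers are illustrative):

```python
# Performance ratio in the IEC 61724 sense: actual yield over reference yield.

def performance_ratio(e_ac_kwh, p_rated_kw, irradiation_kwh_m2, g_stc=1.0):
    """PR = actual AC energy / reference yield.

    e_ac_kwh: AC energy delivered over the period (kWh)
    p_rated_kw: array nameplate power at standard test conditions (kW)
    irradiation_kwh_m2: in-plane irradiation over the period (kWh/m^2)
    g_stc: reference irradiance in kW/m^2 (1.0 at STC)
    """
    reference_yield = p_rated_kw * irradiation_kwh_m2 / g_stc
    return e_ac_kwh / reference_yield

# 100 kW array, 150 kWh/m^2 of monthly irradiation, 12,000 kWh produced:
pr = performance_ratio(12000, 100, 150)   # 0.80
```

A healthy system typically shows PR around 0.75-0.85; as the abstract notes, a low PR flags that *something* is wrong but not *which* fault (shading, inverter, leakage, arc) is responsible.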

Monitoring of Reinjected Leachate in a Landfill using Electrical Resistivity Survey (전기비저항 탐사를 이용한 매립지의 재주입 침출수 모니터링)

  • Chul Hee Lee;Su In Jeon;Young-Kyu Kim;Won-Ki Kim
    • Geophysics and Geophysical Exploration / v.27 no.3 / pp.159-170 / 2024
  • The bioreactor method, in which leachate is reinjected into a landfill to accelerate the decomposition and stabilization of buried waste, is being applied and tested at many landfills because of its numerous advantages. To apply the method successfully, it is essential to understand how the injected leachate behaves. In this study, electrical resistivity monitoring was performed to track the behavior of leachate at a Korean landfill where the bioreactor method was applied. A baseline survey was conducted in August 2013, before the leachate was injected, and time-lapse monitoring surveys were conducted four times after injection. The monitoring revealed reductions in electrical resistivity within the landfill attributable to the injected leachate and confirmed how those changes evolved over time. In addition, by newly defining an electrical resistivity change ratio and applying it, the spatial distribution and behavior of the leachate over time were effectively identified. Further research on optimizing data acquisition and on integrated monitoring combining various techniques should follow in the near future.
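The electrical resistivity change ratio is newly defined in the paper and its exact form is not given in the abstract; a common convention in time-lapse monitoring is the relative change against the baseline model, which is what this sketch assumes:

```python
# Assumed form of a time-lapse change ratio: (rho_t - rho_0) / rho_0 per model
# cell, so conductive leachate shows up as negative values. The paper's actual
# definition may differ.

def change_ratio(rho_baseline, rho_timelapse):
    """Per-cell relative resistivity change between baseline and monitoring."""
    return [(rt - r0) / r0 for r0, rt in zip(rho_baseline, rho_timelapse)]

baseline = [120.0, 95.0, 80.0]    # ohm-m, baseline inversion model
monitor  = [118.0, 60.0, 78.0]    # time-lapse model after leachate injection
ratios = change_ratio(baseline, monitor)  # cell 2 drops ~37%: leachate path
```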

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the wealth of information generated while computer systems operate, are used in many processes, from system inspection and process optimization to customized optimization for users. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data produced by banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing a client's business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult to realize the flexible storage expansion needed for massive amounts of unstructured log data and to execute the many functions required to categorize and analyze them. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to handle with the analysis tools and management systems of the existing computing infrastructure. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, including the ability to expand resources such as storage space and memory when storage is extended or log data increase rapidly. Moreover, to overcome the processing limits of existing analysis tools when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data.
Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that allow it to continue operating after recovering from a malfunction. Finally, by establishing a distributed database using NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data; moreover, their strict schemas make it hard to expand nodes when rapidly growing data must be distributed across many nodes. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, MongoDB, a representative document-oriented store with a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when data grow rapidly, and it provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module.
When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis performed by the MongoDB module, the Hadoop-based analysis module, and the MySQL module, per analysis time and type of the aggregated log data, and presents them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions, and are parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, measuring log insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is identified through an evaluation of MongoDB's log insertion performance for various chunk sizes.
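The collector's routing rule described above (real-time logs to the MySQL module, bulk unstructured logs to the MongoDB module for batch analysis by the Hadoop module) can be sketched in a few lines. The module names mirror the paper, but the `realtime` flag and record schema are assumptions for illustration:

```python
# Sketch of the log collector's classification step. Storage backends are
# represented as plain lists here; the real system would write to MySQL and
# MongoDB respectively.

def route_log(record):
    """Classify one log record to a storage backend by type."""
    if record.get("realtime"):
        return "mysql"      # low-latency path for real-time graphs
    return "mongodb"        # free-schema store for bulk unstructured logs

def collect(records):
    """Group incoming records by their destination module."""
    routed = {"mysql": [], "mongodb": []}
    for rec in records:
        routed[route_log(rec)].append(rec)
    return routed

logs = [
    {"type": "transaction", "realtime": True,  "msg": "transfer ok"},
    {"type": "batch_audit", "realtime": False, "msg": "eod report"},
]
buckets = collect(logs)   # one record routed to each backend
```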

Identification of the Environmentally Problematic Input/Environmental Emissions and Selection of the Optimum End-of-pipe Treatment Technologies of the Cement Manufacturing Process (시멘트 제조공정의 환경적 취약 투입물/환경오염물 파악 및 최적종말처리 공정 선정)

  • Lee, Joo-Young;Kim, Yoon-Ha;Lee, Kun-Mo
    • Journal of Korean Society of Environmental Engineers / v.39 no.8 / pp.449-455 / 2017
  • Process input data, including materials and energy, and process output data, including the product, co-products, and environmental emissions, were collected and analyzed for the reference and target processes to evaluate process performance. Environmentally problematic inputs/environmental emissions of the manufacturing processes were identified using these data. Significant process inputs contributing to each environmental emission were identified by multiple regression analysis between the process inputs and the emissions. The optimum combination of end-of-pipe technologies for treating the emissions, considering economic aspects, was determined using linear programming. Cement manufacturing processes in Korea and the EU producing the same type of cement were chosen for the case study. The environmentally problematic inputs/emissions of the domestic cement manufacturing processes include coal, dust, and $SO_x$. Multiple regression analysis revealed that $CO_2$ emission was influenced most by coal, followed by the input raw materials and gypsum; $SO_x$ emission was influenced by coal, and dust emission by gypsum followed by raw material. Optimization of the end-of-pipe technologies treating dust showed that a combination of 100% of the electrostatic precipitator and 2.4% of the fiber filter gives the lowest cost; for $SO_x$, a combination of 100% of the dry addition process and 25.88% of the wet scrubber gives the lowest cost. A salient feature of this research is that it proposes a method for identifying environmentally problematic inputs/emissions of manufacturing processes, in particular cement manufacturing; another is that it shows a method for selecting the optimum combination of end-of-pipe treatment technologies.
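The end-of-pipe selection described above is a small linear program: choose the utilization of each treatment technology to meet a required removal at minimum cost. A brute-force sketch under assumed numbers (the efficiencies, costs, series-treatment model, and removal target below are illustrative, not the paper's data):

```python
# Grid-search stand-in for the linear program: utilization fractions x1, x2
# of two dust-treatment technologies operating in series, minimum cost
# subject to an overall removal target.

def best_mix(eff, cost, target, steps=100):
    """Search fractions x1, x2 in [0, 1] meeting the removal target.

    eff[i]: fraction of the pollutant technology i removes at full use
    cost[i]: cost of running technology i at full use
    target: required overall removal fraction
    """
    best = None
    for i in range(steps + 1):
        for j in range(steps + 1):
            x1, x2 = i / steps, j / steps
            removal = 1 - (1 - eff[0] * x1) * (1 - eff[1] * x2)  # in series
            if removal >= target:
                c = cost[0] * x1 + cost[1] * x2
                if best is None or c < best[0]:
                    best = (c, x1, x2)
    return best

# electrostatic precipitator + fiber filter, 99.9% dust removal required:
c, x_ep, x_ff = best_mix(eff=[0.99, 0.95], cost=[10.0, 8.0], target=0.999)
```

A real formulation would use an LP solver over flow fractions and treatment costs, but the structure (linear cost, removal constraint, bounded utilizations) is the same.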

A Study on Process Optimization for CSOs Application of Horizontal Flow Filtration Technology (수평흐름식 여과기술의 CSOs 적용을 위한 공정 최적화 연구)

  • Kim, Jae-Hak;Yang, Jeong-Ha;Lee, Young-Shin
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.2 / pp.56-63 / 2018
  • The management of Combined Sewer Overflows (CSOs) and Separated Sewer Overflows (SSOs), which are discharged directly to receiving waters in an untreated state when facility capacity is exceeded during heavy rain, has become an important issue as heavy rainfall becomes a regular phenomenon. Despite the continuous development of filtration technology, it is rarely applied to CSOs in densely populated urban areas. This study was therefore carried out to optimize a pilot-scale horizontal flow filtration process with rope-type synthetic fiber media for application to CSOs. The research proceeded in two steps: a preliminary study using artificial samples and a field study using sewage. In the preliminary study, the head loss of the filter media itself was approximately 1.1 cm, and it increased by approximately 0.1 cm for each 10 m/hr increase in linear velocity. The SS removal efficiency was stable at 81.4%, the filtration run lasted more than 6 hours, and an average recovery rate of 98% was obtained by air backwashing alone. In the on-site evaluation using sewage, the filtration run lasted approximately 2 hours and an average removal efficiency of 83.9% was obtained when a belt screen (over 450 mesh) was applied as pre-treatment to prevent premature clogging of the filter media. To apply the filtration process to CSOs and SSOs, combination with a pre-treatment process proved important, more to reinforce the hydraulics and stably maintain the operating period than to raise efficiency. Compared with the dry season, the quality of the incoming sewage was lower in the rainy season, which was attributed to the characteristics of a drainage area with a high proportion of sanitary sewerage; the difference in removal efficiency between wet-season and dry-season influent was nevertheless small.
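The preliminary head-loss result above amounts to a simple linear relation: roughly 1.1 cm for the clean media plus about 0.1 cm per 10 m/hr of linear velocity. A one-line empirical sketch (treating the 1.1 cm as the intercept is an assumption; the coefficients come from the abstract):

```python
# Empirical head-loss relation for the rope-type media, from the reported
# 1.1 cm base loss and ~0.1 cm increase per 10 m/hr of linear velocity.

def head_loss_cm(linear_velocity_m_hr, base_cm=1.1, slope_cm_per_10m_hr=0.1):
    """Estimated clean-media head loss at a given linear velocity."""
    return base_cm + slope_cm_per_10m_hr * (linear_velocity_m_hr / 10.0)

h = head_loss_cm(40.0)   # 1.1 + 0.4 = 1.5 cm
```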

Restoring Omitted Sentence Constituents in Encyclopedia Documents Using Structural SVM (Structural SVM을 이용한 백과사전 문서 내 생략 문장성분 복원)

  • Hwang, Min-Kook;Kim, Youngtae;Ra, Dongyul;Lim, Soojong;Kim, Hyunki
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.131-150 / 2015
  • Omission of noun phrases for obligatory cases is a common phenomenon in Korean and Japanese sentences that is not observed in English. When an argument of a predicate can be filled with a noun phrase co-referential with the title, the argument is especially likely to be omitted in encyclopedia texts. The omitted noun phrase is called a zero anaphor or zero pronoun. Encyclopedias like Wikipedia are a major source for information extraction by intelligent application systems such as information retrieval and question answering systems, but the omission of noun phrases degrades the quality of information extraction. This paper deals with developing a system that can restore omitted noun phrases in encyclopedia documents. The problem our system addresses is closely related to zero anaphora resolution, one of the important problems in natural language processing. A noun phrase in the text that can be used for restoration is called an antecedent; an antecedent must be co-referential with the zero anaphor. Whereas in standard zero anaphora resolution the candidate antecedents are only noun phrases in the same text, in our problem the title is also a candidate. In our system, the first stage detects the zero anaphor. In the second stage, antecedent search is carried out over the candidates. If the search fails, an attempt is made in the third stage to use the title as the antecedent. The main characteristic of our system is the use of a structural SVM for finding the antecedent. The noun phrases in the text that appear before the position of the zero anaphor comprise the search space. The main technique in previous work is binary classification over all noun phrases in the search space, selecting as antecedent the noun phrase classified positive with the highest confidence.
In this paper, however, we view antecedent search as the problem of assigning antecedent-indicator labels to a sequence of noun phrases; in other words, sequence labeling is employed for antecedent search in the text. We are the first to suggest this idea. To perform sequence labeling, we use a structural SVM that receives a sequence of noun phrases as input and returns a sequence of labels as output, each label indicating whether the corresponding noun phrase is the antecedent. The structural SVM is based on the modified Pegasos algorithm, which exploits a subgradient-descent methodology for optimization problems. To train and test the system, we selected a set of Wikipedia texts and constructed an annotated corpus providing gold-standard answers such as zero anaphors and their possible antecedents. Training examples prepared from this corpus were used to train the SVMs and test the system. For zero anaphor detection, sentences are parsed by a syntactic analyzer and omitted subject or object cases are identified; the performance of our system therefore depends on that of the syntactic analyzer, which is a limitation. When an antecedent is not found in the text, the system tries to use the title to restore the zero anaphor, based on binary classification with a regular SVM. Experiments showed that our system achieves F1 = 68.58%, which means a state-of-the-art system can be developed with our technique. Future work enabling the system to utilize semantic information is expected to yield a significant performance improvement.
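The modified Pegasos algorithm mentioned above is a subgradient-descent solver for the SVM objective. Below is a minimal plain (non-structural) Pegasos sketch for binary classification; the paper's structural variant labels an entire sequence of noun phrases jointly, which this toy version does not attempt:

```python
# Standard Pegasos: at step t, shrink w by (1 - eta*lam) and, if the sampled
# example violates the margin, take a subgradient step toward it.

import random

def pegasos_train(data, lam=0.01, epochs=200, seed=0):
    """data: list of (feature_vector, label) with label in {-1, +1}."""
    rng = random.Random(seed)
    dim = len(data[0][0])
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        for x, y in rng.sample(data, len(data)):
            t += 1
            eta = 1.0 / (lam * t)
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            w = [(1 - eta * lam) * wi for wi in w]       # regularization shrink
            if margin < 1:                               # margin violated
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1

# linearly separable toy data: the label follows the sign of feature 1
train = [([1.0, 0.2], 1), ([0.8, -0.1], 1), ([-1.0, 0.3], -1), ([-0.9, -0.2], -1)]
w = pegasos_train(train)
```

The structural version replaces the per-example hinge loss with a loss over the highest-scoring wrong label *sequence*, but the subgradient update has the same shape.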

The Preparation of Magnetic Chitosan Nanoparticles with GABA and Drug Adsorption-Release (GABA를 담지한 자성 키토산 나노입자 제조와 약물의흡수 및 방출 연구)

  • Yoon, Hee-Soo;Kang, Ik-Joong
    • Korean Chemical Engineering Research / v.58 no.4 / pp.541-549 / 2020
  • A Drug Delivery System (DDS) is a technology for designing existing or new drug formulations and optimizing drug treatment: it aims to deliver drugs efficiently for the treatment of disease, minimize side effects, and maximize efficacy. In this study, the effect of tripolyphosphate (TPP) concentration on the size of chitosan nanoparticles (CNPs) produced by crosslinking with chitosan was measured and optimized. The characteristics of Fe3O4-CNPs were also measured as a function of the amount of iron oxide (Fe3O4), confirming that a higher Fe3O4 content gave better characteristics as a magnetic drug carrier. Through the ninhydrin reaction, calibration curves for γ-aminobutyric acid (GABA) concentration were obtained: Y = 0.00373exp(179.729X) - 0.0114 (R2 = 0.989) at low concentration (0.004 to 0.02 wt%) and Y = 21.680X - 0.290 (R2 = 0.999) at high concentration (0.02 to 0.1 wt%). Adsorption was constant at about 62.5% above 0.04 g of initial GABA. In addition, the amount of GABA released from GABA-Fe3O4-CNPs over time was measured, confirming that drug release was essentially complete after about 24 hr. Finally, GABA-Fe3O4-CNPs prepared under the optimal conditions were spherical particles of about 150 nm that retained the desired particle properties, indicating that GABA-Fe3O4-CNPs are suitable as drug carriers.
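The two calibration fits above map GABA concentration X (wt%) to absorbance Y. A small sketch that evaluates them piecewise and inverts the linear branch to recover concentration from a measured absorbance (the piecewise split at 0.02 wt% follows the stated fitting ranges; the two fits need not agree exactly at the boundary):

```python
# Ninhydrin calibration curves from the abstract, applied piecewise.

import math

def absorbance(conc_wt_pct):
    """Exponential branch below 0.02 wt%, linear branch above."""
    if conc_wt_pct <= 0.02:
        return 0.00373 * math.exp(179.729 * conc_wt_pct) - 0.0114
    return 21.680 * conc_wt_pct - 0.290

def conc_from_absorbance_linear(y):
    """Invert the high-concentration linear branch (valid for 0.02-0.1 wt%)."""
    return (y + 0.290) / 21.680

y = absorbance(0.05)                  # 21.680*0.05 - 0.290 = 0.794
c = conc_from_absorbance_linear(y)    # recovers 0.05 wt%
```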

Impact of Sulfur Dioxide Impurity on Process Design of $CO_2$ Offshore Geological Storage: Evaluation of Physical Property Models and Optimization of Binary Parameter (이산화황 불순물이 이산화탄소 해양 지중저장 공정설계에 미치는 영향 평가: 상태량 모델의 비교 분석 및 이성분 매개변수 최적화)

  • Huh, Cheol;Kang, Seong-Gil;Cho, Mang-Ik
    • Journal of the Korean Society for Marine Environment &amp; Energy / v.13 no.3 / pp.187-197 / 2010
  • Carbon dioxide Capture and Storage (CCS) is regarded as one of the most promising options for responding to climate change. CCS is a three-stage process consisting of capturing carbon dioxide ($CO_2$), transporting the $CO_2$ to a storage location, and isolating it from the atmosphere over the long term to mitigate carbon emissions. To date, process design for $CO_2$ marine geological storage has been carried out mainly for pure $CO_2$. Unfortunately, the $CO_2$ mixture captured from power plants and steel mills contains many impurities, such as $N_2$, $O_2$, Ar, $H_2O$, $SO_2$, and $H_2S$. Even a small amount of impurities can change the thermodynamic properties and thereby significantly affect the compression, purification, transport, and injection processes. To design a reliable $CO_2$ marine geological storage system, the impact of these impurities on the whole CCS process must be analyzed at the initial design stage. The purpose of the present paper is to compare and analyze the relevant physical property models, including the BWRS, PR, PRBM, RKS, and SRK equations of state and the NRTL-RK model, which are crucial tools for numerical process simulation. To evaluate the predictive accuracy of the equations of state for the $CO_2-SO_2$ mixture, we compared numerical results with reference experimental data. In addition, an optimum binary parameter accounting for the interaction of $CO_2$ and $SO_2$ molecules was suggested based on the mean absolute percent error. In conclusion, we suggest the most reliable physical property model, with an optimized binary parameter, for designing the $CO_2-SO_2$ mixture marine geological storage process.
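The binary-parameter optimization described above minimizes the mean absolute percent error (MAPE) between model predictions and experimental data. A generic grid-search sketch; the toy one-parameter model and synthetic data below are illustrative stand-ins for an equation of state with a binary interaction parameter k_ij:

```python
# Fit a single binary interaction parameter by minimizing MAPE against
# reference data, as in the paper's procedure (model and data are synthetic).

def mape(pred, obs):
    """Mean absolute percent error between predictions and observations."""
    return 100.0 * sum(abs(p - o) / abs(o) for p, o in zip(pred, obs)) / len(obs)

def fit_kij(model, xs, obs, k_grid):
    """Pick the k_ij on the grid giving the lowest MAPE against the data."""
    return min(k_grid, key=lambda k: mape([model(x, k) for x in xs], obs))

# toy 'EOS': a mixture property linear in composition plus a k_ij correction
model = lambda x, k: 1.0 + x + k * x * (1 - x)
xs  = [0.2, 0.5, 0.8]                       # SO2 mole fractions
obs = [model(x, 0.3) for x in xs]           # synthetic 'experimental' data
k_grid = [i / 100 for i in range(-50, 51)]  # candidate k_ij values
k_best = fit_kij(model, xs, obs, k_grid)    # recovers k_ij = 0.3
```

In practice `model` would be an EOS flash or density calculation from a process simulator, and the grid could be replaced by any 1-D minimizer.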

Optimization Process Models of Gas Combined Cycle CHP Using Renewable Energy Hybrid System in Industrial Complex (산업단지 내 CHP Hybrid System 최적화 모델에 관한 연구)

  • Oh, Kwang Min;Kim, Lae Hyun
    • Journal of Energy Engineering / v.28 no.3 / pp.65-79 / 2019
  • This study estimated the optimal facility capacity obtained by combining renewable energy sources with gas CHP in an industrial complex. In particular, we reviewed industrial complexes subject to energy use planning from 2013 to 2016. Although excluded from regional designation, the Sejong industrial complex, with an annual fuel usage of 38 thousand TOE and a high heat density of $92.6Gcal/km^2{\cdot}h$, was selected for the study. We analyzed the optimal operating model of a CHP hybrid system combining fuel cells and photovoltaic generation using HOMER Pro, an economic analysis program for renewable energy hybrid systems. To improve the reliability of the research, we analyzed not only the total heat demand but also the heat demand patterns of the sectors dominating thermal energy use (heat being the main energy supplied by CHP), and added the economic benefits to compare the relative merits. The total indirect heat demand of the Sejong industrial complex under construction was 378,282 Gcal per year, of which the paper industry accounted for 77.7%, or 293,754 Gcal per year. For the indirect heat demand of the entire complex, a single CHP has an optimal capacity of 30,000 kW; in this case CHP covers 275,707 Gcal, or 72.8%, of heat production, while the peak-load boiler (PLB) covers 103,240 Gcal, or 27.2%. For the CHP, fuel cell, and photovoltaic combination, the optimal capacities are 30,000 kW, 5,000 kW, and 1,980 kW, respectively; CHP then covers 275,940 Gcal (72.8%), the fuel cell 12,390 Gcal (3.3%), and the PLB 90,620 Gcal (23.9%). The CHP capacity was not reduced because any reduction produced an uneconomical alternative requiring excessive PLB operation to make up the resulting shortfall in heat production.
On the other hand, in terms of the indirect heat demand of the paper industry, the dominant sector, the optimal capacities of the CHP, fuel cell, and photovoltaic combination are 25,000 kW, 5,000 kW, and 2,000 kW, with heat production of CHP 225,053 Gcal (76.5%), fuel cell 11,215 Gcal (3.8%), and PLB 58,012 Gcal (19.7%). The economic analysis under the current electricity and gas markets shows that a return on investment is not achievable. Nevertheless, we confirmed that the CHP hybrid system combining CHP, fuel cells, and solar power can improve the operating balance by about KRW 9.3 billion annually compared with a single CHP system.