• Title/Summary/Keyword: engineering optimization


A Study on Optimization of Nitric Acid Leaching and Roasting Process for Selective Lithium Leaching of Spent Battery Cell Powder (폐 배터리 셀 분말의 선택적 리튬 침출을 위한 질산염화 공정 최적화 연구)

  • Jung, Yeon Jae;Park, Sung Cheol;Kim, Yong Hwan;Yoo, Bong Young;Lee, Man Seung;Son, Seong Ho
    • Resources Recycling / v.30 no.6 / pp.43-52 / 2021
  • In this study, the optimal nitration process for selective lithium leaching from spent battery cell powder (LiNixCoyMnzO2, LiCoO2) was investigated using the Taguchi method. The nitration process is a selective lithium leaching method that converts the non-lithium compounds into oxides via nitric acid leaching and roasting. The influences of pretreatment temperature, nitric acid concentration, amount of nitric acid, and roasting temperature were evaluated. The signal-to-noise ratios and analysis of variance of the results were determined using an L16(4^4) orthogonal array. The findings indicated that roasting temperature had the greatest impact on the lithium leaching ratio, followed by nitric acid concentration, pretreatment temperature, and the amount of nitric acid used. Following detailed experiments, the optimal conditions were found to be 10 h of pretreatment at 700℃, leaching with 2 ml/g of 10 M nitric acid, and 10 h of roasting at 275℃. Under these conditions, the overall recovery of lithium exceeded 80%. To determine the cause of the rapid decrease in the lithium leaching rate at roasting temperatures above 400℃, X-ray diffraction (XRD) analysis was performed on the residue remaining after roasting lithium nitrate and other nitrate compounds and leaching in deionized water. The results confirmed that lithium manganese oxide, which does not leach in deionized water, was formed from lithium nitrate and manganese nitrate at these temperatures. XRD analysis also confirmed the recovery of pure LiNO3 from the solution leached during the nitration process, obtained by solid-liquid separation followed by evaporation and concentration of the leachate.
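As a brief illustration of the analysis behind such a design, the Taguchi larger-the-better signal-to-noise ratio can be computed per orthogonal-array run. A minimal Python sketch follows, with hypothetical leaching ratios standing in for the paper's data.

    import numpy as np

    # Hypothetical replicate lithium leaching ratios (%) for two runs of an
    # L16(4^4) orthogonal array -- illustrative values, not the paper's data.
    runs = {
        "run01": [78.2, 80.1, 79.5],
        "run02": [61.3, 58.7, 60.2],
    }

    def sn_larger_is_better(y):
        """Taguchi larger-the-better S/N ratio: -10*log10(mean(1/y^2))."""
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(1.0 / y**2))

    for name, y in runs.items():
        print(f"{name}: S/N = {sn_larger_is_better(y):.2f} dB")

Runs with higher S/N are preferred; factor effects are then judged from the mean S/N at each factor level, as in the paper's analysis of variance.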

Optimal Operation of Gas Engine for Biogas Plant in Sewage Treatment Plant (하수처리장 바이오가스 플랜트의 가스엔진 최적 운영 방안)

  • Kim, Gill Jung;Kim, Lae Hyun
    • Journal of Energy Engineering / v.28 no.2 / pp.18-35 / 2019
  • The Korea District Heating Corporation operates a 1,500 kW gas engine generator fueled by the 4,500 m³/day of biogas produced at the sewage treatment plant of the Nanji Water Recycling Center. However, practical operating experience with the biogas power plant is limited, and the lack of accumulated technology and know-how leads to frequent breakdowns and stoppages of the gas engine, causing considerable economic loss. Technical measures for stable operation of the plant are therefore needed. In this study, a series of process problems of the gas engine plant using the biogas generated at the sewage treatment plant of the Nanji Water Recycling Center was identified, and actual operation was optimized by minimizing the problems at each step. To purify the gas, whose impurities are the main cause of failure stoppages, quality standards for the adsorption capacity of the activated carbon were established through component analysis and adsorption tests on the activated carbon currently in use. In addition, standards for the activated carbon replacement cycle to minimize impurities, a shortened hydrogen sulfide measurement interval, localization of the activated carbon supply, and strengthened and improved plant operating standards were applied to actual operation. As a result, the operating performance of gas engine #1 increased by 530% and that of gas engine #2 by 250%. Improvement of the vent line equipment also reduced the work process and increased normal operation time and the operation rate. In terms of economic efficiency, sales increased by KRW 77,000 per year. Applying the strengthened and improved operating standards is judged to be an operational plan that can reduce stoppages of the biogas plant and increase its utilization rate.

Anomaly Detection for User Action with Generative Adversarial Networks (적대적 생성 모델을 활용한 사용자 행위 이상 탐지 방법)

  • Choi, Nam woong;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.43-62 / 2019
  • At one time, the anomaly detection field was dominated by methods that judged abnormality from statistics derived from specific data. This methodology worked because data dimensionality was low in the past, so classical statistical methods were effective. However, as data characteristics have grown complex in the era of big data, it has become difficult to accurately analyze and predict industrial data in the conventional way. Supervised learning algorithms such as SVM and decision trees were therefore adopted. However, a supervised learning-based model can predict test data accurately only when the class distribution is balanced, whereas most data generated in industry has imbalanced classes, so its predictions are not always valid. To overcome these drawbacks, many studies now use unsupervised learning-based models that are not influenced by class distribution, such as autoencoders or generative adversarial networks. In this paper, we propose a method for detecting anomalies using generative adversarial networks. AnoGAN, introduced by Schlegl et al. (2017), is a convolutional-neural-network-based model that performs anomaly detection on medical images. By contrast, research on anomaly detection for sequence data using generative adversarial networks is scarce compared to that for image data. Li et al. (2018) proposed a model using LSTM, a type of recurrent neural network, to classify anomalies in numerical sequence data, but neither categorical sequence data nor the feature matching method of Salimans et al. (2016) has been addressed. This suggests that much remains to be studied in anomaly classification of sequence data with generative adversarial networks. To learn the sequence data, the generative adversarial network is built from LSTMs: the generator is a two-layer stacked LSTM with 32-dim and 64-dim hidden unit layers, and the discriminator LSTM uses a 64-dim hidden unit layer. Existing work on anomaly detection for sequence data derives anomaly scores from the entropy of the probability assigned to the actual data; in this paper, as mentioned earlier, anomaly scores are instead derived using the feature matching technique. In addition, the latent-variable optimization process was designed with an LSTM to improve model performance. The modified generative adversarial model was more precise than the autoencoder in all experiments and approximately 7% higher in accuracy. In terms of robustness, the generative adversarial network also outperformed the autoencoder: because it learns the data distribution from real categorical sequence data, it is not swayed by a single normal pattern, whereas the autoencoder is. In the robustness test, the accuracy of the autoencoder was 92% and that of the generative adversarial network 96%; in terms of sensitivity, the autoencoder scored 40% and the generative adversarial network 51%.
Experiments were also conducted to show how much performance changes with differences in the latent-variable optimization structure. As a result, sensitivity improved by about 1%. These results offer a new perspective on optimizing latent variables, which had previously received relatively little attention.
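Below is a minimal PyTorch sketch of an LSTM-based generative adversarial network with a feature-matching anomaly score, in the spirit of the abstract. The 32/64-dim generator layers and 64-dim discriminator layer follow the abstract; the latent dimension, vocabulary size, the choice of the discriminator's last hidden state as the matched feature, and the plain gradient-descent latent search (the paper designs this optimization with an LSTM) are assumptions.

    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        """Two-layer stacked LSTM generator (32-dim then 64-dim hidden units)."""
        def __init__(self, latent_dim=16, vocab_size=50):
            super().__init__()
            self.lstm1 = nn.LSTM(latent_dim, 32, batch_first=True)
            self.lstm2 = nn.LSTM(32, 64, batch_first=True)
            self.out = nn.Linear(64, vocab_size)

        def forward(self, z):
            h, _ = self.lstm1(z)
            h, _ = self.lstm2(h)
            return torch.softmax(self.out(h), dim=-1)  # categorical sequence probs

    class Discriminator(nn.Module):
        """Single-layer LSTM discriminator (64-dim hidden units)."""
        def __init__(self, vocab_size=50):
            super().__init__()
            self.lstm = nn.LSTM(vocab_size, 64, batch_first=True)
            self.clf = nn.Linear(64, 1)

        def features(self, x):
            h, _ = self.lstm(x)
            return h[:, -1, :]  # last hidden state, used for feature matching

        def forward(self, x):
            return torch.sigmoid(self.clf(self.features(x)))

    def anomaly_score(x, z, G, D):
        """Feature-matching score: distance between discriminator features of
        the real sequence and of the sequence generated from latent code z."""
        return torch.norm(D.features(x) - D.features(G(z)), dim=-1)

    def optimize_latent(x, G, D, latent_dim=16, steps=100, lr=0.01):
        """Search for the latent code that best reconstructs x in feature space."""
        z = torch.randn(x.size(0), x.size(1), latent_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            anomaly_score(x, z, G, D).sum().backward()
            opt.step()
        return z.detach()

Sequences whose score remains high after latent optimization are flagged as anomalous.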

Optimization and Scale-up of Fish Skin Peptide Loaded Liposome Preparation and Its Storage Stability (어피 펩타이드 리포좀 대량생산 최적 조건 및 저장 안정성)

  • Lee, JungGyu;Lee, YunJung;Bai, JingJing;Kim, Soojin;Cho, Youngjae;Choi, Mi-Jung
    • Food Engineering Progress / v.21 no.4 / pp.360-366 / 2017
  • Fish skin peptide-loaded liposomes were prepared in 100 mL and 1 L solutions at lab scale and in a 10 L solution at prototype scale. Particle size and zeta potential were measured to determine the optimal conditions for producing the peptide-loaded liposomes. The liposomes were manufactured under the following conditions: (1) primary homogenization at 4,000, 8,000, or 12,000 rpm for 3 minutes; (2) secondary homogenization at 40, 60, or 80 watts (W) for 3 minutes. From this experimental design, the optimal homogenization conditions were selected as 4,000 rpm and 60 W. Next, fish peptides were prepared at concentrations of 3, 6, and 12% under the optimal liposome manufacturing conditions and stored at 4℃. Particle size, polydispersity index (PdI), and zeta potential of the peptide-loaded liposomes were measured to assess stability. Particle size increased significantly as manufacturing scale and peptide concentration increased, and decreased over storage time. Zeta potential increased with storage time at the 10 L scale. In addition, the 12% peptide formulation formed a sediment layer after 3 weeks, so the 6% peptide formulation was considered the most suitable for industrial application.

Economic Impact of HEMOS-Cloud Services for M&S Support (M&S 지원을 위한 HEMOS-Cloud 서비스의 경제적 효과)

  • Jung, Dae Yong;Seo, Dong Woo;Hwang, Jae Soon;Park, Sung Uk;Kim, Myung Il
    • KIPS Transactions on Computer and Communication Systems / v.10 no.10 / pp.261-268 / 2021
  • Cloud computing is a computing paradigm in which users can utilize computing resources in a pay-as-you-go manner. In a cloud system, resources can be dynamically scaled up and down on demand, reducing the total cost of ownership. Modeling and Simulation (M&S) is a well-known simulation-based method for obtaining engineering analyses and results through CAE software without physical experiments. In general, M&S is utilized in Finite Element Analysis (FEA), Computational Fluid Dynamics (CFD), Multibody Dynamics (MBD), and optimization. The M&S workflow is divided into pre-processing, analysis, and post-processing steps. Pre/post-processing are GPU-intensive jobs consisting of 3D modeling in CAE software, whereas analysis is CPU- or GPU-intensive. Because a general-purpose desktop needs a long time to analyze complicated 3D models, CAE software requires a high-end CPU and GPU-based workstation to run smoothly. In other words, executing M&S requires high-performance computing resources. To mitigate the cost of equipping such resources, we propose the HEMOS-Cloud service, an integrated cloud and cluster computing environment that provides CAE software and computing resources to users in industry or academia who want to perform M&S. In this paper, the economic ripple effect of the HEMOS-Cloud service was analyzed using inter-industry (input-output) analysis. The estimates obtained with expert-guided coefficients are a production inducement effect of KRW 7.4 billion, a value-added effect of KRW 4.1 billion, and an employment inducement effect of 50 persons per KRW 1 billion.
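Production inducement effects of the kind reported here are conventionally obtained from the Leontief inverse of an input-output table. A minimal Python sketch with made-up numbers (not the study's coefficients):

    import numpy as np

    # Hypothetical 2-sector technical coefficient matrix A: A[i, j] is the
    # input from sector i needed per unit of sector j's output (assumed values).
    A = np.array([[0.2, 0.3],
                  [0.1, 0.4]])
    new_demand = np.array([1.0, 0.5])  # added final demand, KRW billions (assumed)

    # The Leontief inverse (I - A)^-1 captures direct plus indirect requirements.
    leontief = np.linalg.inv(np.eye(2) - A)
    production_inducement = leontief @ new_demand
    print(production_inducement)  # total output induced in each sector

Value-added and employment effects follow the same pattern, applying value-added and employment coefficient vectors to the induced output.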

Optimization for Removal of Nitrogen Using Non-consumable Anode Electrodes (비소모성 Anode(산화전극)을 이용한 질소 제거 최적화)

  • Hyunsang, Kim;Younghee, Kim
    • Clean Technology / v.28 no.4 / pp.309-315 / 2022
  • Research was conducted to derive the optimal operating conditions and the optimal cathode when using a DSA electrode as the anode, minimizing electrode consumption during electrochemical removal of nitrogen from wastewater. Of the various electrodes tested as cathodes, brass was determined to be optimal: it had the highest NO3-N removal rate and the lowest concentration of residual NH3-N, a by-product formed when Cl is present in the solution. Investigation of the effect of current density found that when the initial concentration of NO3-N was 50 mg L-1, the optimal current density was 15 mA cm-2; current densities above 15 mA cm-2 did not significantly affect the NO3-N removal rate. The effect of electrolytes on removing NO3-N and minimizing NH3-N was investigated using Na2SO4 and NaCl as electrolytes with varying reaction times. When Na2SO4 and NaCl were mixed at 1.0 g L-1 and 0.5 g L-1, respectively, and reacted for 90 min at a current density of 15 mA cm-2 and an initial NO3-N concentration of 50 mg L-1, the NO3-N removal rate was about 48% with no residual NH3-N. On the other hand, when only 1.5 g L-1 of NaCl was used as the electrolyte, the NO3-N removal rate was highest at about 55%, also with no residual NH3-N.

Video Analysis System for Action and Emotion Detection by Object with Hierarchical Clustering based Re-ID (계층적 군집화 기반 Re-ID를 활용한 객체별 행동 및 표정 검출용 영상 분석 시스템)

  • Lee, Sang-Hyun;Yang, Seong-Hun;Oh, Seung-Jin;Kang, Jinbeom
    • Journal of Intelligence and Information Systems / v.28 no.1 / pp.89-106 / 2022
  • Recently, the amount of video data collected from smartphones, CCTVs, black boxes, and high-definition cameras has increased rapidly, and with it the requirements for analysis and utilization. Because many industries lack skilled manpower to analyze videos, machine learning and artificial intelligence are actively used to assist. In this situation, demand for computer vision technologies such as object detection and tracking, action detection, emotion detection, and re-identification (Re-ID) has also grown rapidly. However, object detection and tracking suffers from performance-degrading difficulties such as occlusion and re-appearance after an object leaves the recording location. Consequently, action and emotion detection models built on object detection and tracking also struggle to extract data for each object. In addition, deep learning architectures composed of multiple models suffer performance degradation from bottlenecks and a lack of optimization. In this study, we propose a video analysis system consisting of a YOLOv5-based DeepSORT object tracking model, a SlowFast-based action recognition model, a Torchreid-based Re-ID model, and AWS Rekognition, an emotion recognition service. The proposed system uses single-linkage hierarchical clustering-based Re-ID and processing methods that maximize hardware throughput. It achieves higher accuracy than a re-identification model using simple metrics, offers near real-time processing, and prevents tracking failures caused by object departure and re-emergence, occlusion, and the like. By continuously linking each object's action and facial emotion detection results to the same identity, videos can be analyzed efficiently. For each frame, the re-identification model extracts a feature vector from the bounding box of the object image detected by the tracking model and applies single-linkage hierarchical clustering against feature vectors from past frames to identify objects whose tracks were lost. Through this process, an object that was occluded or that re-appeared after leaving the scene can be re-tracked, and the action and facial emotion detection results of the newly recognized object can be linked to those of the object that appeared in the past. To improve processing performance, we introduce a per-object Bounding Box Queue and a Feature Queue, which reduce RAM requirements while maximizing GPU throughput. We also introduce the IoF (Intersection over Face) algorithm, which links facial emotions recognized through AWS Rekognition with object tracking information; a sketch of this idea appears after this abstract. The academic significance of this study is that, through these processing techniques, the two-stage re-identification model achieves real-time performance even in a computationally expensive environment that also performs action and facial emotion detection, without sacrificing accuracy by resorting to simple metrics. The practical implication is that industries that require action and facial emotion detection but are hampered by object tracking failures can analyze videos effectively with the proposed model.
The proposed model, with its high re-tracking accuracy and processing performance, can be used in fields such as intelligent monitoring, observation services, and behavioral or psychological analysis services, where integrating tracking information with extracted metadata creates great industrial and business value. In the future, to measure object tracking performance more precisely, experiments should be conducted on the MOT Challenge dataset, which is used by many international conferences. We will also investigate the cases the IoF algorithm cannot handle in order to develop a complementary algorithm, and we plan to apply the model to datasets from various fields related to intelligent video analysis.
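A minimal Python sketch of the IoF idea follows, assuming (since the abstract does not spell out the formula) that IoF is defined analogously to IoU but normalized by the face-box area, so that a face fully inside a tracked object box scores 1.0; the threshold value is likewise an assumption.

    def iof(face_box, person_box):
        """Intersection over Face: overlap between a face box and a tracked
        object box, divided by the face-box area (assumed definition)."""
        fx1, fy1, fx2, fy2 = face_box
        px1, py1, px2, py2 = person_box
        iw = max(0.0, min(fx2, px2) - max(fx1, px1))
        ih = max(0.0, min(fy2, py2) - max(fy1, py1))
        face_area = (fx2 - fx1) * (fy2 - fy1)
        return iw * ih / face_area if face_area > 0 else 0.0

    def assign_face_to_track(face_box, tracked_boxes, threshold=0.5):
        """Link a face (and its AWS Rekognition emotion) to the tracked object
        whose bounding box best contains it; None if no box passes the threshold."""
        if not tracked_boxes:
            return None
        scores = [iof(face_box, tb) for tb in tracked_boxes]
        best = max(range(len(scores)), key=scores.__getitem__)
        return best if scores[best] >= threshold else None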

A Study on the Development of Ultra-precision Small Angle Spindle for Curved Processing of Special Shape Pocket in the Fourth Industrial Revolution of Machine Tools (공작기계의 4차 산업혁명에서 특수한 형상 포켓 곡면가공을 위한 초정밀 소형 앵글 스핀들 개발에 관한 연구)

  • Lee Ji Woong
    • Journal of Practical Engineering Education / v.15 no.1 / pp.119-126 / 2023
  • Today, to improve fuel efficiency and dynamic behavior, automobile parts are being made lighter and simpler. To simplify product design and manufacturing, multiple components are integrated; for example, combining three parts into a single product forces machining into very narrow areas. Existing parts are produced by precision die casting or casting for processing convenience, but the multi-piece approach requires many processes and reduces part precision and strength. Integral manufacturing is very advantageous for simplifying the machining process and securing part strength, but deep, narrow pocket features cannot be machined with the equipment's own spindle. To solve this problem, research on cutting processes is being actively conducted, and multi-axis composite machining technology not only solves it but also offers many advantages, such as the ability to cut complex shapes on a single machine tool that previously required multiple processes. In reality, however, the expensive equipment raises manufacturing costs, and engineers who can operate it are scarce. On five-axis machines, producing parts with deep and narrow sections lengthens cycle times owing to indirect tool access, and many machining problems occur. Therefore, dedicated machine tools and multi-axis composite machines should be used; alternatively, an angle spindle can serve as a special tool enabling multi-axis composite machining of five or more axes on a three-axis machining center. Further continuous study of the angle spindle is needed in areas such as absorbing machining vibration, low heat generation and operational stability, excellent dimensional stability, and securing strength.

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.163-177 / 2019
  • As smartphones become widely used, human activity recognition (HAR) tasks that recognize smartphone users' personal activities from multimodal data have been actively studied. The research area is expanding from recognizing an individual user's simple body movements to recognizing low-level and high-level behaviors. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data, whereas physical sensors such as the accelerometer, magnetic field sensor, and gyroscope are less privacy-sensitive and can collect a large amount of data in a short time. In this paper, we propose a deep learning-based method for detecting accompanying status using only multimodal physical sensor data from the accelerometer, magnetic field sensor, and gyroscope. Accompanying status is defined as a subset of user interaction behavior: whether the user is accompanying an acquaintance at close distance and whether the user is actively communicating with that acquaintance. We propose a framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompaniment and conversation. First, a data preprocessing method is introduced that consists of time synchronization of the multimodal data from the different physical sensors, data normalization, and sequence data generation. Nearest-neighbor interpolation was applied to synchronize the timestamps of data collected from different sensors, normalization was performed on each x, y, z axis value, and sequence data were generated with the sliding window method. The sequence data then become the input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consists of 3 convolutional layers and has no pooling layer, so as to maintain the temporal information of the sequence data. Next, LSTM recurrent networks receive the feature maps and learn long-term dependencies from them. The LSTM recurrent networks consist of two layers, each with 128 cells. Finally, the extracted features are classified by a softmax classifier. The loss function is cross entropy, and the weights are randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model was trained with the adaptive moment estimation (ADAM) optimizer with a mini-batch size of 128, and dropout was applied to the inputs of the LSTM recurrent networks to prevent overfitting. The initial learning rate was 0.001 and decayed exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. On these data, the model classified accompaniment and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and the accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network. A sketch of the described architecture follows this abstract.
In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. We will also study transfer learning methods that let a model trained on the training data transfer to evaluation data drawn from a different distribution, which is expected to yield a model with robust recognition performance against data changes not considered during training.
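A minimal PyTorch sketch of the described framework follows. The three pooling-free convolutional layers, two 128-cell LSTM layers, cross-entropy/softmax head, dropout on the LSTM input, normal(0, 0.1) initialization, ADAM optimizer, and 0.99-per-epoch exponential learning-rate decay follow the abstract; the kernel sizes, channel widths, dropout rate, and the 9-channel input (3 axes each for accelerometer, magnetic field, and gyroscope) are assumptions.

    import torch
    import torch.nn as nn

    class AccompanyNet(nn.Module):
        def __init__(self, n_channels=9, n_classes=2):
            super().__init__()
            # 3 conv layers, no pooling, so the temporal length is preserved
            self.cnn = nn.Sequential(
                nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            )
            self.drop = nn.Dropout(0.5)  # dropout on LSTM inputs; rate assumed
            self.lstm = nn.LSTM(64, 128, num_layers=2, batch_first=True)
            self.head = nn.Linear(128, n_classes)

        def forward(self, x):  # x: (batch, time, channels) sliding-window sequences
            h = self.cnn(x.transpose(1, 2)).transpose(1, 2)
            h, _ = self.lstm(self.drop(h))
            return self.head(h[:, -1])  # logits; softmax lives in the loss

    model = AccompanyNet()
    for p in model.parameters():  # normal init, mean 0, std 0.1
        nn.init.normal_(p, mean=0.0, std=0.1)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)
    loss_fn = nn.CrossEntropyLoss()  # call scheduler.step() once per epoch

One nuance: initializing all parameters, including LSTM and bias terms, from the same normal distribution is a literal reading of the abstract; in practice, layer-specific initialization schemes are more common.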

Re-Analysis of Clark Model Based on Drainage Structure of Basin (배수구조를 기반으로 한 Clark 모형의 재해석)

  • Park, Sang Hyun;Kim, Joo Cheol;Jeong, Dong Kug;Jung, Kwan Sue
    • KSCE Journal of Civil and Environmental Engineering Research / v.33 no.6 / pp.2255-2265 / 2013
  • This study presents the width function-based Clark model. To this end, rescaled width function with distinction between hillslope and channel velocity is used as time-area curve and then it is routed through linear storage within the framework of not finite difference scheme used in original Clark model but analytical expression of linear storage routing. There are three parameters focused in this study: storage coefficient, hillslope velocity and channel velocity. SCE-UA, one of the popular global optimization methods, is applied to estimate them. The shapes of resulting IUHs from this study are evaluated in terms of the three statistical moments of hydrologic response functions: mean, variance and the third moment about the center of IUH. The correlation coefficients to the three statistical moments simulated in this study against these of observed hydrographs were estimated at 0.995 for the mean, 0.993 for the variance and 0.983 for the third moment about the center of IUH. The shape of resulting IUHs from this study give rise to satisfactory simulation results in terms of the mean and variance. But the third moment about the center of IUH tend to be overestimated. Clark model proposed in this study is superior to the one only taking into account mean and variance of IUH with respect to skewness, peak discharge and peak time of runoff hydrograph. From this result it is confirmed that the method suggested in this study is useful tool to reflect the heterogeneity of drainage path and hydrodynamic parameters. The variation of statistical moments of IUH are mainly influenced by storage coefficient and in turn the effect of channel velocity is greater than the one of hillslope velocity. Therefore storage coefficient and channel velocity are the crucial factors in shaping the form of IUH and should be considered carefully to apply Clark model proposed in this study.