• Title/Summary/Keyword: Processing Parameter

1,117 search results, processing time 0.024 seconds

Development of smart car intelligent wheel hub bearing embedded system using predictive diagnosis algorithm

  • Sam-Taek Kim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.10
    • /
    • pp.1-8
    • /
    • 2023
  • A defect in a wheel bearing, a major automotive component, can cause serious problems such as traffic accidents. Addressing this risk requires a system that collects big data and performs monitoring so that predictive diagnosis and management technology can provide early warning of whether a wheel bearing has failed and of the type of failure. In this paper, to implement such an intelligent wheel hub bearing maintenance system, we develop an embedded system equipped with sensors for monitoring reliability and soundness, together with algorithms for predictive diagnosis. The algorithm acquires vibration signals from acceleration sensors installed on the wheel bearings and predicts and diagnoses failures using big data technology, combining signal processing techniques, fault frequency analysis, and the definition of health characteristic parameters. The implemented algorithm applies a stable signal extraction step that minimizes extraneous vibration frequency components and maximizes the vibration components generated in the wheel bearings. For noise removal with filters, an artificial-intelligence-based soundness extraction algorithm is applied; FFT is then used to analyze the fault frequencies, and faults are diagnosed by extracting fault characteristic factors. The performance target of this system was over 12,800 ODR, and test results showed that the target was met.
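
The core of the pipeline described in this abstract is conventional vibration analysis: transform the accelerometer signal with an FFT and inspect the energy near known bearing fault frequencies. Below is a minimal Python sketch of that idea; the sampling rate, the outer-race fault frequency (BPFO), and the threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

FS = 12_800           # sampling rate in Hz (assumed; the paper only states an ODR target)
BPFO_HZ = 87.3        # hypothetical outer-race fault frequency for the bearing geometry

def fault_band_energy(signal: np.ndarray, center_hz: float, half_width_hz: float = 2.0) -> float:
    """Spectral energy in a narrow band around a candidate fault frequency."""
    windowed = signal * np.hanning(len(signal))          # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FS)
    band = (freqs >= center_hz - half_width_hz) & (freqs <= center_hz + half_width_hz)
    return float(spectrum[band].sum())

def diagnose(signal: np.ndarray, threshold: float) -> bool:
    """Flag a suspected outer-race defect when the band energy exceeds a threshold."""
    return fault_band_energy(signal, BPFO_HZ) > threshold
```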

Modified AWSSDR method for frequency-dependent reverberation time estimation (주파수 대역별 잔향시간 추정을 위한 변형된 AWSSDR 방식)

  • Min Sik Kim;Hyung Soon Kim
    • Phonetics and Speech Sciences
    • /
    • v.15 no.4
    • /
    • pp.91-100
    • /
    • 2023
  • Reverberation time (T60) is a representative acoustic parameter that provides information about reverberation. Since the impact of reverberation varies across frequency bands even in the same space, frequency-dependent (FD) T60, which offers more detailed insight into the acoustic environment, can be useful. However, most conventional blind T60 estimation methods, which estimate T60 from speech signals, focus on fullband T60, and the few existing blind FDT60 estimation methods commonly show poor performance in the low-frequency bands. This paper introduces a modified approach based on Attentive pooling based Weighted Sum of Spectral Decay Rates (AWSSDR), previously proposed for blind T60 estimation, extending its target from fullband T60 to FDT60. Experimental results show that the proposed method outperforms conventional blind FDT60 estimation methods on the acoustic characterization of environments (ACE) challenge evaluation dataset; notably, it consistently exhibits excellent estimation performance in all frequency bands. This demonstrates that the mechanism of the AWSSDR method is valuable for blind FDT60 estimation: by processing the spectral decay rates associated with the physical properties of reverberation in each frequency band, it reflects the FD variation in the impact of reverberation and aggregates information about FDT60 from the speech signal.
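
The feature underlying AWSSDR is the spectral decay rate: how fast log-spectral energy falls off over time in each frequency band. The Python sketch below computes that raw feature only; the attentive pooling network that weights the rates is omitted, and the STFT parameters and fixed fitting window are illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy.signal import stft

def band_decay_rates(x: np.ndarray, fs: int, frames_per_fit: int = 10) -> np.ndarray:
    """Least-squares slope of log-magnitude over time, per frequency bin."""
    _, _, Z = stft(x, fs=fs, nperseg=512, noverlap=256)
    log_mag = np.log(np.abs(Z) + 1e-10)                  # (freq_bins, time_frames)
    t = np.arange(frames_per_fit)
    rates = []
    for start in range(0, log_mag.shape[1] - frames_per_fit, frames_per_fit):
        seg = log_mag[:, start:start + frames_per_fit]
        slopes = np.polyfit(t, seg.T, deg=1)[0]          # one slope per frequency bin
        rates.append(slopes)
    return np.array(rates)                               # (windows, freq_bins)
```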

Speed-up Techniques for High-Resolution Grid Data Processing in the Early Warning System for Agrometeorological Disaster (농업기상재해 조기경보시스템에서의 고해상도 격자형 자료의 처리 속도 향상 기법)

  • Park, J.H.;Shin, Y.S.;Kim, S.K.;Kang, W.S.;Han, Y.K.;Kim, J.H.;Kim, D.J.;Kim, S.O.;Shim, K.M.;Park, E.W.
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.19 no.3
    • /
    • pp.153-163
    • /
    • 2017
  • The objective of this study is to increase the speed of estimating the weather variables (e.g., minimum/maximum temperature, sunshine hours, and PRISM (Parameter-elevation Regression on Independent Slopes Model) based precipitation) used in the Agrometeorological Early Warning System (http://www.agmet.kr). The current weather estimation process runs on high-performance multi-core CPUs with 8 physical cores and 16 logical threads. Even so, the server cannot be dedicated to handling a single county, so computing the 10 counties of the Seomjin River Basin involves very high overhead. To reduce this overhead, several caching and parallelization techniques were applied and their performance and applicability were measured. The results are as follows: (1) for simple calculations such as Growing Degree Days accumulation, the time required for input and output (I/O) greatly exceeds the computation time, so a technique that reduces disk I/O bottlenecks is needed; (2) when there are many I/O operations, it is advantageous to distribute them across several servers, but each server must keep its own cache of the input data so that the servers do not compete for the same resource; and (3) GPU-based parallel processing is most suitable for models with large computational loads such as PRISM.
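
Findings (1) and (2) amount to caching grid inputs in memory, and the per-county work parallelizes naturally across cores. The Python sketch below illustrates both under stated assumptions: the file paths, the .npy grid format, and the growing-degree-days kernel are placeholders, not the system's actual code.

```python
from functools import lru_cache
from multiprocessing import Pool
import numpy as np

@lru_cache(maxsize=32)
def load_grid(path: str) -> np.ndarray:
    """Read a daily temperature grid once; repeat requests hit the in-memory cache."""
    return np.load(path)                     # hypothetical .npy grid file per day

def county_gdd(paths: tuple, base_c: float = 10.0) -> np.ndarray:
    """Accumulate growing degree days over daily mean-temperature grids."""
    gdd = None
    for p in paths:
        daily = np.clip(load_grid(p) - base_c, 0.0, None)
        gdd = daily if gdd is None else gdd + daily
    return gdd

if __name__ == "__main__":
    # hypothetical inputs: 10 counties, one grid file per day
    county_inputs = [tuple(f"county{c}_day{d}.npy" for d in range(365)) for c in range(10)]
    with Pool(processes=8) as pool:          # 8 physical cores, as in the paper
        results = pool.map(county_gdd, county_inputs)
```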

A Study on the Performance Evaluation of G2B Procurement Process Innovation by Using MAS: Korea G2B KONEPS Case (멀티에이전트시스템(MAS)을 이용한 G2B 조달 프로세스 혁신의 효과평가에 관한 연구 : 나라장터 G2B사례)

  • Seo, Won-Jun;Lee, Dae-Cheor;Lim, Gyoo-Gun
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.157-175
    • /
    • 2012
  • It is difficult to evaluate the performance of process innovation in e-procurement, which involves large-scale and complex processes. Existing methods for measuring the effects of process innovation have mainly been quantitative, statistically analyzing operational data, or qualitative, relying on surveys and interviews. However, these methods are limited because the performance evaluation of e-procurement process innovation should consider the interactions among participants who take part, directly or indirectly, in the processes. This study treats the e-procurement process as a complex system and develops a simulation model based on MAS (Multi-Agent System) to evaluate the effects of e-procurement process innovation. Multi-agent based simulation allows the interaction patterns of objects in a virtual world to be observed through the relationships among objects and their behavioral mechanisms, and agent-based simulation is especially suitable for complex business problems. In this study, we used NetLogo version 4.1.3, a MAS simulation tool developed at Northwestern University. We developed an interaction model of agents in the MAS environment, defining process agents and task agents and assigning their behavioral characteristics. The developed simulation model was applied to the G2B system (KONEPS: Korea ON-line E-Procurement System) of the Public Procurement Service (PPS) in Korea and used to evaluate the innovation effects of the G2B system. KONEPS, launched in 2002, is a successfully established and representative e-procurement system that integrates the characteristics of e-commerce into government procurement activities. KONEPS deserves international recognition considering its annual transaction volume of 56 billion dollars, its daily exchanges of electronic documents, a user base of 121,000 suppliers and 37,000 public organizations, and cost savings of 4.5 billion dollars. For the simulation, we decomposed the e-procurement process of KONEPS into eight sub-processes: 'process 1: search for products and acquisition of proposals', 'process 2: review of contract methods and item features', 'process 3: notice of bid', 'process 4: registration and confirmation of qualification', 'process 5: bidding', 'process 6: screening', 'process 7: contract', and 'process 8: invoice and payment'. For the parameter settings of agent behavior, we collected data from the transactional database of PPS and additional information through a survey. The data used for the simulation were the participants (government organizations, local government organizations, and public institutions), the number of biddings per year, the number of total contracts, the number of shopping mall transactions, the ratio of contracts between bidding and shopping mall, the successful bidding ratio, and the estimated time for each process. The comparison examined the difference in time consumption between 'before the innovation (As-Was)' and 'after the innovation (As-Is)'. The results showed productivity improvements in all eight sub-processes: across the entire process, using the G2B system reduced the average number of task-processing steps by 92.7% and the average task-processing time by 95.4% compared with the conventional method. This study also found that the innovation effect would be further enhanced if the task process related to the contract step were improved. The study demonstrates the usability and potential of MAS in evaluating and modeling process innovation.
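
To make the As-Was / As-Is comparison concrete, here is a toy agent-based sketch in Python (the study itself used NetLogo 4.1.3). The process steps, durations, jitter, and task counts are placeholders, not the KONEPS parameters drawn from the PPS database.

```python
import random

# placeholder per-step mean durations in hours, before and after the innovation
PROCESS_TIMES_AS_WAS = {"notice_of_bid": 48.0, "bidding": 24.0, "contract": 72.0}
PROCESS_TIMES_AS_IS  = {"notice_of_bid": 1.0,  "bidding": 0.5,  "contract": 8.0}

class TaskAgent:
    """One procurement task flowing through the sub-processes."""
    def __init__(self, times: dict):
        self.times = times
    def run(self) -> float:
        # each step's duration jitters +/-20% around its mean
        return sum(t * random.uniform(0.8, 1.2) for t in self.times.values())

def average_time(times: dict, n_tasks: int = 1000) -> float:
    return sum(TaskAgent(times).run() for _ in range(n_tasks)) / n_tasks

if __name__ == "__main__":
    before = average_time(PROCESS_TIMES_AS_WAS)
    after = average_time(PROCESS_TIMES_AS_IS)
    print(f"average task time reduction: {100 * (1 - after / before):.1f}%")
```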

Design and Implementation of a Web Application Firewall with Multi-layered Web Filter (다중 계층 웹 필터를 사용하는 웹 애플리케이션 방화벽의 설계 및 구현)

  • Jang, Sung-Min;Won, Yoo-Hun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.12
    • /
    • pp.157-167
    • /
    • 2009
  • Recently, leaks of confidential and personal information have been taking place on the Internet more frequently than ever before. Most such online security incidents are caused by attacks on vulnerabilities in carelessly developed web applications. Attacks on web applications cannot be detected by existing firewalls and intrusion detection systems, and signature-based detection has limited capability against new threats. Therefore, much research on detecting attacks against web applications employs anomaly-based detection methods that analyze web traffic. Research on anomaly-based detection through normal web traffic analysis focuses on three problems: how to analyze given web traffic accurately, the system performance needed to inspect the application payload of packets in order to detect application-layer attacks, and the maintenance and cost of the many newly installed network security devices. The UTM (Unified Threat Management) system, one suggested solution, aimed to resolve all security problems at once, but it is not widely used due to its low efficiency and high cost. Moreover, the web filter that performs one of the UTM system's functions cannot adequately detect the variety of recent sophisticated attacks on web applications. To resolve these problems, web application firewalls are being studied as a new class of network security system. Because such studies focus on speeding up packet processing with expensive hardware, the cost of deploying a web application firewall is rising, and current anomaly-based detection technologies that do not take the characteristics of the web application into account cause many false positives and false negatives. To reduce false positives and false negatives, this study proposes a real-time anomaly detection method based on analyzing the length of the parameter values contained in web clients' requests. It also designs and proposes a WAF (Web Application Firewall) that can be applied to low-priced or legacy systems to process application data without dedicated hardware, and it suggests a method to resolve the sluggish performance caused by copying packets into the application area for application data processing. Consequently, this study makes it possible to deploy an effective web application firewall at low cost at a time when deploying an additional security system is considered burdensome because of the many network security systems already in use.
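
The detection idea above, profiling the typical length of each request parameter and flagging outliers, can be sketched in a few lines of Python. The per-parameter z-score and the cutoff of 3 are illustrative choices, not the paper's exact statistic.

```python
from collections import defaultdict
from statistics import mean, stdev

class LengthProfile:
    """Learn typical parameter-value lengths from clean traffic, flag outliers."""
    def __init__(self):
        self.samples = defaultdict(list)          # (path, param) -> observed lengths

    def train(self, path: str, param: str, value: str) -> None:
        self.samples[(path, param)].append(len(value))

    def is_anomalous(self, path: str, param: str, value: str, z_cut: float = 3.0) -> bool:
        obs = self.samples.get((path, param))
        if not obs or len(obs) < 2:
            return False                          # unknown parameter: defer judgment
        mu, sigma = mean(obs), stdev(obs)
        if sigma == 0:
            return len(value) != mu
        return abs(len(value) - mu) / sigma > z_cut

profile = LengthProfile()
for v in ["kim", "lee", "park"]:                  # toy training traffic
    profile.train("/login", "user", v)
print(profile.is_anomalous("/login", "user", "a" * 500))   # True: injection-like length
```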

Customer Behavior Prediction of Binary Classification Model Using Unstructured Information and Convolution Neural Network: The Case of Online Storefront (비정형 정보와 CNN 기법을 활용한 이진 분류 모델의 고객 행태 예측: 전자상거래 사례를 중심으로)

  • Kim, Seungsoo;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.221-241
    • /
    • 2018
  • Deep learning has been getting attention recently. The deep learning technique applied in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) and in AlphaGo is the Convolution Neural Network (CNN). A CNN is characterized by dividing the input image into small sections, recognizing their partial features, and combining them to recognize the whole. Deep learning technologies are expected to bring many changes to our lives, but until now their applications have been limited to image recognition and natural language processing, and the use of deep learning techniques for business problems is still at an early research stage. If their performance is proven, they can be applied to traditional business problems such as marketing response prediction, fraud detection, bankruptcy prediction, and so on. It is therefore a meaningful experiment to assess the possibility of solving business problems with deep learning technologies in the setting of online shopping companies, which have big data, relatively easily identified customer behavior, and high utilization value. In online shopping companies especially, the competitive environment is changing rapidly and becoming more intense, so analyzing customer behavior to maximize profit is increasingly important. In this study, we propose a 'CNN model of heterogeneous information integration' as a way to improve the prediction of customer behavior in online shopping enterprises. The model learns with a convolution neural network on top of a multi-layer perceptron structure by combining structured and unstructured information; to optimize its performance, we examine 'heterogeneous information integration', 'unstructured information vector conversion', and 'multi-layer perceptron design', evaluate the performance of each architecture, and confirm the proposed model based on the results. The target variables for predicting customer behavior are defined as six binary classification problems: re-purchaser, churn, frequent shopper, frequent refund shopper, high-amount shopper, and high-discount shopper. To verify the usefulness of the proposed model, we conducted experiments using actual transaction, customer, and VOC (voice of customer) data from a specific online shopping company in Korea. The data extraction criteria cover 47,947 customers who registered at least one VOC in January 2011 (one month); we use these customers' profiles, a total of 19 months of trading data from September 2010 to March 2012, and the VOCs posted during that month. The experiment proceeds in two stages. In the first stage, we evaluate the three architectural choices that affect the performance of the proposed model and select optimal parameters; in the second, we evaluate the performance of the proposed model itself. Experimental results show that the proposed model, which combines structured and unstructured information, is superior to NBC (Naïve Bayes classification), SVM (Support vector machine), and ANN (Artificial neural network). It is therefore significant that the use of unstructured information contributes to predicting customer behavior and that CNNs can be applied to business problems as well as to image recognition and natural language processing. The experiments confirm that the CNN is effective in understanding and interpreting the meaning of context in textual VOC data, and this empirical research on actual e-commerce data shows that very meaningful information for predicting customer behavior can be extracted from VOC data written as free text directly by customers. Finally, the various experiments provide useful information for future research on parameter selection and model performance.
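
A minimal sketch of the heterogeneous-integration idea is shown below: a 1-D CNN over tokenized VOC text, concatenated with structured transaction features, feeding one binary output (churn here). The layer sizes, vocabulary, and sequence length are illustrative assumptions, not the paper's tuned architecture.

```python
from tensorflow.keras import layers, Model

VOCAB, SEQ_LEN, N_STRUCT = 20_000, 200, 30        # assumed dimensions

# unstructured branch: VOC text as token ids -> embedding -> 1-D convolution
text_in = layers.Input(shape=(SEQ_LEN,), name="voc_tokens")
x = layers.Embedding(VOCAB, 64)(text_in)
x = layers.Conv1D(128, 5, activation="relu")(x)   # local n-gram features
x = layers.GlobalMaxPooling1D()(x)

# structured branch: numeric customer/transaction features
struct_in = layers.Input(shape=(N_STRUCT,), name="transactions")

# heterogeneous information integration: concatenate both modalities
h = layers.concatenate([x, struct_in])
h = layers.Dense(64, activation="relu")(h)
out = layers.Dense(1, activation="sigmoid", name="churn")(h)

model = Model([text_in, struct_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
```

One such network would be trained per target variable, since the paper frames all six behaviors as separate binary classification problems.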

Airborne Hyperspectral Imagery availability to estimate inland water quality parameter (수질 매개변수 추정에 있어서 항공 초분광영상의 가용성 고찰)

  • Kim, Tae-Woo;Shin, Han-Sup;Suh, Yong-Cheol
    • Korean Journal of Remote Sensing
    • /
    • v.30 no.1
    • /
    • pp.61-73
    • /
    • 2014
  • This study reviewed the application of Airborne Hyperspectral Imagery (A-HSI) to water quality estimation and tested the estimation of one Han River water quality parameter (suspended solids) against available in-situ data. Water quality can be estimated in two ways: one uses observation data such as the downwelling radiance at the water surface and the scattering and reflectance within the water body; the other is linear regression between in-situ water quality measurements and upwelling data such as at-sensor radiance (or reflectance). Both methods yield meaningful remote-sensing estimates, but the results depend strongly on the auxiliary datasets, namely the in-situ water quality and water-body scattering measurements. The test covered a section of the Han River downstream of Paldang Dam, applying linear regression between AISA Eagle hyperspectral sensor data and in-situ water quality measurements. The best band fit gave $SS = -24.847 + 0.013L_{560}$, where $L_{560}$ is the radiance at 560 nm, with an R-square of 0.985. For comparison with Multispectral Imagery (MSI), we simulated Landsat TM by spectral resampling; the regression using MSI gave $SS = -55.932 + 33.881(TM1/TM3)$ in radiance, with an R-square of 0.968. The suspended solids (SS) concentration was about 3.75 mg/l in the in-situ data; the SS concentration estimated from A-HSI was about 3.65 mg/l, while that estimated from MSI at the same location was about 5.85 mg/l, showing a tendency toward overestimation when MSI is used. To raise the practical value of the approach and estimate more precisely, it is necessary to minimize sun glint across the whole image, to construct an elaborate flight plan considering the solar altitude angle, and to build a good pre-processing and calibration system. Through the literature review and the test of commonly adopted methods, we found limitations and restrictions concerning precise atmospheric correction, the number of water quality measurement samples, the selection of spectral bands from the A-HSI, the choice of an adequate linear regression model, and quantitative calibration/validation methods.
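
The band-wise fit above is ordinary least-squares regression of SS against radiance in one band. The Python sketch below shows the procedure; the arrays are dummy placeholders, not the paper's in-situ measurements or AISA radiances.

```python
import numpy as np

L560 = np.array([2100.0, 2250.0, 2400.0, 2600.0])   # radiance at 560 nm (dummy values)
SS   = np.array([2.5, 4.1, 6.0, 8.9])               # suspended solids, mg/l (dummy values)

b, a = np.polyfit(L560, SS, deg=1)                   # fit SS ~ a + b * L560
pred = a + b * L560
r2 = 1 - np.sum((SS - pred) ** 2) / np.sum((SS - SS.mean()) ** 2)
print(f"SS = {a:.3f} + {b:.4f} * L560, R^2 = {r2:.3f}")
# the paper's reported fit, for comparison: SS = -24.847 + 0.013 * L560 (R^2 = 0.985)
```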

Study on the Methodology of the Microbial Risk Assessment in Food (식품중 미생물 위해성평가 방법론 연구)

  • 이효민;최시내;윤은경;한지연;김창민;김길생
    • Journal of Food Hygiene and Safety
    • /
    • v.14 no.4
    • /
    • pp.319-326
    • /
    • 1999
  • Recently, concern has been rising continuously about the health risks posed by microorganisms in food, such as Escherichia coli O157:H7 and Listeria monocytogenes. Various organizations and regulatory agencies, including the U.S. EPA, USDA, and FAO/WHO, are building methodologies to apply microbial quantitative risk assessment to risk-based food safety programs. Microbial risks are primarily the result of a single exposure, and their health impacts are immediate and serious; the methodology of risk assessment therefore differs from that of chemical risk assessment. Microbial quantitative risk assessment consists of four steps: hazard identification, exposure assessment, dose-response assessment, and risk characterization. Hazard identification is accomplished by observing and defining the types of adverse health effects in humans associated with exposure to foodborne agents; epidemiological evidence linking particular diseases to particular exposure routes is an important component of this identification. Exposure assessment includes quantifying microbial exposure with regard to the dynamics of microbial growth during food processing, transport, and packaging and under the specific time-temperature conditions at various points from animal production to consumption. Dose-response assessment characterizes the correlation between microbial exposure and disease incidence; unlike that for chemical carcinogens, dose-response assessment for microbial pathogens has not relied on animal models extrapolated to humans. Risk characterization links the exposure and dose-response assessments and involves uncertainty analysis. Methods of microbial dose-response assessment are classified into non-threshold and threshold approaches. Non-threshold models assume that a single organism is capable of producing an infection if it arrives at an appropriate site and that organisms act independently; the Exponential, Beta-Poisson, Gompertz, and Gamma-Weibull models are currently used as non-threshold models. The Log-normal and Log-logistic models are used as threshold models; the threshold approach assumes that a toxic effect is produced by the interaction of organisms. In this study, we reviewed the detailed process of deriving risk values from model parameters and microbial exposure doses, and suggested a methodology for applying models in exposure assessment using assumed food microbial data (NaCl, water activity, temperature, pH, etc.) and the commercially used Food MicroModel. We recognized that data from healthy human volunteers are preferred over epidemiological data for obtaining exact dose-response relationships, while foreign agencies are studying the correlation between humans and animals. To compare differences in population sensitivity, domestic studies must be carried out, such as establishing dose-response data for Korean volunteers for each microorganism and assessing microbial exposure in food.
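
Two of the non-threshold dose-response forms named above, the Exponential and the (approximate) Beta-Poisson, are standard in quantitative microbial risk assessment and are easy to state in code. The parameter values below are illustrative, not pathogen-specific estimates.

```python
import numpy as np

def exponential_model(dose: np.ndarray, r: float) -> np.ndarray:
    """P(infection) = 1 - exp(-r * dose); each organism acts independently."""
    return 1.0 - np.exp(-r * dose)

def beta_poisson_model(dose: np.ndarray, alpha: float, beta: float) -> np.ndarray:
    """Approximate Beta-Poisson: P = 1 - (1 + dose/beta) ** (-alpha)."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

doses = np.logspace(0, 6, 7)                      # 1 to 1e6 organisms ingested
print(exponential_model(doses, r=1e-4))           # r is an assumed per-organism risk
print(beta_poisson_model(doses, alpha=0.25, beta=1e4))
```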


Preparation of Semi-Solid Apple-Based Baby Food (반고형 사과 이유보충식의 제조)

  • Sohn, Kyung-Hee;Kim, Mi-Ran;Yim, Sung-Kyoung;Park, Hyun-Kyung;Park, Ok-Jin
    • Korean Journal of Food Science and Technology
    • /
    • v.34 no.1
    • /
    • pp.43-50
    • /
    • 2002
  • To develop a commercial semi-solid apple baby food, the physicochemical characteristics of apple puree prepared by different methods and the effect of the method of adding ascorbic acid on the browning reaction were investigated. The preparation methods were classified into three groups by initial heat treatment: no heating (A), steaming at $120^{\circ}C$ (B), and blanching at $100^{\circ}C$ (C). The viscosity of the tested apple purees was $2,600{\sim}5,856\;cp$, and the contents of anhydrogalacturonic acid (AGA) and neutral sugar ranged over $4.15{\sim}11.92\;mg%$ and $6.18{\sim}10.65\;mg%$, respectively. Among the free sugars tested, fructose was the highest $(5.43{\sim}8.87%)$, followed by glucose $(2.11{\sim}4.23%)$ and sucrose $(1.64{\sim}2.94%)$, in that order. Since only small amounts of ascorbic acid were detected $(1.54{\sim}1.83\;mg%)$, it appears to have been lost in the heating steps of puree preparation. Apple puree A had lower lightness and higher redness than purees B and C; its degree of browning was so high that sodium ascorbate was added as an antibrowning agent, and the browned puree scored low in sensory evaluation and nutrient quality. The methods of adding ascorbic acid were classified into four groups by the time of addition: dipping (1), blending (2), heating (3), and blending + heating (4). Considering the color and preference evaluations, preparation method B with adding method 2 showed the highest inhibitory activity against browning and a desirable color for a retort baby food. After retort sterilization, the viscosity of the apple baby food decreased from 3,477 cp to 2,294 cp, thiamin was destroyed completely, and the contents of riboflavin and ascorbic acid decreased by 41% and 21%, respectively. However, the contents of free sugars and free amino acids and the sensory parameters were not influenced by retort sterilization. Overall, preparation method B with adding method 2 was a good processing condition for the retort apple baby food.

The Study of New Reconstruction Method for Brain SPECT on Dual Detector System (Dual detector system에서 Brain SPECT의 new reconstruction method의 연구)

  • Lee, Hyung-Jin;Kim, Su-Mi;Lee, Hong-Jae;Kim, Jin-Eui;Kim, Hyun-Joo
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.13 no.1
    • /
    • pp.57-62
    • /
    • 2009
  • Purpose: Brain SPECT studies are more sensitive to motion than other studies. In particular, when the 1-day subtraction method is applied to Diamox SPECT, a shorter study time is needed to avoid re-examination. We needed new study conditions and a new analysis method on a dual-detector system because the triple-head camera at Seoul National University Hospital was to be retired. We therefore tried to improve image quality and to give the dual-head camera a study time equivalent to the triple-head camera by using a new reconstruction program. Materials and Methods: Using an IEC phantom, we evaluated contrast, SNR, and FWHM. For a Hoffman 3D brain phantom, which resembles a real brain, we assumed that 5% of the injected dose was distributed in brain tissue. To compare with the existing FBP method, we used a fan-beam collimator, and we acquired each SPECT study at 15 sec and 25 sec per frame using LEHR and LEUHR collimators. We used the OSEM2D and Onco-Flash3D reconstruction methods and compared reconstructions with and without 5 mm Gaussian post-filtering. Attenuation correction was applied manually. Based on the phantom results, we then performed brain SPECT on a patient injected with 15 mCi of $^{99m}Tc$-HMPAO. Finally, technologists, MDs, and PhDs evaluated the results. Results: The IEC phantom study showed that reconstruction with Flash3D is better than the existing FBP and OSEM2D. According to our evaluation, Flash3D needs 5 mm post-filtering at both 15 sec and 25 sec per frame, and 8 subsets with 8 iterations are appropriate for Flash3D. OSEM2D also needs post-filtering, with 4 subsets and 8 iterations appropriate for 15 sec and 8 subsets and 12 iterations for 25 sec. Considering the injected dose per patient and the study time, the combination of 15 sec/frame acquisition, an LEHR collimator, the Flash3D program with 8 subsets and 8 iterations, and 5 mm Gaussian post-filtering is the most appropriate. On the other hand, the LEUHR collimator was not appropriate for the 1-day subtraction Diamox study because of its lower sensitivity. Conclusions: We showed that the dual-head camera can achieve the same short-study-time advantage as the triple-head gamma camera and obtained good results in moving from the existing fan-beam collimator to a parallel collimator. In addition, the resolution and contrast of the new method were better than those of FBP, and it can improve the sensitivity and accuracy of the image because it involves less subjective input than the Metz filter used with FBP. We expect better image quality and shorter study times for brain SPECT on dual-detector systems.
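
The subset and iteration counts discussed above are the parameters of ordered-subsets expectation maximization (OSEM), the family of algorithms behind OSEM2D and Flash3D. Below is a toy Python sketch of the OSEM update; a real SPECT reconstruction uses a physical projector with attenuation and resolution modeling, whereas here the system matrix A is a random stand-in.

```python
import numpy as np

def osem(A: np.ndarray, proj: np.ndarray, n_subsets: int = 8, n_iters: int = 8) -> np.ndarray:
    """Ordered-subsets EM: multiplicative update over subsets of projection bins."""
    n_bins, n_vox = A.shape
    x = np.ones(n_vox)                                   # flat initial image
    subsets = np.array_split(np.arange(n_bins), n_subsets)
    for _ in range(n_iters):
        for idx in subsets:
            As = A[idx]
            ratio = proj[idx] / np.maximum(As @ x, 1e-12)      # measured / forward-projected
            x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
    return x

rng = np.random.default_rng(0)
A = rng.random((64, 16))            # stand-in projector: 64 bins, 16 voxels
truth = rng.random(16)
recon = osem(A, A @ truth)          # noiseless projections for the demo
```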
