• Title/Summary/Keyword: Speed Improvement Method


Development of deep learning network based low-quality image enhancement techniques for improving foreign object detection performance (이물 객체 탐지 성능 개선을 위한 딥러닝 네트워크 기반 저품질 영상 개선 기법 개발)

  • Ki-Yeol Eom;Byeong-Seok Min
    • Journal of Internet Computing and Services / v.25 no.1 / pp.99-107 / 2024
  • Along with economic growth and industrial development, there is increasing demand for the production of various electronic components and devices such as semiconductors, SMT components, and electric battery products. However, these products may contain foreign substances introduced during the manufacturing process, such as iron, aluminum, and plastic, which can lead to serious problems or malfunction of the product, or to fires in electric vehicles. To address this, it is necessary to determine whether foreign materials are present inside the product, and many tests have been performed by means of non-destructive testing methods such as ultrasound or X-ray. Nevertheless, there are technical challenges and limitations in acquiring X-ray images and determining the presence of foreign materials. In particular, small or low-density foreign materials may not be visible even with X-ray equipment, and noise can also make it difficult to detect foreign objects. Moreover, to meet manufacturing speed requirements, the X-ray acquisition time should be reduced, which can result in a very low signal-to-noise ratio (SNR) and lower foreign-material detection accuracy. Therefore, in this paper, we propose a five-step approach to overcome the limitations of low-quality images that make it challenging to detect foreign substances. First, the global contrast of the X-ray images is increased through histogram stretching. Second, to strengthen high-frequency signals and local contrast, a local contrast enhancement technique is applied. Third, to improve edge clearness, unsharp masking is applied to enhance edges and make objects more visible. Fourth, a super-resolution method based on the Residual Dense Block (RDB) is used for noise reduction and image enhancement. Last, the YOLOv5 algorithm is employed to train on and detect foreign objects. Using the proposed method, experimental results show an improvement of more than 10% in performance metrics such as precision compared to the original low-quality images.
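
The first three steps of this pipeline are standard image-processing operations. Below is a minimal sketch of that preprocessing chain in Python with OpenCV, assuming CLAHE as the local contrast enhancement and generic parameter values; it is illustrative only, not the authors' implementation, and the RDB super-resolution and YOLOv5 stages are only noted in comments.

```python
import cv2
import numpy as np

def enhance_xray(img_gray: np.ndarray) -> np.ndarray:
    """Illustrative preprocessing chain: global histogram stretching,
    local contrast enhancement (CLAHE as a stand-in), and unsharp masking.
    Parameter values are assumptions, not those used in the paper."""
    # 1. Global contrast: linear histogram stretching to the full 8-bit range.
    lo, hi = np.percentile(img_gray, (1, 99))
    stretched = np.clip((img_gray.astype(np.float32) - lo) * 255.0 / max(hi - lo, 1e-6),
                        0, 255).astype(np.uint8)

    # 2. Local contrast: CLAHE strengthens high-frequency detail region by region.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    local = clahe.apply(stretched)

    # 3. Edge clearness: unsharp masking = original + amount * (original - blurred).
    blurred = cv2.GaussianBlur(local, (0, 0), sigmaX=2.0)
    sharpened = cv2.addWeighted(local, 1.5, blurred, -0.5, 0)
    return sharpened

# The enhanced image would then go to a super-resolution / denoising network
# (RDB-based in the paper) and finally to a YOLOv5 detector.
```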

Pipetting Stability and Improvement Test of the Robotic Liquid Handling System Depending on Types of Liquid (용액에 따른 자동분주기의 분주능력 평가와 분주력 향상 실험)

  • Back, Hyangmi;Kim, Youngsan;Yun, Sunhee;Heo, Uisung;Kim, Hosin;Ryu, Hyeonggi;Lee, Guiwon
    • The Korean Journal of Nuclear Medicine Technology / v.20 no.2 / pp.62-68 / 2016
  • Purpose: In a cyclosporine assay using a robotic liquid handling system, we found a deviation in its standard curve and low reproducibility of patients' results. What distinguishes this test is that methanol is mixed with the samples and the extracts are used for the assay, so we assumed that the abnormal results came from the use of methanol and conducted this study. The manual of the robotic liquid handling system states that several setting parameters can be chosen depending on the viscosity of the liquid, the size of the sampling tips, and the motor speeds, but it gives no exact guidance. This study was undertaken to evaluate pipetting ability depending on the type of liquid and to investigate the setting parameters that give optimum dispensing performance. Materials and Methods: Four types of liquid (water, serum, methanol, 25% PEG 6000) and TSH ¹²⁵I tracer (515 kBq) were used to evaluate pipetting ability, and 29 specimens for the cyclosporine test were used to compare results. Eight plastic tubes were prepared for each liquid; 400 μL of each liquid was dispensed into the 8 tubes with a multipipette, and 100 μL of TSH ¹²⁵I tracer was added to every tube. From the prepared samples, 100 μL was dispensed with the robotic liquid handling system, counted, and the CV (%) was calculated for each liquid type. Then, by adjusting several setting parameters (air gap, dispense time, delay time), the change in CV (%) was calculated to find the optimum settings. The 29 specimens were tested with three methods: (A) the manual method, (B) the robotic liquid handling system with the existing parameters, and (C) the robotic liquid handling system with the adjusted parameters. Pipetting ability for each liquid type was assessed with the CV (%). Taking (A) as the reference, the patients' results were compared between (A) and (B) and between (A) and (C), and assessed with %RE (% relative error) and %Diff (% difference). Results: The CV (%) of the CPM by liquid type was 0.88 for water, 0.95 for serum, 10.22 for methanol, and 0.68 for PEG. As expected, dispensing methanol with the liquid handling system was the problem, while the other liquids were acceptable. Methanol dispensing was then examined while adjusting the setting parameters. When the transport air gap of 0 was adjusted to 2 and 5, the CV (%) was 20.16 and 12.54; when the system air gap of 0 was adjusted to 2 and 5, the CV (%) was 8.94 and 1.36. With system air gap 2 and transport air gap 2, the CV (%) was 12.96, and with system air gap 5 and transport air gap 5, it was 1.33. When the dispense speed was adjusted from 300 to 100, the CV (%) was 13.32, and when the dispense delay was adjusted from 200 to 100, it was 13.55. Compared with (A), the results of (B) increased by 99.44% and the %RE was 93.59%. Compared with (A), the results of (C), with the system air gap adjusted from 0 to 5, increased by 6.75% and the %RE was 5.10%. Conclusion: Adjusting the speed and delay time of aspiration and dispensing was ineffective, but changing the system air gap was effective. By adjusting several parameters, proper values were found, and this affected the practical results of the experiment. Active efforts through testing are needed to optimize the system, and when dispensing a new type of liquid, a proper test is required to check whether the liquid is suitable for the equipment.
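
The pipetting ability above is summarized with the coefficient of variation of the counted CPM values. A short sketch of that calculation, using made-up replicate counts rather than the study's data:

```python
import numpy as np

def cv_percent(counts):
    """Coefficient of variation (%) of replicate CPM measurements:
    100 * sample standard deviation / mean."""
    counts = np.asarray(counts, dtype=float)
    return 100.0 * counts.std(ddof=1) / counts.mean()

# Illustrative replicate counts (CPM) for one liquid type; not the study's data.
methanol_cpm = [5120, 4380, 6100, 5010, 3990, 5560, 4720, 6230]
print(f"CV% = {cv_percent(methanol_cpm):.2f}")
```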


A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.57-73 / 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure is becoming important. System monitoring data is multidimensional time-series data, and handling it requires considering both the characteristics of multidimensional data and those of time-series data. For multidimensional data, correlations between variables should be considered; existing probability-based, linear, and distance-based methods degrade because of the curse of dimensionality. Time-series data is usually preprocessed with sliding windows and time-series decomposition for autocorrelation analysis, but these techniques increase the dimensionality of the data and therefore need to be supplemented. Anomaly detection is an old research field: statistical methods and regression analysis were used in the early days, and there are now active studies applying machine learning and artificial neural networks. Statistically based methods are difficult to apply when the data is non-homogeneous and do not detect local outliers well. Regression-based methods learn a regression formula based on parametric statistics and detect anomalies by comparing predicted and actual values; their performance drops when the model is weak or when the data contains noise or outliers, and they are restricted to training on data free of noise and outliers. An autoencoder built with artificial neural networks is trained to reproduce its input as closely as possible. It has many advantages over probability and linear models, cluster analysis, and supervised learning: it can be applied to data that does not satisfy probability-distribution or linearity assumptions, and it can be trained without labeled data. However, it is limited in identifying local outliers in multidimensional data, and the dimensionality of the data grows greatly because of the time-series preprocessing. In this study, we propose a Conditional Multimodal Autoencoder (CMAE) that improves anomaly detection performance by considering local outliers and time-series characteristics. First, a Multimodal Autoencoder (MAE) is applied to address the limitation in identifying local outliers of multidimensional data. Multimodal architectures are commonly used to learn different types of inputs, such as voice and images; the different modals share the autoencoder's bottleneck and thereby learn correlations. In addition, a Conditional Autoencoder (CAE) is used to learn the characteristics of time-series data effectively without increasing the dimensionality of the data. Conditional inputs are usually categorical variables, but in this study time is used as the condition so that periodicity can be learned. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The reconstruction performance for 41 variables was examined in the proposed model and the comparison models. Reconstruction performance differs by variable; reconstruction works well for the Memory, Disk, and Network modals in all three autoencoder models, with small loss values. The Process modal showed no significant difference across the three models, and the CPU modal showed the best performance in CMAE. ROC curves were prepared to evaluate anomaly detection performance, and AUC, accuracy, precision, recall, and F1-score were compared. In all indicators, performance ranked in the order CMAE, MAE, UAE. In particular, the recall of CMAE was 0.9828, confirming that it detects almost all anomalies. The accuracy of the model also improved to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has advantages beyond the performance improvement: techniques such as time-series decomposition and sliding windows require managing additional procedures, and the dimensional increase they cause can slow inference, whereas the proposed model is easy to apply in practice in terms of inference speed and model management.
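
A minimal PyTorch sketch of the CMAE idea described above: each modal (e.g. the CPU, Memory, Disk, and Network metric groups) gets its own encoder and decoder, all modals share one bottleneck, and a time encoding is concatenated as the condition. Layer sizes, the condition encoding, and the anomaly-scoring rule are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CMAE(nn.Module):
    """Sketch of a Conditional Multimodal Autoencoder: per-modal encoders and
    decoders, a shared bottleneck, and a time condition concatenated to the
    bottleneck input and output. Dimensions are illustrative."""
    def __init__(self, modal_dims, cond_dim=2, hidden=16, bottleneck=8):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, hidden), nn.ReLU()) for d in modal_dims])
        self.bottleneck = nn.Sequential(
            nn.Linear(hidden * len(modal_dims) + cond_dim, bottleneck), nn.ReLU())
        self.expand = nn.Sequential(
            nn.Linear(bottleneck + cond_dim, hidden * len(modal_dims)), nn.ReLU())
        self.decoders = nn.ModuleList([nn.Linear(hidden, d) for d in modal_dims])
        self.hidden = hidden

    def forward(self, modals, cond):
        # modals: list of (batch, dim_i) tensors; cond: (batch, cond_dim) time encoding.
        h = torch.cat([enc(x) for enc, x in zip(self.encoders, modals)], dim=1)
        z = self.bottleneck(torch.cat([h, cond], dim=1))
        h_hat = self.expand(torch.cat([z, cond], dim=1))
        chunks = h_hat.split(self.hidden, dim=1)
        return [dec(c) for dec, c in zip(self.decoders, chunks)]

# Anomaly score: reconstruction error summed over modals; a sample whose error
# exceeds a threshold chosen on validation data is flagged as anomalous.
```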

Development of Traffic Accident Frequency Prediction Model by Administrative Zone - A Case of Seoul (소규모 지역단위 교통사고예측모형 개발 - 서울시 행정동을 대상으로)

  • Hong, Ji Yeon;Lee, Soo Beom;Kim, Jeong Hyun
    • KSCE Journal of Civil and Environmental Engineering Research / v.35 no.6 / pp.1297-1308 / 2015
  • In Korea, local traffic safety master plans have been established and implemented according to the Traffic Safety Act. Each local government is required to establish a customized traffic safety policy and to share roles for the improvement of traffic safety, which means that local governments should lead and promote effective traffic safety policies suited to local circumstances. Implementing efficient traffic safety policies that reflect the many-sided characteristics of local governments requires community-based traffic accident prediction that considers local characteristics and an analysis of accident influence factors, but research on this is lacking. Because of limits in the collected data, most existing studies on community-based traffic accident prediction used social and economic variables related to accident-exposure environments at the national or city level, which limited the applicability of the developed models to the actual reduction of traffic accidents. Thus, this study developed a local traffic accident prediction model based on a smaller regional unit, the administrative district (dong), which existing studies had not addressed, and suggested a method of reflecting in the model, in addition to social and economic variables related to accident-exposure environments, traffic safety facility and policy variables that policy makers can control, and of applying them to the development of local traffic safety policies. The model development results showed that, among accident-exposure variables, road length, gross floor area of buildings, the ratio of bus lane installation, and the numbers of intersections and crosswalks had a positive relationship with accidents, while the ratio of crosswalk sign installation, the number of speed bumps, and the amount of police enforcement had a negative relationship with accidents.
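
The abstract does not state the model's functional form. As a heavily hedged illustration, area-level accident frequencies are often modeled with negative binomial count regression; the sketch below uses synthetic data and hypothetical variable names only to show how positive and negative coefficients of such a model would be read.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic district-level data; column names are illustrative and not from the paper.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "road_length": rng.gamma(5, 2, n),    # road length in the district
    "floor_area": rng.gamma(8, 3, n),     # gross floor area of buildings
    "crosswalks": rng.poisson(30, n),
    "speed_bumps": rng.poisson(15, n),
})
lam = np.exp(0.5 + 0.05 * df.road_length + 0.02 * df.crosswalks - 0.03 * df.speed_bumps)
df["accidents"] = rng.poisson(lam)

# Negative binomial GLM: a common specification for area-level accident counts.
model = smf.glm("accidents ~ road_length + floor_area + crosswalks + speed_bumps",
                data=df, family=sm.families.NegativeBinomial()).fit()
print(model.summary())
# Positive coefficients correspond to variables associated with more accidents,
# negative coefficients (e.g. speed bumps here) with fewer.
```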

Noise-robust electrocardiogram R-peak detection with adaptive filter and variable threshold (적응형 필터와 가변 임계값을 적용하여 잡음에 강인한 심전도 R-피크 검출)

  • Rahman, MD Saifur;Choi, Chul-Hyung;Kim, Si-Kyung;Park, In-Deok;Kim, Young-Pil
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.12 / pp.126-134 / 2017
  • There have been numerous studies on extracting the R-peak from electrocardiogram (ECG) signals. However, most detection methods are complicated to implement in a real-time portable electrocardiograph and have the disadvantage of requiring a large amount of computation. R-peak detection requires pre- and post-processing to handle baseline drift and powerline noise in the ECG data. Adaptive filter techniques are widely used for R-peak detection, but the R-peak cannot be detected when the input is lower than the threshold value; moreover, noise can lead to an erroneous threshold, causing P-peaks and T-peaks to be detected instead. We propose a robust R-peak detection algorithm with low complexity and simple computation to solve these problems. The proposed scheme removes baseline drift from the ECG signal with an adaptive filter to address the problems of threshold extraction, and automatically derives an appropriate threshold from the minimum and maximum values of the filtered ECG signal. To detect the R-peak, we propose a threshold neighborhood search technique. Experiments confirmed the improved R-peak detection accuracy of the proposed method and showed a detection speed suitable for a mobile system thanks to the reduced amount of computation. The experimental results show that heart rate detection accuracy and sensitivity were very high (about 100%).
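
A minimal sketch of the variable-threshold idea: a band-pass filter stands in for the adaptive filtering stage, a threshold is derived from the minimum and maximum of the filtered signal, and the neighborhood of each threshold crossing is searched for the local maximum. The filter band, threshold fraction, and window lengths are assumptions, not the paper's values.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detect_r_peaks(ecg, fs, frac=0.6, search_ms=100):
    """Illustrative R-peak detector with a variable threshold and a
    neighborhood search around each threshold crossing."""
    # Band-pass 5-15 Hz removes baseline drift and emphasizes the QRS complex
    # (a stand-in for the paper's adaptive filter).
    b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)

    # Variable threshold between the filtered signal's min and max.
    threshold = filtered.min() + frac * (filtered.max() - filtered.min())

    # Rising threshold crossings, then the local maximum in a window around each.
    crossings = np.where((filtered[1:] >= threshold) & (filtered[:-1] < threshold))[0]
    half = int(search_ms / 1000 * fs)
    peaks = []
    for c in crossings:
        lo, hi = max(c - half, 0), min(c + half, len(filtered))
        peaks.append(lo + int(np.argmax(filtered[lo:hi])))

    # Merge duplicates closer than a ~200 ms refractory period.
    refractory = int(0.2 * fs)
    merged = []
    for p in sorted(set(peaks)):
        if not merged or p - merged[-1] > refractory:
            merged.append(p)
    return np.array(merged)
```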

Herbicidal Phytotoxicity under Adverse Environments and Countermeasures (불량환경하(不良環境下)에서의 제초제(除草劑) 약해(藥害)와 경감기술(輕減技術))

  • Kwon, Y.W.;Hwang, H.S.;Kang, B.H.
    • Korean Journal of Weed Science / v.13 no.4 / pp.210-233 / 1993
  • Herbicides have become as indispensable as nitrogen fertilizer in Korean agriculture since 1970. It is estimated that in 1991 more than 40 herbicides were registered for rice and treated over an area 1.41 times the rice acreage; more than 30 herbicides were registered for field crops and treated over 89% of the crop area; and the treatment acreage of 3 non-selective foliar-applied herbicides reached 2,555 thousand hectares. Over the last 25 years herbicides have benefited Korean farmers substantially in labor, cost, and time of farming. Any herbicide that causes crop injury under ordinary use is not allowed to be registered in most countries. Herbicides, however, can cause crop injury when they are misused, abused, or used under adverse environments. Herbicide use exceeding 100% of the crop acreage implies an increased probability that herbicides are used incorrectly or under adverse conditions. This is evidenced by the authors' nationwide surveys in 1992 and 1993, in which about 25% of farmers had experienced herbicide-caused crop injury more than once during the last 10 years; one-half of the injury incidences involved a crop yield loss greater than 10%. Crop injury caused by herbicides had not occurred to a serious extent in the 1960s, when fewer than 5 herbicides were used by farmers on less than 12% of the total acreage. Farmers ascribed about 53% of the herbicidal injury incidences in their fields to their own misuse, such as overdose, careless or improper application, off-time application, or wrong choice of herbicide, while 47% of the incidences were mainly due to adverse natural conditions. Such misuse can be reduced to a minimum through enhanced education and extension services for correct use and, although undesirable, through farmers' increasing experience of phytotoxicity. The most difficult primary problem arises from the lack of countermeasures for farmers to cope with various adverse environmental conditions. At present almost all herbicides carry "Do not use!" instructions on the label to avoid crop injury under adverse environments. These "Do not use!" situations include sandy, highly percolating, or infertile soils; paddies fed by cool gushing water; poorly draining paddies; terraced paddies; soils that are too wet or too dry; days of abnormally cool or high air temperature; etc. Meanwhile, the cultivated lands are in poor condition: the average organic matter content ranges from 2.5 to 2.8% in paddy soil and 2.0 to 2.6% in upland soil; the cation exchange capacity ranges from 8 to 12 meq; approximately 43% of paddies and 56% of uplands are sandy to sandy-gravel soil; and only 42% of paddy and 16% of upland fields are on flat land. This situation means that about 40 to 50% of soil-applied herbicides are used on fields where the label instructs "Do not use!". Yet no positive effort has been made for 25 years by the government or by companies to develop countermeasures. It is a genuinely complicated social problem. In the 1960s and 1970s a subsidy program to incorporate hillside red clayish soil into sandy paddies, as well as a campaign for increased application of compost to the fields, had been operating; yet the majority of the sandy soils remain sandy, and the program and campaign have been stopped. With regard to this sandy-soil problem the authors have developed a method of split application of a herbicide onto sandy-soil fields; a model case study has been carried out with success and is introduced with its key procedure in this paper. Climate is variable by nature, and among climatic components a sudden fall or rise in temperature is hardly avoidable for a crop plant. Spring air temperatures fluctuate greatly; for example, the daily mean air temperature of Inchon city on April 20, an early seeding time for crops, varied from 6.31 to 16.81°C within a 2 SD range of the 30-year records. Seeding early in the season means an increased liability to phytotoxicity, and this will be more evident in direct water-seeding of rice. About 20% of farmers depend on cold underground water pumped for rice irrigation. If the well is deeper than 70 m, the fresh water may be as cold as about 10°C; it should be warmed to about 20°C before irrigation, but this is not practiced well by farmers. In addition to the aforementioned adverse conditions there are many other aspects to be amended. Among them, the worst for liquid-spray herbicides is the almost total lack of proper knowledge of nozzle types and of concern with even spraying among administrative and rural extension officers, companies, and farmers. Nozzles and sprayers appropriate for herbicide spraying are not even available on the market; most people perceive all pesticide sprayers as the same and are concerned more with the speed and ease of spraying than with correct spraying. There are many points to be improved to minimize herbicidal phytotoxicity in Korea and many ways to achieve the goal. First of all it is suggested that 1) the present evaluation of a new herbicide at standard and double doses in registration trials be extended to standard, double, and triple doses so that the response slope can be exploited in deciding approval and in recommending different doses for different situations on the label; 2) the government recognize the facts and nature of the present problem, correct the present misperceptions, and develop an appropriate national program for improvement of soil conditions, spray equipment, and extension manpower and services; 3) researchers enhance research on the countermeasures; and 4) herbicide makers and dealers correct their misperceptions and sales policies, build a database on the detailed use conditions of each consumer, and serve consumers with direct counsel based on that database.


A Mobile Landmarks Guide : Outdoor Augmented Reality based on LOD and Contextual Device (모바일 랜드마크 가이드 : LOD와 문맥적 장치 기반의 실외 증강현실)

  • Zhao, Bi-Cheng;Rosli, Ahmad Nurzid;Jang, Chol-Hee;Lee, Kee-Sung;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.18 no.1 / pp.1-21 / 2012
  • In recent years, the mobile phone has evolved extremely fast. It is equipped with a high-quality color display, a high-resolution camera, and real-time accelerated 3D graphics, and other features include a GPS sensor and a digital compass. This evolution helps application developers use the power of smart-phones to create rich environments offering a wide range of services and exciting possibilities. In outdoor mobile AR research to date there are many popular location-based AR services, such as Layar and Wikitude; these systems have the major limitation that the AR content is rarely overlaid precisely on the real target. Other research concerns context-based AR services using image recognition and tracking, in which the AR content is precisely overlaid on the real target, but real-time performance is restricted by the retrieval time and the approach is hard to implement over a large-scale area. In our work, we combine the advantages of location-based AR with those of context-based AR: the system first finds the surrounding landmarks and then performs recognition and tracking on them. The proposed system consists of two major parts, a landmark browsing module and an annotation module. In the landmark browsing module, users can view augmented virtual information (information media) such as text, pictures, and video in their smart-phone viewfinder when they point the smart-phone at a certain building or landmark. For this, a landmark recognition technique is applied. SURF point-based features are used in the matching process because of their robustness. To ensure that the image retrieval and matching processes are fast enough for real-time tracking, we exploit the contextual devices (GPS and digital compass) to select from the database only the nearest landmarks in the pointed direction; the query image is matched only against this selected data, so the matching speed is significantly increased. The second part is the annotation module. Instead of viewing only the augmented information media, users can create virtual annotations based on Linked Data. Full knowledge about the landmark is not required; users can simply look for an appropriate topic by searching with a keyword in Linked Data, which helps the system find the target URI and generate correct AR content. On the other hand, to recognize target landmarks, images of the selected buildings or landmarks are captured from different angles and distances; this procedure builds a connection between the real building and the virtual information existing in the Linked Open Data. In our experiments, the search range in the database is reduced by clustering images into groups according to their coordinates; a grid-based clustering method and user location information are used to restrict the retrieval range. Whereas existing research using clustering and GPS information has a retrieval time of around 70-80 ms, experimental results show that our approach reduces the retrieval time to around 18-20 ms on average, so the total processing time is reduced from 490-540 ms to 438-480 ms. The performance improvement will be more obvious as the database grows. This demonstrates that the proposed system is efficient and robust in many cases.
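
A brief sketch of the contextual filtering step: candidate landmarks are restricted to those near the user's GPS position and within the compass-pointed direction before any SURF matching is attempted. The distance and angle cutoffs and the landmark record layout are assumptions, not the paper's values.

```python
import math

def candidate_landmarks(landmarks, user_lat, user_lon, heading_deg,
                        max_dist_m=300.0, max_angle_deg=45.0):
    """Return landmark ids close to the user and roughly in the pointed direction.
    `landmarks` is a list of dicts with 'lat', 'lon', 'id' keys (illustrative schema)."""
    selected = []
    for lm in landmarks:
        # Equirectangular approximation is sufficient at city scale.
        dx = math.radians(lm["lon"] - user_lon) * math.cos(math.radians(user_lat)) * 6371000
        dy = math.radians(lm["lat"] - user_lat) * 6371000
        dist = math.hypot(dx, dy)
        if dist > max_dist_m:
            continue
        # Bearing from the user to the landmark, compared with the compass heading.
        bearing = math.degrees(math.atan2(dx, dy)) % 360
        diff = abs((bearing - heading_deg + 180) % 360 - 180)
        if diff <= max_angle_deg:
            selected.append((dist, lm["id"]))
    # Only these candidates would be passed on to SURF feature matching.
    return [lm_id for _, lm_id in sorted(selected)]
```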