• Title/Summary/Keyword: automatic test

Search Results: 1,637

Development of an Automatic Sprayer Arm Control System for Unmanned Pest Control of Pear Trees (배나무 무인 방제를 위한 약대 자동 제어시스템 개발)

  • Hwa, Ji-Ho;Lee, Bong-Ki;Lee, Min-Young;Choi, Dong-Sung;Hong, Jun-Taek;Lee, Dae-Weon
    • Journal of Bio-Environment Control
    • /
    • v.23 no.1
    • /
    • pp.26-30
    • /
    • 2014
  • The purpose of this study was to develop an automatic sprayer arm control system that operates according to the distance from pear trees, for the automation of pest control. The system comprised two parts, hardware and software. First, a controller was built with an MCU and relay switches. Two types of ultrasonic sensors were installed to measure the distance to the pear trees: an on/off type that detects up to 3 m, and a continuous type providing a 0~5 V output corresponding to distances of 0~3 m. Second, an automatic control algorithm was developed: each spraying arm was positioned according to the sensor-measured distance from the pear trees, and the arm could dodge obstacles to protect itself. To reduce sensor noise, five sensor readings were collected, the maximum and minimum values were discarded, and the remaining values were averaged (see the sketch below). In the field experiment, automatic control outperformed non-automatic control: spraying rates were 69.25% (left line) and 98.09% (right line) under non-automatic control, because the pear trees were not planted uniformly, whereas under automatic control they were 92.66% (left line) and 94.64% (right line). The spraying rate increased because the system maintained its distance from the trees.
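
The trimmed-averaging noise filter described above (collect five readings, discard the maximum and minimum, average the rest) can be sketched in a few lines of Python; the function name and sample values are illustrative, not taken from the paper.

```python
def filter_distance(samples):
    """Trimmed mean: drop the max and min of five readings, average the rest."""
    assert len(samples) == 5
    trimmed = sorted(samples)[1:-1]  # discard the min and max readings
    return sum(trimmed) / len(trimmed)

# A single noisy spike (2.90 m) is discarded before averaging.
print(filter_distance([1.52, 1.49, 2.90, 1.50, 1.51]))  # -> 1.51
```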

Automatic Classification by Land Use Category of National Level LULUCF Sector using Deep Learning Model (딥러닝모델을 이용한 국가수준 LULUCF 분야 토지이용 범주별 자동화 분류)

  • Park, Jeong Mook;Sim, Woo Dam;Lee, Jung Soo
    • Korean Journal of Remote Sensing
    • /
    • v.35 no.6_2
    • /
    • pp.1053-1065
    • /
    • 2019
  • Land use statistics are highly informative activity data for calculating exact carbon absorption and emission under the post-2020 framework. For effective interpretation by land use category, this study automatically classified forest aerial photography (FAP) images by land use category using a deep learning model and calculated national-level statistics. The dataset (DS) for deep learning was built by extracting FAP images at the locations of national forest resource inventory permanent sample plots and was divided into a training dataset (training DS) and a test dataset (test DS). Images in the training DS were labeled according to the definition of each land use category and used to train and verify the deep learning model. During verification, the model's training accuracy was highest at epoch 1,500, at about 89%. When the trained model was applied to the test DS, the classification accuracy of the image labels was about 90%. The category areas estimated with a sampling method were also highly consistent with national statistics, so the approach was judged adequate for the activity data of the national greenhouse gas (GHG) inventory report for the LULUCF sector.
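
The sampling-based area estimation that is compared against national statistics reduces to proportional expansion of sample-plot counts; the category counts and total area in this sketch are made-up values for illustration.

```python
# Ratio estimator: expand sample-plot counts by land use category to areas.
total_area_ha = 10_000_000  # hypothetical total land area (ha)
counts = {"forest": 620, "cropland": 180, "settlement": 90, "other": 110}

n = sum(counts.values())
area_by_category = {cat: total_area_ha * c / n for cat, c in counts.items()}
print(area_by_category)  # e.g. forest -> 6,200,000 ha
```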

Test Turnaround Time for Complete Blood Cell Count Using Delta and Panic Value Checks and the Q-flag Limit

  • Koo, Bon-Kyung;Ryu, Kwang-Hyun;Lim, Dae-Jin;Cho, Young-Kuk;Kim, Hee-Jin
    • Korean Journal of Clinical Laboratory Science
    • /
    • v.44 no.2
    • /
    • pp.66-74
    • /
    • 2012
  • Test turnaround time (TAT) is the lead time from specimen reception to result reporting. For the complete blood cell count (CBC), four units of the XE-2100 (Sysmex Corp., Japan) processed about 80% of the volume, one unit of the LH-780 (Beckman Coulter Inc., USA) about 10%, and one unit of the ADVIA-2120 (Siemens AG, Munich, Germany) about 10%. We analyzed the change in CBC TAT over seven years, from January 2005 to December 2011. The delta check criteria were altered for WBC, hemoglobin, hematocrit, platelets, and metamyelocytes, but not for band neutrophils, eosinophils, basophils, or monocytes. The panic value criteria were altered for hemoglobin, hematocrit, platelets, and monocytes. Under the current slide review criteria, the LH-780 and ADVIA-2120 analyzers triggered smears on the suspect flags "Blast, Imm NE2, Immature granulocyte, Imm NE1, Left shift, Variant lymphocyte, Atypical lymphocyte, Platelet clumps and NRBC". The new slide review criteria for the XE-2100 analyzer triggered smear preparation for a "Platelet clumps" flag (≥ 200 units), for a single flag other than "Platelet clumps" (≥ 250 units), and for multiple flags (≥ 200 units); below 240 units, medical technologists prepared manual slides selectively according to their own evaluation. The automatic reporting rate was 33.4% before the alterations and 41.0% after, an improvement of 7.6 percentage points. The slide review rate was 15.2% before the Q-flag limit was applied and 12.1% after, a reduction of 3.1 percentage points. TAT was 45 minutes before the delta and panic value check alterations were introduced and 35 minutes afterward, a shortening of 10 minutes. We conclude that establishing and operating delta and panic value checks and slide review criteria suited to the laboratory environment can reduce unnecessary smear slides, re-checking, re-sampling, re-testing, telephone inquiries, and concentrated workloads at specific times of the day.
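
A delta check compares a patient's new result with the previous one and holds results whose change exceeds a preset limit for manual review; the analytes and limits in this sketch are hypothetical, not the paper's actual criteria.

```python
# Hypothetical delta-check limits: absolute change that triggers manual review.
DELTA_LIMITS = {"hemoglobin": 2.0, "hematocrit": 6.0, "platelet": 100.0}

def delta_check(analyte, previous, current):
    """Return True if the change since the last result warrants review."""
    limit = DELTA_LIMITS.get(analyte)
    return limit is not None and abs(current - previous) > limit

print(delta_check("hemoglobin", 13.5, 10.9))  # True: hold for review
```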


White striping degree assessment using computer vision system and consumer acceptance test

  • Kato, Talita;Mastelini, Saulo Martiello;Campos, Gabriel Fillipe Centini;Barbon, Ana Paula Ayub da Costa;Prudencio, Sandra Helena;Shimokomaki, Massami;Soares, Adriana Lourenco;Barbon, Sylvio Jr.
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.32 no.7
    • /
    • pp.1015-1026
    • /
    • 2019
  • Objective: The objective of this study was to evaluate three different degrees of white striping (WS), addressing both their automatic assessment and consumer acceptance. WS classification was performed with a computer vision system (CVS), exploring different machine learning (ML) algorithms and the most important image features, and was then set against consumer acceptance and purchase intent. Methods: The samples for image analysis were classified by trained specialists into severity degrees based on visual and firmness aspects. Images were acquired with a digital camera, and 25 features were extracted from them. ML algorithms were applied to induce a model capable of classifying the samples into the three severity degrees. In addition, two sensory analyses were performed: 75 properly grilled samples were used for the first sensory test and 9 photos for the second. All tests used a 10-cm hybrid hedonic scale (acceptance test) and a 5-point scale (purchase intention). Results: The information gain metric ranked 13 attributes; however, no single type of image feature was enough to describe the phenomenon. The support vector machine, fuzzy-W, and random forest classification models showed the best results, with similar overall accuracy (86.4%). The worst performance came from the multilayer perceptron (70.9%), with a high error rate on normal (NORM) sample predictions. The acceptance analysis verified that WS myopathy negatively affects the texture of broiler breast fillets when grilled and the appearance of the raw samples, which influenced the purchase intention scores for raw samples. Conclusion: The proposed system proved adequate (fast and accurate) for the classification of WS samples. The sensory analysis of acceptance showed that WS myopathy negatively affects the tenderness of broiler breast fillets when grilled, while the appearance of the raw samples influenced purchase intentions.
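
The pipeline shape, ranking image features and comparing classifiers on the top-ranked subset, can be sketched with scikit-learn; the synthetic data are stand-ins, mutual information substitutes for information gain, and fuzzy-W has no common scikit-learn equivalent, so it is omitted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in: 25 image features, 3 WS severity degrees.
X, y = make_classification(n_samples=300, n_features=25, n_informative=13,
                           n_classes=3, random_state=0)

# Rank features (mutual information plays the role of information gain).
ranking = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]
top13 = ranking[:13]

for name, clf in [("SVM", SVC()), ("RandomForest", RandomForestClassifier(random_state=0))]:
    acc = cross_val_score(clf, X[:, top13], y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```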

Regional Differences in Blood-Brain Barrier Permeability in Cognitively Normal Elderly Subjects: A Dynamic Contrast-Enhanced MRI-Based Study

  • Il Heon Ha;Changmok Lim;Yeahoon Kim;Yeonsil Moon;Seol-Heui Han;Won-Jin Moon
    • Korean Journal of Radiology
    • /
    • v.22 no.7
    • /
    • pp.1152-1162
    • /
    • 2021
  • Objective: This study aimed to determine whether there are regional differences in the blood-brain barrier (BBB) permeability of cognitively normal elderly participants and to identify factors influencing BBB permeability, using a clinically feasible 10-minute dynamic contrast-enhanced (DCE) MRI protocol. Materials and Methods: This IRB-approved prospective study recruited 35 cognitively normal adults (26 women; mean age, 64.5 ± 5.6 years) who underwent DCE T1-weighted imaging. Permeability maps (Ktrans) were coregistered with masks to calculate mean regional values. The paired t test and the Friedman test were used to compare Ktrans between regions. The relationships between Ktrans and age, sex, education, cognition score, vascular risk burden, vascular factors on imaging, and medial temporal lobar atrophy were assessed with Pearson correlation and the Spearman rank test. Results: The mean permeability rates of the right and left hippocampi, assessed with automatic segmentation, were 0.529 ± 0.472 and 0.585 ± 0.515 (Ktrans, × 10⁻³ min⁻¹), respectively. In the deep gray matter, the Ktrans of the thalamus was significantly greater than those of the putamen and hippocampus (p = 0.007, p = 0.041). In the white matter, the Ktrans of the occipital white matter was significantly greater than those of the frontal, cingulate, and temporal white matter (p < 0.0001, p = 0.0007, p = 0.0002). The variation in Ktrans across brain regions was not related to age, cognitive score, vascular risk burden, vascular risk factors on imaging, or medial temporal lobar atrophy in the study group. Conclusion: Our study demonstrated regional differences in BBB permeability (Ktrans) in cognitively normal elderly adults using a clinically acceptable 10-minute DCE imaging protocol. These regional differences suggest that the integrity of the BBB varies across the brains of cognitively normal elderly adults. We recommend considering regional differences in Ktrans when evaluating BBB permeability in patients with neurodegenerative diseases.
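
The region-to-region comparison is a paired test over per-subject Ktrans values; a minimal scipy sketch with made-up numbers is shown below (n = 5 for brevity; the study had 35 participants).

```python
from scipy import stats

# Hypothetical per-subject Ktrans values (x 10^-3 min^-1) for two regions.
thalamus = [0.71, 0.65, 0.80, 0.69, 0.74]
putamen = [0.55, 0.60, 0.62, 0.51, 0.58]

t, p = stats.ttest_rel(thalamus, putamen)  # paired t test across subjects
print(f"t = {t:.2f}, p = {p:.4f}")
```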

A Comparative Study on the CT Effective Dose by the Position of Patient's Arm (전신 PET/CT 검사에서 환자의 팔 위치에 따른 CT 유효선량의 비교 연구)

  • Seong, Ji-Hye;Park, Soon-Ki;Kim, Jung-Sun;Park, Seung-Yong;Jung, Woo-Young
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.16 no.1
    • /
    • pp.44-49
    • /
    • 2012
  • Purpose: In whole-body PET/CT, the patient's arms are normally raised to improve image quality; however, when the lesion is located in the head and neck, the arms should be lowered. This study compared the CT effective dose for each arm position with Automatic Exposure Control (AEC) applied. Materials and Methods: 45 patients who underwent ¹⁸F-FDG whole-body PET/CT were studied on a Biograph Truepoint 40 (Siemens, Germany), a Biograph Sensation 16 (Siemens, Germany), and a Discovery STe 8 (GE Healthcare, USA). For each scanner, the CT effective dose of 15 patients was measured and compared between the arms-raised and arms-lowered positions. The ImPACT v1.0 program was used to calculate the CT effective dose, and the paired t-test in SPSS 18.0 was used for statistical analysis. Results: With the arms raised, the effective dose was 6.33 ± 0.93 mSv for the Biograph Sensation 16, 8.01 ± 1.34 mSv for the Biograph Truepoint 40, and 9.69 ± 2.32 mSv for the Discovery STe 8. With the arms lowered, it was 6.97 ± 0.76 mSv, 8.95 ± 1.85 mSv, and 13.07 ± 2.87 mSv, respectively. The difference in CT effective dose between arm positions was 9.2% for the Biograph Truepoint 40, 10.5% for the Biograph Sensation 16, and 25.9% for the Discovery STe 8, and the differences were statistically significant (p < 0.05). Conclusion: In whole-body PET/CT with AEC, raising the arms reduced the patients' radiation exposure by an average of 15.2%. For patients without head and neck lesions, raising the arms reduces both artifacts in the region of interest and the CT effective dose; for patients with head and neck lesions, lowering the arms can reduce artifacts in the region of interest, but the resulting increase in CT effective dose should be taken into account.
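
The headline 15.2% figure is the average of the per-scanner percent reductions in effective dose with the arms raised; the sketch below reproduces that arithmetic from the reported doses, taking the reduction relative to the arms-lowered dose, which is one plausible reading of the abstract.

```python
# Effective dose (mSv): (arms raised, arms lowered) for each scanner.
doses = {
    "Biograph Sensation 16": (6.33, 6.97),
    "Biograph Truepoint 40": (8.01, 8.95),
    "Discovery STe 8": (9.69, 13.07),
}

reductions = {name: 100 * (low - up) / low for name, (up, low) in doses.items()}
for name, r in reductions.items():
    print(f"{name}: {r:.1f}% lower with arms raised")
print(f"average: {sum(reductions.values()) / len(reductions):.1f}%")  # ~15.2%
```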


Accuracy Analysis of Target Recognition according to EOC Conditions (Target Occlusion and Depression Angle) using MSTAR Data (MSTAR 자료를 이용한 EOC 조건(표적 폐색 및 촬영부각)에 따른 표적인식 정확도 분석)

  • Kim, Sang-Wan;Han, Ahrim;Cho, Keunhoo;Kim, Donghan;Park, Sang-Eun
    • Korean Journal of Remote Sensing
    • /
    • v.35 no.3
    • /
    • pp.457-470
    • /
    • 2019
  • Automatic Target Recognition (ATR) using Synthetic Aperture Radar (SAR) has attracted attention in surveillance, reconnaissance, and national security because of SAR's all-weather, day-and-night imaging capability. However, automatically identifying targets in real situations is difficult under varying observational and environmental conditions. In this paper, ATR problems under Extended Operating Conditions (EOC) were investigated. In particular, we considered partial occlusion of the target (10% to 50%) and differences in depression angle between the training data (17°) and the test data (30° and 45°). To simulate various occlusion conditions, the SARBake algorithm was applied to Moving and Stationary Target Acquisition and Recognition (MSTAR) images. ATR accuracy was evaluated with template matching and the AdaBoost algorithm. The depression angle experiments showed that the target identification rate of both algorithms decreased by more than 30% when the depression angle changed from 45° to 30°; the accuracy of template matching was about 75.88%, while AdaBoost performed better, at about 86.80%. Under partial occlusion, the accuracy of template matching decreased significantly even with slight occlusion (from 95.77% with no occlusion to 52.69% at 10% occlusion). The AdaBoost algorithm again performed better, with 85.16% accuracy under no occlusion and 68.48% at 10% occlusion; even at 50% occlusion it achieved 52.48%, far higher than template matching (below 30% at 50% occlusion).
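
Template matching in this setting amounts to correlating a test chip against per-class templates and picking the class with the highest score; the sketch below is a generic normalized cross-correlation baseline with random stand-in imagery, not the paper's exact implementation.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two same-sized image chips."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def classify(chip, templates):
    """Pick the target class whose template correlates best with the chip."""
    return max(templates, key=lambda cls: ncc(chip, templates[cls]))

rng = np.random.default_rng(0)
templates = {"T72": rng.random((64, 64)), "BMP2": rng.random((64, 64))}
chip = templates["T72"] + 0.1 * rng.random((64, 64))  # noisy T72 chip
print(classify(chip, templates))  # -> "T72"
```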

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.1-25
    • /
    • 2020
  • In this paper, we present an application system architecture that provides accurate, fast, and efficient automatic gasometer reading. The system captures a gasometer image with a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount using selective optical character recognition (OCR) based on deep learning. In general, an image contains many types of characters, and conventional OCR extracts all of them; some applications, however, need to ignore character types that are not of interest and focus only on specific ones. An automatic gasometer reading system, for example, only needs to extract the device ID and the gas usage amount from gasometer images to bill users; strings such as the device type, manufacturer, manufacturing date, and specifications are not valuable to the application. The application therefore has to analyze only the regions of interest and the specific character types that carry valuable information. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for a selective OCR that analyzes only the regions of interest. The system comprises three neural networks: the first is a convolutional network that detects the regions of interest containing the gas usage amount and device ID strings; the second is another convolutional network that transforms the spatial information of a region of interest into sequential feature vectors; and the third is a bidirectional long short-term memory network that converts the sequential features into character strings through time-series analysis. In this work, the strings of interest are the device ID (12 Arabic digits) and the gas usage amount (4~5 Arabic digits). All system components were implemented on the Amazon Web Services cloud with an Intel Xeon E5-2686 v4 CPU and an NVIDIA Tesla V100 GPU. The architecture adopts a master-slave processing structure (sketched below) for efficient, fast parallel processing that copes with about 700,000 requests per day. A mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes reading requests from mobile devices into a FIFO (First In First Out) input queue. The slave process, which runs the three deep neural networks for character recognition on the NVIDIA GPU, continuously polls the input queue; when a request arrives, it converts the queued image into the device ID string, the gas usage amount string, and the positions of the strings, returns this information to an output queue, and switches back to polling the input queue. The master process takes the final information from the output queue and delivers it to the mobile device. A total of 27,120 gasometer images were used for training, validation, and testing of the three networks: 22,985 for training and validation and 4,135 for testing. The 22,985 images were randomly split 8:2 into training and validation sets for each training epoch. The 4,135 test images were categorized into five types: normal (clean images), noise (images with noise signals), reflex (light reflection in the gasometer region), scale (small object size due to long-distance capture), and slant (images that are not horizontally level). The final string recognition accuracies on normal data were 0.960 for the device ID and 0.864 for the gas usage amount.
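
A minimal sketch of the master/slave queue pattern described above, using Python's standard queue and threading modules; the recognize() stub stands in for the three-network OCR pipeline and is purely illustrative.

```python
import queue
import threading

input_q, output_q = queue.Queue(), queue.Queue()  # FIFO by default

def recognize(image):
    # Stub for the detection + CRNN pipeline described in the paper.
    return {"device_id": "000000000000", "usage": "1234"}

def slave():
    while True:
        req_id, image = input_q.get()  # blocks: "polling" the input queue
        output_q.put((req_id, recognize(image)))
        input_q.task_done()

threading.Thread(target=slave, daemon=True).start()

# Master side: enqueue a request, then collect the result for the device.
input_q.put((1, b"...image bytes..."))
print(output_q.get())  # (1, {'device_id': ..., 'usage': ...})
```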

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • As content continues to overflow, selecting high-quality information that meets users' interests and needs is becoming ever more important. Rather than treating an information request as a simple string, search providers are trying to better reflect the user's intent in the results, and large IT companies such as Google and Microsoft focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text analysis is expected to be useful, because new information is constantly generated and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in such areas, where the information flow is vast and new information keeps emerging, but it faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm and to extract good-quality triples. Second, producing labeled text data by hand becomes more difficult as the extent and scope of the knowledge grow and its patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem itself is not easy, owing to the ambiguous conceptual nature of knowledge. To overcome these limits and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates the results. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance its effectiveness. The study therefore has three significances: a practical and simple automatic knowledge extraction method that can be applied directly; the possibility of performance evaluation, demonstrated through a simple problem definition; and greater expressiveness of the knowledge, achieved by generating input data on a sentence basis without complex morphological analysis. An objective performance evaluation method and the results of an empirical analysis are also presented. For the empirical study, experts' reports on 30 individual stocks (the top 30 items by publication frequency from May 30, 2017 to May 21, 2018) were used: 5,600 reports in total, of which 3,074 (about 55%) were designated as the training set and the remaining 45% as the test set. Before building the model, all reports in the training set were grouped by stock and their entities extracted with the KKMA named entity recognition tool; for each stock, the 100 most frequent entities were selected and one-hot encoded. A neural tensor network was then used to train one score function per stock. When a new entity from the test set appears, its score can be computed with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the presented model, we measured its predictive power and checked whether the score functions were well constructed by computing the hit ratio over all reports in the test set. The model achieved a hit ratio of 69.3% on the test set of 2,526 reports, which is meaningfully high despite some constraints on the research. Looking at per-stock performance, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, performed far below average, possibly because of interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology for finding the key entities, or combinations of entities, needed to retrieve information that matches the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network without a field-specific learning corpus or word vectors. The empirical test confirms the effectiveness of the presented model as described above, but some limits remain to be addressed; most notably, the markedly poor performance on a few stocks shows the need for further research. Finally, the empirical study confirmed that the learning method presented here can be used to semantically match new text information with the related stocks.
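
The paper trains one score function per stock and predicts the stock whose function scores a new entity highest. Below is a minimal NumPy sketch of an NTN-style scorer, u·tanh(eᵀWe + Ve + b), with random weights and hypothetical stock names standing in for trained models; the single-entity form is an assumption, since classic NTNs score entity pairs.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 100, 4  # one-hot entity dimension (top-100 entities), tensor slices

def make_score_fn():
    """One NTN-style score function per stock (random weights stand in for trained ones)."""
    W = rng.normal(size=(k, d, d))  # bilinear tensor term
    V = rng.normal(size=(k, d))     # linear term
    b = rng.normal(size=k)
    u = rng.normal(size=k)
    def score(e):
        h = np.tanh(np.einsum('i,kij,j->k', e, W, e) + V @ e + b)
        return float(u @ h)
    return score

score_fns = {"StockA": make_score_fn(), "StockB": make_score_fn()}

e = np.zeros(d); e[7] = 1.0  # one-hot vector of a new entity
best = max(score_fns, key=lambda s: score_fns[s](e))
print(best)  # stock predicted as related to the entity
```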

A Positioning Error Analysis of 3D Face Recognition Apparatus (3차원 안면자동인식기의 Positioning 오차분석)

  • Kwak, Chang-Kyu;Cho, Yong-Beum;Sohn, Eun-Hae;Yoo, Jung-Hee;Kho, Byung-Hee;Kim, Jong-Won;Kim, Kyu-Kon;Lee, Eui-Ju
    • Journal of Sasang Constitutional Medicine
    • /
    • v.18 no.2
    • /
    • pp.34-40
    • /
    • 2006
  • 1. Objectives: We are developing a 3D Face Recognition Apparatus to analyze the facial characteristics of the Sasangin; in the process, we needed to determine the apparatus's three-dimensional positioning accuracy. 2. Methods: Using the apparatus, we photographed a calibrator (280 × 400 mm) 10 times at 20 mm intervals along the longitudinal direction. We obtained 967 points, excluding points deviating from the visual field of the dual camera, and compared the measured values with the three-dimensional reference values to calculate the errors. 3. Results and Conclusions: The average error rate on the X axis was 0.019% and the maximum 0.033%; on the Y axis, the average was 0.025% and the maximum 0.044%; on the Z axis, the average was 0.158% and the maximum 0.269%. These results are a marked improvement on the existing 3D recognition apparatus, whose average error rate was 1% and maximum error rate 2.242%. In conclusion, we judged the apparatus mechanically adequate for extracting facial characteristic points from three-dimensional face shapes.
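
The reported figures are per-axis relative errors between measured and reference coordinates; a small NumPy sketch of that computation with made-up measurements is below (the error-rate definition, relative error against the reference value, is an assumption about the paper's method).

```python
import numpy as np

# Hypothetical measured vs. reference X coordinates (mm) for a few points.
measured = np.array([20.004, 40.011, 59.992, 80.019])
reference = np.array([20.0, 40.0, 60.0, 80.0])

error_rate = 100 * np.abs(measured - reference) / reference  # percent
print(f"average {error_rate.mean():.3f}%, max {error_rate.max():.3f}%")
```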
