• Title/Summary/Keyword: Machine-vision

Search results: 878

Improved Environment Recognition Algorithms for Autonomous Vehicle Control (자율주행 제어를 위한 향상된 주변환경 인식 알고리즘)

  • Bae, Inhwan;Kim, Yeounghoo;Kim, Taekyung;Oh, Minho;Ju, Hyunsu;Kim, Seulki;Shin, Gwanjun;Yoon, Sunjae;Lee, Chaejin;Lim, Yongseob;Choi, Gyeungho
    • Journal of Auto-vehicle Safety Association / v.11 no.2 / pp.35-43 / 2019
  • This paper describes improved environment recognition algorithms that use several types of sensors, such as LiDAR and cameras, together with an integrated control algorithm for an autonomous vehicle. The integrated algorithm was implemented in C++ and supported the stability of the overall driving control algorithms. For the improved vision algorithms, lane tracing and traffic sign recognition were operated mainly with three cameras. Two algorithms were developed for lane tracing, Improved Lane Tracing (ILT) and Histogram Extension (HIX), and these were combined into a single algorithm, Enhanced Lane Tracing with Histogram Extension (ELIX). For the enhanced traffic sign recognition algorithm, an integrated Mutual Validation Procedure (MVP) was developed using three algorithms: Cascade, Reinforced DSIFT SVM, and YOLO. Comparison of the results shows that the precision of traffic sign recognition is substantially increased. With the LiDAR sensor, the work focused on static and dynamic obstacle detection and obstacle avoidance algorithms. The proposed environment recognition algorithms therefore achieve higher accuracy and faster processing speed than the previous ones. Moreover, by optimizing them together with the integrated control algorithm, the memory issue that caused irregular system shutdowns was prevented, and the maneuvering stability of the autonomous vehicle in severe environments was enhanced.
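The abstract does not spell out how ILT/HIX locate lane positions, but a common histogram-based starting point, which the Histogram Extension name suggests, is to sum binarized lane pixels column-wise in a bird's-eye view and take the peaks as lane bases. The sketch below illustrates only that generic step; the function name and inputs are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lane_base_from_histogram(binary_warped: np.ndarray) -> tuple[int, int]:
    """Locate left/right lane base columns from a column-sum histogram of the
    lower half of a binarized bird's-eye-view image (hypothetical input; the
    paper's ILT/HIX pipeline is not specified in the abstract)."""
    h, w = binary_warped.shape
    # Sum lane pixels per column over the lower half of the image,
    # where lane markings are closest to the vehicle.
    histogram = binary_warped[h // 2:, :].sum(axis=0)
    midpoint = w // 2
    left_base = int(np.argmax(histogram[:midpoint]))
    right_base = midpoint + int(np.argmax(histogram[midpoint:]))
    return left_base, right_base
```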

Development of a Measuring Device for Coefficient of Friction between Connection Parts in Vehicle Head Lamps (자동차 헤드램프내 체결부품사이의 마찰계수 실험장치 개발)

  • Baek, Hong;Moon, Ji-Seung;Park, Sang-Shin;Park, Jong-Myeong
    • Tribology and Lubricants / v.35 no.1 / pp.59-64 / 2019
  • When slipping occurs between two materials, the coefficient of friction must be considered because it determines the overall efficiency of the machine and its slip characteristics. It is therefore important to find the coefficient of friction between two materials. This paper focuses on obtaining the coefficient of friction between an aiming bolt and a retainer located in the headlamps of a vehicle. This bolt supports the headlamp, and if it is loosened by external vibration, the angle of the light changes and can impair the vision of pedestrians or other drivers. To study these situations, the coefficient of friction between aiming bolts and retainers needs to be measured, as does the coefficient of friction of the materials used in the headlamp. To determine these two factors, a new device is designed for two cases: surface-surface contact and surface-line contact. To increase the reliability of the results, the device is built around an air-bearing stage, which uses compressed air as a lubricant to eliminate the friction of the stage itself. Experiments were carried out under various vertical forces, and the results show that the coefficient of friction can be measured consistently. The procedure for designing the device and the results are discussed.
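As a point of reference for how such measurements are typically reduced to a single value, the coefficient of friction follows from the relation F_f = μ·F_N between friction and normal force; the minimal sketch below fits μ as a least-squares slope over several vertical loads. It is an illustrative calculation only, not the paper's data-processing procedure.

```python
import numpy as np

def friction_coefficient(normal_forces_n, friction_forces_n):
    """Estimate the coefficient of friction as the least-squares slope of
    measured friction force versus applied normal force (F_f = mu * F_n).
    Illustrative only; the paper's air-bearing test rig and data handling
    are not described at this level in the abstract."""
    fn = np.asarray(normal_forces_n, dtype=float)
    ff = np.asarray(friction_forces_n, dtype=float)
    # Fit a line through the origin: mu = sum(Fn*Ff) / sum(Fn^2)
    return float(np.dot(fn, ff) / np.dot(fn, fn))

# Example with made-up force readings (N) at several vertical loads
print(friction_coefficient([10, 20, 30, 40], [1.9, 4.1, 6.0, 7.8]))
```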

A Study on Smoke Detection using LBP and GLCM in Engine Room (선박의 기관실에서의 연기 검출을 위한 LBP-GLCM 알고리즘에 관한 연구)

  • Park, Kyung-Min
    • Journal of the Korean Society of Marine Environment & Safety / v.25 no.1 / pp.111-116 / 2019
  • The fire detectors used in the engine rooms of ships respond slowly to emergencies because smoke or heat must reach detectors installed on the ceiling, while the air flow in an engine room can vary greatly depending on the equipment in use. To overcome these disadvantages, much research on video-based fire detection has been conducted in recent years. Video-based fire detection is effective for the initial detection of a fire because it is not affected by air flow and its transmission speed is fast. In this paper, experiments were performed using images of smoke from a smoke generator in an engine room. Feature data generated using LBP and GLCM operators, which extract the textural features of smoke, was classified using an SVM, a machine learning classifier. Even when the smoke did not rise to the ceiling where detectors were installed, smoke detection was confirmed using the image-based technique.
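A minimal sketch of the kind of LBP + GLCM feature extraction and SVM classification the abstract describes is shown below, using scikit-image and scikit-learn. The operator parameters (8-neighbor uniform LBP, single-offset GLCM) are common defaults and are assumptions, not the paper's settings.

```python
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops
from sklearn.svm import SVC

def texture_features(gray: np.ndarray) -> np.ndarray:
    """Concatenate an LBP histogram with a few GLCM statistics for one
    uint8 grayscale patch (generic settings, not the paper's exact ones)."""
    # Uniform LBP with 8 neighbors at radius 1 -> values in [0, 9]
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    # Single-offset, symmetric, normalized gray-level co-occurrence matrix
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    glcm_stats = [graycoprops(glcm, p)[0, 0]
                  for p in ("contrast", "homogeneity", "energy", "correlation")]
    return np.concatenate([lbp_hist, glcm_stats])

# X: stacked feature vectors for smoke / non-smoke patches, y: labels (0/1)
# clf = SVC(kernel="rbf").fit(X, y)
```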

Changes in the Industrial Structure caused by the IoT and AI (사물인터넷과 AI가 가져올 산업구조의 변화)

  • Kim, Jang-Hwan
    • Convergence Security Journal / v.17 no.5 / pp.93-99 / 2017
  • Recently, the IoT (Internet of Things) service industry has grown very rapidly. In this paper, we investigate the changes in the IoT service industry as well as the new direction of human life in the future global society. Under these changing market conditions, competition has also shifted toward global and ecosystem-based competition. However, compared with the platform initiatives and ecosystem strategies of global companies, Korean companies' vision for building ecosystems is still unclear. In addition, there is a need for internetworking between mobile and IoT services. IoT security protocols have a weakness in that information can leak from gateways that connect wired and wireless communication. Accordingly, we investigate the structure of the IoT and AI service ecosystem in order to gain strategic implications and insights for the security industry.

Two person Interaction Recognition Based on Effective Hybrid Learning

  • Ahmed, Minhaz Uddin;Kim, Yeong Hyeon;Kim, Jin Woo;Bashar, Md Rezaul;Rhee, Phill Kyu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.2 / pp.751-770 / 2019
  • Action recognition is an essential task in computer vision due to the variety of prospective applications, such as security surveillance, machine learning, and human-computer interaction. The availability of more video data than ever before and the strong performance of deep convolutional neural networks also make it essential for action recognition in video. Unfortunately, limited hand-crafted video features and the scarcity of benchmark datasets make it challenging to address the multi-person action recognition task in video data. In this work, we propose a deep convolutional neural network-based Effective Hybrid Learning (EHL) framework for two-person interaction classification in video data. Our approach exploits a pre-trained network model (VGG16 from the University of Oxford Visual Geometry Group) and extends Faster R-CNN, a state-of-the-art region-based convolutional neural network detector. We combine a semi-supervised learning method with an active learning method to improve overall performance. Numerous types of two-person interactions exist in the real world, which makes this a challenging task. In our experiments, we consider a limited number of actions, such as hugging, fighting, linking arms, talking, and kidnapping, in two environments: simple and complex. We show that our trained model with an active semi-supervised learning architecture gradually improves performance. In a simple environment using the Intelligent Technology Laboratory (ITLab) dataset from Inha University, accuracy increased to 95.6%, and in a complex environment it reached 81%. Our method reduces data-labeling time for the ITLab dataset compared with supervised learning methods. We also conduct extensive experiments on human action recognition benchmarks such as the UT-Interaction and HMDB51 datasets and obtain better performance than state-of-the-art approaches.
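The abstract does not detail how the active semi-supervised loop chooses clips for annotation; a standard choice is uncertainty (margin) sampling, sketched below as an illustration of the general idea rather than of the EHL framework itself.

```python
import numpy as np

def select_for_labeling(probs: np.ndarray, k: int) -> np.ndarray:
    """Pick the k most uncertain unlabeled clips (smallest margin between the
    top-1 and top-2 class probabilities) to send to a human annotator.
    A generic uncertainty-sampling step, not the paper's exact criterion."""
    sorted_p = np.sort(probs, axis=1)
    margin = sorted_p[:, -1] - sorted_p[:, -2]   # top-1 minus top-2 confidence
    return np.argsort(margin)[:k]                # smallest margins first

# probs: (num_unlabeled, num_classes) softmax outputs of the current model;
# label the selected clips, add them to the training set, retrain, and repeat.
```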

Defect Diagnosis and Classification of Machine Parts Based on Deep Learning

  • Kim, Hyun-Tae;Lee, Sang-Hyeop;Wesonga, Sheilla;Park, Jang-Sik
    • Journal of the Korean Society of Industry Convergence / v.25 no.2_1 / pp.177-184 / 2022
  • Automatic defect sorting of machinery parts is being introduced into the automation of manufacturing processes. In the final stage of such automation, computer vision rather than human visual judgment must be applied to determine whether a part is defective. In this paper, we introduce deep learning methods to improve the classification performance for typical mechanical parts, such as welded parts, galvanized round plugs, and electro-galvanized nuts, based on experimental results. For poor welds, increasing the depth of the layers of the basic deep learning model was effective. For the round plug, data surrounding the defective target area affected the result, which could be resolved with an appropriate pre-processing technique. Finally, the zinc-plated nut is captured by multiple cameras because of its three-dimensional structure, so it is strongly affected by lighting, and the background image also influences the result. To solve this problem, methods such as two-dimensional connectivity were applied in the object segmentation preprocessing step. Although the experiments suggest that the proposed methods are effective, most of the provided good/defective image datasets are relatively small, which may cause a learning balance problem for the deep learning model, so we plan to secure more data in the future.
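For the zinc-plated nut case, the abstract mentions two-dimensional connectivity in the segmentation preprocessing. A generic version of that idea, keeping only the largest connected foreground component so background and lighting artifacts are excluded, is sketched below with OpenCV; the Otsu thresholding choice is an assumption, not the paper's pipeline.

```python
import cv2
import numpy as np

def largest_component_mask(gray: np.ndarray) -> np.ndarray:
    """Keep only the largest 8-connected foreground region so that lighting
    artifacts and background pixels do not reach the classifier.
    A generic sketch of connectivity-based preprocessing; the threshold is a
    placeholder, not the paper's value."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    if num <= 1:
        return binary
    # Label 0 is the background; pick the largest remaining component by area.
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return np.where(labels == largest, 255, 0).astype(np.uint8)
```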

A Study on Design and Interpretation of Pattern Laser Coordinate Tracking Method for Curved Screen Using Multiple Cameras (다중카메라를 이용한 곡면 스크린의 패턴 레이저 좌표 추적 방법 설계와 해석 연구)

  • Jo, Jinpyo;Kim, Jeongho;Jeong, Yongbae
    • Journal of Platform Technology / v.9 no.4 / pp.60-70 / 2021
  • This paper proposes a method capable of stably tracking the coordinates of a patterned laser image in a curved-screen shooting system that uses two or more camera channels. When applied to a multi-screen shooting setup, which can replace HMD-based shooting, this method can track and acquire target points very effectively. Severely deformed images of the curved screen obtained from the individual cameras are corrected through image normalization, image binarization, and noise removal. The corrected images are then converted, based on matching points, into a Euclidean space map in which the firing point is easy to track. In the experiments, the image coordinates of the pattern laser were stably extracted in the curved-screen shooting system, and the error between the target point's real-world coordinate position and the broadband Euclidean map was minimized. The reliability of the proposed method was confirmed through these experiments.
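A simplified sketch of the per-camera correction steps named in the abstract (normalization, binarization, noise removal) followed by centroid extraction of the laser spot is given below; it omits the matching-point-based Euclidean space map, and the threshold and kernel sizes are placeholder values, not the paper's.

```python
import cv2
import numpy as np

def laser_spot_centroid(frame_bgr: np.ndarray):
    """Normalize, binarize, and denoise one camera frame, then return the
    centroid of the bright blob as the pattern-laser image coordinate.
    Simplified sketch only; the curved-screen Euclidean-map correction is
    not reproduced here."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    norm = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)   # contrast normalization
    _, binary = cv2.threshold(norm, 220, 255, cv2.THRESH_BINARY)  # keep bright laser pixels
    clean = cv2.morphologyEx(binary, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))  # remove speckle noise
    m = cv2.moments(clean, binaryImage=True)
    if m["m00"] == 0:
        return None                     # no laser spot found in this frame
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```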

Development of a Backpack-Based Wearable Proximity Detection System

  • Shin, Hyungsub;Chang, Seokhee;Yu, Namgyenong;Jeong, Chaeeun;Xi, Wen;Bae, Jihyun
    • Fashion & Textile Research Journal / v.24 no.5 / pp.647-654 / 2022
  • Wearable devices come in a variety of shapes and sizes and are used in numerous fields. They can be integrated into clothing, gloves, hats, glasses, and bags and used in healthcare, medicine, and machine interfaces. These devices keep track of individuals' biological and behavioral data to support health communication and are often used for injury prevention. People with hearing loss or impaired vision find it more difficult to recognize an approaching person or object; sensing devices are particularly useful for such individuals because they assist with injury prevention by alerting them to the presence of people or objects in their immediate vicinity. Despite the obvious preventive benefits of developing Internet of Things-based devices for people with disabilities, development of these devices has been sluggish so far. In particular, compared with people without disabilities, people with hearing impairment have a much higher chance of averting danger when they are able to notice it in advance, yet research and development remain severely underfunded. In this study, we incorporated a wearable detection system, which uses an infrared proximity sensor, into a backpack. The system helps its users recognize when someone is approaching from behind through visual and tactile notifications, even if they have difficulty hearing or seeing objects in their surroundings. This backpack could help prevent accidents for all users, particularly those with visual or hearing impairments.
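Functionally, the detection system reduces to polling a proximity sensor and driving notifications with some hysteresis. The sketch below illustrates only that control loop; `read_distance_cm` and `notify` are hypothetical callables standing in for the backpack's sensor and actuators, not an API from the paper.

```python
import time

def proximity_alert_loop(read_distance_cm, notify,
                         threshold_cm=100.0, clear_cm=130.0):
    """Poll an infrared proximity sensor and fire a visual/tactile
    notification when something approaches from behind. Hysteresis
    (separate trigger and clear distances) avoids repeated alerts.
    The callables are hypothetical placeholders for real hardware."""
    alerted = False
    while True:
        d = read_distance_cm()
        if not alerted and d < threshold_cm:
            notify(True)       # e.g. turn on LED and vibration motor
            alerted = True
        elif alerted and d > clear_cm:
            notify(False)      # person/object has moved away
            alerted = False
        time.sleep(0.1)        # ~10 Hz polling
```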

Local Dehazing Method using a Haziness Degree Evaluator (흐릿함 농도 평가기를 이용한 국부적 안개 제거 방법)

  • Lee, Seungmin;Kang, Bongsoon
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.10 / pp.1477-1482 / 2022
  • Haze is a local weather phenomenon in which very small droplets float in the atmosphere, and the amount and characteristics of haze may vary by region. Haze reduces visibility, which can cause air traffic interference and vehicle accidents and degrade the quality of security CCTV footage. Therefore, over the past 10 years, research on haze removal has been actively conducted to reduce the damage caused by haze. In this study, local haze removal is performed by generating weights with a haziness degree evaluator so as to respond adaptively to haze-free, homogeneous-haze, and non-homogeneous-haze cases. The proposed method thereby overcomes the limitations of existing static haze removal methods, which assume that haze is present in the input image and remove it unconditionally. We also demonstrate the superiority of the proposed method through quantitative and qualitative performance evaluations against benchmark algorithms.
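The haziness degree evaluator itself is not specified in the abstract, but the weighting idea can be illustrated as a per-pixel blend between a dehazed image and the original, where the weight map stands in for the evaluator's output. The sketch below shows only that blending step, not the authors' method.

```python
import numpy as np

def blend_local_dehaze(original: np.ndarray, dehazed: np.ndarray,
                       haze_weight: np.ndarray) -> np.ndarray:
    """Blend a fully dehazed image with the original using a per-pixel
    haze-density weight in [0, 1], so haze-free regions are left untouched.
    `haze_weight` is a placeholder for the haziness degree evaluator output,
    which the abstract does not describe in detail."""
    w = np.clip(haze_weight, 0.0, 1.0)[..., None]   # broadcast over color channels
    out = w * dehazed.astype(np.float32) + (1.0 - w) * original.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)
```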

Deep Learning Methods for Recognition of Orchard Crops' Diseases

  • Sabitov, Baratbek;Biibsunova, Saltanat;Kashkaroeva, Altyn;Biibosunov, Bolotbek
    • International Journal of Computer Science & Network Security / v.22 no.10 / pp.257-261 / 2022
  • Diseases of agricultural plants have spread greatly across the regions of the Kyrgyz Republic in recent years and pose a serious threat to the yield of many crops, with consequences that can greatly affect the food security of an entire country. Due to force majeure events such as abnormal climatic conditions, the annual incomes of many farmers and agricultural producers can be destroyed locally. In addition, rapid detection of plant diseases remains difficult in many parts of the regions due to the lack of necessary infrastructure. Advances in computer vision based on machine and deep learning, combined with feedback from farmers and developers in building and updating a database of diseased and healthy plants, can pave the way for disease diagnosis. Currently, models are increasingly trained on publicly available datasets; that is, it has become popular to build new models on top of already-trained models. This approach, called transfer learning, is developing very quickly. Using the publicly available PlantVillage dataset, which consists of 54,306 images, or NewPlantVillage, with 87,356 images of diseased and healthy plant leaves collected under controlled conditions, it is possible to build a deep convolutional neural network to identify 14 crop species and 26 diseases. The trained model can achieve an accuracy of more than 99% on a specially selected test set.
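A minimal transfer-learning sketch in the spirit described, reusing an ImageNet-pretrained backbone and retraining only a new classification head for the 38 crop/disease classes of the standard PlantVillage split, is shown below with torchvision; the backbone choice (ResNet-18) is an assumption, not the paper's architecture.

```python
import torch.nn as nn
from torchvision import models

def build_leaf_classifier(num_classes: int = 38) -> nn.Module:
    """Transfer-learning sketch: reuse an ImageNet-pretrained backbone and
    replace its head to classify PlantVillage leaf images (14 crops and
    26 diseases form 38 crop/disease classes in the common split).
    The paper's exact architecture is not stated in the abstract."""
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in model.parameters():          # freeze the pretrained feature extractor
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head
    return model
```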