• Title/Summary/Keyword: Video Detection System (영상 감지 시스템)


Development of Video Image-Guided Setup (VIGS) System for Tomotherapy: Preliminary Study (단층치료용 비디오 영상기반 셋업 장치의 개발: 예비연구)

  • Kim, Jin Sung;Ju, Sang Gyu;Hong, Chae Seon;Jeong, Jaewon;Son, Kihong;Shin, Jung Suk;Shin, Eunheak;Ahn, Sung Hwan;Han, Youngyih;Choi, Doo Ho
    • Progress in Medical Physics, v.24 no.2, pp.85-91, 2013
  • At present, megavoltage computed tomography (MVCT) is the only method used to correct the position of tomotherapy patients. MVCT delivers extra radiation in addition to the treatment dose, and repositioning takes up much of the total treatment time. To address these issues, we propose a video image-guided setup (VIGS) system for correcting the position of tomotherapy patients. We developed an in-house program that corrects the patient's position using two orthogonal images obtained from two video cameras installed 90° apart and fastened inside the tomotherapy gantry. The program performs automatic registration using edge detection within a user-defined region of interest (ROI). A head-and-neck patient was simulated with a humanoid phantom. After taking the computed tomography (CT) image, tomotherapy planning was performed. To mimic a clinical treatment course, we positioned the phantom on the tomotherapy couch with an immobilization device and, using MVCT, corrected its position to match the planned one. Video images of the corrected position served as the reference images for the VIGS system. The position was first corrected 10 times using MVCT; then, based on the saved reference video images, it was corrected 10 times using the VIGS method, and the two correction methods were compared. Patient positioning with the video-imaging method (41.7±11.2 seconds) was significantly faster than with the MVCT method (420±6 seconds) (p<0.05), while there was no meaningful difference in accuracy between the two methods (x=0.11 mm, y=0.27 mm, z=0.58 mm, p>0.05). Because VIGS matches the accuracy of MVCT while greatly reducing the required time, it is expected to make the overall tomotherapy treatment process more efficient.
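
The registration step described above, edge detection inside a user-defined ROI matched against a reference frame, can be sketched with OpenCV. This is a minimal illustration under assumed names and thresholds, not the authors' in-house program; one such 2D shift would be estimated per orthogonal camera and combined into a 3D correction.

```python
import cv2

def edge_offset(reference, current, roi):
    """Estimate the in-plane shift of a user-defined ROI between a
    reference video frame and the current frame (grayscale uint8).
    roi = (x, y, w, h) in reference-image coordinates."""
    x, y, w, h = roi
    ref_edges = cv2.Canny(reference[y:y+h, x:x+w], 50, 150)
    cur_edges = cv2.Canny(current, 50, 150)
    # Correlate the edge maps; the peak locates the ROI in the current
    # frame, and the difference from (x, y) is the setup error.
    res = cv2.matchTemplate(cur_edges, ref_edges, cv2.TM_CCORR_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)
    return max_loc[0] - x, max_loc[1] - y   # (dx, dy) in pixels
```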

Development of vision-based security and service robot (영상 기반의 보안 및 서비스 로봇 개발)

  • Kim Jung-Nyun;Park Sang-Sung;Jang Dong-Sik
    • Journal of the Institute of Convergence Signal Processing, v.5 no.4, pp.308-316, 2004
  • Autonomous robots face many restrictions when turning and moving in indoor spaces. In this research, we adopted omni-directional wheels as the driving equipment, which enable the robot to move in horizontal and diagonal directions. Above all, we addressed the slip error that arises because this drive generates force by means of slip: we developed a slip-error correction algorithm. Whenever the robot moves in any direction, it corrects its course by comparing the pre-programmed direction with the current heading, which is estimated from floor lines extracted from the camera image. The robot also provides limited security and service functions: it detects the motion of vehicles, transmits pictures to multiple users, and can be moved by simple orders. In this paper, we propose a practical model that can be used in an office.
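
The course-correction idea, comparing the planned direction with a heading estimated from the extracted floor line, might look roughly like the following OpenCV sketch; the function name, thresholds, and angle convention are assumptions, not the paper's algorithm.

```python
import cv2
import numpy as np

def heading_error(floor_gray, planned_deg=0.0):
    """Estimate heading deviation from the dominant floor line.
    A line running straight ahead appears vertical (0 deg here)."""
    edges = cv2.Canny(floor_gray, 60, 180)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)
    if lines is None:
        return None                        # no floor line visible
    theta = np.degrees(lines[0][0][1])     # normal angle of best line
    tilt = theta if theta <= 90 else theta - 180
    return planned_deg - tilt              # feed to steering correction
```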


A Detection System of Drowsy Driving based on Depth Information for Ship Safety Navigation (선박의 안전운항을 위한 깊이정보 기반의 졸음 감지 시스템)

  • Ha, Jun;Yang, Won-Jae;Choi, Hyun-Jun
    • Journal of the Korean Society of Marine Environment & Safety, v.20 no.5, pp.564-570, 2014
  • This paper proposes a method to detect and track a human face using depth information as well as color images, for the detection of drowsy driving. It consists of a face detection procedure and a face tracking procedure. The face detection procedure uses the Adaboost method, which shows the best performance so far, but restricts the search to the region where the face is most likely to exist. The face found by the detection procedure is used as the template that starts the face tracking procedure. Experimental results showed that the proposed detection method takes only about 23% of the execution time of the existing method, and in all cases except one special case the tracking error ratio is as low as about 1%.
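
The region-restricted Adaboost detection can be illustrated with OpenCV's Haar cascade. Here the search window is derived from the previous detection (the paper restricts it using depth information), and the names and margins are illustrative.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face_restricted(gray, prior_box, margin=0.5):
    """Run the Haar/Adaboost detector only inside a window around the
    previous face position, cutting the search cost."""
    if prior_box is None:
        region, off = gray, (0, 0)         # fall back to full frame
    else:
        x, y, w, h = prior_box
        mx, my = int(w * margin), int(h * margin)
        x0, y0 = max(0, x - mx), max(0, y - my)
        region = gray[y0:y0 + h + 2 * my, x0:x0 + w + 2 * mx]
        off = (x0, y0)
    faces = face_cascade.detectMultiScale(region, 1.1, 4)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return (x + off[0], y + off[1], w, h)  # box in full-frame coords
```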

Image Recognition Using Colored-hear Transformation Based On Human Synesthesia (인간의 공감각에 기반을 둔 색청변환을 이용한 영상 인식)

  • Shin, Seong-Yoon;Moon, Hyung-Yoon;Pyo, Seong-Bae
    • Journal of the Korea Society of Computer and Information, v.13 no.2, pp.135-141, 2008
  • In this paper, we propose colored-hear recognition, which exploits synesthesia, the human trait in which vision and hearing share a sense. Considering how visual analysis shapes a human's structured recognition of objects, we studied how to let blind persons feel something similar to the visual impression of a real object. First, object boundaries are detected in the image data representing a scene. Then four features are extracted: each object's location relative to the image center, the feeling of its average color, its distance, and its area. Finally, these features are mapped to auditory factors, which blind persons use to perceive the scene. The proposed colored-hear transformation enables fast and detailed perception and can transmit several pieces of sensory information at the same time. We obtained good results when applying this concept to image recognition for blind persons.
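
A possible feature-to-audition mapping, purely illustrative since the paper does not publish its formulas, could map hue to pitch, horizontal position to stereo pan, distance to loudness, and area to duration:

```python
import colorsys

def features_to_audio(mean_rgb, cx_norm, distance_m, area_frac):
    """Hypothetical mapping of the four visual features the paper
    names (color, position, distance, area) to audio factors."""
    r, g, b = (c / 255.0 for c in mean_rgb)
    hue, _, _ = colorsys.rgb_to_hsv(r, g, b)
    freq = 220.0 * 2.0 ** (hue * 2)   # hue -> pitch, 220-880 Hz
    pan = 2.0 * cx_norm - 1.0         # x position -> stereo pan
    gain = 1.0 / (1.0 + distance_m)   # nearer objects sound louder
    dur = 0.2 + 0.8 * area_frac       # larger objects last longer
    return freq, pan, gain, dur
```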


Efficient Object Recognition by Masking Semantic Pixel Difference Region of Vision Snapshot for Lightweight Embedded Systems (경량화된 임베디드 시스템에서 의미론적인 픽셀 분할 마스킹을 이용한 효율적인 영상 객체 인식 기법)

  • Yun, Heuijee;Park, Daejin
    • Journal of the Korea Institute of Information and Communication Engineering, v.26 no.6, pp.813-826, 2022
  • AI-based image processing technologies have been widely studied in various fields. However, the lighter the board, the harder it is to run image processing algorithms, because of their heavy computation. In this paper, we propose a deep learning based object recognition method for lightweight embedded boards. A semantic segmentation architecture with a relatively small amount of computation, efficient neural network (ENet), first determines the region of interest; after masking that region, a more accurate detector, You Only Look Once (YOLO), operates on it with improved accuracy, so that object recognition runs in real time on lightweight embedded boards. This research is expected to be useful for autonomous driving applications, which must be much lighter and cheaper than the existing approaches used for object recognition.
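
The masking step, running the heavy detector only on the region that the lightweight segmentation network marked, can be sketched as follows; `mask_to_crop` is a hypothetical helper, and the ENet segmentation and YOLO inference themselves are assumed to happen elsewhere.

```python
import numpy as np

def mask_to_crop(mask, frame, pad=16):
    """Crop the frame to the bounding box of the semantic mask so the
    heavier detector only sees the region of interest."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                        # nothing segmented
    x0, x1 = xs.min(), xs.max()
    y0, y1 = ys.min(), ys.max()
    h, w = frame.shape[:2]
    x0, y0 = max(0, x0 - pad), max(0, y0 - pad)
    x1, y1 = min(w - 1, x1 + pad), min(h - 1, y1 + pad)
    return frame[y0:y1 + 1, x0:x1 + 1]     # pass this crop to YOLO
```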

A Study on Radar Video Fusion Systems for Pedestrian and Vehicle Detection (보행자 및 차량 검지를 위한 레이더 영상 융복합 시스템 연구)

  • Sung-Youn Cho;Yeo-Hwan Yoon
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.24 no.1, pp.197-205, 2024
  • Securing driving safety is the most important point in the development and commercialization of autonomous vehicles, so AI and big-data based algorithms are being studied to advance and optimize the recognition and detection of the various static and dynamic objects in front of and around the vehicle. Many studies recognize the same target by exploiting the complementary advantages of radar and camera, but they either do not use deep learning image processing or, because of radar performance limits, can associate targets only at short range. A fusion-based recognition method is therefore needed that builds a dataset collectable from the radar and camera equipment, calculates the error between the two sources, and recognizes them as the same target. Because whether detections are judged to be the same object depends on where the radar and the CCTV camera are installed, this paper aims to develop a technique that links their location information according to the installation positions.
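
One common way to realize such radar-camera association, sketched here under the assumption of a pre-calibrated ground-plane homography `H` (the paper's own calibration is installation-specific), is projection followed by gated nearest-neighbor matching:

```python
import numpy as np

def associate(radar_xy, image_xy, H, gate_px=40.0):
    """Project radar ground-plane points into the image with a
    pre-calibrated 3x3 homography H and match each to the nearest
    camera detection within a gating distance (pixels)."""
    pts = np.hstack([radar_xy, np.ones((len(radar_xy), 1))])
    proj = (H @ pts.T).T
    proj = proj[:, :2] / proj[:, 2:3]      # dehomogenize
    matches = []
    for i, p in enumerate(proj):
        d = np.linalg.norm(image_xy - p, axis=1)
        j = int(np.argmin(d))
        if d[j] < gate_px:
            matches.append((i, j))         # same physical target
    return matches
```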

Haptic recognition of the palm using ultrasound radiation force and its application (초음파 방사힘을 이용한 손바닥의 촉각 인식과 응용)

  • Kim, Sun Ae;Kim, Tae Yang;Lee, Yeol Eum;Lee, Soo Yeon;Jeong, Mok Kun;Kwon, Sung Jae
    • The Journal of the Acoustical Society of Korea, v.38 no.4, pp.467-475, 2019
  • A high-intensity ultrasound wave generates acoustic streaming and an acoustic radiation force when propagating through a medium. An acoustic radiation force generated in three-dimensional space can produce a solid tactile sensation, delivering spatial information directly to the human skin. We placed 154 ultrasound transmit elements with a frequency of 40 kHz on a concave circular dish and generated an acoustic radiation force at the focal point by transmitting ultrasound waves. To make the tactile sensation easier to feel, the transmit elements were excited by sine waves whose amplitude was modulated by a 60 Hz square wave. As an application of ultrasonic tactile sensing, the region in the air where the tactile sense forms is used as an indicator of the hand's position. We confirmed the utility of ultrasonic tactile feedback by implementing a system that reads the shape of the hand at the focal point and provides the number of fingers to a machine.
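
The excitation described, a 40 kHz carrier amplitude-modulated by a 60 Hz square wave, is easy to reproduce numerically; this sketch only generates the waveform and assumes a 1 MHz sample rate, with the per-element phasing needed for focusing omitted.

```python
import numpy as np

FS = 1_000_000       # assumed sample rate, 1 MHz
F_CARRIER = 40_000   # 40 kHz ultrasound carrier (from the paper)
F_MOD = 60           # 60 Hz square-wave modulation (from the paper)

def excitation(duration_s=0.1):
    """40 kHz sine carrier gated by a 60 Hz square wave, as used to
    make the radiation-force focal point easier to feel."""
    t = np.arange(int(FS * duration_s)) / FS
    carrier = np.sin(2 * np.pi * F_CARRIER * t)
    gate = (np.sin(2 * np.pi * F_MOD * t) >= 0).astype(float)
    return carrier * gate
```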

Real-Time Tracking of Moving Object by Adaptive Search in Spatial-temporal Spaces (시공간 적응탐색에 의한 실시간 이동물체 추적)

  • Kim, Gye-Young;Choi, Hyung-Ill
    • Journal of the Korean Institute of Telematics and Electronics B, v.31B no.11, pp.63-77, 1994
  • This paper describes a real-time system that, by analyzing a sequence of images, extracts motion information on a moving object and controls servo equipment to keep the moving object at the center of the image frame. An image is a vast two-dimensional signal, so analyzing the whole of each image takes a long time; in particular, the time needed to load pixels from memory to the processor grows steeply as the image size increases. To solve this problem and track a moving object in real time, this paper addresses how to selectively search the spatial and temporal domains. Based on this selective search, the paper presents the techniques essential to implementing a real-time tracking system: how to detect a moving object entering the camera's field of view and the direction of entry, how to determine the time interval between adjacent images, how to determine the nonstationary areas formed by a moving object and calculate the object's velocity and position from those areas, how to control the servo equipment to keep the object centered in the frame, and how to adjust the time interval (Δt) to track an object moving at variable speed.
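
The core of the selective search, differencing two frames only inside a window predicted from the object's last position, can be sketched with OpenCV; the window logic and threshold here are assumptions, not the paper's exact scheme.

```python
import cv2

def motion_centroid(prev_gray, cur_gray, window=None, thresh=25):
    """Find the centroid of inter-frame change, searching only the
    window predicted from the object's last position and velocity."""
    if window is not None:
        x, y, w, h = window
        prev_gray = prev_gray[y:y+h, x:x+w]
        cur_gray = cur_gray[y:y+h, x:x+w]
    diff = cv2.absdiff(prev_gray, cur_gray)
    _, moving = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    m = cv2.moments(moving, binaryImage=True)
    if m["m00"] == 0:
        return None                        # no motion in the window
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    if window is not None:
        cx, cy = cx + window[0], cy + window[1]
    return cx, cy                          # feed to servo controller
```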


Development of Fender Segmentation System for Port Structures using Vision Sensor and Deep Learning (비전센서 및 딥러닝을 이용한 항만구조물 방충설비 세분화 시스템 개발)

  • Min, Jiyoung;Yu, Byeongjun;Kim, Jonghyeok;Jeon, Haemin
    • Journal of the Korea institute for structural maintenance and inspection, v.26 no.2, pp.28-36, 2022
  • Port structures are exposed to various extreme external loads such as wind (typhoons), sea waves, and collisions with ships, so it is important to evaluate their structural safety periodically. To monitor port structures, especially rubber fenders, this study proposes a fender segmentation system that uses a vision sensor and deep learning. For fender segmentation, a new deep learning network is proposed that improves the encoder-decoder framework by inserting a receptive field block convolution module, inspired by the eccentric function of the human visual system, into a DenseNet backbone. To train the network, images of various fender types, such as BP, V, cell, cylindrical, and tire types, were collected and augmented with four methods: elastic distortion, horizontal flip, color jitter, and affine transforms. The proposed algorithm was trained and verified on these images, and it segmented precisely in real time with a higher IoU (84%) and F1 score (90%) than the conventional segmentation model, U-Net with a VGG16 backbone. The trained network was then applied to real images taken at a port in the Republic of Korea, where the fenders were segmented with high accuracy even though the dataset was small.
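
The four augmentations listed can be composed with, for example, the albumentations library; the parameter values below are illustrative, not the authors' settings, and passing the mask alongside the image keeps the labels aligned.

```python
import albumentations as A

# The four augmentation methods the paper lists, composed with the
# albumentations library; parameter values are illustrative only.
augment = A.Compose([
    A.ElasticTransform(alpha=40, sigma=6, p=0.5),
    A.HorizontalFlip(p=0.5),
    A.ColorJitter(brightness=0.2, contrast=0.2,
                  saturation=0.2, hue=0.1, p=0.5),
    A.Affine(scale=(0.9, 1.1), rotate=(-10, 10),
             translate_percent=0.05, p=0.5),
])

# Image and mask are transformed together so labels stay aligned:
#   out = augment(image=img, mask=mask)
#   img_aug, mask_aug = out["image"], out["mask"]
```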

Implementation of A Safe Driving Assistance System and Doze Detection (졸음 인식과 안전운전 보조시스템 구현)

  • Song, Hyok;Choi, Jin-Mo;Lee, Chul-Dong;Choi, Byeong-Ho;Yoo, Ji-Sang
    • Journal of the Institute of Electronics Engineers of Korea SP, v.49 no.3, pp.30-39, 2012
  • In this paper, a safe driving assistance system is proposed that detects the driver's doze status based on face and eye detection. Depending on the fatigue level, the system sounds an alarm or vibrates the seatbelt. To reduce the effects of backlight and strong sunlight, which lower the face and eye detection rate and cause false fatigue detections, post-processing techniques such as image equalization are used. The Haar transform and PCA are used for face detection. Using statistics on the facial and eye structural proportions of typical Koreans, the eye candidate area within the face is narrowed, which reduces the computational load. We also propose an eye-status detection algorithm based on the Hough transform and the eye width-height ratio; it detects blinking, and the blinking period determines the doze level. When a doze is detected, the system raises the alarm and vibrates the seatbelt through the controller area network (CAN). Four algorithms were implemented; the proposed one, built on a probability model, achieved a correct detection rate of 84.88% in indoor and in-car experiments, and 69.81% with an IR camera, a better result than the other algorithms.
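
A simplified version of the blink-based decision, using closure duration as a stand-in for the paper's blinking-period measurement, might look like this; the function name and threshold values are assumptions.

```python
def doze_level(eye_ratios, closed_thresh=0.25, closed_time_s=1.5):
    """Classify doze from a stream of (timestamp, height/width ratio)
    pairs: the eye counts as closed below closed_thresh; sustained
    closure beyond closed_time_s raises the alarm level."""
    closed_since = None
    for t, ratio in eye_ratios:
        if ratio < closed_thresh:
            if closed_since is None:
                closed_since = t
            elif t - closed_since > closed_time_s:
                return "DOZING"        # trigger alarm / seatbelt via CAN
        else:
            closed_since = None        # eye reopened; reset timer
    return "AWAKE"
```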