• Title/Summary/Keyword: Camera-based Recognition (카메라 기반 인식)


High Quality Video Streaming System in Ultra-Low Latency over 5G-MEC (5G-MEC 기반 초저지연 고화질 영상 전송 시스템)

  • Kim, Jeongseok;Lee, Jaeho
    • KIPS Transactions on Computer and Communication Systems / v.10 no.2 / pp.29-38 / 2021
  • The Internet, including mobile networks, continues to evolve to overcome the limitations of physical distance and to provide or acquire information from remote locations. However, systems that use video as their primary source of information require higher bandwidth to recognize the situation at remote sites accurately through high-quality video, as well as lower latency for faster interaction between devices and users. The 5th generation mobile network provides features such as high bandwidth and precise location recognition that were not available in previous-generation technologies. In addition, Mobile Edge Computing (MEC), which minimizes network latency within the mobile network, requires a change in the traditional system architecture composed of smart devices and high-availability server systems. However, even with 5G and MEC, there is a limit to how far mobile network state fluctuations can be overcome by enhancing the network infrastructure alone, so this study proposes an ultra-low-latency high-definition video streaming system based on the SRT protocol, which provides Forward Error Correction and fast retransmission. The proposed system shows how to deploy software components designed around the characteristics of 5G and MEC to achieve sub-1-second latency for 4K real-time video streaming. In the last part of this paper, we analyze the most significant factor in the entire video transmission process for achieving the lowest possible latency.
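As a rough illustration only (not the paper's implementation), an SRT-based sender/receiver pipeline of the kind described above might be launched as follows. It assumes an FFmpeg build with libsrt; the address, port, encoder settings, and latency value are placeholders.

```python
# Illustrative sketch: push a camera feed over SRT with FFmpeg and pull it on
# the receiving side. Requires FFmpeg built with libsrt; all values are assumed.
import subprocess

# SRT latency budget; units follow FFmpeg's libsrt protocol documentation.
SRT_URL = "srt://127.0.0.1:9000?mode=caller&latency=50000"

def start_sender(device: str = "/dev/video0") -> subprocess.Popen:
    """Encode the local camera with low-latency x264 settings and send it over SRT."""
    cmd = [
        "ffmpeg",
        "-f", "v4l2", "-i", device,            # capture from a V4L2 camera (Linux)
        "-c:v", "libx264",
        "-preset", "ultrafast", "-tune", "zerolatency",
        "-g", "30",                            # short GOP so the receiver can join quickly
        "-f", "mpegts", SRT_URL,
    ]
    return subprocess.Popen(cmd)

def start_receiver(listen_port: int = 9000) -> subprocess.Popen:
    """Listen for the SRT stream and display it with minimal buffering."""
    cmd = [
        "ffplay", "-fflags", "nobuffer", "-flags", "low_delay",
        f"srt://0.0.0.0:{listen_port}?mode=listener",
    ]
    return subprocess.Popen(cmd)

if __name__ == "__main__":
    rx = start_receiver()
    tx = start_sender()
    tx.wait()
```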

Fast Natural Feature Tracking Using Optical Flow (광류를 사용한 빠른 자연특징 추적)

  • Bae, Byung-Jo;Park, Jong-Seung
    • The KIPS Transactions: Part B / v.17B no.5 / pp.345-354 / 2010
  • Visual tracking techniques for Augmented Reality are classified as either marker tracking or natural feature tracking. Marker-based tracking algorithms can be implemented efficiently enough to run in real time on mobile devices. Natural feature tracking methods, on the other hand, require many computationally expensive procedures: most previous methods include heavy feature extraction and pattern matching for each input image frame, which makes it difficult to implement real-time augmented reality applications with natural feature tracking on low-performance devices. The required computation time is also proportional to the number of patterns to be matched. To speed up natural feature tracking, we propose a novel fast tracking method based on optical flow. We implemented the proposed method on mobile devices so that it runs in real time and can be used with mobile augmented reality applications. Moreover, during tracking, we maintain the total number of feature points by inserting new feature points in proportion to the number of vanished ones. Experimental results show that the proposed method reduces the computational cost and also stabilizes the camera pose estimation results.
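A minimal sketch of optical-flow tracking with feature replenishment (not the authors' code): OpenCV's Shi-Tomasi detector seeds the tracker and pyramidal Lucas-Kanade propagates the points frame to frame, topping up whenever points are lost. The feature budget and detector parameters are assumptions.

```python
import cv2
import numpy as np

MAX_FEATURES = 200  # assumed feature budget

def track(video_path: str = "input.mp4") -> None:
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        return
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=MAX_FEATURES,
                                  qualityLevel=0.01, minDistance=7)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if pts is None or len(pts) == 0:
            # Nothing left to track: re-detect from scratch on the current frame.
            pts = cv2.goodFeaturesToTrack(gray, maxCorners=MAX_FEATURES,
                                          qualityLevel=0.01, minDistance=7)
            prev_gray = gray
            continue
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None,
                                                  winSize=(21, 21), maxLevel=3)
        pts = nxt[status.ravel() == 1].reshape(-1, 1, 2)  # keep only tracked points

        # Replenish: insert roughly as many new corners as were just lost,
        # keeping the total feature count near MAX_FEATURES.
        lost = MAX_FEATURES - len(pts)
        if lost > 0:
            fresh = cv2.goodFeaturesToTrack(gray, maxCorners=lost,
                                            qualityLevel=0.01, minDistance=7)
            if fresh is not None:
                pts = np.vstack([pts, fresh.astype(np.float32)])
        prev_gray = gray
    cap.release()
```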

Accident Prevention and Safety Management System for a Children School Bus (어린이 통학버스 사고 방지 및 안전 관리 시스템)

  • Kim, Hyeonju;Lee, Seungmin;Ham, Sojeong;Kim, Sunhee
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.7 / pp.446-452 / 2020
  • As the use of children's school buses increases, accidents caused by the negligence of school bus drivers and ride carers have also increased significantly. To prevent such accidents, the government has introduced various policies. We propose an accident prevention and safety management system for children's school buses. Through this system, bus drivers can easily check whether each child is seated and whether the seat belt is fastened, making it possible to respond quickly to children's conditions while driving. With the ability to recognize faces by analyzing camera images, children can use a seat belt that is automatically adjusted to their height, which helps prevent secondary injuries that may occur in the event of a traffic accident. In addition, a sleeping-child check system is provided to confirm that all children get off the bus, and a text service informs parents of their children's locations in real time. Based on a Raspberry Pi, the system is implemented with cameras, pressure sensors, motors, Bluetooth modules, and so on. The proposed system was attached to a bus model to confirm that this series of functions works correctly.
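A rough sketch of one seat-monitoring check of the kind described above, combining a pressure-sensor input with camera-based face detection on a Raspberry Pi. The GPIO pin, cascade model, and decision rule are assumptions, not the authors' design.

```python
import cv2
import RPi.GPIO as GPIO

SEAT_PIN = 17  # GPIO pin wired to a pressure-sensor comparator (assumed wiring)

GPIO.setmode(GPIO.BCM)
GPIO.setup(SEAT_PIN, GPIO.IN)

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def seat_status(frame) -> str:
    """Combine the pressure-sensor reading and a face check into one seat state."""
    occupied = GPIO.input(SEAT_PIN) == GPIO.HIGH
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if occupied and len(faces) > 0:
        return "child seated, face visible"
    if occupied:
        return "seat pressed, no face detected - check camera or seat belt"
    return "empty"

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(seat_status(frame))
cap.release()
GPIO.cleanup()
```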

Design and Implementation of Mobile Vision-based Augmented Galaga using Real Objects (실제 물체를 이용한 모바일 비전 기술 기반의 실감형 갤러그의 설계 및 구현)

  • Park, An-Jin;Yang, Jong-Yeol;Jung, Kee-Chul
    • Journal of Korea Game Society / v.8 no.2 / pp.85-96 / 2008
  • Recently, research on augmented games as a new game genre has attracted a lot of attention. An augmented game overlaps virtual objects on an augmented reality (AR) environment, allowing game players to interact with the AR environment by manipulating real and virtual objects. However, it is difficult to release existing augmented games to ordinary game players, as the games generally use very expensive and inconvenient 'backpack' systems. To solve this problem, several augmented games have been proposed that use mobile devices equipped with cameras, but they can only be enjoyed at previously prepared locations, as a 'color marker' or 'pattern marker' is used to overlap the virtual objects with the real environment. Accordingly, this paper introduces an augmented game, called augmented Galaga, based on the traditional, well-known Galaga and executed on mobile devices, which lets game players experience the game without any economic burden. Augmented Galaga uses real objects in real environments and recognizes them with scale-invariant features (SIFT) and Euclidean distance. Virtual aliens appear randomly around specific objects, several specific objects are used to keep the game interesting, and game players attack the virtual aliens by moving the mobile device towards a specific object and clicking a button on the device. As a result, we expect that augmented Galaga provides an exciting experience without any economic burden, based on a game paradigm in which the user interacts with both the physical world captured by the mobile camera and the virtual aliens generated automatically by the mobile device.
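A minimal sketch of the SIFT-plus-Euclidean-distance recognition step described above (not the game's actual code); the ratio-test threshold and required match count are assumed values.

```python
import cv2

sift = cv2.SIFT_create()
bf = cv2.BFMatcher(cv2.NORM_L2)   # Euclidean distance between SIFT descriptors

def is_registered_object(reference_img, camera_img, min_good_matches: int = 20) -> bool:
    """Return True when enough good SIFT matches link the camera view to the reference object."""
    _, ref_desc = sift.detectAndCompute(reference_img, None)
    _, cam_desc = sift.detectAndCompute(camera_img, None)
    if ref_desc is None or cam_desc is None:
        return False
    matches = bf.knnMatch(ref_desc, cam_desc, k=2)
    good = []
    for pair in matches:
        # Lowe's ratio test to discard ambiguous matches.
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    return len(good) >= min_good_matches

if __name__ == "__main__":
    ref = cv2.imread("target_object.png", cv2.IMREAD_GRAYSCALE)   # illustrative file names
    cam = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)
    print(is_registered_object(ref, cam))
```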

An Object Detection and Tracking System using Fuzzy C-means and CONDENSATION (Fuzzy C-means와 CONDENSATION을 이용한 객체 검출 및 추적 시스템)

  • Kim, Jong-Ho;Kim, Sang-Kyoon;Hang, Goo-Seun;Ahn, Sang-Ho;Kang, Byoung-Doo
    • Journal of Korea Society of Industrial Information Systems / v.16 no.4 / pp.87-98 / 2011
  • Detecting a moving object in video and tracking it are basic and necessary preprocessing steps in many video systems, such as object recognition, context awareness, and intelligent visual surveillance. In this paper, we propose a method that can detect a moving object quickly and accurately under conditions where the background and lighting change in real time. Furthermore, our system robustly detects an object even when the target is occluded by other objects. For effective detection, an Eigen-space and Fuzzy C-means (FCM) clustering are combined, and a CONDENSATION algorithm is used to track the detected object robustly. First, training data collected from background images are linearly transformed using Principal Component Analysis (PCA). Second, an Eigen-background is organized from selected principal components with good ability to discriminate between an object and the background. Next, an object is detected with FCM using the convolution of the Eigen-vectors from the previous step with the input image. Finally, the object is tracked by feeding the coordinates of the detected object into the CONDENSATION algorithm. Images containing various moving objects at the same time are collected and used as training data, so that the system adapts to changes of lighting and background with a fixed camera. Test results show that the proposed method detects an object robustly under changes of lighting and background as well as partial movement of the object.
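A simplified sketch of the eigen-background idea only (the FCM and CONDENSATION stages are omitted here): project each frame onto principal components learned from background frames and flag pixels with a large reconstruction residual. The component count and threshold are assumptions.

```python
import numpy as np

def build_eigen_background(bg_frames: np.ndarray, n_components: int = 8):
    """bg_frames: (N, H*W) flattened grayscale background images as floats."""
    mean = bg_frames.mean(axis=0)
    centered = bg_frames - mean
    # SVD of the centered frames gives the principal components (eigen-backgrounds).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]          # shapes: (H*W,), (k, H*W)

def foreground_mask(frame: np.ndarray, mean, components, thresh: float = 30.0):
    """Reconstruct the frame from the eigen-background and threshold the residual.
    thresh assumes 0-255 pixel values."""
    centered = frame.ravel() - mean
    coeffs = components @ centered                     # projection onto the Eigen-space
    reconstruction = components.T @ coeffs + mean      # best background explanation
    residual = np.abs(frame.ravel() - reconstruction)  # what the background cannot explain
    return (residual > thresh).reshape(frame.shape)
```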

Gaze Detection by Computing Facial and Eye Movement (얼굴 및 눈동자 움직임에 의한 시선 위치 추적)

Park, Kang-Ryoung (박강령)
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.2 / pp.79-88 / 2004
  • Gaze detection is locating, by computer vision, the position on a monitor screen where a user is looking. Gaze detection systems have numerous fields of application, such as man-machine interfaces that help the handicapped use computers and view control in three-dimensional simulation programs. In our work, we implement it as a computer vision system using a single IR-LED-based camera. To detect the gaze position, we locate facial features, which is performed effectively with the IR-LED-based camera and an SVM (Support Vector Machine). When a user gazes at a position on the monitor, we can compute the 3D positions of those features based on 3D rotation and translation estimation and an affine transform. The gaze position due to facial movement is then computed from the normal vector of the plane determined by the computed 3D feature positions. In addition, we use a trained neural network to detect the gaze position due to eye movement. Experimental results show that we can obtain the facial and eye gaze position on the monitor with an RMS error of about 4.8 cm between the computed and the real gaze positions.
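A small worked example of the geometric step described above, assuming the 3D positions of three facial feature points are already available: the facial gaze direction is taken along the normal of the plane they define and intersected with the monitor plane. All coordinates below are made up for illustration.

```python
import numpy as np

def face_plane_normal(p1, p2, p3):
    """Unit normal of the plane through three 3D facial feature points."""
    v1 = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    v2 = np.asarray(p3, dtype=float) - np.asarray(p1, dtype=float)
    n = np.cross(v1, v2)
    return n / np.linalg.norm(n)

def intersect_monitor(origin, normal, monitor_z: float = 0.0):
    """Intersect the gaze ray origin + t * normal with the monitor plane z = monitor_z."""
    t = (monitor_z - origin[2]) / normal[2]
    return origin + t * normal

if __name__ == "__main__":
    # Illustrative feature points (e.g., both eye corners and the nose tip), in cm.
    normal = face_plane_normal([-3, 0, 50], [3, 0, 50], [0, -4, 47])
    gaze_point = intersect_monitor(np.array([0.0, -1.0, 49.0]), normal)
    print(gaze_point)   # point on the monitor plane the face is oriented towards
```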

A Study on the Measurement of Respiratory Rate Using Image Alignment and Statistical Pattern Classification (영상 정합 및 통계학적 패턴 분류를 이용한 호흡률 측정에 관한 연구)

  • Moon, Sujin;Lee, Eui Chul
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.8 no.10 / pp.63-70 / 2018
  • Biomedical signal measurement technology using images has been developed, and research on respiration signal measurement for life monitoring has been carried out continuously. Existing techniques measure respiratory signals with a thermal imaging camera that captures the heat emitted from a person's body, and other research measures respiration rate by analyzing chest movement in real time. However, image processing with infrared thermal images may fail to detect the respiratory region because of external environmental factors (temperature change, noise, etc.), which lowers the accuracy of respiration-rate measurement. In this study, images were acquired with both a visible-light camera and an infrared thermal camera to enhance the respiratory-tract region. Based on the two images, features of the respiratory-tract region are extracted through face recognition and image alignment. The pattern of the respiratory signal is classified with a k-nearest neighbor classifier, one of the statistical classification methods. The respiration rate was calculated from the characteristics of the classified patterns, and the feasibility of respiration-rate measurement was verified by comparing the measured respiration rate with the actual respiration rate.
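A minimal sketch of the statistical-classification step only, assuming a per-window feature vector that is not taken from the paper: fixed-length windows of a respiratory-region motion signal are classified with a k-nearest-neighbor classifier on toy data.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def window_features(signal: np.ndarray) -> np.ndarray:
    """Simple per-window features: mean, standard deviation, dominant FFT bin (assumed choice)."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    return np.array([signal.mean(), signal.std(), spectrum[1:].argmax() + 1])

# Toy training windows with two synthetic pattern classes (labels 0 and 1).
rng = np.random.default_rng(0)
X = np.array([window_features(rng.normal(loc=label, scale=1.0, size=64))
              for label in (0, 1) for _ in range(20)])
y = np.array([0] * 20 + [1] * 20)

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)

# Classify a new window of the motion signal.
test_window = rng.normal(loc=1.0, scale=1.0, size=64)
print(knn.predict([window_features(test_window)]))
```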

Distortion Calibration and FOV Adjustment in Video See-through AR using Mobile Phones (모바일 폰을 사용한 비디오 투과식 증강현실에서의 왜곡 보정과 시야각 조정)

  • Widjojo, Elisabeth Adelia;Hwang, Jae-In
    • Journal of Broadcast Engineering / v.21 no.1 / pp.43-50 / 2016
  • In this paper, we present a distortion correction method for wearable Augmented Reality (AR) on mobile phones. Head-Mounted Displays (HMDs) that use mobile phones, such as the Samsung Gear VR or Google Cardboard, introduce lens distortion of the rendered image to the user. In the case of AR in particular, the distortion is more complicated because of the duplicated optical systems of the mobile phone's camera and the HMD's lens. Furthermore, such distortions cause mismatches in the user's visual cognition or perception. Naturally, we can assume that a transparent wearable display is the ideal visual system, generating the least misperception. Therefore, the image from the mobile phone must be corrected to cancel this distortion and make a transparent-like AR display with a mobile-phone-based HMD. We developed a transparent-like display in the mobile wearable AR environment focusing on two issues: pincushion distortion and field of view. We implemented our technique and evaluated its performance.
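A small sketch of radial pre-distortion of the kind such HMD viewers rely on (not the paper's calibration procedure): the rendered frame is warped with a barrel-style model so that the lens's pincushion distortion roughly cancels it. The coefficients k1 and k2 and the file names are assumed values.

```python
import cv2
import numpy as np

def predistort(image: np.ndarray, k1: float = 0.25, k2: float = 0.05) -> np.ndarray:
    h, w = image.shape[:2]
    # Normalized coordinates centered on the image.
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    x = (xs - w / 2) / (w / 2)
    y = (ys - h / 2) / (h / 2)
    r2 = x * x + y * y
    # Radial polynomial: scale > 1 toward the edges samples the source farther out,
    # compressing the rendered content toward the center (barrel-style pre-distortion).
    scale = 1 + k1 * r2 + k2 * r2 * r2
    map_x = (x * scale * (w / 2) + w / 2).astype(np.float32)
    map_y = (y * scale * (h / 2) + h / 2).astype(np.float32)
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)

if __name__ == "__main__":
    frame = cv2.imread("rendered_frame.png")
    if frame is not None:
        cv2.imwrite("predistorted_frame.png", predistort(frame))
```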

Depth Image based Chinese Learning Machine System Using Adjusted Chain Code (깊이 영상 기반 적응적 체인 코드를 이용한 한자 학습 시스템)

  • Kim, Kisang;Choi, Hyung-Il
    • The Journal of the Korea Contents Association / v.14 no.12 / pp.545-554 / 2014
  • In this paper, we propose an online Chinese-character learning system with a depth camera, in which the system presents a Chinese character on a screen and the user draws the presented character with a hand gesture. We develop a hand tracking method and suggest an adjusted chain code to represent the constituent strokes of a Chinese character. For hand tracking, a fingertip is detected and verified. The adjusted chain code is designed to contain information on the order and relative length of each constituent stroke as well as on the directional variation of the sample points. Such information is very efficient for real-time matching and for checking incorrectly drawn parts of a stroke.
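A small sketch of turning fingertip sample points into an 8-direction chain code, the kind of stroke representation the entry describes; the paper's adjusted bookkeeping of stroke order and relative length is not reproduced here.

```python
import numpy as np

def chain_code(points):
    """Map each step between consecutive sample points to one of 8 directions (45-degree bins, 0 = +x axis)."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points[:-1], points[1:]):
        angle = np.arctan2(y1 - y0, x1 - x0)               # direction of the step
        codes.append(int(round(angle / (np.pi / 4))) % 8)  # quantize to 45-degree bins
    return codes

# Example: a roughly horizontal stroke followed by a downward hook (image coordinates).
stroke = [(0, 0), (5, 0), (10, 1), (11, 6), (11, 12)]
print(chain_code(stroke))
```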

Classifying Color Codes Via k-Mean Clustering and L*a*b* Color Model (k-평균 클러스터링과 L*a*b* 칼라 모델에 의한 칼라코드 분류)

  • Yoo, Hyeon-Joong
    • The Journal of the Korea Contents Association / v.7 no.2 / pp.109-116 / 2007
  • To reduce the effect of color distortions on reading colors, it is desirable to statistically process as many pixels in each color region as possible. This requires segmentation, which usually relies on edge detection. However, edges in color codes can be disconnected due to various distortions such as dark current, color cross, zipper effect, shade, and reflection, to name a few, and edge linking is also a difficult process. In this paper, k-means clustering was performed on the images where edge detectors failed to segment the regions. Experiments were conducted on 311 images taken in different environments with different cameras, with the primary and secondary colors selected randomly for each color code region. While the segmentation rate of the edge detectors was 89.4%, the proposed method increased it to 99.4%. Color recognition was performed based on the hue, a*, and b* components, with an accuracy of 100% for the successfully segmented cases.
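A minimal sketch of the clustering step described above: convert a color-code patch to L*a*b*, run k-means on its pixels, and read off the dominant clusters' a*/b* chroma. The cluster count and file name are illustrative.

```python
import cv2
import numpy as np

def dominant_lab_colors(patch_bgr: np.ndarray, k: int = 4):
    """Cluster the patch pixels in L*a*b* and return cluster centers, most populous first."""
    lab = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2LAB).reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(lab, k, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
    counts = np.bincount(labels.ravel(), minlength=k)   # pixels per cluster
    order = np.argsort(counts)[::-1]
    return centers[order]   # each row: (L*, a*, b*) in OpenCV's 0-255 scaling

if __name__ == "__main__":
    patch = cv2.imread("color_code_patch.png")
    if patch is not None:
        for L, a, b in dominant_lab_colors(patch):
            # Shift a*/b* from OpenCV's 0-255 encoding back to the signed range.
            print(f"L*={L:.0f}  a*={a - 128:.0f}  b*={b - 128:.0f}")
```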