• Title/Summary/Keyword: RGB camera


Use of Mini-maps for Detection and Visualization of Surrounding Risk Factors of Mobile Virtual Reality (미니맵을 사용한 모바일 VR 사용자 주변 위험요소 시각화 연구)

  • Kim, Jin;Park, Jun
    • Journal of the Korea Computer Graphics Society
    • /
    • v.22 no.5
    • /
    • pp.49-56
    • /
    • 2016
  • Mobile virtual reality head-mounted displays such as Google Cardboard and Samsung Gear VR are being released, alongside PC-based VR HMDs such as the Oculus Rift and HTC Vive. However, when the user wears an HMD, it blocks the user's view of the surroundings, so the user may collide with nearby objects such as furniture, and there is no definitive solution to this problem. In this paper, we propose a method to reduce the risk of injury by visualizing, on a mini-map, the location and information of obstacles scanned with an RGB-D camera.
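
A minimal sketch of the idea, assuming nothing about the authors' implementation: depth pixels from an RGB-D camera are back-projected and dropped onto a top-down occupancy grid that could serve as the mini-map overlay. The intrinsics, depth range, and cell size below are assumed values.

```python
# Illustrative sketch (not the paper's code): project RGB-D depth pixels onto a
# top-down mini-map grid so nearby obstacles can be overlaid in the VR view.
import numpy as np

def depth_to_minimap(depth_m, fx=525.0, cx=319.5, map_size=128, cell_m=0.05, max_range=3.0):
    h, w = depth_m.shape
    us = np.arange(w)
    xs = (us[None, :] - cx) / fx * depth_m          # lateral offset in meters
    zs = depth_m                                    # forward distance in meters
    valid = (zs > 0.1) & (zs < max_range)
    # Convert metric coordinates to mini-map cells (camera at bottom-center).
    col = (xs[valid] / cell_m + map_size / 2).astype(int)
    row = (map_size - 1 - zs[valid] / cell_m).astype(int)
    ok = (col >= 0) & (col < map_size) & (row >= 0) & (row < map_size)
    minimap = np.zeros((map_size, map_size), dtype=np.uint8)
    minimap[row[ok], col[ok]] = 255                 # mark occupied cells
    return minimap

# Example with a synthetic depth frame (a flat obstacle 1.5 m in front):
depth = np.full((480, 640), 1.5, dtype=np.float32)
print(depth_to_minimap(depth).sum() > 0)
```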

Motion correction captured by Kinect based on synchronized motion database (동기화된 동작 데이터베이스를 활용한 Kinect 포착 동작의 보정 기술)

  • Park, Sang Il
    • Journal of the Korea Computer Graphics Society
    • /
    • v.23 no.2
    • /
    • pp.41-47
    • /
    • 2017
  • In this paper, we present a method for data-driven correction of noisy motion data captured from a low-end RGB-D camera such as the Kinect. Our key idea is to construct a motion database captured simultaneously with the Kinect and a specialized motion capture device, so that the database pairs each erroneous Kinect pose with its corresponding correct pose from the mocap device. At runtime, given motion data captured by the Kinect, we search the database for the K most similar Kinect poses and synthesize a new motion using only their corresponding mocap poses. We describe how to build such a motion database effectively and provide a method for querying the database for a desired motion. We also adopt the lazy learning framework to synthesize corrected poses from the query results.
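
A minimal sketch of the correction step under stated assumptions (pose vectors of stacked joint positions, inverse-distance blending weights); the paper's actual database construction and lazy-learning details are not reproduced here.

```python
# Illustrative sketch (assumed simplification): correct a noisy Kinect pose by finding its
# K nearest neighbors in a synchronized database of (kinect_pose, mocap_pose) pairs and
# blending the corresponding mocap poses.
import numpy as np

def correct_pose(kinect_pose, db_kinect, db_mocap, k=5, eps=1e-6):
    """kinect_pose: (J*3,) joint vector; db_kinect/db_mocap: (N, J*3) synchronized pairs."""
    dists = np.linalg.norm(db_kinect - kinect_pose, axis=1)
    idx = np.argsort(dists)[:k]                       # K most similar Kinect poses
    weights = 1.0 / (dists[idx] + eps)                # closer examples weigh more
    weights /= weights.sum()
    # Synthesize the corrected pose only from the corresponding mocap poses.
    return (weights[:, None] * db_mocap[idx]).sum(axis=0)

# Toy usage with random data (20 joints, 1000 database frames):
rng = np.random.default_rng(0)
db_k = rng.normal(size=(1000, 60))
db_m = db_k + rng.normal(scale=0.01, size=(1000, 60))   # "clean" counterparts
noisy = db_k[0] + rng.normal(scale=0.05, size=60)
print(correct_pose(noisy, db_k, db_m).shape)
```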

Implementation of Picture Surveillance System using xHTML (xHTML을 이용한 화상 감시 시스템 구현)

  • 정경택;송병만;마석주;전용일;정동수
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.7 no.7
    • /
    • pp.1421-1426
    • /
    • 2003
  • In this paper, we implement a picture surveillance system using an IBM-compatible PC with a web camera. RGB frames captured from the webcam through the USB port are stored to the hard disk at 5-second intervals. If an intruder is detected by the motion detection routine, a warning voice message is played, an intrusion notification is sent by e-mail, and the e-mail title is pushed to a mobile phone via WAP (Wireless Application Protocol). The detected image is stored to the hard disk with a 'month-day-hour-minute-second.jpg' file name, and the image is also transmitted to a web server through FTP (File Transfer Protocol) because an intruder could delete or destroy the image data on the local hard disk. The resulting surveillance system can be used over the Internet regardless of time and place.
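
A minimal sketch of the capture-and-detect loop, assuming simple frame differencing for the motion detection routine; the xHTML front end, e-mail, FTP, and WAP push parts are only indicated by comments.

```python
# Illustrative sketch (assumed, not the paper's implementation): webcam motion detection
# with frame differencing and timestamped JPEG snapshots.
import cv2, time

cap = cv2.VideoCapture(0)                  # USB webcam
ok, prev = cap.read()
if not ok:
    raise SystemExit("No camera found")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 5000:      # motion threshold (assumed value)
        name = time.strftime("%m-%d-%H-%M-%S") + ".jpg"
        cv2.imwrite(name, frame)           # e-mail / FTP upload / WAP push would follow here
    prev_gray = gray
    time.sleep(5)                          # capture at 5-second intervals
```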

Wound Contraction Effects of Percutaneous Electrical Stimulation on Excision Wound Models (백서의 적출창상에 대한 피하전기자극이 창상수축에 미치는 효과)

  • Gong, Gwang-Sik;Kim, Su-Hyon;Oh, Seok;Kim, Yong-Nam;Kim, Tae-Youl
    • The Journal of Korean Physical Therapy
    • /
    • v.23 no.1
    • /
    • pp.45-51
    • /
    • 2011
  • Purpose: This study investigated the effect of needle electrode stimulation at various frequencies on wound healing in rats with excision wounds. Methods: Twenty-four adult male Sprague-Dawley rats were assigned to one of four groups: control (n=6), acupuncture (n=6), low-rate (2 Hz) percutaneous electrical stimulation (n=6), and high-rate (100 Hz) percutaneous electrical stimulation (n=6). We analyzed morphological effects by measuring the area of the excision wound, the contraction rate, and chromatic red. A digital camera and an image analysis program were used to measure the wound area, which was also used to compute the contraction rate. Chromatic red was obtained from the red, green, and blue (RGB) values of the wound area. Results: The electroacupuncture stimulation groups showed significant healing effects compared to the control and acupuncture groups. Conclusion: Percutaneous electrical stimulation at various frequencies has a therapeutic effect on wound healing.
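
The abstract does not give the exact formula, so the sketch below assumes the common red-chromaticity definition r = R / (R + G + B) averaged over the wound pixels, plus a straightforward contraction rate computed from pixel areas.

```python
# Illustrative sketch (assumed formula, not necessarily the study's exact definition):
# red chromaticity of the wound region and percentage wound contraction.
import numpy as np

def chromatic_red(rgb_image, wound_mask):
    """rgb_image: (H, W, 3) uint8; wound_mask: (H, W) boolean wound region."""
    pixels = rgb_image[wound_mask].astype(np.float64)          # (N, 3) R, G, B
    r, g, b = pixels[:, 0], pixels[:, 1], pixels[:, 2]
    return np.mean(r / (r + g + b + 1e-9))

def contraction_rate(initial_area_px, current_area_px):
    # Wound contraction as percentage reduction from the initial wound area.
    return 100.0 * (initial_area_px - current_area_px) / initial_area_px

# Toy usage:
img = np.full((10, 10, 3), (200, 80, 60), dtype=np.uint8)
mask = np.zeros((10, 10), dtype=bool); mask[2:8, 2:8] = True
print(chromatic_red(img, mask), contraction_rate(36, 24))
```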

Map Error Measuring Mechanism Design and Algorithm Robust to Lidar Sparsity (라이다 점군 밀도에 강인한 맵 오차 측정 기구 설계 및 알고리즘)

  • Jung, Sangwoo;Jung, Minwoo;Kim, Ayoung
    • The Journal of Korea Robotics Society
    • /
    • v.16 no.3
    • /
    • pp.189-198
    • /
    • 2021
  • In this paper, we introduce a software/hardware system that can reliably calculate the distance from the sensor to a target model regardless of point cloud density. As 3D point cloud maps are widely adopted for SLAM and computer vision, the accuracy of the point cloud map is of great importance. However, a 3D point cloud map obtained from Lidar may exhibit different point densities depending on the choice of sensor, the measurement distance, and the object shape. Currently, when measuring map accuracy, highly reflective bands are used to generate distinctive points in the point cloud map, at which distances are measured manually. This manual process is time- and labor-consuming and is strongly affected by the Lidar sparsity level. To overcome these problems, this paper presents a hardware design that leverages high-intensity points from three planar surfaces. By calculating the distance from the sensor to the device, we verified with both an RGB-D camera and a Lidar that the automated method is much faster than the manual procedure and robust to sparsity. We also show that the system is not limited to indoor environments by conducting experiments with the Lidar sensor outdoors.
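
A minimal sketch of the automated measurement idea under stated assumptions: fit a plane to the high-intensity points of one planar face by least squares, then compute the sensor-to-plane distance; the actual device geometry and calibration are not modeled.

```python
# Illustrative sketch (assumed geometry): least-squares plane fit and point-to-plane
# distance from the sensor origin, replacing manual point picking.
import numpy as np

def fit_plane(points):
    """Least-squares plane through Nx3 points; returns unit normal n and offset d (n.x = d)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                         # direction of smallest variance
    return normal, normal @ centroid

def sensor_to_plane_distance(normal, d, sensor_origin=np.zeros(3)):
    return abs(normal @ sensor_origin - d) / np.linalg.norm(normal)

# Toy usage: points on the plane z = 2 m, sensor at the origin.
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200), np.full(200, 2.0)])
n, d = fit_plane(pts)
print(round(sensor_to_plane_distance(n, d), 3))   # approximately 2.0
```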

Improving Eye-gaze Mouse System Using Mouth Open Detection and Pop Up Menu (입 벌림 인식과 팝업 메뉴를 이용한 시선추적 마우스 시스템 성능 개선)

  • Byeon, Ju Yeong;Jung, Keechul
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.12
    • /
    • pp.1454-1463
    • /
    • 2020
  • An important factor in an eye-tracking PC interface for patients with general paralysis is the implementation of a mouse interface for manipulating the GUI. With a successfully implemented mouse interface, users can generate mouse events exactly at the point of their choosing. However, it is difficult to define this interaction in an eye-tracking interface. This problem, known as the Midas touch problem, has been a major focus of eye-tracking research. There have been many attempts to solve it using blinking, voice input, and so on, but these are not suitable for patients with general paralysis, some of whom cannot wink or speak. In this paper, we propose an eye-tracking mouse interface that solves the Midas touch problem and is suitable for such patients using a common RGB camera. The interface detects the opening and closing of the mouth to activate a pop-up menu from which the user can select a mouse event. After implementation, a performance experiment was conducted, and we found that the number of malfunctions and the time to perform tasks were reduced compared to the existing method.
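
A minimal sketch of a mouth-open test, assuming a mouth aspect ratio (MAR) computed from facial landmarks supplied by any RGB-camera landmark detector; the threshold and landmark choice are assumptions, not the paper's exact detector.

```python
# Illustrative sketch (assumed stand-in): mouth aspect ratio from lip landmarks; when MAR
# exceeds a threshold the mouth is treated as open and the pop-up menu would be shown.
import numpy as np

def mouth_aspect_ratio(top_lip, bottom_lip, left_corner, right_corner):
    """Each argument is an (x, y) landmark from a face landmark detector."""
    vertical = np.linalg.norm(np.subtract(top_lip, bottom_lip))
    horizontal = np.linalg.norm(np.subtract(left_corner, right_corner))
    return vertical / horizontal

def is_mouth_open(mar, threshold=0.6):
    return mar > threshold                      # threshold is an assumed value

# Toy usage: closed vs. open mouth landmark positions (pixel coordinates).
print(is_mouth_open(mouth_aspect_ratio((100, 120), (100, 130), (80, 125), (120, 125))))   # False
print(is_mouth_open(mouth_aspect_ratio((100, 110), (100, 140), (80, 125), (120, 125))))   # True
```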

Non-Contact Heart Rate Monitoring from Face Video Utilizing Color Intensity

  • Sahin, Sarker Md;Deng, Qikang;Castelo, Jose;Lee, DoHoon
    • Journal of Multimedia Information System
    • /
    • v.8 no.1
    • /
    • pp.1-10
    • /
    • 2021
  • Heart rate is a crucial physiological parameter that provides basic information about the state of the cardiovascular system and is used in medical diagnostics and fitness assessment. It has been demonstrated that a photoplethysmographic signal captured from facial video with a low-cost RGB camera can be used to retrieve heart rate remotely. Traditional heart rate measurement mostly requires direct contact with the body, which can be inconvenient for long-term measurement because of the discomfort it causes the subject. In this paper, we propose a non-contact approach to remote heart rate measurement that relies on the color intensity variation of the subject's facial skin. The proposed method is applied to two regions of the face, the forehead and the cheeks, and three different algorithms are used to measure the heart rate: Fast Fourier Transform (FFT), Independent Component Analysis (ICA), and Principal Component Analysis (PCA). The average accuracy over the three algorithms was 89.25% in both regions, and the FastICA algorithm showed a higher average accuracy of more than 92% in both regions. The proposed method obtained 1.94% higher average accuracy than the traditional method based on average color value.
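
A minimal sketch of the FFT variant under stated assumptions: the mean green-channel intensity of a forehead or cheek ROI is collected per frame, and the dominant frequency in the 0.75-3 Hz band is reported as beats per minute.

```python
# Illustrative sketch (assumed, simplified to the FFT branch): heart rate from the
# per-frame mean green intensity of a facial ROI.
import numpy as np

def heart_rate_fft(green_means, fps, low_hz=0.75, high_hz=3.0):
    """green_means: per-frame mean green intensity of the ROI; fps: video frame rate."""
    signal = np.asarray(green_means, dtype=np.float64)
    signal -= signal.mean()                                   # remove DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= low_hz) & (freqs <= high_hz)             # roughly 45-180 bpm
    return 60.0 * freqs[band][np.argmax(spectrum[band])]      # dominant frequency in bpm

# Toy usage: a synthetic 1.2 Hz (72 bpm) pulse sampled at 30 fps for 10 seconds.
t = np.arange(0, 10, 1 / 30)
roi_means = 128 + 0.5 * np.sin(2 * np.pi * 1.2 * t)
print(heart_rate_fft(roi_means, fps=30))                      # approximately 72.0
```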

Traffic Signal Detection and Recognition Using a Color Segmentation in a HSI Color Model (HSI 색상 모델에서 색상 분할을 이용한 교통 신호등 검출과 인식)

  • Jung, Min Chul
    • Journal of the Semiconductor & Display Technology
    • /
    • v.21 no.4
    • /
    • pp.92-98
    • /
    • 2022
  • This paper proposes a new method for traffic signal detection and recognition in the HSI color model. The proposed method first converts an ROI image from the RGB model to the HSI model to segment the colors of a traffic signal. The segmented colors are then dilated by morphological processing to connect the signal light with the signal light case, and finally the signal light and case are extracted by their aspect ratio using connected component analysis. The extracted components constitute the detection and recognition of the traffic signal lights. The proposed method is implemented in C on a Raspberry Pi 4 system with a camera module for real-time image processing. The system was mounted in a moving vehicle and recorded video like a vehicle black box; each frame of the recorded video was extracted and used to test the proposed method. The results show that the proposed method successfully detects and recognizes traffic signals.
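
A minimal sketch of the segmentation pipeline, using OpenCV's HSV space as a stand-in for the paper's HSI model; the color thresholds, dilation kernel, and aspect-ratio limits are assumed values.

```python
# Illustrative sketch (assumed thresholds): color segmentation, dilation, and
# connected-component filtering by aspect ratio for traffic signal candidates.
import cv2
import numpy as np

def detect_signal(roi_bgr):
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
    red = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255)) | cv2.inRange(hsv, (170, 100, 100), (180, 255, 255))
    green = cv2.inRange(hsv, (40, 100, 100), (90, 255, 255))
    results = []
    for color, mask in (("red", red), ("green", green)):
        mask = cv2.dilate(mask, np.ones((9, 9), np.uint8))    # connect light and case
        n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
        for i in range(1, n):                                 # skip background label 0
            x, y, w, h, area = stats[i]
            if area > 50 and 1.5 < w / float(h) < 4.0:        # case-like horizontal shape
                results.append((color, (x, y, w, h)))
    return results

# Toy usage on a synthetic ROI with a green signal-shaped patch:
roi = np.zeros((100, 200, 3), dtype=np.uint8)
cv2.rectangle(roi, (70, 40), (130, 60), (0, 255, 0), -1)      # BGR green light
print(detect_signal(roi))
```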

Object Detection and Localization on Map using Multiple Camera and Lidar Point Cloud

  • Pansipansi, Leonardo John;Jang, Minseok;Lee, Yonsik
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.422-424
    • /
    • 2021
  • In this paper, we present an approach that fuses multiple RGB cameras for visual object recognition, based on deep learning with convolutional neural networks, with 3D Light Detection and Ranging (LiDAR) to observe the environment and estimate object distance and position in a 3D point cloud map. The goal of perception with multiple cameras is to extract the crucial static and dynamic objects around the autonomous vehicle, especially in the blind spots, to help the vehicle navigate toward its goal. Running object detection on numerous cameras can slow down real-time processing, so the convolutional neural network algorithm chosen to address this problem must also suit the capacity of the hardware. The localization of the classified detected objects is derived from the 3D point cloud environment: the LiDAR point cloud data is first parsed, and a 3D Euclidean clustering method is used to localize the objects accurately. We evaluated the method on our own dataset collected with a VLP-16 and multiple cameras, and the results demonstrate the effectiveness of the method and the multi-sensor fusion strategy.
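
A minimal sketch of the Euclidean clustering stage under stated assumptions (KD-tree region growing with a fixed radius); the camera detection and fusion steps are outside this snippet.

```python
# Illustrative sketch (assumed, simplified): Euclidean clustering of a LiDAR point cloud
# by region growing over a KD-tree, in the same family as the method the paper describes.
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, radius=0.5, min_size=10):
    """points: (N, 3) LiDAR points; returns a list of index arrays, one per cluster."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = [seed], {seed}
        while queue:
            idx = queue.pop()
            for nb in tree.query_ball_point(points[idx], radius):
                if nb in unvisited:
                    unvisited.remove(nb)
                    cluster.add(nb)
                    queue.append(nb)
        if len(cluster) >= min_size:
            clusters.append(np.fromiter(cluster, dtype=int))
    return clusters

# Toy usage: two separated blobs should come out as two clusters.
rng = np.random.default_rng(2)
cloud = np.vstack([rng.normal(0, 0.1, (50, 3)), rng.normal(5, 0.1, (50, 3))])
print(len(euclidean_clusters(cloud)))          # 2
```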

Estimation of tomato maturity as a continuous index using deep neural networks

  • Taehyeong Kim;Dae-Hyun Lee;Seung-Woo Kang;Soo-Hyun Cho;Kyoung-Chul Kim
    • Korean Journal of Agricultural Science
    • /
    • v.49 no.4
    • /
    • pp.785-793
    • /
    • 2022
  • In this study, tomato maturity was estimated using deep learning for a harvesting robot. Tomato images were obtained with an RGB camera installed on a previously developed monitoring robot, and the samples were cropped to 128 × 128 images to generate a dataset for training the classification model. The classification model was constructed from convolutional neural networks, and the mean-variance loss was used to implicitly learn the distribution of the data features by class. In the test stage, tomato maturity was estimated as a continuous index in the range 0 to 1 by calculating the expected class value. The results show that the F1-score of the classification was approximately 0.94, comparable to other deep learning-based classification tasks in agriculture, and it was also possible to estimate the distribution within each maturity stage. From these results, we found that our approach can not only classify the discrete maturation stages of tomatoes but also estimate maturity as a continuous value.
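
A minimal sketch of the expected-class-value step: given softmax probabilities over ordered maturity stages, the continuous 0-1 index is their expectation; the number of stages used here is an assumption.

```python
# Illustrative sketch (assumed form): continuous maturity index as the expected value of
# the ordered maturity stages under the classifier's softmax output.
import numpy as np

def maturity_index(class_probs):
    """class_probs: (K,) softmax output over K ordered maturity stages (green -> ripe)."""
    k = len(class_probs)
    stage_values = np.linspace(0.0, 1.0, k)            # stage 0 -> 0.0, last stage -> 1.0
    return float(np.dot(class_probs, stage_values))    # expected value of the stage

# Toy usage with 6 stages: probability mass around stages 3 and 4 gives an index of about 0.7.
probs = np.array([0.0, 0.0, 0.05, 0.45, 0.45, 0.05])
print(round(maturity_index(probs), 3))
```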