• Title/Summary/Keyword: stereo-camera

Search Results: 610

Joint Reasoning of Real-time Visual Risk Zone Identification and Numeric Checking for Construction Safety Management

  • Ali, Ahmed Khairadeen; Khan, Numan; Lee, Do Yeop; Park, Chansik
    • International conference on construction engineering and project management / 2020.12a / pp.313-322 / 2020
  • Recognizing risk hazards is a vital step in effectively preventing accidents on a construction site. Advances in computer vision and the availability of large visual databases of construction sites make it possible to act quickly on human error and disaster situations that may occur during management supervision. It is therefore necessary to analyze the risk factors that must be managed on the construction site and to review appropriate, effective technical methods for each risk factor. This research analyzes Occupational Safety and Health Administration (OSHA) risk-zone identification rules that can be adopted by image recognition technology and classifies their risk factors according to the applicable technical method. To this end, the research develops a pattern-oriented classification of OSHA rules that can support large-scale safety hazard recognition. It uses joint reasoning over risk-zone identification and numeric input, employing a stereo camera integrated with an image detection algorithm (YOLOv3) and the Pyramid Stereo Matching Network (PSMNet). The resulting system identifies risk zones and raises an alarm if a target object enters such a zone. It also determines numerical information about a target, recognizing its length, spacing, and angle. Applying joint-logic image detection algorithms may improve the speed and accuracy of hazard detection by merging more than one factor to prevent accidents on the job site.

  • PDF
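
The abstract describes fusing 2-D detections (e.g. YOLOv3) with stereo depth (e.g. PSMNet) to decide whether a target has entered a risk zone. As a rough illustration only, not the authors' implementation, the Python sketch below back-projects a detected target's pixel and depth into camera coordinates and tests it against an axis-aligned risk-zone box; the intrinsics, detection, depth, and zone bounds are all assumed values.

```python
import numpy as np

def pixel_to_camera_xyz(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with metric depth into camera coordinates."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def inside_risk_zone(point_xyz, zone_min, zone_max):
    """Axis-aligned 3-D box test for a hypothetical risk zone."""
    return bool(np.all(point_xyz >= zone_min) and np.all(point_xyz <= zone_max))

# Hypothetical outputs: a worker's bounding-box centre from a detector and its stereo depth.
u, v = 640, 360              # bounding-box centre in pixels
depth_m = 4.2                # metric depth at that pixel
fx = fy = 1000.0             # assumed focal lengths in pixels
cx, cy = 640.0, 360.0        # assumed principal point

p = pixel_to_camera_xyz(u, v, depth_m, fx, fy, cx, cy)
zone_min = np.array([-1.0, -2.0, 3.0])   # assumed risk-zone bounds in metres
zone_max = np.array([ 1.0,  2.0, 6.0])

if inside_risk_zone(p, zone_min, zone_max):
    print("ALARM: target entered the risk zone at", p)
```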

Images Grouping Technology based on Camera Sensors for Efficient Stitching of Multiple Images (다수의 영상간 효율적인 스티칭을 위한 카메라 센서 정보 기반 영상 그룹핑 기술)

  • Im, Jiheon; Lee, Euisang; Kim, Hoejung; Kim, Kyuheon
    • Journal of Broadcast Engineering / v.22 no.6 / pp.713-723 / 2017
  • Because a panoramic image can overcome the limited viewing angle of a single camera and provide a wide field of view, it has been actively studied in the fields of computer vision and stereo cameras. To generate a panoramic image, stitching images taken by multiple ordinary cameras is widely used instead of a wide-angle camera, whose images suffer from distortion, because stitching can reduce that distortion. Image stitching creates descriptors of feature points extracted from multiple images, compares the similarities of the feature points, and links the images together into one image. Each feature point carries several hundred dimensions of information, so data processing time increases as more images are stitched. In particular, when a panorama of an object is generated from images photographed by many unspecified cameras, extracting the overlapping feature points of similar images takes even longer. In this paper, we propose a preprocessing step that makes stitching efficient for images obtained from many unspecified cameras of one object or environment. Data processing time is reduced by pre-grouping images based on camera sensor information, which lowers the number of images to be stitched at one time; the groups are then stitched hierarchically to create one large panorama. Experimental results confirm that the proposed grouping preprocessing greatly reduces the stitching time for a large number of images.
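
The grouping step is described only at a high level in the abstract. As a rough sketch under assumed metadata (a local x/y position in metres and a compass yaw per image, which the paper does not specify), the snippet below partitions images into groups before any feature matching, so that each group can be stitched separately and the partial panoramas merged hierarchically.

```python
from collections import defaultdict

def group_by_sensor(images, pos_cell_m=5.0, yaw_bin_deg=45.0):
    """Group images whose (hypothetical) position and compass yaw fall in the same cell.

    `images` is a list of dicts like {"file": ..., "x": ..., "y": ..., "yaw_deg": ...},
    where x/y are metres in a local frame and yaw_deg is the camera heading.
    """
    groups = defaultdict(list)
    for img in images:
        key = (int(img["x"] // pos_cell_m),
               int(img["y"] // pos_cell_m),
               int((img["yaw_deg"] % 360.0) // yaw_bin_deg))
        groups[key].append(img["file"])
    return list(groups.values())

# Each group would then be stitched independently, and the partial panoramas
# merged hierarchically into one large panorama.
shots = [
    {"file": "a.jpg", "x": 0.5,  "y": 1.0, "yaw_deg": 10.0},
    {"file": "b.jpg", "x": 1.2,  "y": 0.8, "yaw_deg": 20.0},
    {"file": "c.jpg", "x": 40.0, "y": 2.0, "yaw_deg": 200.0},
]
print(group_by_sensor(shots))   # [['a.jpg', 'b.jpg'], ['c.jpg']]
```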

Automation of Bio-Industrial Process via Tele-Task Command (I) - Identification and 3D Coordinate Extraction of Object - (원격작업 지시를 이용한 생물산업공정의 생력화 (I) -대상체 인식 및 3차원 좌표 추출-)

  • Kim, S. C.; Choi, D. Y.; Hwang, H.
    • Journal of Biosystems Engineering / v.26 no.1 / pp.21-28 / 2001
  • Major deficiencies of current automation schemes, including various robots for bioproduction, are the lack of task adaptability and real-time processing, low job performance for diverse tasks, lack of robustness of task results, high system cost, and loss of operator confidence. This paper proposes a scheme that addresses the limited task abilities of conventional computer-controlled automatic systems: man-machine hybrid automation via tele-operation, which can handle various bioproduction processes. The scheme has two parts, efficient task sharing between the operator and the computer-controlled machine (CCM) and an efficient interface between the operator and the CCM. To realize the proposed concept, the task of identifying an object and extracting its 3D coordinates was selected. 3D coordinate information was obtained by camera calibration, using the camera as a measurement device. Two stereo images were acquired by moving a single camera a known distance in the horizontal direction normal to the focal axis and capturing an image at each location. The transformation matrix for camera calibration was obtained by a least-squares approach using six known pairs of corresponding points in the 2D image and 3D world space. The 3D world coordinate was then computed from the two sets of image pixel coordinates using the calibrated transformation matrix. As the interface between the operator and the CCM, a touch-pad screen mounted on the monitor and a remotely captured imaging system were used. The operator indicated an object by touching the captured image on the touch-pad screen; a local image-processing area of a certain size was specified around the touch, and image processing was performed within that local area to extract the desired features of the object. An MS Windows-based interface program was developed in Visual C++ 6.0 with four modules: remote image acquisition, task command, local image processing, and 3D coordinate extraction. The proposed scheme showed the feasibility of real-time processing, robust and precise object identification, and adaptability to various jobs and environments through the selected sample tasks.

  • PDF
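
The calibration described in the abstract, a transformation matrix estimated by least squares from six known 2-D/3-D point pairs followed by 3-D reconstruction from the two camera views, matches the general form of a direct linear transformation. The sketch below shows that general form with assumed array layouts; it is not the authors' code.

```python
import numpy as np

def calibrate_dlt(world_pts, image_pts):
    """Estimate a 3x4 projection matrix from >= 6 known 3-D/2-D point pairs (DLT).

    world_pts: (N, 3) array of 3-D world coordinates.
    image_pts: (N, 2) array of corresponding pixel coordinates.
    """
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # Least-squares solution: right singular vector for the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 4)

def triangulate(P_left, P_right, uv_left, uv_right):
    """Recover a 3-D point from its pixel coordinates in two calibrated views."""
    def rows(P, uv):
        u, v = uv
        return [u * P[2] - P[0], v * P[2] - P[1]]
    A = np.array(rows(P_left, uv_left) + rows(P_right, uv_right))
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```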

Robust 3-D Motion Estimation Based on Stereo Vision and Kalman Filtering (스테레오 시각과 Kalman 필터링을 이용한 강인한 3차원 운동추정)

  • 계영철
    • Journal of Broadcast Engineering / v.1 no.2 / pp.176-187 / 1996
  • This paper deals with the accurate estimation of the 3-D pose (position and orientation) of a moving object with respect to the world frame (or robot base frame), based on a sequence of stereo images taken by cameras mounted on the end-effector of a robot manipulator. This work is an extension of the previous work [1]. Emphasis is given to 3-D pose estimation relative to the world (or robot base) frame in the presence not only of measurement noise in the 2-D images [1] but also of camera position errors due to random noise in the joint angles of the robot manipulator. To this end, a new set of discrete linear Kalman filter equations is derived, based on the following: 1) the orientation error of the object frame due to measurement noise in the 2-D images is modeled with reference to the camera frame by analyzing noise propagation through 3-D reconstruction; 2) an extended Jacobian matrix is formulated by combining the result of 1) with the orientation error of the end-effector frame due to joint angle errors, through robot differential kinematics; and 3) the rotational motion of the object, which is nonlinear in nature, is linearized based on quaternions. Motion parameters are computed from the estimated quaternions using the iterated least-squares method. Simulation results show a significant reduction of estimation errors and demonstrate accurate convergence of the estimated motion parameters to the true values.

  • PDF
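
The paper derives a specific discrete linear Kalman filter whose state involves quaternion-linearized rotation and an extended Jacobian; only the generic predict/update cycle that such a filter builds on is sketched below, with all matrices assumed to be supplied by the caller.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a discrete linear Kalman filter.

    x, P : prior state estimate and covariance
    z    : new measurement
    F, H : state-transition and measurement matrices
    Q, R : process and measurement noise covariances
    """
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```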

IoT Based Intelligent Position and Posture Control of Home Wellness Robots (홈 웰니스 로봇의 사물인터넷 기반 지능형 자기 위치 및 자세 제어)

  • Lee, Byoungsu; Hyun, Chang-Ho; Kim, Seungwoo
    • Journal of IKEEE / v.18 no.4 / pp.636-644 / 2014
  • This paper presents the technical implementation of a sensing platform for a home wellness robot. First, a self-localization technique is based on the smart home, objects in the home environment, and IoT (Internet of Things) communication with the home wellness robot. RF tags are installed in the smart home, and absolute coordinate information is acquired by an object equipped with an RF reader. Bluetooth communication between the object and the home wellness robot then delivers this absolute coordinate information to the robot, after which the robot's relative coordinates are found and self-localization is completed using its stereo camera. Second, the paper proposes a vision-based fuzzy control method for approaching an object. Using the stereo camera mounted on the face of the home wellness robot, depth information to the object is extracted, and the angle between the object and the robot is obtained from the object's deviation from the center of the image. The obtained information is written to a look-up table and used for attitude control while approaching the object. Experiments with the home wellness robot in the smart home environment confirm the performance of the proposed self-localization and posture control methods.
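
As a rough illustration of the attitude-control idea in the abstract (the object's angular offset from the image centre, mapped through a look-up table), the sketch below uses a hypothetical pinhole approximation and made-up table bins; the paper's actual fuzzy controller and depth handling are not reproduced.

```python
def bearing_from_pixel(u, image_width, hfov_deg):
    """Approximate horizontal angle (deg) of a target pixel relative to the optical axis."""
    return (u - image_width / 2.0) / image_width * hfov_deg

def lookup_turn_command(angle_deg):
    """Toy look-up table mapping angular error to a turn command (hypothetical bins)."""
    table = [(-90, -15, "turn left fast"), (-15, -3, "turn left slow"),
             (-3, 3, "go straight"), (3, 15, "turn right slow"), (15, 90, "turn right fast")]
    for lo, hi, cmd in table:
        if lo <= angle_deg < hi:
            return cmd
    return "stop"

angle = bearing_from_pixel(u=500, image_width=640, hfov_deg=60.0)  # object right of centre
print(round(angle, 1), lookup_turn_command(angle))                 # ~16.9 deg -> "turn right fast"
```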

Developing a Sensory Ride Film 'Dragon Dungeon Racing' (효율적인 입체 라이드 콘텐츠 제작을 위한 연구)

  • Chae, Eel-Jin; Choi, Chul-Young; Choi, Kyu-Don; Kim, Ki-Hong
    • The Journal of the Korea Contents Association / v.11 no.2 / pp.178-185 / 2011
  • The recent development of 3D technology and its application contents has made it possible for people to experience a wider variety of 3D contents, such as 3D/4D, VR, 3D ride films, IMAX, and sensory 3D games, at theme parks, large-scale exhibitions, 4D cinemas, and video rides. Among these, the video ride, a motion-based genre in which viewers are immersed in virtual reality and gain indirect experiences, is gaining particular popularity. This study introduces the production process of sensory 3D imagery and ride films, a genre that has recently been attracting attention. In selecting material for the 3D images, spaces and settings suitable for the intense movement of the ride are studied, and examples are given of realizing creative direction ideas and effective techniques using the stereo camera functions first introduced in Maya 2009. If experts in 3D image production create more engaging stories that draw on cultural diversity and adopt enhanced 3D production techniques for high-quality contents, domestic companies will be able to compete with their foreign counterparts and establish their own strong domains in the image contents sector.

An Image Processing System for the Harvesting Robot (포도수확용 로봇 개발을 위한 영상처리시스템)

  • Lee, Dae-Weon; Kim, Dong-Woo; Kim, Hyun-Tae; Lee, Yong-Kuk; Si-Heung
    • Journal of Bio-Environment Control / v.10 no.3 / pp.172-180 / 2001
  • Harvesting grapes in Korea requires a great deal of labor, since the fruit is currently cut and grabbed by hand. In other countries, especially France, grape harvesters have been developed for wine processing rather than for fresh table grapes, and a harvester for fresh grapes has not yet been developed. This study therefore designed and constructed an image processing system for a fresh grape harvester. The development involved integrating a vision system with a personal computer and two cameras. Grape recognition, which must find the accurate cutting position in three dimensions for the end-effector, requires separating the object from the background using two different images from the two cameras. Based on the results of this research, the following conclusions were made: the model grape was located and measured at less than 1,100 mm from the camera center, i.e., the midpoint between the two cameras; the calculated distance had an error within 5 mm using the model image in the laboratory; and the difference between the actual and calculated distances was also within 5 mm using the stereo vision system in the field. The image processing system thus proved reliable for measuring the distance between the camera center and the grape fruit, and it could be mounted on a grape harvester to locate the position of a grape fruit.

  • PDF
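
The distance measurement reported in the abstract (errors within 5 mm at up to about 1,100 mm) rests on standard stereo triangulation for a rectified camera pair. A minimal sketch of that relation is shown below; the focal length, baseline, and disparity are assumed values for illustration, not the harvester's parameters.

```python
def stereo_depth_mm(x_left_px, x_right_px, focal_px, baseline_mm):
    """Depth from horizontal disparity for a rectified stereo pair: Z = f * B / d."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("Point must have positive disparity in a rectified pair")
    return focal_px * baseline_mm / disparity

# Assumed numbers: 800 px focal length, 100 mm baseline, 80 px disparity -> 1000 mm depth.
print(stereo_depth_mm(400, 320, focal_px=800.0, baseline_mm=100.0))
```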

A Home-Based Remote Rehabilitation System with Motion Recognition for Joint Range of Motion Improvement (관절 가동범위 향상을 위한 원격 모션 인식 재활 시스템)

  • Kim, Kyungah; Chung, Wan-Young
    • Journal of the Institute of Convergence Signal Processing / v.20 no.3 / pp.151-158 / 2019
  • Patients with disabilities caused by disasters, injuries, or chronic illness, and elderly people whose range of body motion is limited by aging, are advised to participate in rehabilitation programs at hospitals. Typically, however, commuting without help is not simple for them because of their limited mobility outside the home. From the hospital's perspective, maintaining staff to run rehabilitation sessions also increases costs. For these reasons, this paper develops a home-based remote rehabilitation system using motion recognition that does not require help from others. The system runs on a personal computer with a stereo camera at home, and the user's motion is monitored in real time through motion recognition. It tracks the joint range of motion (joint ROM) of particular body parts to check improvement in body function. For demonstration, a total of four subjects of various ages and health conditions participated. Their motion data were collected during three exercise sessions, each repeated nine times per person, and the results were compared.
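
The system tracks joint range of motion from recognized body motion. A minimal sketch of how a joint ROM could be computed from tracked 3-D joint positions is given below; the shoulder-elbow-wrist triplet and the sample coordinates are hypothetical, and the paper's own skeleton-tracking pipeline is not reproduced.

```python
import numpy as np

def joint_angle_deg(a, b, c):
    """Angle at joint b (degrees) formed by 3-D points a-b-c, e.g. shoulder-elbow-wrist."""
    v1 = np.asarray(a, float) - np.asarray(b, float)
    v2 = np.asarray(c, float) - np.asarray(b, float)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

def range_of_motion(angles_deg):
    """Session ROM as the span between the largest and smallest observed joint angle."""
    return max(angles_deg) - min(angles_deg)

# Hypothetical shoulder/elbow/wrist positions (cm) at three instants of an elbow curl.
wrists = [(0, -60, 0), (20, -40, 0), (5, -12, 0)]
samples = [joint_angle_deg((0, 0, 0), (0, -30, 0), w) for w in wrists]
print([round(s, 1) for s in samples], round(range_of_motion(samples), 1))
```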

Distance and Speed Measurements of Moving Object Using Difference Image in Stereo Vision System (스테레오 비전 시스템에서 차 영상을 이용한 이동 물체의 거리와 속도측정)

  • 허상민; 조미령; 이상훈; 강준길; 전형준
    • Journal of the Korea Computer Industry Society / v.3 no.9 / pp.1145-1156 / 2002
  • A method is proposed for measuring the speed and distance of a moving object using a stereo vision system. One of the most important factors in measuring the speed and distance of a moving object is the accuracy of object tracking. Accordingly, a background image algorithm is adopted to track the rapidly moving object, and a local opening operator is used to remove the object's shadow and noise. The extraction efficiency for the moving object is improved by an adaptive threshold algorithm that is independent of brightness variation. Because the left and right center points are compensated, the speed and distance of the object can be measured more exactly. Using the background image algorithm and the local opening operator, the computational load is reduced and real-time processing of the object's speed and distance becomes possible. Simulation results show that the background image algorithm tracks the moving object more rapidly than the other algorithms compared. Applying the adaptive threshold algorithm improves the extraction efficiency of the target by reducing the candidate areas. Because the center point of the target is compensated using binocular parallax, the measurement error for the object's speed and distance is reduced: the error rates for the distance from the stereo camera to the moving object and for the object's speed are 2.68% and 3.32%, respectively.

  • PDF
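
The pipeline in the abstract combines a difference image with an adaptive threshold, binocular parallax for distance, and positions over time for speed. The sketch below outlines those pieces in isolation, assuming rectified grey frames and caller-supplied calibration; it is not the authors' implementation, and the mean-plus-k-sigma threshold is only one plausible brightness-independent choice.

```python
import numpy as np

def adaptive_threshold(diff, k=2.0):
    """A simple brightness-independent threshold: mean + k * std of the difference image."""
    return float(diff.mean() + k * diff.std())

def moving_mask(frame_prev, frame_curr):
    """Binary mask of moving pixels from the absolute difference of two grey frames."""
    diff = np.abs(frame_curr.astype(np.int16) - frame_prev.astype(np.int16))
    return diff > adaptive_threshold(diff)

def distance_from_disparity(disparity_px, focal_px, baseline_m):
    """Distance to the tracked point from its binocular parallax (rectified stereo pair)."""
    return focal_px * baseline_m / disparity_px

def speed_mps(p1_m, p2_m, dt_s):
    """Speed from two 3-D positions (metres) measured dt_s seconds apart."""
    return float(np.linalg.norm(np.asarray(p2_m, float) - np.asarray(p1_m, float)) / dt_s)

# Example with assumed positions: ~1.08 m/s for a 0.5 s interval.
print(round(speed_mps((0.0, 0.0, 5.0), (0.5, 0.0, 5.2), dt_s=0.5), 2))
```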

Forward Vehicle Detection Algorithm Using Column Detection and Bird's-Eye View Mapping Based on Stereo Vision (스테레오 비전기반의 컬럼 검출과 조감도 맵핑을 이용한 전방 차량 검출 알고리즘)

  • Lee, Chung-Hee; Lim, Young-Chul; Kwon, Soon; Kim, Jong-Hwan
    • The KIPS Transactions:PartB / v.18B no.5 / pp.255-264 / 2011
  • In this paper, we propose a forward vehicle detection algorithm based on stereo vision that uses column detection and bird's-eye view mapping and can detect forward vehicles robustly in real, complex traffic situations. The algorithm consists of three steps: road feature-based column detection, bird's-eye view mapping-based obstacle segmentation, and obstacle area remerging with vehicle verification. First, we extract a road feature from the most frequent disparity values in each row of the v-disparity map and use it as a new criterion for column detection. The road feature is a more appropriate criterion than the median value because it is not affected by the road traffic situation, for example changes in obstacle size or in the number of obstacles. Because multiple obstacles may still share one obstacle area, we then perform bird's-eye view mapping-based obstacle segmentation to divide the obstacles accurately; segmentation is straightforward because the bird's-eye view represents obstacle positions on the ground plane using the depth map and camera information. Additionally, we remerge obstacle areas, since separately segmented areas may belong to the same obstacle. Finally, we verify whether each obstacle is a vehicle using the depth map and the gray image. Experiments applying the algorithm to real, complex traffic situations demonstrate its vehicle detection performance.
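
The road-feature step above relies on a v-disparity map (a per-row histogram of disparities) whose per-row maximum gives the road disparity, and the segmentation step maps obstacles to a bird's-eye view. The sketch below shows plausible forms of both operations under assumed inputs (an integer disparity map and pinhole intrinsics); it is not the authors' code.

```python
import numpy as np

def v_disparity(disparity_map, max_disp):
    """Per-row histogram of disparities; the road surface appears as a dominant slanted line."""
    h = disparity_map.shape[0]
    vdisp = np.zeros((h, max_disp), dtype=np.int32)
    for v in range(h):
        row = disparity_map[v]
        valid = row[(row > 0) & (row < max_disp)].astype(int)
        np.add.at(vdisp[v], valid, 1)   # accumulate counts for this image row
    return vdisp

def road_disparity_per_row(vdisp):
    """Most frequent disparity in each image row, usable as the column-detection criterion."""
    return vdisp.argmax(axis=1)

def to_birds_eye(u, depth_m, fx, cx):
    """Map an obstacle pixel column and its depth to (lateral, longitudinal) ground coordinates."""
    return ((u - cx) * depth_m / fx, depth_m)
```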