• Title/Abstract/Keywords: RGB camera

Search results: 316 (processing time: 0.027 s)

영상기반 축사 내 육계 검출 및 밀집도 평가 연구 (Study on image-based flock density evaluation of broiler chicks)

  • 이대현;김애경;최창현;김용주
    • 한국정보전자통신기술학회논문지 / Vol. 12, No. 4 / pp.373-379 / 2019
  • In this study, image-based monitoring of broiler flocks was performed for real-time broiler welfare management: broiler regions in the captured images were detected, projected onto the ground by coordinate transformation, and used to evaluate flock density. Wide-area images of broiler flocks were captured in a floor-type broiler house, and broiler regions were detected by converting the RGB images from the camera into the HSV model, followed by thresholding and clustering. The detected broiler regions were projected onto the ground using a camera-to-world coordinate transformation, their actual area was computed, and the density was evaluated from this area. Region detection achieved an average relative error of 5% and an average IoU of 0.81, and the actual area estimated through the coordinate transformation showed an error of about 7%. The density, computed as the ratio of broiler area to actual floor area, averaged about 80%. Detection performance degraded somewhat for small or distant regions, and errors in the actual-area estimation were observed around structures inside the house. Therefore, applying this technique to commercial houses will require improving detection performance with algorithms trained on diverse data and installing additional reference markers.
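The projection-and-density step described above can be sketched as follows. This is a minimal illustration assuming the image-to-ground homography `H` is known from calibration and each detected broiler region is approximated by a quadrilateral; the function names are hypothetical, not from the paper:

```python
import numpy as np

def project(points, H):
    """Map Nx2 image points to the ground plane via a 3x3 homography H."""
    pts = np.hstack([points, np.ones((len(points), 1))])
    g = pts @ H.T
    return g[:, :2] / g[:, 2:3]          # divide by homogeneous coordinate

def shoelace_area(poly):
    """Area of a simple polygon given as Nx2 vertices (shoelace formula)."""
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def flock_density(region_quads, H, floor_area_m2):
    """Density = projected broiler area / usable floor area."""
    area = sum(shoelace_area(project(q, H)) for q in region_quads)
    return area / floor_area_m2
```

With the identity homography a unit-square region projects to 1 m², so over a 4 m² floor the density is 0.25.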

모바일 로봇을 위한 카메라 탑재 매니퓰레이터 (Manipulator with Camera for Mobile Robots)

  • 이준우;조경근;조훈희;정성균;봉재환
    • 한국전자통신학회논문지 / Vol. 17, No. 3 / pp.507-514 / 2022
  • The need for mobile manipulators that can both move and perform tasks to assist people at home is growing. In this paper, to develop a small, low-cost mobile manipulator, we built a compact manipulator system that can be mounted on a mobile robot. The developed manipulator has four degrees of freedom, and a gripper and a camera attached to its end effector enable it to recognize objects and act on them. Because the manipulator can translate linearly in the vertical direction, it is well suited to handing objects to a person's relatively high hand or to collaborative tasks. By placing the four actuators for the four degrees of freedom close to the manipulator's base and reducing its rotational inertia, we increased stability during operation and lowered the risk of the mobile manipulator tipping over. We tested a pick-and-place operation in which RGB images are acquired from the camera at the end of the manipulator, objects are recognized through image processing, and the recognized objects are moved to a target position, and confirmed that it operates successfully within the robot's workspace.

스테레오 카메라 기반 모바일 로봇의 위치 추정 향상을 위한 특징맵 생성 (Generation of Feature Map for Improving Localization of Mobile Robot based on Stereo Camera)

  • 김은경;김성신
    • 한국정보전자통신기술학회논문지 / Vol. 13, No. 1 / pp.58-63 / 2020
  • This paper proposes a method for improving the localization accuracy of a mobile robot using a stereo camera. To recover position information from a stereo image pair, the corresponding point in the right image must be found for each pixel in the left image. A common approach is to compute pixel similarity for points along the epipolar line, but this has two drawbacks: every point on the epipolar line must be searched, and similarity is computed from pixel values alone. To remedy this, feature points are extracted and matched between the left and right images; when matched points lie on the same y-axis (image row), the correspondence search is simplified to taking the difference of their x-coordinates. For points lost because no match was found, correspondences are obtained with the conventional algorithm so that as many feature points as possible are preserved. The mobile robot's position was then corrected based on the 3D coordinates of the feature points, reconstructed from the coordinates of the features and their correspondences. Experimental results confirm that the proposed method increases the number of feature points available for coordinate correction and corrects the mobile robot's position better than feature extraction alone.
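The simplified correspondence step above — disparity as the x-coordinate difference of features matched on the same row — leads directly to 3D reconstruction. A minimal sketch, assuming a rectified stereo pair with focal length `f` (pixels), baseline `B` (meters), and principal point `(cx, cy)`; the names are illustrative, not from the paper:

```python
import numpy as np

def triangulate_row_matches(xl, xr, y, f, B, cx, cy):
    """Recover 3D points from feature matches lying on the same image row.
    xl, xr: x-coordinates of matched features in the left/right images."""
    d = np.asarray(xl) - np.asarray(xr)   # disparity = x-coordinate difference
    Z = f * B / d                         # depth from disparity
    X = (np.asarray(xl) - cx) * Z / f     # back-project through the left camera
    Y = (np.asarray(y) - cy) * Z / f
    return np.stack([X, Y, Z], axis=1)
```

For example, with f = 500 px, B = 0.1 m, a match at xl = 600, xr = 550 gives a disparity of 50 px and hence a depth of 1 m.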

A study on visual tracking of the underwater mobile robot for nuclear reactor vessel inspection

  • Cho, Jai-Wan;Kim, Chang-Hoi;Choi, Young-Soo;Seo, Yong-Chil;Kim, Seung-Ho
    • 제어로봇시스템학회:학술대회논문집 / 2003 ICCAS / pp.1244-1248 / 2003
  • This paper describes a visual tracking procedure for an underwater mobile robot used to inspect nuclear reactor vessels, where it must find foreign objects such as loose parts. The yellowish body of the underwater robot presents a strong contrast to the cold borated water of the reactor vessel, which is tinged indigo by the Cerenkov effect. In this paper, we found and tracked the position of the underwater mobile robot using these two color cues, yellow and indigo. The center-coordinate extraction procedure is as follows. The first step is to segment the underwater robot body from the indigo cold-water background. From the RGB color components of the monitoring image taken with a color CCD camera, we selected the red component. In the selected red image, we extracted the position of the underwater mobile robot using the following processing sequence: binarization, labeling, and centroid extraction. In an experiment carried out at the Youngkwang Unit 5 reactor vessel, we tracked the center position of the underwater robot submerged near the cold-leg and hot-leg nozzles, at a depth of about 10 m.
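The red-channel extraction, binarization, and centroid steps can be sketched as below. This minimal version omits the labeling stage (which would isolate the largest blob, e.g. via `scipy.ndimage.label`), and the names and threshold are illustrative:

```python
import numpy as np

def track_center(rgb, thresh=128):
    """Locate the yellowish robot body against an indigo background:
    take the red channel (high for yellow, low for indigo),
    binarize it, and return the centroid of the foreground pixels."""
    red = rgb[:, :, 0].astype(float)      # red component of the RGB image
    mask = red > thresh                   # binarization
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                       # robot not visible in this frame
    return xs.mean(), ys.mean()           # centroid (x, y)
```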


Fall Detection Based on Human Skeleton Keypoints Using GRU

  • Kang, Yoon-Kyu;Kang, Hee-Yong;Weon, Dal-Soo
    • International Journal of Internet, Broadcasting and Communication / Vol. 12, No. 4 / pp.83-92 / 2020
  • Recent studies on fall detection have focused on analyzing fall motions with recurrent neural networks (RNN), using deep-learning approaches that detect 2D human poses from a monocular color image with good results. In this paper, we investigated an improved detection method that estimates the positions of the head and shoulder keypoints and the acceleration of their positional change, using skeletal keypoint information extracted with PoseNet from images obtained by a low-cost 2D RGB camera, to increase the accuracy of fall judgment. In particular, we propose a fall-detection method based on the characteristics of the post-fall posture, on the velocity of change of the human-body skeleton keypoints, and on the change in the width-to-height ratio of the body bounding box. A public dataset was used to extract human skeletal features and to train the deep-learning GRU model, and in experiments to find a feature-extraction method that achieves high classification accuracy, the proposed method detected falls more effectively than the conventional method using raw skeletal data, with a 99.8% success rate.
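Two of the cues described — keypoint velocity and the change in the bounding box's width-to-height ratio — can be computed per frame as sketched below. This is a simplified illustration; the paper's actual feature set and GRU classifier are not reproduced here:

```python
import numpy as np

def fall_features(kps_prev, kps_curr, dt):
    """Per-frame fall cues from 2D skeleton keypoints (Nx2, image coords):
    mean vertical velocity of the keypoints and the change in the
    width-to-height ratio of the body bounding box."""
    v = (kps_curr - kps_prev) / dt                       # velocities (px/s)
    def aspect(kps):
        return np.ptp(kps[:, 0]) / np.ptp(kps[:, 1])     # bbox width / height
    return v[:, 1].mean(), aspect(kps_curr) - aspect(kps_prev)
```

A fall typically shows a large positive vertical velocity (image y grows downward) together with a jump in the width-to-height ratio as the body goes horizontal.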

Application of computer vision for rapid measurement of seed germination

  • Tran, Quoc Huy;Wakholi, Collins;Cho, Byoung-Kwan
    • 한국농업기계학회:학술대회논문집 / 2017 Spring Joint Conference / pp.154-154 / 2017
  • Roots are important plant organs that typically lie below the soil surface; the root surface determines a plant's ability to absorb nutrients and water from the surrounding soil. This study describes an image-processing and computer-vision application implemented for rapid measurement of seed germination traits such as root length, surface area, average diameter, and branching points. A CCD camera was used to obtain RGB images of seeds germinated on wet paper in a humidity chamber, with temperature controlled at approximately 25℃ and 90% relative humidity. Pre-processing techniques such as color-space conversion, binarization with a customized threshold, noise removal, dilation, and skeletonization were applied to the obtained images for root segmentation. The various morphological parameters of the roots were estimated from the root skeleton image with an accuracy of 95% and within 10 seconds per image. These results demonstrate the high potential of computer-vision techniques for the measurement of seed germination.
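A rough version of the skeleton-based measurement can be sketched as follows, assuming a binary skeleton image is already available. Branching points are approximated here as skeleton pixels with three or more 8-connected skeleton neighbours — an approximation for illustration, not the paper's exact method:

```python
import numpy as np

def root_metrics(skel, mm_per_px):
    """Rough root measurements from a binary (0/1) skeleton image:
    length from the skeleton pixel count, and branching points as
    skeleton pixels with >= 3 eight-connected skeleton neighbours."""
    length = skel.sum() * mm_per_px
    # count the 8-neighbours of every pixel by summing shifted copies
    nb = sum(np.roll(np.roll(skel, dy, 0), dx, 1)
             for dy in (-1, 0, 1) for dx in (-1, 0, 1)
             if (dy, dx) != (0, 0))
    branches = int(((nb >= 3) & skel.astype(bool)).sum())
    return length, branches
```

Note that pixel-count length slightly underestimates diagonal runs; a weighted chain-code length would be more accurate.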


통합환경 계획을 위한 작업복과 작업현장의 색채실태 사례연구 -조선업체를 중심으로- (Case Study Color Analysis of Work Clothes and Industrial Factories for Coordinating Environment Planning -Focus on Shipbuilding Companies-)

  • 박혜원
    • 한국의류학회지 / Vol. 34, No. 3 / pp.540-552 / 2010
  • This research provides preliminary data for coordinated environmental color planning in industry through a color analysis of work clothes and the work environment. A digital camera was used to survey the work environments of two major shipbuilding companies located in Geoje city and Goseong county. The picture data were divided into G (ground: environment) and F (figure: clothes), and hue, value, and chroma were obtained with the Munsell Conversion 9.0.6 software from the color clusters, pixel counts, and RGB values. The results are as follows. First, GY and Y colors were dominant in both the shipbuilding environment and the work clothes. Value covered a relatively wide range, but very low-chroma (0-3) dark grayish and grayish tones dominated both fields. Second, the limited range of colors in use makes it difficult to secure attention-drawing safety colors in the shipbuilding field. Third, the subdued, low-chroma colors lessened the visual fatigue of workers, which helps prevent industrial accidents. Color combination and selection should therefore be planned so that work clothes and the work environment form a safe, coordinated color scheme.

HSI/YCbCr 색상모델과 에이다부스트 알고리즘을 이용한 실시간 교통신호 인식 (Real Time Traffic Signal Recognition Using HSI and YCbCr Color Models and Adaboost Algorithm)

  • 박상훈;이준웅
    • 한국자동차공학회논문집 / Vol. 24, No. 2 / pp.214-224 / 2016
  • This paper proposes an algorithm to effectively detect traffic lights and recognize traffic signals in the daytime using a monocular camera mounted behind the front windshield of a vehicle. The algorithm consists of three main parts. The first part generates candidate traffic-light regions: after converting the RGB color model into the HSI and YCbCr color spaces, regions likely to be a traffic light are detected, and edge processing is applied to these regions to extract the borders of the traffic light. The second part divides the candidates into traffic lights and non-traffic-lights using Haar-like features and the Adaboost algorithm. The third part recognizes the signal of the traffic light using template matching. Experimental results show that the proposed algorithm successfully detects traffic lights and recognizes traffic signals in real time in a variety of environments.
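The color-space conversion behind the candidate-generation step can be illustrated with the standard ITU-R BT.601 full-range RGB-to-YCbCr transform. The Cr threshold below is a placeholder value for illustration, not the one used in the paper:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion (8-bit, offset 128)."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def red_light_candidates(rgb, cr_min=170):
    """Mask of pixels whose Cr component is high enough to be a red lamp."""
    return rgb_to_ycbcr(rgb)[..., 2] > cr_min
```

Working in Cr rather than raw R makes the red-lamp response less sensitive to overall brightness, which is why such color spaces are preferred for candidate generation.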

MultiView-Based Hand Posture Recognition Method Based on Point Cloud

  • Xu, Wenkai;Lee, Ick-Soo;Lee, Suk-Kwan;Lu, Bo;Lee, Eung-Joo
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 9, No. 7 / pp.2585-2598 / 2015
  • Hand posture recognition has played a very important role in Human-Computer Interaction (HCI) and Computer Vision (CV) for many years. The challenge arises mainly from self-occlusions caused by the limited view of the camera. In this paper, a robust hand posture recognition approach based on 3D point clouds from two RGB-D sensors (Kinect) is proposed to make maximum use of the 3D information in the depth maps. Through noise reduction and registration of the two point sets obtained from the two views as designed, a multi-view hand posture point cloud retaining most of the 3D information can be acquired. Moreover, we exploit this accurate reconstruction and classify each point cloud by directly matching the normalized point set against templates of the different classes in the dataset, which reduces training time and computation. Experimental results on a posture dataset captured by Kinect sensors (digits 1 to 10) demonstrate the effectiveness of the proposed method.
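The direct template-matching step — normalizing a point set and comparing it against class templates — can be sketched with a symmetric Chamfer distance. This is a common choice for point-set matching; the paper's exact metric and normalization may differ:

```python
import numpy as np

def normalize(points):
    """Translate an Nx2 (or Nx3) point set to its centroid
    and scale it to unit RMS radius."""
    p = points - points.mean(axis=0)
    return p / np.sqrt((p ** 2).sum(axis=1).mean())

def chamfer(a, b):
    """Symmetric average nearest-neighbour distance between point sets."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def classify(cloud, templates):
    """Label of the template point cloud closest to the query cloud."""
    q = normalize(cloud)
    return min(templates, key=lambda k: chamfer(q, normalize(templates[k])))
```

Because both query and templates are normalized for translation and scale, a scaled, shifted copy of a template matches it exactly (Chamfer distance 0).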

Enhanced Sign Language Transcription System via Hand Tracking and Pose Estimation

  • Kim, Jung-Ho;Kim, Najoung;Park, Hancheol;Park, Jong C.
    • Journal of Computing Science and Engineering / Vol. 10, No. 3 / pp.95-101 / 2016
  • In this study, we propose a new system for constructing parallel corpora for sign languages, which are generally under-resourced in comparison to spoken languages. In order to achieve scalability and accessibility regarding data collection and corpus construction, our system utilizes deep learning-based techniques and predicts depth information to perform pose estimation on hand information obtainable from video recordings by a single RGB camera. These estimated poses are then transcribed into expressions in SignWriting. We evaluate the accuracy of hand tracking and hand pose estimation modules of our system quantitatively, using the American Sign Language Image Dataset and the American Sign Language Lexicon Video Dataset. The evaluation results show that our transcription system has a high potential to be successfully employed in constructing a sizable sign language corpus using various types of video resources.