• Title/Summary/Keyword: Multiple Cameras Tracking

Object detection and tracking using a high-performance artificial intelligence-based 3D depth camera: towards early detection of African swine fever

  • Ryu, Harry Wooseuk; Tai, Joo Ho
    • Journal of Veterinary Science / v.23 no.1 / pp.17.1-17.10 / 2022
  • Background: Inspection of livestock farms using surveillance cameras is emerging as a means of early detection of transboundary animal diseases such as African swine fever (ASF). Object tracking, a developing technology derived from object detection, aims at the consistent identification of individual objects on farms. Objectives: This study was conducted as a preliminary investigation for practical application to livestock farms. Using a high-performance artificial intelligence (AI)-based 3D depth camera, the aim was to establish a pathway for utilizing AI models to perform advanced object tracking. Methods: Multiple crossovers by two humans were simulated to investigate the potential of object tracking; consistent identification after crossing over was taken as evidence of successful tracking. Two AI models, a fast model and an accurate model, were tested and compared with regard to their 3D object tracking performance. Finally, a recording of a pig pen was also processed with the aforementioned AI models to test the feasibility of 3D object detection. Results: Both AI models successfully processed the footage and provided a 3D bounding box, an identification number, and a distance from the camera for each individual human. The accurate detection model showed stronger evidence of 3D object tracking than the fast detection model and demonstrated potential applicability to pigs as livestock. Conclusions: A custom dataset collected on an appropriate farm is required to train the AI models so that 3D object detection, and hence object tracking of pigs, operates at an ideal level. This will allow farms to transition smoothly from traditional methods to ASF-preventing precision livestock farming.
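
The vendor AI models behind these results are not described in enough detail to reproduce, but the "consistent identification number" idea can be illustrated with a toy tracker that matches 3D detections between frames by nearest-neighbour distance of their centroids. Everything below (class name, gating threshold, data layout) is an assumption for illustration, not the authors' code.

```python
import numpy as np

# Minimal sketch (not the authors' pipeline): keep consistent IDs for
# 3D detections (x, y, z centroids in metres) by greedy nearest-neighbour
# matching between consecutive frames.
class Centroid3DTracker:
    def __init__(self, max_dist=0.5):
        self.max_dist = max_dist   # metres; assumed gating threshold
        self.next_id = 0
        self.tracks = {}           # id -> last known 3D centroid

    def update(self, detections):
        """detections: list of np.array([x, y, z]) for the current frame."""
        assigned = {}
        unmatched = list(self.tracks.items())
        for det in detections:
            if unmatched:
                dists = [np.linalg.norm(det - c) for _, c in unmatched]
                j = int(np.argmin(dists))
                if dists[j] < self.max_dist:
                    tid, _ = unmatched.pop(j)
                    assigned[tid] = det
                    continue
            assigned[self.next_id] = det   # a new object enters the pen
            self.next_id += 1
        self.tracks = assigned
        return assigned   # id -> centroid; distance from camera is ||centroid||

tracker = Centroid3DTracker()
print(tracker.update([np.array([0.1, 0.0, 2.0]), np.array([1.2, 0.0, 3.5])]))
```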

Vision-based hand gesture recognition system for object manipulation in virtual space (가상 공간에서의 객체 조작을 위한 비전 기반의 손동작 인식 시스템)

  • Park, Ho-Sik; Jung, Ha-Young; Ra, Sang-Dong; Bae, Cheol-Soo
    • Proceedings of the IEEK Conference / 2005.11a / pp.553-556 / 2005
  • We present a vision-based hand gesture recognition system for object manipulation in virtual space. Most conventional hand gesture recognition systems rely on simpler methods for hand detection, such as background subtraction under assumed static observation conditions, and those methods are not robust against camera motion, illumination changes, and so on. We therefore propose a statistical method for recognizing and detecting hand regions in images using geometrical structures. In addition, our hand tracking system employs multiple cameras to reduce occlusion problems, and non-synchronous multiple observations enhance system scalability. Experimental results show the effectiveness of our method.
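
The paper's statistical, geometry-based hand detector is not spelled out in the abstract; as a loosely related illustration only, the sketch below builds a simple statistical colour model of a hand sample with histogram back-projection in OpenCV. The file names and threshold are placeholders, and this is a generic technique, not the authors' method.

```python
import cv2
import numpy as np

# Hedged sketch: a simple statistical skin-colour model via histogram
# back-projection, not the authors' geometrical-structure method.
# "hand_sample.png" and "frame.png" are placeholder file names.
sample = cv2.imread("hand_sample.png")   # small crop containing only a hand
frame = cv2.imread("frame.png")          # image in which to find hand regions

hsv_sample = cv2.cvtColor(sample, cv2.COLOR_BGR2HSV)
hsv_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# A hue-saturation histogram of the hand sample acts as the statistical model.
hist = cv2.calcHist([hsv_sample], [0, 1], None, [30, 32], [0, 180, 0, 256])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

# Back-projection gives a per-pixel likelihood of belonging to the hand model.
likelihood = cv2.calcBackProject([hsv_frame], [0, 1], hist, [0, 180, 0, 256], 1)
_, hand_mask = cv2.threshold(likelihood, 50, 255, cv2.THRESH_BINARY)
cv2.imwrite("hand_mask.png", hand_mask)
```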

Cross-covariance 3D Coordinate Estimation Method for Virtual Space Movement Platform (가상공간 이동플랫폼을 위한 교차 공분산 3D 좌표 추정 방법)

  • Jung, HaHyoung; Park, Jinha; Kim, Min Kyoung; Chang, Min Hyuk
    • Journal of Korea Society of Industrial Information Systems / v.25 no.5 / pp.41-48 / 2020
  • Recently, as demand in the mobile platform market for virtual/augmented/mixed reality grows, experiential content that gives users a real-world feel through a virtual environment is drawing attention. In this paper, as a way of tracking the tracker used for user location estimation in a virtual-space movement platform that motion-captures trainees, we present a method of estimating 3D coordinates by cross covariance from the coordinates of markers projected onto the images. The validity of the proposed algorithm is verified through rigid-body tracking experiments.
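
The cross-covariance estimator itself is not given in the abstract; for orientation, the sketch below shows only the standard linear triangulation baseline that marker-based 3D coordinate estimation builds on, with assumed projection matrices and made-up marker coordinates. It is not the paper's algorithm.

```python
import cv2
import numpy as np

# Hedged baseline (not the paper's cross-covariance estimator): standard
# linear triangulation of one marker from two calibrated cameras.
# P1 and P2 are assumed 3x4 projection matrices; real ones come from calibration.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))]).astype(np.float32)                  # camera 1 at the origin
P2 = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])]).astype(np.float32)  # 20 cm baseline

pt1 = np.float32([[0.10], [0.05]])   # marker in camera 1 (normalised image coords)
pt2 = np.float32([[0.02], [0.05]])   # the same marker seen by camera 2

X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)   # homogeneous 4x1 result
X = (X_h[:3] / X_h[3]).ravel()                  # Euclidean 3D marker coordinates
print("estimated 3D marker position:", X)
```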

The Individual Discrimination Location Tracking Technology for Multimodal Interaction at the Exhibition (전시 공간에서 다중 인터랙션을 위한 개인식별 위치 측위 기술 연구)

  • Jung, Hyun-Chul; Kim, Nam-Jin; Choi, Lee-Kwon
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.19-28 / 2012
  • After the internet era, we are moving toward a ubiquitous society. People are now interested in multimodal interaction technology, which enables an audience to interact naturally with the computing environment at exhibitions such as galleries, museums, and parks. There are also attempts to provide additional services based on the location information of the audience, or to improve and deploy interaction between exhibits and audience by analyzing people's usage patterns. In order to provide multimodal interaction services to the audience at an exhibition, it is important to distinguish individuals and trace their locations and routes. For outdoor location tracking, GPS is widely used: it obtains the real-time location of fast-moving subjects, so it is one of the key technologies in fields requiring location tracking services. However, because GPS relies on satellites, it cannot be used indoors, where the satellite signal cannot be received. For this reason, studies on indoor location tracking use very-short-range communication technologies such as ZigBee, UWB, and RFID, as well as mobile communication networks and wireless LAN. These technologies have the shortcoming that the audience must carry an additional sensor device, and they become difficult and expensive as the density of the target area increases. In addition, the usual exhibition environment contains many obstacles for the network, which degrades system performance. Above all, the biggest problem is that interaction methods based on these older technologies cannot provide a natural service to users. Moreover, because such systems rely on sensor recognition, every user must be equipped with a device, which limits the number of users that can use the system simultaneously. To make up for these shortcomings, this study suggests a technology that obtains accurate user location information through location mapping using the Wi-Fi of smartphones and 3D cameras. We apply the signal strength of wireless LAN access points to develop a lower-priced indoor location tracking system. An AP is cheaper than the devices used in other tracking techniques, and by installing software on the user's mobile device, the phone can be used directly as the tracking device. We used the Microsoft Kinect sensor as the 3D camera. Kinect is equipped with functions for discriminating depth and human information inside the shooting area, so it is appropriate for extracting users' body, vector, and acceleration information at low cost. We confirm the location of the audience using the cell ID obtained from the Wi-Fi signal. By using smartphones as the basic device for the location service, we remove the need for an additional tagging device and provide an environment in which multiple users can receive the interaction service simultaneously. The 3D cameras located in each cell area obtain the exact location and status information of the users. They are connected to the Camera Client, which calculates the mapping information aligned to each cell, obtains the exact information of the users, and derives the status and pattern information of the audience.
The location mapping technique of the Camera Client decreases the error rate of indoor location service, increases the accuracy of individual discrimination within the area through body-information-based discrimination, and establishes the foundation of multimodal interaction technology at exhibitions. The calculated data and information enable users to receive the appropriate interaction service through the main server.
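
As a rough illustration of the Wi-Fi cell-ID step only (not the authors' system), the sketch below maps a smartphone's AP scan to the exhibition cell whose access point is received most strongly; the AP names, cell labels, and RSSI values are invented for the example.

```python
# Hedged sketch of the cell-ID idea: the cell a visitor is in is taken to be
# the one whose access point is received with the strongest signal.
CELL_OF_AP = {"ap-hall-1": "cell-A", "ap-hall-2": "cell-B", "ap-lobby": "cell-C"}

def estimate_cell(scan_results):
    """scan_results: dict of AP name -> RSSI in dBm from the smartphone scan."""
    known = {ap: rssi for ap, rssi in scan_results.items() if ap in CELL_OF_AP}
    if not known:
        return None
    strongest_ap = max(known, key=known.get)   # least negative dBm wins
    return CELL_OF_AP[strongest_ap]

print(estimate_cell({"ap-hall-1": -62, "ap-hall-2": -71, "ap-lobby": -80}))  # cell-A
```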

Collaborative Tracking Using Multiple Network Cameras (다수의 네트워크 카메라를 이용한 협동 추적)

  • Jeon, Hyoung-Seok; Jung, Jun-Young; Joo, Young-Hoon; Shin, Sang-Keun
    • Proceedings of the KIEE Conference / 2011.07a / pp.1888-1889 / 2011
  • In this paper, we propose a collaborative tracking algorithm that uses multiple network cameras. First, motion regions in the image are extracted with a motion template technique. Once a motion region is extracted, a cooperation request is sent to the neighboring cameras, and the position of the motion region is corrected with a Kalman filter so that accurate PTZ parameters can be set. The neighboring camera that receives the cooperation request then collaboratively tracks the moving object using the requested PTZ parameters. Finally, experiments with the proposed algorithm demonstrate its performance and its applicability.
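
Only the Kalman-filter correction step is easy to illustrate from the abstract; the sketch below smooths the centre of a detected motion region with a constant-velocity Kalman filter in OpenCV before it would be converted to PTZ parameters. The noise covariances and sample centres are assumptions, not values from the paper.

```python
import cv2
import numpy as np

# Hedged sketch of the correction step only: a constant-velocity Kalman filter
# smoothing the centre of a detected motion region.
# State = [x, y, vx, vy]; measurement = [x, y].
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2       # assumed tuning
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1   # assumed tuning
kf.errorCovPost = np.eye(4, dtype=np.float32)
kf.statePost = np.array([[120], [80], [0], [0]], np.float32)  # first detection

for cx, cy in [(124, 83), (129, 85), (133, 88)]:  # subsequent motion-region centres
    kf.predict()
    corrected = kf.correct(np.array([[cx], [cy]], np.float32))
    print("filtered centre:", corrected[:2].ravel())
```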

Tracking People under Occlusion using Multiple Cameras (다중 카메라를 이용한 겹침 상황에서의 사람 추적)

  • Ryu, Jung-Hun; Nam, Yun-Young; Cho, We-Duke
    • Proceedings of the Korean Information Science Society Conference / 2008.06c / pp.445-449 / 2008
  • As the demand for personal and public safety increases, camera-based video surveillance systems are gradually spreading. For security, several cameras are sometimes installed in one area so that their fields of view (FOV) overlap. Research on locating and tracking objects by processing the images obtained from multiple cameras in such overlapping FOV regions is being actively pursued. In this paper, we propose a method that keeps tracking objects even when occlusion occurs in regions where the surveillance areas of multiple cameras overlap. Objects are tracked with an appearance identifier on a single camera, and a homography matrix between the cameras is used to implement a system that is robust to noise.
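
The homography step between overlapping views can be illustrated with standard OpenCV calls; the sketch below maps a person's foot point from one camera's image plane to another's using four assumed ground-plane correspondences. It shows the general mechanism, not the authors' implementation.

```python
import cv2
import numpy as np

# Hedged sketch: map a foot point from camera A's image plane to camera B's,
# given four corresponding ground-plane points. The correspondences below are
# made-up calibration values.
pts_cam_a = np.float32([[100, 400], [500, 410], [520, 200], [ 90, 190]])
pts_cam_b = np.float32([[150, 420], [560, 400], [540, 180], [130, 170]])

H, _ = cv2.findHomography(pts_cam_a, pts_cam_b)

foot_in_a = np.float32([[[320, 300]]])              # detected foot point in cam A
foot_in_b = cv2.perspectiveTransform(foot_in_a, H)  # same point in cam B's view
print("foot point in camera B:", foot_in_b.ravel())
```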

Block-based Multiple Cameras Hand-off for Continuous Object Tracking and Surveillance (연속적인 물체 추적과 감시를 위한 Block 기반 다중 카메라들 간의 Hand-off 기술)

  • Kim, Ji-Man; Kim, Dai-Jin
    • Proceedings of the Korean Information Science Society Conference / 2007.10c / pp.419-423 / 2007
  • As surveillance and security grow in importance, efficient algorithms and systems for continuously tracking moving objects with several cameras are being actively developed. In this paper, we propose a hand-off technique between multiple cameras for continuous object tracking. First, several preprocessing steps are applied to detect moving objects. Then, to establish the correspondence between the detected regions, the primary camera that best detects the object is selected, and the next primary camera is predicted from the movement path. The predicted camera information and color information are used to confirm that the same object is being tracked. The experimental results show how the primary camera changes as a specific object moves.
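
The hand-off decision described above, choosing the primary camera by detection quality and predicting the next primary camera from the movement path, might be sketched as follows; the camera topology, scores, and track coordinates are invented for illustration and are not the authors' implementation.

```python
import math

# Hedged sketch of a hand-off decision: the primary camera is the one with the
# best detection score, and the next primary camera is predicted from the
# object's motion direction in the current primary view.
NEIGHBOUR_BY_DIRECTION = {          # assumed topology: direction -> camera id
    "east": "cam2", "west": "cam3", "north": "cam4", "south": "cam5",
}

def select_primary(detection_scores):
    """detection_scores: dict camera id -> detection confidence in [0, 1]."""
    return max(detection_scores, key=detection_scores.get)

def predict_next(track):
    """track: list of (x, y) centroids in the primary camera; returns camera id."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    angle = math.degrees(math.atan2(y1 - y0, x1 - x0))
    if -45 <= angle < 45:
        direction = "east"
    elif 45 <= angle < 135:
        direction = "south"   # image y grows downwards
    elif -135 <= angle < -45:
        direction = "north"
    else:
        direction = "west"
    return NEIGHBOUR_BY_DIRECTION[direction]

print(select_primary({"cam1": 0.9, "cam2": 0.4}))   # cam1
print(predict_next([(100, 200), (140, 205)]))        # cam2 (object moving east)
```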

Development Of Four-Dimensional Digital Speckle Tomography For Experimental Analysis Of High-Speed Helium Jet Flow (고속 헬륨 제트 유동의 실험적 분석을 위한 4차원 디지털 스펙클 토모그래피 기법 개발)

  • Ko, Han-Seo; Kim, Yong-Jae
    • Transactions of the Korean hydrogen and new energy society / v.17 no.2 / pp.193-203 / 2006
  • A high-speed, initial helium jet flow has been analyzed by a newly developed four-dimensional digital speckle tomography. Multiple high-speed cameras were used to capture the movements of speckles from several viewing angles simultaneously, because the shape of the nozzle producing the jet is asymmetric and the initial jet flow is fast and unsteady. The speckle movements between the no-flow case and the helium jet flow from the asymmetric, solenoid-valve-controlled nozzle were obtained by a cross-correlation tracking method, so that the displacements could be converted into deflection angles of the laser rays, which correspond to density gradients. The four-dimensional density fields of the high-speed helium jet flow were then reconstructed from the deflection angles by a newly developed real-time tomography method.
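
Cross-correlation tracking of speckle displacements for a single interrogation window can be sketched with normalised cross-correlation in OpenCV; the paper applies this over many windows and four simultaneous views and then feeds the results to tomographic reconstruction, which is not shown. The image file names and window sizes below are placeholders.

```python
import cv2
import numpy as np

# Hedged sketch of cross-correlation speckle tracking for one window.
# "speckle_ref.png" / "speckle_flow.png" are placeholder names for the
# no-flow and helium-jet speckle images (assumed at least ~250 px square).
ref = cv2.imread("speckle_ref.png", cv2.IMREAD_GRAYSCALE)
flow = cv2.imread("speckle_flow.png", cv2.IMREAD_GRAYSCALE)

y0, x0, win = 200, 200, 32           # interrogation window in the reference image
search = 16                          # search radius in the flow image, in pixels
template = ref[y0:y0 + win, x0:x0 + win]
region = flow[y0 - search:y0 + win + search, x0 - search:x0 + win + search]

# Normalised cross-correlation; the peak gives the speckle displacement,
# which would later be converted to a laser-ray deflection angle.
score = cv2.matchTemplate(region, template, cv2.TM_CCORR_NORMED)
_, _, _, (px, py) = cv2.minMaxLoc(score)
dx, dy = px - search, py - search
print("speckle displacement (pixels):", dx, dy)
```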

PTZ Camera Based Multi Event Processing for Intelligent Video Network (지능형 영상네트워크 연계형 PTZ카메라 기반 다중 이벤트처리)

  • Chang, Il-Sik; Ahn, Seong-Je; Park, Gwang-Yeong; Cha, Jae-Sang; Park, Goo-Man
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.11A / pp.1066-1072 / 2010
  • In this paper we propose a multi-event handling surveillance system using multiple PTZ cameras. One event is assigned to each PTZ camera to detect unusual situations. If a new object appears in the scene while a camera is tracking an existing one, that camera cannot handle both objects simultaneously; in another case, the object moves out of the scene during tracking and the camera loses it. In the proposed method, a nearby camera takes over the role of tracing the new object or re-detecting the lost one in each case. The nearby camera receives the object's location information from the previous camera and establishes a seamless event link for the object. Our simulation results show continuous camera-to-camera object tracking performance.
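
One piece of such a hand-off, converting the object position shared by the previous camera into the pan/tilt angles of the nearby camera, can be sketched as below; the camera pose and target position are assumed values, and the actual event-link protocol is not reproduced.

```python
import math

# Hedged sketch: on hand-off, the nearby camera converts the object's shared
# world position into its own pan/tilt angles. Pose values are assumptions.
def pan_tilt_for(target_xyz, camera_xyz):
    dx = target_xyz[0] - camera_xyz[0]
    dy = target_xyz[1] - camera_xyz[1]   # height difference
    dz = target_xyz[2] - camera_xyz[2]
    pan = math.degrees(math.atan2(dx, dz))                    # left/right
    tilt = math.degrees(math.atan2(dy, math.hypot(dx, dz)))   # up/down
    return pan, tilt

# Object position handed over by the previous camera, in a shared frame (metres).
print(pan_tilt_for(target_xyz=(3.0, -1.5, 8.0), camera_xyz=(0.0, 2.5, 0.0)))
```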

Activity-based key-frame detection and video summarization in a wide-area surveillance system (광범위한 지역 감시시스템에서의 행동기반 키프레임 검출 및 비디오 요약)

  • Kwon, Hye-Young; Lee, Kyoung-Mi
    • Journal of Internet Computing and Services / v.9 no.3 / pp.169-178 / 2008
  • In this paper, we propose a video summarization system based on activity in video acquired by multiple non-overlapping cameras for wide-area surveillance. The proposed system separates persons by time-independent background removal and detects the activities of the segmented persons from their motions. We extract eleven activities based on the direction in which a person moves and treat as a key-frame any frame that contains a meaningful activity. The proposed system summarizes the video from these activity-based key-frames and controls the amount of summarization according to the amount of activity, so videos can be summarized by camera, by time, and by activity.
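
The eleven-activity scheme is not enumerated in the abstract, so the sketch below uses a simpler eight-direction-plus-still labelling of a person's motion and keeps frames with meaningful activity as key-frames; the threshold and sample track are assumptions, not the authors' values.

```python
import math

# Hedged sketch: label per-frame motion by direction and keep frames that
# contain meaningful activity as key-frames.
DIRECTIONS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

def activity_label(prev, curr, still_thresh=2.0):
    dx, dy = curr[0] - prev[0], prev[1] - curr[1]   # flip y so N is up
    if math.hypot(dx, dy) < still_thresh:
        return "still"
    sector = int((((math.degrees(math.atan2(dy, dx)) + 360) % 360) + 22.5) // 45) % 8
    return DIRECTIONS[sector]

def key_frames(track):
    """track: list of (frame_index, (x, y)); returns frames with activity."""
    keys = []
    for (_, p0), (f1, p1) in zip(track, track[1:]):
        if activity_label(p0, p1) != "still":
            keys.append(f1)
    return keys

track = [(0, (10, 10)), (1, (10, 11)), (2, (25, 11)), (3, (40, 12))]
print(key_frames(track))   # frames 2 and 3: the person walks east
```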
