• Title/Abstract/Keywords: multiple cameras

Search results: 225

Android-Based Devices Control System Using Web Server (웹 서버를 이용한 안드로이드 기반 기기 제어 시스템)

  • Jung, Chee-Oh;Kim, Wung-Jun;Jung, Hoe-Kyung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.3
    • /
    • pp.741-746
    • /
    • 2015
  • Recently, as the mobile operating system market and wireless communication technology have developed rapidly, many devices such as smartphones, air conditioners, smart TVs, cleaning robots, and cameras have become available with the Android operating system. Accordingly, collecting a variety of information through everyday devices with network connections is now possible. However, in the current market, most devices are controlled by individually developed applications, and there is a growing need for a master application that can control multiple devices. In this paper, we propose and implement a system that can control multiple Android-based devices on a wired/wireless router (AP) registered through a web server. We expect such an effort can contribute to future IoT research.
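A minimal sketch of the kind of register-and-control flow this abstract describes, assuming a hypothetical REST layout (/register, /command/<device_id>, /poll/<device_id>) on a Flask web server; the paper's actual endpoints and message formats are not specified here.

```python
# Assumed design, not the paper's actual protocol: devices register themselves,
# a master client posts commands, and each device polls for its pending commands.
from flask import Flask, request, jsonify

app = Flask(__name__)
devices = {}           # device_id -> metadata (e.g., type, AP address)
pending_commands = {}  # device_id -> list of queued commands

@app.route("/register", methods=["POST"])
def register():
    info = request.get_json()
    device_id = info["device_id"]
    devices[device_id] = info
    pending_commands.setdefault(device_id, [])
    return jsonify(status="registered", device_id=device_id)

@app.route("/command/<device_id>", methods=["POST"])
def send_command(device_id):
    if device_id not in devices:
        return jsonify(error="unknown device"), 404
    pending_commands[device_id].append(request.get_json())
    return jsonify(status="queued")

@app.route("/poll/<device_id>", methods=["GET"])
def poll(device_id):
    # The Android device periodically fetches and clears its command queue.
    commands = pending_commands.get(device_id, [])
    pending_commands[device_id] = []
    return jsonify(commands=commands)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```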

Surveillance Video Summarization System based on Multi-person Tracking Status (다수 사람 추적상태에 따른 감시영상 요약 시스템)

  • Yoo, Ju Hee;Lee, Kyoung Mi
    • KIISE Transactions on Computing Practices
    • /
    • v.22 no.2
    • /
    • pp.61-68
    • /
    • 2016
  • Surveillance cameras have been installed in many places because security and safety have become important issues in modern society. However, watching surveillance videos and judging accidental situations is very labor-intensive and time-consuming, so demand for research that automatically analyzes surveillance videos is growing. In this paper, we propose a surveillance system that tracks multiple persons in videos and summarizes the videos based on the tracking information. The proposed surveillance summarization system applies adaptive illumination correction, subtracts the background, detects multiple persons, tracks them, and saves their tracking information in a database. The tracking information includes each person's path, movement status, length of stay at a location, entrance/exit times, and so on. The movement status is classified into six statuses (Enter, Stay, Slow, Normal, Fast, and Exit). The proposed summarization system presents a person's status as a graph in time and space and helps to quickly determine the status of the tracked person.
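As a rough illustration of the status classification above, the following sketch applies OpenCV background subtraction and labels a tracked centroid by its per-frame speed; the thresholds and the single-object tracking are illustrative assumptions, not the paper's parameters.

```python
import cv2
import numpy as np

SLOW, NORMAL = 2.0, 8.0  # assumed speed thresholds in pixels per frame

def movement_status(speed, inside_frame, just_appeared):
    if just_appeared:
        return "Enter"
    if not inside_frame:
        return "Exit"
    if speed < 0.5:
        return "Stay"
    if speed < SLOW:
        return "Slow"
    if speed < NORMAL:
        return "Normal"
    return "Fast"

cap = cv2.VideoCapture("surveillance.mp4")       # placeholder file name
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
prev_centroid = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blobs = [c for c in contours if cv2.contourArea(c) > 500]
    if blobs:
        # Track only the largest blob here; the paper tracks multiple persons.
        c = max(blobs, key=cv2.contourArea)
        m = cv2.moments(c)
        centroid = np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])
        speed = 0.0 if prev_centroid is None else np.linalg.norm(centroid - prev_centroid)
        status = movement_status(speed, True, prev_centroid is None)
        prev_centroid = centroid
```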

Adult Image Detection Using Skin Color and Multiple Features (피부색상과 복합 특징을 이용한 유해영상 인식)

  • Jang, Seok-Woo;Choi, Hyung-Il;Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.12
    • /
    • pp.27-35
    • /
    • 2010
  • Extracting skin color is important in adult image detection. However, conventional methods still have essential problems in extracting skin color: human skin colors are not identical because of individual or racial differences, and skin regions within an image may not share the same color due to makeup, the cameras used, and so on. Because of this, most existing methods rely on predefined skin color models. To resolve these problems, in this paper, we propose a new adult image detection method that robustly segments skin areas with a skin color distribution model adapted to the input image, and verifies whether the segmented skin regions contain naked bodies by fusing several representative features through a neural network. Experimental results show that our method outperforms others in various experiments. We expect the suggested method to be useful in many applications such as face detection and objectionable image filtering.
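The image-adapted skin model idea can be sketched as follows: seed candidate skin pixels with a generic YCrCb range, fit a Gaussian to their chrominance values, and re-segment the image with that image-specific model. The seed range and distance threshold are assumptions, not the paper's values.

```python
import cv2
import numpy as np

def adaptive_skin_mask(bgr, maha_thresh=2.5):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    # Generic seed range (a common heuristic, not the paper's values).
    seed = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127)) > 0
    crcb = ycrcb[..., 1:3].reshape(-1, 2).astype(np.float64)
    samples = crcb[seed.reshape(-1)]
    if len(samples) < 100:
        return seed.astype(np.uint8) * 255   # too few seeds to adapt
    mean = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(2)
    inv_cov = np.linalg.inv(cov)
    diff = crcb - mean
    # Squared Mahalanobis distance of every pixel to the adapted skin model.
    maha = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)
    mask = (maha < maha_thresh ** 2).reshape(bgr.shape[:2])
    return mask.astype(np.uint8) * 255

mask = adaptive_skin_mask(cv2.imread("input.jpg"))  # placeholder image
```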

Moving Object Preserving Seamline Estimation (이동 객체를 보존하는 시접선 추정 기술)

  • Gwak, Moonsung;Lee, Chanhyuk;Lee, HeeKyung;Cheong, Won-Sik;Yang, Seungjoon
    • Journal of Broadcast Engineering
    • /
    • v.24 no.6
    • /
    • pp.992-1001
    • /
    • 2019
  • In many applications, images acquired from multiple cameras are stitched to form an image with a wide viewing angle. We propose a method of estimating a seam line using motion information so that multiple images can be stitched without distorting moving objects. Existing seam estimation techniques usually use an energy function based on image gradient information and parallax. In this paper, we propose a seam estimation technique that prevents distortion of moving objects by adding temporal motion information, which is calculated from the gradient information of each frame. We also propose a measure to quantify the distortion level of stitched images and use it to verify the performance differences between the existing and proposed methods.
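A simplified sketch of a motion-aware seam energy: the spatial gradient of the overlap region is combined with a temporal frame difference, and a minimum-energy vertical seam is found by dynamic programming. The weighting and the seam-carving-style search are assumptions rather than the paper's exact formulation.

```python
import cv2
import numpy as np

def seam_energy(overlap_prev, overlap_curr, motion_weight=3.0):
    gray = cv2.cvtColor(overlap_curr, cv2.COLOR_BGR2GRAY).astype(np.float64)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
    spatial = np.abs(gx) + np.abs(gy)
    prev_gray = cv2.cvtColor(overlap_prev, cv2.COLOR_BGR2GRAY).astype(np.float64)
    temporal = np.abs(gray - prev_gray)   # high where objects are moving
    return spatial + motion_weight * temporal

def min_vertical_seam(energy):
    h, w = energy.shape
    cost = energy.copy()
    for y in range(1, h):
        left = np.r_[np.inf, cost[y - 1, :-1]]
        right = np.r_[cost[y - 1, 1:], np.inf]
        cost[y] += np.minimum(np.minimum(left, cost[y - 1]), right)
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
    return seam   # column index of the seam in each row
```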

Online Video Synopsis via Multiple Object Detection

  • Lee, JaeWon;Kim, DoHyeon;Kim, Yoon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.24 no.8
    • /
    • pp.19-28
    • /
    • 2019
  • In this paper, an online video summarization algorithm based on multiple object detection is proposed. As crime has risen with recent rapid urbanization, public demand for safety has grown and the installation of surveillance cameras such as closed-circuit television (CCTV) has been increasing in many cities. However, it takes a great deal of time and labor to retrieve and analyze the huge amount of video data from numerous CCTVs. As a result, there is increasing demand for intelligent video recognition systems that can automatically detect and summarize the various events occurring on CCTV. Video summarization is a method of generating a synopsis video from a long original video so that users can watch it in a short time. The proposed video summarization method can be divided into two stages. The object extraction step detects objects in the video and extracts the specific objects desired by the user. The video summary step creates the final synopsis video based on the objects extracted in the previous step. While existing methods do not consider the interaction between objects in the original video when generating the synopsis video, the proposed method uses a new object clustering algorithm that effectively preserves the interactions between objects from the original video in the synopsis video. This paper also proposes an online optimization method that can efficiently summarize the large number of objects appearing in long videos. Finally, experimental results show that the performance of the proposed method is superior to that of existing video synopsis algorithms.
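The tube-based synopsis idea can be sketched as below: detected objects become spatio-temporal tubes, tubes that overlap are clustered so interacting objects stay together, and clusters are shifted earlier in time. The greedy clustering and scheduling here are illustrative simplifications of the paper's clustering and online optimization.

```python
from dataclasses import dataclass, field

@dataclass
class Tube:
    start: int                                  # first frame the object appears in
    boxes: list = field(default_factory=list)   # one (x, y, w, h) per frame

    @property
    def end(self):
        return self.start + len(self.boxes)

def interacts(a: Tube, b: Tube, iou_thresh=0.1):
    """Tubes interact if their boxes overlap in some shared frame."""
    for f in range(max(a.start, b.start), min(a.end, b.end)):
        ax, ay, aw, ah = a.boxes[f - a.start]
        bx, by, bw, bh = b.boxes[f - b.start]
        ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
        iy = max(0, min(ay + ah, by + bh) - max(ay, by))
        inter = ix * iy
        union = aw * ah + bw * bh - inter
        if union > 0 and inter / union > iou_thresh:
            return True
    return False

def cluster_tubes(tubes):
    """Greedy clustering: merge a tube into the first cluster it interacts with."""
    clusters = []
    for t in tubes:
        for c in clusters:
            if any(interacts(t, u) for u in c):
                c.append(t)
                break
        else:
            clusters.append([t])
    return clusters

def schedule(clusters, gap=5):
    """Place clusters one after another on the shorter synopsis timeline."""
    cursor, placement = 0, {}
    for c in sorted(clusters, key=lambda c: min(t.start for t in c)):
        offset = cursor - min(t.start for t in c)
        for t in c:
            placement[id(t)] = t.start + offset   # new start frame in the synopsis
        cursor = max(t.end + offset for t in c) + gap
    return placement
```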

Development of an intelligent camera for multiple body temperature detection (다중 체온 감지용 지능형 카메라 개발)

  • Lee, Su-In;Kim, Yun-Su;Seok, Jong-Won
    • Journal of IKEEE
    • /
    • v.26 no.3
    • /
    • pp.430-436
    • /
    • 2022
  • In this paper, we propose an intelligent camera for detecting the body temperatures of multiple people. The proposed camera is composed of an optical camera (4056×3040) and a thermal camera (640×480), and it detects abnormal symptoms by analyzing a person's facial expression and body temperature from the acquired images. The optical and thermal cameras operate simultaneously: objects are detected in the optical image, and the facial region and expression are analyzed for each detected object. The coordinates of the facial region in the optical image are then mapped onto the thermal image, and the maximum temperature within that region is measured and displayed on the screen. Abnormal symptoms are determined using the three analyzed facial expressions (neutral, happy, sad) and the body temperature values. To evaluate the performance of the proposed camera, the optical image processing part is tested on the Caltech, WIDER FACE, and CK+ datasets for three algorithms (object detection, facial region detection, and expression analysis). Experimental results show accuracy scores of 91%, 91%, and 84%, respectively.
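A sketch of the optical-to-thermal hand-off described above, assuming a simple proportional mapping between the two sensors (a deployed system would use a calibrated alignment) and a Haar-cascade face detector standing in for the paper's detection models.

```python
import cv2
import numpy as np

OPTICAL_SIZE = (4056, 3040)   # (width, height) of the optical sensor
THERMAL_SIZE = (640, 480)     # (width, height) of the thermal sensor

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def max_face_temperature(optical_bgr, thermal_celsius):
    """thermal_celsius: a (480, 640) float array of per-pixel temperatures."""
    gray = cv2.cvtColor(optical_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    sx = THERMAL_SIZE[0] / OPTICAL_SIZE[0]
    sy = THERMAL_SIZE[1] / OPTICAL_SIZE[1]
    readings = []
    for (x, y, w, h) in faces:
        # Scale the optical face box into thermal-image coordinates.
        tx, ty = int(x * sx), int(y * sy)
        tw, th = max(1, int(w * sx)), max(1, int(h * sy))
        roi = thermal_celsius[ty:ty + th, tx:tx + tw]
        readings.append(float(np.max(roi)))
    return readings   # one maximum temperature per detected face
```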

Generation and Coding of Layered Depth Images for Multi-view Video Representation with Depth Information (깊이정보를 포함한 다시점 비디오로부터 계층적 깊이영상 생성 및 부호화 기법)

  • Yoon, Seung-Uk;Lee, Eun-Kyung;Kim, Sung-Yeol;Ho, Yo-Sung;Yun, Kug-Jin;Kim, Dae-Hee;Hur, Nam-Ho;Lee, Soo-In
    • Proceedings of the IEEK Conference
    • /
    • 2005.11a
    • /
    • pp.375-378
    • /
    • 2005
  • A multi-view video is a collection of multiple videos capturing the same scene from different viewpoints. Multi-view video can be used in various applications, including free-viewpoint TV and three-dimensional TV. Since the data size of multi-view video increases linearly with the number of cameras, it is necessary to compress multi-view video data for efficient storage and transmission. Multi-view video can be coded using the concept of the layered depth image (LDI). In this paper, we describe a procedure to generate an LDI from natural multi-view video and present a method to encode multi-view video using the LDI concept.
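A rough sketch of LDI construction from color-plus-depth views under a pinhole camera model: each side view is back-projected to 3D and reprojected into the reference camera, and per-pixel (depth, color) layers are accumulated. Redundancy removal and the coding stage are omitted, and the camera conventions are assumptions.

```python
import numpy as np

def build_ldi(views, ref_K, ref_R, ref_t, width, height):
    """views: list of (color HxWx3, depth HxW, K 3x3, R 3x3, t 3) per camera,
    with X_cam = R @ X_world + t as the assumed extrinsic convention."""
    ldi = [[[] for _ in range(width)] for _ in range(height)]
    for color, depth, K, R, t in views:
        h, w = depth.shape
        us, vs = np.meshgrid(np.arange(w), np.arange(h))
        pix = np.stack([us, vs, np.ones_like(us)], axis=-1).reshape(-1, 3).T
        # Back-project pixels to camera coordinates, then to world coordinates.
        cam_pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
        world_pts = R.T @ (cam_pts - t.reshape(3, 1))
        # Project the world points into the reference camera.
        ref_cam = ref_R @ world_pts + ref_t.reshape(3, 1)
        z = ref_cam[2]
        proj = ref_K @ ref_cam
        u = np.round(proj[0] / z).astype(int)
        v = np.round(proj[1] / z).astype(int)
        colors = color.reshape(-1, 3)
        valid = (z > 0) & (u >= 0) & (u < width) & (v >= 0) & (v < height)
        for i in np.flatnonzero(valid):
            ldi[v[i]][u[i]].append((float(z[i]), colors[i]))
    # Sort each pixel's layers front to back.
    for row in ldi:
        for layers in row:
            layers.sort(key=lambda dc: dc[0])
    return ldi
```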


Stitching Method of Videos Recorded by Multiple Handheld Cameras (다중 사용자 촬영 영상의 영상 스티칭)

  • Billah, Meer Sadeq;Ahn, Heejune
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.22 no.3
    • /
    • pp.27-38
    • /
    • 2017
  • This paper presents a method for stitching videos recorded by a large number of individual users with their phone cameras at a venue. In contrast to 360-degree camera solutions that use fixed rigs, these conditions raise new challenges such as time synchronization, repeated transformation matrix calculation, and camera sensor mismatch correction. In this paper, we solve these problems with audio-based time synchronization, sensor mismatch removal by a color transfer method, and a global operation stabilization algorithm that updates the transformation matrix. Experimental results show that the proposed algorithm performs better than screen stitching in terms of computation speed and subjective image quality.
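The audio-based synchronization step might look like the following sketch, which cross-correlates two mono audio tracks (assumed already extracted at a common sample rate) to estimate their time offset; the file names and the 30 fps conversion are placeholders.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

def estimate_offset_seconds(wav_a, wav_b):
    rate_a, a = wavfile.read(wav_a)
    rate_b, b = wavfile.read(wav_b)
    assert rate_a == rate_b, "expecting a common sample rate"
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    corr = correlate(a, b, mode="full")
    # Positive lag: clip A started recording before clip B.
    lag = np.argmax(corr) - (len(b) - 1)
    return lag / rate_a

offset = estimate_offset_seconds("user1.wav", "user2.wav")  # placeholder files
frame_offset = round(offset * 30.0)                         # assuming 30 fps video
```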

Entity Matching for Vision-Based Tracking of Construction Workers Using Epipolar Geometry (영상 내 건설인력 위치 추적을 위한 등극선 기하학 기반의 개체 매칭 기법)

  • Lee, Yong-Joo;Kim, Do-Wan;Park, Man-Woo
    • Journal of KIBIM
    • /
    • v.5 no.2
    • /
    • pp.46-54
    • /
    • 2015
  • Vision-based tracking has been proposed as a means to efficiently track a large number of construction resources operating in a congested site. In order to obtain the 3D coordinates of an object, it is necessary to employ stereo-vision theory. Detecting and tracking multiple objects requires an entity matching process that finds corresponding pairs of detected entities across the two camera views. This paper proposes an efficient entity matching method for tracking construction workers. The proposed method uses epipolar geometry, which represents the relationship between the two fixed cameras. Each pixel coordinate in one camera view is projected onto the other camera view as an epipolar line. The proposed method finds the matching pair of a worker entity by comparing the proximity of all detected entities in the other view to the epipolar line. Experimental results demonstrate its suitability for automated entity matching in 3D vision-based tracking of construction workers.
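A compact sketch of the epipolar matching step: a detection in camera 1 is mapped to an epipolar line in camera 2 through the fundamental matrix F (assumed pre-computed for the fixed camera pair, e.g., by calibration or cv2.findFundamentalMat), and the camera-2 detection closest to that line is taken as the match.

```python
import numpy as np

def match_by_epipolar_line(pt_cam1, candidates_cam2, F, max_dist=10.0):
    """pt_cam1: (x, y); candidates_cam2: list of (x, y); F: 3x3 fundamental matrix."""
    x1 = np.array([pt_cam1[0], pt_cam1[1], 1.0])
    a, b, c = F @ x1                          # epipolar line ax + by + c = 0 in view 2
    norm = np.hypot(a, b)
    best, best_dist = None, max_dist
    for idx, (x, y) in enumerate(candidates_cam2):
        dist = abs(a * x + b * y + c) / norm  # point-to-line distance in pixels
        if dist < best_dist:
            best, best_dist = idx, dist
    return best   # index of the matching detection, or None if nothing is close
```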

Activity-based key-frame detection and video summarization in a wide-area surveillance system (광범위한 지역 감시시스템에서의 행동기반 키프레임 검출 및 비디오 요약)

  • Kwon, Hye-Young;Lee, Kyoung-Mi
    • Journal of Internet Computing and Services
    • /
    • v.9 no.3
    • /
    • pp.169-178
    • /
    • 2008
  • In this paper, we propose a video summarization system based on activity in video acquired by multiple non-overlapping cameras for wide-area surveillance. The proposed system separates persons by time-independent background removal and detects the activities of the segmented persons from their motions. We define eleven activities based on the direction in which a person moves, and consider a key-frame to be a frame that contains a meaningful activity. The proposed system summarizes based on activity-based key-frames and controls the amount of summarization according to the amount of activity. Thus, the system can summarize videos by camera, time, and activity.
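As a rough illustration, the sketch below bins a tracked person's displacement direction into a small set of labels and keeps frames where the label changes as key-frames; the bins, window, and thresholds are assumptions and do not reproduce the paper's eleven activity definitions.

```python
import numpy as np

DIRECTIONS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

def activity_label(track, frame, window=5, still_thresh=2.0):
    """track: dict mapping frame index -> (x, y) centroid of one person."""
    if frame not in track or frame - window not in track:
        return "Appear/Disappear"
    dx = track[frame][0] - track[frame - window][0]
    dy = track[frame][1] - track[frame - window][1]
    if np.hypot(dx, dy) < still_thresh:
        return "Stationary"
    angle = np.degrees(np.arctan2(-dy, dx)) % 360   # image y axis points down
    return "Move-" + DIRECTIONS[int(((angle + 22.5) % 360) // 45)]

def key_frames(track, frames):
    keys, prev = [], None
    for f in frames:
        label = activity_label(track, f)
        if label != prev:        # keep frames where the activity changes
            keys.append((f, label))
        prev = label
    return keys
```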
