• Title/Summary/Keyword: Network camera


Design and Implementation of A Dual CPU Based Embedded Web Camera Streaming Server (Dual CPU 기반 임베디드 웹 카메라 스트리밍 서버의 설계 및 구현)

  • 홍진기;문종려;백승걸;정선태
    • Proceedings of the IEEK Conference
    • /
    • 2003.11a
    • /
    • pp.417-420
    • /
    • 2003
  • Most embedded web camera server products currently on the market adopt JPEG to compress the video data continuously acquired from their cameras. However, JPEG does not compress a continuous video stream efficiently and is not well suited to the Internet, where transmission bandwidth is not guaranteed. In our previous work, we presented the design and implementation of an embedded web camera streaming server using an MPEG4 codec. That server, however, did not perform well, since a single CPU had to handle both compression and network transmission. In this paper, we present our efforts to improve on that result by using dual CPUs: a DSP for data compression and a StrongARM for network processing. Better performance has been observed, although further work is still needed to optimize it.
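The dual-CPU split described in the abstract, one processor encoding while the other transmits, can be sketched as a two-thread producer/consumer pipeline. This is an illustrative stand-in, not the paper's DSP/StrongARM firmware: `compress()` merely tags frames where the real system runs an MPEG4 encoder, and a list stands in for the network socket.

```python
import queue
import threading

def compress(frame):
    # Stand-in for the DSP-side MPEG4 encoder: here we just tag the frame.
    return f"mpeg4({frame})"

def run_pipeline(frames):
    """Decouple compression from network transmission with a bounded queue,
    mirroring the paper's DSP (encode) / StrongARM (transmit) split."""
    q = queue.Queue(maxsize=4)   # back-pressure between the two "CPUs"
    sent = []

    def encoder():
        for f in frames:
            q.put(compress(f))
        q.put(None)              # end-of-stream marker

    def transmitter():
        while True:
            pkt = q.get()
            if pkt is None:
                break
            sent.append(pkt)     # stand-in for a socket send

    t1 = threading.Thread(target=encoder)
    t2 = threading.Thread(target=transmitter)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return sent
```

The bounded queue provides back-pressure, so a slow network path throttles the encoder instead of exhausting memory, one motivation for separating the two roles.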


Search for Gravity Waves with a New All-sky Camera System

  • Kim, Yong-Ha;Chung, Jong-Kyun;Won, Yong-In;Lee, Bang-Yong
    • Ocean and Polar Research
    • /
    • v.24 no.3
    • /
    • pp.263-266
    • /
    • 2002
  • Gravity waves have been searched for with a new all-sky camera system over the Korean Peninsula. The all-sky camera consists of a 37 mm/F4.5 Mamiya fisheye lens with a 180-degree field of view, interference filters, and a 1024 by 1024 CCD camera. The camera was tested near Daejeon city and then moved to Mt. Bohyun, where the largest astronomical telescope in Korea is operated. A clear wave pattern was successfully detected in OH filter images over Mt. Bohyun on July 18, 2001, indicating that small-scale coherent gravity waves perturbed the OH airglow near the mesopause. Other wave features have since been observed with Na 589.8 nm and OI 630.0 nm filters. Since a Japanese all-sky camera network has already detected traveling ionospheric disturbances (TIDs) over the northeast-southwest extent of the Japanese islands, we hope our all-sky camera extends the coverage of TID observations to the west. We plan to operate the camera year-round to study the seasonal variation of wave activity in the mid-latitude upper atmosphere.

Real-time Zoom Tracking for DM36x-based IP Network Camera

  • Cong, Bui Duy;Seol, Tae In;Chung, Sun-Tae;Kang, HoSeok;Cho, Seongwon
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.11
    • /
    • pp.1261-1271
    • /
    • 2013
  • Zoom tracking involves the automatic adjustment of the focus motor in response to zoom motor movements, for the purpose of keeping an object of interest in focus. It is typically achieved by moving the zoom and focus motors in a zoom lens module so as to follow the so-called "trace curve", which gives the in-focus motor position versus the zoom motor position for a specific object distance. Thus, one can implement zoom tracking simply by following the closest trace curve after all the trace curve data have been stored in memory. However, this approach is often prohibitive in practice because of its large memory requirement. Many other zoom tracking methods, such as GZT and AZT, have been proposed to avoid the large memory requirement, but at the cost of degraded performance. In this paper, we propose a new zoom tracking method for the DM36x-based IP network camera, called the Approximate Feedback Zoom Tracking (AFZT) method, which avoids the large memory requirement by approximating nearby trace curves, yet achieves better zoom tracking accuracy than GZT or AZT by utilizing the focus value as feedback information. Experiments with a real implementation show that the proposed method improves tracking performance and works in real time.
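The trace-curve following described above can be sketched as a minimal curve-selection routine. The data layout is an assumption (one dict of zoom-position → focus-position samples per object distance), and the focus-value feedback refinement that distinguishes AFZT is omitted; this shows only the baseline "follow the closest trace curve" step.

```python
def focus_for_zoom(trace_curves, zoom_pos, last_focus):
    """Pick the in-focus motor position for a given zoom position by
    following the stored trace curve whose (interpolated) focus value
    is closest to the last known in-focus position."""
    best = None
    for curve in trace_curves:           # curve: {zoom_pos: focus_pos}
        zs = sorted(curve)
        # bracket zoom_pos between the two nearest stored zoom steps
        lo = max((z for z in zs if z <= zoom_pos), default=zs[0])
        hi = min((z for z in zs if z >= zoom_pos), default=zs[-1])
        if lo == hi:
            f = curve[lo]
        else:
            t = (zoom_pos - lo) / (hi - lo)
            f = curve[lo] + t * (curve[hi] - curve[lo])
        if best is None or abs(f - last_focus) < abs(best - last_focus):
            best = f
    return best
```

Storing only a few sampled curves and interpolating between zoom steps is what keeps the memory footprint small relative to storing every curve point.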

Lateral Control of Vision-Based Autonomous Vehicle using Neural Network (신형회로망을 이용한 비젼기반 자율주행차량의 횡방향제어)

  • 김영주;이경백;김영배
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2000.11a
    • /
    • pp.687-690
    • /
    • 2000
  • Lately, many studies have been conducted to protect human lives and property by preventing accidents caused by drivers' carelessness or mistakes. One such effort is the development of autonomous vehicles. The general control method for a vision-based autonomous vehicle is to determine the navigation direction by analyzing lane images from a camera and to navigate using a proper control algorithm. In this paper, characteristic points are extracted from lane images using a lane recognition algorithm based on the Sobel operator, and the vehicle is then controlled using two proposed auto-steering algorithms. The first method uses the geometric relation of the camera: after transforming from the image coordinate frame to the vehicle coordinate frame, a steering angle is calculated using the Ackermann angle. The second uses a neural network algorithm; it does not require the geometric relation of the camera, is easy to apply as a steering algorithm, and is, moreover, the algorithm closest to the driving style of a human driver. The proposed controller is a multilayer neural network trained with the Levenberg-Marquardt backpropagation learning algorithm, which performed much better than other methods such as Conjugate Gradient or Gradient Descent.
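The first (geometric) steering method above can be sketched under assumed geometry: a pinhole camera at known height looking along the road over flat ground, with the lane point already expressed relative to the optical axis. The parameter names and the flat-ground back-projection are illustrative assumptions, not the paper's calibration.

```python
import math

def steering_from_image_point(u, v, f, cam_height, wheelbase):
    """Project an image point (u, v) of the lane centre onto the ground
    plane with a pinhole model, then compute an Ackermann-style steering
    angle toward that point. u: pixels right of centre; v: pixels below
    the horizon row; f: focal length in pixels."""
    if v <= 0:
        raise ValueError("point must lie below the horizon (v > 0)")
    x = cam_height * f / v          # longitudinal distance to the point
    y = u * x / f                   # lateral offset of the point
    # steering angle toward the look-ahead point (pure-pursuit form of
    # the Ackermann relation)
    ld2 = x * x + y * y
    return math.atan2(2.0 * wheelbase * y, ld2)
```

A point straight ahead (u = 0) yields zero steering; points to the right of centre yield a positive angle under this sign convention.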


Study on the Localization Improvement of the Dead Reckoning using the INS Calibrated by the Fusion Sensor Network Information (융합 센서 네트워크 정보로 보정된 관성항법센서를 이용한 추측항법의 위치추정 향상에 관한 연구)

  • Choi, Jae-Young;Kim, Sung-Gaun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.18 no.8
    • /
    • pp.744-749
    • /
    • 2012
  • In this paper, we suggest how to improve the accuracy of a mobile robot's localization by using sensor network information that fuses a machine vision camera, an encoder, and an IMU sensor. The heading value of the IMU sensor is measured with a terrestrial magnetism sensor based on the magnetic field, which is constantly affected by its surrounding environment. To increase the sensor's accuracy, we isolated a template of the ceiling using the vision camera, measured angles with a pattern matching algorithm, and calibrated the IMU sensor by comparing the obtained values with the IMU sensor values and the offset value. The values used to estimate the robot's position, from the encoder, the IMU sensor, and the angle measurement of the vision camera, are transferred to the host PC over a wireless network, and the host PC then estimates the location of the robot using all of them. As a result, we obtained more accurate position estimates than when relying on the IMU sensor alone.
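The heading-calibration step above, comparing a vision-derived angle with the magnetometer heading and applying the offset, can be sketched as a simple wrapped-angle correction. The smoothing gain `alpha` is an assumed parameter for illustration, not a value from the paper.

```python
def calibrated_heading(imu_heading, vision_heading, alpha=0.1):
    """Correct a magnetometer-based IMU heading (degrees) with an angle
    measured by ceiling-pattern matching. The offset is the wrapped
    difference, so corrections take the short way around the circle."""
    # wrap the difference into (-180, 180] before applying the gain
    diff = (vision_heading - imu_heading + 180.0) % 360.0 - 180.0
    return (imu_heading + alpha * diff) % 360.0
```

Wrapping the difference matters near the 0/360 boundary: an IMU reading of 350 degrees and a vision reading of 10 degrees differ by 20 degrees, not 340.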

Vulnerability Analysis Model for IoT Smart Home Camera

  • Aljahdali, Asia Othman;Alsaidi, Nawal;Alsafri, Maram
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.7
    • /
    • pp.229-239
    • /
    • 2022
  • Today's Internet of Things (IoT) has seen a dramatic increase in use across many aspects of daily life. As a consequence, many homes adopt IoT technology to move toward the smart home. A home can be called smart when it has a range of smart devices, such as cameras and sensors, united into one network. While IoT smart home devices bring numerous benefits to human life, there are many security concerns associated with them; these concerns, such as threats to user privacy, can result in an insecure application. In this research, we focus on analyzing the vulnerabilities of IoT smart home cameras by designing a new model that follows the STRIDE approach to identify threats, in order to afford an efficient and secure IoT device. We then apply a number of test cases to a smart home camera to verify the usage of the proposed model. Lastly, we present a scheme of mitigation techniques to prevent vulnerabilities that might occur in IoT devices.
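A STRIDE-style analysis as described above enumerates six threat categories per asset. The skeleton below shows that enumeration; the asset names and question phrasing are illustrative, not the paper's actual test cases.

```python
# The six STRIDE threat categories (Spoofing, Tampering, Repudiation,
# Information disclosure, Denial of service, Elevation of privilege).
STRIDE = {
    "S": "Spoofing",
    "T": "Tampering",
    "R": "Repudiation",
    "I": "Information disclosure",
    "D": "Denial of service",
    "E": "Elevation of privilege",
}

def threat_matrix(assets):
    """Produce one (asset, category, question) row per STRIDE letter per
    asset -- the skeleton of a per-device threat-identification checklist."""
    return [(asset, name,
             f"Can an attacker achieve {name.lower()} against the {asset}?")
            for asset in assets
            for name in STRIDE.values()]
```

For a smart home camera, plausible assets to feed in would be the video stream, the device firmware, the companion mobile app, and the cloud API; each then gets all six questions.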

Egocentric Vision for Human Activity Recognition Using Deep Learning

  • Malika Douache;Badra Nawal Benmoussat
    • Journal of Information Processing Systems
    • /
    • v.19 no.6
    • /
    • pp.730-744
    • /
    • 2023
  • The topic of this paper is the recognition of human activities using egocentric vision, in particular video captured by body-worn cameras, which could be helpful for video surveillance, automatic search, and video indexing. It could also help in assisting elderly and frail persons, improving their lives. Human activity recognition remains problematic because of the large variations in how actions are executed. In assistance settings, recognition is realized through an external device, similar to a robot, acting as a personal assistant; the inferred information is used both online to assist the person and offline to support the assistant. With the proposed method being robust against these factors of variability in action execution, the major purpose of this paper is to perform efficient and simple recognition from egocentric camera data only, using a convolutional neural network and deep learning. In terms of accuracy, simulation results outperform the current state of the art by a significant margin: 61% when using egocentric camera data only, more than 44% when using egocentric camera data together with several stationary cameras, and more than 12% when using both inertial measurement unit (IMU) and egocentric camera data.

A Novel Visual Servoing Technique Considering Robot Dynamics (로봇의 운동특성을 고려한 새로운 시각구동 방법)

  • 이준수;서일홍;김태원
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 1996.10b
    • /
    • pp.410-414
    • /
    • 1996
  • A visual servoing algorithm is proposed for a robot with a camera in hand. Specifically, novel image features are suggested, based on a perspective-projection viewing model, to estimate the relative pitching and yawing angles between the object and the camera. To compensate for the dynamic characteristics of the robot, desired feature trajectories for learning visually guided line-of-sight robot motion are obtained by measuring features with the in-hand camera not over the entire workspace, but along a single linear path, on which the robot moves under the control of a commercially provided linear-motion function. Control actions of the camera are then approximately found by fuzzy-neural networks so as to follow the desired feature trajectories. To show the validity of the proposed algorithm, experimental results are presented for a four-axis SCARA robot with a B/W CCD camera.
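The relative pitch/yaw estimation from a perspective-projection model, as the abstract describes, can be sketched with a plain pinhole camera. The coordinate convention (image offsets and focal length all in pixels, angles measured from the optical axis) is an assumption for illustration, not the paper's specific feature definition.

```python
import math

def relative_angles(u, v, f):
    """Estimate relative yaw and pitch between camera and object from
    the object's image offsets (u, v) from the principal point and the
    focal length f, all in pixels, using a pinhole projection model."""
    yaw = math.atan2(u, f)     # horizontal angle to the object
    pitch = math.atan2(v, f)   # vertical angle to the object
    return yaw, pitch
```

An object on the optical axis gives zero yaw and pitch; an object offset horizontally by one focal length subtends a 45-degree yaw.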


3-D Position Analysis of an Object using a Monocular USB port Camera through JAVA (한 대의 USB 카메라와 자바를 이용한 3차원 정보 추출)

  • Ji, Chang-Ho;Dong-Youp, Dong-Youp;Chang, Yu-Shin;Lee, M.H.
    • Proceedings of the KIEE Conference
    • /
    • 2001.07d
    • /
    • pp.2326-2328
    • /
    • 2001
  • The purpose of this paper is to obtain 3-dimensional information using a monocular camera. The system obtains the height of an object by applying trigonometry between a reference point in the surroundings and the object. The system can be built and deployed regardless of the operating system. A convenient USB camera can be used anywhere without a capture board, and the Internet can be used everywhere by means of a Java applet and JMF. The camera is regarded as fixed. We have also developed a real-time JPEG/RTP network camera system using UDP/IP over Ethernet.
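The trigonometric height estimation mentioned above can be sketched under an assumed geometry, which is not necessarily the paper's exact setup: a fixed pinhole camera at known height looking horizontally, with the object's base and top located by their image rows relative to the horizon.

```python
import math

def object_height(cam_height, f, v_base, v_top):
    """Estimate an object's height from one image. cam_height: camera
    height above the ground; f: focal length in pixels; v_base: image
    row of the object's base, in pixels below the horizon (> 0);
    v_top: image row of the object's top (negative if above the horizon).
    """
    d = cam_height * f / v_base        # ground distance from the base row
    # the top row's ray, evaluated at distance d, gives the top's height
    return cam_height - d * v_top / f
```

The base row fixes the ground distance by similar triangles; the top row then converts to a height at that same distance, which is why a single calibrated view suffices.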


Artificial Landmark based Pose-Graph SLAM for AGVs in Factory Environments (공장환경에서 AGV를 위한 인공표식 기반의 포즈그래프 SLAM)

  • Heo, Hwan;Song, Jae-Bok
    • The Journal of Korea Robotics Society
    • /
    • v.10 no.2
    • /
    • pp.112-118
    • /
    • 2015
  • This paper proposes a pose-graph based SLAM method using an upward-looking camera and artificial landmarks for AGVs in factory environments. The proposed method provides a way to acquire the camera extrinsic matrix and improves the accuracy of feature observation using a low-cost camera. SLAM is conducted by optimizing the AGV's explored path using artificial landmarks installed at various locations on the ceiling. As the AGV explores, pose nodes are added at fixed odometry-distance intervals, and landmark nodes are registered whenever the AGV recognizes a fiducial marker. The resulting graph network is optimized with the g2o optimization tool so that the accumulated error due to slip is minimized. Experiments show that the proposed method is robust for SLAM in real factory environments.
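The pose-graph idea above, odometry edges between consecutive pose nodes plus landmark-based constraints that correct accumulated slip, can be illustrated with a deliberately tiny 1-D toy. This is a stand-in for the paper's 2-D graph solved with g2o: odometry measurements connect consecutive poses, one loop-closure edge plays the role of a landmark observation, and plain gradient descent replaces g2o's sparse solver.

```python
def optimize_1d_pose_graph(odometry, closures, iters=500, lr=0.05):
    """Minimal 1-D pose-graph optimization. odometry[i] is the measured
    displacement from pose i to pose i+1; closures is a list of
    (i, j, d) constraints saying pose j - pose i should equal d.
    Minimizes the sum of squared edge errors by gradient descent."""
    n = len(odometry) + 1
    x = [0.0] * n
    for i, d in enumerate(odometry):   # initialize by dead reckoning
        x[i + 1] = x[i] + d
    for _ in range(iters):
        g = [0.0] * n
        for i, d in enumerate(odometry):       # odometry edges
            e = (x[i + 1] - x[i]) - d
            g[i + 1] += 2 * e
            g[i] -= 2 * e
        for (i, j, d) in closures:             # loop-closure edges
            e = (x[j] - x[i]) - d
            g[j] += 2 * e
            g[i] -= 2 * e
        x = [xi - lr * gi for xi, gi in zip(x, g)]
        x = [xi - x[0] for xi in x]    # anchor the first pose at 0
    return x
```

With drifting odometry (three steps each measured as 1.1) and a closure saying the last pose is 3.0 from the start, the optimizer spreads the inconsistency across all edges instead of letting it accumulate at the end, which is exactly the correction the paper relies on the ceiling landmarks to provide.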