• Title/Summary/Keyword: camera image

A Research on Expressional Directing Methods for Film and Video - Focused on the Image Expression Derived from Spatial Characteristics of the Filming Zone - (영상 표현 연출 방법에 관한 연구 - 촬영 공간의 형태적 특성에 기인하는 이미지 표현을 중심으로 -)

  • Yoo, Taek-Sang
    • Archives of design research
    • /
    • v.19 no.2 s.64
    • /
    • pp.217-228
    • /
    • 2006
  • The characteristics of the medium are clearly related to the visual expression found in film and video shots. When a camera frames space and objects into an image, some distortion is unavoidable, and that distortion can be exploited for creative expression. This study therefore examined the relationships among the shape of the filming zone, the structure of the image, and the placement strategies for camera, actors, and objects, in light of expressional attempts found in film and video. Classified cases of such expressions were analyzed in terms of the dispositions and movements of the physical elements (camera, actors, and objects) that made each expression possible. The result is an organized method for arranging these elements within the filming zone, presented as a set of expressional directing methods. The usefulness of the method was then tested by applying it in an educational procedure and analyzing the students' results.

Extraction of Road Facility Information Using Multi-Imagery (다중영상을 이용한 도로시설물 정보추출)

  • Sohn, Duk-Jae;Yoo, Hwan-Hee;Lee, Hey-Jin
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.10 no.1 s.19
    • /
    • pp.91-100
    • /
    • 2002
  • Many recent studies have addressed the construction of road facility management systems, in which the digital map serves as the essential source for the spatial database. Where the existing topographic map or as-built construction drawings are insufficient for database construction, however, compiling, modifying, or renewing the digital map faces serious obstacles. This study therefore extracted road facility information from image data of various forms, such as aerial and terrestrial photographs. The terrestrial photographic images were taken with widely available, inexpensive hand-held, digital, and video cameras. Only single-frame images were used as raw data, and the spatial and attribute data extracted from them were used to modify and update the database. In addition, the feasibility of creating a relative-scale digital map from the spatial data extracted from single images was examined.

Facial Features and Motion Recovery using multi-modal information and Paraperspective Camera Model (다양한 형식의 얼굴정보와 준원근 카메라 모델해석을 이용한 얼굴 특징점 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions:PartB
    • /
    • v.9B no.5
    • /
    • pp.563-570
    • /
    • 2002
  • Robust extraction of 3-D facial features and global motion information from a 2-D image sequence for MPEG-4 SNHC face model encoding is described. Facial regions are detected from the image sequence using a multi-modal fusion technique that combines range, color, and motion information. Twenty-three facial features among the MPEG-4 FDP (Face Definition Parameters) are extracted automatically inside the facial region using color transforms (GSCD, BWCD) and morphological processing. The extracted facial features are used to recover the 3-D shape and global motion of the object with a paraperspective camera model and SVD (Singular Value Decomposition) factorization. A 3-D synthetic object was designed and tested to demonstrate the performance of the proposed algorithm. The recovered 3-D motion information is transformed into the global motion parameters of the MPEG-4 FAP (Face Animation Parameters) to synchronize a generic face model with a real face.
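The SVD factorization step in this kind of shape-from-motion pipeline can be sketched as follows (a Tomasi-Kanade-style factorization, which the paraperspective formulation extends; the data here is synthetic, not the paper's FDP features):

```python
import numpy as np

rng = np.random.default_rng(0)
P = 23            # number of tracked facial features (MPEG-4 FDP subset)
F = 10            # number of frames

# Synthetic rank-3 measurement matrix: W = M @ S
M = rng.standard_normal((2 * F, 3))   # stacked per-frame motion rows
S = rng.standard_normal((3, P))       # 3-D shape, one column per feature
W = M @ S                             # 2F x P observation matrix

# Center each row (removes translation under an affine camera model)
W_centered = W - W.mean(axis=1, keepdims=True)

# Factorize and truncate to rank 3: motion and shape up to an
# affine ambiguity (resolved later by metric constraints)
U, s, Vt = np.linalg.svd(W_centered, full_matrices=False)
M_hat = U[:, :3] * np.sqrt(s[:3])
S_hat = np.sqrt(s[:3])[:, None] * Vt[:3]

residual = np.linalg.norm(W_centered - M_hat @ S_hat)
```

Because the centered measurement matrix is exactly rank 3 here, the rank-3 truncation reconstructs it to machine precision; real tracked features would leave a small residual.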

Development of Texture Neutralization System for the Invisible e-Performance (투명 e-퍼포먼스를 위한 텍스쳐 중화 시스템 개발)

  • Lee, Dong-Hoon;Yun, Tae-Soo
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.4
    • /
    • pp.585-594
    • /
    • 2011
  • In live performances such as plays and musicals, various stage effects are used to attract a diverse audience, ranging from traditional direction techniques to striking display effects and other ways of immersing viewers in the scene. In this paper, we propose a novel digital visual effect (digilog) for controlling the surface texture of objects based on spatial augmented reality. For this purpose, we present a method of neutralizing the appearance of an arbitrary object using a projector-camera system. To make the object appear transparent, a carefully determined compensation image is projected onto its surface; we use the homography method for a simple and effective off-line projector-camera calibration. The successful use of the basic Smart Projector algorithm for measuring radiometric parameters leads us to believe that this method can also be used for temporal variation in plays and musicals.
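The off-line projector-camera calibration mentioned above amounts to estimating a homography from point correspondences. A minimal sketch in DLT form (the corner correspondences below are assumed placeholder values, not the authors' calibration data):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate H such that dst ~ H @ src (homogeneous), via DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # Null vector of A (last right-singular vector) is the stacked H
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Projector pixel corners mapped onto where they appear in the
# camera image (assumed example values)
proj_pts = [(0, 0), (1024, 0), (1024, 768), (0, 768)]
cam_pts = [(102, 85), (930, 70), (955, 690), (88, 705)]

H = homography_dlt(proj_pts, cam_pts)
```

With the homography known, the compensation image can be warped from camera coordinates into projector coordinates before projection; in practice one would typically call an equivalent routine such as OpenCV's `findHomography`.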

Development of Unmanned Video Recording System using Mobile (모바일을 이용한 무인 영상 녹화 시스템 개발)

  • Ahn, Byeongtae
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.20 no.6
    • /
    • pp.254-260
    • /
    • 2019
  • Recently, self-shot video produced and distributed in large volumes has increased rapidly with the rise of mobile SNS such as Facebook, Instagram, and Twitter. In particular, mobile SNS use is growing markedly in usage, number of connections, and usage time. However, self-recording with a smartphone alone remains very limited in both scope and frequency of use, and conventional unattended recording systems, which automatically track and record a subject using an infrared signal, are very expensive. This paper therefore presents a low-cost unmanned recording system built around a mobile phone. The system consists of a commercial mobile camera, a servomotor that pans the camera from side to side, a microcontroller that drives the motor, and a commercial wireless Bluetooth headset for the video's audio input. It is an unmanned, mobile-based automation system with which anyone can record video through automatic subject tracking.
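The pan control implied by the camera-plus-servomotor setup can be sketched as a simple proportional loop: the subject's horizontal offset from the frame center drives the next servo angle. The gain, frame width, and servo limits below are assumed values, not the paper's:

```python
FRAME_W = 640              # frame width in pixels (assumed)
GAIN_DEG_PER_PX = 0.05     # proportional gain (assumed)

def next_servo_angle(current_deg, subject_x):
    """Nudge the pan servo toward centering the tracked subject."""
    error_px = subject_x - FRAME_W / 2
    angle = current_deg + GAIN_DEG_PER_PX * error_px
    return max(0.0, min(180.0, angle))   # clamp to servo travel

a = next_servo_angle(90.0, 420)  # subject 100 px right of center
```

In the real system the microcontroller would receive this angle (e.g. over Bluetooth serial) and emit the matching PWM pulse to the servo.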

Calibration of VLP-16 Lidar Sensor and Vision Cameras Using the Center Coordinates of a Spherical Object (구형물체의 중심좌표를 이용한 VLP-16 라이다 센서와 비전 카메라 사이의 보정)

  • Lee, Ju-Hwan;Lee, Geun-Mo;Park, Soon-Yong
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.8 no.2
    • /
    • pp.89-96
    • /
    • 2019
  • 360-degree 3-dimensional lidar sensors and vision cameras are commonly used in the development of autonomous driving techniques for automobiles, drones, and similar platforms. However, existing techniques for calibrating the external transformation between the lidar and camera sensors have the disadvantage of requiring special calibration objects, or objects that are too large. In this paper, we introduce a simple calibration method between the two sensors using a spherical object. The sphere center coordinates are computed from four 3-D points selected by RANSAC from the range data of the sphere, and the 2-dimensional coordinates of the object center are detected in the camera image to calibrate the two sensors. Even when the range data are acquired from various angles, the image of a spherical object always remains circular. The proposed method yields a reprojection error of about 2 pixels, and its performance is analyzed through comparison with existing methods.
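The core geometric step, recovering a sphere center from four range points (the minimal sample a RANSAC iteration would draw), reduces to a small linear system: subtracting the sphere equations |p_i - c|² = r² pairwise eliminates r. A sketch with synthetic points:

```python
import numpy as np

def sphere_center(p):
    """Center of the sphere through four non-coplanar 3-D points.

    Pairwise subtraction of |p_i - c|^2 = r^2 eliminates the radius
    and leaves a 3x3 linear system in the center c.
    """
    p = np.asarray(p, dtype=float)
    A = 2.0 * (p[1:] - p[0])
    b = (p[1:] ** 2).sum(axis=1) - (p[0] ** 2).sum()
    return np.linalg.solve(A, b)

# Synthetic check: four points on a sphere centered at (1, 2, 3), r = 5
c_true = np.array([1.0, 2.0, 3.0])
pts = c_true + 5.0 * np.array([
    [1, 0, 0], [0, 1, 0], [0, 0, 1], [-1, 0, 0],
])
c = sphere_center(pts)
```

A RANSAC wrapper would repeatedly sample four lidar returns, solve for the center as above, and keep the hypothesis with the most range points within a distance threshold of the fitted sphere.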

Estimating the Real-world Location of an Observer for an Adaptive Parallax Barrier (적응적 패럴랙스 베리어를 위한 사용자 위치 추적 방법)

  • Kang, Seok-Hoon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.23 no.12
    • /
    • pp.1492-1499
    • /
    • 2019
  • This paper proposes a method for tracking the position of the observer in order to control the viewing zone with an adaptive parallax barrier. The head pose is estimated using a Constrained Local Model based on a shape model and landmarks, giving a robust eye-distance measurement across face poses, and the camera geometry converts the measured distance and horizontal location into centimeters. The pixel pitch of the adaptive parallax barrier is adjusted according to the position of the observer's eyes, and the barrier is shifted to adjust the viewing area. The method tracks the observer in the range of 60 cm to 490 cm, and we measured its error, measurable range, and fps as a function of camera image resolution. The observer's position was measured within an average absolute error of 3.1642 cm, with a measurable range of about 278 cm at 320×240, about 488 cm at 640×480, and about 493 cm at 1280×960.
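The pixel-to-centimeter conversion such a tracker relies on can be sketched with a pinhole model: viewing distance follows from the known real-world eye separation and its measured pixel size. The focal length and interpupillary distance below are assumed illustrative values, not the paper's calibration:

```python
FOCAL_PX = 800.0   # camera focal length in pixels (assumed)
IPD_CM = 6.3       # average interpupillary distance in cm (assumed)

def observer_distance_cm(eye_pixel_dist):
    """Pinhole estimate Z = f * X / x from the eyes' pixel separation."""
    return FOCAL_PX * IPD_CM / eye_pixel_dist

def horizontal_offset_cm(pixel_offset, distance_cm):
    """Convert a horizontal pixel offset from the image center to cm."""
    return pixel_offset * distance_cm / FOCAL_PX

z = observer_distance_cm(84.0)   # eyes measured 84 px apart
```

Both outputs, distance and lateral offset, are what the barrier controller needs to pick the pixel pitch and shift of the viewing zone.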

Design and Analysis of Coaxial Optical System for Improvement of Image Fusion of Visible and Far-infrared Dual Cameras (가시광선과 원적외선 듀얼카메라의 영상 정합도 향상을 위한 동축광학계 설계 및 분석)

  • Kyu Lee Kang;Young Il Kim;Byeong Soo Son;Jin Yeong Park
    • Korean Journal of Optics and Photonics
    • /
    • v.34 no.3
    • /
    • pp.106-116
    • /
    • 2023
  • In this paper, we designed a coaxial dual camera incorporating two optical systems, one for visible rays and one for far-infrared rays, with the aim of capturing images in both wavelength ranges. The far-infrared system, which uses an uncooled detector, has a sensor array of 640×480 pixels; the visible-ray system has 1,945×1,097 pixels. The coaxial dual optical system was designed with a hot-mirror beam splitter to minimize the heat transferred by infrared rays into the visible-ray optical system. After optimization, the final version of the dual camera system reached more than 90% fusion performance between the two separate images from the dual systems. Rigorous testing confirmed that the coaxial dual camera we designed demonstrates meaningful design efficiency and improved image conformity compared to existing dual cameras.

The flight Test Procedures For Agricultural Drones Based on 5G Communication (5G 통신기반 농업용 드론 비행시험 절차)

  • Byeong Gyu Gang
    • Journal of Aerospace System Engineering
    • /
    • v.17 no.2
    • /
    • pp.38-44
    • /
    • 2023
  • This study examines how agricultural drones are operated in flight tests over 5G communication to carry out missions such as sensing crop health status with special cameras. The drones were fitted with a multi-spectral and an IR camera to capture images of crop status at separate altitudes and different speeds. The multi-spectral camera captures crop image data in five particular wavelengths and has a built-in GPS, so that time-synchronized images provide better position and altitude accuracy during flight. Captured thermal videos are then sent over 5G to a ground server for analysis; combining the two cameras thus yields better visualization of vegetation areas. The flight tests verified how agricultural drones equipped with special cameras can collect image data over vegetation areas.

Volume Control using Gesture Recognition System

  • Shreyansh Gupta;Samyak Barnwal
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.6
    • /
    • pp.161-170
    • /
    • 2024
  • With recent technological advances, sight, motion, sound, and speech are increasingly used to control applications and software. This paper explores a project in which hand gestures play the central role: controlling computer settings with hand gestures using computer vision. We create a module that acts as a volume-control program, using hand gestures to raise or lower the system volume, implemented with OpenCV. The module uses the computer's web camera to record images or video, processes the frames to extract the needed information, and then, based on that input, adjusts the computer's volume settings. The only setup required is a web camera to capture the user's input. The program performs gesture recognition with OpenCV, Python, and their libraries, identifies the specified hand gestures, and uses them to carry out the changes in the device settings. The objective is to adjust the volume of a computer without physical interaction through a mouse or keyboard. OpenCV, a widely used tool for image processing and computer vision, enjoys extensive popularity: its community consists of over 47,000 individuals, and as of a survey conducted in 2020 its downloads exceeded 18 million.
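The gesture-to-volume mapping at the heart of such a module can be sketched as follows. The fingertip coordinates would come from a hand-landmark detector running on OpenCV webcam frames; the pixel distance range below is an assumed calibration, not the authors':

```python
import numpy as np

MIN_DIST, MAX_DIST = 30.0, 250.0  # thumb-index pixel gap range (assumed)

def gesture_to_volume(thumb, index):
    """Map the thumb-index fingertip gap to a 0-100 volume level."""
    d = float(np.hypot(index[0] - thumb[0], index[1] - thumb[1]))
    # Linear interpolation, clamped at the calibrated endpoints
    return float(np.interp(d, [MIN_DIST, MAX_DIST], [0.0, 100.0]))

vol = gesture_to_volume((100, 100), (240, 100))  # fingertips 140 px apart
```

In the full program this value would be fed each frame to an OS volume API, with the pinch-closed and pinch-open extremes mapped to mute and maximum volume respectively.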