• Title/Summary/Keyword: Video image processing system

Search Result 409

Image Map Generation using the Airship Photogrammetric System (비행선촬영시스템을 이용한 영상지도 제작)

  • 유환희;제정형;김성삼
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.20 no.1
    • /
    • pp.59-67
    • /
    • 2002
  • Recently, demand for vector data such as digital maps in place of traditional paper maps has increased rapidly, and raster data such as high-resolution orthoimages have come into wide use in GIS applications with the advent of commercial high-resolution satellites and advances in aerial optical sensor technology. Aerial photogrammetry using an airship can provide cost-effective, high-resolution color images as well as real-time imagery, unlike conventional remote sensing. It also acquires images easily, and its processing procedure is relatively short and simple. It has therefore often been used to produce small-scale land-use maps that do not require high accuracy, to monitor linear infrastructure features by mosaicking strip images, and to build GIS data. The airship-based aerial photogrammetric system developed in this study is expected to be applied not only to producing 1:5,000-scale digital maps but also to verifying, editing, and updating digital maps that need to be reproduced. Furthermore, by providing various types of video imagery, it is expected to serve many other GIS applications such as facilities management, scenery management, and construction of GIS data for urban areas.

Implementation of an Integrated Interface Control Unit Compatible with Existing Full Color LED Display Systems and with Improved Brightness (Full Color LED 디스플레이장치와 휘도 개선과 호환성을 갖는 통합인터페이스 제어장치 구현)

  • Lee, Ju-Yeon
    • Journal of Convergence for Information Technology
    • /
    • v.11 no.12
    • /
    • pp.90-96
    • /
    • 2021
  • In this paper, we designed and manufactured an integrated interface control unit that is compatible with existing brightness control units, color control units, and other controllers. For data transmission, the DVI/HDMI standard is applied, using the Sil 1169 IC. Brightness control is programmed with eight levels using an AT89C2051, and an EPM240T100C5 IC handles image and dimming data processing. As a result, the unit is compatible with DVI/HDMI controllers from different manufacturers and can reproduce clear, high-quality full-HD video on a full color LED display system, adjusted to the ambient brightness.
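
The eight-step dimming described above maps the ambient brightness to a discrete level that scales the video data sent to the LED panel. As a rough illustration of that idea only (the AT89C2051 firmware is not shown, and the light thresholds below are assumptions, not values from the paper), a Python sketch might look like this:

```python
import numpy as np

def dimming_level(ambient_lux, levels=8, max_lux=1000.0):
    # Map an ambient-light reading onto one of `levels` discrete steps
    # (the mapping and max_lux are illustrative assumptions).
    step = min(int(ambient_lux / max_lux * levels), levels - 1)
    return step + 1  # 1 = dimmest ... 8 = brightest

def apply_dimming(frame, level, levels=8):
    # Scale 8-bit RGB frame data by the selected level before it is
    # pushed out to the LED panel.
    return (frame.astype(np.float32) * (level / levels)).clip(0, 255).astype(np.uint8)
```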

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions: Part B
    • /
    • v.14B no.4
    • /
    • pp.311-320
    • /
    • 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has focused on facial expression control itself rather than on 3D head motion tracking, yet head motion tracking is one of the critical issues to be solved in developing realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, a non-parametric HT skin color model and template matching allow the facial region to be detected efficiently in each video frame. For 3D head motion tracking, we exploit a cylindrical head model projected onto the initial head motion template: given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is traced by the optical flow method. For facial expression cloning we use a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and traced by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are obtained from the geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are moved by applying the animation parameters to the face model, and the non-feature points around the control points are deformed using a Radial Basis Function (RBF). The experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from the input video images.
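
The final cloning step above, deforming the non-feature vertices around the displaced control points with a Radial Basis Function, can be sketched as generic Gaussian-RBF scattered-data interpolation. This is not the authors' exact formulation; the Gaussian kernel and the width sigma are assumptions:

```python
import numpy as np

def rbf_deform(control_src, control_dst, points, sigma=30.0):
    # Move `points` so that each control point in `control_src` maps to
    # its displaced position in `control_dst`; arrays are (n, 2) / (m, 2).
    d = np.linalg.norm(control_src[:, None, :] - control_src[None, :, :], axis=2)
    G = np.exp(-(d / sigma) ** 2)                       # kernel between control points
    W = np.linalg.solve(G, control_dst - control_src)   # per-control displacement weights
    d_pts = np.linalg.norm(points[:, None, :] - control_src[None, :, :], axis=2)
    K = np.exp(-(d_pts / sigma) ** 2)
    return points + K @ W                               # smoothly interpolated displacements
```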

Multimedia Network Teaching System based on SMIL (SMIL을 기반으로 한 멀티미디어 네트워크 교육시스템)

  • Yu, Lei;Cao, Ke-Rang;Bang, Jin-Suk;Cho, Tae-Beom;Jung, Hoe-Kyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2008.10a
    • /
    • pp.524-527
    • /
    • 2008
  • Recently, digital technology and the Internet have spread worldwide, and demand for education that uses the Internet together with multimedia processing and information and communication technology is increasing rapidly. Information can also be used easily, with fewer restrictions of time and space. However, the demand to integrate and represent several kinds of multimedia data, such as audio and video, has grown. Therefore, in 1998 the W3C presented an international standard, SMIL, to solve multimedia object representation and synchronization problems. Using SMIL, various multimedia elements can be integrated into a single multimedia document and presented properly in space and time. With such SMIL documents, a new Internet radio broadcasting service can deliver not only audio but also text, images, and video. In this paper, with the proposed system, teachers can easily create multimedia courseware and broadcast their lectures live over the network, while students receive the teacher's audio-video feed and the screen display of the teacher's computer. Moreover, students can communicate with the teacher simultaneously through text editor windows, and can also request courseware after class.
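
For context, a SMIL document synchronizes media with `par` (parallel) and `seq` (sequential) containers. A minimal Python sketch that assembles such a document for a lecture, assuming placeholder file names and a fixed slide duration rather than anything from the paper, might look like this:

```python
import xml.etree.ElementTree as ET

def make_lecture_smil(audio_src, slide_srcs, slide_dur="30s"):
    # Build a minimal SMIL presentation: lecture audio plays in parallel
    # with a timed sequence of slide images.
    smil = ET.Element("smil")
    body = ET.SubElement(smil, "body")
    par = ET.SubElement(body, "par")            # children play in parallel
    ET.SubElement(par, "audio", src=audio_src)
    seq = ET.SubElement(par, "seq")             # slides play one after another
    for src in slide_srcs:
        ET.SubElement(seq, "img", src=src, dur=slide_dur)
    return ET.tostring(smil, encoding="unicode")

# Example: make_lecture_smil("lecture.mp3", ["slide1.jpg", "slide2.jpg"])
```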

A Study on Displaying PPI-Scope Scanned Radar Video Signals as Persistence Images on an LCD Display (Plan Position Indicator Scope 주사방식의 Radar 영상신호를 LCD Display에 잔상영상으로 데이터 표출 구현에 관한 연구)

  • Shin, Hyun Jong;Yu, Hyeung Keun
    • Journal of Satellite, Information and Communications
    • /
    • v.10 no.3
    • /
    • pp.94-101
    • /
    • 2015
  • The display device is an important video information communication component that connects human and machine: it conveys information as characters, shapes, images, and patterns so that it can be recognized by eye, and it must provide key functions for displaying information quickly. A PPI scope on a cathode-ray tube (CRT) has traditionally performed this role of presenting the analyzed radar information. This research proposes a radar display that presents information from the received signal. The radar display applies large-capacity fixed-function graphics pipeline algorithms through the vertical blanking interval and buffer swapping of the display unit. It is also possible to implement the display system by running the algorithms in FPGA logic, synchronized without a high-performance graphics processing unit (GPU). In this paper, we improved affordability and reliability through the proposed approach, studying a radar display unit that replaces the CRT-based radar display with a flat-panel display.
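
The persistence (afterglow) effect described above can be emulated on an LCD by decaying the display buffer each sweep and painting the new echoes on top, much as a CRT phosphor fades. A minimal numpy sketch, assuming one 1-D echo profile per antenna angle and an illustrative decay factor (this is not the paper's FPGA pipeline):

```python
import numpy as np

def update_ppi(frame, sweep, angle_deg, center, decay=0.96):
    # Blend one radar sweep (echo amplitude vs. range) into a Cartesian
    # display buffer with exponential afterglow; `frame` is a float image.
    frame *= decay                                  # fade earlier echoes
    theta = np.deg2rad(angle_deg)
    r = np.arange(len(sweep))
    x = (center[0] + r * np.cos(theta)).astype(int)
    y = (center[1] - r * np.sin(theta)).astype(int)
    ok = (x >= 0) & (x < frame.shape[1]) & (y >= 0) & (y < frame.shape[0])
    frame[y[ok], x[ok]] = np.maximum(frame[y[ok], x[ok]], sweep[ok])
    return frame
```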

Effects of Imagery Tennis Training on Cerebral Activity

  • Jung, Seokwon;Choi, Min-sun;Kim, Min-uk;An, Hye-jin;Shin, Min-gyeong;Kwon, Oh-Young
    • Korean Journal of Clinical Laboratory Science
    • /
    • v.47 no.1
    • /
    • pp.46-50
    • /
    • 2015
  • Previous studies showed that visual imagery activates the occipital and posterior inferior temporal areas of the brain, and that damage to the occipital cortex impairs visual mental imagery. We studied the current-source distribution of electroencephalography (EEG) to observe neuronal activity during imagery tennis playing. Eleven healthy volunteers were enrolled; all were right-handed males and novices at tennis, with a mean age of 24.9 years. EEGs were recorded from scalp electrodes placed according to the International 10-20 System, with 25 channels including subtemporal electrodes. The recording session lasted 13 min and comprised 5 segments: resting-I, scenery slide show, resting-II, watching a tennis-game video, and imagery tennis playing, with durations of 3, 2, 3, 2, and 3 min respectively. Five artifact-free 3-s epochs were selected from each of the 'imagery tennis playing' and 'resting-II' segments. We performed frequency-domain analysis on these EEG epochs using a distributed current-source model. Statistical non-parametric maps (SnPMs) were computed between the 'imagery tennis playing' and 'resting-II' epochs (p<0.01). A significant change in current-source density was observed only in the alpha-2 frequency band (10~12 Hz): current-source density increased in the hippocampus, parahippocampus, and occipital fusiform gyrus of the right cerebral hemisphere (p<0.01). Imagery tennis playing may activate the hippocampal-occipital alpha networks of the nondominant hemisphere.
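
The alpha-2 restriction above (10~12 Hz) can be illustrated with a per-channel Welch band-power estimate. This is only a generic sketch; the study used a distributed current-source model rather than scalp band power, and the sampling rate below is an assumption:

```python
import numpy as np
from scipy.signal import welch

def alpha2_power(eeg, fs=250.0, band=(10.0, 12.0)):
    # Mean spectral power in the alpha-2 band for each channel of an
    # (n_channels, n_samples) EEG epoch.
    f, psd = welch(eeg, fs=fs, nperseg=int(fs))
    mask = (f >= band[0]) & (f <= band[1])
    return psd[:, mask].mean(axis=1)
```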

Design of an Around View Algorithm Using Wireless Cameras (무선 카메라를 이용한 어라운드 뷰 알고리즘 설계)

  • Kim, Gyu-Hyun;Jang, Jong-Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2013.10a
    • /
    • pp.466-469
    • /
    • 2013
  • Cars today are being developed and sold with electronic devices that provide convenience and safety for the driver. Current in-car electronics such as the black box and parking assistance help when reversing, and the black box is needed at the time of an accident; many such products are advertised on the after-market. However, these products show only the rear or front view, so while driving, or at the moment of an accident, the left and right lateral boundaries of the car cannot be checked, and the devices currently on the market do not solve this problem. In this paper, we propose an around view algorithm designed to provide the driver with an integrated video black box covering the front, rear, and both sides of the vehicle.
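
The core of such an around view is warping each camera's ground-plane view into a common top-down frame and compositing the results. A minimal OpenCV sketch, under the assumption that per-camera homographies have already been calibrated (the calibration and the naive overlap handling are placeholders, not the authors' design):

```python
import cv2
import numpy as np

def around_view(frames, homographies, canvas_size=(600, 600)):
    # Warp each camera frame (front/rear/left/right) with its calibrated
    # homography into a shared bird's-eye canvas and composite them.
    canvas = np.zeros((canvas_size[1], canvas_size[0], 3), dtype=np.uint8)
    for frame, H in zip(frames, homographies):
        warped = cv2.warpPerspective(frame, H, canvas_size)
        canvas = np.maximum(canvas, warped)     # keep brighter pixel where views overlap
    return canvas
```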

Design and Implementation of AR Model based Automatic Identification and Restoration Scheme for Line Scratches in Old Films (AR 모델 기반의 고전영화의 긁힘 손상의 자동 탐지 및 복원 시스템 설계와 구현)

  • Han, Ngoc-Soc;Kim, Seong-Whan
    • The KIPS Transactions: Part B
    • /
    • v.17B no.1
    • /
    • pp.47-54
    • /
    • 2010
  • Old archived film shows two major defects: line scratches and blobs. In this paper, we present the design and implementation of an automatic video restoration system for line scratches observed in archived film. We use an autoregressive (AR) image model because, with our PAST-PRESENT model and sampling pattern, the image generation process can be treated as stochastic and, specifically, autoregressive. We designed a locality-maximizing scanning pattern that generates a nearly stationary, time-like series of pixels, which is a strong requirement for a stochastic series to be autoregressive. The sampled pixel series undergoes filtering and model fitting with the Durbin-Levinson algorithm before the interpolation step. We designed a three-stage film restoration system comprising (1) film acquisition from VHS tapes, (2) simple line scratch detection and restoration, and (3) manual blob identification and a sophisticated inpainting scheme. We implemented film acquisition and the simple inpainting scheme on a Texas Instruments DSP board (TMS320DM642 EVM), and our AR inpainting scheme on a PC for sophisticated restoration. We evaluated the scheme on two old Korean films, "Viva Freedom" and "Robot Tae-Kwon-V", and the experimental results show that it improves on Bertalmio's scheme in subjective quality (MOS), objective quality (PSNR), and especially restoration ratio (RR), which reflects how similar the result is to manual inpainting.
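
The model-fitting step named above, the Durbin-Levinson recursion, solves for AR prediction coefficients from the autocorrelation of the scanned pixel series; a scratched pixel can then be predicted from the pixels preceding it along the scan path. A minimal sketch of both (the locality-maximizing scan pattern and the filtering stages are not reproduced here):

```python
import numpy as np

def levinson_durbin(r, order):
    # AR prediction coefficients phi[1..order] and residual variance,
    # given the autocorrelation sequence r[0..order].
    phi = np.zeros(order + 1)
    err = r[0]
    for k in range(1, order + 1):
        acc = r[k] - np.dot(phi[1:k], r[1:k][::-1])
        kappa = acc / err
        new_phi = phi.copy()
        new_phi[k] = kappa
        new_phi[1:k] = phi[1:k] - kappa * phi[1:k][::-1]
        phi, err = new_phi, err * (1.0 - kappa ** 2)
    return phi[1:], err

def predict_pixel(history, phi):
    # One-step AR prediction of the next pixel from the len(phi) samples
    # that precede it along the scan path (history[-1] is the most recent).
    return float(np.dot(phi, history[-1:-len(phi) - 1:-1]))
```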

Development of Embedded RFID R/W System Using PXA255 ARM Chip (PXA255 ARM칩을 활용한 임베디드 RFID R/W 시스템 개발)

  • Hwang, G.H.;Jang, W.T.;Sim, H.J.
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.43 no.6 s.312
    • /
    • pp.61-67
    • /
    • 2006
  • This paper introduces an embedded RFID reader/writer (R/W) system built around the PXA255 ARM chip, which enables tag signals to be used for data and video processing over the IEEE 802.11 communication protocol. Embedded RFID R/W middleware was developed that transmits the database lookup result for a received tag signal via IEEE 802.11. The developed system consists of three parts: the PXA255 ARM chip (core part) with a 13.56 MHz RFID reader/writer, a wireless LAN for data communication with the server, and a TFT-LCD terminal. When the system receives a tag signal through the serial port, it transmits the data over the wireless LAN to the server and displays the image data returned by the server on the TFT-LCD screen. The middleware passes the tag signal received from the RFID R/W to the embedded system, which runs a socket program that connects to the Windows server via IEEE 802.11 and transmits the tag signal; the server program searches the database using this tag information and displays the result in the TFT-LCD window of the embedded system via IEEE 802.11.
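
The middleware flow above, reading a tag on the serial port and pushing it to the Windows server over 802.11, reduces to a small TCP client on the embedded side. A hedged sketch follows; the port number and the newline-terminated framing are placeholders, since the paper does not specify its wire protocol:

```python
import socket

def send_tag(tag_id, server_ip, port=5000):
    # Send a tag ID to the server over TCP and return its reply
    # (e.g. the database lookup result to show on the TFT-LCD).
    with socket.create_connection((server_ip, port), timeout=5) as sock:
        sock.sendall(tag_id.encode("ascii") + b"\n")
        return sock.recv(4096).decode("ascii", errors="replace")
```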

A Study on Treatment Target Position Verification Using an Electronic Portal Imaging Device and Fractionated Stereotactic Radiotherapy (EPID와 FSRT를 이용한 치료표적위치 검증에 관한 연구)

  • Lee, Dong-Hoon;Kwon, Jang-Woo;Park, Seung-Woo;Kim, Yoon-Jong;Lee, Dong-Han;Ji, Young-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.46 no.3
    • /
    • pp.44-51
    • /
    • 2009
  • It is very important to verify setup errors generated in cancer therapy using high-energy radiation and to perform radiation therapy precisely. In particular, verification of the treatment position is crucial in special techniques such as fractionated stereotactic radiotherapy (FSRT), which normally uses high doses and small field sizes to treat small intracranial lesions. To evaluate the developed FSRT system, the isocenter accuracy of the gantry, couch, and collimator was measured, and the total inaccuracy was less than ±1 mm. Precise beam targeting is crucial when using high-dose, small-field FSRT for small intracranial lesions. An EPID image of a 3 mm lead ball mounted at the isocenter with a 25 mm collimator cone was acquired, and after processing the EPID image the difference between the center of the 25 mm collimator cone and the center of the 3 mm ball was detected to within one pixel (0.76 mm). In this paper, radiation treatment efficiency is improved by performing precise radiation therapy with the developed video-based EPID and FSRT in near real time.
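
The verification measurement above, comparing the lead-ball centre with the collimator-cone field centre in the EPID image and converting the pixel offset at 0.76 mm per pixel, can be sketched with simple thresholded centroids. The thresholds are assumptions, not the authors' image-processing pipeline:

```python
import numpy as np

def ball_to_field_offset(epid_image, ball_thresh, field_thresh, pixel_mm=0.76):
    # Centroid of the dark lead ball vs. centroid of the bright cone field,
    # returned in millimetres (x, y); thresholds are illustrative.
    ys, xs = np.indices(epid_image.shape)
    ball = epid_image < ball_thresh
    field = epid_image > field_thresh
    ball_c = np.array([xs[ball].mean(), ys[ball].mean()])
    field_c = np.array([xs[field].mean(), ys[field].mean()])
    return (ball_c - field_c) * pixel_mm
```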