• Title/Summary/Keyword: Moving color

Search Result 342

Wide-QQVGA Flexible Full-Color Active-Matrix OLED Display with an Organic TFT Backplane

  • Nakajima, Yoshiki;Takei, Tatsuya;Tsuzuki, Toshimitsu;Suzuki, Mitsunori;Fukagawa, Hirohiko;Fujisaki, Yoshihide;Yamamoto, Toshihiro;Kikuchi, Hiroshi;Tokito, Shizuo
    • Proceedings of the Korean Information Display Society Conference / 2008.10a / pp.189-192 / 2008
  • A 5.8-inch wide-QQVGA flexible full-color active-matrix OLED display was fabricated on a plastic substrate. Low-voltage organic TFTs and high-efficiency phosphorescent OLEDs were used as the backplane and the emissive pixels, respectively. The fabricated display clearly showed moving color images at driving voltages below 15 V.


Quantitative measure for motion-induced artifacts in LCDs

  • Kim, Yu-Hoon;Chen, Qiao Song;Kang, Yoo-Jin;Kim, Choon-Woo
    • Proceedings of the Korean Information Display Society Conference / 2009.10a / pp.987-990 / 2009
  • Motion-induced artifacts on LCDs appear as blurred boundaries and/or color aberration between moving objects and the background. The perceived severity of these artifacts depends on the blur width as well as the color difference. This paper presents a quantitative measure for motion-induced artifacts on LCDs. The performance of the proposed measure is verified by calculating correlation coefficients between the proposed measures and the results of human visual tests performed on 240 Hz and 120 Hz scanning LCD TVs.
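
A core ingredient of such measures is the width of the blurred edge transition. The sketch below is illustrative, not the paper's actual metric: it models a hold-type LCD smearing a scrolling step edge by a moving average over the scroll distance (the speed `v` and the 10%-90% rise-width convention are assumptions) and measures the resulting width.

```python
import numpy as np

def blur_edge_width(profile, lo=0.1, hi=0.9):
    """Width (in pixels) between the 10% and 90% points of a rising edge."""
    p = np.asarray(profile, dtype=float)
    p = (p - p.min()) / (p.max() - p.min())   # normalize to [0, 1]
    i_lo = int(np.argmax(p >= lo))            # first sample above 10%
    i_hi = int(np.argmax(p >= hi))            # first sample above 90%
    return i_hi - i_lo

# Hold-type LCD model: a step edge scrolling at v px/frame is smeared
# by a moving average of width v while the eye tracks the motion.
v = 8                                    # scroll speed, px/frame (assumed)
step = np.repeat([0.0, 1.0], 50)         # ideal sharp edge
blurred = np.convolve(step, np.ones(v) / v, mode="same")
width = blur_edge_width(blurred)         # grows roughly linearly with v
```

For a full-frame-hold display the measured width grows roughly linearly with scroll speed, which is one reason higher scan rates (120 Hz, 240 Hz) reduce perceived blur.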


A Study on Applying Constant Luminance for Quality Enhancement of Moving Images (동영상 화질 개선을 위한 Constant Luminance 적용에 관한 연구)

  • Kim, Jin-Seo;Kang, Byoung-Ho;Cho, Maeng-Sub
    • Proceedings of the Korea Information Processing Society Conference / 2002.11a / pp.735-738 / 2002
  • The methods of generating, transmitting, and reproducing video signals used in TV systems, from black-and-white TV based solely on the luminance signal to today's digital HDTV, have come to demand higher definition and higher picture quality along with the remarkable advances of imaging systems such as HDTV and digital cinema. This paper presents data comparing and analyzing, through observer evaluation experiments, the image reproduction results of the conventional TV system and those of the constant luminance method.
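
The constant-luminance issue the paper evaluates can be illustrated numerically: in a conventional (non-constant-luminance) system, luma is a weighted sum of gamma-encoded components, so it does not carry the true luminance of saturated colors. A minimal sketch, assuming Rec. 709 weights and a simple power-law gamma (illustrative, not the paper's exact pipeline):

```python
# Rec. 709 luminance weights; a simple power-law gamma stands in for the
# real OETF/EOTF pair (an illustrative assumption).
GAMMA = 2.4
W = (0.2126, 0.7152, 0.0722)

def true_luminance(rgb_linear):
    """Luminance computed from linear-light RGB (constant luminance)."""
    return sum(w * c for w, c in zip(W, rgb_linear))

def luma(rgb_linear):
    """Conventional (non-constant-luminance) luma: gamma-encode each
    channel first, then take the weighted sum."""
    return sum(w * c ** (1 / GAMMA) for w, c in zip(W, rgb_linear))

blue = (0.0, 0.0, 1.0)                    # saturated blue
Y_true = true_luminance(blue)             # 0.0722
Y_display = luma(blue) ** GAMMA           # luminance implied by luma alone
# For saturated colors Y_display falls far below Y_true: the missing
# luminance must be carried by the chroma channels, which is the error
# that constant-luminance encoding avoids.
```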


Realtime Face Tracking using Motion Analysis and Color Information (움직임분석 및 색상정보를 이용한 실시간 얼굴추적)

  • Lee, Kyu-Won
    • Journal of the Korea Institute of Information and Communication Engineering / v.11 no.5 / pp.977-984 / 2007
  • A real-time face tracking algorithm using motion analysis of image sequences and color information is proposed. The motion area in the real-time image stream is first detected by calculating temporal derivatives; candidate pixels representing the face region are then extracted by fusion filtering with multiple color models; and real-time face tracking is performed by discriminating face components, including the eyes and lips. The stability of the tracking is improved by template matching between the face region in the image sequence and a reference template of the face components.
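
The first two stages described above (temporal derivatives for the motion area, color filtering for face-candidate pixels) can be sketched as follows; the CbCr skin range is a common rule of thumb, not the paper's fused multi-model filter, and the demo frames are synthetic:

```python
import numpy as np

def motion_mask(prev, curr, thresh=25):
    """Temporal derivative: pixels whose intensity changed between frames."""
    return np.abs(curr.astype(int) - prev.astype(int)) > thresh

def skin_mask(ycbcr, cb=(77, 127), cr=(133, 173)):
    """Rule-of-thumb skin range in the CbCr plane (assumed thresholds)."""
    Cb, Cr = ycbcr[..., 1], ycbcr[..., 2]
    return (Cb >= cb[0]) & (Cb <= cb[1]) & (Cr >= cr[0]) & (Cr <= cr[1])

# Synthetic demo: a bright 20x20 "face" patch moves 3 px to the right,
# so only its leading and trailing 3-px strips register as motion.
prev = np.zeros((60, 60), dtype=np.uint8)
curr = np.zeros_like(prev)
prev[20:40, 10:30] = 200
curr[20:40, 13:33] = 200
moving = motion_mask(prev, curr)
```

In the full algorithm the intersection of the motion mask and the color mask would supply the face-candidate pixels passed on to the component-discrimination stage.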

RDO-based joint bit allocation for MPEG G-PCC

  • Ye, Xiangyu;Cui, Li;Chang, Eun-Young;Cha, Jihun;Ahn, Jae Young;Jang, Euee S.
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2021.06a / pp.81-84 / 2021
  • In this paper, a rate-distortion optimization (RDO) model is proposed to find the joint bit allocation of geometry data and color data for the geometry-based point cloud compression (G-PCC) standard of the Moving Picture Experts Group (MPEG). The method constructs RD models for geometry and color data through a training process; the two rate-distortion (RD) models are then integrated, together with the decision of the parameter λ, to obtain the final RDO model. The experimental results show that the proposed method decreases the geometry Bjøntegaard delta bit rate by 20% and increases the color Bjøntegaard delta bit rate by 37% compared to the MPEG G-PCC TMC13v12.0 software.
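
The joint-allocation idea can be sketched with assumed power-law RD models D_i(R_i) = a_i R_i^(-b_i); the model form, coefficients, and the grid search below are illustrative stand-ins, not the paper's trained models or its λ decision:

```python
def allocate(a_g, b_g, a_c, b_c, R_total, steps=10000):
    """Grid-search the geometry/color bit split that minimizes total
    distortion D_g(R_g) + D_c(R_c) under a fixed rate budget."""
    best = None
    for i in range(1, steps):
        R_g = R_total * i / steps          # candidate geometry rate
        R_c = R_total - R_g                # remaining budget for color
        D = a_g * R_g ** (-b_g) + a_c * R_c ** (-b_c)
        if best is None or D < best[0]:
            best = (D, R_g, R_c)
    return best

# Illustrative coefficients (not fitted to any real point cloud):
D, R_g, R_c = allocate(a_g=2.0, b_g=1.0, a_c=1.0, b_c=1.0, R_total=3.0)
# For b_g = b_c = 1 the Lagrangian optimum satisfies R_g/R_c = sqrt(a_g/a_c).
```

The grid search reproduces the closed-form Lagrangian optimum here; a real encoder would instead derive λ from the fitted models and allocate analytically.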


Color Pattern Recognition and Tracking for Multi-Object Tracking in Artificial Intelligence Space (인공지능 공간상의 다중객체 구분을 위한 컬러 패턴 인식과 추적)

  • Jin, Tae-Seok
    • Journal of the Korean Society of Industry Convergence / v.27 no.2_2 / pp.319-324 / 2024
  • In this paper, the Artificial Intelligence Space (AI-Space) for human-robot interfacing is presented, which can enable human-computer interfacing, networked camera conferencing, industrial monitoring, and service and training applications. We present a method for representing, tracking, and following objects (human, robot, chair) by fusing distributed multiple vision systems in AI-Space. The article integrates color distributions into particle filtering, which provides a robust tracking framework under ambiguous conditions. We propose to track the moving objects by generating hypotheses not in the image plane but on a top-view reconstruction of the scene.
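
The color-distribution weighting at the heart of such trackers is typically a Bhattacharyya similarity between histograms. The toy tracker below (synthetic image, assumed patch size, noise, and seed) sketches the diffuse-weight-resample loop on the image plane rather than the paper's top-view reconstruction:

```python
import numpy as np

rng = np.random.default_rng(1)

def color_hist(patch, bins=8):
    h, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)

def bhattacharyya(p, q):
    """Similarity in [0, 1] between two normalized color histograms."""
    return float(np.sum(np.sqrt(p * q)))

image = np.zeros((60, 60))
image[30:40, 30:40] = 200                 # target: a 10x10 colored patch
ref = color_hist(image[30:40, 30:40])     # reference color model

def track(particles, image, iters=15, noise=2.0):
    """One tracker run: diffuse particles, weight by color similarity, resample."""
    for _ in range(iters):
        particles = np.clip(particles + rng.normal(0, noise, particles.shape), 0, 50)
        w = np.empty(len(particles))
        for i, (y, x) in enumerate(particles.astype(int)):
            w[i] = bhattacharyya(ref, color_hist(image[y:y + 10, x:x + 10])) + 1e-6
        w /= w.sum()
        particles = particles[rng.choice(len(particles), len(particles), p=w)]
    return particles

parts = track(rng.uniform(0, 50, (100, 2)), image)
est = parts.mean(axis=0)                  # particles concentrate on the patch
```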

Traffic Signal Detection and Recognition Using a Color Segmentation in a HSI Color Model (HSI 색상 모델에서 색상 분할을 이용한 교통 신호등 검출과 인식)

  • Jung, Min Chul
    • Journal of the Semiconductor & Display Technology / v.21 no.4 / pp.92-98 / 2022
  • This paper proposes a new method for traffic signal detection and recognition in the HSI color model. The proposed method first converts an ROI image from the RGB model to the HSI model to segment the color of a traffic signal. Second, the segmented colors are dilated by morphological processing to connect the signal light to the signal light case; finally, the light and case are extracted by their aspect ratio using connected-component analysis. The extracted components yield the detection and recognition of the traffic signal lights. The proposed method is implemented in C on a Raspberry Pi 4 system with a camera module for real-time image processing. The system was mounted in a moving vehicle and recorded video like a vehicle black box; each frame of the recorded video was extracted and used to test the method. The results show that the proposed method successfully detects and recognizes traffic signals.
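
The RGB-to-HSI conversion and hue-based segmentation step can be sketched as follows; the hue, saturation, and intensity thresholds for "red" are assumptions rather than the paper's tuned values, and the dilation and aspect-ratio stages are omitted:

```python
import numpy as np

def rgb_to_hsi(rgb):
    """RGB (floats in [0, 1]) -> H in degrees [0, 360), S and I in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - np.minimum.reduce([r, g, b]) / np.maximum(i, 1e-8)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-8
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = np.where(b <= g, theta, 360.0 - theta)   # reflect hue when B > G
    return h, s, i

def red_light_mask(rgb, s_min=0.5, i_min=0.2):
    h, s, i = rgb_to_hsi(rgb)
    red_hue = (h < 20) | (h > 340)               # assumed hue window for red
    return red_hue & (s > s_min) & (i > i_min)

img = np.zeros((4, 4, 3))
img[1, 1] = (0.9, 0.1, 0.1)    # red signal pixel
img[2, 2] = (0.1, 0.9, 0.1)    # green pixel, must be rejected
mask = red_light_mask(img)
```

Segmenting in HSI rather than RGB is what makes the thresholds tolerant of brightness changes: hue stays nearly constant as illumination varies.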

Object Detection Method on Vision Robot using Sensor Fusion (센서 융합을 이용한 이동 로봇의 물체 검출 방법)

  • Kim, Sang-Hoon
    • The KIPS Transactions:PartB / v.14B no.4 / pp.249-254 / 2007
  • A mobile robot with various types of sensors and a wireless camera is introduced. We show that this robot can detect objects well by combining the results of active sensors with an image processing algorithm. First, active sensors such as infrared and ultrasonic sensors are employed together to detect objects and to calculate the distance between the object and the robot in real time from the sensor outputs; the difference between the measured and calculated values is less than 5%. We focus on detecting the object region well with the image processing algorithm, since this gives robots the ability to work for humans. The paper suggests an effective visual detection system for moving objects with specified color and motion information. The proposed method includes an object extraction and definition process that uses color transformation and AWUPC computation to decide whether a moving object exists. Shape information and a signature algorithm are used to segment objects from the background regardless of shape changes. Weights are assigned to the results from each sensor and the camera, and the results are combined into a single value representing the probability of an object within the limited distance. The sensor fusion technique improves the detection rate by at least 7% over any individual sensor.
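
The final fusion step, assigning a weight to each sensor's score and combining them into one probability, can be sketched minimally (the weights and scores below are illustrative assumptions, not the paper's calibrated values):

```python
def fuse(ir, ultrasonic, camera, weights=(0.3, 0.3, 0.4)):
    """Weighted combination of per-sensor detection scores in [0, 1]
    into a single object-presence probability (weights assumed)."""
    return sum(w * s for w, s in zip(weights, (ir, ultrasonic, camera)))

p = fuse(ir=0.9, ultrasonic=0.8, camera=0.95)   # camera weighted highest
```

Because the weights sum to 1, the fused value stays in [0, 1] and degrades gracefully when any single sensor returns a low-confidence score.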

Moving Object Tracking Using Co-occurrence Features of Objects (이동 물체의 상호 발생 특징정보를 이용한 동영상에서의 이동물체 추적)

  • Kim, Seongdong;Chin, Seongah;Choo, Moonwon
    • Journal of Intelligence and Information Systems / v.8 no.2 / pp.1-13 / 2002
  • In this paper, we propose an object tracking system that identifies moving regions in color image sequences and determines the moving directions of pedestrians or vehicles. For a static camera, we suggest a new evaluation method that extracts a co-occurrence matrix with RGB feature vectors after analyzing and blocking the difference images within the camera's field of view. The features are the energy, entropy, contrast, maximum probability, inverse difference moment, and correlation of the RGB color vectors. We describe how to analyze and compute the correspondence of objects between adjacent frames. In the clustering step, we apply the fuzzy c-means (FCM) algorithm to the matching and clustering of the feature vectors (energy and entropy) obtained in the previous phase. In the matching phase, we also propose a method for establishing correspondence that tracks the motion of each object by clustering similar areas, computing object centers, and clustering around them for the same object, based on the membership function of the motion area in adjacent frames.
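
The co-occurrence features named above (energy, entropy, contrast, maximum probability) are computed from a normalized co-occurrence matrix. A minimal gray-level sketch for a single horizontal offset follows; the paper builds these per RGB channel on difference-image blocks, which this toy example does not reproduce:

```python
import numpy as np

def glcm(img, levels=4, dy=0, dx=1):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1   # count the pixel pair
    return m / m.sum()

def features(p):
    i, j = np.indices(p.shape)
    nz = p[p > 0]                                    # skip log2(0) terms
    return {
        "energy": float(np.sum(p ** 2)),
        "entropy": float(-np.sum(nz * np.log2(nz))),
        "contrast": float(np.sum(p * (i - j) ** 2)),
        "max_probability": float(p.max()),
    }

block = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [2, 2, 3, 3],
                  [2, 2, 3, 3]])
f = features(glcm(block))
```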


ROI Based Object Extraction Using Features of Depth and Color Images (깊이와 칼라 영상의 특징을 사용한 ROI 기반 객체 추출)

  • Ryu, Ga-Ae;Jang, Ho-Wook;Kim, Yoo-Sung;Yoo, Kwan-Hee
    • The Journal of the Korea Contents Association / v.16 no.8 / pp.395-403 / 2016
  • Recently, image processing has been used in many areas, and tracking of moving objects in real time is an active research topic. Popular methods for tracking an object include HOG (Histogram of Oriented Gradients) for tracking pedestrians and Codebook for background subtraction. However, object extraction is difficult when a moving object has a dynamic background in the image or undergoes severe lighting changes. In this paper, we propose a method of object extraction using depth-image and color-image features based on an ROI (Region of Interest). First, we set the ROI to the range where the object is located in the depth image, then find feature points in the color image. We then extract the object by creating a new contour from the convex hull points of the object and the feature points. Finally, we compare the proposed method with existing methods to evaluate how accurately the object is extracted.
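
The contour-construction step relies on the convex hull of the detected points. A standard monotone-chain hull (a generic algorithm, not necessarily the authors' implementation) can be sketched as:

```python
def cross(o, a, b):
    """Z-component of (a - o) x (b - o); > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()                  # drop points that make a right turn
            h.append(p)
        return h
    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]       # endpoints shared, drop duplicates

pts = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2), (1, 3)]
hull = convex_hull(pts)     # interior points (2, 2) and (1, 3) are discarded
```

In the proposed pipeline the hull of the object's feature points would serve as the starting contour that is then refined against the depth-image ROI.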