• Title/Summary/Keyword: Real-time color matching


Robust Real-Time Visual Odometry Estimation for 3D Scene Reconstruction (3차원 장면 복원을 위한 강건한 실시간 시각 주행 거리 측정)

  • Kim, Joo-Hee;Kim, In-Cheol
    • KIPS Transactions on Software and Data Engineering / v.4 no.4 / pp.187-194 / 2015
  • In this paper, we present an effective visual odometry estimation system that tracks the real-time pose of a camera moving in 3D space. To meet the real-time requirement while making full use of the rich information in color and depth images, our system adopts a feature-based sparse odometry estimation method. After matching features extracted across image frames, it alternates additional inlier-set refinement with motion refinement to obtain a more accurate estimate of the camera odometry. Moreover, even when the remaining inlier set is insufficient, our system computes the final odometry estimate in proportion to the size of the inlier set, which greatly improves the tracking success rate. Through experiments with the TUM benchmark datasets and the implementation of a 3D scene reconstruction application, we confirmed the high performance of the proposed visual odometry estimation method.
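
The inlier-proportional final estimate can be sketched as follows; the damping rule and the `min_inliers` threshold are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def damped_odometry(delta, inlier_count, min_inliers=20):
    """Scale an estimated frame-to-frame motion toward zero (identity
    motion) in proportion to the inlier count, instead of rejecting
    the estimate outright when few inliers remain.

    delta: 6-vector twist (rx, ry, rz, tx, ty, tz).
    min_inliers: illustrative threshold at which confidence saturates.
    """
    # Confidence weight grows with the inlier count and saturates at 1.
    w = min(1.0, inlier_count / min_inliers)
    return w * np.asarray(delta, dtype=float)

# With a full inlier set the motion passes through unchanged;
# with few inliers it is damped rather than discarded.
full = damped_odometry([0.01, 0, 0, 0.1, 0, 0], inlier_count=40)
weak = damped_odometry([0.01, 0, 0, 0.1, 0, 0], inlier_count=5)
```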

Real Time Lip Reading System Implementation in Embedded Environment (임베디드 환경에서의 실시간 립리딩 시스템 구현)

  • Kim, Young-Un;Kang, Sun-Kyung;Jung, Sung-Tae
    • The KIPS Transactions:PartB / v.17B no.3 / pp.227-232 / 2010
  • This paper proposes a real-time lip reading method for the embedded environment. Because an embedded system has far fewer resources than a PC, a lip reading system built for the PC environment cannot run on it in real time. To solve this problem, this paper proposes lip-region detection, lip feature extraction, and spoken-word recognition methods suited to the embedded environment. First, the face region is detected using skin color information in order to locate the lip region accurately; the exact lip region is then found from the positions of both eyes in the detected face region and their geometric relations. To extract features robust to lighting variation in changing surroundings, histogram matching, lip folding, and the RASTA filter were applied, and the features extracted by principal component analysis (PCA) were used for recognition. In tests on an embedded platform with an 806 MHz CPU and 128 MB RAM, the processing time ranged from 1.15 to 2.35 s depending on the utterance, and 139 of 180 words were recognized, a recognition rate of 77%.
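
Histogram matching, one of the lighting-normalization steps named above, can be sketched with a CDF lookup table; this is a generic grayscale version, not the paper's implementation:

```python
import numpy as np

def match_histogram(source, reference):
    """Map the gray-level distribution of `source` onto that of
    `reference` by matching their cumulative histograms, which
    normalizes global lighting differences between frames."""
    src = np.asarray(source, dtype=np.uint8)
    ref = np.asarray(reference, dtype=np.uint8)
    src_cdf = np.cumsum(np.bincount(src.ravel(), minlength=256)) / src.size
    ref_cdf = np.cumsum(np.bincount(ref.ravel(), minlength=256)) / ref.size
    # For each source level, pick the reference level with the closest CDF.
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[src]
```

Applied to a dark lip-region crop with a well-lit reference frame, the output adopts the reference's brightness distribution.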

Development of a Ubiquitous Vision System for Location-awareness of Multiple Targets by a Matching Technique for the Identity of a Target; a New Approach

  • Kim, Chi-Ho;You, Bum-Jae;Kim, Hag-Bae
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2005.06a / pp.68-73 / 2005
  • Various techniques for detecting and tracking targets have been proposed in order to build real-world computer vision systems such as visual surveillance systems and intelligent transport systems (ITSs). In particular, a distributed vision system is required to realize these techniques over a wide area. In this paper, we develop a ubiquitous vision system for location-awareness of multiple targets. Each vision sensor in the system can segment a target exactly using color and motion information and can visually track multiple targets in real time. We construct the ubiquitous vision system as a multiagent system by regarding each vision sensor as an agent (a vision agent). The problem of matching a target's identity during handover between vision agents is then solved by a protocol-based approach, for which we propose the Identified Contract Net (ICN) protocol. The ICN protocol is independent of the number of vision agents and needs no calibration between them, which improves the speed, scalability, and modularity of the system. We applied the ICN protocol in the ubiquitous vision system we constructed, and the system produced reliable results, with the protocol operating successfully across several experiments.
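
A contract-net style handover of the kind the ICN protocol builds on might look like the following sketch; the agent names, the exact-match bid score, and the `handover` helper are all illustrative, not the paper's API:

```python
class VisionAgent:
    """One camera node. `visible` holds signatures of targets it
    currently sees (a plain value stands in for a real color/motion
    signature here)."""
    def __init__(self, name, visible=()):
        self.name = name
        self.visible = list(visible)
        self.tracked = {}   # target_id -> signature

    def bid(self, signature):
        # Stub scoring: 1.0 on an exact signature match, else no bid.
        return 1.0 if signature in self.visible else None

def handover(target_id, signature, agents):
    """Announce the lost target, collect bids, and award tracking to
    the best bidder. The identity travels with the award, so no camera
    calibration or overlapped field of view is needed."""
    bids = [(a.bid(signature), a) for a in agents]
    bids = [(s, a) for s, a in bids if s is not None]
    if not bids:
        return None
    _, winner = max(bids, key=lambda b: b[0])
    winner.tracked[target_id] = signature
    return winner.name
```

Because the announcement goes to every agent and the award carries the target's identity, the scheme scales with the number of vision agents without pairwise calibration.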


Automatic Pattern Setting System Reacting to Customer Design

  • Yuan, Ying;Huh, Jun-Ho
    • Journal of Information Processing Systems / v.15 no.6 / pp.1277-1295 / 2019
  • With advances in the technology, digital printing is being widely introduced into the mass production of clothing factories. At the same time, many fashion platforms let customers participate through digital printing and provide a tool for customers to make their own designs. In the production stage, however, there is no adequate solution for automatically converting a customer's design into a print-ready file beyond designating a square area for the customer's pattern. That is, if 30 different designs come in from customers for one shirt, designers must reproduce each design on the clothing pattern in the same location and at the same angle, which requires a great deal of manpower. It is therefore necessary to develop a technology that lets the customer make the design and, at the same time, reflects it in the clothing pattern defined in relation to the existing digital-printing pattern. This study yields a clothing pattern for digital printing that reflects a customer's design in real time by matching the diagram area where the customer designs on a given clothing model with the area of the standard pattern that carries the customer's actual design information. Designers can replace the complex mapping work of programmers with a simple area-matching operation. As there is no limit on clothing designs, the varied fashion creations of designers and the diverse customization demands of customers can be satisfied at low cost and high efficiency. The method is not restricted to T-shirts or eco-bags but can be applied to all woven wear, including men's, women's, and children's clothing, except knitwear.
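
In the axis-aligned case, the area-matching idea of mapping a point of the customer's design from the diagram area onto the matched pattern area reduces to a normalized box-to-box transform; this is a simplification for illustration, since the paper's matched areas need not be rectangles:

```python
def map_design_point(pt, diagram_box, pattern_box):
    """Map a point placed inside the customer's diagram area onto the
    matched area of the standard clothing pattern.

    Boxes are (x, y, width, height) in their own coordinate systems.
    """
    (dx, dy, dw, dh), (px, py, pw, ph) = diagram_box, pattern_box
    u = (pt[0] - dx) / dw   # normalized position inside the diagram
    v = (pt[1] - dy) / dh
    return (px + u * pw, py + v * ph)
```

Every point of an incoming design can be pushed through this mapping, so the same placement and angle land on the pattern without per-design manual work.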

Multi-view Generation using High Resolution Stereoscopic Cameras and a Low Resolution Time-of-Flight Camera (고해상도 스테레오 카메라와 저해상도 깊이 카메라를 이용한 다시점 영상 생성)

  • Lee, Cheon;Song, Hyok;Choi, Byeong-Ho;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.4A / pp.239-249 / 2012
  • Recently, virtual view generation using depth data has been employed to support advanced stereoscopic and auto-stereoscopic displays. Although depth data is invisible to the user during 3D video rendering, its accuracy is very important, since it determines the quality of the generated virtual views. Much related work enhances depth by exploiting a time-of-flight (TOF) camera. In this paper, we propose a fast 3D scene capturing system using one TOF camera at the center and two high-resolution color cameras at the sides. Since depth data is needed for both color cameras, we obtain depth for the two views from the center camera using a 3D warping technique. Holes in the warped depth maps are filled by referring to the surrounding background depth values. To reduce mismatches of object boundaries between the depth and color images, we apply a joint bilateral filter to the warped depth data. Finally, using the two color images and depth maps, we generate 10 additional intermediate views. To realize a fast capturing system, we implemented the proposed system with multi-threading. Experimental results show that the proposed system captures the two viewpoints' color and depth videos in real time and generates the 10 additional views at 7 fps.
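
A joint bilateral filter of the kind applied to the warped depth can be sketched as below; the unoptimized per-pixel loops and the sigma values are illustrative, not the paper's implementation:

```python
import numpy as np

def joint_bilateral(depth, guide, radius=2, sigma_s=2.0, sigma_r=10.0):
    """Smooth a (warped) depth map with weights taken from the color
    guide image, so depth edges snap to color edges instead of
    straddling them."""
    h, w = depth.shape
    out = np.zeros((h, w), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    pad_d = np.pad(depth.astype(float), radius, mode='edge')
    pad_g = np.pad(guide.astype(float), radius, mode='edge')
    for y in range(h):
        for x in range(w):
            patch_d = pad_d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            patch_g = pad_g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range weight from the guide image, not from depth itself.
            range_w = np.exp(-(patch_g - guide[y, x])**2 / (2 * sigma_r**2))
            wgt = spatial * range_w
            out[y, x] = (wgt * patch_d).sum() / wgt.sum()
    return out
```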

The Comparative Analysis of 3D Software Virtual and Actual Wedding Dress

  • Yuan, Xin-Yi;Bae, Soo-Jeong
    • Journal of Fashion Business / v.21 no.6 / pp.47-65 / 2017
  • This study compares a wedding dress made entirely in 3D software with the actual dress worn by a real model, using a set of criteria for comparative analysis. The study was conducted through a literature review together with the production of the dresses: two wedding dresses for a small wedding ceremony were designed, and each design was produced both as a 3D garment and as an actual garment. The results are as follows. First, the 3D whole-body scanner reflects exact human body measurements, but there were difficulties in matching what the customer wanted because of differences in skin color and hair style. Second, the dress pattern is altered much more easily than in real production. Third, the silhouette of the dress on the virtual and the actual person was nearly the same. Fourth, the textile tool was far more convenient because of real-time rendering of the virtual dresses. Lastly, the lace and bead decorations appeared flat, and the luster was duller than in reality. Prospectively, consumers will choose their own designs from a variety of options using an avatar, without wearing the actual dresses, and will ask for designs different from the presented ones by making corrections themselves. Through this process, the consumer would actively participate in the design, a step that would finally lead to two-way designing rather than the one-way design of the present.

Moving Object Tracking Using MHI and M-bin Histogram (MHI와 M-bin Histogram을 이용한 이동물체 추적)

  • Oh, Youn-Seok;Lee, Soon-Tak;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology / v.9 no.1 / pp.48-55 / 2005
  • In this paper, we propose an efficient moving-object tracking technique for a multi-camera surveillance system. The color CCD cameras used in this system are network cameras with their own IP addresses. Input images are transmitted to the media server over a wireless connection among the server, a bridge, and an access point (AP). The tracking system forwards the received images over the network to the tracking module, which tracks moving objects in real time using a color-matching method. We compose two sets of cameras, and when an object leaves the field of view (FOV), we perform a hand-over so that tracking can continue. During hand-over, we use Motion History Information (MHI) based on color information together with an M-bin histogram for exact tracking. The MHI gives the direction and velocity of the object, and this information helps predict the object's next location. We therefore obtain better speed and stability than template matching based on the M-bin histogram alone, and we verified this result by experiment.
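
The motion-history update that yields the direction and velocity cues can be sketched as a generic timestamp ramp; the timestamps and the `duration` window below are illustrative, not the paper's parameters:

```python
import numpy as np

def update_mhi(mhi, motion_mask, timestamp, duration=1.0):
    """Write the current timestamp wherever motion was detected and
    clear entries older than `duration`. The resulting brightness ramp
    encodes where the object was most recently, i.e. its direction and
    speed of travel."""
    mhi = np.where(motion_mask, float(timestamp), mhi)
    mhi[mhi < timestamp - duration] = 0.0
    return mhi
```

The gradient of the ramp points along the recent motion, which is what lets the tracker predict the object's next location.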


A Ubiquitous Vision System based on the Identified Contract Net Protocol (Identified Contract Net 프로토콜 기반의 유비쿼터스 시각시스템)

  • Kim, Chi-Ho;You, Bum-Jae;Kim, Hagbae
    • The Transactions of the Korean Institute of Electrical Engineers D / v.54 no.10 / pp.620-629 / 2005
  • In this paper, a new protocol-based approach is proposed for the development of a ubiquitous vision system. The approach treats the ubiquitous vision system as a multiagent system, so each vision sensor can be regarded as an agent (a vision agent). Each vision agent independently performs exact segmentation of a target using color and motion information, real-time visual tracking of multiple targets, and location estimation by a simple perspective transform. The problem of matching a target's identity during handover between vision agents is solved by the Identified Contract Net (ICN) protocol implemented for this approach. The protocol-based approach is independent of the number of vision agents and, moreover, needs neither calibration nor overlapped regions between vision agents. The ICN protocol therefore improves the speed, scalability, and modularity of the system. The approach was successfully applied to our ubiquitous vision system and operated well through several experiments.
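
The "simple perspective transform" used for location estimation amounts to applying a ground-plane homography to an image point; the matrix `H` below is an illustrative stand-in, not calibration data from the paper:

```python
import numpy as np

def ground_position(pixel, H):
    """Project an image point to floor-plane coordinates with a 3x3
    homography H (in practice H encodes the camera's pose over the
    ground plane)."""
    p = H @ np.array([pixel[0], pixel[1], 1.0])
    return p[:2] / p[2]   # perspective division
```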

A Recognition and Acquisition Method of Distance Information in Direction Signs for Vehicle Location (차량의 위치 파악을 위한 도로안내표지판 인식과 거리정보 습득 방법)

  • Kim, Hyun-Tae;Jeong, Jin-Seong;Jang, Young-Min;Cho, Sang-Bock
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.1 / pp.70-79 / 2017
  • This study proposes a method to quickly and accurately acquire the distance information shown on direction signs. The proposed method consists of recognizing the sign, pre-processing to facilitate acquisition of the distance information, and acquiring the distance data. Sign recognition uses color detection with gamma correction to mitigate various noise issues. To facilitate the acquisition of distance data, tilt correction using linear factors and resolution correction using the Fourier transform are applied. To acquire the distance data, morphological operations are used to highlight the region, together with labeling and template matching. By acquiring the distance information on direction signs through these processes, the proposed system can output the distance remaining to the next junction. The method is fast enough for real-time processing, averaging 0.46 s per frame, with an accuracy of 0.65 in similarity value.
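
The template-matching step can be sketched as exhaustive normalized cross-correlation; this is a textbook version for illustration, not the paper's optimized pipeline:

```python
import numpy as np

def best_match(image, template):
    """Slide the template over the image and return the top-left
    corner with the highest normalized cross-correlation score."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    best, pos = -np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            win = image[y:y + th, x:x + tw]
            w = win - win.mean()
            denom = np.sqrt((w**2).sum() * (t**2).sum())
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, pos = score, (y, x)
    return pos, best
```

Matching a digit template against the labeled sign region this way localizes each distance numeral; the score at a perfect match is 1.0.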

A Mobile Landmarks Guide : Outdoor Augmented Reality based on LOD and Contextual Device (모바일 랜드마크 가이드 : LOD와 문맥적 장치 기반의 실외 증강현실)

  • Zhao, Bi-Cheng;Rosli, Ahmad Nurzid;Jang, Chol-Hee;Lee, Kee-Sung;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.1
    • /
    • pp.1-21
    • /
    • 2012
  • In recent years, the mobile phone has evolved extremely quickly. It is equipped with a high-quality color display, a high-resolution camera, and real-time accelerated 3D graphics, along with additional features such as a GPS sensor and a digital compass. This evolution significantly helps application developers use the power of smart-phones to create rich environments offering a wide range of services and exciting possibilities. In outdoor mobile AR research to date, there are many popular location-based AR services, such as Layar and Wikitude, but these systems have a major limitation: the AR contents are rarely overlaid accurately on the real target. Another line of research is context-based AR services using image recognition and tracking, in which the AR contents are precisely overlaid on the real target; however, real-time performance is restricted by the retrieval time, and such services are hard to implement over a large-scale area. In our work, we combine the advantages of location-based AR with those of context-based AR: the system first finds the surrounding landmarks and then performs recognition and tracking on them. The proposed system consists of two major parts, a landmark browsing module and an annotation module. In the landmark browsing module, the user can view augmented virtual information (information media) such as text, pictures, and video in the smart-phone viewfinder when pointing the phone at a certain building or landmark. For this, landmark recognition is applied, with SURF point-based features used in the matching process due to their robustness. To ensure that image retrieval and matching are fast enough for real-time tracking, we exploit the contextual devices (GPS and digital compass) to select from the database only the nearest landmarks in the pointed direction. The query image is matched only against this selected data, so matching speed increases significantly.
The second part is the annotation module. Instead of viewing only the augmented information media, the user can create virtual annotations based on linked data. Full knowledge of the landmark is not required: the user can simply find an appropriate topic by searching linked data with a keyword, which helps the system locate the target URI needed to generate correct AR contents. To recognize target landmarks, images of each selected building or landmark are captured from different angles and distances, a procedure that builds a connection between the real building and the virtual information in the Linked Open Data. In our experiments, the search range in the database is reduced by clustering images into groups according to their coordinates; a grid-based clustering method and the user's location are used to restrict the retrieval range. Compared with existing research using clusters and GPS information, where retrieval takes around 70~80 ms, our approach reduces the retrieval time to around 18~20 ms on average, so the total processing time drops from 490~540 ms to 438~480 ms. The improvement will be more pronounced as the database grows, demonstrating that the proposed system is efficient and robust in many cases.
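
The contextual filtering with GPS and digital compass can be sketched as a distance-and-bearing test over the landmark database; the thresholds, landmark names, and the equirectangular distance approximation are illustrative assumptions, not the paper's values:

```python
import math

def candidate_landmarks(user, heading_deg, landmarks,
                        max_dist_m=500.0, fov_deg=60.0):
    """Keep only landmarks that are both near the user (GPS) and
    inside the pointed direction (digital compass), so image matching
    runs on a small candidate set instead of the whole database.

    user / landmark values are (lat, lon) in degrees.
    """
    lat0, lon0 = map(math.radians, user)
    out = []
    for name, (lat, lon) in landmarks.items():
        la, lo = math.radians(lat), math.radians(lon)
        # Equirectangular approximation is sufficient at city scale.
        x = (lo - lon0) * math.cos((la + lat0) / 2)
        y = la - lat0
        dist = 6371000.0 * math.hypot(x, y)
        bearing = math.degrees(math.atan2(x, y)) % 360
        diff = abs((bearing - heading_deg + 180) % 360 - 180)
        if dist <= max_dist_m and diff <= fov_deg / 2:
            out.append(name)
    return out
```

Only the returned candidates are run through SURF matching, which is what cuts the retrieval time.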