• Title/Summary/Keyword: Real Time Image Processing


Development of a High-Performance Vehicle Imaging Information System for an Efficient Vehicle Imaging Stabilization (효율적인 차량 영상 안정화를 위한 고성능 차량 영상 정보 시스템 개발)

  • Hong, Sung-Il;Lin, Chi-Ho
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.12 no.6
    • /
    • pp.78-86
    • /
    • 2013
  • In this paper, we propose the design of a high-performance vehicle imaging information system for efficient vehicle image stabilization. The proposed algorithm is divided into motion estimation and motion compensation. Motion estimation consists of local motion vector estimation, irregular local motion vector detection, and global motion vector estimation. Motion compensation corrects shake in the vehicle video image in four directions using the estimated global motion vector (GMV). The algorithm was implemented as a motion compensation technology chip by applying it as an IP for vehicle image stabilization. The experimental results prove the effectiveness of the proposed system compared with other methods: real-time processing stabilizes the moving-vehicle video without using memory, and block matching reduces the computation time of the arithmetic operations.
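A minimal sketch of the block-matching pipeline the abstract describes: local motion vectors are estimated per block by SAD search, a median suppresses irregular local vectors when forming the global motion vector, and the frame is shifted back by the GMV. Block size, search range, and the median rule are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def block_motion_vector(prev_blk, cur, y, x, search=4):
    """Find the (dy, dx) displacement minimizing SAD within a small search window."""
    h, w = prev_blk.shape
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + h > cur.shape[0] or xx + w > cur.shape[1]:
                continue
            sad = np.abs(prev_blk.astype(int) - cur[yy:yy+h, xx:xx+w].astype(int)).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

def global_motion_vector(prev, cur, block=16, search=4):
    """Estimate local MVs on an interior block grid (so every search window
    fits in the frame) and take their median as the GMV; the median damps
    irregular local vectors caused by moving foreground objects."""
    mvs = []
    for y in range(block, prev.shape[0] - 2 * block + 1, block):
        for x in range(block, prev.shape[1] - 2 * block + 1, block):
            mvs.append(block_motion_vector(prev[y:y+block, x:x+block], cur, y, x, search))
    med = np.median(np.array(mvs), axis=0).astype(int)
    return (int(med[0]), int(med[1]))

def compensate(frame, gmv):
    """Shift the frame opposite to the GMV (four-direction shake correction)."""
    return np.roll(frame, (-gmv[0], -gmv[1]), axis=(0, 1))
```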

Behavior Pattern Analysis System based on Temporal Histogram of Moving Object Coordinates. (이동 객체 좌표의 시간적 히스토그램 기반 행동패턴분석시스템)

  • Lee, Jae-kwang;Lee, Kyu-won
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2015.05a
    • /
    • pp.571-575
    • /
    • 2015
  • This paper proposes a temporal-histogram-based behavior pattern analysis algorithm that analyzes the movement features of moving objects in images input in real time. To track and analyze moving objects, background learning is first performed to separate the moving objects from the background. Each moving object extracted by background learning is identified using its center of gravity, and object tracking is performed by coordinate correlation. The start frame, end frame, coordinate information, and size information of each tracked object are stored and managed in a linked list. The temporal histogram defines movement feature patterns using x and y coordinates along the time axis; comparing the coordinates of objects reveals their movement features and behavior patterns. In a demonstration experiment, the proposed system achieved a tracking rate above 95% while sustaining a processing speed of 45-50 fps.
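The per-object record and temporal-histogram comparison described above can be sketched as follows; a Python list stands in for the linked list, and the L1 trajectory distance is an illustrative choice rather than the paper's measure.

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    """Per-object record kept per tracked object: start/end frame plus
    coordinate and size history (the abstract stores these in a linked list)."""
    start_frame: int
    end_frame: int = -1
    coords: list = field(default_factory=list)   # (frame, x, y) triples
    sizes: list = field(default_factory=list)

def update(track, frame, cx, cy, size):
    """Append one observation (center of gravity and size) to a track."""
    track.end_frame = frame
    track.coords.append((frame, cx, cy))
    track.sizes.append(size)

def temporal_histogram(track):
    """Movement-feature pattern: x, y coordinates indexed by frame (time axis)."""
    return {f: (x, y) for f, x, y in track.coords}

def pattern_distance(a, b):
    """Compare two objects' temporal histograms over their common frames."""
    ha, hb = temporal_histogram(a), temporal_histogram(b)
    common = sorted(set(ha) & set(hb))
    if not common:
        return float("inf")
    return sum(abs(ha[f][0] - hb[f][0]) + abs(ha[f][1] - hb[f][1])
               for f in common) / len(common)
```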


Detecting and Tracking Vehicles at Local Region by using Segmented Regions Information (분할 영역 정보를 이용한 국부 영역에서 차량 검지 및 추적)

  • Lee, Dae-Ho;Park, Young-Tae
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.10
    • /
    • pp.929-936
    • /
    • 2007
  • A novel vision-based scheme for extracting traffic parameters in real time is proposed in this paper. Vehicles are detected and tracked within a local region designated by an operator. The local region is divided into segmented regions using edges and frame differences, and the segmented regions are classified into vehicle, road, shadow, and headlight by statistical and geometrical features. Vehicles are detected from the classification result. Traffic parameters such as velocity, length, occupancy, and distance are estimated by tracking with template matching in the local region. Because no background image is used, the method can operate under various conditions of weather, time of day, and location. It performs well, with a 90.16% detection rate across various databases. If the camera direction, angle, and iris are adjusted to the operating conditions, we expect it to serve as the core of traffic monitoring systems.
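The region classification step can be illustrated with a toy rule-based classifier over the kinds of statistical features the abstract mentions (mean intensity, edge density, frame difference); the rules and thresholds here are invented for illustration and are not the paper's.

```python
def classify_region(mean_intensity, edge_density, frame_diff):
    """Assign a segmented region to one of the four classes in the paper.
    All features are assumed normalized to [0, 1]; thresholds are illustrative."""
    if frame_diff < 0.05:
        return "road"        # static region: almost no frame-to-frame change
    if mean_intensity > 0.85 and edge_density < 0.1:
        return "headlight"   # bright, nearly uniform moving blob
    if mean_intensity < 0.25 and edge_density < 0.2:
        return "shadow"      # dark, low-texture moving region
    return "vehicle"         # moving, textured, mid-brightness region
```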

Color-related Query Processing for Intelligent E-Commerce Search (지능형 검색엔진을 위한 색상 질의 처리 방안)

  • Hong, Jung A;Koo, Kyo Jung;Cha, Ji Won;Seo, Ah Jeong;Yeo, Un Yeong;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.109-125
    • /
    • 2019
  • As interest in intelligent search engines increases, various studies have been conducted to extract and utilize product features intelligently. In particular, when users search for goods in e-commerce search engines, the 'color' of a product is an important feature describing the product. Synonyms of color terms therefore need to be handled to produce accurate results for users' color-related queries. Previous studies have suggested a dictionary-based approach to processing synonyms for color features, but a dictionary-based approach cannot handle color-related terms that are not registered in the dictionary. To overcome this limitation, this research proposes a model that extracts RGB values from an internet search engine in real time and outputs similar color names based on the designated color information. First, a color term dictionary was constructed that includes color names and the R, G, B values of each color, drawn from the Korean color standard digital palette program and the Wikipedia color list, for basic color search. The dictionary was made more robust by adding 138 color names transliterated into Korean from English, with their corresponding RGB values, so the final color dictionary includes a total of 671 color names and corresponding RGB values. The proposed method starts from the specific color a user searched for and checks whether that color is present in the constructed dictionary. If the color exists in the dictionary, its RGB values there are used as the reference values of the retrieved color. If it does not, the top-5 Google image search results for the color are crawled and average RGB values are extracted from a central area of each image. To extract the RGB values from the images, a variety of approaches was attempted, since simply averaging the RGB values of the central area of an image has limits; clustering the RGB values in a certain area of the image and taking the average of the densest cluster as the reference values showed the best performance. Based on the reference RGB values of the searched color, the RGB values of all colors in the previously constructed dictionary are compared, and a candidate list is created from the colors within a range of ±50 for each of the R, G, and B values. Finally, using the Euclidean distance between these candidates and the reference RGB values of the searched color, the most similar colors, up to five, become the final outcome. To evaluate the usefulness of the proposed method, an experiment was performed in which 300 color names and corresponding RGB values were obtained by questionnaire and used to compare the RGB values produced by four different methods, including the proposed one. The average Euclidean distance in CIE-Lab using the proposed method was about 13.85, relatively low compared with 30.88 when using the synonym dictionary only and 30.38 when using the dictionary together with the Korean synonym website WordNet. Omitting the clustering step of the proposed method yielded an average Euclidean distance of 13.88, which implies that the DBSCAN clustering of the proposed method reduces the Euclidean distance. This research suggests a new color synonym processing method based on RGB values that combines the dictionary method with real-time synonym processing for new color names, removing the limitation of the conventional dictionary-based approach. It can contribute to improving the intelligence of e-commerce search systems, especially the color search feature.
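The core lookup, a ±50 per-channel filter followed by Euclidean-distance ranking, can be sketched as follows; the dictionary entries used here are illustrative, not the paper's 671-color dictionary.

```python
import math

def similar_colors(ref_rgb, color_dict, window=50, top_k=5):
    """Filter dictionary colors to those within ±window on every channel,
    then rank the survivors by Euclidean distance to the reference RGB
    and return up to top_k color names."""
    candidates = [
        (name, rgb) for name, rgb in color_dict.items()
        if all(abs(r - c) <= window for r, c in zip(ref_rgb, rgb))
    ]
    candidates.sort(key=lambda item: math.dist(ref_rgb, item[1]))
    return [name for name, _ in candidates[:top_k]]
```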

Distance and Speed Measurements of Moving Object Using Difference Image in Stereo Vision System (스테레오 비전 시스템에서 차 영상을 이용한 이동 물체의 거리와 속도측정)

  • 허상민;조미령;이상훈;강준길;전형준
    • Journal of the Korea Computer Industry Society
    • /
    • v.3 no.9
    • /
    • pp.1145-1156
    • /
    • 2002
  • A method to measure the speed and distance of a moving object using a stereo vision system is proposed. One of the most important factors in measuring the speed and distance of a moving object is the accuracy of object tracking. Accordingly, a background image algorithm is adopted to track rapidly moving objects, and a local opening operator algorithm is used to remove shadow and noise from the object. The extraction efficiency for moving objects is improved by an adaptive threshold algorithm that is independent of brightness variation. Because the left and right central points are compensated, the speed and distance of the object can be measured more exactly. Using the background image algorithm and the local opening operator algorithm, the computational load is reduced, making real-time processing of the speed and distance of the moving object possible. The simulation results show that the background image algorithm tracks the moving object more rapidly than the other algorithms, and that applying the adaptive threshold algorithm improves target extraction efficiency by reducing the candidate areas. Since the central point of the target is compensated using binocular parallax, the measurement error for the speed and distance of the moving object is reduced. The measurement error rates for the distance from the stereo camera to the moving object and for the speed of the moving object are 2.68% and 3.32%, respectively.
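The underlying stereo geometry can be sketched with the standard binocular parallax formula Z = f·B/d; the focal length, baseline, and pixel coordinates below are illustrative values, not the paper's setup.

```python
def stereo_distance(focal_px, baseline_m, x_left, x_right):
    """Distance from binocular parallax: Z = f * B / disparity, where the
    disparity is the horizontal offset between the left and right image
    positions of the object's (compensated) central point."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("object in front of the cameras must have positive disparity")
    return focal_px * baseline_m / disparity

def speed_along_axis(z1, z2, dt):
    """Speed along the optical axis from two distance measurements dt apart."""
    return abs(z2 - z1) / dt
```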


Real-time Color Recognition Based on Graphic Hardware Acceleration (그래픽 하드웨어 가속을 이용한 실시간 색상 인식)

  • Kim, Ku-Jin;Yoon, Ji-Young;Choi, Yoo-Joo
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.14 no.1
    • /
    • pp.1-12
    • /
    • 2008
  • In this paper, we present a real-time algorithm for recognizing vehicle color from indoor and outdoor vehicle images based on GPU (Graphics Processing Unit) acceleration. In the preprocessing step, we construct feature vectors from sample vehicle images of different colors. We then combine the feature vectors for each color and store them as a reference texture to be used on the GPU. Given an input vehicle image, the CPU constructs its feature vector, and the GPU compares it with the sample feature vectors in the reference texture. The similarities between the input feature vector and the sample feature vectors for each color are measured, and the result is transferred back to the CPU to recognize the vehicle color. The output is categorized into seven colors: three achromatic colors (black, silver, and white) and four chromatic colors (red, yellow, blue, and green). Feature vectors are built from histograms of hue-saturation pairs and hue-intensity pairs, with a weight factor applied to the saturation values. Our algorithm achieves a color recognition rate of 94.67% by using a large number of sample images captured in various environments, generating feature vectors that distinguish different colors, and utilizing an appropriate likelihood function. We also accelerate color recognition by exploiting the parallel computation capability of the GPU. In the experiments, we constructed a reference texture from 7,168 sample images, 1,024 for each color. The average time to generate a feature vector is 0.509 ms for a 150×113 image. After the feature vector is constructed, GPU-based color recognition takes 2.316 ms on average, 5.47 times faster than executing the algorithm on the CPU. Our experiments were limited to vehicle images, but the algorithm can be extended to input images of general objects.
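The hue-saturation histogram features can be sketched as follows; the bin count, the way the saturation weight is applied, and histogram intersection as the similarity measure are illustrative choices, since the paper's exact likelihood function is not given in the abstract.

```python
import numpy as np

def hs_histogram(hsv_pixels, bins=8, sat_weight=2.0):
    """Normalized 2D histogram over hue-saturation pairs, with a weight
    factor emphasizing saturation, flattened into a feature vector.
    hsv_pixels: (N, 2) array of (hue, saturation) values in [0, 1)."""
    h = hsv_pixels[:, 0]
    s = np.clip(hsv_pixels[:, 1] * sat_weight, 0, 1 - 1e-9)
    hist, _, _ = np.histogram2d(h, s, bins=bins, range=[[0, 1], [0, 1]])
    v = hist.ravel()
    return v / v.sum()

def color_similarity(v1, v2):
    """Histogram intersection between two normalized feature vectors."""
    return np.minimum(v1, v2).sum()
```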

Testimony of the Real World, Documentary-Animation (현실세계의 증언, 다큐멘터리-애니메이션 분석)

  • Oh, Jin-Hee
    • Cartoon and Animation Studies
    • /
    • s.45
    • /
    • pp.27-50
    • /
    • 2016
  • The present study argues that documentary-animation films, which are based on actual human voices at the level of representation, constitute a new expansion of the animation medium as testimony to the real world. Animation films are produced with such diverse techniques that they are complex to the point of being indefinable, and documentary films, though grounded in objective representation, grow in complexity through artificial interventions such as direction and digital image processing. Having emerged as a hybrid of the two media, documentary-animation films draw actual events and elements into themselves, so that they conceptually share reality-based narratives while being visually characterized by the trappings of animation. Generally classified as 'animated documentaries', the genre triggered discussion following the release of , a work often mistaken as having used rotoscoping to transform live action. Analyzed in detail, however, the work presents itself as an ambiguous medium in which the characteristics of animation, a virtual simulacrum without reality, and of documentary, grounded in the objective indexicality of its referents, coexist through its mixed use of typical animation techniques, 3D programs, and live-action images. The works discussed in the present study, , , and , share the characteristics of the documentary medium in that their narratives develop as testimonies of historical figures, yet they are at the same time connected to animation by their production techniques and direction. Consequently, this medium must be discussed as a new expansion rather than subsumed under the existing classification system, a presupposition indispensable for facing the reality of the works directly and developing the discussion. Through works that directly use the interviewees' voices yet do not abandon the characteristics of animation, the present study seeks to define documentary-animation films and to discuss the possibilities of a medium that has expanded as testimony to the real world.

Comparative Study of Fish Detection and Classification Performance Using the YOLOv8-Seg Model (YOLOv8-Seg 모델을 이용한 어류 탐지 및 분류 성능 비교연구)

  • Sang-Yeup Jin;Heung-Bae Choi;Myeong-Soo Han;Hyo-tae Lee;Young-Tae Son
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.30 no.2
    • /
    • pp.147-156
    • /
    • 2024
  • The sustainable management and enhancement of marine resources are increasingly important issues worldwide. This study was conducted in response to these challenges, focusing on the development and performance comparison of fish detection and classification models as part of a deep-learning-based technique for assessing the effectiveness of marine resource enhancement projects initiated by the Korea Fisheries Resources Agency. The aim was to select the optimal model by training various sizes of YOLOv8-Seg models on a fish image dataset and comparing their performance metrics. The dataset consisted of 36,749 images and label files covering 12 fish species, with data diversity enhanced by augmentation during training. When five YOLOv8-Seg models were trained and validated under identical conditions, the medium-sized YOLOv8m-Seg model showed high learning efficiency and excellent detection and classification performance, with the shortest training time of 13 h 12 min, an mAP of 0.933, and an inference speed of 9.6 ms. Considering the balance among the performance metrics, it was deemed the most efficient model for meeting real-time processing requirements. Such real-time fish detection and classification models could enable effective surveys of marine resource enhancement projects, suggesting the need for ongoing performance improvement and further research.

The Study of New Reconstruction Method for Brain SPECT on Dual Detector System (Dual detector system에서 Brain SPECT의 new reconstruction method의 연구)

  • Lee, Hyung-Jin;Kim, Su-Mi;Lee, Hong-Jae;Kim, Jin-Eui;Kim, Hyun-Joo
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.13 no.1
    • /
    • pp.57-62
    • /
    • 2009
  • Purpose: Brain SPECT studies are more sensitive to motion than other studies. In particular, when the 1-day subtraction method is applied to Diamox SPECT, a shorter study time is needed to prevent re-examination. Because the triple-head camera at Seoul National University Hospital was to be retired, we needed new study conditions and an analysis method for a dual-detector system. We therefore tried to improve image quality and to make the dual-head study time equivalent to that of the triple head by using a new analysis program. Materials and Methods: Using an IEC phantom, we estimated contrast, SNR, and FWHM. For a Hoffman 3D brain phantom, which resembles a real brain, we assumed that 5% of the injected dose was distributed in brain tissue. For comparison with the existing FBP method, we used a fan-beam collimator, and we applied 15 sec and 25 sec/frame for each SPECT study using LEHR and LEUHR collimators. We used the OSEM2D and Onco-Flash3D reconstruction methods and also compared reconstructions with and without 5 mm Gaussian post-filtering. Attenuation correction was applied manually. We then performed brain SPECT on a patient injected with 15 mCi of 99mTc-HMPAO according to the phantom results, and technologists, MDs, and PhDs evaluated the outcome. Results: For the IEC phantom, Flash3D reconstruction was better than the existing FBP and OSEM2D methods. According to the evaluation, Flash3D needs 5 mm post-filtering at both 15 sec and 25 sec, and subset 8 with 8 iterations is appropriate for Flash3D. OSEM2D also needs post-filtering; subset 4 with 8 iterations is appropriate for 15 sec, and subset 8 with 12 iterations for 25 sec. Regarding the injected dose and study time per patient, the combination of 15 sec/frame, an LEHR collimator, the Flash3D analysis program with subset 8 and 8 iterations, and 5 mm Gaussian post-filtering is the most appropriate. On the other hand, the LEUHR collimator was not appropriate for the 1-day subtraction method of the Diamox study because of its lower sensitivity. Conclusions: We demonstrated that the dual camera offers the same short-study-time advantage as the triple gamma camera, and we obtained good results in changing from the existing fan-beam collimator to a parallel collimator. In addition, the resolution and contrast of the new method were better than those of the FBP method, and it can improve the sensitivity and accuracy of the image because less subjectivity is introduced than with the Metz filter of FBP. We expect better image quality and shorter study times for brain SPECT on dual-detector systems.


A Study on Automatic Interface Generation by Protocol Mapping (Protocol Mapping을 이용한 인터페이스 자동생성 기법 연구)

  • Lee Ser-Hoon;Kang Kyung-Goo;Hwang Sun-Young
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.31 no.8A
    • /
    • pp.820-829
    • /
    • 2006
  • IP-based design methodology has been widely employed in SoC design to reduce design complexity and cope with time-to-market pressure. Because current mobile systems demand high performance, embedded SoC designs need multi-processors to manage highly complex data processing such as multimedia, DMB, and image processing in real time. Interface modules are required for communication between system buses and processors, since many IPs employ different protocols, and high-performance processors require interface modules that minimize data transmission latency during read-write operations to enhance the performance of the overall system. This paper proposes an automatic interface generation system based on an FSM generated from the common protocol description sequence of a bus and an IP. The proposed interface does not use a buffer, which stores data temporarily and causes data transmission latency. Experimental results show that the area of the interface circuits generated by the proposed system is reduced by 48.5% on average compared with buffer-based interface circuits, while data transmission latency is reduced by 59.1% for single data transfers and by 13.3% for burst-mode data transfers. The proposed system thus makes it possible to generate high-performance interface circuits automatically.
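The protocol-mapping FSM idea can be illustrated with a toy three-state machine; the states and handshake signals below are invented for illustration and do not reproduce the paper's actual bus or IP protocols. The point mirrored from the abstract is that data is forwarded in the same cycle it is accepted, so no buffer (and no buffer latency) sits between bus and IP.

```python
# Toy protocol-mapping FSM: (state, (bus_req, ip_ready)) -> next state.
TRANSITIONS = {
    ("IDLE", (1, 1)): "ADDR",   # bus asserts a request and the IP is ready
    ("IDLE", (1, 0)): "IDLE",   # IP not ready: hold the request, nothing buffered
    ("IDLE", (0, 1)): "IDLE",
    ("IDLE", (0, 0)): "IDLE",
    ("ADDR", (1, 1)): "DATA",   # address phase accepted, enter data phase
    ("ADDR", (1, 0)): "ADDR",   # wait state
    ("ADDR", (0, 1)): "IDLE",   # request withdrawn
    ("ADDR", (0, 0)): "IDLE",
    ("DATA", (1, 1)): "DATA",   # burst mode: data streams straight through
    ("DATA", (1, 0)): "DATA",   # wait state; data is held on the bus, not stored
    ("DATA", (0, 1)): "IDLE",   # transfer complete
    ("DATA", (0, 0)): "IDLE",
}

def step(state, bus_req, ip_ready):
    """Advance the mapping FSM by one cycle; data moves combinationally while
    the FSM only sequences the handshake, i.e. the interface is bufferless."""
    return TRANSITIONS[(state, (bus_req, ip_ready))]
```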