• Title/Summary/Keyword: Object Feature Extraction


ENHANCEMENT AND SMOOTHING OF HYPERSPECTRAL REMOTE SENSING DATA BY ADVANCED SCALE-SPACE FILTERING

  • Konstantinos, Karantzalos;Demetre, Argialas
    • Proceedings of the KSRS Conference
    • /
    • v.2
    • /
    • pp.736-739
    • /
    • 2006
  • While hyperspectral data are very rich in information, their processing poses several challenges such as computational requirements, noise removal and relevant information extraction. In this paper, the application of advanced scale-space filtering to selected hyperspectral bands was investigated. In particular, a pre-processing tool consisting of anisotropic diffusion and morphological leveling filtering has been developed, aiming at edge-preserving smoothing and simplification of hyperspectral data, procedures which are of fundamental importance for feature extraction and object detection. Two scale-space parameters define the extent of image smoothing (the number of anisotropic diffusion iterations) and image simplification (the scale of the morphological levelings). Experimental results demonstrated the effectiveness of the developed scale-space filtering for the enhancement and smoothing of hyperspectral remote sensing data, as well as its advantages in avoiding watershed over-segmentation problems and in edge detection.

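As a rough illustration of the smoothing stage described above, the following is a minimal sketch of Perona-Malik anisotropic diffusion applied to a single band, assuming NumPy; the iteration count plays the role of the paper's first scale-space parameter. The morphological leveling stage and the exact filter the authors developed are not reproduced here.

```python
import numpy as np

def anisotropic_diffusion(band, iterations=20, kappa=30.0, lam=0.2):
    """Perona-Malik anisotropic diffusion on a single 2-D band.

    `iterations` controls the extent of smoothing (the scale-space
    parameter mentioned in the abstract); `kappa` is the edge-stopping
    constant that preserves strong edges.
    """
    img = band.astype(np.float64).copy()
    for _ in range(iterations):
        # Finite differences toward the four neighbours.
        dn = np.roll(img, -1, axis=0) - img
        ds = np.roll(img,  1, axis=0) - img
        de = np.roll(img, -1, axis=1) - img
        dw = np.roll(img,  1, axis=1) - img
        # Edge-preserving conduction coefficients (small across strong edges).
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        img += lam * (cn * dn + cs * ds + ce * de + cw * dw)
    return img

if __name__ == "__main__":
    noisy = np.random.rand(64, 64)            # stand-in for one hyperspectral band
    smoothed = anisotropic_diffusion(noisy, iterations=15)
    print(smoothed.shape)
```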

Robust Recognition of 3D Object Using Attributed Relation Graph of Silhouette's (실루엣 기반의 관계그래프 이용한 강인한 3차원 물체 인식)

  • Kim, Dae-Woong;Baek, Kyung-Hwan;Hahn, Hern-Soo
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.25 no.7
    • /
    • pp.103-110
    • /
    • 2008
  • This paper presents a new approach to recognizing a 3D object using a single camera, based on the extended convex hull of its silhouette. It aims at minimizing the DB size and simplifying the processes of matching and feature extraction. For this purpose, two concepts are introduced: the extended convex hull and the measurable region. An extended convex hull consists of convex curved edges as well as convex polygons. A measurable region is the cluster of viewing vectors of a camera, represented as points on the orientation sphere, from which a specific set of surfaces can be measured. A measurable region is represented by the extended convex hull of the silhouette obtained by viewing the object from the center of the measurable region. Each silhouette is represented by a relation graph in which a node describes an edge using its type, length, reality, and components. Experimental results are included to show that the proposed algorithm works efficiently even when objects are overlapping and partially occluded. The time complexity for searching the object model in the database is O(N), where N is the number of silhouette models.
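
For illustration only, the sketch below models a silhouette's relation graph as a list of attributed edge nodes and performs the O(N) linear scan over stored models mentioned in the abstract. The node attributes follow the abstract (type, length, reality, components), but the similarity score and all class names are assumptions, not the paper's matching procedure.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EdgeNode:
    # Attributes taken from the abstract: type, length, reality, components.
    edge_type: str      # e.g. "line" or "curve"
    length: float
    reality: bool       # whether the edge is a real object boundary
    components: int

@dataclass
class SilhouetteGraph:
    model_id: str
    nodes: List[EdgeNode] = field(default_factory=list)

def graph_similarity(a: SilhouetteGraph, b: SilhouetteGraph) -> float:
    """A crude node-by-node score; the paper's attributed-graph matching
    is more involved than this illustrative comparison."""
    score = 0.0
    for na, nb in zip(a.nodes, b.nodes):
        if na.edge_type == nb.edge_type:
            score += 1.0 - abs(na.length - nb.length) / max(na.length, nb.length, 1e-6)
    return score / max(len(a.nodes), len(b.nodes), 1)

def search_database(query: SilhouetteGraph, db: List[SilhouetteGraph]) -> SilhouetteGraph:
    # One pass over the N stored silhouette models: O(N), as stated in the abstract.
    return max(db, key=lambda model: graph_similarity(query, model))
```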

3D Object Detection with Low-Density 4D Imaging Radar PCD Data Clustering and Voxel Feature Extraction for Each Cluster (4D 이미징 레이더의 저밀도 PCD 데이터 군집화와 각 군집에 복셀 특징 추출 기법을 적용한 3D 객체 인식 기법)

  • Cha-Young, Oh;Soon-Jae, Gwon;Hyun-Jung, Jung;Gu-Min, Jeong
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.15 no.6
    • /
    • pp.471-476
    • /
    • 2022
  • In this paper, we propose an object detection method using a 4D imaging radar, which was developed to overcome the weaknesses of cameras and LiDAR in bad weather. When data are measured and collected with a 4D imaging radar, the density of the point cloud data is low compared to LiDAR data. Exploiting the wide distances between objects that result from this low density, we propose a technique that clusters the points into objects and extracts the features of each object through voxels within the cluster. Furthermore, we propose an object detection method using the extracted features.
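
A minimal sketch of the cluster-then-voxelize idea, assuming DBSCAN (not named in the abstract) as the clustering step and a simple per-cluster occupancy grid as the voxel feature; it relies on NumPy and scikit-learn.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_and_voxelize(points, eps=1.5, min_samples=5, voxel_size=0.5):
    """Cluster a sparse radar point cloud, then build a per-cluster
    occupancy voxel grid as a simple feature for each detected object."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points[:, :3])
    features = {}
    for label in set(labels):
        if label == -1:                      # DBSCAN noise points
            continue
        cluster = points[labels == label, :3]
        # Voxelize relative to the cluster's own bounding box.
        origin = cluster.min(axis=0)
        idx = np.floor((cluster - origin) / voxel_size).astype(int)
        dims = idx.max(axis=0) + 1
        grid = np.zeros(dims, dtype=np.float32)
        grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0   # occupancy feature
        features[label] = grid
    return labels, features

if __name__ == "__main__":
    pcd = np.random.rand(200, 3) * 20.0      # stand-in for low-density radar PCD
    labels, feats = cluster_and_voxelize(pcd)
    print(len(feats), "clusters voxelized")
```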

A method of extracting edge line from range image using recognition features (거리 영상에서 인식 특정을 이용한 경계선 검출 기법)

  • 이강호
    • Journal of the Korea Society of Computer and Information
    • /
    • v.6 no.2
    • /
    • pp.14-19
    • /
    • 2001
  • This paper presents a new method of 3-D surface feature extraction using a quadratic polynomial expression. From a range image, we obtain an edge map through a modified scan-line technique. Using this edge map, we label the 3-dimensional object to divide it into regions and extract center and corner points from each region. Then we determine whether a segmented region is a planar or a curved surface from the quadric surface equation, and we calculate the coefficients of the planar or curved surface to represent the regions. In this article, we demonstrate the performance of the method on synthetic and real (Odetics) range images.
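
The planar-versus-curved decision can be illustrated by fitting a quadric (here a quadratic polynomial in x and y) to a segmented region by least squares and inspecting the second-order coefficients; this is a hedged sketch under that interpretation, not the paper's exact formulation.

```python
import numpy as np

def classify_region(points, tol=1e-3):
    """Fit z = ax^2 + bxy + cy^2 + dx + ey + f to a segmented region of a
    range image and decide planar vs curved from the quadratic coefficients."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    a, b, c = coeffs[:3]
    kind = "planar" if max(abs(a), abs(b), abs(c)) < tol else "curved"
    return kind, coeffs

if __name__ == "__main__":
    # A synthetic planar patch: z = 0.5x + 0.2y + 1.
    xs, ys = np.meshgrid(np.linspace(-1, 1, 20), np.linspace(-1, 1, 20))
    plane = np.column_stack([xs.ravel(), ys.ravel(), (0.5*xs + 0.2*ys + 1.0).ravel()])
    print(classify_region(plane)[0])         # -> "planar"
```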

Feature Extraction of 3-D Object Using Halftoning Image (Halftoning 영상을 이용한 3차원 특징 추출)

  • Kim, D.N.;Kim, S.Y.;Cho, D.S.
    • Proceedings of the KIEE Conference
    • /
    • 1992.07a
    • /
    • pp.465-467
    • /
    • 1992
  • This paper presents a 3D vision system based on halftone image analysis. Every halftone image has its own surface normal vector for each surface patch. To classify the given 3D images, all patches on the 3D object are transformed into black/white halftones. First, we extract general learning patterns which represent the required slopes and their attributes. Next, we propose 3D segmentation by searching intensity, slope and density. An artificial neural network is found to be very suitable for this approach, because it has a powerful learning capability and is noise tolerant. In this study, the 3D shape is reconstructed using a pyramid model. Our results are evaluated to confirm the enhanced quality.

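As a rough companion to the halftoning step, the sketch below converts a grayscale patch to a black/white halftone with Floyd-Steinberg error diffusion; the paper does not specify its halftoning method, so this particular algorithm is an assumption. The dot density of the result is the kind of cue the learning stage would analyze.

```python
import numpy as np

def floyd_steinberg_halftone(gray):
    """Convert a grayscale patch (values in [0, 1]) to a binary halftone by
    error diffusion; dot density varies with intensity and, in turn, with the
    shading induced by the surface orientation of the patch."""
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            # Diffuse the quantization error to not-yet-visited neighbours.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

if __name__ == "__main__":
    patch = np.linspace(0.0, 1.0, 32).reshape(1, -1).repeat(32, axis=0)
    print(floyd_steinberg_halftone(patch).mean())   # dot density of the patch
```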

Development of Merging Algorithm between 3-D Objects and Real Image for Augmented Reality

  • Kang, Dong-Joong
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2002.10a
    • /
    • pp.100.5-100
    • /
    • 2002
  • A core technology for the implementation of Augmented Reality is a merging algorithm between 3-D objects of interest and real images. In this paper, we present a 3-D object recognition method to determine the viewing direction from the camera toward the object. This process is the starting point for merging real images and 3-D objects. Perspective projection between a camera and 3-dimensional objects defines a plane in 3-D space formed by a line in an image and the focal point of the camera. If no errors were introduced during image feature extraction and the 3-D models were perfect, then the model lines in 3-D space projecting onto this line in the image would lie exactly in this plane. This observa...

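The geometric constraint described in the abstract, that an image line and the camera's focal point define a plane which should contain the corresponding 3-D model line, can be sketched as follows; the coordinate conventions (camera at the origin, image plane at z = f) and the function names are assumptions for illustration.

```python
import numpy as np

def interpretation_plane(img_pt1, img_pt2, focal_length):
    """Plane through the camera centre and an image line segment.
    Image points are (u, v) in the camera frame; the plane passes through
    the origin, so it is fully described by its unit normal."""
    r1 = np.array([img_pt1[0], img_pt1[1], focal_length])
    r2 = np.array([img_pt2[0], img_pt2[1], focal_length])
    n = np.cross(r1, r2)
    return n / np.linalg.norm(n)

def line_to_plane_error(normal, model_pt1, model_pt2):
    """If the 3-D model line truly projects onto the image line, both of its
    endpoints lie in the interpretation plane and this residual is zero."""
    return max(abs(np.dot(normal, model_pt1)), abs(np.dot(normal, model_pt2)))

if __name__ == "__main__":
    n = interpretation_plane((0.1, 0.0), (0.3, 0.0), focal_length=1.0)
    err = line_to_plane_error(n, np.array([1.0, 0.0, 10.0]), np.array([3.0, 0.0, 10.0]))
    print(f"residual distance: {err:.6f}")    # 0 for a perfectly consistent line
```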

Object tracking algorithm through RGB-D sensor in indoor environment (실내 환경에서 RGB-D 센서를 통한 객체 추적 알고리즘 제안)

  • Park, Jung-Tak;Lee, Sol;Park, Byung-Seo;Seo, Young-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.248-249
    • /
    • 2022
  • In this paper, we propose a method for classifying and tracking objects based on information about multiple users obtained using RGB-D cameras. The 3D information and color information acquired through the RGB-D camera are used to store information about each user. We propose a user classification and location tracking algorithm over the entire image that calculates the similarity between users in the current frame and the previous frame from the location and appearance information of each user obtained from the entire image.

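A minimal sketch of frame-to-frame user association using a combined position-and-appearance cost, assuming SciPy's Hungarian assignment; the paper only states that a similarity between users in consecutive frames is computed, so the cost weights and data layout here are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_users(prev_users, curr_users, w_pos=1.0, w_app=5.0):
    """Associate users across frames by combining a 3-D position distance
    (from the depth map) with a colour-histogram distance, then solving the
    assignment problem.

    Each user is a dict with keys 'pos' (3-vector) and 'hist' (normalised
    colour histogram); both names are assumptions for this sketch.
    """
    cost = np.zeros((len(prev_users), len(curr_users)))
    for i, p in enumerate(prev_users):
        for j, c in enumerate(curr_users):
            pos_d = np.linalg.norm(np.asarray(p["pos"]) - np.asarray(c["pos"]))
            app_d = 0.5 * np.sum(np.abs(np.asarray(p["hist"]) - np.asarray(c["hist"])))
            cost[i, j] = w_pos * pos_d + w_app * app_d
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))      # (previous index, current index) pairs
```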

Modified HOG Feature Extraction for Pedestrian Tracking (동영상에서 보행자 추적을 위한 변형된 HOG 특징 추출에 관한 연구)

  • Kim, Hoi-Jun;Park, Young-Soo;Kim, Ki-Bong;Lee, Sang-Hun
    • Journal of the Korea Convergence Society
    • /
    • v.10 no.3
    • /
    • pp.39-47
    • /
    • 2019
  • In this paper, we propose extracting modified Histogram of Oriented Gradients (HOG) features using background removal when tracking pedestrians in real time. HOG feature extraction suffers from slow processing speed because of its large computational cost, so background removal is applied to reduce computation and improve the tracking rate. Area removal is carried out using the S and V channels of the HSV color space so that features are not extracted from unnecessary areas. Because removing areas based on the average S and V values of the video can leave the input video too dark and cause object tracking to fail, histogram equalization is performed to prevent this case. Since fewer HOG features are extracted from the removed regions, clearer HOG features are obtained and both the processing speed and the tracking rate are improved. In the experiments, we used videos with many pedestrians or a single pedestrian, videos with complicated backgrounds, and videos with severe camera shake. Compared with the existing HOG-SVM method, the proposed method improved the processing speed by 41.84% and reduced the error rate by 52.29%.
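
A hedged sketch of the background-removal-plus-HOG idea using OpenCV, assuming a mean-based threshold on the S and V channels and the standard 64x128 HOG window; the paper's exact thresholds and modified HOG layout are not reproduced.

```python
import cv2
import numpy as np

def masked_hog(frame_bgr):
    """Suppress low-saturation / low-value background, equalise brightness,
    then compute HOG on the remaining region.  The thresholds and the
    64x128 detection window are assumptions for illustration."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    s, v = hsv[:, :, 1], hsv[:, :, 2]
    # Keep pixels above the frame's mean S and V (background removal).
    mask = ((s > s.mean()) & (v > v.mean())).astype(np.uint8) * 255
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)            # avoid an overly dark masked image
    gray = cv2.bitwise_and(gray, gray, mask=mask)
    window = cv2.resize(gray, (64, 128))     # standard pedestrian window size
    hog = cv2.HOGDescriptor()
    return hog.compute(window)

if __name__ == "__main__":
    dummy = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)
    print(masked_hog(dummy).shape)
```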

Video Scene Detection using Shot Clustering based on Visual Features (시각적 특징을 기반한 샷 클러스터링을 통한 비디오 씬 탐지 기법)

  • Shin, Dong-Wook;Kim, Tae-Hwan;Choi, Joong-Min
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.47-60
    • /
    • 2012
  • Video data comes in an unstructured and complex form. As the importance of efficient management and retrieval of video data increases, studies on video parsing based on the visual features contained in the video content have been carried out to reconstruct video data into a meaningful structure. Early studies on video parsing focused on splitting video data into shots, but detecting shot boundaries defined only by physical boundaries does not consider the semantic association of the video data. Recently, studies that use clustering methods to organize semantically associated video shots into video scenes, defined by semantic boundaries, have been actively pursued. Previous studies on video scene detection try to detect scenes by applying clustering algorithms to a similarity measure between shots that depends mainly on color features. However, correctly identifying a shot or scene and detecting gradual transitions such as dissolves, fades and wipes is difficult, because the color features of video data contain noise and change abruptly when an unexpected object intervenes.
In this paper, to solve these problems, we propose the Scene Detector using Color histogram, corner Edge and Object color histogram (SDCEO), which clusters similar shots organizing the same event based on visual features including the color histogram, the corner edge and the object color histogram in order to detect video scenes. The SDCEO is notable in that it uses the edge feature together with the color feature and, as a result, effectively detects gradual transitions as well as abrupt transitions. The SDCEO consists of the Shot Bound Identifier and the Video Scene Detector. The Shot Bound Identifier comprises a Color Histogram Analysis step and a Corner Edge Analysis step. In the Color Histogram Analysis step, SDCEO uses the color histogram feature to organize shot boundaries. The color histogram, recording the percentage of each quantized color among all pixels in a frame, is chosen for its good performance, as also reported in other work on content-based image and video analysis. To organize shot boundaries, SDCEO joins associated sequential frames into shot boundaries by measuring the similarity of the color histogram between frames. In the Corner Edge Analysis step, SDCEO identifies the final shot boundaries by comparing the corner edge feature between the last frame of the previous shot boundary and the first frame of the next shot boundary. In the Key-frame Extraction step, SDCEO compares each frame with all frames, measures the similarity using the histogram Euclidean distance, and then selects the frame most similar to all frames contained in the same shot boundary as the key-frame. The Video Scene Detector clusters associated shots organizing the same event by applying hierarchical agglomerative clustering to the visual features, including the color histogram and the object color histogram. After detecting video scenes, SDCEO organizes the final video scenes by repeated clustering until the similarity distance between shot boundaries is less than the threshold h.
In this paper, we construct a prototype of SDCEO and carry out experiments with manually constructed baseline data; the experimental results, a precision of 93.3% for shot boundary detection and 83.3% for video scene detection, are satisfactory.
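
Only the color-histogram part of the Shot Bound Identifier is sketched below, assuming OpenCV's histogram correlation as the similarity measure and a fixed threshold; the corner-edge analysis, key-frame extraction and hierarchical scene clustering of SDCEO are not reproduced.

```python
import cv2

def detect_shot_boundaries(video_path, threshold=0.5, bins=32):
    """Flag a shot boundary whenever the colour-histogram similarity between
    consecutive frames drops below a threshold."""
    cap = cv2.VideoCapture(video_path)
    boundaries, prev_hist, index = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Quantized 3-D colour histogram of the frame.
        hist = cv2.calcHist([frame], [0, 1, 2], None, [bins] * 3, [0, 256] * 3)
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            similarity = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
            if similarity < threshold:
                boundaries.append(index)      # abrupt change between frames
        prev_hist, index = hist, index + 1
    cap.release()
    return boundaries
```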

A Blocking Algorithm of a Target Object with Exposed Privacy Information (개인 정보가 노출된 목표 객체의 블로킹 알고리즘)

  • Jang, Seok-Woo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.20 no.4
    • /
    • pp.43-49
    • /
    • 2019
  • The wired and wireless Internet is a useful window through which various types of media data can easily be acquired. On the other hand, the public can also easily obtain media data containing objects whose personal information is exposed, which is a social problem. In this paper, we propose a method to robustly detect a target object with exposed personal information using a learning algorithm and to effectively block the detected target object area. In the proposed method, only the target object containing the personal information is detected using a neural network-based learning algorithm. Then, a grid-like mosaic is created and overlaid on the target object area detected in the previous step, thereby effectively blocking the object area containing the personal information. Experimental results show that the proposed algorithm robustly detects the object area in which personal information is exposed and effectively blocks the detected area through mosaic processing. The object blocking method presented in this paper is expected to be useful in many applications related to computer vision.
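
A minimal sketch of the mosaic-blocking step with OpenCV, assuming the target region comes from the preceding neural-network detector as a bounding box; the detector itself is not reproduced.

```python
import cv2
import numpy as np

def mosaic_region(image, box, block=16):
    """Overlay a grid-like mosaic on a detected target region.
    `box` is an assumed (x, y, w, h) bounding box from the detection step."""
    x, y, w, h = box
    roi = image[y:y + h, x:x + w]
    # Downscale then upscale with nearest-neighbour interpolation to
    # produce the blocky mosaic effect over the sensitive region.
    small = cv2.resize(roi, (max(1, w // block), max(1, h // block)),
                       interpolation=cv2.INTER_LINEAR)
    image[y:y + h, x:x + w] = cv2.resize(small, (w, h),
                                         interpolation=cv2.INTER_NEAREST)
    return image

if __name__ == "__main__":
    img = (np.random.rand(200, 200, 3) * 255).astype(np.uint8)
    blocked = mosaic_region(img, (50, 50, 80, 80))
    print(blocked.shape)
```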