• Title/Summary/Keyword: Adjacent Object


A Basic Study on the Fire Flame Extraction of Non-Residential Facilities Based on Core Object Extraction (핵심 객체 추출에 기반한 비주거 시설의 화재불꽃 추출에 관한 기초 연구)

  • Park, Changmin
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.13 no.4
    • /
    • pp.71-79
    • /
    • 2017
  • Recently, fire-watching and dangerous-substance monitoring systems have been developed to enhance various aspects of fire-related security. Fire flame extraction is generally assumed to play a very important role in such monitoring systems. In this study, we propose a fire flame extraction method for non-residential facilities based on core object extraction in an image. A core object is defined as a comparatively large object at the center of the image. First, the input image and a reduced-resolution version of it are segmented. Segmented regions are classified as outer or inner regions: an outer region is adjacent to the boundaries of the image, and the remaining regions are inner regions. Core object regions and core background regions are then selected from the inner and outer regions, respectively. Core object regions are the representative regions of the object and are selected using information about region size and location. Each inner region is classified as foreground or background by comparing the values of its color-histogram intersection with the core object region and with the core background region. Finally, the extracted core object region is determined to be the fire flame object in the image. Through experiments, we find that the method provides a basic measure for responding effectively and quickly to fires in non-residential facilities.
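
The classification step in this abstract compares color-histogram intersections of each inner region against the core object and core background regions. Below is a minimal sketch of that comparison, assuming normalized 3-D color histograms; the helper names and bin count are illustrative and not taken from the paper.

```python
import numpy as np

def color_histogram(pixels, bins=8):
    """Normalized 3-D color histogram of an (N, 3) array of RGB pixels (hypothetical helper)."""
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins), range=((0, 256),) * 3)
    return hist / max(hist.sum(), 1)

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]: sum of element-wise minima of two normalized histograms."""
    return np.minimum(h1, h2).sum()

def classify_inner_region(region_pixels, core_object_pixels, core_background_pixels):
    """Label an inner region as foreground if its histogram is more similar to the
    core object region than to the core background region, as described in the abstract."""
    h_region = color_histogram(region_pixels)
    sim_object = histogram_intersection(h_region, color_histogram(core_object_pixels))
    sim_background = histogram_intersection(h_region, color_histogram(core_background_pixels))
    return "foreground" if sim_object >= sim_background else "background"
```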

Efficient 3D Scene Labeling using Object Detectors & Location Prior Maps (물체 탐지기와 위치 사전 확률 지도를 이용한 효율적인 3차원 장면 레이블링)

  • Kim, Joo-Hee;Kim, In-Cheol
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.11
    • /
    • pp.996-1002
    • /
    • 2015
  • In this paper, we present an effective system for 3D scene labeling of objects from RGB-D videos. Our system uses a Markov Random Field (MRF) over a voxel representation of the 3D scene. In order to estimate the correct label of each voxel, the probabilistic graphical model integrates scores from sliding-window object detectors as well as from object location prior maps. Both the object detectors and the location prior maps are pre-trained on manually labeled RGB-D images. The model additionally incorporates geometric constraints between adjacent voxels in the label estimation. We show excellent experimental results on the RGB-D Scenes Dataset built by the University of Washington, in which each indoor scene contains tabletop objects.
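
The label estimation described here combines detector scores, location priors, and constraints between adjacent voxels. The following toy energy sketches that combination; the Potts-style pairwise term, the log-product unary, and the weighting are assumptions for illustration, not the paper's exact MRF.

```python
import numpy as np

def voxel_labeling_energy(labels, detector_scores, location_priors, adjacency, w_pair=1.0):
    """Toy MRF energy for voxel labeling (hypothetical form, not the paper's exact model).

    labels          : (V,) int label per voxel
    detector_scores : (V, L) sliding-window detector score per voxel and label
    location_priors : (V, L) prior probability of each label at each voxel location
    adjacency       : list of (i, j) index pairs of adjacent voxels
    """
    V = len(labels)
    # Unary term: negative log of detector score times location prior for the chosen label.
    unary = -np.log(detector_scores[np.arange(V), labels] *
                    location_priors[np.arange(V), labels] + 1e-9).sum()
    # Pairwise Potts-style term: penalize differing labels on adjacent voxels.
    pairwise = w_pair * sum(labels[i] != labels[j] for i, j in adjacency)
    return unary + pairwise
```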

Region-Based Video Object Extraction Using Potential of Frame-Difference Energies (프레임차 에너지의 전위차를 이용한 영역 기반의 비디오 객체 추출)

  • 곽종인;김남철
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.27 no.3A
    • /
    • pp.268-275
    • /
    • 2002
  • This paper proposes a region-based segmentation algorithm for extracting a video object by using a cost based on the potential of frame-difference energies. In the first step, a region-based segmentation using spatial intensity finely segments each frame into a partition of homogeneous regions so that no region contains the contour of the video object. This fine partition is used as the initial partition for the second step, spatio-temporal segmentation. In spatio-temporal segmentation, a homogeneity cost is computed for each pair of adjacent regions; it reflects the potential between the frame-difference energy on their common contour and the frame-difference energy of the lower-potential region of the two. The pair of adjacent regions with the minimal cost is then found, and its two regions are merged, which updates the partition. The merging is performed recursively until only contours with frame-difference energies of high potential remain. In the final post-processing step, the video object is extracted by removing the contours inside the object.
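
A minimal sketch of this greedy merging loop is given below, assuming a user-supplied `pair_cost` that stands in for the frame-difference-energy potential cost and a stopping threshold for high-potential contours; both are placeholders rather than the paper's exact criterion.

```python
def merge_regions(regions, adjacency, pair_cost, stop_cost):
    """Greedy region merging: repeatedly merge the pair of adjacent regions with the
    lowest cost until every remaining cost exceeds a stopping threshold."""
    regions = {i: set(r) for i, r in enumerate(regions)}        # region id -> pixel set
    pairs = {frozenset(p) for p in adjacency}                   # unordered adjacent pairs
    while pairs:
        best = min(pairs, key=lambda p: pair_cost(*(regions[k] for k in p)))
        i, j = tuple(best)
        if pair_cost(regions[i], regions[j]) > stop_cost:
            break                                               # only high-potential contours remain
        regions[i] |= regions.pop(j)                            # merge region j into region i
        pairs = {frozenset(i if k == j else k for k in p)       # redirect j's adjacencies to i
                 for p in pairs if p != best}
        pairs = {p for p in pairs if len(p) == 2}               # drop degenerate self-pairs
    return list(regions.values())
```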

Video Object Segmentation with Weakly Temporal Information

  • Zhang, Yikun;Yao, Rui;Jiang, Qingnan;Zhang, Changbin;Wang, Shi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.3
    • /
    • pp.1434-1449
    • /
    • 2019
  • Video object segmentation is a significant task in computer vision, but its performance is not yet very satisfactory. A method of video object segmentation using weakly temporal information is presented in this paper. Motivated by the observation that object motion is a continuous, smooth process and that an object's appearance changes little between adjacent frames of a video sequence, we use a feed-forward architecture with motion estimation to predict the mask of the current frame. We extend the input with an additional mask channel for the previous frame's segmentation result: the mask of the previous frame is processed and fed into the expanded channel, and the temporal feature of the object is then extracted and fused with other feature maps to generate the final mask. In addition, we introduce multi-mask guidance to improve the stability of the model, and we enhance segmentation performance by further training with the masks already obtained. Experiments show that our method achieves competitive results for single-object segmentation on DAVIS-2016 compared with several state-of-the-art algorithms.
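
The idea of feeding the previous frame's mask as an additional input channel can be sketched as follows in PyTorch. The tiny backbone, channel counts, and thresholded propagation loop are illustrative assumptions; they are not the network, motion estimation, or multi-mask guidance used in the paper.

```python
import torch
import torch.nn as nn

class MaskGuidedSegNet(nn.Module):
    """Toy feed-forward segmenter with an extra mask channel (illustrative only).
    The RGB frame (3 channels) is concatenated with the previous frame's mask (1 channel)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(16, 1, kernel_size=1)   # per-pixel mask logits

    def forward(self, frame, prev_mask):
        x = torch.cat([frame, prev_mask], dim=1)      # (N, 3+1, H, W): weak temporal cue
        return torch.sigmoid(self.head(self.backbone(x)))

# Usage sketch (hypothetical names): propagate masks frame by frame through a sequence.
# net = MaskGuidedSegNet()
# mask = first_frame_mask
# for frame in video_frames:
#     mask = (net(frame, mask) > 0.5).float()
```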

Volume Measurement Method for Object on Pixel Area Basis through Depth Image (깊이 영상을 통한 화소 단위 물체 부피 측정 방법)

  • Ji-hwan Kim;Soon-kak Kwon
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.29 no.1
    • /
    • pp.125-133
    • /
    • 2024
  • In this paper, we propose a method for measuring the volume of an object from a depth image. The object volume is measured by calculating the object's height and width in actual units from the depth image. The object area is detected from the difference between the captured and background depth images. The volume of each 2×2 pixel area, formed by four adjacent pixels, is measured using the depth information associated with each pixel, and the object volume is the sum of the volumes over all 2×2 areas within the object area. In simulations, the average measurement error of the object volume is 2.1% when the distance from the camera is 60 cm.
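
A sketch of the per-cell volume computation follows, under a simple pinhole-camera assumption (focal lengths fx, fy in pixels); the exact unit conversion in the paper may differ. Each 2×2 group of adjacent pixels contributes a prism whose height is the mean of the four pixel heights and whose footprint is estimated from the depth.

```python
import numpy as np

def object_volume(depth, background_depth, fx, fy, object_mask):
    """Approximate object volume from a depth image (a sketch, not the paper's exact method).
    Pixel heights are the background depth minus the captured depth; the real-world footprint
    of the cell between four adjacent pixel centers is estimated from the depth and focal lengths."""
    height = np.where(object_mask, background_depth - depth, 0.0)   # per-pixel object height
    H, W = depth.shape
    volume = 0.0
    for r in range(H - 1):
        for c in range(W - 1):
            if not object_mask[r:r + 2, c:c + 2].any():
                continue
            d = depth[r:r + 2, c:c + 2].mean()
            cell_area = (d / fx) * (d / fy)                  # footprint of one pixel cell at depth d
            volume += height[r:r + 2, c:c + 2].mean() * cell_area   # prism volume of the 2x2 cell
    return volume
```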

Multiple Object Tracking using Color Invariants (색상 불변값을 이용한 물체 괘적 추적)

  • Choo, Moon Won;Choi, Young Mie;Hong, Ki-Cheon
    • Proceedings of the Korea Multimedia Society Conference
    • /
    • 2002.11b
    • /
    • pp.101-109
    • /
    • 2002
  • In this paper, a multiple object tracking system for a known environment is proposed. It extracts moving areas corresponding to objects in video sequences and detects the tracks of moving objects. Color-invariant co-occurrence matrices are exploited to extract plausible object blocks and the correspondences between adjacent video frames. Measures of class separability derived from the features of the co-occurrence matrices are used to improve tracking performance. Experimental results are presented.
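
A generic gray-level co-occurrence matrix for a single pixel offset is sketched below; the paper builds such matrices from color-invariant values and then derives class-separability measures from their features, which are not reproduced here.

```python
import numpy as np

def cooccurrence_matrix(gray, dx=1, dy=0, levels=16):
    """Co-occurrence matrix of quantized values for one pixel offset (dx, dy)
    (a generic sketch; the paper uses color-invariant values rather than raw gray levels)."""
    q = (gray.astype(np.float64) / 256.0 * levels).astype(int)      # quantize to `levels` bins
    m = np.zeros((levels, levels))
    H, W = q.shape
    for r in range(max(0, -dy), H - max(0, dy)):
        for c in range(max(0, -dx), W - max(0, dx)):
            m[q[r, c], q[r + dy, c + dx]] += 1                      # count value pairs at the offset
    return m / max(m.sum(), 1)                                      # normalize to joint probabilities
```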


Recommendation Technique using Social Network in Internet of Things Environment (사물인터넷 환경에서 소셜 네트워크를 기반으로 한 정보 추천 기법)

  • Kim, Sungrim;Kwon, Joonhee
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.11 no.1
    • /
    • pp.47-57
    • /
    • 2015
  • Recently, the Internet of Things (IoT) has become popular in research and development across many areas. The IoT forms a new intelligent network between things, between things and persons, and between persons themselves. Social network service technology is still in its infancy, but it has many benefits: adjacent users in a social network tend to trust each other more than random pairs of users do. In this paper, we propose a recommendation technique based on social networks in an Internet of Things environment. We review previous research on information recommendation, the IoT, and the social IoT. We propose SIoT_P (Social IoT Prediction), which combines social relationships with item-based collaborative filtering, and SR (Social Relationship), which uses four social relationships (Ownership Object Relationship, Co-Location Object Relationship, Social Object Relationship, Parental Object Relationship). We also describe a recommendation scenario using the proposed method.
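
A hypothetical sketch of how an item-based collaborative-filtering score might be blended with a social-relationship score SR follows; the blending formula, the `alpha` weight, and the helper names are assumptions for illustration and are not claimed to be the paper's SIoT_P definition.

```python
import numpy as np

def item_based_prediction(ratings, target_user, target_item, item_similarity):
    """Standard item-based collaborative-filtering prediction (generic form)."""
    rated = [j for j, r in enumerate(ratings[target_user]) if r > 0 and j != target_item]
    sims = np.array([item_similarity[target_item, j] for j in rated])
    vals = np.array([ratings[target_user][j] for j in rated])
    return float(sims @ vals / sims.sum()) if sims.sum() > 0 else 0.0

def siot_prediction(cf_score, sr_score, alpha=0.5):
    """Hypothetical blend of the CF score with a social-relationship score SR, which the
    abstract derives from ownership, co-location, social, and parental object relationships."""
    return alpha * cf_score + (1 - alpha) * sr_score
```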

Simplified Representation of Image Contour

  • Yoo, Suk Won
    • International Journal of Advanced Culture Technology
    • /
    • v.6 no.4
    • /
    • pp.317-322
    • /
    • 2018
  • We apply edge detection to the input image to extract all the edges of the object and then select only the edges that form the object's outline. By examining the positional relations between the pixels composing the outline, a simplified version of the outline is generated by removing unnecessary pixels while preserving the connectivity of the outline. For each pixel on the outline, its direction is calculated from its positional relation with the next pixel. Consecutive pixels with the same direction are then grouped and replaced with a single line segment instead of individual points. Among the line segments composing the outline, any segment shorter than a predefined minimum acceptable length is removed by merging it into one of the adjacent line segments. As a result, this process yields an outline composed only of line segments longer than a certain length.
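
A minimal sketch of the grouping and merging steps described in this abstract follows; the Manhattan segment length and the merge-into-predecessor policy are simplifying assumptions.

```python
def simplify_outline(outline, min_length=5):
    """Simplify an ordered outline of (x, y) pixels: consecutive pixels with the same step
    direction become one line segment, and segments shorter than `min_length` are merged
    into the preceding segment (one simple merging policy)."""
    # 1. Group consecutive pixels that move in the same direction.
    segments = []
    for i in range(1, len(outline)):
        direction = (outline[i][0] - outline[i - 1][0], outline[i][1] - outline[i - 1][1])
        if segments and segments[-1]["dir"] == direction:
            segments[-1]["end"] = outline[i]          # extend the current segment
        else:
            segments.append({"start": outline[i - 1], "end": outline[i], "dir": direction})
    # 2. Merge too-short segments into their predecessor.
    merged = []
    for seg in segments:
        length = abs(seg["end"][0] - seg["start"][0]) + abs(seg["end"][1] - seg["start"][1])
        if merged and length < min_length:
            merged[-1]["end"] = seg["end"]            # absorb the short segment
        else:
            merged.append(seg)
    return [(s["start"], s["end"]) for s in merged]
```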

Oriented object detection in satellite images using convolutional neural network based on ResNeXt

  • Asep Haryono;Grafika Jati;Wisnu Jatmiko
    • ETRI Journal
    • /
    • v.46 no.2
    • /
    • pp.307-322
    • /
    • 2024
  • Most object detection methods use horizontal bounding boxes, which cause problems between adjacent objects with arbitrary orientations and result in misaligned detections. Hence, the horizontal anchor should be replaced by a rotating anchor to determine oriented bounding boxes. A two-stage process of delineating a horizontal bounding box and then converting it into an oriented bounding box is inefficient. To improve detection, a box-boundary-aware vector can instead be estimated with a convolutional neural network. Specifically, we propose a ResNeXt101 encoder to overcome the weaknesses of the conventional ResNet, which becomes less effective as network depth and complexity increase. Owing to its cardinality, with a homogeneous multi-branch architecture and few hyperparameters, ResNeXt captures information better than ResNet. Experimental results demonstrate that our proposal detects oriented objects more accurately and faster than a baseline, achieving a mean average precision of 89.41% and an inference rate of 23.67 fps.
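
Box-boundary-aware vectors point from a box center to the midpoints of the four edges of the oriented box, so its corners can be recovered by simple vector sums. The sketch below illustrates that geometry only; it is not the paper's network or decoding code.

```python
import numpy as np

def corners_from_boundary_vectors(center, t, r, b, l):
    """Recover the four corners of an oriented bounding box from its center and the four
    vectors pointing to the midpoints of the top, right, bottom, and left edges
    (a geometric sketch of the box-boundary-aware-vector idea)."""
    c = np.asarray(center, dtype=float)
    t, r, b, l = (np.asarray(v, dtype=float) for v in (t, r, b, l))
    return np.stack([c + t + l,     # top-left corner
                     c + t + r,     # top-right corner
                     c + b + r,     # bottom-right corner
                     c + b + l])    # bottom-left corner

# Example: an axis-aligned 4x2 box centered at the origin.
# corners_from_boundary_vectors((0, 0), t=(0, 1), r=(2, 0), b=(0, -1), l=(-2, 0))
```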

Recognizing a polyhedron by network constraint analysis

  • Ishikawa, Seiji;Kubota, Mayumi;Nishimura, Hiroshi;Kato, Kiyoshi
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 1991.10b
    • /
    • pp.1591-1596
    • /
    • 1991
  • This paper describes a method of recognizing a polyhedron employing the notion of network constraint analysis. Typical difficulties in three-dimensional object recognition, other than shading, reflection, and hidden-line problems, include the case where the appearance of an object varies with the observation point and the case where the object to be recognized is occluded by other objects placed in front of it, resulting in incomplete information on the object shape. These difficulties can, however, be solved to a large extent by taking account of certain local constraints defined on a polyhedral shape. This paper assumes model-based vision employing an appearance-oriented model of a polyhedron, which is obtained by placing the polyhedron at the origin of a large sphere and observing it from various positions on the surface of the sphere. The model is represented by the sets of adjacent face pairs of the polyhedron observed from those positions. Since the shape of a projected face constrains that of its adjacent face, a local constraint relation exists between these faces. Each projected face of an unknown polyhedron in an acquired image is examined for matches with the faces in the model, producing network constraint relations between faces in the image and faces in the model. Taking the adjacency of faces into consideration, these network constraint relations are analyzed, and if the analysis finally yields a one-to-one match between the faces of the unknown polyhedron and those of a model, the unknown polyhedron is understood to be one of the memorized models placed in a certain posture. In the experiment performed, a polyhedron was observed from 320 regularly arranged points on a sphere to provide its appearance model, and a polyhedron that was arbitrarily postured, occluded, or subject to other difficulties was successfully recognized.
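
The constraint analysis can be pictured as pruning candidate face matches until the adjacency relations in the image are consistent with those in the appearance model. The sketch below is a simplified, arc-consistency-style illustration of that idea; the data structures and pruning rule are assumptions, not the paper's algorithm.

```python
def constrain_matches(candidates, image_adjacency, model_adjacency):
    """Prune face-match candidates by adjacency constraints (simplified sketch).

    candidates      : {image_face: set(model_faces)} initial matches from per-face shape comparison
    image_adjacency : set of (f1, f2) adjacent face pairs in the image
    model_adjacency : set of (m1, m2) adjacent face pairs in the appearance model
    """
    changed = True
    while changed:
        changed = False
        for f1, f2 in image_adjacency:
            # Keep a model face for f1 only if some candidate of f2 is adjacent to it in the model.
            keep = {m1 for m1 in candidates[f1]
                    if any((m1, m2) in model_adjacency or (m2, m1) in model_adjacency
                           for m2 in candidates[f2])}
            if keep != candidates[f1]:
                candidates[f1] = keep
                changed = True
    return candidates  # a unique surviving match per face identifies the model and its posture
```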
