• Title/Abstract/Keyword: Object-based model


정밀부품의 비접촉 자동검사기술 개발 (Development of Non-Contacting Automatic Inspection Technology of Precise Parts)

  • 이우송;한성현
    • 한국공작기계학회논문집 / Vol. 16, No. 6 / pp.110-116 / 2007
  • This paper presents a new technique for real-time recognition of the shapes and model numbers of parts based on an active vision approach. The main focus is to apply 3D object recognition to the non-contacting inspection of the shape and external form of precision parts based on pattern recognition. Many object recognition approaches exist in computer vision, and most of them recognize objects from a single given input image (passive vision). It is, however, hard to distinguish an object from model objects that look similar to one another. Recently, active vision has come to be regarded as a promising approach to building a robust object recognition system. The performance is illustrated by experiments on several parts and models.

서베일런스 네트워크에서 적응적 색상 모델을 기초로 한 실시간 객체 추적 알고리즘 (Real-Time Object Tracking Algorithm based on Adaptive Color Model in Surveillance Networks)

  • 강성관;이정현
    • 디지털융복합연구 / Vol. 13, No. 9 / pp.183-189 / 2015
  • This paper proposes an object tracking method for surveillance networks that uses the color information of images. The method performs object detection using an adaptive color model. Object contour detection plays an important role in applications such as object recognition. Experimental results demonstrate successful object detection even under various changes in the object's color and size. In applications that detect objects in real time, the shape of the color distribution can be identified even while large amounts of image data are being transmitted. Because the object's characteristic color information is frequently updated as colors change dynamically in the input image, the algorithm detects the object's tracking-region information within the tracking area and tracks only that object's motion. Through experiments, the paper shows that the proposed tracking algorithm is more robust than other methods under certain ideal conditions. (A rough sketch of a generic color-model tracker follows below.)
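The abstract stays at a high level; as one concrete illustration only, the snippet below sketches a generic adaptive color-model tracker built from OpenCV's histogram back-projection and CamShift. The video file, initial window, and adaptation rate are all hypothetical, and this is not the authors' actual algorithm.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("surveillance.mp4")    # hypothetical input video
ok, frame = cap.read()

# Initial tracking window (x, y, w, h); the values are placeholders
track_window = (200, 150, 80, 120)
x, y, w, h = track_window
roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)

# Hue histogram serves as the target's color model
roi_hist = cv2.calcHist([roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
alpha = 0.1                                   # model adaptation rate (assumption)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # Shift the window toward the mode of the back-projection (object location)
    _, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    x, y, w, h = track_window
    if w > 0 and h > 0:
        # Adapt the color model with the newly tracked region
        new_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
        new_hist = cv2.calcHist([new_roi], [0], None, [180], [0, 180])
        cv2.normalize(new_hist, new_hist, 0, 255, cv2.NORM_MINMAX)
        roi_hist = (1 - alpha) * roi_hist + alpha * new_hist
```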

Image Processing-based Object Recognition Approach for Automatic Operation of Cranes

  • Zhou, Ying;Guo, Hongling;Ma, Ling;Zhang, Zhitian
    • 국제학술발표논문집 / The 8th International Conference on Construction Engineering and Project Management / pp.399-408 / 2020
  • The construction industry is suffering from aging workers, frequent accidents, and low productivity. With the rapid development of information technologies in recent years, automatic construction, especially automatic cranes, is regarded as a promising solution for these problems and is attracting more and more attention. In practice, however, limited by the complexity and dynamics of the construction environment, manual inspection, which is time-consuming and error-prone, is still the only way to recognize the search object for crane operation. To solve this problem, an image-processing-based automated object recognition approach is proposed in this paper, which fuses Convolutional-Neural-Network (CNN)-based and traditional object detection. The search object is first extracted from the background by a trained Faster R-CNN. Then, through a series of image-processing steps including Canny edge detection, the Hough transform, and endpoint clustering analysis, the vertices of the search object are determined so that it can be located uniquely in 3D space. Finally, the features of the search object (e.g., centroid coordinate, size, and color) are extracted for further recognition. The approach was implemented in OpenCV, and the prototype was written in Microsoft Visual C++. The proposed approach shows great potential for the automatic operation of cranes; further research and more extensive field experiments will follow. (A rough sketch of the image-processing stage follows below.)
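As an illustration of the traditional image-processing stage described above (Canny, Hough, endpoint clustering, feature extraction), the Python/OpenCV sketch below shows one plausible arrangement. The input crop, the thresholds, and the crude merge-radius clustering are assumptions; the paper's prototype was written in Microsoft Visual C++ and its exact steps are not given in the abstract.

```python
import cv2
import numpy as np

# Crop of the search object, assumed to come from a trained Faster R-CNN detector
crop = cv2.imread("object_crop.png")          # hypothetical file
gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)

# 1. Canny edge detection (thresholds are assumptions)
edges = cv2.Canny(gray, 50, 150)

# 2. Probabilistic Hough transform to get line segments
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                        minLineLength=30, maxLineGap=10)

# 3. Crude "endpoint clustering": merge segment endpoints that lie close together
endpoints = lines.reshape(-1, 2) if lines is not None else np.empty((0, 2))
vertices = []
for p in endpoints:
    for v in vertices:
        if np.linalg.norm(p - v) < 10:        # 10 px merge radius (assumption)
            break
    else:
        vertices.append(p.astype(float))

# 4. Simple features of the detected object: centroid, size, mean color
mask = np.zeros(gray.shape, np.uint8)
centroid = None
if vertices:
    cv2.fillConvexPoly(mask, np.array(vertices, np.int32), 255)
    centroid = np.array(vertices).mean(axis=0)
mean_color = cv2.mean(crop, mask=mask)[:3]
size_px = int(mask.sum() / 255)
print(centroid, size_px, mean_color)
```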


SPH 기반의 유체 및 용해성 강체에 대한 시각-촉각 융합 상호작용 시뮬레이션 (Real-time Simulation Technique for Visual-Haptic Interaction between SPH-based Fluid Media and Soluble Solids)

  • 김석열;박진아
    • 한국가시화정보학회지 / Vol. 15, No. 1 / pp.32-40 / 2017
  • Interaction between a fluid and a rigid object is frequently observed in everyday life. It is difficult to simulate such interaction, however, because the medium and the object have different representations. One particularly challenging issue is handling the visual deformation of the object while also rendering haptic feedback. In this paper, we propose a real-time simulation technique for multimodal interaction between particle-based fluids and soluble solids. We develop a dissolution behavior model for solids, discretized based on the idea of smoothed particle hydrodynamics (SPH), in which the changes in physical properties accompanying dissolution are immediately reflected in the object. The user can intervene in the simulation environment at any time by manipulating the solid object, with both visual and haptic feedback delivered on the fly. For immersive visualization, we also adopt a screen-space fluid rendering technique that balances realism and performance. (A toy sketch of the dissolution idea follows below.)
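The abstract only names the ingredients of the dissolution model; the sketch below is a toy illustration of the general SPH-style idea, in which each solid particle loses mass at a rate weighted by a smoothing kernel over nearby fluid particles. The kernel choice, constants, and data are assumptions, not the paper's model.

```python
import numpy as np

h = 0.1            # smoothing length (assumption)
k_dissolve = 0.5   # dissolution rate coefficient (assumption)
dt = 0.005         # time step

def poly6(r, h):
    """Standard poly6 SPH kernel (3D), evaluated on an array of distances."""
    w = np.zeros_like(r)
    m = r < h
    w[m] = 315.0 / (64.0 * np.pi * h**9) * (h**2 - r[m]**2) ** 3
    return w

def dissolve_step(solid_pos, solid_mass, fluid_pos):
    """One explicit step of mass transfer from solid particles to the fluid."""
    for i, p in enumerate(solid_pos):
        r = np.linalg.norm(fluid_pos - p, axis=1)
        contact = poly6(r, h).sum()           # how much fluid surrounds particle i
        dm = k_dissolve * contact * dt
        solid_mass[i] = max(solid_mass[i] - dm, 0.0)
    return solid_mass

# Toy data: 10 solid particles and 200 fluid particles in a unit cube
rng = np.random.default_rng(0)
solid_pos = rng.random((10, 3))
fluid_pos = rng.random((200, 3))
solid_mass = np.full(10, 1.0)
solid_mass = dissolve_step(solid_pos, solid_mass, fluid_pos)
```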

컴퓨터 비젼 방법을 이용한 3차원 물체 위치 결정에 관한 연구 (A Study on the Determination of 3-D Object's Position Based on Computer Vision Method)

  • 김경석
    • 한국생산제조학회지 / Vol. 8, No. 6 / pp.26-34 / 1999
  • This study presents an alternative method for determining an object's position based on computer vision. The approach develops a vision system model that defines the relationship between 3-D real space and the 2-D image plane. The model involves six bilinear view parameters, which are estimated from the relationship between camera-space locations and the real coordinates of known positions. Based on the parameters estimated for independent cameras, the position of an unknown object is obtained using a sequential estimation scheme that combines data for the unknown points from each camera's 2-D image plane. This vision control method is robust and reliable, overcoming difficulties of conventional research such as precise calibration of the vision sensor, exact kinematic modeling of the robot, and correct knowledge of the relative positions and orientations of the robot and CCD cameras. Finally, the developed method is tested experimentally by determining object positions in space with the computer vision system, and the results show that the presented method is precise and compatible. (An illustrative least-squares sketch follows below.)
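The six-view-parameter model and the sequential estimation scheme are not spelled out in the abstract; the sketch below illustrates the same general workflow with a plain affine camera model fitted by least squares from known 3-D/2-D correspondences and then used to recover an unknown point from two cameras. It is a stand-in, not the paper's formulation.

```python
import numpy as np

def fit_affine_camera(P3, P2):
    """Least-squares fit of an affine camera model [u, v] = [X, Y, Z, 1] @ A."""
    X = np.hstack([P3, np.ones((len(P3), 1))])        # (N, 4)
    A, *_ = np.linalg.lstsq(X, P2, rcond=None)        # (4, 2)
    return A

def triangulate(cams, uv_list):
    """Recover a 3-D point from its images in several estimated cameras."""
    rows, rhs = [], []
    for A, (u, v) in zip(cams, uv_list):
        # u = A[0,0]X + A[1,0]Y + A[2,0]Z + A[3,0], and similarly for v
        rows.append(A[:3, 0]); rhs.append(u - A[3, 0])
        rows.append(A[:3, 1]); rhs.append(v - A[3, 1])
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol

# Toy example: known calibration points viewed by two synthetic cameras
rng = np.random.default_rng(1)
pts3 = rng.random((20, 3)) * 100
A1_true, A2_true = rng.random((4, 2)), rng.random((4, 2))
H = np.hstack([pts3, np.ones((20, 1))])
cam1 = fit_affine_camera(pts3, H @ A1_true)
cam2 = fit_affine_camera(pts3, H @ A2_true)

target = np.array([30.0, 40.0, 50.0])
uv1 = np.append(target, 1.0) @ A1_true
uv2 = np.append(target, 1.0) @ A2_true
print(triangulate([cam1, cam2], [uv1, uv2]))          # close to the target point
```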


신경망을 이용한 차량 객체의 그림자 제거 (Cast-Shadow Elimination of Vehicle Objects Using Backpropagation Neural Network)

  • 정성환;이준환
    • 한국ITS학회 논문지 / Vol. 7, No. 1 / pp.32-41 / 2008
  • In vision-based video surveillance, moving objects are tracked by differencing the current frame against a background image built with a Gaussian Mixture Model (GMM). When tracking is performed on the binary image produced by thresholding, objects are often merged by their shadows rather than by actual object information. This paper proposes a method that removes shadows using a backpropagation neural network. The network was trained on feature values extracted from training images of object regions and cast-shadow regions taken from ten videos. The cast-shadow removal method is based on separating shadows from the regions presumed to be objects in the binary image, and its shadow-removal performance was higher than that of existing shadow-removal algorithms (SNP, SP, DNM1, DNM2, CNCC) by 16.2%, 38.2%, 28.1%, 22.3%, and 44.4%, respectively. (A rough sketch of the classification step follows below.)
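As a rough illustration of the classification step, the sketch below trains a small backpropagation MLP to label candidate foreground pixels as cast shadow or vehicle. The feature definitions, toy training values, and network size are invented for illustration and are not the paper's features or data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical per-pixel features for candidate foreground pixels, e.g.
# [brightness ratio to background, hue difference, saturation difference]
X_train = np.array([
    [0.55, 0.02, 0.03],   # darker, same chromaticity -> cast shadow
    [0.60, 0.01, 0.05],   # cast shadow
    [1.10, 0.30, 0.40],   # different color            -> vehicle object
    [0.90, 0.45, 0.25],   # vehicle object
])
y_train = np.array([0, 0, 1, 1])   # 0 = shadow, 1 = object

# Small backpropagation-trained MLP (layer size is an assumption)
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

def remove_shadow(features, fg_mask):
    """Drop shadow pixels from a thresholded foreground mask.

    features: (H, W, 3) per-pixel feature array, fg_mask: (H, W) boolean mask.
    """
    labels = clf.predict(features.reshape(-1, features.shape[-1]))
    return fg_mask & labels.reshape(fg_mask.shape).astype(bool)
```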


거리 기반 적응형 임계값을 활용한 강건한 3차원 물체 탐지 (Robust 3D Object Detection through Distance based Adaptive Thresholding)

  • 이은호;정민우;김종호;이경수;김아영
    • 로봇학회논문지 / Vol. 19, No. 1 / pp.106-116 / 2024
  • Ensuring robust 3D object detection is a core challenge for autonomous driving systems operating in urban environments. To tackle this issue, various 3D representations, including point clouds, voxels, and pillars, have been widely adopted, making use of LiDAR, camera, and radar sensors. These representations have improved 3D object detection performance, but real-world urban scenarios with unexpected situations can still lead to numerous false positives, posing a challenge for robust 3D models. This paper presents a post-processing algorithm that dynamically adjusts object detection thresholds based on the distance from the ego-vehicle. Conventional perception algorithms typically employ a single threshold in post-processing, yet 3D models perform well in detecting nearby objects but may exhibit suboptimal performance for distant ones. The proposed algorithm tackles this issue by employing adaptive thresholds based on the distance from the ego-vehicle, minimizing false negatives and reducing false positives. The results show performance enhancements across a range of scenarios, encompassing not only typical urban road conditions but also adverse weather. (A minimal sketch of the idea follows below.)
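A minimal sketch of the post-processing idea, distance-dependent score thresholds applied to 3D detections, is given below. The distance bands and threshold values are placeholders, not the paper's tuned parameters.

```python
import numpy as np

# Distance bands (meters) and score thresholds: near objects get a stricter
# threshold, far objects a lower one to keep recall. All values are assumptions.
BANDS = [(0.0, 20.0, 0.60),
         (20.0, 40.0, 0.45),
         (40.0, np.inf, 0.30)]

def adaptive_filter(boxes, scores):
    """boxes: (N, 7) [x, y, z, dx, dy, dz, yaw] in the ego frame; scores: (N,)."""
    dist = np.linalg.norm(boxes[:, :2], axis=1)       # range from the ego-vehicle
    keep = np.zeros(len(boxes), dtype=bool)
    for lo, hi, thr in BANDS:
        in_band = (dist >= lo) & (dist < hi)
        keep |= in_band & (scores >= thr)
    return boxes[keep], scores[keep]
```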

로봇시스템에서 작은 마커 인식을 하기 위한 사물 감지 어텐션 모델 (Small Marker Detection with Attention Model in Robotic Applications)

  • 김민재;문형필
    • 로봇학회논문지 / Vol. 17, No. 4 / pp.425-430 / 2022
  • As robots are considered one of the mainstream digital transformations, robots with machine vision have become a major area of study, providing the ability to check what robots see and to make decisions based on it. However, it is difficult to find small objects in an image, mainly because most visual recognition networks are convolutional neural networks that consider primarily local features. We therefore build a model that considers global as well as local features. In this paper, we propose a deep-learning method for detecting a small marker on an object, together with an algorithm that captures global features by combining the Transformer's self-attention technique with a convolutional neural network. We suggest a self-attention model with a new definition of Query, Key, and Value so that the model can learn global features, and a simplified formulation that removes the position vector and classification token, which make the model heavy and slow. Finally, we show that our model achieves higher mAP than the state-of-the-art model YOLOR. (A rough sketch of such an attention head follows below.)
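The sketch below shows one way such a simplified attention head could look in PyTorch: self-attention over CNN feature-map cells with no positional embedding and no classification token, added residually to the local features. Channel sizes, the backbone, and the exact Query/Key/Value definitions are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class GlobalAttentionHead(nn.Module):
    """Self-attention over CNN feature-map cells, without positional embeddings
    or a classification token. A minimal sketch, not the paper's architecture."""

    def __init__(self, in_ch=256, dim=64):
        super().__init__()
        self.q = nn.Conv2d(in_ch, dim, 1)
        self.k = nn.Conv2d(in_ch, dim, 1)
        self.v = nn.Conv2d(in_ch, in_ch, 1)

    def forward(self, feat):                           # feat: (B, C, H, W)
        B, C, H, W = feat.shape
        q = self.q(feat).flatten(2).transpose(1, 2)    # (B, HW, dim)
        k = self.k(feat).flatten(2)                    # (B, dim, HW)
        v = self.v(feat).flatten(2).transpose(1, 2)    # (B, HW, C)
        attn = torch.softmax(q @ k / q.shape[-1] ** 0.5, dim=-1)  # (B, HW, HW)
        out = (attn @ v).transpose(1, 2).reshape(B, C, H, W)
        return feat + out                              # residual: local + global

# Usage with a dummy backbone feature map
feat = torch.randn(1, 256, 20, 20)
print(GlobalAttentionHead()(feat).shape)               # torch.Size([1, 256, 20, 20])
```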

Adaptive Bayesian Object Tracking with Histograms of Dense Local Image Descriptors

  • Kim, Minyoung
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 16, No. 2 / pp.104-110 / 2016
  • Dense local image descriptors like SIFT are fruitful for capturing salient information about an image and have proven successful in various image-related tasks when formed into bag-of-words representations (i.e., histograms). In this paper we utilize these dense local descriptors for the object tracking problem. A notable aspect of our tracker is that, instead of adopting a point estimate for the target model, we account for data noise and model incompleteness by maintaining a distribution over plausible candidate models within a Bayesian framework. The target model is also updated adaptively by principled Bayesian posterior inference, which admits a closed form under our Dirichlet prior modeling. Empirical evaluations on several video datasets show that the proposed method yields more accurate tracking than baseline histogram-based trackers using the same types of features, and is often superior to appearance-based (visual) trackers. (A minimal sketch of the Dirichlet update follows below.)
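As an illustration of the closed-form Bayesian update the abstract refers to, the sketch below keeps a Dirichlet distribution over the target's descriptor histogram, scores candidate windows with a Dirichlet-multinomial likelihood, and updates the model by adding the accepted counts. The bin count and the scoring rule are assumptions rather than the paper's exact formulation.

```python
import numpy as np
from scipy.special import gammaln

K = 64                       # codebook size for the local-descriptor histogram
alpha = np.ones(K)           # Dirichlet prior over the target's histogram model

def log_score(candidate_counts, alpha):
    """Dirichlet-multinomial log-likelihood of a candidate window's histogram
    (the multinomial coefficient is omitted for simplicity)."""
    n = candidate_counts.sum()
    a0 = alpha.sum()
    return (gammaln(a0) - gammaln(a0 + n)
            + np.sum(gammaln(alpha + candidate_counts) - gammaln(alpha)))

def update(alpha, tracked_counts):
    """Closed-form posterior update after accepting a tracked window."""
    return alpha + tracked_counts

# Toy usage: pick the best of several candidate windows, then adapt the model
rng = np.random.default_rng(0)
candidates = rng.integers(0, 5, size=(10, K))
best = max(range(10), key=lambda i: log_score(candidates[i], alpha))
alpha = update(alpha, candidates[best])
```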

IoT Device Classification According to Context-aware Using Multi-classification Model

  • Zhang, Xu;Ryu, Shinhye;Kim, Sangwook
    • 한국멀티미디어학회논문지 / Vol. 23, No. 3 / pp.447-459 / 2020
  • The Internet of Things (IoT) paradigm has been flourishing for the last two decades, with researchers around the globe aiming to map every real-world object to a virtual counterpart. Consequently, the number of IoT devices is escalating exponentially, and this abrupt growth has created a major challenge: object classification. In order to classify devices comprehensively and accurately, this paper proposes a context-aware multi-classification model that classifies smart devices according to people's contexts. Because the classification features of contextual data from different contexts are difficult to extract, a deep learning algorithm is used to address this problem.