• Title/Summary/Keyword: R-Object


A General-purpose Object-Relational Mapping Tool for E-Commerce Applications: Object Organizer™

  • 한상목;곽우섭;조규찬
    • Korea Intelligent Information Systems Society, Proceedings of the 2002 Spring Conference
    • pp.115-122, 2002
  • An O-R mapping tool makes it possible to implement e-commerce systems effectively, since such systems contain elements that differ between businesses and change over time. This paper introduces Object Organizer™, a general-purpose O-R mapping tool for e-commerce systems, and explains the approach used to design and implement it. Based on an analysis of existing O-R mapping tools, it was designed to improve ease of use and runtime performance, and the development processes and implementation styles of e-commerce systems were analyzed to extract the parts that can be generalized. In addition, an application domain that sufficiently reflects the requirements of e-commerce systems was selected so that concrete field requirements could be handled effectively. This paper presents a direction for implementing an O-R mapping tool that can be used generically for e-commerce systems and realizes it in Object Organizer™. To secure the tool's generality, existing products were analyzed, the implementation methods of e-commerce systems were analyzed and generalized, and the development efficiency of Object Organizer™ was evaluated by implementing an actual e-commerce application.
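The core O-R mapping idea the abstract describes can be sketched minimally: a class maps to a table and its attributes map to columns. The `Product` class and `to_insert_sql` helper below are illustrative names under that assumption, not details of Object Organizer™ itself.

```python
from dataclasses import dataclass, fields

@dataclass
class Product:
    id: int
    name: str
    price: float

def to_insert_sql(obj) -> str:
    """Build a parameterized INSERT statement from a dataclass instance:
    the class name becomes the table, the fields become the columns."""
    table = type(obj).__name__.lower()
    cols = [f.name for f in fields(obj)]
    placeholders = ", ".join("?" for _ in cols)
    return f"INSERT INTO {table} ({', '.join(cols)}) VALUES ({placeholders})"

sql = to_insert_sql(Product(1, "keyboard", 19.9))
```

A production mapping tool additionally handles relationships, caching, and vendor-specific SQL, which is where the usability and performance concerns discussed in the paper arise.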


Scale-aware Faster R-CNN for Caltech Pedestrian Detection

  • 바트후;주마벡;조근식
    • Korea Information Processing Society, Proceedings of the 2016 Fall Conference
    • pp.506-509, 2016
  • We present real-time pedestrian detection that exploits the accuracy of the Faster R-CNN network. Faster R-CNN has been shown to succeed at PASCAL VOC multi-object detection tasks, and its ability to operate on raw pixel input without the need to design special features is very appealing. In this work we therefore apply and adjust Faster R-CNN to single-object detection, namely pedestrian detection. The drawback of Faster R-CNN is its failure when objects are small. Previously, the small-object problem was addressed by a Scale-aware Network; we incorporate such a network into Faster R-CNN. This makes our method, Scale-aware Faster R-CNN (DF R-CNN), both fast and very accurate. We separate the Faster R-CNN network into two sub-networks, one for large objects and one for small objects. The resulting approach achieves a 28.3% average miss rate on the Caltech Pedestrian detection benchmark, which is competitive with the other best reported results.
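The scale-aware split described above can be reduced to a toy routing rule: each detection is scored by a sub-network specialized for its size. The threshold and function names below are illustrative assumptions; in the paper the sub-networks are convolutional branches inside Faster R-CNN, not string labels.

```python
SMALL_OBJECT_THRESHOLD = 50  # box height in pixels (assumed value)

def route_by_scale(box_height: float) -> str:
    """Hard routing: pick the sub-network that should score this proposal."""
    return "small" if box_height < SMALL_OBJECT_THRESHOLD else "large"

def fuse_scores(score_small: float, score_large: float, box_height: float,
                threshold: float = SMALL_OBJECT_THRESHOLD) -> float:
    """Soft alternative: blend the two sub-network scores by object height
    (weight 0 -> purely small-object subnet, 1 -> purely large-object subnet)."""
    w = min(max(box_height / (2 * threshold), 0.0), 1.0)
    return (1 - w) * score_small + w * score_large
```

Soft weighting is closer in spirit to scale-aware fusion schemes, since pedestrians near the threshold get contributions from both branches.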

Overview of Image-based Object Recognition AI Technology for Autonomous Vehicles

  • 임헌국
    • Journal of the Korea Institute of Information and Communication Engineering, Vol. 25, No. 8
    • pp.1117-1123, 2021
  • Object recognition means analyzing a given input image to determine the location and class of the objects it contains. One of the fields where object recognition technology is being actively applied is autonomous vehicles, and this paper surveys image-based object recognition AI technology for autonomous driving. Image-based object detection algorithms have recently converged on two approaches, single-stage detection and two-stage detection, and this paper analyzes and summarizes them. The strengths and weaknesses of the two approaches are presented, and the YOLO/SSD algorithms (single-stage) and the R-CNN/Faster R-CNN algorithms (two-stage) are analyzed. We hope this helps researchers select an algorithm suited to each object recognition application required for autonomous driving.
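The one-stage vs two-stage contrast surveyed above can be caricatured as a difference in call structure: one-stage detectors predict boxes and classes for every grid cell in a single pass, while two-stage detectors first keep only promising region proposals and then classify those. The functions below are schematic stand-ins, not real detectors; real pipelines run convolutional networks at each step.

```python
def one_stage_detect(cell_scores):
    """One pass: every grid cell directly emits a (box, class) prediction,
    as in YOLO/SSD; speed comes from skipping a proposal stage."""
    return [("box", "class") for _ in cell_scores]

def two_stage_detect(cell_scores, keep_top=3):
    """Stage 1 (region proposal network) keeps the top-scoring regions;
    stage 2 classifies only those proposals, as in Faster R-CNN."""
    proposals = sorted(cell_scores, reverse=True)[:keep_top]
    return [("box", "class") for _ in proposals]

cells = [0.1, 0.9, 0.8, 0.2, 0.7]
dense = one_stage_detect(cells)      # predictions for every cell
sparse = two_stage_detect(cells)     # predictions only for top proposals
```

The second head doing less work per image but depending on proposal quality is the essence of the accuracy/speed trade-off between the two families.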

Rend 3D R-tree: An Improved Index Structure for Moving Object Databases Based on the 3D R-tree

  • 임향초;임기욱;남지은;이경오
    • Korea Information Processing Society, Proceedings of the 2008 Fall Conference
    • pp.878-881, 2008
  • Indexing object trajectories is an important aspect of moving-object database management. This paper implements an optimized index structure named Rend 3D R-tree, based on the 3D R-tree. It introduces a time-period update method that reconstructs the MBRs of moving objects to decrease the dead space produced in the closed time dimension of the 3D R-tree, and then a rend method for indexing both current and historical data. Experimental results show that the given methods outperform the 3D R-tree and the LUR-tree in query processing.
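The dead-space problem that the time-period update method targets can be illustrated numerically: a 3D MBR whose time interval stays open until "now" covers far more volume than one whose end time is trimmed to the object's last report. The numbers and helper below are illustrative, not taken from the paper.

```python
def mbr_volume(x_range, y_range, t_range):
    """Volume of a 3D minimum bounding rectangle (space x space x time)."""
    return ((x_range[1] - x_range[0]) *
            (y_range[1] - y_range[0]) *
            (t_range[1] - t_range[0]))

# An object moved inside [0,10] x [0,10] between t=0 and t=2; current time is t=100.
open_volume = mbr_volume((0, 10), (0, 10), (0, 100))   # time interval left open
trimmed_volume = mbr_volume((0, 10), (0, 10), (0, 2))  # end time updated to last report
```

Everything between `trimmed_volume` and `open_volume` is dead space that inflates node overlap and query cost, which is what the MBR reconstruction step removes.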

Development of an Efficient 3D Object Recognition Algorithm for Robotic Grasping in Cluttered Environments

  • 송동운;이재봉;이승준
    • The Journal of Korea Robotics Society, Vol. 17, No. 3
    • pp.255-263, 2022
  • 3D object detection pipelines often incorporate RGB-based object detection methods such as YOLO, which detect object classes and bounding boxes from the RGB image. However, in complex environments where objects are heavily cluttered, bounding-box approaches may show degraded performance due to overlapping bounding boxes. Mask-based methods such as Mask R-CNN handle such situations better thanks to their detailed object masks, but they require much longer data-preparation time than bounding-box-based approaches. In this paper, we present a 3D object recognition pipeline that uses either the YOLO or the Mask R-CNN real-time object detection algorithm, a K-nearest clustering algorithm, a mask reduction algorithm, and finally a Principal Component Analysis (PCA) algorithm to efficiently detect the 3D poses of objects in a complex environment. Furthermore, we present an improved YOLO-based 3D object detection algorithm that uses a prioritized heightmap clustering algorithm to handle overlapping bounding boxes. The suggested algorithms were successfully used at the Artificial-Intelligence Robot Challenge (ARC) 2021 competition with excellent results.
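The final PCA step of the pipeline can be sketched in a minimal 2D form: the principal axis of a segmented point cluster gives the object's orientation. The paper's pipeline runs PCA on 3D point clouds; this planar version, using the closed-form eigenvector of a 2x2 covariance matrix, is an assumption-laden simplification chosen to stay self-contained.

```python
import math

def principal_axis_angle(points):
    """Orientation (radians) of the principal axis of a 2D point cluster,
    via the closed-form principal eigenvector of the 2x2 covariance matrix."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    cxx = sum((p[0] - mx) ** 2 for p in points) / n
    cyy = sum((p[1] - my) ** 2 for p in points) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # For a 2x2 symmetric covariance, the principal direction satisfies
    # tan(2*theta) = 2*cxy / (cxx - cyy).
    return 0.5 * math.atan2(2 * cxy, cxx - cyy)

diagonal_angle = principal_axis_angle([(0, 0), (1, 1), (2, 2), (3, 3)])
```

In 3D the same idea yields three eigenvectors, from which a grasp frame for the robot gripper can be assembled.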

Object Classification based on Weakly Supervised E2LSH and Saliency map Weighting

  • Zhao, Yongwei;Li, Bicheng;Liu, Xin;Ke, Shengcai
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 10, No. 1
    • pp.364-380, 2016
  • The most popular approach to object classification is based on the bag-of-visual-words model, which has several fundamental problems that restrict its performance, such as low time efficiency, the synonymy and polysemy of visual words, and the lack of spatial information between visual words. In view of this, an object classification method based on weakly supervised E2LSH and saliency-map weighting is proposed. First, E2LSH (Exact Euclidean Locality Sensitive Hashing) is employed to generate a group of weakly randomized visual dictionaries by clustering SIFT features of the training dataset, and the selection of hash functions is supervised, inspired by random-forest ideas, to reduce the randomness of E2LSH. Second, the graph-based visual saliency (GBVS) algorithm is applied to detect the saliency map of each image and weight the visual words according to the saliency prior. Finally, a saliency-map-weighted visual language model is applied to accomplish object classification. Experimental results on the PASCAL VOC 2007 and Caltech-256 datasets indicate that the distinguishability of objects is effectively improved and our method is superior to state-of-the-art object classification methods.
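A single E2LSH hash function has the well-known form h(v) = floor((a·v + b) / w), so feature vectors that are close along the projection direction tend to share a bucket. The sketch below fixes a, b, and w to illustrative values; in E2LSH proper, a is drawn from a Gaussian and b uniformly from [0, w), and the paper additionally supervises which hash functions are kept.

```python
import math

def e2lsh_hash(v, a, b, w):
    """Bucket index of vector v under projection a, offset b, bucket width w."""
    projection = sum(ai * vi for ai, vi in zip(a, v))
    return math.floor((projection + b) / w)

a, b, w = (1.0, 0.0), 0.5, 4.0
near_1 = e2lsh_hash((1.0, 2.0), a, b, w)  # vectors similar along the projection ...
near_2 = e2lsh_hash((1.5, 2.5), a, b, w)  # ... land in the same bucket
far = e2lsh_hash((10.0, 2.0), a, b, w)    # a distant vector lands elsewhere
```

Clustering SIFT descriptors by such buckets is what replaces the expensive k-means dictionary construction in the method above.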

Damage Assessment of Free-fall Dropped Object on Sub-seabed in Offshore Operation

  • Won, Jonghwa;Kim, Youngho;Park, Jong-Sik;Kang, Hyo-dong;Joo, YoungSeok;Ryu, Mincheol
    • Journal of Advanced Research in Ocean Engineering, Vol. 1, No. 4
    • pp.198-210, 2015
  • This paper presents the damage assessment of a free-fall dropped object on the seabed. The damage to a dropped object depends entirely on the relationship between the impact energy and the soil strength at the mudline. In this study, unexpected dropping scenarios were first constructed by varying the relevant range of the impact velocity, the structure geometry at the moment of impact, and the soil strength profile along the penetration depth. Theoretical damage assessments were then undertaken for a free-fall dropping event with a fixed final embedment depth for the structure. This paper also describes the results of a three-dimensional large-deformation finite element analysis undertaken for validation. The analyses were carried out using the coupled Eulerian-Lagrangian approach with a modified elastic-perfectly plastic Tresca soil model. The validation exercises for each dropping scenario showed good agreement, and the present numerical approach was capable of predicting the behavior of a free-fall dropped object.
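The impact-energy-versus-soil-strength balance described above can be sketched with a toy energy model: kinetic energy at the mudline is absorbed by a bearing resistance proportional to an undrained strength that grows linearly with depth (s(z) = s0 + k·z, resistance ≈ Nc·s·A). All parameter values, and the model itself, are illustrative assumptions; the paper relies on large-deformation finite element analysis, not this closed-form march.

```python
def embedment_depth(mass, velocity, area, s0, k, nc=9.0, dz=0.001):
    """March downward in dz steps until cumulative soil work absorbs the
    kinetic impact energy; returns the final embedment depth (m)."""
    energy = 0.5 * mass * velocity ** 2       # kinetic energy at the mudline (J)
    depth, absorbed = 0.0, 0.0
    while absorbed < energy:
        depth += dz
        strength = s0 + k * depth             # undrained strength at depth z (Pa)
        absorbed += nc * strength * area * dz # incremental bearing work (J)
    return depth

shallow = embedment_depth(mass=1000, velocity=2.0, area=1.0, s0=5e3, k=2e3)
deep = embedment_depth(mass=1000, velocity=6.0, area=1.0, s0=5e3, k=2e3)
```

Even this crude balance reproduces the qualitative trend the scenarios vary: higher impact velocity or weaker soil yields deeper embedment.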

Object Detection Based on Mask R-CNN from Infrared Camera Images

  • 송현철;강민식;김태은
    • Journal of Digital Contents Society, Vol. 19, No. 6
    • pp.1213-1218, 2018
  • Mask R-CNN, recently introduced in the vision field, provides a conceptually simple, flexible, and general framework for object instance segmentation. This paper proposes an algorithm that generates segmentation masks for the heated regions of heat-emitting object instances in thermal images acquired from an infrared camera, while efficiently locating the heated parts of objects in the image. Mask R-CNN extends Faster R-CNN by adding a branch that predicts an object mask in parallel with the existing branch for bounding-box recognition. It is simple to train, adds only a small overhead to Faster R-CNN, and generalizes easily to other tasks. In this study, we apply this R-CNN-based detection algorithm to infrared images to detect heat-emitting objects that cannot be distinguished in RGB images. Experimental results show that heated objects which plain Mask R-CNN could not discriminate were successfully detected.
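The mask branch described above outputs per-pixel probabilities that are then binarized (conventionally at 0.5) into the final segmentation mask. The toy 3x3 "thermal" probability map below is an illustrative stand-in for the network's output, not data from the paper.

```python
def binarize_mask(probs, threshold=0.5):
    """Turn soft per-pixel mask probabilities into a binary segmentation mask,
    as done when post-processing a Mask R-CNN mask head's output."""
    return [[1 if p >= threshold else 0 for p in row] for row in probs]

heat_probs = [
    [0.1, 0.2, 0.1],
    [0.3, 0.9, 0.8],
    [0.2, 0.7, 0.4],
]
mask = binarize_mask(heat_probs)
hot_pixels = sum(sum(row) for row in mask)  # area of the detected heated region
```

For thermal imagery the resulting mask delimits the heated region directly, which is what makes the per-pixel branch more informative than a bounding box here.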

The Separation of Time and Space Tree for Moving or Static Objects in a Limited Region

  • 윤종선;박현주
    • Journal of Information Technology Applications and Management, Vol. 12, No. 1
    • pp.111-123, 2005
  • Many indexing methods have been proposed to process moving objects efficiently. Among them, methods like the 3D R-tree treat the temporal and spatial domains in the same way. In practice, however, the two domains are better processed separately because they differ in character and unit. In this paper we deal in particular with limited regions such as indoor environments, where the spatial domain is bounded but the temporal domain keeps growing. We present a novel index structure for such regions, the STS-tree (Separation of Time and Space tree). The STS-tree is a hybrid structure consisting of an R-tree and a one-dimensional TB-tree. The R-tree component indexes static objects and spatial information such as the topography of the space; the TB-tree component indexes moving objects and temporal information.
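The space/time split can be shown structurally: entries without a timestamp are static and go to the spatial component, while timestamped samples go to the trajectory component. Plain lists stand in for the real R-tree and TB-tree, and all names here are illustrative.

```python
class STSIndex:
    """Toy hybrid index mirroring the STS-tree's routing rule."""

    def __init__(self):
        self.spatial = []      # stands in for the R-tree (static objects, topography)
        self.trajectory = []   # stands in for the TB-tree (moving-object samples)

    def insert(self, obj_id, position, timestamp=None):
        """Route by kind: entries carrying a timestamp are trajectory samples."""
        if timestamp is None:
            self.spatial.append((obj_id, position))
        else:
            self.trajectory.append((obj_id, position, timestamp))

idx = STSIndex()
idx.insert("wall", (0, 0))                 # static: indexed spatially, never grows
idx.insert("robot", (1, 2), timestamp=5)   # moving: appended to the growing time side
```

Because only the trajectory side grows with time, the bounded spatial side stays compact, which is the point of separating the two domains in a limited region.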


Object Recognition and Pose Estimation Based on Deep Learning for Visual Servoing

  • 조재민;강상승;김계경
    • The Journal of Korea Robotics Society, Vol. 14, No. 1
    • pp.1-7, 2019
  • Recently, smart factories have attracted much attention as a result of the 4th Industrial Revolution. Existing factory automation technologies are generally designed for simple repetition without vision sensors, and even small object assemblies still depend on manual work. To replace existing systems with new technologies such as bin picking and visual servoing, precision and real-time performance are essential. We therefore focused on these core elements, using a deep learning algorithm to detect and classify the target object in real time and to analyze its features. Although there are many good deep learning algorithms such as Mask R-CNN and Fast R-CNN, we chose the YOLO CNN because it runs in real time and combines the two tasks mentioned above. Then, from the line and interior features extracted from the target object, we obtain the final outline and estimate the object's pose.