• Title/Summary/Keyword: 3D Objects


Optimal 3D Grasp Planning for unknown objects (임의 물체에 대한 최적 3차원 Grasp Planning)

  • 이현기;최상균;이상릉
    • Proceedings of the Korean Society of Precision Engineering Conference / 2002.05a / pp.462-465 / 2002
  • This paper deals with the problem of synthesizing stable and optimal grasps of unknown objects with a 3-finger hand. Previous robot grasp research has mainly analyzed either unknown 2D objects with a vision sensor or 3D objects restricted to cylindrical or hexahedral shapes. Extending that work, this paper proposes an algorithm that analyzes grasps of unknown 3D objects using a vision sensor. This is achieved in two steps. The first step builds a 3D geometrical model of the unknown object by stereo matching, a 3D computer vision technique. The second step finds the optimal grasping points. In this step, the 3-finger hand is chosen because it has the characteristics of a multi-finger hand and is easy to model. To find the optimal grasping points, a genetic algorithm is used, and an objective function minimizing the admissible force that the fingertips apply to the object is formulated. The algorithm is verified by computer simulation, in which the optimal grasping points of known objects at different angles are checked. (An illustrative sketch of the optimization step follows this entry.)

  • PDF
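
The optimization step described above can be illustrated with a small genetic-algorithm sketch in Python. The surface sampling, fitness proxy, and GA parameters below are illustrative assumptions, not the authors' actual objective function for admissible fingertip force.

```python
# Minimal genetic-algorithm sketch for choosing 3 grasp points on a sampled
# object surface. The surface points, fitness proxy, and GA parameters are
# illustrative assumptions, not the paper's formulation.
import math
import random

# Hypothetical object model: surface points sampled from a unit circle.
SURFACE = [(math.cos(t), math.sin(t)) for t in
           (2 * math.pi * k / 60 for k in range(60))]

def fitness(indices):
    """Proxy objective: prefer grasp points spread evenly around the object,
    which loosely corresponds to lower required fingertip forces."""
    pts = [SURFACE[i] for i in indices]
    cx = sum(p[0] for p in pts) / 3.0
    cy = sum(p[1] for p in pts) / 3.0
    # A centroid near the object's center indicates a balanced 3-finger grasp.
    return -(cx * cx + cy * cy)

def evolve(pop_size=30, generations=100, mutation_rate=0.2):
    pop = [random.sample(range(len(SURFACE)), 3) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = list({*a[:2], *b[1:]})          # simple crossover
            while len(child) < 3:                   # repair short candidates
                child.append(random.randrange(len(SURFACE)))
            if random.random() < mutation_rate:
                child[random.randrange(3)] = random.randrange(len(SURFACE))
            children.append(child[:3])
        pop = survivors + children
    return max(pop, key=fitness)

print("best grasp point indices:", evolve())
```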

Separation of the Occluding Object from the Stack of 3D Objects Using a 2D Image (겹쳐진 3차원 물체의 2차원 영상에서 가리는 물체의 구분기법)

  • 송필재;홍민철;한헌수
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.2 / pp.11-22 / 2004
  • Conventional algorithms for separating overlapped objects are mostly based on template matching, so their application domain is restricted to 2D objects and the processing time grows with the number of templates (object models). To solve these problems, this paper proposes a new approach for separating the occluding object from a stack of 3D objects using the relationships between surfaces, without any prior information about the objects. The proposed algorithm considers an object as a combination of surfaces, each of which consists of a set of boundary edges. Overlap of 3D objects appears as overlap of surfaces and thus as crossings of edges in the 2D image. Based on this observation, the types of edge crossings are classified, from which the types of overlap of the 3D objects can be identified. The relationships between surfaces are represented by an attributed graph in which the overlap types are encoded as relation values. Using the relation values, the surfaces belonging to the same object are discerned and the occluding object on top of the stack can be separated. The performance of the proposed algorithm has been demonstrated by experiments using overlapped images of 3D objects selected from standard industrial parts.
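
The surface-relationship idea can be sketched as a small attributed graph: surfaces are nodes, and relation values on the edges record how surfaces overlap or belong together. The relation labels and the grouping rule below are illustrative assumptions, not the paper's exact attribute scheme.

```python
# Sketch of the surface-relationship idea: surfaces are graph nodes, and an
# edge carries a relation value describing the overlap observed at edge
# crossings. Labels and grouping rule are illustrative assumptions.
from collections import defaultdict

# Hypothetical relation values: "same_object" links surfaces of one object,
# "occludes" records that one surface hides part of another.
relations = [
    ("S1", "S2", "same_object"),
    ("S2", "S3", "occludes"),
    ("S3", "S4", "same_object"),
]

def group_surfaces(relations):
    """Union surfaces connected by 'same_object' edges into object groups."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b, rel in relations:
        find(a), find(b)
        if rel == "same_object":
            parent[find(a)] = find(b)

    groups = defaultdict(list)
    for s in parent:
        groups[find(s)].append(s)
    return list(groups.values())

def occluding_objects(relations, groups):
    """An object is 'on top' if one of its surfaces occludes another surface."""
    occluders = {a for a, _, rel in relations if rel == "occludes"}
    return [g for g in groups if occluders & set(g)]

groups = group_surfaces(relations)
print("object groups:", groups)
print("occluding object(s):", occluding_objects(relations, groups))
```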

Applying differential techniques for 2D/3D video conversion to the objects grouped by depth information (2D/3D 동영상 변환을 위한 그룹화된 객체별 깊이 정보의 차등 적용 기법)

  • Han, Sung-Ho;Hong, Yeong-Pyo;Lee, Sang-Hun
    • Journal of the Korea Academia-Industrial cooperation Society / v.13 no.3 / pp.1302-1309 / 2012
  • In this paper, we propose applying differential techniques for 2D/3D video conversion to objects grouped by depth information. One problem in converting 2D images to 3D images with pixel-motion tracking is that objects that do not move between adjacent frames provide no depth information. This problem can be solved by applying a relative height cue only to objects that have no motion information between frames, after splitting the background from the objects and extracting depth information from inter-object motion vectors. With this technique, the background and every object obtain their own depth information. The proposed method is used to generate a depth map for producing 3D images with DIBR (Depth Image Based Rendering), and it is verified that objects with no movement between frames also receive depth information.
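
A minimal sketch of the per-object depth assignment described above: moving objects receive a depth derived from motion magnitude, while static objects fall back to a relative height cue. The object records and scaling constants are assumptions for illustration only.

```python
# Moving objects get a depth from motion-vector magnitude; static objects use
# a relative height cue (objects lower in the frame are assumed nearer).
# Object records and scaling constants are illustrative assumptions.

def assign_depth(objects, frame_height):
    """objects: list of dicts with 'motion' (pixels/frame) and
    'bottom_y' (y of the object's lowest pixel, 0 = top of frame)."""
    depths = []
    for obj in objects:
        if obj["motion"] > 0.0:
            # Larger apparent motion is treated as nearer (larger depth value).
            depth = min(255, int(obj["motion"] * 25))
        else:
            # Relative height cue: the lower the object sits, the nearer it is.
            depth = int(255 * obj["bottom_y"] / frame_height)
        depths.append(depth)
    return depths

objects = [
    {"motion": 6.0, "bottom_y": 300},   # moving object
    {"motion": 0.0, "bottom_y": 450},   # static object near the bottom
    {"motion": 0.0, "bottom_y": 120},   # static object near the top
]
print(assign_depth(objects, frame_height=480))  # e.g. [150, 239, 63]
```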

Neural Network Approach to Sensor Fusion System for Improving the Recognition Performance of 3D Objects (3차원 물체의 인식 성능 향상을 위한 감각 융합 신경망 시스템)

  • Dong Sung Soo;Lee Chong Ho;Kim Ji Kyoung
    • The Transactions of the Korean Institute of Electrical Engineers D / v.54 no.3 / pp.156-165 / 2005
  • Human beings recognize the physical world by integrating a great variety of sensory inputs, information acquired through their own actions, and their knowledge of the world, using a hierarchically parallel-distributed mechanism. In this paper, the authors propose a sensor fusion system that can recognize multiple 3D objects from 2D projection images and tactile information, focusing on improving the recognition performance for 3D objects. Unlike conventional object recognition systems that use an image sensor alone, the proposed method uses tactile sensors in addition to the visual sensor. A neural network is used to fuse the two sensory signals. Tactile signals are obtained from the reaction forces of the pressure sensors at the fingertips when unknown objects are grasped by a four-fingered robot hand. The experiments evaluate the recognition rate and the number of learning iterations for various objects. The merits of the proposed system are not only its high learning performance but also its reliability: the tactile information allows various objects to be recognized even when the visual signals are defective. The experimental results show that the proposed system can improve the recognition rate and reduce the learning time, verifying its effectiveness as a recognition scheme for 3D objects.
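
A minimal sketch of the fusion idea, assuming a simple feed-forward network: the visual feature vector and the fingertip pressure vector are concatenated and mapped to class scores. Layer sizes, features, and the randomly initialized weights are illustrative assumptions, not the trained network from the paper.

```python
# Visual features (from a 2D projection image) and tactile readings (fingertip
# pressures from a 4-fingered hand) are concatenated and fed to a small
# network producing class scores. Sizes and weights are assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_VISUAL, N_TACTILE, N_HIDDEN, N_CLASSES = 16, 4, 12, 5

# Randomly initialized weights stand in for a trained fusion network.
W1 = rng.normal(scale=0.1, size=(N_VISUAL + N_TACTILE, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(scale=0.1, size=(N_HIDDEN, N_CLASSES))
b2 = np.zeros(N_CLASSES)

def fuse_and_classify(visual_features, tactile_pressures):
    x = np.concatenate([visual_features, tactile_pressures])  # sensor fusion
    h = np.tanh(x @ W1 + b1)                                   # hidden layer
    logits = h @ W2 + b2
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()                                 # class scores

visual = rng.random(N_VISUAL)     # e.g. silhouette / moment features
tactile = rng.random(N_TACTILE)   # fingertip reaction forces
print("class probabilities:", np.round(fuse_and_classify(visual, tactile), 3))
```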

Study on collision processing among objects by 3D information of real objects extracted from a stereo type method in AR (가상현실에서 스테레오 타입 방식으로 추출한 실물 객체 3D 정보를 이용한 객체간 충돌처리 연구)

  • Jo, In-Kyeong;Park, Hwa-Jin
    • Journal of Digital Contents Society / v.11 no.2 / pp.243-251 / 2010
  • In this paper, 3D information of real objects is extracted from the images of two input devices, the objects are placed in a virtual space, and the result is projected onto the output video device. Using the position and interaction information of all 3D objects, collisions between them are detected and validated. Extracting the 3D information of real objects and handling collisions between interacting objects are among the most basic issues in augmented reality, so they must be addressed systematically. The proposed system therefore aims to increase realism and immersion for the user by validating collisions among all objects existing in the virtual space.
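
A minimal sketch of the collision-validation step, assuming each extracted object is reduced to an axis-aligned bounding box in the shared virtual space; the paper itself works from stereo-extracted 3D information, so the box representation is a simplification.

```python
# Each object is reduced to an axis-aligned bounding box in the shared virtual
# space and every pair is tested for overlap. The boxes are assumed stand-ins
# for the stereo-extracted 3D information.
from itertools import combinations

def boxes_overlap(a, b):
    """a, b: (min_xyz, max_xyz) tuples; True if the boxes intersect."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

objects = {
    "real_object_A": ((0.0, 0.0, 0.0), (1.0, 1.0, 1.0)),
    "virtual_object_B": ((0.5, 0.5, 0.5), (1.5, 1.5, 1.5)),
    "virtual_object_C": ((3.0, 0.0, 0.0), (4.0, 1.0, 1.0)),
}

collisions = [(m, n) for (m, a), (n, b) in combinations(objects.items(), 2)
              if boxes_overlap(a, b)]
print("colliding pairs:", collisions)  # [('real_object_A', 'virtual_object_B')]
```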

A Simulated Annealing Tangential Cutting Algorithm for Lamination Rapid Prototyping System (적층 쾌속조형 시스템을 위한 시뮬레이티드 어닐링 경사절단 알고리즘)

  • 김명숙;엄태준;김승우;천인국;공용해
    • The Transactions of the Korean Institute of Electrical Engineers D / v.53 no.4 / pp.226-234 / 2004
  • A rapid prototyping system that laser-cuts and laminates thick layers can fabricate 3D objects promptly from a variety of materials. Building such a system must take into account the surface distortions caused by both vertically cut layers and triangular surfaces. We developed a tangential layer-cutting algorithm that rearranges tangential lines so that they reconstruct 3D surfaces more closely and also form smoother laser trajectories. An energy function reflecting how closely the tangential lines fit the surface was formulated, and the energy was minimized by a gradient descent method. Since this simple method tends to become trapped in local minima for complex 3D objects, a simulated annealing process was added to the proposed method. To view and manipulate 3D objects, we also implemented a 3D visual environment. Under this environment, experiments on various 3D objects showed that our algorithm effectively approximates 3D surfaces and produces feasibly smooth laser trajectories.
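
The annealing step can be illustrated with a generic simulated-annealing sketch that perturbs candidate tangential-line angles and occasionally accepts uphill moves to escape local minima. The energy function, target profile, and cooling schedule are stand-ins, not the paper's formulation.

```python
# Generic simulated-annealing sketch: candidate tangential-line angles are
# perturbed, and energy-increasing moves are sometimes accepted to escape
# local minima. Energy, target profile, and cooling schedule are assumptions.
import math
import random

TARGET = [math.sin(0.2 * i) for i in range(32)]   # stand-in surface profile

def energy(angles):
    """Distance between candidate tangential angles and the target profile."""
    return sum((a - t) ** 2 for a, t in zip(angles, TARGET))

def anneal(steps=20000, t_start=1.0, t_end=1e-3):
    state = [0.0] * len(TARGET)
    e = energy(state)
    for k in range(steps):
        temp = t_start * (t_end / t_start) ** (k / steps)   # geometric cooling
        i = random.randrange(len(state))
        candidate = state[:]
        candidate[i] += random.uniform(-0.1, 0.1)
        de = energy(candidate) - e
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if de < 0 or random.random() < math.exp(-de / temp):
            state, e = candidate, e + de
    return state, e

_, final_energy = anneal()
print("final energy:", round(final_energy, 4))
```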

Development of Mobile 3D Urban Landscape Authoring and Rendering System

  • Lee Ki-Won;Kim Seung-Yub
    • Korean Journal of Remote Sensing / v.22 no.3 / pp.221-228 / 2006
  • In this study, an integrated 3D modeling and rendering system dealing with 3D urban landscape features such as terrain, buildings, roads, and user-defined geometric features was designed and implemented with the OpenGL ES (Embedded System) API for PDA-class mobile devices. The authoring functions comprise several parts for handling urban landscape features: vertex-based geometry modeling, editing and manipulating 3D landscape objects, generating geometrically complex feature types with attributes for 3D objects, and texture mapping of complex types using an image library. It is a feature-based system linked with 3D geo-based spatial feature attributes. For the rendering process, functions are provided for optimizing multiple integrated 3D landscape objects and rendering texture-mapped 3D landscape objects. Through an actively synchronized process among the desktop system, the OpenGL-based 3D visualization system, and the mobile system, 3D feature models can be transferred and disseminated across both platforms. In this mobile 3D urban processing system, the main graphical user interface and core components are implemented with EVC 4.0 MFC and tested on a PDA running Windows Mobile and Pocket PC. It is expected that mobile 3D geo-spatial information systems supporting registration, modeling, and rendering can be effectively utilized for real-time 3D urban planning and on-site 3D mobile mapping.
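
A minimal sketch of the feature-based idea, assuming a simple record that ties vertex-based geometry to attributes and a texture reference so the same feature can be authored on the desktop and rendered on the mobile device; the field names are hypothetical, not the system's API.

```python
# A landscape feature record linking vertex-based geometry, geo-based
# attributes, and a texture reference from an image library. Field names and
# values are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class LandscapeFeature:
    feature_id: str
    kind: str                      # "terrain", "building", "road", ...
    vertices: list                 # [(x, y, z), ...] vertex-based geometry
    attributes: dict = field(default_factory=dict)   # geo-based attributes
    texture: str = ""              # name of an entry in the image library

building = LandscapeFeature(
    feature_id="B-001",
    kind="building",
    vertices=[(0, 0, 0), (10, 0, 0), (10, 8, 0), (0, 8, 0), (0, 0, 25)],
    attributes={"floors": 8, "use": "office"},
    texture="brick_facade",
)
print(building.kind, len(building.vertices), "vertices,", building.attributes)
```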

A Study on the recognition of moving objects by segmenting 2D Laser Scanner points (2D Laser Scanner 포인트의 자동 분리를 통한 이동체의 구분에 관한 연구)

  • Lee Sang-Yeop;Han Soo-Hee;Yu Ki-Yun
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2006.04a / pp.177-180 / 2006
  • In this paper, we propose a method for automatically segmenting the points acquired by a 2D laser scanner in order to recognize moving objects. Recently, the laser scanner has drawn attention as a new tool for close-range 3D modeling, but most research has focused on precise 3D modeling of static objects using expensive 3D laser scanners. A 2D laser scanner is relatively cheap and can obtain 2D coordinate information of a moving object's surface, or it can be used as a 3D laser scanner by rotating the scanner body. For these reasons, research is under way on applying 2D laser scanners to robot control systems or to the detection of objects moving along a linear trajectory. In our study, we automatically segment the 2D laser scanner point data so that each object passing through the scanned section can be recognized. (A minimal segmentation sketch follows this entry.)

  • PDF
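
The segmentation idea can be sketched by splitting one ordered scan into clusters wherever the gap between consecutive range points exceeds a threshold. The threshold and sample scan below are assumptions, not the paper's calibrated values.

```python
# Minimal sketch of segmenting one 2D laser scan into objects: consecutive
# points stay in the same segment while the distance between them is below a
# gap threshold. Threshold and sample data are illustrative assumptions.
import math

def segment_scan(points, gap_threshold=0.5):
    """points: list of (x, y) in metres, ordered by scan angle."""
    segments, current = [], [points[0]]
    for prev, cur in zip(points, points[1:]):
        if math.dist(prev, cur) <= gap_threshold:
            current.append(cur)
        else:                      # large jump: a new object starts here
            segments.append(current)
            current = [cur]
    segments.append(current)
    return segments

scan = [(1.0, 0.0), (1.0, 0.1), (1.05, 0.2),      # object 1
        (3.0, 0.3), (3.0, 0.4),                   # object 2
        (1.1, 0.5), (1.1, 0.6)]                   # object 3
print([len(s) for s in segment_scan(scan)])       # [3, 2, 2]
```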

Development of Merging Algorithm between 3-D Objects and Real Image for Augmented Reality

  • Kang, Dong-Joong
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2002.10a / pp.100.5-100 / 2002
  • A core technology for implementing Augmented Reality is a merging algorithm between 3-D objects of interest and real images. In this paper, we present a 3-D object recognition method for determining the viewing direction from the camera toward the object. This process is the starting point for merging real images with 3-D objects. Perspective projection between a camera and 3-dimensional objects defines a plane in 3-D space formed by a line in the image and the focal point of the camera. If no errors were introduced during image feature extraction and the 3-D models were perfect, then the model lines in 3-D space projecting onto this image line would lie exactly in this plane. This observa... (A sketch of this geometric observation follows this entry.)

  • PDF
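
A minimal sketch of the geometric observation in the abstract, assuming a pinhole camera: an image line and the focal point define a plane whose normal is the cross product of the viewing rays through the line's endpoints, and a correctly matched 3-D model line satisfies the plane equation. The focal length and image points are illustrative.

```python
# Under a pinhole model, an image line and the camera center define a plane in
# 3-D; its normal is the cross product of the viewing rays through the line's
# endpoints. Focal length and points are illustrative assumptions.
import numpy as np

def interpretation_plane(p1, p2, focal_length):
    """p1, p2: image-plane points (u, v) of a line segment, camera at origin.
    Returns the unit normal of the plane through the camera center and line."""
    ray1 = np.array([p1[0], p1[1], focal_length], dtype=float)
    ray2 = np.array([p2[0], p2[1], focal_length], dtype=float)
    normal = np.cross(ray1, ray2)
    return normal / np.linalg.norm(normal)

# A 3-D model line lies in this plane if its points satisfy n . X = 0
# (camera center at the origin).
n = interpretation_plane((10.0, 5.0), (-20.0, 5.0), focal_length=800.0)
model_point = np.array([0.0, 5.0, 800.0])        # a point on one viewing ray
print("normal:", np.round(n, 4), " n.X =", round(float(n @ model_point), 6))
```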

DEVELOPMENT OF AN INTEGRATED MODEL OF 3D CAD OBJECT AND AUTOMATIC SCHEDULING PROCESS

  • Je-Seung Ryu;Kyung-Hwan Kim
    • International conference on construction engineering and project management / 2009.05a / pp.1468-1473 / 2009
  • Efficient communication of construction information is critical for successful project performance, and Building Information Modeling (BIM) has emerged as a tool for such communication. Through 3D CAD objects, it is possible to check interferences and collisions between objects in advance. In addition, 4D simulation based on 3D objects integrated with time information makes it possible to review the schedule and detect potential scheduling errors. However, current scheduling simulation remains at the stage of animation because 3D objects and scheduling data are integrated manually. Accordingly, this study aims to develop an integrated model of 3D CAD objects that automatically creates scheduling information. (A minimal sketch of the object-schedule linkage follows this entry.)

  • PDF
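
A minimal sketch of the intended integration, assuming schedule activities that reference the 3D object IDs they build, so a 4D simulation can list the objects existing on a given date; the activity names, dates, and IDs are hypothetical.

```python
# Each schedule activity references the 3D CAD object IDs it builds, so a 4D
# simulation can ask which objects exist on a given date. Activities, dates,
# and object IDs are illustrative assumptions, not the study's actual model.
from datetime import date

schedule = [
    {"activity": "foundation", "finish": date(2009, 3, 15), "objects": ["F01"]},
    {"activity": "columns_L1", "finish": date(2009, 4, 10), "objects": ["C01", "C02"]},
    {"activity": "slab_L2",    "finish": date(2009, 5, 5),  "objects": ["S02"]},
]

def built_objects(schedule, as_of):
    """Return the 3D object IDs completed on or before the given date."""
    return [obj for act in schedule if act["finish"] <= as_of
            for obj in act["objects"]]

print(built_objects(schedule, date(2009, 4, 30)))   # ['F01', 'C01', 'C02']
```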