• Title/Summary/Keyword: cloud robotics

57 search results

An Object Recognition Method Based on Depth Information for an Indoor Mobile Robot (실내 이동로봇을 위한 거리 정보 기반 물체 인식 방법)

  • Park, Jungkil;Park, Jaebyung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.10
    • /
    • pp.958-964
    • /
    • 2015
  • In this paper, an object recognition method based on depth information from the RGB-D camera, Xtion, is proposed for an indoor mobile robot. First, the RANdom SAmple Consensus (RANSAC) algorithm is applied to the point cloud obtained from the RGB-D camera to detect and remove the floor points. Next, the remaining point cloud is segmented into each object's point cloud by k-means clustering, and the normal vector of each point is obtained using a k-d tree search. The obtained normal vectors are classified into 18 classes by a trained multi-layer perceptron and used as features for object recognition. To distinguish one object from another, the similarity between them is measured using the Levenshtein distance. To verify the effectiveness and feasibility of the proposed object recognition method, experiments are carried out with several similar boxes.
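
A minimal sketch of the floor-removal, segmentation, and normal-estimation steps described above, assuming Open3D and scikit-learn; the file name, cluster count, and thresholds are illustrative, and the MLP classification and Levenshtein comparison stages are omitted:

```python
# Floor removal with RANSAC, per-object segmentation with k-means, and
# k-d tree based normal estimation (the MLP and Levenshtein stages are omitted).
import numpy as np
import open3d as o3d
from sklearn.cluster import KMeans

pcd = o3d.io.read_point_cloud("scene.pcd")   # point cloud from the RGB-D camera (illustrative file)

# 1) Detect the dominant plane (the floor) with RANSAC and drop its inliers.
plane_model, floor_idx = pcd.segment_plane(distance_threshold=0.02,
                                           ransac_n=3, num_iterations=1000)
objects = pcd.select_by_index(floor_idx, invert=True)

# 2) Split the remaining points into per-object clouds with k-means.
pts = np.asarray(objects.points)
labels = KMeans(n_clusters=3, n_init=10).fit_predict(pts)   # number of objects assumed known

# 3) Estimate per-point normals with a k-d tree neighborhood search;
#    these normals are the features passed on to the trained classifier.
objects.estimate_normals(o3d.geometry.KDTreeSearchParamKNN(knn=30))
for k in range(labels.max() + 1):
    print(f"object {k}: {np.count_nonzero(labels == k)} points")
```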

Development of ROS2-on-Yocto-based Thin Client Robot for Cloud Robotics (클라우드 연동을 위한 ROS2 on Yocto 기반의 Thin Client 로봇 개발)

  • Kim, Yunsung;Lee, Dongoen;Jeong, Seonghoon;Moon, Hyeongil;Yu, Changseung;Lee, Kangyoung;Choi, Juneyoul;Kim, Youngjae
    • The Journal of Korea Robotics Society
    • /
    • v.16 no.4
    • /
    • pp.327-335
    • /
    • 2021
  • In this paper, we propose an embedded robot system based on "ROS2 on Yocto" that can support various robots. We developed a lightweight OS based on the Yocto Project as a next-generation robot platform targeting cloud robotics. The Yocto Project was adopted for portability and scalability in both software and hardware, and ROS2 was adopted and optimized for a low-specification embedded hardware system. We developed SLAM, navigation, path planning, and motion modules to validate the proposed robot system. To verify the software packages, we applied them to a home cleaning robot and an indoor delivery robot already commercialized by LG Electronics and confirmed that the robots can perform autonomous driving, obstacle recognition, and obstacle avoidance. Memory usage and network I/O were improved by applying a binary launch method based on shell scripts and mmap, as opposed to the conventional Python launch method. Finally, we verified the possibility of mass production and commercialization of the proposed system through performance evaluation from the CPU and memory perspectives.

Design and Implementation of Multi-Cloud Service Common Platform (멀티 클라우드 서비스 공통 플랫폼 설계 및 구현)

  • Kim, Sooyoung;Kim, Byoungseob;Son, Seokho;Seo, Jihoon;Kim, Yunkon;Kang, Dongjae
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.1
    • /
    • pp.75-94
    • /
    • 2021
  • The 4th industrial revolution requires a fusion of artificial intelligence, robotics, the Internet of Things (IoT), edge computing, and other technologies. For this fusion of technologies, cloud computing can provide flexible and high-performance computing resources, so it can serve as the foundation technology for new emerging services. These emerging services are becoming global in scale and require much higher performance, availability, and reliability. Public cloud providers already offer global-scale services, but their services, costs, performance, and policies differ. Enterprises and developers trying to build new interoperable services therefore experience vendor lock-in problems. Multi-cloud technology that federatively resolves the limitations of single cloud providers is thus required. We propose a software platform, denoted as Cloud-Barista, a multi-cloud service common platform for federating multiple clouds. It makes multiple cloud services usable as a single service. We explain the functional architecture of the proposed platform, which consists of several frameworks, and then discuss the main design and implementation issues of each framework. To verify the feasibility of our proposal, we present a demonstration that creates 18 virtual machines on several cloud providers, combines them into a single resource, and manages them.
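
A rough sketch of what the demonstration above amounts to from a client's point of view: one loop creating VMs on several providers through a single platform endpoint. The endpoint path, payload fields, and provider names are hypothetical placeholders, not Cloud-Barista's actual API:

```python
# One client loop spreading 18 VMs across providers through a single
# multi-cloud endpoint, then treating them as one logical resource group.
# Endpoint, payload fields, and provider names are hypothetical.
import requests

PLATFORM = "http://localhost:1323"          # assumed address of the common platform
PROVIDERS = ["aws", "gcp", "azure"]

vm_ids = []
for i in range(18):
    spec = {
        "name": f"demo-vm-{i:02d}",
        "provider": PROVIDERS[i % len(PROVIDERS)],   # alternate across clouds
        "image": "ubuntu-20.04",
        "flavor": "small",
    }
    resp = requests.post(f"{PLATFORM}/vm", json=spec, timeout=30)
    resp.raise_for_status()
    vm_ids.append(resp.json()["id"])

print(f"created {len(vm_ids)} VMs managed as a single resource group")
```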

Goal-driven Optimization Strategy for Energy and Performance-Aware Data Centers for Cloud-Based Wind Farm CMS

  • Elijorde, Frank;Kim, Sungho;Lee, Jaewan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.3
    • /
    • pp.1362-1376
    • /
    • 2016
  • A cloud computing system can be characterized by the provision of resources in the form of services to third parties on a leased, usage-based basis, as well as by the private infrastructures maintained and utilized by individual organizations. To attain the desired reliability and energy efficiency in a cloud data center, trade-offs need to be made between system performance and power consumption. Resolving these conflicting goals is often the major challenge in the design of optimization strategies for cloud data centers. The work presented in this paper is directed toward the development of an Energy-efficient and Performance-aware Cloud System equipped with strategies for dynamically switching the optimization approach. Moreover, a platform is also provided for the deployment of a Wind Farm CMS (Condition Monitoring System) which allows ubiquitous access. Due to the geographically dispersed nature of wind farms, the CMS can take advantage of the cloud's highly scalable architecture to maintain reliable and efficient operation capable of handling multiple simultaneous users and huge amounts of monitoring data. Using the proposed cloud architecture, a Wind Farm CMS is deployed on a virtual platform to monitor and evaluate the aging conditions of the turbines' major components in concurrent, yet isolated, working environments.
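
A toy sketch of the goal-driven switching idea above: a controller that picks the optimization goal to pursue next from current load and SLA metrics. Thresholds and metric names are illustrative assumptions, not the paper's actual policy:

```python
# A controller that switches the data center's optimization goal based on
# current utilization and SLA violations; thresholds are illustrative only.
def choose_mode(cpu_util, sla_violation_rate,
                util_threshold=0.8, sla_threshold=0.05):
    """Return the optimization goal the controller should pursue next."""
    if sla_violation_rate > sla_threshold or cpu_util > util_threshold:
        return "performance"   # spread load, keep more hosts powered on
    return "energy"            # consolidate VMs, power down idle hosts

print(choose_mode(cpu_util=0.92, sla_violation_rate=0.01))   # -> "performance"
print(choose_mode(cpu_util=0.40, sla_violation_rate=0.00))   # -> "energy"
```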

Spherical Signature Description of 3D Point Cloud and Environmental Feature Learning based on Deep Belief Nets for Urban Structure Classification (도시 구조물 분류를 위한 3차원 점 군의 구형 특징 표현과 심층 신뢰 신경망 기반의 환경 형상 학습)

  • Lee, Sejin;Kim, Donghyun
    • The Journal of Korea Robotics Society
    • /
    • v.11 no.3
    • /
    • pp.115-126
    • /
    • 2016
  • This paper presents a spherical signature description of 3D point clouds acquired by a laser range scanner mounted on a ground vehicle. Based on the spherical signature description of each point, an extractor of significant environmental features is learned by Deep Belief Nets for urban structure classification. An arbitrary point in the 3D point cloud can represent its signature on the surrounding sky surface by using several neighboring points: the unit sphere centered on that point accumulates evidence in each cell of an angular tessellation. Depending on the kind of region a point belongs to, such as wall, ground, tree, or car, the resulting spherical signature descriptions look quite different from one another. These data are fed into the Deep Belief Nets, a type of deep neural network, to learn the environmental feature extractor. With this learned feature extractor, 3D points can be classified well according to their urban structure. Experimental results show that the proposed method based on the spherical signature description and Deep Belief Nets is suitable for mobile robots in terms of classification accuracy.
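
A minimal sketch of the spherical signature of a single point, assuming an azimuth/elevation tessellation of the sky-facing hemisphere; bin counts and neighborhood size are illustrative, and the Deep Belief Net training is omitted:

```python
# Spherical signature of one 3D point: neighboring points are binned by
# azimuth/elevation on the sky-facing half of the unit sphere around it.
import numpy as np

def spherical_signature(center, neighbors, n_azimuth=18, n_elevation=9):
    """Histogram of neighbor directions over an angular tessellation."""
    d = neighbors - center
    r = np.linalg.norm(d, axis=1)
    d = d[r > 1e-6] / r[r > 1e-6, None]                 # unit direction vectors
    azimuth = np.arctan2(d[:, 1], d[:, 0])              # [-pi, pi)
    elevation = np.arcsin(np.clip(d[:, 2], -1.0, 1.0))  # [-pi/2, pi/2]
    hist, _, _ = np.histogram2d(
        azimuth, elevation, bins=[n_azimuth, n_elevation],
        range=[[-np.pi, np.pi], [0.0, np.pi / 2]])      # below-horizon points are ignored
    return hist.ravel()                                  # feature vector for the classifier

sig = spherical_signature(np.zeros(3), np.random.rand(200, 3) - 0.5)
print(sig.shape)   # (162,) = 18 x 9 bins
```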

LiDAR-based Mobile Robot Exploration Considering Navigability in Indoor Environments (실내 환경에서의 주행가능성을 고려한 라이다 기반 이동 로봇 탐사 기법)

  • Hyejeong Ryu;Jinwoo Choi;Taehyeon Kim
    • The Journal of Korea Robotics Society
    • /
    • v.18 no.4
    • /
    • pp.487-495
    • /
    • 2023
  • This paper presents a method for autonomous exploration of indoor environments using a 2-dimensional Light Detection And Ranging (LiDAR) scanner. The proposed frontier-based exploration method considers navigability from the current robot position to the extracted frontier targets. An approach to constructing a point cloud grid map that accurately reflects the occupancy probability of glass obstacles is proposed, enabling identification of safe frontier grids on the safety grid map calculated from the point cloud grid map. Navigability, indicating whether the robot can successfully navigate to each frontier target, is calculated by applying the skeletonization-informed rapidly exploring random tree algorithm to the safety grid map. While conventional exploration approaches have focused on frontier detection and on deciding the target position and direction, the proposed method addresses safe navigation for the overall exploration process until mapping is complete. Real-world experiments verify that the proposed method leads the robot to avoid glass obstacles and safely navigate the entire environment, constructing the point cloud map and calculating the navigability with low variation in computing time.
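
A minimal sketch of the frontier-detection step of the pipeline above, on a toy occupancy grid; the glass-aware point cloud grid map and the skeletonization-informed RRT navigability check are omitted:

```python
# Frontier detection on a small occupancy grid: free cells that touch
# unknown space. Cell values: -1 unknown, 0 free, 1 occupied.
import numpy as np

def frontier_cells(grid):
    """Return (row, col) indices of free cells adjacent to unknown cells."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != 0:                    # only free cells can be frontiers
                continue
            neighborhood = grid[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            if np.any(neighborhood == -1):         # borders unknown space
                frontiers.append((r, c))
    return frontiers

demo = np.full((5, 5), -1)
demo[2, :3] = 0          # a free corridor the robot has already observed
demo[3, 1] = 1           # an obstacle
print(frontier_cells(demo))
```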

Development of a Noncontact Three Dimensional Foot Form Measurement System with Optical Triangulation (광삼각법을 이용한 비접촉 3차원 족형 측정 시스템 설계)

  • 박인덕;안형회;송강석;이희만;김시경
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.9 no.5
    • /
    • pp.368-373
    • /
    • 2003
  • This paper presents a cost-effective 3D foot scanner system that provides 3-dimensional point cloud foot data for designing custom footwear. To measure the 3-dimensional point cloud data of the foot, a CCD camera, a non-Gaussian laser line projector, and the optical triangulation method are employed. Furthermore, the integrated system includes a measurement base, a frame grabber, a CCD moving cart, a stepping motor, and a computer. The measurement result is saved in 3D DXF format and can be converted into the essential 2D data for shoe design. The experimental results demonstrate that the proposed system has a resolution of 1 mm, which is sufficient for last and shoe design.
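
A minimal sketch of the optical triangulation geometry underlying the scanner above, assuming a pinhole camera and a laser source offset by a baseline along the image x-axis; all numbers are illustrative, not the paper's calibration:

```python
# Depth of one laser-lit pixel from triangulation: the camera ray and the
# laser ray are intersected in the x-z plane. Values are illustrative.
import math

def triangulate_depth(u_px, focal_px, baseline_m, alpha_rad):
    """Range z of a point seen at horizontal pixel offset u_px from the image center."""
    # Camera ray:  x = z * u / f           (pinhole camera at the origin)
    # Laser ray:   x = b - z * tan(alpha)  (source at x = b, aimed back toward the axis)
    return baseline_m * focal_px / (u_px + focal_px * math.tan(alpha_rad))

z = triangulate_depth(u_px=120.0, focal_px=800.0, baseline_m=0.15,
                      alpha_rad=math.radians(20.0))
print(f"depth ~= {z:.3f} m")
```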

Obstacle Detection for Generating the Motion of Humanoid Robot (휴머노이드 로봇의 움직임 생성을 위한 장애물 인식방법)

  • Park, Chan-Soo;Kim, Doik
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.18 no.12
    • /
    • pp.1115-1121
    • /
    • 2012
  • This paper proposes a method for a humanoid robot to extract accurate planes of objects in an unstructured environment using a laser scanner. By panning and tilting a 2D laser scanner installed on the head of the humanoid robot, a 3D depth map of the unstructured environment is generated. After generating the 3D depth map around the robot, the proposed plane extraction method is applied to it. Using a hierarchical clustering method, points on the same plane are extracted from the point cloud in the 3D depth map. After segmenting the planes from the point cloud, their dimensions are calculated. The accuracy of the extracted planes is evaluated with experimental results, which show the effectiveness of the proposed method for extracting planes around a humanoid robot in an unstructured environment.
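
A minimal sketch of the first step of the pipeline above: building a 3D point cloud from a panned and tilted 2D laser scanner. The joint angles, scan geometry, and sign conventions are assumptions, and the hierarchical plane clustering itself is omitted:

```python
# Build a 3D point cloud by sweeping a 2D laser scan through tilt angles.
# The hierarchical plane clustering applied afterwards is omitted.
import numpy as np

def scan_to_points(ranges, scan_angles, tilt_angle):
    """Convert one 2D scan, taken at a given head tilt, into 3D points."""
    x = ranges * np.cos(scan_angles)    # forward component in the scan plane
    y = ranges * np.sin(scan_angles)    # lateral component in the scan plane
    ct, st = np.cos(tilt_angle), np.sin(tilt_angle)
    # Rotate the scan plane about the lateral (y) axis by the tilt angle.
    return np.stack([x * ct, y, x * st], axis=1)

cloud = np.vstack([
    scan_to_points(np.full(181, 2.0),                 # fake 2 m returns
                   np.radians(np.arange(-90, 91)),    # 1-degree scan resolution
                   np.radians(t))
    for t in range(-30, 31, 5)                        # sweep the tilt joint
])
print(cloud.shape)   # (13 * 181, 3) points in the 3D depth map
```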

UAV Altitude and Attitude Estimation Method Using Stereo Vision (스테레오 비전를 이용한 무인기 고도 및 자세 추정기법)

  • Jung, Ha-Hyoung;Lee, Jun-Min;Lyou, Joon
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.1
    • /
    • pp.17-23
    • /
    • 2016
  • This paper presents the implementation of an altitude and attitude measurement algorithm using a stereo camera for an unmanned aerial vehicle (UAV). Depth images are generated by calibrating the stereo cameras and converted into 3D point cloud data. By applying a plane fitting algorithm to the resulting point cloud, the altitude above ground level and the roll and pitch angles are extracted. To verify the performance, experimental results are provided and compared with those of a motion capture system.
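
A minimal sketch of the plane-fitting step above with NumPy: fit a ground plane to the point cloud by SVD, then read the altitude from the plane distance and the roll/pitch angles from the plane normal. The axis convention (camera z pointing toward the ground) is an assumption:

```python
# Fit a ground plane to the stereo point cloud with SVD, then derive the
# altitude and the roll/pitch angles from the plane normal and distance.
import numpy as np

def plane_fit(points):
    """Least-squares plane through the points; returns unit normal and centroid."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                       # direction of least variance
    if normal[2] < 0:                     # make the normal point away from the camera consistently
        normal = -normal
    return normal, centroid

def altitude_attitude(points):
    n, c = plane_fit(points)
    altitude = abs(np.dot(n, c))          # distance from the camera origin to the plane
    roll = np.arctan2(n[1], n[2])         # rotation about the forward (x) axis
    pitch = np.arctan2(-n[0], np.hypot(n[1], n[2]))
    return altitude, roll, pitch

ground = np.random.randn(500, 3) * [1.0, 1.0, 0.01] + [0.0, 0.0, 5.0]   # synthetic ground ~5 m away
print(altitude_attitude(ground))
```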

Grasping Algorithm using Point Cloud-based Deep Learning (점군 기반의 심층학습을 이용한 파지 알고리즘)

  • Bae, Joon-Hyup;Jo, HyunJun;Song, Jae-Bok
    • The Journal of Korea Robotics Society
    • /
    • v.16 no.2
    • /
    • pp.130-136
    • /
    • 2021
  • In recent years, much research has been conducted on robotic grasping. Grasping algorithms based on deep learning have shown better grasping performance than traditional ones; however, deep learning-based algorithms require a lot of data and time for training. In this study, a grasping algorithm using an artificial neural network-based graspability estimator is proposed. This graspability estimator can be trained with a small amount of data by using a neural network based on residual blocks and point clouds containing the shapes of objects, rather than RGB images containing various features. The trained graspability estimator measures the graspability of objects and chooses the best one to grasp. Experiments show that the proposed algorithm achieves a success rate of 90% and a cycle time of 12 s per grasp, which indicates that it is an efficient grasping algorithm.
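
A minimal sketch of ranking grasp candidates with a graspability estimator, in the spirit of the approach above; the network here is a generic residual MLP over point-cloud crops in PyTorch, shown only to illustrate the idea, not the paper's actual architecture:

```python
# Score grasp candidates with a graspability estimator and pick the best one.
# The network is a generic residual MLP over point-cloud crops, shown only
# to illustrate the idea of residual blocks on point data.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return torch.relu(x + self.net(x))             # skip connection

class GraspabilityEstimator(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.embed = nn.Linear(3, dim)                  # per-point embedding
        self.blocks = nn.Sequential(ResidualBlock(dim), ResidualBlock(dim))
        self.head = nn.Linear(dim, 1)

    def forward(self, points):                          # points: (B, N, 3) candidate crops
        x = self.blocks(self.embed(points)).max(dim=1).values   # global max pooling over points
        return torch.sigmoid(self.head(x)).squeeze(-1)          # one graspability score per candidate

candidates = torch.rand(5, 256, 3)                      # five candidate point-cloud crops
scores = GraspabilityEstimator()(candidates)
best = scores.argmax().item()                           # grasp the highest-scoring candidate
print(best, scores.detach().numpy())
```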