• Title/Summary/Keyword: learning with a robot


Scaling Attack Method for Misalignment Error of Camera-LiDAR Calibration Model (카메라-라이다 융합 모델의 오류 유발을 위한 스케일링 공격 방법)

  • Yi-ji Im;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.6
    • /
    • pp.1099-1110
    • /
    • 2023
  • The recognition systems of autonomous driving and robot navigation perform vision tasks such as object recognition, tracking, and lane detection after multi-sensor fusion to improve performance. Research on deep learning models based on the fusion of a camera and a lidar sensor is currently being actively conducted. However, deep learning models are vulnerable to adversarial attacks that perturb the input data. Existing attacks on multi-sensor-based autonomous driving recognition systems focus on suppressing obstacle detection by lowering the confidence score of the object recognition model, but they have the limitation that the attack works only on the targeted model. For attacks on the sensor fusion stage, errors can cascade into the vision tasks performed after fusion, and this risk needs to be considered. In addition, an attack on lidar point cloud data, which is difficult to judge visually, makes it hard to determine whether an attack has occurred. In this study, we propose an image-scaling-based attack method that reduces the accuracy of LCCNet, a camera-LiDAR calibration (fusion) model. The proposed method performs a scaling attack on the input lidar point cloud. Experiments on attack performance for each scaling size showed that the attack induced fusion errors of more than 77% on average.
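The scaling idea in this abstract can be pictured with a minimal numpy sketch. Everything below is an assumption for illustration: the synthetic point cloud, the per-axis scale factor, the camera intrinsics `K`, and the pixel-displacement error metric stand in for the paper's actual LCCNet setup.

```python
import numpy as np

def scaling_attack(points, scale):
    """Adversarial perturbation: scale LiDAR points per axis about the origin."""
    return points * scale

def misalignment_error(points, attacked, K):
    """Mean pixel displacement after projecting both clouds with intrinsics K."""
    def project(p):
        uv = (K @ p.T).T                  # pinhole projection to the image plane
        return uv[:, :2] / uv[:, 2:3]     # divide by depth
    return float(np.linalg.norm(project(points) - project(attacked), axis=1).mean())

rng = np.random.default_rng(0)
pts = rng.uniform([-10, -2, 4], [10, 2, 40], size=(500, 3))  # points in front of the camera
K = np.array([[700., 0., 320.], [0., 700., 240.], [0., 0., 1.]])  # assumed intrinsics

# Scale only the x axis: a uniform scale of all three axes would cancel out
# in the projection, so an anisotropic scale is used to induce misalignment.
err = misalignment_error(pts, scaling_attack(pts, np.array([1.2, 1.0, 1.0])), K)
```

The measured `err` is only a toy stand-in for the paper's fusion-error rate, but it shows how a small geometric scaling of the point cloud shifts the projected points the calibration model must align.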

Confidence Measure of Depth Map for Outdoor RGB+D Database (야외 RGB+D 데이터베이스 구축을 위한 깊이 영상 신뢰도 측정 기법)

  • Park, Jaekwang;Kim, Sunok;Sohn, Kwanghoon;Min, Dongbo
    • Journal of Korea Multimedia Society
    • /
    • v.19 no.9
    • /
    • pp.1647-1658
    • /
    • 2016
  • RGB+D databases have been widely used in object recognition, object tracking, and robot control, to name a few. While the rapid advance of active depth sensing technologies has enabled the widespread availability of indoor RGB+D databases, there are only a few outdoor RGB+D databases, largely due to an inherent limitation of active depth cameras. In this paper, we propose a novel method for building outdoor RGB+D databases. Instead of using active depth cameras such as Kinect or LIDAR, we acquire stereo image pairs with a high-resolution stereo camera and then obtain a depth map by applying a stereo matching algorithm. To deal with the estimation errors that inevitably exist in depth maps obtained from stereo matching, we develop an approach that estimates the confidence of depth maps based on unsupervised learning. Unlike existing confidence estimation approaches, we explicitly consider the spatial correlation that may exist in the confidence map. Specifically, we focus on refining the confidence feature under the assumption that the confidence feature and the resulting confidence map vary smoothly in the spatial domain and are highly correlated with each other. Experimental results show that the proposed method outperforms existing confidence measure based approaches on various benchmark datasets.
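As a deliberately simplified illustration of confidence estimation with spatial correlation, the sketch below uses a hand-crafted left-right consistency feature and a box filter in place of the paper's learned confidence feature; the function names and the toy disparity maps are assumptions.

```python
import numpy as np

def lr_consistency(disp_left, disp_right):
    """Confidence feature: small left-right disparity mismatch -> high confidence."""
    h, w = disp_left.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    x_right = np.clip(xs - np.round(disp_left).astype(int), 0, w - 1)
    mismatch = np.abs(disp_left - np.take_along_axis(disp_right, x_right, axis=1))
    return np.exp(-mismatch)          # maps mismatch into (0, 1]

def smooth(conf, k=3):
    """Impose spatial correlation on the confidence map with a k x k box filter."""
    pad = k // 2
    padded = np.pad(conf, pad, mode='edge')
    out = np.zeros_like(conf)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + conf.shape[0], dx:dx + conf.shape[1]]
    return out / (k * k)

# Toy disparity maps: constant disparity with one inconsistent right-image pixel.
disp_l = np.full((8, 8), 4.0)
disp_r = np.full((8, 8), 4.0)
disp_r[4, 2] = 9.0
conf = smooth(lr_consistency(disp_l, disp_r))
```

The box filter is only a stand-in for the paper's smoothness assumption; the point is that the raw per-pixel feature and the spatially refined map are computed as separate steps.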

The Present and Future of Robotic Surgery (로봇수술의 현재와 미래)

  • Rha, Koon-Ho
    • Proceedings of the KIEE Conference
    • /
    • 2008.10b
    • /
    • pp.68-70
    • /
    • 2008
  • Since the beginning of the 21st century, the emergence of innovative technologies has made further advances in minimal access surgery possible. Robotic surgery and telepresence surgery effectively address the limitations of laparoscopic procedures, thus revolutionizing minimal access surgery. Surgical robots provide surgeons with technologically advanced vision and enhanced hand skills. As a result, such systems are expected to revolutionize the field of surgery. Since then, much progress has been made in integrating robotic technologies with surgical instrumentation. However, robotic surgery will not only require special training, but it will also change the existing surgical training pattern and reshape the learning curve by offering new solutions, such as robotic surgical simulators and robotic telementoring. This article provides an introduction to medical robotic technologies, develops a possible classification, reviews the evolution of the surgical robot, and discusses future prospects for innovation. In the future, surgical robots should be smaller, less expensive, and easier to operate, and should seamlessly integrate emerging technologies from a number of different fields. We believe that in the near future, as robotic technology continues to develop, almost all kinds of endoscopic surgery will be performed with this technology.


AdaBoost-based Real-Time Face Detection & Tracking System (AdaBoost 기반의 실시간 고속 얼굴검출 및 추적시스템의 개발)

  • Kim, Jeong-Hyun;Kim, Jin-Young;Hong, Young-Jin;Kwon, Jang-Woo;Kang, Dong-Joong;Lho, Tae-Jung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.13 no.11
    • /
    • pp.1074-1081
    • /
    • 2007
  • This paper presents a method for real-time face detection and tracking that combines the AdaBoost and CAMShift algorithms. The AdaBoost algorithm selects important features, called weak classifiers, from among many candidate image features by tuning the weight of each feature during learning. Although it extracts objects with excellent performance, its computing time is very high because multi-scale windows must scan the image region, so direct application of the method is not easy for real-time tasks such as multi-task operating systems, robots, and mobile environments. The CAMShift method, an improvement of the mean-shift algorithm for video streaming environments, tracks the object of interest at high speed based on the hue values of the target region, but its detection efficiency is poor under dynamic illumination. We propose a combination of AdaBoost and CAMShift that improves computing speed while maintaining good face detection performance. The method was validated on real image sequences containing single and multiple faces.
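The tracking half of such a pipeline can be sketched in plain numpy: build a hue histogram of the target, back-project it onto a frame, and run mean-shift on the search window. This shows only the core mean-shift step of CAMShift (the adaptive window resizing and the AdaBoost detector are omitted), and the hue values, histogram, and window sizes are invented for the toy frame; a real implementation would use OpenCV's trained cascades and `cv2.CamShift`.

```python
import numpy as np

def backproject(hue, hist, bins=8):
    """Per-pixel probability of belonging to the target, from its hue histogram."""
    idx = np.minimum((hue * bins / 180).astype(int), bins - 1)
    return hist[idx]

def mean_shift(prob, window, iters=10):
    """Shift the window to the centroid of probability mass until it converges."""
    x, y, w, h = window
    for _ in range(iters):
        roi = prob[y:y + h, x:x + w]
        total = roi.sum()
        if total == 0:
            break
        ys, xs = np.mgrid[0:h, 0:w]
        cx = int(round((xs * roi).sum() / total))   # centroid inside the window
        cy = int(round((ys * roi).sum() / total))
        nx = int(np.clip(x + cx - w // 2, 0, prob.shape[1] - w))
        ny = int(np.clip(y + cy - h // 2, 0, prob.shape[0] - h))
        if (nx, ny) == (x, y):
            break
        x, y = nx, ny
    return x, y, w, h

# Toy frame: background hue 100, a face-coloured (hue 10) square at cols 20-29, rows 15-24.
frame = np.full((40, 40), 100.0)
frame[15:25, 20:30] = 10.0
hist = np.zeros(8)
hist[0] = 1.0                         # assumed target histogram: all mass in the face-hue bin

win = mean_shift(backproject(frame, hist), (12, 10, 12, 12))
```

Starting from a window that only partially overlaps the target, the iterations pull the window onto the square; this is the cheap per-frame step that makes the combined detector-plus-tracker fast.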

Grading of Harvested 'Mihwang' Peach Maturity with Convolutional Neural Network (합성곱 신경망을 이용한 '미황' 복숭아 과실의 성숙도 분류)

  • Shin, Mi Hee;Jang, Kyeong Eun;Lee, Seul Ki;Cho, Jung Gun;Song, Sang Jun;Kim, Jin Gook
    • Journal of Bio-Environment Control
    • /
    • v.31 no.4
    • /
    • pp.270-278
    • /
    • 2022
  • This study used deep learning technology to classify 'Mihwang' peach maturity from RGB images and fruit quality attributes during the fruit development and maturation periods. 730 peach images were split into a training set and a validation set at a ratio of 8:2, and the remaining 170 images were used to test the deep learning models. Among the fruit quality attributes, firmness, Hue value, and a* value were adopted as indices for maturity classification into immature, mature, and over-mature fruit. The CNN (Convolutional Neural Network) models VGG16 and InceptionV3 (GoogLeNet) were used for image classification. With the Hue value index, the classification accuracy was 87.1% for VGG16 and 83.6% for InceptionV3; with the firmness index, it was 72.2% and 76.9%, respectively, with loss rates of 54.3% and 62.1%. These results suggest that the firmness-based index needs further improvement before field application.
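The maturity index itself can be pictured with a tiny sketch. The Hue thresholds below are invented for illustration (the study derives its class boundaries from measured fruit data and then trains the CNNs on images), so every number here is an assumption.

```python
def maturity_from_hue(hue):
    """Classify a peach as immature / mature / over-mature from its Hue value."""
    if hue > 100:            # greener skin -> immature (assumed threshold)
        return "immature"
    if hue > 70:             # yellowing skin -> mature (assumed threshold)
        return "mature"
    return "over-mature"     # redder, softer fruit

labels = [maturity_from_hue(h) for h in (110, 85, 50)]
```

In the study this labeling step supplies the ground truth classes that the image classifiers are trained against.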

Development of a Programming Education Model for the Educational Use of Smartphones and Educational Robots (스마트폰과 교육용 로봇의 교육적 활용을 위한 프로그래밍 교육 모형 개발)

  • Kim, Se-Min;Moon, Chae-Young;Chung, Jong-In
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2012.10a
    • /
    • pp.481-484
    • /
    • 2012
  • The informatics subject in the revised curriculum devotes a good deal of space to improving problem-solving ability through programming learning. However, it is not easy for learners to stay immersed in programming lessons that use only computers, which imposes a heavy logical burden on learning. Therefore, many studies are being carried out on programming instruction using robots. Moreover, smartphones have spread rapidly in the past few years, and as a result, immersion in smartphones and its side effects have become an issue. This study developed a programming education model that turns the immersive appeal of smartphones into a synergy effect for programming instruction with robots, attempting to improve programming education by channeling the immersion that smartphones provide, which is greatly needed in programming lessons.


Convergence Education Program Using Smart Farm for Artificial Intelligence Education of Elementary School Students (초등학생 대상의 인공지능교육을 위한 스마트팜 활용 융합교육 프로그램)

  • Kim, Jung-Hoon;Moon, Seong-Hwan
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.10
    • /
    • pp.203-210
    • /
    • 2021
  • This study developed a convergence education program using smart farms, whose input data (temperature, humidity, etc.) and output data (vegetables, fruits, etc.) are easily accessible in everyday life, so that elementary school students can intuitively and easily understand the principles of artificial intelligence (AI) learning. To develop this program, we analyzed prior studies on artificial intelligence education and the horticulture, software, and robot units in the 2015 Practical Arts curriculum. Based on this analysis, 13 components and 16 achievement criteria were selected, and an AI education program of 4 sessions (8 hours in total) was developed. This program can serve as a reference when developing various teaching materials for artificial intelligence education in the future.

MIMO Fuzzy Reasoning Method using Learning Ability (학습기능을 사용한 MIMO 퍼지추론 방식)

  • Park, Jin-Hyun;Lee, Tae-Hwan;Choi, Young-Kiu
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2008.10a
    • /
    • pp.175-178
    • /
    • 2008
  • Z. Cao proposed NFRM (new fuzzy reasoning method), which performs detailed inference using a relation matrix. Despite using a small number of inference rules, it performs better than Mamdani's fuzzy inference method. However, in most fuzzy systems it is difficult to construct fuzzy inference rules for the MIMO case. We previously proposed a MIMO fuzzy inference that extends Z. Cao's fuzzy inference to handle MIMO systems, but much time and effort were needed to determine its relation matrix elements by heuristics and trial and error in order to improve inference performance. In this paper, we propose a MIMO fuzzy inference method with learning ability that uses gradient descent to improve performance. Through computer simulation studies on the inverse kinematics problem of a 2-axis robot, we show that the proposed inference method using gradient descent performs well.
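The core idea of learning a relation matrix by gradient descent can be sketched as follows. This is a hedged illustration in the spirit of Cao's relation-matrix inference, not the paper's implementation: the membership functions, matrix shapes, the toy one-input/two-output target, and all hyperparameters are assumptions.

```python
import numpy as np

def fuzzify(x, centers, width=1.0):
    """Normalized Gaussian membership grades of inputs x for each fuzzy set."""
    m = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
    return m / m.sum(axis=1, keepdims=True)

def infer(x, R, centers):
    """Inference: fuzzified input grades times the relation matrix R."""
    return fuzzify(x, centers) @ R

def train(X, Y, centers, lr=1.0, epochs=2000):
    """Tune the relation matrix by gradient descent on the squared output error."""
    R = np.zeros((len(centers), Y.shape[1]))
    M = fuzzify(X, centers)
    for _ in range(epochs):
        R -= lr * M.T @ (M @ R - Y) / len(X)   # grad of 0.5*||M R - Y||^2 / n
    return R

# Toy MIMO-style mapping: one input, two outputs y1 = sin(x), y2 = cos(x).
X = np.linspace(-3, 3, 60)
Y = np.column_stack([np.sin(X), np.cos(X)])
centers = np.linspace(-3, 3, 9)
R = train(X, Y, centers)
mse = float(((infer(X, R, centers) - Y) ** 2).mean())
```

Replacing the heuristic, trial-and-error tuning of `R` with this gradient update is the point of the abstract: the relation matrix becomes a learned parameter rather than a hand-set one.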


Deep Learning Based Rescue Requesters Detection Algorithm for Physical Security in Disaster Sites (재난 현장 물리적 보안을 위한 딥러닝 기반 요구조자 탐지 알고리즘)

  • Kim, Da-hyeon;Park, Man-bok;Ahn, Jun-ho
    • Journal of Internet Computing and Services
    • /
    • v.23 no.4
    • /
    • pp.57-64
    • /
    • 2022
  • If the inside of a building collapses due to a disaster such as fire, structural failure, or a natural disaster, the physical security inside the building is likely to become ineffective. Physical security is then needed to minimize the human casualties and physical damage in the collapsed building. Therefore, this paper proposes an algorithm that minimizes damage in a disaster situation by fusing existing research, which detects obstacles and collapsed areas in the building, with a deep learning-based object detection algorithm that minimizes human casualties. The existing research uses a single camera to determine whether the corridor environment in which the robot is currently located has collapsed and detects obstacles that interfere with the search and rescue operation. Objects inside the collapsed building have irregular shapes due to debris or the collapse of the building, and they are classified and detected as obstacles. We also propose a method to detect rescue requesters, the most important resource in a disaster situation, and minimize human casualties. To this end, we collected open-source disaster images and image data of disaster situations and measured the accuracy of detecting rescue requesters with various deep learning-based object detection algorithms. As a result of this analysis, we found that the YOLOv4 algorithm achieves an accuracy of 0.94, showing that it is the most suitable for use in actual disaster situations. This paper will be helpful for performing efficient search and rescue in disaster situations and for achieving a high level of physical security even in collapsed buildings.

Learning Collision Avoidance Rules for an AMR Using a Fuzzy Classifier System (퍼지 분류자 시스템을 이용한 자율이동로봇의 충돌 회피학습)

  • Ban, Chang-Bong;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.10 no.5
    • /
    • pp.506-512
    • /
    • 2000
  • In this paper, we propose a Fuzzy Classifier System (FCS) that enables a classifier system to carry out the mapping from continuous inputs to outputs. The FCS combines a fuzzy control system with machine learning; therefore, the antecedent and consequent of a classifier in the FCS are the same as those of a fuzzy rule. The FCS converts input messages into fuzzified messages and stores them in the message list. It constructs the rule base by matching messages in the message list against classifiers in the fuzzy classifier list, and it verifies the effectiveness of classifiers using the Bucket Brigade algorithm. The FCS also employs Genetic Algorithms to generate new rules and modify existing ones when system performance needs to be improved, and thereby finds the set of effective rules. We verify the effectiveness of the proposed FCS by applying it to an autonomous mobile robot that avoids obstacles and reaches its goal.
