• Title/Summary/Keyword: robot systems


Design of Fuzzy-Neural Control Technique Using Automatic Cruise Control System of Mobile Robot

  • Kim, Jong-Soo;Jang, Jun-Hwa;Lee, Jin;Han, Sung-Hyung;Han, Dunk-Ki;Kim, Yong-Kyu
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2001.10a / pp.69.3-69 / 2001
  • This paper presents a new approach to the design of a cruise control system for a mobile robot with two drive wheels. The proposed control scheme uses a Gaussian function as the unit function in the fuzzy-neural network and a back-propagation algorithm to train the fuzzy-neural controller within the framework of the specialized learning architecture. A learning controller is proposed that consists of two fuzzy-neural networks based on independent reasoning and a connection net with fixed weights to simplify the fuzzy-neural networks. The performance of the proposed controller is demonstrated by computer simulation of trajectory tracking of the speed and azimuth of a mobile robot driven by two independent wheels.
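As a rough illustration of the control scheme this abstract describes, the sketch below implements a controller whose hidden units are Gaussian membership functions and trains it with a gradient (back-propagation style) rule on the tracking error. The two-input layout, rule count, learning rate, and the name `FuzzyNeuralController` are assumptions for illustration, not the authors' design.

```python
# Minimal sketch (not the paper's code): a fuzzy-neural controller whose hidden
# units are Gaussian membership functions, trained with a gradient (backprop-style)
# rule on the tracking error.  Inputs, unit count, and learning rate are assumptions.
import numpy as np

class FuzzyNeuralController:
    def __init__(self, n_inputs=2, n_rules=9, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.c = rng.uniform(-1.0, 1.0, (n_rules, n_inputs))  # Gaussian centers
        self.s = np.full((n_rules, n_inputs), 0.5)            # Gaussian widths
        self.w = rng.normal(0.0, 0.1, n_rules)                # output weights
        self.lr = lr

    def forward(self, x):
        # firing strength of each rule = product of Gaussian memberships
        self.mu = np.exp(-((x - self.c) ** 2) / (2.0 * self.s ** 2)).prod(axis=1)
        return float(self.w @ self.mu)

    def backward(self, x, error):
        # gradient of 0.5*error^2 w.r.t. output weights and centers (chain rule)
        self.w += self.lr * error * self.mu
        dmu = error * self.w * self.mu
        self.c += self.lr * dmu[:, None] * (x - self.c) / (self.s ** 2)

# toy usage: drive the controller output toward a reference command
ctrl = FuzzyNeuralController()
x = np.array([0.3, -0.1])          # e.g. speed error and azimuth error (assumed)
for _ in range(200):
    u = ctrl.forward(x)
    ctrl.backward(x, 1.0 - u)      # specialized-learning style error signal
print(round(ctrl.forward(x), 3))   # should approach 1.0
```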


Control of a welfare liferobot guided by voice commands

  • Han, Seong-Ho;Yoshihiro, Takita
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2001.10a / pp.47.3-47 / 2001
  • This paper describes the control of a health care robot (called Welfare Liferobot) with voice commands. The welfare liferobot is an intelligent autonomous mobile robot with its own on-board control system and a set of sensors to perceive its environment. Controlling the welfare liferobot by voice commands is natural, because using a keyboard and mouse may be difficult for the elderly and the handicapped. Voice input as the main control modality can offer many advantages. A set of oral commands is included, and each command has an associated function. These control words (commands) have to be chosen by the user. Each time a voice command is recognized by the robot, it executes the pre-assigned action ...
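The command-to-function mapping the abstract mentions can be pictured with a small dispatch table; the command words and actions below are hypothetical placeholders, not the vocabulary used on the Welfare Liferobot.

```python
# Minimal sketch (command words and actions are hypothetical, not from the paper):
# each recognized voice command is looked up in a table and its pre-assigned
# action is executed, mirroring the command->function mapping described above.
def move_forward():  print("moving forward")
def stop():          print("stopping")
def turn_left():     print("turning left")

COMMANDS = {"forward": move_forward, "stop": stop, "left": turn_left}

def on_recognized(word: str) -> None:
    action = COMMANDS.get(word.lower())
    if action is not None:
        action()                      # execute the pre-assigned action
    else:
        print(f"unknown command: {word!r}")

on_recognized("Forward")
on_recognized("dance")
```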


Mobility analysis of Planar Mobile Robots and The Rough-Terrain Mobile Robot via The Screw

  • Kim, Whee-Kuk;Yi, Byung-Ju;Lee, Seung-Eun
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2001.10a / pp.59.3-59 / 2001
  • In this paper, a method of analyzing the mobility of mechanisms is suggested. The method, based on joint screws, provides accurate values of the mobility of mechanisms even when geometric generality is lacking. To show its validity, the method is applied to finding the mobilities of planar mobile robots and a rough-terrain mobile robot, the Mars Rover. First, simplified joint models for each of four typical wheels of mobile robots are described, including friction velocities. Then, mobility analyses of planar mobile robots and the Mars Rover mobile robot for navigation on the rocky terrain of Mars are performed. It is confirmed that the results obtained in this study coincide with previous ones which ...
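The screw-based mobility idea can be illustrated with a toy planar example: stack the joint screws as columns and read the mobility from the matrix rank, which stays correct even in geometrically special (singular) configurations. The 3R chain below is purely illustrative and is not the paper's wheel model or the Mars Rover.

```python
# Minimal sketch of the screw-based idea (an illustrative planar 3R chain, not the
# paper's wheel models): mobility follows from the rank of the matrix of joint
# screws, which remains correct even for special geometries.
import numpy as np

def planar_revolute_screw(px, py):
    # planar twist (wz, vx, vy) of a revolute joint located at (px, py)
    return np.array([1.0, py, -px])   # v = w x (-p) restricted to the plane

screws = np.column_stack([
    planar_revolute_screw(0.0, 0.0),
    planar_revolute_screw(1.0, 0.0),
    planar_revolute_screw(1.0, 1.0),
])
print(np.linalg.matrix_rank(screws))      # 3: full planar mobility of the end link

# when the joint centres become collinear the rank (and hence the mobility) drops
singular = np.column_stack([planar_revolute_screw(x, 0.0) for x in (0.0, 1.0, 2.0)])
print(np.linalg.matrix_rank(singular))    # 2
```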


The Design of Controller for Unlimited Track Mobile Robot

  • Park, Han-Soo;Heon Jeong;Park, Sei-Seung
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2001.10a / pp.41.6-41 / 2001
  • As autonomous mobile robots become more widely used in industry, the importance of navigation systems is rising, but the primary method of locomotion is with wheels, which causes many problems in controlling tracked mobile robots. In this paper, we discuss the navigation control of tracked mobile robots with multiple sensors. The multiple sensors are composed of ultrasonic sensors and vision sensors. The vision sensors gauge distance using a laser and create visual images to estimate the robot's position. The 80196 is used at close range and the vision board is used at long range. Data is managed in the main PC, and management is distributed to every sensor. The controller employs fuzzy logic.
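A very small sketch of the range-dependent sensor selection and rule-based steering outlined above is given below; the 1.5 m hand-over threshold and the crisp rules standing in for the fuzzy controller are assumptions, not the paper's design.

```python
# Minimal sketch (thresholds and rules are assumptions, not the paper's design):
# range readings come from the ultrasonic sensors at close range and from the
# laser/vision system at long range, then feed a simple steering rule.
def fused_distance(ultrasonic_m: float, vision_m: float, close_limit: float = 1.5) -> float:
    # trust the ultrasonic reading when the obstacle is close, vision otherwise
    return ultrasonic_m if ultrasonic_m < close_limit else vision_m

def steering_command(left_m: float, right_m: float) -> str:
    # crude crisp rules standing in for the fuzzy controller
    diff = left_m - right_m
    if abs(diff) < 0.2:
        return "straight"
    return "turn right" if diff < 0 else "turn left"

d_left = fused_distance(ultrasonic_m=0.8, vision_m=3.0)
d_right = fused_distance(ultrasonic_m=2.5, vision_m=4.0)
print(steering_command(d_left, d_right))   # turns away from the nearer side
```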


A Region Search Algorithm and Improved Environment Map Building for Mobile Robot Navigation

  • Jin, Kwang-Sik;Jung, Suk-Yoon;Son, Jung-Su;Yoon, Tae-Sung
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2001.10a / pp.71.1-71 / 2001
  • In this paper, an improved method of environment map building and a region search algorithm for a mobile robot are presented. For the environment map building of a mobile robot, measurement data from ultrasonic sensors and a certainty grid representation are usually used. In this case, inaccuracies due to the uncertainty of the ultrasonic data are included in the map. To solve this problem, an environment map building method using a Bayesian model was proposed previously [5]. In this study, we present an improved method of probability map building that uses infrared sensors and a shift-division Gaussian probability distribution together with the existing Bayesian update method using ultrasonic sensors. Also, a region search algorithm for ...
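The Bayesian update underlying certainty-grid map building can be sketched for a single cell as below; the hit/miss likelihoods are assumed values, and the paper's shift-division Gaussian sensor model is not reproduced here.

```python
# Minimal sketch of a Bayesian occupancy update for one grid cell (the sensor
# model numbers are assumptions; the paper's shift-division Gaussian model is
# not reproduced).
def bayes_update(p_prior: float, p_hit_given_occ: float, p_hit_given_free: float,
                 hit: bool) -> float:
    # P(occ | z) via Bayes' rule, using the likelihoods of the observed reading z
    l_occ  = p_hit_given_occ  if hit else 1.0 - p_hit_given_occ
    l_free = p_hit_given_free if hit else 1.0 - p_hit_given_free
    odds = (p_prior / (1.0 - p_prior)) * (l_occ / l_free)
    return odds / (1.0 + odds)

p = 0.5                                   # uninformed prior for the cell
for z in (True, True, False, True):       # three ultrasonic "hits", one miss
    p = bayes_update(p, p_hit_given_occ=0.7, p_hit_given_free=0.2, hit=z)
print(round(p, 3))                        # certainty rises with repeated hits
```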


Improvement on the Image Processing for an Autonomous Mobile Robot with an Intelligent Control System

  • Kubik, Tomasz;Loukianov, Andrey A.
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2001.10a / pp.36.4-36 / 2001
  • A robust and reliable path recognition system is a necessary component for the autonomous navigation of a mobile robot, helping it determine its current position in its navigation map. This paper describes a computer vision path-recognition system using an on-board video camera as vision-based driving assistance for an autonomous mobile robot. A common problem for a visual system is that its reliability is often influenced by different lighting conditions. Here, two image processing methods for path detection were developed to reduce the effect of luminance: one is based on the RGB color model and features of the path, and the other is based on the HSV color model with the luminance component disregarded.
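The two thresholding routes can be sketched with OpenCV as below; the synthetic frame and the colour ranges are assumptions, not the paper's calibrated path features.

```python
# Minimal sketch of the two thresholding routes (the colour ranges are assumptions):
# one path mask built directly on the RGB channels, one built in HSV, where hue and
# saturation alone can be thresholded to reduce sensitivity to lighting.
import numpy as np
import cv2

frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[:, 60:100] = (40, 40, 200)            # synthetic reddish "path" stripe (BGR)

# route 1: threshold on the RGB (here BGR) channels and path colour features
mask_rgb = cv2.inRange(frame, (0, 0, 150), (90, 90, 255))

# route 2: convert to HSV and threshold hue/saturation only, tolerating luminance changes
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask_hsv = cv2.inRange(hsv, (0, 80, 0), (15, 255, 255))

print(mask_rgb.mean() / 255, mask_hsv.mean() / 255)  # fraction of pixels flagged as path
```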


Design of Convolution Neural Network (CNN) Based Medicine Classifier for Nursing Robots (간병 로봇을 위한 합성곱 신경망 (CNN) 기반 의약품 인식기 설계)

  • Kim, Hyun-Don;Kim, Dong Hyeon;Seo, Pil Won;Bae, Jongseok
    • IEMEK Journal of Embedded Systems and Applications / v.16 no.5 / pp.187-193 / 2021
  • Our final goal is to implement nursing robots that can recognize patients' faces and the medicines on their prescriptions. Such robots can help patients take medicine on time and prevent misuse so that they recover their health sooner. As a first step, we propose a medicine classifier with a low-computation network that can run on embedded PCs without a GPU, so that it can be applied to general-purpose nursing robots. We confirm that our proposed model, called MedicineNet, achieves 99.99% accuracy in classifying 15 kinds of medicines and background images. Moreover, MedicineNet is about 8 times faster in calculation time than EfficientNet-B0, which is well known for its high ImageNet classification performance and computational efficiency.
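For a sense of scale, the sketch below shows a small CPU-friendly CNN for 15 medicine classes plus background; it is not the paper's MedicineNet, and the layer sizes and 64x64 input are assumptions.

```python
# Minimal sketch (this is NOT the paper's MedicineNet; layer sizes and the 64x64
# input are assumptions) of a small CNN that classifies 15 medicines plus a
# background class and is light enough for a CPU-only embedded PC.
import torch
import torch.nn as nn

class TinyMedicineCNN(nn.Module):
    def __init__(self, n_classes: int = 16):          # 15 medicines + background
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = TinyMedicineCNN().eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 64, 64))          # one 64x64 RGB crop
print(logits.argmax(dim=1).item())                     # predicted class index
```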

Implementation of Wheelchair Robot Applying SLAM and Global Path Planning Methods Suitable for Indoor Autonomous Driving (실내 자율주행에 적합한 SLAM과 전역경로생성 방법을 적용한 휠체어로봇 구현)

  • Baek, Su-Jin;Kim, A-Hyeon;Kim, Jong-Wook
    • IEMEK Journal of Embedded Systems and Applications / v.16 no.6 / pp.293-297 / 2021
  • This paper presents how to create a 3D map and how to solve problems related to generating a global path for navigation. Map creation and localization were performed using the RTAB-Map package to build a 3D map of the environment. In addition, the problem of a global path not being generated when the target point lies within an obstacle region was solved using the asr_navfn package. The performance of the proposed system is validated through experiments with a wheelchair-type robot.
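The underlying idea of that fix (when the goal cell is occupied, move it to the nearest free cell so a global path can still be planned) can be shown standalone as below; the paper itself relies on the ROS asr_navfn package, and the toy grid and breadth-first search here are illustrative assumptions.

```python
# Standalone sketch of the idea only (the paper uses the ROS asr_navfn package;
# this grid and breadth-first search are illustrative assumptions): when the
# requested goal lies inside an obstacle, snap it to the nearest free cell.
from collections import deque

def snap_goal_to_free(grid, goal):
    # grid: 1 = obstacle, 0 = free; BFS outward from the goal until a free cell is found
    rows, cols = len(grid), len(grid[0])
    queue, seen = deque([goal]), {goal}
    while queue:
        r, c = queue.popleft()
        if grid[r][c] == 0:
            return (r, c)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return None                      # no free cell reachable

grid = [[0, 0, 0],
        [0, 1, 1],
        [0, 1, 1]]
print(snap_goal_to_free(grid, (2, 2)))   # (2, 2) is blocked; a nearby free cell is returned
```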

Failure Detection Method of Industrial Cartesian Coordinate Robots Based on a CNN Inference Window Using Ambient Sound (음향 데이터를 이용한 CNN 추론 윈도우 기반 산업용 직교 좌표 로봇의 고장 진단 기법)

  • Hyuntae Cho
    • IEMEK Journal of Embedded Systems and Applications / v.19 no.1 / pp.57-64 / 2024
  • In the industrial field, robots are used to increase productivity by replacing human labor in dangerous, difficult, and demanding tasks. However, the failure of an individual industrial robot in the overall production process may cause product defects or malfunctions, and can lead to serious disasters when manufacturing parts used in automobiles and aircraft. Although the demand for early diagnosis of industrial robot failures is steadily increasing, early detection still faces many limitations. This paper introduces methods for diagnosing robot failures using sound data and deep learning, and it analyzes, compares, and evaluates the performance of failure diagnosis with various deep learning techniques. Furthermore, to improve the performance of the deep-learning-based fault diagnosis system, we propose a method that increases the accuracy of fault diagnosis based on an inference window. When the inference window is adopted, the accuracy of failure diagnosis increases to up to 94%.
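An inference window can be pictured as a sliding majority vote over per-frame predictions, as in the sketch below; the window length and voting rule are assumptions and not necessarily the paper's exact scheme.

```python
# Minimal sketch of an inference window (window length and majority-vote rule are
# assumptions): per-frame CNN predictions on short sound clips are aggregated so
# isolated misclassifications do not trigger a false failure report.
from collections import Counter, deque

def windowed_decisions(frame_predictions, window=5):
    buf = deque(maxlen=window)
    for label in frame_predictions:        # e.g. "normal" / "fault" per audio frame
        buf.append(label)
        yield Counter(buf).most_common(1)[0][0]

raw = ["normal", "fault", "normal", "normal", "fault", "fault", "fault", "fault"]
print(list(windowed_decisions(raw)))
# isolated "fault" frames are smoothed out; a sustained fault still surfaces
```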

Design of HCI System of Museum Guide Robot Based on Visual Communication Skill

  • Qingqing Liang
    • Journal of Information Processing Systems / v.20 no.3 / pp.328-336 / 2024
  • Visual communication is widely used and enhanced in modern society, where there is an increasing demand for spirituality. Museum robots are one of many service robots that can replace humans to provide services such as display, interpretation and dialogue. For the improvement of museum guide robots, the paper proposes a human-robot interaction system based on visual communication skills. The system is based on a deep neural mesh structure and utilizes theoretical analysis of computer vision to introduce a Tiny+CBAM mesh structure in the gesture recognition component. This combines basic gestures and gesture states to design and evaluate gesture actions. The test results indicated that the improved Tiny+CBAM mesh structure could enhance the mean average precision value by 13.56% while maintaining a loss of less than 3 frames per second during static basic gesture recognition. After testing the system's dynamic gesture performance, it was found to be over 95% accurate for all items except double click. Additionally, it was 100% accurate for the action displayed on the current page.