• Title/Summary/Keyword: robot systems

Fast and Fine Control of a Visual Alignment Systems Based on the Misalignment Estimation Filter (정렬오차 추정 필터에 기반한 비전 정렬 시스템의 고속 정밀제어)

  • Jeong, Hae-Min;Hwang, Jae-Woong;Kwon, Sang-Joo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.12
    • /
    • pp.1233-1240
    • /
    • 2010
  • In the flat panel display and semiconductor industries, the visual alignment system is considered a core technology that determines the productivity of a manufacturing line. It consists of a vision system that extracts the centroids of alignment marks and a stage control system that compensates for the alignment error. In this paper, we develop a Kalman filter algorithm to estimate the alignment mark postures and propose a coarse-fine alignment control method that uses both the original fine images and reduced coarse ones in the visual feedback. The error compensation trajectory for the distributed joint servos of the alignment stage is generated from the inverse kinematic solution for the misalignment in task space. In constructing the estimation algorithm, the equation of motion for the alignment marks is first derived from the forward kinematics of the alignment stage; the measurements of the alignment mark centroids are then obtained from the reduced images by geometric template matching. As a result, the proposed Kalman-filter-based coarse-fine alignment control method enables a considerable reduction in alignment time.
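
The estimation step described in this abstract can be pictured with a small, generic sketch: a constant-velocity Kalman filter smoothing noisy alignment-mark centroid measurements. The state layout, sampling period, and noise covariances below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Minimal constant-velocity Kalman filter for a 2D alignment-mark centroid.
# State: [x, y, vx, vy]; measurement: noisy [x, y] from template matching.
dt = 0.01                                   # assumed visual sampling period [s]
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only the centroid is observed
Q = 1e-4 * np.eye(4)                        # assumed process noise
R = 1e-2 * np.eye(2)                        # assumed measurement noise

def kalman_step(x, P, z):
    """One predict/update cycle; z is the measured centroid [x, y]."""
    x = F @ x
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Example: filter a noisy, slowly drifting mark centroid.
rng = np.random.default_rng(0)
x_est, P_est = np.zeros(4), np.eye(4)
for k in range(100):
    true_centroid = np.array([10.0 + 0.5 * k * dt, 20.0])
    z = true_centroid + rng.normal(0, 0.1, size=2)
    x_est, P_est = kalman_step(x_est, P_est, z)
print("estimated centroid:", x_est[:2])
```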

An iterative learning and adaptive control scheme for a class of uncertain systems

  • Kuc, Tae-Yong;Lee, Jin-S.
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 1990.10b
    • /
    • pp.963-968
    • /
    • 1990
  • An iterative learning control scheme for tracking control of a class of uncertain nonlinear systems is presented. By introducing a model reference adaptive controller into the learning control structure, zero tracking error can be achieved for an unknown system even when the upper bound of the uncertainty in the system dynamics is not known a priori. The adaptive controller pulls the state of the system toward the state of the reference model via control gain adaptation at each iteration, while the learning controller attracts the model state to the desired one by synthesizing a suitable control input over the iterations. The controller role transition from the adaptive controller to the learning controller takes place gradually as learning proceeds. Another feature of this control scheme is that robustness to bounded input disturbances is guaranteed by the linear controller in the feedback loop of the learning control scheme. In addition, since the proposed controller does not require any knowledge of the dynamic parameters of the system, it is flexible in uncertain environments, and its computational simplicity makes the learning scheme more feasible. Computer simulation results for the dynamic control of a two-axis robot manipulator show good performance of the scheme in relatively high-speed trajectory tracking.
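
To make the iteration-domain idea concrete, the sketch below runs a plain P-type iterative learning update on a toy first-order plant. The plant model, learning gain, and update law are assumptions for illustration and do not reproduce the adaptive/learning structure of the paper.

```python
import numpy as np

# Toy sketch of a P-type iterative learning control (ILC) update on a
# first-order plant. The plant, gain, and update law are illustrative
# assumptions, not the adaptive/learning structure of the cited paper.
T, dt = 200, 0.01
t = np.arange(T) * dt
y_des = np.sin(2 * np.pi * t)                 # desired trajectory

def simulate(u):
    """Discrete first-order plant y[k+1] = (1 - a*dt)*y[k] + b*dt*u[k]."""
    a, b = 2.0, 1.5
    y = np.zeros(T)
    for k in range(T - 1):
        y[k + 1] = (1 - a * dt) * y[k] + b * dt * u[k]
    return y

u = np.zeros(T)                               # feedforward input learned over trials
L = 40.0                                      # learning gain (assumed, |1 - L*b*dt| < 1)
for trial in range(15):
    e = y_des - simulate(u)
    # P-type update: correct u[k] with the error it caused one step later.
    u[:-1] += L * e[1:]
    print(f"trial {trial:2d}  max |e| = {np.max(np.abs(e)):.4f}")
```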

Development of Precise Localization System for Autonomous Mobile Robots using Multiple Ultrasonic Transmitters and Receivers in Indoor Environments (다수의 초음파 송수신기를 이용한 이동 로봇의 정밀 실내 위치인식 시스템의 개발)

  • Kim, Yong-Hwi;Song, Ui-Kyu;Kim, Byung-Kook
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.17 no.4
    • /
    • pp.353-361
    • /
    • 2011
  • A precise embedded ultrasonic localization system is developed for autonomous mobile robots in indoor environments, which is essential for the autonomous navigation of mobile robots performing various tasks. Although ultrasonic sensors are more cost-effective than other sensors such as an LRF (Laser Range Finder) or vision, they suffer from inaccuracy and directional ambiguity. First, we apply a matched filter to measure the distance precisely. To resolve the computational complexity of the matched filter on embedded systems, we propose a new matched filter algorithm that speeds up the computation in three respects. Second, we propose an accurate ultrasonic localization system that consists of three ultrasonic receivers on the mobile robot and two or more transmitters on the ceiling. Last, we add an extended Kalman filter to estimate position and orientation. Various simulations and experimental results show the effectiveness of the proposed system.
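
The matched-filter distance measurement can be illustrated with a generic cross-correlation sketch: correlate the received signal with the known transmitted burst and convert the peak lag into a distance. The burst shape, sampling rate, and noise level below are assumptions; the paper's fast embedded variant is not reproduced.

```python
import numpy as np

# Illustrative matched-filter time-of-flight estimate for one ultrasonic
# receiver. All parameters are assumptions for the example.
fs = 400_000                       # sampling rate [Hz]
f0 = 40_000                        # ultrasonic carrier [Hz]
c = 343.0                          # speed of sound [m/s]

n_burst = int(0.5e-3 * fs)         # 0.5 ms transmitted burst
burst = np.sin(2 * np.pi * f0 * np.arange(n_burst) / fs)

# Synthesize a received signal: the burst delayed by the true time of flight.
true_dist = 2.37                   # metres
delay = int(true_dist / c * fs)
rx = np.zeros(8000)
rx[delay:delay + n_burst] += 0.3 * burst
rx += np.random.default_rng(1).normal(0, 0.05, rx.size)   # measurement noise

# Matched filter: correlate the received signal with the known burst and
# take the lag with maximum response as the arrival time.
corr = np.correlate(rx, burst, mode="valid")
tof = np.argmax(np.abs(corr)) / fs
print(f"estimated distance: {tof * c:.3f} m (true {true_dist} m)")
```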

Distributed Moving Algorithm of Swarm Robots to Enclose an Invader (침입자 포위를 위한 군집 로봇의 분산 이동 알고리즘)

  • Lee, Hea-Jae;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.2
    • /
    • pp.224-229
    • /
    • 2009
  • When swarm robots exist in the same workspace, we first have to decide which robots will accomplish which tasks, and there has been a great deal of research on how to control robots cooperatively. The interest in using swarm robot systems is due to their unique characteristics, such as increased adaptability and flexibility in mission execution. When an invader is discovered, the swarm robots have to surround it along a variety of paths, anticipating the invader's movement, in order to enclose it effectively. In this paper, we propose an effective enclosing strategy and a distributed moving algorithm for swarm robots on a two-dimensional map.
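
As a rough illustration of the enclosing behavior (not the paper's algorithm), the sketch below has each robot steer toward an evenly spaced slot on a circle around the invader, using only a bearing-based slot assignment. The radius, gain, and assignment rule are assumptions.

```python
import numpy as np

# Minimal enclosure sketch: each robot steers toward an evenly spaced slot
# on a circle around the invader. Slot assignment, radius, and gains are
# assumptions for illustration, not the paper's distributed algorithm.
rng = np.random.default_rng(2)
n_robots, radius, gain, steps = 6, 2.0, 0.2, 200
robots = rng.uniform(-10, 10, size=(n_robots, 2))   # initial positions
invader = np.array([1.0, -3.0])

for _ in range(steps):
    # Sort robots by bearing to the invader and give each the next slot angle.
    bearings = np.arctan2(*(robots - invader).T[::-1])
    order = np.argsort(bearings)
    slot_angles = 2 * np.pi * np.arange(n_robots) / n_robots + bearings[order[0]]
    targets = invader + radius * np.column_stack((np.cos(slot_angles),
                                                  np.sin(slot_angles)))
    # Each robot moves a step toward its own slot (local rule, no central plan).
    robots[order] += gain * (targets - robots[order])

dists = np.linalg.norm(robots - invader, axis=1)
print("final distances to invader:", np.round(dists, 2))
```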

The Vision-based Autonomous Guided Vehicle Using a Virtual Photo-Sensor Array (VPSA) for a Port Automation (가상 포토센서 배열을 탑재한 항만 자동화 자율 주행 차량)

  • Kim, Soo-Yong;Park, Young-Su;Kim, Sang-Woo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.2
    • /
    • pp.164-171
    • /
    • 2010
  • We have studied the port-automation system, which is demanded by the steep increase in the cost and complexity of freight processing. This paper introduces a new algorithm for navigating and controlling an Autonomous Guided Vehicle (AGV). A camera inherently has optical distortion and is sensitive to external light, weather, and shadows, but it is very cheap and flexible for building the automation system of a port. We therefore applied a CCD camera to the AGV for detecting and tracking the lane. In order to make the error signal stable and exact, this paper proposes a new concept and algorithm in which the error is generated by a Virtual Photo-Sensor Array (VPSA). VPSAs are implemented entirely in software and are very easy to use in various autonomous systems. Because the computational load is light, the AGV utilizes the full performance of the CCD camera and the CPU can take on multiple tasks. We tested the proposed algorithm on a mobile robot and confirmed stable and exact lane-tracking performance.
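
A minimal sketch of the VPSA idea, under assumed image size, sensor layout, and weighting: sample a row of a binary lane image at a few fixed "sensor" columns and turn the activated columns into a signed lateral error, just as a hardware photo-sensor array would.

```python
import numpy as np

# Sketch of a virtual photo-sensor array (VPSA): read a row of a binary lane
# image at fixed "sensor" columns and derive a lateral error. Image size,
# sensor layout, and error weighting are assumptions for illustration.
def vpsa_error(lane_mask, row, n_sensors=7):
    """lane_mask: 2-D bool array (True where the lane line is seen)."""
    h, w = lane_mask.shape
    cols = np.linspace(0, w - 1, n_sensors).astype(int)    # sensor positions
    readings = lane_mask[row, cols].astype(float)          # virtual sensor outputs
    if readings.sum() == 0:
        return None                                        # lane lost
    # Weight each activated sensor by its signed offset from the image centre.
    offsets = cols - (w - 1) / 2.0
    return float((readings * offsets).sum() / readings.sum())

# Example: a synthetic 240x320 image with a vertical lane line near column 212.
mask = np.zeros((240, 320), dtype=bool)
mask[:, 210:216] = True
err = vpsa_error(mask, row=200)
print("lateral error (pixels, + means lane is right of centre):", err)
```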

Development of Vision based Emotion Recognition Robot (비전 기반의 감정인식 로봇 개발)

  • Park, Sang-Sung;Kim, Jung-Nyun;An, Dong-Kyu;Kim, Jae-Yeon;Jang, Dong-Sik
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2005.07b
    • /
    • pp.670-672
    • /
    • 2005
  • This paper presents a vision-based emotion recognition robot. We propose a face detection and emotion recognition algorithm that uses skin color and the geometric information of the face, and we describe the robot system we developed. For face detection, the RGB color space is converted to the CIELab color space to extract skin-color candidate regions, and a face is detected through the geometric relationships of facial features using a Face Filter. The positions of the eyes, nose, and mouth are identified from geometric features and used as the basic data for expression recognition. Emotion recognition windows are applied to the eyebrow and mouth regions, and feature values for emotion recognition are extracted from changes in pixel values and region size within the windows. The extracted values are compared with samples obtained in advance through experiments to determine the emotion; the recognized emotion is transmitted to the robot via serial communication, and the robot expresses the corresponding facial expression through motors mounted on its face.
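
A rough sketch of the skin-color front end described above, using OpenCV's Lab conversion; the threshold ranges and blob-size cutoff are assumptions, and the geometric Face Filter and emotion windows are not reproduced here.

```python
import cv2
import numpy as np

# Rough skin-colour candidate extraction in the CIELab space. The threshold
# ranges are assumptions (OpenCV stores 8-bit Lab with a and b offset by 128);
# the geometric Face Filter step of the paper is not reproduced.
def skin_candidates(bgr_image):
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    # Loose skin-tone bounds on (L, a, b); tune for the actual camera.
    lower = np.array([40, 135, 135], dtype=np.uint8)
    upper = np.array([250, 175, 190], dtype=np.uint8)
    mask = cv2.inRange(lab, lower, upper)
    # Clean up speckle and keep reasonably sized blobs as face candidates.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]

# Usage (hypothetical file name):
# frame = cv2.imread("frame.jpg")
# print(skin_candidates(frame))
```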

Vision-based Ground Test for Active Debris Removal

  • Lim, Seong-Min;Kim, Hae-Dong;Seong, Jae-Dong
    • Journal of Astronomy and Space Sciences
    • /
    • v.30 no.4
    • /
    • pp.279-290
    • /
    • 2013
  • Due to continuous space development by mankind, the number of space objects, including space debris, in orbits around the Earth has increased, and accordingly, difficulties for space development and activities are expected in the near future. In this study, among the stages of space debris removal, we describe the implementation of a vision-based technique for approaching space debris from a far-range rendezvous state to a proximity state, along with the ground test results. For vision-based object tracking, the CAM-shift algorithm, which is fast and robust, was combined with a Kalman filter. A stereo camera was used to measure the distance to the tracked object. A sun simulator was used to construct a low-cost space environment simulation test bed, and a two-dimensional mobile robot served as the approaching platform. The tracking status was examined while changing the position of the sun simulator, and the results indicated that CAM-shift achieved a tracking rate of about 87% and the relative distance could be measured down to 0.9 m. In addition, considerations for future space environment simulation tests are proposed.
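
The tracking pipeline can be sketched with OpenCV's CamShift combined with a Kalman filter that smooths the window centre. The video file name, initial window, histogram channel, and noise settings below are assumptions for illustration, not the paper's configuration.

```python
import cv2
import numpy as np

# Sketch of CAM-shift tracking with a Kalman filter smoothing the window
# centre. Initial window, histogram setup, and noise levels are assumed.
cap = cv2.VideoCapture("debris_test.avi")          # hypothetical test video
ok, frame = cap.read()
x, y, w, h = 200, 150, 60, 60                      # assumed initial target window
hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

kf = cv2.KalmanFilter(4, 2)                        # state [cx, cy, vx, vy]
kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)
kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)

track_window = (x, y, w, h)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    rot_rect, track_window = cv2.CamShift(back, track_window, term)
    cx, cy = rot_rect[0]                           # CAM-shift centre measurement
    kf.predict()
    smoothed = kf.correct(np.array([[cx], [cy]], np.float32))
    print("smoothed centre:", smoothed[:2].ravel())
cap.release()
```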

Photon-counting linear discriminant analysis for face recognition at a distance

  • Yeom, Seok-Won
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.12 no.3
    • /
    • pp.250-255
    • /
    • 2012
  • Face recognition has wide applications in security and surveillance systems as well as in robot vision and machine interfaces. Conventional challenges in face recognition include pose, illumination, and expression, and face recognition at a distance involves additional challenges because long-distance images are often degraded by poor focusing and motion blur. This study investigates the effectiveness of applying photon-counting linear discriminant analysis (Pc-LDA) to face recognition in harsh environments. A related technique, Fisher linear discriminant analysis, is known to be optimal, but it often suffers from the singularity problem because the number of available training images is generally much smaller than the number of pixels. Pc-LDA, on the other hand, realizes the Fisher criterion in high-dimensional space without any dimensionality reduction and therefore provides solutions that are more invariant to image distortion and degradation. Two decision rules are employed: one based on Euclidean distance and the other on normalized correlation. In the experiments, the asymptotic equivalence of the photon-counting method to the Fisher method is verified with simulated data. Degraded facial images are used to demonstrate the robustness of the photon-counting classifier in harsh environments. Four types of blurring point spread functions are applied to the test images in order to simulate long-distance acquisition. The results are compared with those of the conventional Eigenface and Fisherface methods and indicate that Pc-LDA outperforms these conventional facial recognition techniques.
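
The two decision rules mentioned in the abstract can be illustrated on already-projected feature vectors: nearest class mean under Euclidean distance versus normalized correlation. The feature vectors below are random placeholders; the photon-counting LDA projection itself is not reproduced.

```python
import numpy as np

# Nearest-class-mean classification under (i) Euclidean distance and
# (ii) normalized correlation, applied to already-projected features.
def classify(x, class_means, rule="euclidean"):
    if rule == "euclidean":
        scores = [-np.linalg.norm(x - m) for m in class_means]
    elif rule == "correlation":
        scores = [np.dot(x, m) / (np.linalg.norm(x) * np.linalg.norm(m))
                  for m in class_means]
    else:
        raise ValueError(rule)
    return int(np.argmax(scores))            # index of the best-matching class

rng = np.random.default_rng(3)
means = [rng.normal(0, 1, 16) for _ in range(4)]     # 4 classes, 16-D features
probe = means[2] + rng.normal(0, 0.2, 16)            # noisy sample of class 2
print(classify(probe, means, "euclidean"), classify(probe, means, "correlation"))
```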

Egocentric Vision for Human Activity Recognition Using Deep Learning

  • Malika Douache;Badra Nawal Benmoussat
    • Journal of Information Processing Systems
    • /
    • v.19 no.6
    • /
    • pp.730-744
    • /
    • 2023
  • The topic of this paper is the recognition of human activities using egocentric vision, in particular video captured by body-worn cameras, which could be helpful for video surveillance, automatic search, and video indexing. It could also help assist elderly and frail persons and thereby improve their lives. Human activity recognition remains problematic because of the large variations in how actions are executed, especially when recognition is performed through an external device, similar to a robot, acting as a personal assistant. The inferred information is used both online, to assist the person, and offline, to support the personal assistant. With the proposed method being robust against the various sources of variability in action execution, the main purpose of this paper is to present an efficient and simple recognition method that uses egocentric camera data only, based on a convolutional neural network and deep learning. In terms of accuracy, simulation results outperform the current state of the art by a significant margin of 61% when using egocentric camera data only, by more than 44% when using egocentric camera data together with data from several stationary cameras, and by more than 12% when using both inertial measurement unit (IMU) and egocentric camera data.
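
As a minimal sketch of a frame-level convolutional classifier in the spirit of this approach (not the paper's network), the PyTorch snippet below assumes 3x112x112 input frames and eight activity classes.

```python
import torch
import torch.nn as nn

# Minimal frame-level CNN activity classifier. Architecture, input size,
# and class count are assumptions for illustration only.
class ActivityCNN(nn.Module):
    def __init__(self, n_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                      # x: (batch, 3, 112, 112) frames
        return self.classifier(self.features(x).flatten(1))

model = ActivityCNN()
frames = torch.randn(4, 3, 112, 112)           # a dummy batch of egocentric frames
logits = model(frames)
print(logits.shape)                            # torch.Size([4, 8])
```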

A Policy-Based Meta-Planning for General Task Management for Multi-Domain Services (다중 도메인 서비스를 위한 정책 모델 주도 메타-플래닝 기반 범용적 작업관리)

  • Choi, Byunggi;Yu, Insik;Lee, Jaeho
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.8 no.12
    • /
    • pp.499-506
    • /
    • 2019
  • An intelligent robot should decide its behavior according to dynamic changes in the environment and the user's requirements, evaluating its options to choose the best one for the current situation. Many intelligent robot systems that use the Procedural Reasoning System (PRS) accomplish such task management by defining priority functions in the task model and evaluating the priority functions of the applicable tasks in the current situation. The priority functions, however, are defined locally inside each plan, which is a limitation for multi-domain services because global contexts for overall prioritization are hard to express in local priority functions. Furthermore, since the prioritization functions are not defined as an explicit module, their reuse or extension for general contexts is limited. To remove these limitations, we propose a policy-based meta-planning approach to general task management for multi-domain services, which makes it possible to explicitly define the utility of a task in the meta-planning process and thus to evaluate task priorities for a general context by combining modular priority functions. The ontological specification of the model also enhances the scalability of the policy model. In the experiments, the adaptive behavior of a robot according to the policy model is confirmed by observing that appropriate tasks are selected in dynamic service environments.
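
The idea of modular, explicitly defined utility functions combined by a policy can be sketched as follows; the task names, utility terms, and weights are illustrative assumptions, not the paper's ontology or PRS integration.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Sketch of modular, reusable utility terms combined by a policy to rank
# applicable tasks. Names, terms, and weights are illustrative assumptions.
@dataclass
class Task:
    name: str
    attributes: Dict[str, float] = field(default_factory=dict)

# Modular utility terms: each scores one aspect of a task.
def urgency(task: Task, ctx: Dict) -> float:
    return task.attributes.get("urgency", 0.0)

def battery_cost(task: Task, ctx: Dict) -> float:
    # Penalize energy-hungry tasks more as the battery drains.
    return (1.0 - ctx["battery"]) * task.attributes.get("energy", 0.0)

def user_preference(task: Task, ctx: Dict) -> float:
    return 1.0 if task.name == ctx.get("requested") else 0.0

# A policy is a weighted combination of utility terms.
Policy = List[tuple]
policy: Policy = [(urgency, 0.5), (battery_cost, -0.3), (user_preference, 0.4)]

def select_task(tasks: List[Task], ctx: Dict, policy: Policy) -> Task:
    def utility(t: Task) -> float:
        return sum(w * f(t, ctx) for f, w in policy)
    return max(tasks, key=utility)

tasks = [Task("deliver_item", {"urgency": 0.9, "energy": 0.7}),
         Task("patrol", {"urgency": 0.3, "energy": 0.4}),
         Task("greet_visitor", {"urgency": 0.5, "energy": 0.1})]
ctx = {"battery": 0.35, "requested": "greet_visitor"}
print(select_task(tasks, ctx, policy).name)
```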