• Title/Summary/Keyword: mobile vision system


DIND Data Fusion with Covariance Intersection in Intelligent Space with Networked Sensors

  • Jin, Tae-Seok;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.7 no.1
    • /
    • pp.41-48
    • /
    • 2007
  • Recent advances in networked sensor technology, mobile robotics, and artificial intelligence can be combined to develop autonomous, distributed monitoring systems. This study is a preliminary step toward a multi-purpose "Intelligent Space" (ISpace) platform on which advanced technologies can be easily implemented to provide smart services to humans. We describe the ISpace system architecture designed and implemented in this study and give only a short review of existing techniques, since several recent books and review papers cover them thoroughly. Instead, we focus on the main results concerning DIND data fusion with covariance intersection (CI) in Intelligent Space, and conclude by discussing possible future extensions of ISpace. We first present the general principles of the navigation and guidance architecture, and then detail the functions for tracking multiple objects, detecting humans, and assessing motion, together with results from simulation runs.
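Covariance Intersection fuses two estimates whose cross-correlation is unknown, which is exactly the situation when networked DIND sensors observe the same target. A minimal NumPy sketch of the standard CI update (the estimate names and values below are illustrative, not from the paper):

```python
import numpy as np

def covariance_intersection(xa, Pa, xb, Pb, omega=0.5):
    """Fuse two estimates with unknown cross-correlation via
    Covariance Intersection: P^-1 = w*Pa^-1 + (1-w)*Pb^-1."""
    Pa_inv = np.linalg.inv(Pa)
    Pb_inv = np.linalg.inv(Pb)
    P = np.linalg.inv(omega * Pa_inv + (1 - omega) * Pb_inv)
    x = P @ (omega * Pa_inv @ xa + (1 - omega) * Pb_inv @ xb)
    return x, P

# Two sensor nodes' estimates of the same target position (hypothetical)
xa, Pa = np.array([1.0, 2.0]), np.diag([0.5, 0.5])
xb, Pb = np.array([1.2, 1.8]), np.diag([1.0, 0.25])
x, P = covariance_intersection(xa, Pa, xb, Pb, omega=0.5)
```

In practice the weight `omega` is chosen by minimizing the trace or determinant of the fused covariance rather than fixed at 0.5.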

Lightweight CNN based Meter Digit Recognition

  • Sharma, Akshay Kumar;Kim, Kyung Ki
    • Journal of Sensor Science and Technology
    • /
    • v.30 no.1
    • /
    • pp.15-19
    • /
    • 2021
  • Image processing is one of the major techniques used in computer vision, and researchers now apply machine learning and deep learning to this task. In recent years, digit-recognition tasks, i.e., automatic reading of electric or water meters, have been studied several times. However, two major issues arise in previous studies: first, the deep learning techniques used involve a large number of parameters, which increases computational cost and power consumption; and second, recent studies are limited to detecting digits rather than storing them or providing them to a database or mobile application. This paper proposes a system that detects the digits of a meter reading using a lightweight deep neural network (DNN) for low power consumption and sends those digits to an Android mobile application in real time for storage. The proposed lightweight DNN is computationally inexpensive and exhibits accuracy similar to that of conventional DNNs.
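The abstract's point about parameter count can be made concrete. One common way lightweight CNNs cut parameters is replacing a standard convolution with a depthwise-separable one; the sketch below (generic arithmetic, not the paper's actual architecture) compares the two for a hypothetical 3×3 layer with 32 input and 64 output channels:

```python
def conv_params(cin, cout, k):
    # standard convolution: k*k*cin weights per output channel, plus biases
    return k * k * cin * cout + cout

def depthwise_separable_params(cin, cout, k):
    # depthwise k*k filter per input channel, then a 1x1 pointwise conv
    return (k * k * cin + cin) + (cin * cout + cout)

std = conv_params(32, 64, 3)                   # 18496 parameters
lite = depthwise_separable_params(32, 64, 3)   # 2432 parameters
print(std, lite, round(std / lite, 1))
```

For this layer the separable variant needs roughly 7.6× fewer parameters, which is the kind of saving that makes on-device, low-power inference feasible.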

Remote Controlled Robot System using Real-Time Operating System (실시간 운영체제를 탑재한 원격 제어 로봇 시스템)

  • Lee, Tae-Hee;Cho, Sang
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.10 no.8
    • /
    • pp.689-695
    • /
    • 2004
  • This paper presents a robot system that combines a computer network with an autonomous mobile robot running a real-time operating system (RTOS). We propose a wireless communication protocol and implement it on the robot's RTOS. The robot's main controller runs the control program as a task in the real-time operating system, and peripheral devices are driven by hardware-dependent device-driver functions. Because the client and server programs were implemented with the Java SDK and Java JMF to support multiple platforms, it is easy to analyze the programs, maintain the system, and correct errors. Over the Internet, an end-user can control the robot in real time with a view of the remote site, and the robot moves while avoiding obstacles on its own and following the server's commands received from the end-user at the local client.

Position Improvement of a Mobile Robot by Real Time Tracking of Multiple Moving Objects (실시간 다중이동물체 추적에 의한 이동로봇의 위치개선)

  • Jin, Tae-Seok;Lee, Min-Jung;Tack, Han-Ho;Lee, In-Yong;Lee, Joon-Tark
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.18 no.2
    • /
    • pp.187-192
    • /
    • 2008
  • The Intelligent Space (ISpace) provides challenging research fields for surveillance, human-computer interfacing, networked camera conferencing, industrial monitoring, and service and training applications. ISpace is a space in which many intelligent devices, such as computers and sensors, are distributed. For these devices to cooperate, it is very important that the system knows the locations of objects in the environment in order to offer useful services. To achieve these goals, we present a method for representing, tracking, and following humans by fusing distributed multiple vision systems in ISpace, with application to pedestrian tracking in a crowd. This paper describes appearance-based tracking of unknown objects with the distributed vision system in Intelligent Space. First, we discuss how object color information is obtained and how the color-appearance-based model is constructed from these data. Then, we discuss the global color model built from the local color information. The learning process within the global model and the experimental results are also presented.
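The paper does not give implementation details here, but a common way to realize a "global color model from local color information" is to build a normalized color histogram per camera and fuse them, then match candidates by histogram intersection. A minimal sketch under that assumption (bin count and averaging scheme are illustrative):

```python
import numpy as np

def color_hist(hue_values, bins=16):
    """Normalized hue histogram for one camera's view of the object."""
    h, _ = np.histogram(hue_values, bins=bins, range=(0.0, 1.0))
    return h / max(h.sum(), 1)

def global_model(local_hists):
    # fuse per-camera (local) histograms into one global appearance model
    return np.mean(local_hists, axis=0)

def intersection(h1, h2):
    # histogram intersection similarity in [0, 1]; 1 means identical
    return float(np.minimum(h1, h2).sum())

# Hypothetical: two cameras see the same red-ish object
cam1 = color_hist(np.array([0.02, 0.03, 0.05, 0.04]))
cam2 = color_hist(np.array([0.03, 0.04, 0.06, 0.05]))
model = global_model([cam1, cam2])
```

A new detection is then accepted as the tracked object when `intersection(model, candidate)` exceeds a threshold.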

Survey on Visual Navigation Technology for Unmanned Systems (무인 시스템의 자율 주행을 위한 영상기반 항법기술 동향)

  • Kim, Hyoun-Jin;Seo, Hoseong;Kim, Pyojin;Lee, Chung-Keun
    • Journal of Advanced Navigation Technology
    • /
    • v.19 no.2
    • /
    • pp.133-139
    • /
    • 2015
  • This paper surveys vision-based autonomous navigation technologies for unmanned systems. The main branches of visual navigation are visual servoing, visual odometry, and visual simultaneous localization and mapping (SLAM). Visual servoing provides a velocity input that guides a mobile system to a desired pose; this velocity is computed from the feature differences between the desired image and the acquired image. Visual odometry estimates the relative pose between consecutive image frames, which can improve accuracy compared with existing dead-reckoning methods. Visual SLAM aims to construct a map of an unknown environment while simultaneously determining the mobile system's location within it, which is essential for operating unmanned systems in unknown environments. Trends in visual navigation are surveyed by examining international research on these technologies.
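The visual-odometry idea of chaining frame-to-frame relative poses into a trajectory can be shown with planar (SE(2)) pose composition; the increments below stand in for what a real system would estimate from image features:

```python
import math

def compose(pose, delta):
    """Compose an SE(2) pose (x, y, theta) with a frame-to-frame
    increment (dx, dy, dtheta) expressed in the robot frame."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

pose = (0.0, 0.0, 0.0)
# four identical increments: drive 1 m, turn 90 degrees -> a closed square
for delta in [(1.0, 0.0, math.pi / 2)] * 4:
    pose = compose(pose, delta)
```

Because each increment carries estimation error, errors accumulate along the chain; this drift is what loop closure in visual SLAM is designed to correct.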

Non-contact mobile inspection system for tunnels: a review (터널의 비접촉 이동식 상태점검 장비: 리뷰)

  • Chulhee Lee;Donggyou Kim
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.25 no.3
    • /
    • pp.245-259
    • /
    • 2023
  • The purpose of this paper is to examine the most recent tunnel scanning systems to obtain insights for developing a non-contact mobile inspection system. Tunnel scanning systems are mostly developed around two main technologies: laser scanning and image scanning. Laser scanning accurately recreates the geometric characteristics of tunnel linings from point clouds, whereas image scanning employs computer vision to readily identify damage such as fine cracks and leaks on the tunnel lining surface. The analysis suggests that image scanning is more suitable for detecting damage on tunnel linings. A camera-based tunnel scanning system under development should include components such as lighting, data storage, power supply, and an image-capturing controller synchronized with vehicle speed.

Tele-operation of a Mobile Robot Using Force Reflection Joystick with Single Hall Sensor (단일 홀센서 힘반영 조이스틱을 이용한 모바일 로봇 원격제어)

  • Lee, Jang-Myung;Jeon, Chan-Sung;Cho, Seung-Keun
    • The Journal of Korea Robotics Society
    • /
    • v.1 no.1
    • /
    • pp.17-24
    • /
    • 2006
  • Though the final goal of mobile robot navigation is full autonomy, an operator's intelligent and skillful decisions are necessary when there are many scattered obstacles. Even camera-based tele-operation of a mobile robot, which is very popular, has several limitations: for example, shadowed and curved areas cannot be viewed with a narrow view-angle camera, especially in bad weather such as snowy or rainy days. Therefore, other sensory information is necessary for reliable tele-operation. In this paper, sixteen ultrasonic sensors are attached around a mobile robot in a ring pattern to measure distances to obstacles. A collision vector is introduced as a new tool for obstacle avoidance, defined as the normal vector from an obstacle to the mobile robot. Based on this collision vector, a virtual reflection force is generated to avoid the obstacles, and this force is transferred to the operator holding the joystick that controls the mobile robot. Relying on the reflection force, the operator can control the mobile robot more smoothly and safely. For this bi-directional tele-operation, a master joystick system using a single Hall sensor was designed to eliminate the nonlinear sections that are common in conventional joysticks built with two motors and potentiometers. Finally, the efficiency of the force reflection joystick is verified by comparing two vision-based tele-operation experiments, with and without force reflection.
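The collision-vector idea can be sketched as a sum of per-sensor repulsive vectors over the sixteen-sensor ring; the gain, range cutoff, and linear magnitude law below are assumptions for illustration, not the paper's exact force model:

```python
import math

def reflection_force(ranges, d_max=1.0, k=1.0):
    """Sum per-sensor repulsive vectors pointing from each obstacle
    toward the robot; magnitude grows as the obstacle gets closer.
    `ranges[i]` is the distance measured by sensor i on the ring."""
    fx = fy = 0.0
    n = len(ranges)
    for i, d in enumerate(ranges):
        if d >= d_max:
            continue                    # obstacle too far to matter
        ang = 2 * math.pi * i / n       # sensor bearing around the ring
        mag = k * (d_max - d) / d_max   # stronger when closer
        fx -= mag * math.cos(ang)       # push away from the obstacle
        fy -= mag * math.sin(ang)
    return fx, fy

# Hypothetical reading: obstacle 0.2 m dead ahead, nothing elsewhere
fx, fy = reflection_force([0.2] + [2.0] * 15)
```

The resulting force points straight backward (negative x), which is what the operator would feel through the joystick as resistance to driving forward.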


A study on the stereo vision system for controlling the mobile robot tele-operation (이동로봇의 원격조작을 위한 스테레오 비젼에 관한 연구)

  • Jung, Ki-Su;Ro, Young-Shick;Kang, Hui-Jun;Seo, Young-Su;Yun, Seong-Jun
    • Proceedings of the KIEE Conference
    • /
    • 2007.10a
    • /
    • pp.321-322
    • /
    • 2007
  • This paper presents a study on building a remote control system over a network: an independent wireless network is constructed using a wireless LAN and an access point (AP), and laser sensor data and images of the mobile robot's surroundings are transmitted. Using a stereo camera and a head-mounted display, the operator can view stereoscopic images of the remote site while operating the robot, and a method is proposed to control the mobile robot's camera through head-motion tracking, without any separate manual operation.


Design and Implementation of a Low-Code/No-Code System

  • Hyun, Chang Young
    • International journal of advanced smart convergence
    • /
    • v.8 no.4
    • /
    • pp.188-193
    • /
    • 2019
  • This paper describes an environment-based Low-Code/No-Code execution platform and execution method that combines the advantages of hybrid and native apps. It supports iPhone and Android phones simultaneously, supports various templates, and avoids developer-oriented development methods: apps are produced through a coding-free process, and the produced apps play the role of a Java virtual machine (VM). The Low-Code/No-Code (LCNC) development platform is a visual integrated development environment that allows non-technical developers to drag and drop application components to build mobile or web applications. It provides functions to manage dependencies, which are packaged into small modules such as widgets and loaded dynamically when needed, to apply the model-view-controller (MVC) pattern, and to handle the document object model (DOM). In the Low-Code/No-Code system, a widget calls the AppOS API provided by the UCMS platform to deliver requests to AppOS. The AppOS API provides authentication/authorization, online-to-offline (O2O), commerce, messaging, social publishing, and vision functionality.

Automatic Extraction of Lean Tissue for Pork Grading

  • Cho, Sung-Ho;Huan, Le Ngoc;Choi, Sun;Kim, Tae-Jung;Shin, Wu-Hyun;Hwang, Heon
    • Journal of Biosystems Engineering
    • /
    • v.39 no.3
    • /
    • pp.174-183
    • /
    • 2014
  • Purpose: A robust, efficient auto-grading computer vision system for meat carcasses is in high demand by researchers worldwide. In this paper, we discuss our study, in which we developed a system to speed up line processing and provide reliable results for pork grading, comparing the results of our algorithms with human visual subjectivity measurements. Methods: We differentiated fat and lean tissue using an entropic correlation algorithm. We also developed a robust segmentation algorithm of our own design that successfully segmented several pork-cut samples; this algorithm helps eliminate the current issues associated with auto-thresholding. Results: In this study, we carefully considered the key step of automatically extracting lean tissue. We introduced our proposed scheme and applied it to over 200 pork-cut samples. The accuracy and computation time were acceptable, showing excellent potential for use in online commercial systems. Conclusions: This paper summarizes the main results of recent application studies, in which human experts modified and smoothed the lean area of commercial fresh pork-cut sections for an auto-grading process. The developed algorithms were implemented in a prototype mobile processing unit that can be deployed at the pork-processing site.