• Title/Summary/Keyword: Marker Information Recognition

Finger-Gesture Recognition Using Concentric-Circle Tracing Algorithm (동심원 추적 알고리즘을 사용한 손가락 동작 인식)

  • Hwang, Dong-Hyun;Jang, Kyung-Sik
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.12 / pp.2956-2962 / 2015
  • In this paper, we propose a novel algorithm, the Concentric-Circle Tracing algorithm, which recognizes finger shape and counts the number of extended fingers using a low-cost web camera. Using an inexpensive web camera improves the algorithm's accessibility, and omitting additional markers or sensors improves user comfort. Beyond counting fingers, the algorithm efficiently extracts shape information indicating whether each finger is straight or folded. Experimental results show that finger gestures are recognized with an average accuracy of 95.48%, confirming that hand gestures are a useful method for HCI input and remote-control commands.
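
A rough illustration of the concentric-circle idea summarized above (a sketch under assumed inputs, not the authors' exact algorithm): fingers can be counted by sampling circles of increasing radius around the palm centre of a binary hand mask and counting the background-to-hand transitions along each circle. The mask format, radii, and sampling resolution below are assumptions.

```python
import cv2
import numpy as np

def count_fingers(hand_mask, num_circles=8):
    """Count fingers on a binary hand mask (0/255) by tracing concentric circles."""
    # Palm centre and radius: the point farthest from the background.
    dist = cv2.distanceTransform(hand_mask, cv2.DIST_L2, 5)
    _, palm_radius, _, (cx, cy) = cv2.minMaxLoc(dist)

    best = 0
    for k in range(1, num_circles + 1):
        r = palm_radius * (1.2 + 0.2 * k)              # circles just outside the palm
        angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
        xs = np.round(cx + r * np.cos(angles)).astype(int)
        ys = np.round(cy + r * np.sin(angles)).astype(int)
        inside = (0 <= xs) & (xs < hand_mask.shape[1]) & (0 <= ys) & (ys < hand_mask.shape[0])
        samples = np.zeros(len(angles), dtype=int)
        samples[inside] = (hand_mask[ys[inside], xs[inside]] > 0).astype(int)
        # Each background-to-hand transition along the circle is one finger crossing.
        crossings = np.count_nonzero(np.diff(samples) == 1)
        best = max(best, crossings)
    return best
```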

Recognition of road information using magnetic polarity for intelligent vehicles (자계 극배치를 이용한 지능형 차량용 도로 정보의 인식)

  • Kim, Young-Min;Lim, Young-Cheol;Kim, Tae-Gon;Kim, Eui-Sun
    • Journal of Sensor Science and Technology / v.14 no.6 / pp.409-414 / 2005
  • For intelligent-vehicle driving that uses magnetic markers and magnetic sensors, road information can be obtained while the vehicle is moving by encoding a code in the N/S pole orientation of the markers. If the only goal is to guide the vehicle, placing the markers closer together makes the vehicle easier to control; to recognize the pole direction of each marker, however, it is better that neighboring markers do not interfere with one another. To obtain road information and drive the vehicle autonomously, we propose a magnetic-sensor arrangement and an algorithm that recognizes the vehicle's position from those sensors. The effectiveness of the methods is verified through computer simulation.
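
To make the pole-encoding idea concrete, the sketch below decodes a sequence of detected marker polarities into fixed-length code words, treating N as 1 and S as 0. The word length and the N/S-to-bit mapping are assumptions; the paper's actual coding scheme is not reproduced here.

```python
def decode_marker_code(polarities, word_length=4):
    """Decode road information from a sequence of magnetic-marker polarities.

    polarities: list of 'N'/'S' readings in the order the markers are passed.
    Returns a list of integer code words, one per `word_length` markers.
    The framing (N=1, S=0, fixed-length words) is an illustrative assumption.
    """
    bits = [1 if p == 'N' else 0 for p in polarities]
    words = []
    for i in range(0, len(bits) - word_length + 1, word_length):
        word = 0
        for b in bits[i:i + word_length]:
            word = (word << 1) | b
        words.append(word)
    return words

# Example: two 4-bit words read from eight markers (codes are illustrative only).
print(decode_marker_code(['N', 'S', 'N', 'N', 'S', 'S', 'N', 'S']))  # [11, 2]
```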

Implementation of Hand-Gesture Interface to manipulate a 3D Object of Augmented Reality (증강현실의 3D 객체 조작을 위한 핸드-제스쳐 인터페이스 구현)

  • Jang, Myeong-Soo;Lee, Woo-Beom
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.16 no.4 / pp.117-123 / 2016
  • This paper implements a hand-gesture interface for manipulating a 3D object in augmented reality by recognizing the user's hand gestures. The proposed method extracts the hand region from the real image and creates the augmented object using the hand as a marker. 3D object manipulation corresponding to the user's gesture is then performed by analyzing the hand-region ratio, the number of fingers, and the variation of the hand-region center. To evaluate the performance of the proposed method, a 3D object was created with the OpenGL library, and all processing tasks were implemented with the Intel OpenCV library in C++. As a result, the proposed method achieved an average recognition ratio of 90% across the user command modes.
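
A minimal sketch of the kind of rule-based mapping the abstract describes, using the three cues it names (hand-region ratio, finger count, centre movement). The thresholds and command names are hypothetical, not the paper's values.

```python
from dataclasses import dataclass

@dataclass
class HandFeatures:
    region_ratio: float      # hand area / frame area
    finger_count: int        # number of extended fingers
    center_shift: float      # normalised movement of the hand-region centre

def command_mode(f: HandFeatures) -> str:
    """Map hand features to a 3D-object command; all thresholds are illustrative."""
    if f.finger_count >= 4 and f.center_shift > 0.05:
        return "translate"          # open hand moving -> drag the object
    if f.finger_count == 2:
        return "rotate"             # two fingers -> rotate around the centre
    if f.region_ratio > 0.25:
        return "scale_up"           # hand brought closer (larger region) -> enlarge
    if f.region_ratio < 0.10:
        return "scale_down"
    return "idle"

print(command_mode(HandFeatures(region_ratio=0.18, finger_count=2, center_shift=0.01)))
```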

Heterogeneous Sensor Coordinate System Calibration Technique for AR Whole Body Interaction (AR 전신 상호작용을 위한 이종 센서 간 좌표계 보정 기법)

  • Hangkee Kim;Daehwan Kim;Dongchun Lee;Kisuk Lee;Nakhoon Baek
    • KIPS Transactions on Software and Data Engineering / v.12 no.7 / pp.315-324 / 2023
  • As age-related diseases steadily increase, elderly patients need simple and accurate whole-body rehabilitation interaction technology based on immersive digital content. In this study, we introduce whole-body interaction technology using HoloLens and Kinect for this purpose. To achieve this, we propose three coordinate transformation methods: mesh feature-point-based transformation, AR marker-based transformation, and body recognition-based transformation. The mesh feature-point-based transformation aligns the coordinate systems by designating three feature points on the spatial mesh and using a transform matrix; it requires manual work and has lower usability, but relatively high accuracy of 8.5 mm. The AR marker-based method uses AR and QR markers recognized by HoloLens and Kinect simultaneously and achieves an acceptable accuracy of 11.2 mm. The body recognition-based transformation aligns the coordinate systems using the position of the head or HMD recognized by both devices together with the positions of both hands or controllers; it has lower accuracy, but requires no additional tools or manual work, making it more user-friendly. Additionally, we reduced the error by more than 10% by using RANSAC as a post-processing step. These three methods can be applied selectively depending on the usability and accuracy the content requires. We validated the technology by applying it to the "Thunder Punch" and rehabilitation therapy content.
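
All three calibration methods ultimately estimate a rigid transform between matched 3D points from HoloLens and Kinect. Below is a generic Kabsch-style least-squares fit with a simple RANSAC loop as post-processing; it is a sketch of the idea rather than the authors' implementation, and the sample size, iteration count, and inlier threshold are assumptions.

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def fit_rigid_ransac(src, dst, iters=200, thresh=0.02, seed=0):
    """RANSAC wrapper: fit on random 3-point samples, keep the largest inlier set."""
    rng = np.random.default_rng(seed)
    best_inliers = np.array([], dtype=int)
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        R, t = fit_rigid(src[idx], dst[idx])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = np.flatnonzero(err < thresh)
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    if len(best_inliers) < 3:                # degenerate case: fall back to all points
        best_inliers = np.arange(len(src))
    return fit_rigid(src[best_inliers], dst[best_inliers])
```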

Lane Detection Algorithm for Night-time Digital Image Based on Distribution Feature of Boundary Pixels

  • You, Feng;Zhang, Ronghui;Zhong, Lingshu;Wang, Haiwei;Xu, Jianmin
    • Journal of the Optical Society of Korea / v.17 no.2 / pp.188-199 / 2013
  • This paper presents a novel algorithm for the nighttime detection of lane markers painted on a road. First, the proposed algorithm applies neighborhood average filtering, an 8-directional Sobel operator, and Otsu-based threshold segmentation to raw lane images taken by a digital CCD camera. Second, combining the intensity map and the gradient map, we analyze the distribution features of pixels on the lane boundaries at night and construct 4 feature sets for these points, which supply sufficient data about the lane boundaries to detect lane markers much more robustly. A search in multiple directions (horizontal, vertical, and diagonal) is then conducted to eliminate noise points on the lane boundaries, and an adapted Hough transform is used to obtain the feature parameters of the lane edges. The proposed algorithm not only significantly improves lane-marker detection performance but also requires less computational power. Finally, the algorithm is shown to be reliable and robust for lane detection in nighttime scenarios.
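
The preprocessing chain named in the abstract (neighborhood averaging, Sobel gradients, Otsu thresholding, Hough transform) maps closely onto standard OpenCV primitives. The sketch below is a generic version of that chain rather than the paper's adapted variants; the kernel sizes and Hough parameters are assumptions, and a plain 2-direction Sobel stands in for the 8-directional operator.

```python
import cv2
import numpy as np

def detect_lane_candidates(bgr_frame):
    """Generic night-lane pipeline: smooth -> Sobel gradient -> Otsu -> Hough lines."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    smoothed = cv2.blur(gray, (5, 5))                       # neighborhood averaging
    gx = cv2.Sobel(smoothed, cv2.CV_32F, 1, 0, ksize=3)     # horizontal gradient
    gy = cv2.Sobel(smoothed, cv2.CV_32F, 0, 1, ksize=3)     # vertical gradient
    magnitude = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
    _, binary = cv2.threshold(magnitude, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    lines = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=10)
    return [] if lines is None else [l[0] for l in lines]   # (x1, y1, x2, y2) segments
```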

Design and Implementation of Mobile Vision-based Augmented Galaga using Real Objects (실제 물체를 이용한 모바일 비전 기술 기반의 실감형 갤러그의 설계 및 구현)

  • Park, An-Jin;Yang, Jong-Yeol;Jung, Kee-Chul
    • Journal of Korea Game Society / v.8 no.2 / pp.85-96 / 2008
  • Recently, research on augmented games as a new game genre has attracted a lot of attention. An augmented game overlays virtual objects on an augmented reality (AR) environment, allowing game players to interact with that environment by manipulating real and virtual objects. However, it is difficult to release existing augmented games to ordinary game players, as they generally use very expensive and inconvenient 'backpack' systems. To solve this problem, several augmented games have been proposed for camera-equipped mobile devices, but they can only be enjoyed at locations prepared in advance, because a 'color marker' or 'pattern marker' is needed to overlay the virtual objects on the real environment. Accordingly, this paper introduces an augmented game, called augmented galaga and based on the traditional, well-known galaga, that runs on mobile devices so that players can experience the game without any economic burden. Augmented galaga uses real objects in real environments and recognizes them using scale-invariant features (SIFT) and Euclidean distance. Virtual aliens appear at random around specific real objects, several such objects are used to keep the game interesting, and players attack the aliens by pointing the mobile device toward an object and clicking a button on the device. As a result, we expect augmented galaga to provide an exciting experience for players without any economic burden, based on a game paradigm in which the user interacts both with the physical world captured by the mobile camera and with the virtual aliens generated by the mobile device.
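
A rough sketch of the SIFT-plus-Euclidean-distance recognition step described above (not the game's actual code): OpenCV can match a stored object image against the current camera frame as follows, with the ratio-test and match-count thresholds as assumptions.

```python
import cv2

def matches_object(object_img, frame, min_good_matches=15, ratio=0.75):
    """Return True if the stored object appears in the camera frame (SIFT + L2 distance)."""
    sift = cv2.SIFT_create()
    _, desc_obj = sift.detectAndCompute(object_img, None)
    _, desc_frame = sift.detectAndCompute(frame, None)
    if desc_obj is None or desc_frame is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_L2)                # Euclidean distance on descriptors
    knn = matcher.knnMatch(desc_obj, desc_frame, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]  # Lowe ratio test
    return len(good) >= min_good_matches
```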

Research for robot kidnap problem in the indoor of utilizing external image information and the absolute spatial coordinates (실내 공간에서 이동 로봇의 납치 문제 해결을 위한 외부 영상 정보 및 절대 공간 좌표 활용 연구)

  • Jeon, Young-Pil;Park, Jong-Ho;Lim, Shin-Teak;Chong, Kil-To
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.3 / pp.2123-2130 / 2015
  • Mobile robots used indoors, such as monitoring robots or robot cleaners, can be displaced by a person or knocked off their planned path by collisions with unexpected objects, and must then return to that path. This requires robust self-position estimation, which is closely related to the classical kidnapped-robot problem. This study considers low-cost mobile robots operating indoors. Using an external image-acquisition device such as a CCTV camera already installed in the room, we acquire an image of the environment, recognize a marker on the mobile robot, and convert its position into absolute spatial coordinates, thereby addressing indoor self-position estimation and the kidnap problem; a potential-field method is used for the actual implementation on the robot system. The proposed method was implemented on a real robot system and verified through experiments.
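
One common way to realize the CCTV-image-to-absolute-coordinate step described above is to detect a fiducial marker on the robot (an ArUco tag in this sketch) and map its image position through a pre-computed camera-to-floor homography. This is a generic illustration, not the paper's implementation; the marker dictionary, marker ID, and reference points are assumptions.

```python
import cv2
import numpy as np

# Homography from CCTV image pixels to floor coordinates (metres), computed once
# from four known reference points on the floor (values here are illustrative).
IMG_PTS = np.float32([[102, 431], [538, 425], [518, 88], [131, 95]])
FLOOR_PTS = np.float32([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0], [0.0, 3.0]])
H, _ = cv2.findHomography(IMG_PTS, FLOOR_PTS)

def locate_robot(cctv_frame, robot_marker_id=7):
    """Return the robot's (x, y) floor position in metres, or None if not seen."""
    detector = cv2.aruco.ArucoDetector(
        cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50))
    corners, ids, _ = detector.detectMarkers(cctv_frame)
    if ids is None:
        return None
    for marker_corners, marker_id in zip(corners, ids.flatten()):
        if marker_id == robot_marker_id:
            centre = marker_corners[0].mean(axis=0).reshape(1, 1, 2)
            x, y = cv2.perspectiveTransform(centre, H)[0, 0]
            return float(x), float(y)
    return None
```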

Gesture Spotting by Web-Camera in Arbitrary Two Positions and Fuzzy Garbage Model (임의 두 지점의 웹 카메라와 퍼지 가비지 모델을 이용한 사용자의 의미 있는 동작 검출)

  • Yang, Seung-Eun
    • KIPS Transactions on Software and Data Engineering / v.1 no.2 / pp.127-136 / 2012
  • Much research on vision-based hand-gesture recognition has been conducted to enable users to operate various electronic devices more easily. Accurate hand-gesture recognition requires 3D position calculation and the classification of meaningful gestures among similar ones. This paper describes a simple and cost-effective method for 3D position calculation and gesture spotting (the task of recognizing a meaningful gesture among other, similar but meaningless gestures). The 3D position is obtained by calculating the relative position of the two cameras through a pan/tilt module and a marker, regardless of where the cameras are placed. A fuzzy garbage model is proposed to provide a variable reference value for deciding whether a user gesture is a command gesture. The reference is derived from a fuzzy command-gesture model and a fuzzy garbage model, which return scores indicating the degree of membership in the command gesture and the garbage gesture, respectively. To enhance performance, a two-stage user adaptation is proposed: off-line (batch) adaptation for inter-personal differences and on-line (incremental) adaptation for intra-personal differences. Experiments were conducted with 5 different users. The command recognition rate is more than 95% when only one command-like meaningless gesture exists and more than 85% when the command is mixed with many other similar gestures.
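
A toy sketch of the spotting decision described above, under assumed Gaussian membership functions: a gesture feature is scored by a command model and a garbage model, and it is accepted as a command only when the command score exceeds the garbage score, which acts as the variable reference. The 1-D feature and model parameters are placeholders, not the paper's fuzzy models.

```python
import math

def gaussian_membership(x, mean, sigma):
    """Fuzzy membership in [0, 1] based on a Gaussian around `mean`."""
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

def is_command_gesture(feature, command_models, garbage_models):
    """Accept the gesture only if the best command score beats the best garbage score."""
    command_score = max(gaussian_membership(feature, m, s) for m, s in command_models)
    garbage_score = max(gaussian_membership(feature, m, s) for m, s in garbage_models)
    return command_score > garbage_score, command_score, garbage_score

# Illustrative 1-D feature (e.g. normalised hand speed); model parameters are made up.
commands = [(0.8, 0.1)]            # (mean, sigma) of the command gesture
garbage = [(0.3, 0.2), (0.6, 0.1)]
print(is_command_gesture(0.75, commands, garbage))
```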

A Study on Intuitive IoT Interface System using 3D Depth Camera (3D 깊이 카메라를 활용한 직관적인 사물인터넷 인터페이스 시스템에 관한 연구)

  • Park, Jongsub;Hong, June Seok;Kim, Wooju
    • The Journal of Society for e-Business Studies / v.22 no.2 / pp.137-152 / 2017
  • The decline in the price of IT devices and the development of the Internet have created a new field called the Internet of Things (IoT). IoT, which creates new services by connecting everyday objects to the Internet, is pioneering new forms of business in combination with Big Data, and its potential applications are practically unlimited. Standardization organizations are also actively studying how to connect IoT devices smoothly. However, these studies overlook one issue: to control IoT equipment or acquire information from it, interworking details (IP address, Wi-Fi, Bluetooth, NFC, etc.) and the related application software or apps must be developed separately. To solve this problem, existing research has examined augmented reality using GPS or markers, but a separate marker is required and is recognized only at close range. In addition, studies using GPS addresses with a 2D camera could not recognize the distance to the target device, which made an active interface difficult to implement. In this study, we use a 3D depth camera mounted on a smartphone and compute spatial coordinates automatically by combining its distance measurements with the phone's sensor information, without a separate marker. A coordinate query then finds the corresponding IoT equipment and enables information acquisition and control of that equipment. From the user's point of view, this reduces the burden of IoT interworking and app installation. Furthermore, if this technology is applied to public services and smart glasses, it can reduce duplicated investment in software development and expand public services.
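
The core geometric step described above, turning a depth-camera pixel into a 3D room coordinate, can be sketched with standard pinhole back-projection followed by a pose transform from the phone's sensors. The intrinsics and pose values below are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def pixel_to_camera_point(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a depth pixel (u, v) with depth `depth_m` into camera coordinates."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

def camera_to_world(point_cam, phone_rotation, phone_position):
    """Transform a camera-frame point into the room frame using the phone's pose
    (rotation matrix from the orientation sensors, position from localisation)."""
    return phone_rotation @ point_cam + phone_position

# Illustrative depth-sensor intrinsics and phone pose (all values made up).
p_cam = pixel_to_camera_point(u=320, v=240, depth_m=2.1,
                              fx=525.0, fy=525.0, cx=319.5, cy=239.5)
p_world = camera_to_world(p_cam, np.eye(3), np.array([1.0, 0.5, 1.2]))
print(p_world)   # query this coordinate against the registered IoT device positions
```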

A Study on Pagoda Image Search Using Artificial Intelligence (AI) Technology for Restoration of Cultural Properties

  • Lee, ByongKwon;Kim, Soo Kyun;Kim, Seokhun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.6 / pp.2086-2097 / 2021
  • Cultural assets are currently restored according to the opinions of experts (craftsmen). We intend to introduce digital artificial-intelligence techniques that exclude the experts' personal opinions from the reconstruction of such cultural properties. The first step in restoring digitized cultural properties is separation: restoration should be reorganized based on recorded documents, the historical background of each period, and regional characteristics, and cultural properties in the form of photographs or images should be collected with their backgrounds separated. In addition, restoration currently depends heavily on the tendencies of the individual workers, which often causes problems with the accuracy and reliability of the restored cultural properties. In this study, we propose a search method that learns stored digital cultural assets using AI technology. The pagoda was selected as the target cultural property. Pagoda data were collected from the Internet and various historical records, classified by period and region, and grouped into similar structures. The collected data were trained with the well-known CNN algorithm, and the pagoda search used a YOLO marker to mark the tower shape, using roughly 100-10,000 pagoda images in total. In conclusion, the probability of finding a tower was confirmed to depend on the number of pagoda pictures and the number of training iterations: 500 tower images and 8,000 training epochs gave good results, while training beyond 8,000 epochs led to overfitting and a drop in the recognition rate. The results of this study are expected to be helpful in gathering data to increase the accuracy of tower restoration.
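
The overfitting behaviour noted above (recognition degrading past roughly 8,000 training iterations) is the usual motivation for validation-based early stopping. The sketch below shows a small, generic CNN classifier trained with early stopping in PyTorch; the architecture, input size, and patience value are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class PagodaCNN(nn.Module):
    """Small illustrative CNN for classifying pagoda images by period/region group."""
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)   # assumes 224x224 input

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def train_with_early_stopping(model, train_loader, val_loader, max_epochs=20, patience=3):
    """Stop when validation loss stops improving, instead of training a fixed epoch count."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    best_val, stale = float("inf"), 0
    for epoch in range(max_epochs):
        model.train()
        for images, labels in train_loader:
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val_loss = sum(loss_fn(model(x), y).item() for x, y in val_loader)
        if val_loss < best_val:
            best_val, stale = val_loss, 0
        else:
            stale += 1
            if stale >= patience:        # validation no longer improving -> stop
                break
    return model
```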