• Title/Summary/Keyword: Object detecting


Improving visual relationship detection using linguistic and spatial cues

  • Jung, Jaewon;Park, Jongyoul
    • ETRI Journal
    • /
    • v.42 no.3
    • /
    • pp.399-410
    • /
    • 2020
  • Detecting visual relationships in an image is an important image understanding task: it enables higher-level tasks such as predicting the next scene and understanding what occurs in an image. A visual relationship comprises a subject, a predicate, and an object, and is related to visual, language, and spatial cues. The predicate expresses the relationship between the subject and the object and can fall into different categories, such as prepositions and verbs. Even relationships that share the same predicate can exhibit a large visual gap. This study improves upon a previous approach, which used language cues with two losses and a spatial cue containing only individual information, by adding relative information on the subject and the object. An architectural limitation of the earlier work is demonstrated and overcome so that all zero-shot visual relationships can be detected. A new problem is also identified, with an explanation of how it degrades performance. Experiments on the VRD and VG datasets show a significant improvement over previous results.
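The abstract does not give the exact feature formulation; the following is only a hedged sketch of one common way relative spatial information between a subject box and an object box can be encoded (the normalized offsets and log size ratios are assumptions, not the authors' definition).

```python
import numpy as np

def relative_spatial_feature(subj_box, obj_box):
    """Encode relative position and scale of subject and object boxes.

    Boxes are (x1, y1, x2, y2). The feature captures normalized offsets
    and log size ratios, a common form of relative spatial cue.
    """
    sx1, sy1, sx2, sy2 = subj_box
    ox1, oy1, ox2, oy2 = obj_box
    sw, sh = sx2 - sx1, sy2 - sy1
    ow, oh = ox2 - ox1, oy2 - oy1
    return np.array([
        (ox1 - sx1) / sw,   # horizontal offset, normalized by subject width
        (oy1 - sy1) / sh,   # vertical offset, normalized by subject height
        np.log(ow / sw),    # relative width on a log scale
        np.log(oh / sh),    # relative height on a log scale
    ])

# Example: a subject (person) box and an object (bicycle) box
print(relative_spatial_feature((10, 20, 110, 220), (80, 150, 260, 260)))
```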

Development of Core Technology for Object Detection in Excavation Work Using Laser Sensor (레이저 센서를 이용한 굴삭기 작업의 장애물 탐지 요소기술 개발)

  • Soh, Ji-Yune;Kim, Min-Woong;Lee, Jun-Bok;Han, Choong-Hee
    • Journal of the Korea Institute of Building Construction
    • /
    • v.8 no.4
    • /
    • pp.71-77
    • /
    • 2008
  • Earthwork is a highly equipment-intensive task, and research on automated excavation has been conducted. Securing safety is a key issue for an automated excavation system, so this paper focuses on how to improve safety for semi- or fully-automated backhoe excavation. The primary objective of this research is to develop the core technology for automated object detection in excavation work. To satisfy this objective, diverse sensing technologies are investigated and analyzed in terms of function, durability, and reliability. The authors developed an object detection algorithm using a laser sensor and verified its performance through several tests. The results of this study form a basis for developing an automated object detection system.
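The detection algorithm itself is not described in the abstract; the sketch below only illustrates a simple range-threshold check over a laser scan, with the scan format and safety radius as assumptions rather than the paper's method.

```python
import math

def detect_obstacles(ranges, angles, safety_radius=3.0):
    """Flag laser scan readings that fall inside a safety radius.

    ranges: measured distances (m) for one scan
    angles: corresponding beam angles (rad)
    Returns (angle, distance) pairs closer than safety_radius.
    """
    return [(a, r) for a, r in zip(angles, ranges) if 0.0 < r < safety_radius]

# Example scan: an object appears around +5 degrees at 2.4 m
angles = [math.radians(d) for d in range(-30, 31, 5)]
ranges = [10.0] * len(angles)
ranges[7] = 2.4
print(detect_obstacles(ranges, angles))
```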

Cluster Based Object Detection in Wireless Sensor Network

  • Rahman, Obaidur;Hong, Choong-Seon
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2006.10d
    • /
    • pp.56-58
    • /
    • 2006
  • Sensing and coverage are two central tasks for a sensor network. The quality of a sensor network depends on the sensing ability of its sensors: how successfully objects in a monitored region are detected indicates how effectively the network covers that region. In this paper, we propose a clustering algorithm for the deployment of sensors and calculate the resulting object detection probability. With this work, the current coverage status of the network can be identified easily, and actions can be taken to improve it.
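The paper's exact probability model is not given in the abstract; as a hedged illustration, the standard independent-sensor formula P = 1 - ∏(1 - p_i) with a binary disc sensing model (an assumption) can be computed as follows.

```python
import math

def detection_probability(target, sensors, sensing_range=10.0, p_hit=0.9):
    """Probability that at least one deployed sensor detects the target.

    Assumes a binary disc model: each sensor within sensing_range detects
    the target independently with probability p_hit.
    """
    p_miss_all = 1.0
    tx, ty = target
    for sx, sy in sensors:
        if math.hypot(tx - sx, ty - sy) <= sensing_range:
            p_miss_all *= (1.0 - p_hit)
    return 1.0 - p_miss_all

# Example: two sensors cover the target, a third is out of range -> 0.99
print(detection_probability((0, 0), [(3, 4), (6, 0), (30, 30)]))
```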


OBJECT-ORIENTED CLASSIFICATION AND APPLICATIONS IN THE LUCC

  • Yang, Guijun
    • Proceedings of the KSRS Conference
    • /
    • 2003.11a
    • /
    • pp.1221-1223
    • /
    • 2003
  • With rapid economic development, the structure of land use has changed considerably. The question is how to quickly and accurately obtain detailed land use/cover change information, so that the amount, quality, distribution, and direction of change of land resources can be known. More and more high-resolution satellite systems are under development, so remote sensing data, existing GIS data, and GPS data can be used to extract change information and update maps. In this paper, a fully automated approach for detecting land use/cover change from remote sensing data with object-oriented classification, supported by GIS and GPS data, is presented (referring to Fig. 1). At the same time, raster and vector methods for updating the basic land use/land cover map are integrated based on 3S technology, which is becoming one of the most important development directions in 3S application fields and in land use and cover change research worldwide. The approach has been successfully applied in two tasks of the Ministry of Land and Resources, P.R.C., with demonstrated benefits.


Measuring Method of In-plane Position Based On Reference Pattern (레퍼런스 패턴 기반 면내 위치 측정 방법)

  • Jung, Kwang Suk
    • Journal of Institute of Convergence Technology
    • /
    • v.2 no.1
    • /
    • pp.43-48
    • /
    • 2012
  • Generally, the in-plane position of a moving object is measured by referring to a reference pattern attached to the object. From optical cameras to magnetic reluctance probes, there are many ways of detecting variations of such a periodic pattern. In this paper, the various operating principles developed for in-plane positioning are reviewed and compared with one another. In addition, a novel method for measuring large rotation as well as x and y linear displacements is suggested, including a detailed description of the overall system layout. It is a modified version of the surface encoder, a robust digital measuring method. With the surface encoder, the rotation of an object is measured indirectly through a compensated input of the optical servo, independently of the linear displacements. The operating range can therefore be extended simply by enlarging the reference pattern, without magnifying the decoding units.


A Method for Body Keypoint Localization based on Object Detection using the RGB-D information (RGB-D 정보를 이용한 객체 탐지 기반의 신체 키포인트 검출 방법)

  • Park, Seohee;Chun, Junchul
    • Journal of Internet Computing and Services
    • /
    • v.18 no.6
    • /
    • pp.85-92
    • /
    • 2017
  • Recently, in the field of video surveillance, deep learning based methods have been applied to detecting a moving person in video and analyzing the detected person's behavior. Human activity recognition, one of the fields of this intelligent image analysis technology, detects the object and then detects body keypoints in order to recognize the behavior of the detected object. In this paper, we propose a method for body keypoint localization based on object detection using RGB-D information. First, the moving object is segmented and detected from the background using the color and depth information generated by the two cameras. The input image, generated by rescaling the detected object region using the RGB-D information, is fed to Convolutional Pose Machines (CPM) for single-person pose estimation. CPM is used to generate belief maps for 14 body parts per person and to detect body keypoints based on these belief maps. This method provides an accurate region in which to detect keypoints and can be extended from single-person to multi-person body keypoint localization by integrating the individual results. In the future, the detected keypoints can be used to build a model for human pose estimation and contribute to the field of human activity recognition.
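As a minimal sketch of the front end of such a pipeline, the code below segments a foreground region by depth range and crops its bounding box; the depth thresholds and the purely depth-based mask are simplifying assumptions (the paper combines color and depth from two cameras, and feeds the rescaled crop to CPM).

```python
import numpy as np

def extract_person_roi(color, depth, near=0.5, far=3.0):
    """Segment a foreground object by depth range and crop its bounding box.

    color: HxWx3 RGB image; depth: HxW depth map in meters (assumed aligned).
    The thresholds are illustrative placeholders.
    """
    mask = (depth > near) & (depth < far)      # foreground by depth range
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    x1, x2, y1, y2 = xs.min(), xs.max(), ys.min(), ys.max()
    roi = color[y1:y2 + 1, x1:x2 + 1]
    # The cropped ROI would then be rescaled and passed to a pose estimator
    # (Convolutional Pose Machines in the paper) to produce 14 belief maps.
    return roi

# Example with synthetic data: a "person" block at 1.5 m in front of a 5 m wall
color = np.zeros((240, 320, 3), dtype=np.uint8)
depth = np.full((240, 320), 5.0)
depth[60:200, 100:180] = 1.5
print(extract_person_roi(color, depth).shape)   # (140, 80, 3)
```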

Grasping Impact-Improvement of Robot Hands using Proximate Sensor (근접 센서를 이용한 로봇 손의 파지 충격 개선)

  • Hong, Yeh-Sun;Chin, Seong-Mu
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.16 no.1 s.94
    • /
    • pp.42-48
    • /
    • 1999
  • A control method is proposed for a robot hand grasping an object in a partially unknown environment, using a proximity sensor that detects the distance between the fingertip and the object. In this study, the finger joints were driven servo-pneumatically. Based on the proximity sensor signal, the finger motion controller plans the grasping process in three phases: fast approach, slow transitional contact, and contact force control. That is, the fingertip approaches the object at full speed until the output signal of the proximity sensor begins to change. Within the operating range of the proximity sensor, the finger joint is moved by a state-variable feedback position controller in order to obtain smooth contact with the object. The contact force of the fingertip is then controlled using the blocked-line pressure sensitivity of the flow control servovalve used for finger joint control. In this way, the grasping impact can be reduced without reducing the approach speed toward the object. The performance of the proposed grasping method was experimentally compared with that of an open-loop-controlled one.
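A hedged sketch of the three-phase switching logic described above is given below; the thresholds, gains, and command representation are placeholders, not the paper's servo-pneumatic controller.

```python
def finger_command(distance, contact_force, threshold=0.02, force_ref=2.0):
    """Choose the control phase from proximity distance and contact force.

    Illustrative only: the paper uses state-variable feedback and a
    servo-pneumatic drive; the numbers here are assumptions.
    """
    if distance > threshold:
        # Phase 1: fast approach at full speed until the proximity signal changes
        return ("fast_approach", 1.0)
    if contact_force <= 0.0:
        # Phase 2: slow transitional contact under position feedback
        return ("slow_contact", 0.1)
    # Phase 3: regulate the contact force toward the reference value
    gain = 0.5
    return ("force_control", gain * (force_ref - contact_force))

for d, f in [(0.10, 0.0), (0.01, 0.0), (0.0, 1.2)]:
    print(finger_command(d, f))
```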


Intelligent Hexapod Mobile Robot using Image Processing and Sensor Fusion (영상처리와 센서융합을 활용한 지능형 6족 이동 로봇)

  • Lee, Sang-Mu;Kim, Sang-Hoon
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.4
    • /
    • pp.365-371
    • /
    • 2009
  • An intelligent mobile hexapod robot equipped with various sensors and a wireless camera is introduced. We show that this mobile robot can detect objects well by combining the results of active sensors with an image processing algorithm. First, to detect objects, active sensors such as infrared sensors and ultrasonic sensors are employed together, and the distance between the object and the robot is calculated in real time from the sensor outputs; the difference between the measured and calculated values is less than 5%. This paper also suggests an effective visual detection system for moving objects based on color and motion information. The proposed method includes an object extraction and definition process that uses color transformation and AWUPC computation to decide whether a moving object exists. Weighting values are assigned to the results from the sensors and the camera, and the results are combined into a single value that represents the probability of an object within the limited distance. The sensor fusion technique improves the detection rate by at least 7% over using an individual sensor.
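The abstract does not state the fusion rule; a minimal sketch of a weighted combination of per-sensor confidences into a single object probability is shown below, with the weights and confidence values as assumptions.

```python
def fuse_detections(p_ir, p_ultrasonic, p_vision, weights=(0.3, 0.3, 0.4)):
    """Combine per-sensor object confidences into one probability value.

    A weighted average as a stand-in for the paper's fusion rule; the
    actual weights and per-sensor confidence models are not specified here.
    """
    w_ir, w_us, w_cam = weights
    return w_ir * p_ir + w_us * p_ultrasonic + w_cam * p_vision

# Example: IR and ultrasonic agree on a nearby object, vision is less certain
print(fuse_detections(0.9, 0.8, 0.6))   # 0.75
```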

A Study on Phase Bearing Error using Phase Delay of Relative Phase Difference

  • Lee, Kwan Hyeong
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.13 no.2
    • /
    • pp.76-81
    • /
    • 2021
  • This study proposes a method to reduce the phase error of the received signal when detecting the bearing of an object. Phase shifts in the received signal occur due to multipath propagation caused by natural or artificial structures. When detecting the direction of an object using radio waves, the phase of the received signal cannot be detected accurately because of the phase bearing error in the object detection direction. The estimated detection direction depends on the phase difference, antenna spacing, signal wavelength, frequency band, and bearing angle. This study reduces the phase bearing error by using the phase delay of the relative phase difference between the signals incident on two antennas. Through simulation, the object direction detection performance of the proposed method and the existing method were analyzed. Three targets located in the [-15°, 0°, 15°] directions were used: the existing method detects the targets at [-13°, 3°, 17°], while the proposed method detects them at [-15°, 0°, 15°]. As a result of the simulation, the target detection direction of the proposed method is improved by about 2 degrees compared with the existing method.
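For reference, the standard two-element relation between phase difference and arrival angle, Δφ = 2πd·sin(θ)/λ, can be inverted as below; this is a generic sketch of the dependence the abstract mentions (phase difference, antenna spacing, wavelength) and does not reproduce the paper's phase-delay compensation.

```python
import math

def bearing_from_phase(delta_phi, d, wavelength):
    """Estimate arrival angle (deg) from the phase difference of two antennas.

    delta_phi: measured phase difference (rad), d: antenna spacing (m),
    wavelength: signal wavelength (m).
    """
    s = delta_phi * wavelength / (2.0 * math.pi * d)
    return math.degrees(math.asin(max(-1.0, min(1.0, s))))

# Example: half-wavelength spacing, a source at 15 degrees
wavelength, d = 0.1, 0.05
delta_phi = 2 * math.pi * d * math.sin(math.radians(15)) / wavelength
print(round(bearing_from_phase(delta_phi, d, wavelength), 1))   # 15.0
```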

PointNet and RandLA-Net Algorithms for Object Detection Using 3D Point Clouds (3차원 포인트 클라우드 데이터를 활용한 객체 탐지 기법인 PointNet과 RandLA-Net)

  • Lee, Dong-Kun;Ji, Seung-Hwan;Park, Bon-Yeong
    • Journal of the Society of Naval Architects of Korea
    • /
    • v.59 no.5
    • /
    • pp.330-337
    • /
    • 2022
  • Research on object detection algorithms using 2D data has already progressed to the level of commercialization and is being applied in various manufacturing industries. Object detection using 2D data is effective and widely applicable, but there are technical limitations to accurate data generation and analysis: because 2D data has only two axes and no sense of depth, ambiguity arises when it is approached from a practical point of view. Advanced countries such as the United States are leading 3D data collection and research using 3D laser scanners. Existing processing and detection algorithms such as ICP and RANSAC show high accuracy but suffer from processing-speed problems on large-scale point cloud data. In this study, PointNet, a representative technique for detecting objects in widely used 3D point cloud data, is analyzed and described. RandLA-Net, which overcomes the limitations of PointNet in performance and object prediction accuracy, is also described, and a review of detection technology using point cloud data is conducted.
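As a rough illustration of the PointNet idea (a shared per-point MLP followed by an order-invariant global max pooling), a stripped-down classifier is sketched below; it omits the input and feature transform networks (T-Nets) of the original architecture and is not the paper's implementation.

```python
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    """Minimal PointNet-style classifier: per-point shared MLP + global max pool."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.point_mlp = nn.Sequential(            # shared MLP applied to every point
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(1024, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, points):                     # points: (batch, 3, num_points)
        features = self.point_mlp(points)          # (batch, 1024, num_points)
        global_feat = features.max(dim=2).values   # permutation-invariant pooling
        return self.head(global_feat)

# Example: classify a batch of two clouds with 1024 points each
logits = TinyPointNet()(torch.randn(2, 3, 1024))
print(logits.shape)   # torch.Size([2, 10])
```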