• Title/Summary/Keyword: Estimating Position


Mobile Location Estimation Scheme Based on Virtual Area Concept (가상 구역 방법을 이용한 이동체 위치 추정)

  • Lee, Jong-Chan;Lee, Mun-Ho
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.37 no.7
    • /
    • pp.9-17
    • /
    • 2000
  • Determining the position and velocity of mobiles is an important issue for efficient handoff and channel allocation in a microcell structure. Our earlier work proposed a technique for estimating the mobile location in a microcellular architecture, based on a three-step position estimation that determines the mobile position by gradually reducing the candidate area. Using this three-step method, the estimator first identifies the locating sector in the sector estimation step, then the locating zone in the zone estimation step, and finally the locating block in the block estimation step. However, this scheme is prone to errors when the mobile is located on the boundary of sectors or tracks. In this paper we propose an enhanced scheme to reduce this estimation error.
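
The staged sector → zone → block refinement described above can be illustrated with a minimal sketch. The region data structures, the `signature` field, and the scoring function below are hypothetical placeholders, not the authors' estimator.

```python
# Hypothetical sketch of a coarse-to-fine (sector -> zone -> block) position search.
# Each region is assumed to carry an expected signal "signature" and its sub-regions.

def estimate_position(measurement, sectors):
    """Pick the best sector, then the best zone inside it, then the best block."""
    def score(region):
        # Placeholder: compare the measured signal vector against the region's
        # expected signature (e.g., pilot strengths from nearby base stations).
        return -sum((m - e) ** 2 for m, e in zip(measurement, region["signature"]))

    sector = max(sectors, key=score)            # step 1: sector estimation
    zone = max(sector["zones"], key=score)      # step 2: zone estimation
    block = max(zone["blocks"], key=score)      # step 3: block estimation
    return block["center"]                      # estimated mobile position
```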


Secure and Robust Clustering for Quantized Target Tracking in Wireless Sensor Networks

  • Mansouri, Majdi;Khoukhi, Lyes;Nounou, Hazem;Nounou, Mohamed
    • Journal of Communications and Networks
    • /
    • v.15 no.2
    • /
    • pp.164-172
    • /
    • 2013
  • We consider the problem of secure and robust clustering for quantized target tracking in wireless sensor networks (WSNs), where the observed system is assumed to evolve according to a probabilistic state-space model. We propose a new method for jointly activating the best group of candidate sensors that participate in data aggregation, detecting malicious sensors, and estimating the target position. First, we select the appropriate group in order to balance energy dissipation and to provide the required target data in the WSN; this selection is also based on the transmission power between a sensor node and a cluster head. Second, we detect malicious sensor nodes based on the information relevance of their measurements. Then, we estimate the target position using the quantized variational filtering (QVF) algorithm. The selection of the candidate sensor group is based on a multi-criteria function computed from the predicted target position provided by the QVF algorithm, while malicious sensor node detection is based on the Kullback-Leibler distance between the current target position distribution and the predicted sensor observation. The performance of the proposed method is validated by simulation results for target tracking in WSNs.
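
As a rough illustration of the malicious-node test described above, the sketch below flags a sensor when the Kullback-Leibler divergence between two scalar Gaussians (the predicted target-position distribution and the distribution implied by the sensor's observation) exceeds a threshold. The Gaussian assumption and the threshold value are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def kl_gaussian(mu0, var0, mu1, var1):
    """KL divergence D( N(mu0, var0) || N(mu1, var1) ) for scalar Gaussians."""
    return 0.5 * (var0 / var1 + (mu1 - mu0) ** 2 / var1 - 1.0 + np.log(var1 / var0))

def is_malicious(pred_mu, pred_var, obs_mu, obs_var, threshold=2.0):
    """Flag a sensor whose reported observation diverges too far from the prediction."""
    return kl_gaussian(pred_mu, pred_var, obs_mu, obs_var) > threshold
```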

Evaluation of Planting Distance in Rice Paddies Using Deep Learning-Based Drone Imagery (딥 러닝 기반 드론 영상을 활용한 벼 포장의 재식거리 평가)

  • Hyeok-jin Bak;Dongwon Kwon;Woo-jin Im;Ji-hyeon Lee;Eun-ji Kim;Nam-jin Chung;Jung-Il Cho;Woon-Ha Hwang;Jae-Ki Chang;Wan-Gyu Sang
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.69 no.3
    • /
    • pp.154-162
    • /
    • 2024
  • In response to the increasing impact of climate change on agriculture, various cultivation technologies have recently been developed to improve agricultural productivity and to reduce carbon emissions toward carbon neutrality. This study presents an algorithm for estimating rice planting density from drone-captured images using deep learning-based image analysis. The algorithm uses images collected from various paddies; these images are processed through pre-processing steps and serve as training data for the YOLOv5x deep learning model. The trained model demonstrated high precision and recall, effectively estimating the position information of rice plants in each image. By accurately estimating the position of rice plants based on their central coordinates in diverse paddy field environments, the model allowed rice plant density to be estimated in each paddy, producing values closely aligned with actual measurements. Moreover, the proposed algorithm provides a novel approach for the precise determination of rice planting density based on the position information of rice plants in the images. Analysis of drone footage capturing portions of paddies in different regions showed that the developed algorithm exhibited a significant correlation (R² = 0.877) with the actual planting density, suggesting that it can be applied effectively in real-world agricultural settings. In conclusion, we believe that this research contributes to the ongoing digital transformation in agriculture by offering a technology that supports the goals of enhancing efficiency, mitigating methane emissions, and achieving carbon neutrality in response to the challenges posed by climate change.
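
The density computation at the end of such a pipeline reduces to counting detected plant centers over a known ground area. A minimal sketch is shown below; the ground-sampling-distance value and the detection format (one center coordinate per plant) are assumptions, and the YOLOv5x inference step itself is omitted.

```python
def planting_density(plant_centers, image_shape, gsd_m_per_px):
    """Plants per square metre from detected plant centre coordinates.

    plant_centers : list of (x, y) pixel coordinates, one per detected plant
    image_shape   : (height_px, width_px) of the analysed drone image
    gsd_m_per_px  : ground sampling distance of the image (assumed known)
    """
    h_px, w_px = image_shape
    area_m2 = (h_px * gsd_m_per_px) * (w_px * gsd_m_per_px)
    return len(plant_centers) / area_m2

# Example (hypothetical numbers): 1200 detected plants in a 4000 x 3000 px
# image at 0.005 m/px -> planting_density(centers, (3000, 4000), 0.005)
```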

Facial Gaze Detection by Estimating Three Dimensional Positional Movements (얼굴의 3차원 위치 및 움직임 추정에 의한 시선 위치 추적)

  • Park, Gang-Ryeong;Kim, Jae-Hui
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.3
    • /
    • pp.23-35
    • /
    • 2002
  • Gaze detection is the task of locating the position on a monitor screen at which a user is looking. In our work, we implement it with a computer vision system consisting of a single camera placed above a monitor, while the user moves (rotates and/or translates) his face to gaze at different positions on the monitor. To detect the gaze position, we locate the facial region and facial features (both eyes, nostrils, and lip corners) automatically in 2D camera images. From the movement of the feature points detected in the initial images, we compute the initial 3D positions of those features by camera calibration and a parameter estimation algorithm. Then, when the user moves (rotates and/or translates) his face to gaze at a position on the monitor, the moved 3D positions of those features are computed from 3D rotation and translation estimation and an affine transform. Finally, the gaze position on the monitor is computed from the normal vector of the plane determined by the moved 3D feature positions. Experimental results show that the gaze position can be obtained on a 19-inch monitor with an RMS error of about 2.01 inches between the computed and actual positions.
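
The final step, intersecting the gaze direction (taken here as the normal of the plane spanned by the moved 3D feature points) with the monitor plane, can be sketched as below. Treating the monitor as the plane z = 0 and the choice of the three feature points are simplifying assumptions, not the paper's exact geometry.

```python
import numpy as np

def gaze_point_on_monitor(p0, p1, p2):
    """Intersect the facial-plane normal with the monitor plane (assumed z = 0).

    p0, p1, p2 : three non-collinear 3D feature points (e.g., both eyes and a lip corner)
    Returns the (x, y) gaze position on the monitor plane.
    """
    p0, p1, p2 = map(np.asarray, (p0, p1, p2))
    normal = np.cross(p1 - p0, p2 - p0)      # normal vector of the facial plane
    if normal[2] > 0:                        # orient it toward the monitor (z decreasing)
        normal = -normal
    t = -p0[2] / normal[2]                   # solve p0_z + t * n_z = 0
    hit = p0 + t * normal                    # intersection with the monitor plane
    return hit[0], hit[1]
```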

Research of Satellite Autonomous Navigation Using Star Sensor Algorithm (별 추적기 알고리즘을 활용한 위성 자율항법 연구)

  • Hyunseung Kim;Chul Hyun;Hojin Lee;Donggeon Kim
    • Journal of Space Technology and Applications
    • /
    • v.4 no.3
    • /
    • pp.232-243
    • /
    • 2024
  • In order to perform various missions in space, including planetary exploration, estimating the position of a satellite in orbit is very important because it is directly related to mission success. As a study on autonomous satellite navigation, this work estimated the satellite's attitude and real-time orbital position using a star sensor algorithm with two star trackers and an earth sensor. To implement the star sensor algorithm, a simulator was constructed and the position error of the satellite estimated by the presented technique was analyzed. Due to lens distortion and errors in the center-point finding algorithm, the average attitude estimation error was at the level of 2.6 rad in the roll direction, and the position error resulting from this attitude error corresponded to an average error of 516 m in the altitude direction. The proposed satellite attitude and position estimation technique is expected to contribute to analyzing star sensor performance and improving position estimation accuracy.
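
For context, a standard way to recover attitude from two measured directions (such as two star-tracker boresights, or a star vector and an earth vector) is the TRIAD method sketched below. This is a generic illustration under the assumption of two non-parallel reference directions, not the specific algorithm used in the paper.

```python
import numpy as np

def triad(body1, body2, ref1, ref2):
    """TRIAD attitude determination from two vector observations.

    body1, body2 : unit vectors measured in the spacecraft body frame
    ref1, ref2   : the same directions expressed in the inertial frame (catalog values)
    Returns the rotation matrix mapping inertial-frame vectors to the body frame.
    """
    def triad_frame(v1, v2):
        t1 = v1 / np.linalg.norm(v1)
        t2 = np.cross(v1, v2)
        t2 /= np.linalg.norm(t2)
        t3 = np.cross(t1, t2)
        return np.column_stack((t1, t2, t3))

    return triad_frame(body1, body2) @ triad_frame(ref1, ref2).T
```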

Driving Control System applying Position Recognition Method of Ball Robot using Image Processing (영상처리를 이용하는 볼 로봇의 위치 인식 방법을 적용한 주행 제어 시스템)

  • Heo, Nam-Gyu;Lee, Kwang-Min;Park, Seong-Hyun;Kim, Min-Ji;Park, Sung-Gu;Chung, Myung-Jin
    • Journal of IKEEE
    • /
    • v.25 no.1
    • /
    • pp.148-155
    • /
    • 2021
  • As robot technology advances, research on the driving systems of mobile robots is being actively conducted. A mobile robot driving system based on two or four wheels has an advantage in unidirectional driving, such as moving in a straight line, but is at a disadvantage when changing direction or rotating in place. A ball robot, which uses a ball as its wheel, has an advantage in omnidirectional movement, but due to its structurally unstable characteristics it requires balancing control to maintain its attitude and driving control for movement. Conventional ball robots estimate their position from an encoder attached to the motor, which causes errors to accumulate during driving control. In this study, a driving control system was proposed that estimates the position coordinates of a ball robot through image processing and uses them for driving control. A driving control system including an image processing unit, a communication unit, a display unit, and a control unit for estimating the position of the ball robot was designed and manufactured. Through driving control experiments applying this system, it was confirmed that the ball robot was controlled within an error range of ±50.3 mm in the x-axis direction and ±53.9 mm in the y-axis direction without accumulating errors.
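
As a simplified illustration of estimating the robot's planar position from an overhead camera, the sketch below takes the centroid of a bright marker in a binary mask and converts pixels to millimetres. The marker-based segmentation and the constant scale factor are assumptions for illustration, not the authors' image processing pipeline.

```python
import numpy as np

def robot_position_mm(mask, mm_per_px, origin_px=(0, 0)):
    """Estimate the robot's planar position from a binary marker mask.

    mask       : 2D boolean array, True where the robot's marker is visible
    mm_per_px  : calibrated scale of the overhead camera (assumed constant)
    origin_px  : pixel coordinates of the world origin
    """
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                          # marker not found in this frame
    cx, cy = xs.mean(), ys.mean()            # marker centroid in pixel coordinates
    x_mm = (cx - origin_px[0]) * mm_per_px
    y_mm = (cy - origin_px[1]) * mm_per_px
    return x_mm, y_mm
```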

A New Method of Estimating Coronary Artery Diameter Using Direction Codes (방향코드를 이용한 관상동맥의 직경 측정 방법)

  • Jeon, Chun-Gi;Gang, Gwang-Nam;Lee, Tae-Won
    • Journal of Biomedical Engineering Research
    • /
    • v.16 no.3
    • /
    • pp.289-300
    • /
    • 1995
  • The conventionally used method requires the centerline of the vessel to estimate the vessel diameter. Two methods of estimating the vessel centerline have been reported: one is a manual, observer-defined method, which potentially contributes to inter- and intra-observer variability; the other automatically detects the centerline, but is very complicated. In this paper, we propose a new method of estimating vessel diameter using direction codes and position information without detecting the centerline. Since this method detects the vessel boundary and the direction code at the same time, it simplifies the procedure and reduces execution time in estimating the vessel diameter. Compared to a method that automatically estimates the vessel diameter using the centerline, our method provides improved accuracy in images with poor contrast, branching, or obstructed vessels. It also provides good compression of the boundary description, because each direction code element can be coded with only 3 bits, instead of the 4 bytes required to store the coordinates of each border pixel. Our experiments demonstrate the usefulness of the direction-code technique for quantitative analysis of coronary angiography, and the experimental results justify the validity of the proposed method.
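
The 3-bit direction codes referred to above correspond to 8-connected chain coding of the vessel boundary. A minimal sketch of encoding a boundary into such codes (independent of the diameter-measurement step) is given below, under the assumption that the boundary is supplied as an ordered list of neighbouring pixel coordinates.

```python
# 8-connected chain coding: each step between neighbouring boundary pixels is
# one of 8 directions and therefore fits in 3 bits.
DIRECTION_CODES = {
    (1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
    (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7,
}

def chain_code(boundary):
    """Encode an ordered boundary (list of (x, y) pixels) as 3-bit direction codes."""
    codes = []
    for (x0, y0), (x1, y1) in zip(boundary, boundary[1:]):
        codes.append(DIRECTION_CODES[(x1 - x0, y1 - y0)])
    return codes
```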


Estimating the Position of Mobiles by Multi-Criteria Decision Making

  • Lee, Jong-Chan;Ryu, Byung-Han;Ahn, Jee-Hwan
    • ETRI Journal
    • /
    • v.24 no.4
    • /
    • pp.323-327
    • /
    • 2002
  • In this study, we propose a novel mobile tracking method based on Multi-Criteria Decision Making (MCDM), in which uncertain parameters (the received signal strength, the distance between the mobile and the base station, the moving direction, and the previous location) are used in the decision process through an aggregation function from fuzzy set theory. Numerical results show that the proposed mobile tracking method provides better performance than the conventional method using only the received signal strength.
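
A toy version of such an aggregation step is written below: each candidate location hypothesis receives a score from the fuzzy aggregation of its normalized criteria, here using a weighted-minimum operator. The criteria names, the weights, and the choice of operator are placeholders, not the paper's exact function.

```python
def aggregate(criteria, weights):
    """Fuzzy-style aggregation of membership values in [0, 1] for one candidate.

    criteria : dict of criterion name -> membership value (e.g., signal strength,
               distance, moving direction, previous location), each in [0, 1]
    weights  : dict of criterion name -> importance in [0, 1]
    """
    # Weighted minimum: a weakly weighted criterion cannot drag the score down much.
    return min(max(1.0 - weights[k], criteria[k]) for k in criteria)

def best_candidate(candidates, weights):
    """Pick the candidate (e.g., location hypothesis) with the highest aggregated score."""
    return max(candidates, key=lambda name: aggregate(candidates[name], weights))
```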


One-dimensional Kalman filter for estimating target position (목표물 위치추정을 위한 1차원 Kalman filter)

  • 진강규;하주식
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.10 no.3
    • /
    • pp.119-125
    • /
    • 1986
  • A one-dimensional target tracking problem is presented using a least-squares input estimator and a likelihood ratio technique. A Kalman tracking filter based on a constant-velocity model is used to track the target, and the filtered estimate is updated with an input estimate when a maneuver is detected. Simulation results show significant improvements using the scheme presented here.
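
A minimal constant-velocity Kalman filter for one-dimensional position tracking, of the kind the abstract describes, is sketched below. The noise levels and the omission of the maneuver-detection (input-estimation) update are simplifications made here for illustration.

```python
import numpy as np

def kalman_1d(measurements, dt=1.0, q=0.01, r=1.0):
    """Track 1D position with a constant-velocity Kalman filter.

    measurements : sequence of noisy position observations
    dt, q, r     : sample time, process noise, measurement noise (assumed values)
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity state transition
    H = np.array([[1.0, 0.0]])               # only the position is observed
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])

    x = np.zeros((2, 1))                     # state: [position, velocity]
    P = np.eye(2)
    estimates = []
    for z in measurements:
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        y = np.array([[z]]) - H @ x          # innovation from the new measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ y                        # update
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates
```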


Estimating Facial Feature Position with Matched Filters (Matched Filter를 이용한 얼굴 특징점 위치추출)

  • 황인택;최광남
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2003.10b
    • /
    • pp.565-567
    • /
    • 2003
  • This paper describes research on locating facial feature points using matched filter techniques. The basic goal is to develop filters that can distinguish eight different parts of the face (both eyes and eyebrows, the hairline, nose, mouth, and chin). These matched filters can be obtained from training images using the inverse Fourier transform. The experimental evaluation is based on the face database of the University of Bern. From the experiments, we identify which training images allow the filters to be applied effectively to various face orientations, and we also investigate the best way to recognize faces when glasses are worn.
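
Matched filtering of this kind amounts to correlating the image with a feature template; a minimal frequency-domain sketch is given below. The template is assumed to be derived from training images (e.g., an average eye or mouth patch), which is only a placeholder for the filter construction described in the paper.

```python
import numpy as np

def locate_feature(image, template):
    """Find the position of a facial feature by matched filtering (cross-correlation).

    image, template : 2D float arrays; the template is assumed to come from
                      training images of the feature to be located.
    Returns the (row, col) offset of the best match in the image.
    """
    t = template - template.mean()                     # zero-mean template
    # Cross-correlation via FFT: multiply the image spectrum by the conjugate
    # of the (zero-padded) template spectrum, then invert.
    F_img = np.fft.fft2(image)
    F_tpl = np.fft.fft2(t, s=image.shape)
    corr = np.real(np.fft.ifft2(F_img * np.conj(F_tpl)))
    r, c = np.unravel_index(np.argmax(corr), corr.shape)
    return r, c
```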
