• Title/Summary/Keyword: 6차원 거리 (6-dimensional distance)

Search Results: 259

Performance of a 3-Dimensional Signal Transmission System (3차원 신호 전송시스템의 성능)

  • Kwon, Hyeock Chan;Kang, Seog Geun
    • Journal of the Korea Institute of Information and Communication Engineering, v.20 no.11, pp.2021-2026, 2016
  • In this paper, a system model for the transmission of 3-dimensional (3-D) signals is presented and its performance is analyzed. Unlike 2-D signals, no quadrature-form expression is available for 3-D signals, so the 3-D signals are transmitted by exploiting a set of orthogonal basis functions. Computer simulations with very high-order signal constellations show that the 3-D transmission system has significantly better error performance than the 2-D system. The principal reason for this improvement is considered to be the much larger minimum Euclidean distance (MED) of the 3-D lattice constellations compared with the corresponding 2-D ones. When the MEDs of the 2-D and 3-D lattice constellations are compared to confirm the analysis, the MED of the 3-D 1024-ary constellation is around 2.6 times that of quadrature amplitude modulation (QAM). When the constellation size is expanded to 4096, the MED of the 3-D lattice constellation is 3.2 times that of QAM.
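As an illustration of the MED comparison this abstract describes, the sketch below builds small odd-integer lattice constellations (64 points rather than the paper's 1024 or 4096), normalizes each to unit average energy, and computes the minimum pairwise Euclidean distance. The sizes and the normalization are illustrative assumptions, not the paper's exact setup.

```python
import itertools
import math

def lattice(levels, dims):
    # Odd-integer grid: levels=4 gives {-3, -1, 1, 3} on each axis
    axis = [2 * i - levels + 1 for i in range(levels)]
    return list(itertools.product(axis, repeat=dims))

def normalized_med(points):
    # Scale the constellation to unit average energy, then take
    # the minimum pairwise Euclidean distance
    avg_energy = sum(sum(c * c for c in p) for p in points) / len(points)
    scale = 1.0 / math.sqrt(avg_energy)
    return scale * min(math.dist(p, q)
                       for p, q in itertools.combinations(points, 2))

med_2d = normalized_med(lattice(8, 2))  # 64-point square QAM
med_3d = normalized_med(lattice(4, 3))  # 64-point cubic lattice
print(med_3d / med_2d)  # ratio > 1: the 3-D lattice has the larger MED
```

Even at this small size, packing the same number of points into three dimensions leaves more room between neighbours at equal average power, which is the effect the paper quantifies for 1024- and 4096-ary constellations.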

6D ICP Based on Adaptive Sampling of Color Distribution (색상분포에 기반한 적응형 샘플링 및 6차원 ICP)

  • Kim, Eung-Su;Choi, Sung-In;Park, Soon-Yong
    • KIPS Transactions on Software and Data Engineering, v.5 no.9, pp.401-410, 2016
  • 3D registration is a computer vision technique for aligning multi-view range images with respect to a reference coordinate system. Various 3D registration algorithms have been introduced over the past few decades. Iterative Closest Point (ICP) is one of the most widely used, and many modified versions are available nowadays. In ICP-based algorithms, the closest points are taken as the corresponding points. However, this assumption fails to find matching points accurately when the initial pose between the point clouds is not sufficiently close. In this paper, we propose a new method that addresses this problem using a 6D distance combining the 3D Euclidean distance with a 3D color-space distance. Moreover, a color-segmentation-based adaptive sampling technique is used to reduce the computational time and improve the registration accuracy. Several experiments are performed to evaluate the proposed method, and the results show that it outperforms conventional methods.
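A minimal sketch of the 6D distance idea: each point carries geometry (x, y, z) and color (r, g, b), and correspondence search uses the combined metric. The weight `w` balancing the two terms is a hypothetical parameter for illustration, not taken from the paper.

```python
import math

def dist6(p, q, w=1.0):
    # p, q: (x, y, z, r, g, b); w is a hypothetical weight balancing
    # the color term against the geometric term
    geo = sum((a - b) ** 2 for a, b in zip(p[:3], q[:3]))
    col = sum((a - b) ** 2 for a, b in zip(p[3:], q[3:]))
    return math.sqrt(geo + w * col)

def closest_point(p, cloud, w=1.0):
    # Correspondence search: nearest neighbour under the 6-D metric
    return min(cloud, key=lambda q: dist6(p, q, w))

# A red source point matches the slightly farther red candidate,
# not the geometrically closer blue one
src = (0.0, 0.0, 0.0, 1.0, 0.0, 0.0)
cloud = [(0.1, 0.0, 0.0, 0.0, 0.0, 1.0),   # near but blue
         (0.3, 0.0, 0.0, 1.0, 0.0, 0.0)]   # farther but red
print(closest_point(src, cloud))
```

With `w=0` the metric degenerates to plain ICP's 3D closest point, which is exactly the failure mode the color term is meant to correct when the initial pose is poor.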

Distance Estimation Using Convolutional Neural Network in UWB Systems (UWB 시스템에서 합성곱 신경망을 이용한 거리 추정)

  • Nam, Gyeong-Mo;Jung, Tae-Yun;Jung, Sunghun;Jeong, Eui-Rim
    • Journal of the Korea Institute of Information and Communication Engineering, v.23 no.10, pp.1290-1297, 2019
  • This paper proposes a distance estimation technique for ultra-wideband (UWB) systems using a convolutional neural network (CNN). To estimate the distance between the transmitter and the receiver, a one-dimensional vector consisting of the magnitudes of the received samples is reshaped into a two-dimensional matrix, and the distance is estimated from this matrix by a CNN regressor. The received signals for CNN training are generated with the IEEE 802.15.4a UWB channel model, and the CNN model is trained on them. The received signals for testing are then obtained from field experiments in indoor environments, and the distance estimation performance is verified. The proposed technique is also compared with an existing threshold-based method. According to the results, the proposed CNN-based technique is superior to the conventional method; specifically, it shows a root mean square error (RMSE) of 0.6 m at a distance of 10 m, while the conventional technique shows a much worse RMSE of 1.6 m.
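The reshaping step the abstract describes can be sketched as follows; the 32x32 matrix size and the zero-padding policy are illustrative assumptions, since the paper's exact input dimensions are not given here.

```python
import numpy as np

def to_cnn_input(rx_samples, rows=32, cols=32):
    # Magnitudes of the received (complex) samples -> fixed-size 2-D matrix
    mag = np.abs(np.asarray(rx_samples))
    flat = np.zeros(rows * cols)
    n = min(mag.size, rows * cols)
    flat[:n] = mag[:n]            # zero-pad or truncate to fit
    # Shape (1, rows, cols, 1): batch and channel axes for a CNN regressor
    return flat.reshape(rows, cols)[np.newaxis, :, :, np.newaxis]

x = to_cnn_input(np.exp(1j * np.linspace(0, 8, 500)))
print(x.shape)  # (1, 32, 32, 1)
```

The resulting tensor can be fed to any 2-D CNN regressor; only the magnitude of each sample survives, which matches the abstract's description of the input.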

Object and Pose Recognition with Boundary Extraction from 3 Dimensional Depth Information (3 차원 거리 정보로부터 물체 윤곽추출에 의한 물체 및 자세 인식)

  • Gim, Seong-Chan;Yang, Chang-Ju;Lee, Jun-Ho;Kim, Jong-Man;Kim, Hyoung-Suk
    • Journal of the Institute of Electronics Engineers of Korea SC, v.48 no.6, pp.15-23, 2011
  • A method for precise three-dimensional distance measurement and object recognition using a single camera, in the manner of a stereo vision approach, is proposed. Precise three-dimensional information about objects is obtained using a single camera, a laser light, and a rotating flat mirror. With a simple thresholding operation on the depth information, the objects can be segmented. By comparing the signatures of object boundaries with a database, the objects can be recognized. Simulation results showing object recognition improved by precise distance measurement are presented.

3D Terrain Reconstruction Using 2D Laser Range Finder and Camera Based on Cubic Grid for UGV Navigation (무인 차량의 자율 주행을 위한 2차원 레이저 거리 센서와 카메라를 이용한 입방형 격자 기반의 3차원 지형형상 복원)

  • Joung, Ji-Hoon;An, Kwang-Ho;Kang, Jung-Won;Kim, Woo-Hyun;Chung, Myung-Jin
    • Journal of the Institute of Electronics Engineers of Korea SC, v.45 no.6, pp.26-34, 2008
  • Traversability information and path planning are essential for UGV (Unmanned Ground Vehicle) navigation, and such information can be obtained by analyzing 3D terrain. In this paper, we present a method for 3D terrain modeling that combines color information from a camera, precise distance information from a 2D Laser Range Finder (LRF), and wheel encoder information from a mobile robot, using less data. We also present a method for 3D terrain modeling from GPS/IMU (Inertial Measurement Unit) and 2D LRF data. To fuse the color information from the camera with the distance information from the 2D LRF, we obtain the extrinsic parameters between the camera and the LRF using a planar pattern. We set up the fused system on a mobile robot and conduct experiments in an indoor environment, and we conduct outdoor experiments to reconstruct 3D terrain with the 2D LRF and GPS/IMU. The obtained 3D terrain model is point-based and requires a large amount of data, so to reduce the amount of data we use a cubic-grid-based model instead of a point-based one.
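The point-based-to-cubic-grid reduction at the end of this abstract amounts to keeping one representative per occupied cube. The sketch below (cell size and the first-point-wins policy are assumptions for illustration) shows the data reduction effect.

```python
def cubic_grid(points, cell):
    # Keep one representative 3-D point per occupied cube of side `cell`,
    # reducing a point-based terrain model to a cubic-grid-based one
    grid = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell), int(z // cell))
        grid.setdefault(key, (x, y, z))  # first point in the cube wins
    return list(grid.values())

dense = [(i * 0.01, 0.0, 0.0) for i in range(1000)]  # 1000 points along 10 m
print(len(cubic_grid(dense, cell=0.5)))  # 20 occupied cells remain
```

Averaging the points per cube instead of keeping the first would be an equally valid representative choice; either way the storage drops from one record per range point to one per occupied cube.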

Interactive Facial Expression Animation of Motion Data using CCA (CCA 투영기법을 사용한 모션 데이터의 대화식 얼굴 표정 애니메이션)

  • Kim Sung-Ho
    • Journal of Internet Computing and Services, v.6 no.1, pp.85-93, 2005
  • This paper describes how to distribute a vast quantity of high-dimensional facial expression data over a suitable space and produce facial expression animations by selecting expressions while an animator navigates this space in real time. We constructed facial expression spaces using about 2400 facial expression frames. These spaces are created by calculating the shortest distance between every pair of expressions. The distance between two points in the expression space, which is a manifold, is approximated as follows: each facial state is described by an expression state vector derived from the matrix of inter-marker distances; when the linear distance between two expressions is shorter than a chosen threshold, the expressions are considered adjacent, and this linear distance is taken as their manifold distance. Once the distances between adjacent expressions are determined, the Floyd algorithm is applied to the graph of adjacent distances to yield the shortest (manifold) distance between any two expressions. We use the CCA (Curvilinear Component Analysis) technique to project the multi-dimensional expression space into two dimensions for visualization. While animators navigate this two-dimensional space, they produce facial animations through the user interface in real time.
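The adjacency-plus-Floyd construction of manifold distances described here can be sketched directly; the toy one-dimensional "expressions" and threshold below are illustrative only.

```python
import math

def manifold_distances(points, eps):
    # Edge between expressions whose linear distance is below eps;
    # Floyd-Warshall then yields geodesic (manifold) distances
    n = len(points)
    INF = float("inf")
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
        for j in range(i + 1, n):
            lin = math.dist(points[i], points[j])
            if lin < eps:
                d[i][j] = d[j][i] = lin
    for k in range(n):          # Floyd-Warshall relaxation
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Three collinear states: the two ends are not directly adjacent
# (1.8 >= eps), so their manifold distance runs through the middle state
d = manifold_distances([(0.0,), (0.9,), (1.8,)], eps=1.0)
print(d[0][2])
```

With 2400 frames the O(n^3) Floyd-Warshall pass is the dominant cost; the resulting geodesic distance matrix is what CCA then projects to two dimensions.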


Real-time Localization of An UGV based on Uniform Arc Length Sampling of A 360 Degree Range Sensor (전방향 거리 센서의 균일 원호길이 샘플링을 이용한 무인 이동차량의 실시간 위치 추정)

  • Park, Soon-Yong;Choi, Sung-In
    • Journal of the Institute of Electronics Engineers of Korea CI, v.48 no.6, pp.114-122, 2011
  • We propose an automatic localization technique based on Uniform Arc Length Sampling (UALS) of 360-degree range sensor data. The proposed method samples 3D points from a dense point cloud acquired by the sensor, registers the sampled points to a digital surface model (DSM) in real time, and determines the location of an Unmanned Ground Vehicle (UGV). To reduce the sampling and registration time for a sequence of dense range data, the 3D range points are sampled uniformly in terms of ground sample distance. Using the proposed method, we can reduce the number of 3D points while maintaining their uniformity over the range data. We compare the registration speed and accuracy of the proposed method with those of a conventional sampling method, and through several experiments with varying numbers of sampling points we analyze its speed and accuracy.
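A simplified stand-in for sampling "uniformly in terms of ground sample distance": greedily keep a scan point only when it is at least one ground sample distance from the last kept point. This is an illustrative reduction of UALS, not the paper's exact procedure.

```python
import math

def sample_by_ground_distance(points, gsd):
    # Greedy thinning of an ordered scan: keep a point only when it is
    # at least `gsd` away from the last kept point
    kept = [points[0]]
    for p in points[1:]:
        if math.dist(p, kept[-1]) >= gsd:
            kept.append(p)
    return kept

scan = [(0.1 * i, 0.0) for i in range(11)]   # 11 points spaced 0.1 m
print(len(sample_by_ground_distance(scan, gsd=0.25)))
```

The effect matches the abstract's goal: near the sensor, where angularly uniform scans are dense, most points are dropped, while sparse far-range points are retained, so the kept set is roughly uniform over the ground.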

3-dimensional Coordinate Measurement by Pulse Magnetic Field Method (자기적 방법을 이용한 3차원 좌표 측정)

  • Im, Y.B.;Cho, Y.;Herr, H.B.;Son, D.
    • Journal of the Korean Magnetics Society, v.12 no.6, pp.206-211, 2002
  • We have constructed a new kind of magnetic motion capture sensor based on the pulse magnetic field method. Three orthogonal magnetic pulse fields were generated in turn, each lasting only one period of a sinusoidal waveform, using three orthogonal magnetic dipole coils, a ring counter, and an analog multiplier. These pulse magnetic fields were measured with three orthogonal search coils, whose voltages induced by the x-, y-, and z-dipole sources were sampled with an S/H amplifier at the instant of maximum induced voltage. Using the developed motion capture sensor, we can measure the sensor position with an uncertainty of ±0.5% over the measuring range from ±0.5 m to ±1.5 m.

Autonomous Driving Platform using Hybrid Camera System (복합형 카메라 시스템을 이용한 자율주행 차량 플랫폼)

  • Eun-Kyung Lee
    • The Journal of the Korea institute of electronic communication sciences, v.18 no.6, pp.1307-1312, 2023
  • In this paper, we propose a hybrid camera system that combines cameras with different focal lengths and a LiDAR (Light Detection and Ranging) sensor to address the core components of autonomous driving perception: object recognition and distance measurement. Using the proposed hybrid camera system, we extract objects within the scene and generate precise location and distance information for them. For object recognition we employ the YOLOv7 algorithm, widely used in the field of autonomous driving for its fast computation, high accuracy, and real-time processing. We then use the multi-focal cameras to create depth maps from which object positions and distance information are generated. To enhance distance accuracy, we integrate the 3D distance information obtained from the LiDAR sensor into the generated depth maps. This paper thus introduces an autonomous vehicle platform that perceives its surroundings more accurately during operation based on the proposed hybrid camera system and provides precise 3D spatial location and distance information, which we anticipate will improve the safety and efficiency of autonomous vehicles.
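The depth-map/LiDAR integration step can be sketched as a simple fusion rule: wherever a (sparse) LiDAR return exists, prefer it over the dense but less accurate camera-derived depth. The arrays, the zero-means-no-return convention, and the hard replacement policy are illustrative assumptions, not the paper's actual fusion algorithm.

```python
import numpy as np

def fuse_depth(stereo_depth, lidar_depth):
    # Where a sparse LiDAR measurement exists (> 0), trust it over the
    # multi-focal camera estimate; elsewhere keep the dense camera depth
    fused = stereo_depth.copy()
    valid = lidar_depth > 0
    fused[valid] = lidar_depth[valid]
    return fused

stereo = np.full((4, 4), 10.2)          # dense but less accurate depth map
lidar = np.zeros((4, 4))                # 0 = no LiDAR return at that pixel
lidar[1, 2] = 9.87                      # sparse but precise measurement
print(fuse_depth(stereo, lidar)[1, 2])
```

A production system would interpolate or propagate the sparse LiDAR depths rather than replace single pixels, but the principle of anchoring the camera depth map to LiDAR measurements is the same.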