• Title/Summary/Keyword: Image rotation

The Road Traffic Sign Recognition and Automatic Positioning for Road Facility Management (도로시설물 관리를 위한 교통안전표지 인식 및 자동위치 취득 방법 연구)

  • Lee, Jun Seok;Yun, Duk Geun
    • International Journal of Highway Engineering
    • /
    • v.15 no.1
    • /
    • pp.155-161
    • /
    • 2013
  • PURPOSES: This study develops a road traffic sign recognition and automatic positioning method for road facility management. METHODS: We installed a GPS, IMU, DMI, camera, and laser sensor on a van and surveyed the vehicle position, forward-view images, and point clouds of traffic signs. To acquire traffic sign positions automatically, we developed traffic sign recognition software that logs the sign type and its approximate position, and we suggest a methodology to transform the laser point cloud into the map coordinate system with a 3D axis rotation algorithm. RESULTS: On a clear day the traffic sign recognition ratio is 92.98%, and on a cloudy day it is 80.58%. To evaluate the accuracy of the acquired sign positions, this study compared them with road survey results: the RMSE is 0.227 m and the average offset is 1.51 m, which corresponds to the GPS positioning error. Including this error, the traffic sign position can be registered within 1.51 m. CONCLUSIONS: With this method, we can automatically survey the traffic sign type and position, and analyze road safety and speed limit consistency, which can be used in a traffic sign DB.
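
The 3D axis rotation step described above can be sketched as composing roll-pitch-yaw rotations from the IMU and adding the GPS position to bring each laser point into map coordinates. This is a minimal illustration, not the paper's implementation; the angle convention and rotation order are assumptions.

```python
import math

def rotate_xyz(p, roll, pitch, yaw):
    """Apply rotations about the x (roll), y (pitch), and z (yaw) axes to point p."""
    x, y, z = p
    cr, sr = math.cos(roll), math.sin(roll)      # roll about x-axis
    y, z = y * cr - z * sr, y * sr + z * cr
    cp, sp = math.cos(pitch), math.sin(pitch)    # pitch about y-axis
    x, z = x * cp + z * sp, -x * sp + z * cp
    cy, sy = math.cos(yaw), math.sin(yaw)        # yaw about z-axis
    x, y = x * cy - y * sy, x * sy + y * cy
    return (x, y, z)

def laser_to_map(point_vehicle, attitude, gps_position):
    """Transform a laser point from the vehicle frame into map coordinates:
    rotate by the IMU attitude, then translate by the GPS position."""
    roll, pitch, yaw = attitude
    rx, ry, rz = rotate_xyz(point_vehicle, roll, pitch, yaw)
    ex, ey, ez = gps_position
    return (rx + ex, ry + ey, rz + ez)
```

For example, a sign point 1 m ahead of a van heading due east (yaw 90°) at map position (10, 20, 5) lands at map coordinates (10, 21, 5).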

Feasibility of a New Vault Technique through Kinematic Analysis of Yeo 2 and YANG Hak Seon Vaults

  • Song, Joo-Ho;Park, Jong-Hoon;Kim, Jin-Sun
    • Korean Journal of Applied Biomechanics
    • /
    • v.28 no.2
    • /
    • pp.69-78
    • /
    • 2018
  • Objective: The purpose of this study was to investigate the feasibility of a new vault technique through a kinematic comparison of the YANG Hak Seon and Yeo 2 vaults. Method: The photographic images of the YANG Hak Seon and Yeo 2 vaults were collected using a high-speed camera, and their kinematic characteristics were analyzed using three-dimensional image analysis. Results: During the post-flight phase of the Yeo 2 and YANG Hak Seon vaults, the time of flight, height of flight, and flight distance were similar. At the peak of the post-flight phase, the trunk rotation angle of the YANG Hak Seon vault rotated $457^{\circ}$ more than did the Yeo 2 vault. During the post-flight descending period, the twist velocity of the trunk was much faster with the YANG Hak Seon vault ($1,278^{\circ}/s$) than with the Yeo 2 vault ($1,016^{\circ}/s$). Conclusion: To succeed in the new technique, the average twist velocity during post-flight must be maintained at $1,058^{\circ}/s$ and the twist velocity must be increased from the ascending phase.

Hand Gesture Recognition using Optical Flow Field Segmentation and Boundary Complexity Comparison based on Hidden Markov Models

  • Park, Sang-Yun;Lee, Eung-Joo
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.4
    • /
    • pp.504-516
    • /
    • 2011
  • In this paper, we present a method to detect the human hand and recognize hand gestures. To detect the hand region, we use human skin color and a hand feature (boundary complexity) to extract the hand region from the input image, and we use an optical flow algorithm to track the hand movement. Hand gesture recognition is composed of two parts: posture recognition and motion recognition. To describe the hand posture feature, we employ the Fourier descriptor method because it is rotation invariant, and we employ PCA to extract features across the gesture frame sequence. An HMM is finally used to recognize these features and make the final decision on a hand gesture. Experiments show that the proposed method achieves a 99% recognition rate in an environment with a simple background and no face region, falling to 89.5% in an environment with a complex background and a face region. These results illustrate that the proposed algorithm is robust enough to be applied in production.
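
The rotation invariance this abstract relies on comes from the magnitudes of the Fourier descriptors: rotating a contour about the origin multiplies every descriptor by the same unit complex factor, leaving the magnitudes unchanged. A minimal sketch (the contour points and angle are illustrative, not from the paper):

```python
import cmath

def fourier_descriptors(contour):
    """Naive DFT of a closed contour given as complex points z = x + iy."""
    n = len(contour)
    return [sum(z * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, z in enumerate(contour)) / n
            for k in range(n)]

def fd_magnitudes(contour):
    """Rotation-invariant signature: magnitudes of the descriptors."""
    return [abs(c) for c in fourier_descriptors(contour)]

# Rotating the contour multiplies every point (and hence every descriptor)
# by e^{i*theta}, so the magnitudes are unchanged.
contour = [complex(3, 1), complex(1, 4), complex(-2, 2),
           complex(-1, -3), complex(2, -2)]
theta = 0.7
rotated = [z * cmath.exp(1j * theta) for z in contour]
```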

Face Detection using Zernike Moments (Zernike 모멘트를 이용한 얼굴 검출)

  • Lee, Daeho
    • Journal of Korea Multimedia Society
    • /
    • v.10 no.2
    • /
    • pp.179-186
    • /
    • 2007
  • This paper proposes a novel method for face detection using Zernike moments. To detect the faces in an image, local regions in multiscale sliding windows are classified into face and non-face by a neural network whose input features are Zernike moments. The feature dimension is reduced thanks to the reconstruction capability of orthogonal moments. In addition, because the magnitude of a Zernike moment is invariant to rotation, a tilted human face can be detected. Although the detection rate of the proposed method on frontal faces is lower than that of experiments using intensity features, its results on rotated faces are more robust. If additional compensation and features are utilized, the proposed scheme may be well suited to the later stages of classification.
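
The rotation invariance of Zernike moment magnitudes can be checked numerically: rotating the image by an angle theta multiplies the moment Z_{n,m} by e^{-im*theta}, so |Z_{n,m}| is unchanged. The sketch below integrates Z_{1,1} over the unit disk for a test pattern and its rotated copy; the pattern and grid resolution are illustrative assumptions.

```python
import math

def zernike_Z11(f, n=200):
    """Numerically integrate the Zernike moment Z_{1,1} of f over the unit disk.
    The basis function is V_{1,1}(rho, phi) = rho * e^{i*phi} = x + iy,
    and Z = (2/pi) * integral of f * conj(V_{1,1})."""
    step = 2.0 / n
    total = 0 + 0j
    for i in range(n):
        x = -1.0 + (i + 0.5) * step          # pixel-center sampling
        for j in range(n):
            y = -1.0 + (j + 0.5) * step
            if x * x + y * y <= 1.0:
                total += f(x, y) * complex(x, -y)   # f * conj(V_{1,1})
    return (2.0 / math.pi) * total * step * step

theta = 0.6
c, s = math.cos(theta), math.sin(theta)
f = lambda x, y: x                           # an orientation-dependent pattern
f_rot = lambda x, y: c * x + s * y           # the same pattern rotated by theta
```

For this pattern the analytic value is |Z_{1,1}| = 0.5 for both orientations; a tilted face changes the phase of the moment but not the magnitude fed to the classifier.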

270-W 15-kHz MOPA System Based on Side-pumped Rod-type Nd:YAG Gain Modules

  • Cha, Yong-Ho;Yang, Myoung-Yerl;Ko, Kwang-Hoon;Lim, Gwon;Han, Jae-Min;Park, Hyun-Min;Kim, Taek-Soo;Roh, Si-Pyo;Jeong, Do-Young
    • Journal of the Optical Society of Korea
    • /
    • v.12 no.4
    • /
    • pp.298-302
    • /
    • 2008
  • We have developed a 270-W 15-kHz MOPA system based on side-pumped rod-type Nd:YAG gain modules. The master oscillator is a 3-W 15-kHz $TEM_{00}$ $Nd:YVO_4$ laser with a pulse duration of 30 ns. To preserve the high beam quality during the amplification, we use image relay and polarization rotation which can simultaneously compensate for thermal lensing and thermal birefringence generated in the rod-type gain modules. After the amplification to 270 W with six rod-type gain modules, the beam quality factor ($M^2$) of the amplified laser beam is 5-10, and the pulse duration is maintained at 30 ns.
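
The polarization-rotation compensation works because a 90° rotator between two passes through the same birefringent element swaps the fast and slow axes, so the accumulated phase difference cancels. A Jones-calculus sketch of that cancellation (the retardation value is arbitrary; this illustrates the principle, not the actual module layout):

```python
import cmath

def birefringent(delta):
    """Jones matrix of a retarder with phase difference delta between its axes."""
    return [[cmath.exp(1j * delta / 2), 0],
            [0, cmath.exp(-1j * delta / 2)]]

ROT90 = [[0.0, -1.0], [1.0, 0.0]]            # 90-degree polarization rotator

def matmul(a, b):
    """2x2 complex matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

delta = 1.234                                 # arbitrary thermal retardation
# retarder -> 90-degree rotator -> same retarder again
total = matmul(birefringent(delta), matmul(ROT90, birefringent(delta)))
# diag(a, 1/a) * R90 * diag(a, 1/a) collapses to exactly R90:
# the net retardation vanishes regardless of delta.
```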

Mobile Augmented Visualization Technology Using Vive Tracker (포즈 추적 센서를 활용한 모바일 증강 가시화 기술)

  • Lee, Dong-Chun;Kim, Hang-Kee;Lee, Ki-Suk
    • Journal of Korea Game Society
    • /
    • v.21 no.5
    • /
    • pp.41-48
    • /
    • 2021
  • This paper introduces a mobile augmented visualization technology that augments a three-dimensional virtual human body on a mannequin model using two pose (position and rotation) tracking sensors. The conventional camera tracking used for augmented visualization has the disadvantage of failing to compute the camera pose when the camera shakes or moves quickly, because it relies on the camera image; using a pose tracking sensor overcomes this disadvantage. Also, even if the mannequin is moved or rotated, augmented visualization remains possible using the data of the pose tracking sensor attached to the mannequin, and above all there is no computational load for camera tracking.
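
The augmentation step described above amounts to composing the two tracker poses each frame: the virtual body is anchored to the mannequin tracker and re-expressed in the camera tracker's frame. A minimal sketch with rigid transforms as (rotation, translation) pairs; the z-only rotation and function names are simplifying assumptions, not the paper's API.

```python
import math

def pose(yaw, position):
    """A pose as (3x3 rotation about z, translation); z-only keeps the sketch short."""
    c, s = math.cos(yaw), math.sin(yaw)
    R = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    return R, list(position)

def invert(T):
    """Rigid-transform inverse: (R, t)^-1 = (R^T, -R^T t)."""
    R, t = T
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    ti = [-sum(Rt[i][k] * t[k] for k in range(3)) for i in range(3)]
    return Rt, ti

def compose(A, B):
    """(Ra, ta) * (Rb, tb) = (Ra Rb, Ra tb + ta)."""
    Ra, ta = A
    Rb, tb = B
    R = [[sum(Ra[i][k] * Rb[k][j] for k in range(3)) for j in range(3)]
         for i in range(3)]
    t = [sum(Ra[i][k] * tb[k] for k in range(3)) + ta[i] for i in range(3)]
    return R, t

def mannequin_in_camera(T_world_camera, T_world_mannequin):
    """Pose of the mannequin tracker expressed in the camera tracker's frame."""
    return compose(invert(T_world_camera), T_world_mannequin)
```

Because both poses are re-read from the trackers every frame, moving or rotating the mannequin simply changes the inputs; no image-based camera tracking is involved.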

Validation Data Augmentation for Improving the Grading Accuracy of Diabetic Macular Edema using Deep Learning (딥러닝을 이용한 당뇨성황반부종 등급 분류의 정확도 개선을 위한 검증 데이터 증강 기법)

  • Lee, Tae Soo
    • Journal of Biomedical Engineering Research
    • /
    • v.40 no.2
    • /
    • pp.48-54
    • /
    • 2019
  • This paper proposes a method of validation data augmentation for improving the grading accuracy of diabetic macular edema (DME) using deep learning. Data augmentation is normally applied to secure diversity of data by transforming one image into several through random translation, rotation, scaling, and reflection when preparing input data for a deep neural network (DNN). In this paper, we apply this technique in the validation process of the trained DNN, and improve the grading accuracy by combining the classification results of the augmented images. To verify the effectiveness, 1,200 retinal images of the Messidor dataset were divided into training and validation data at a ratio of 7:3. By applying random augmentation to the 359 validation images, an accuracy improvement of $1.61{\pm}0.55%$ was achieved with six-fold augmentation (N=6). This simple method improved accuracy over the range N=2 to 6, with a correlation coefficient of 0.5667. It is therefore expected to help improve the diagnostic accuracy of DME with the grading information provided by the proposed DNN.
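
Combining the classification results of augmented validation images can be sketched as classifying N randomly transformed copies of each image and averaging the class probabilities. In this illustration the augmentations are reduced to 90° rotations and the grader is a stand-in function; both are assumptions, not the paper's model.

```python
import random

def rot90(image):
    """Rotate a 2D list image by 90 degrees."""
    return [list(row) for row in zip(*image[::-1])]

def tta_predict(classify, image, n_aug=6, seed=0):
    """Average class probabilities over n_aug randomly rotated copies (N=6 here)."""
    rng = random.Random(seed)
    probs = None
    for _ in range(n_aug):
        aug = image
        for _ in range(rng.randrange(4)):    # random multiple of 90 degrees
            aug = rot90(aug)
        p = classify(aug)
        probs = p if probs is None else [a + b for a, b in zip(probs, p)]
    return [p / n_aug for p in probs]

def toy_classify(image):
    """Stand-in two-class grader: class-1 probability grows with bright pixels."""
    bright = sum(v > 0 for row in image for v in row) / (len(image) * len(image[0]))
    return [1.0 - bright, bright]
```

The final grade is the argmax of the averaged probabilities, which smooths out orientation-dependent errors of any single prediction.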

Separation of Occluding Pigs using Deep Learning-based Image Processing Techniques (딥 러닝 기반의 영상처리 기법을 이용한 겹침 돼지 분리)

  • Lee, Hanhaesol;Sa, Jaewon;Shin, Hyunjun;Chung, Youngwha;Park, Daihee;Kim, Hakjae
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.2
    • /
    • pp.136-145
    • /
    • 2019
  • The crowded environment of a domestic pig farm is highly vulnerable to the spread of infectious diseases such as foot-and-mouth disease, and studies have been conducted to automatically analyze the behavior of pigs in a crowded pig farm through a camera-based video surveillance system. Although occluding pigs must be correctly separated to track each individual pig, extracting the boundaries of occluding pigs quickly and accurately is challenging due to complicated occlusion patterns such as X and T shapes. In this study, we propose a fast and accurate method to separate occluding pigs by exploiting the speed of the deep learning-based object detector You Only Look Once (YOLO), while overcoming its limitation as a bounding box-based detector with test-time data augmentation by rotation. Experimental results with two-pig occlusion patterns show that the proposed method provides better accuracy and processing speed than Mask R-CNN, one of the most widely used state-of-the-art deep learning-based segmentation techniques (the improvement over Mask R-CNN was about 11-fold in terms of the combined accuracy/processing-speed metric).
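
Test-time rotation augmentation for a box detector amounts to detecting on a rotated copy of the frame and mapping the boxes back to the original coordinates; for a 90° clockwise rotation of a w x h image the mapping is closed-form. A minimal sketch of just that geometry (the detector itself is omitted, and the coordinate convention is an assumption):

```python
def rotate_point_cw(x, y, w, h):
    """Position of pixel (x, y) of a w x h image after a 90-deg clockwise rotation
    (the rotated image is h x w): (x', y') = (h - 1 - y, x)."""
    return h - 1 - y, x

def box_back_from_cw(box, w, h):
    """Map a box (x1, y1, x2, y2) detected on the rotated image back to the
    original frame by inverting (x', y') = (h - 1 - y, x)."""
    x1, y1, x2, y2 = box
    return (y1, h - 1 - x2, y2, h - 1 - x1)
```

Boxes from the upright and rotated passes can then be intersected in the original frame, giving a tighter region for each pig than a single axis-aligned box.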

An Application of Deep Clustering for Abnormal Vessel Trajectory Detection (딥 클러스터링을 이용한 비정상 선박 궤적 식별)

  • Park, Heon-Jei;Lee, Jun Woo;Kyung, Ji Hoon;Kim, Kyeongtaek
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.44 no.4
    • /
    • pp.169-176
    • /
    • 2021
  • Maritime monitoring requirements have grown beyond human operators' capabilities due to the breadth of the coverage area and the variety of monitoring activities, e.g. illegal migration or security threats by foreign warships. Abnormal vessel movement can be defined as an unreasonable deviation from the usual trajectory, speed, or other traffic parameters. Detecting abnormal vessel movement requires operators not only to pay short-term attention but also to trace long-term trajectories. Recent advances in deep learning have shown its potential to discover hidden, more complex relations that often lie in low-dimensional latent spaces. In this paper, we propose a deep autoencoder-based clustering model for automatic detection of vessel movement anomalies, to assist monitoring operators in deciding which vessels to investigate further. We first generate gridded trajectory images by mapping the raw vessel trajectories into a two-dimensional matrix. Based on the gridded image input, we test the proposed model along with other deep autoencoder-based models on abnormal trajectory data generated through rotation and speed variation of normal trajectories. We show that the proposed model improves detection accuracy on the generated abnormal trajectories compared to the other models.
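
The gridded-trajectory-image step can be sketched as binning each (x, y) fix of a trajectory into a fixed-size occupancy matrix; the grid size and coordinate bounds below are illustrative assumptions, not the paper's settings.

```python
def trajectory_to_grid(points, bounds, size=8):
    """Map raw (x, y) trajectory fixes into a size x size binary occupancy matrix."""
    x_min, y_min, x_max, y_max = bounds
    grid = [[0] * size for _ in range(size)]
    for x, y in points:
        # clamp the last bin so points on the upper bound stay inside the grid
        col = min(int((x - x_min) / (x_max - x_min) * size), size - 1)
        row = min(int((y - y_min) / (y_max - y_min) * size), size - 1)
        grid[row][col] = 1
    return grid
```

The resulting matrix is the image fed to the autoencoder; a rotated or speed-varied trajectory produces an occupancy pattern whose latent representation falls outside the clusters of normal traffic.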

Using CNN- VGG 16 to detect the tennis motion tracking by information entropy and unascertained measurement theory

  • Zhong, Yongfeng;Liang, Xiaojun
    • Advances in nano research
    • /
    • v.12 no.2
    • /
    • pp.223-239
    • /
    • 2022
  • Object detection seeks objects with particular properties or representations and predicts details about them, including position, size, and angle of rotation in the current picture; it is a very important subject in computer vision. While vision-based object tracking strategies for the analysis of competitive videos have been developed, it is still difficult to accurately identify and position a speedy small ball. In this study, a deep learning (DL) network was developed to face these obstacles in tennis motion tracking from a complex perspective, to understand the performance of athletes. This research uses CNN-VGG 16 to track the tennis ball in broadcast videos, where the ball's image is distorted, small, and often invisible, not only to identify the ball in a single frame but also to learn patterns from consecutive frames; VGG 16 takes images of size 640x360 to locate the ball and obtains high accuracy on public videos, with accuracies of 99.6%, 96.63%, and 99.5%, respectively. To avoid overfitting, 9 additional videos and a subset of the previous dataset were partly labelled for 10-fold cross-validation. The results show that CNN-VGG 16 outperforms the standard approach by a wide margin and provides excellent ball tracking performance.
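
Learning patterns from consecutive frames, as described above, is commonly done by feeding the network a short sliding window of frames rather than a single one, so ball motion is visible even when an individual frame is blurred. A minimal sketch of that windowing (the window length is an assumption, not the paper's setting):

```python
def frame_windows(frames, length=3):
    """Group a frame sequence into overlapping windows of consecutive frames;
    each window becomes one multi-frame input to the tracking network."""
    return [frames[i:i + length] for i in range(len(frames) - length + 1)]
```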