• Title/Summary/Keyword: camera image


Research on a Method for the Optical Measurement of the Rifling Angle of Artillery Based on Angle Error Correction

  • Zhang, Ye; Zheng, Yang
    • Current Optics and Photonics, v.4 no.6, pp.500-508, 2020
  • The rifling angle of artillery is an important parameter, and its determination plays a key role in the stability, hit rate, accuracy, and service life of artillery. In this study, we propose an optical measurement method for the rifling angle based on angle error correction. The method rests on the principle of geometrical-optics imaging: the rifling on the inner wall of the artillery barrel is imaged onto a CCD camera target surface by an optical system. As the measurement system moves along the barrel, the rifling image rotates accordingly. From the relationship between the rotation angle of the rifling image and the travel distance of the measurement system, equations are established for different types of rifling, and equations for solving the rifling angle are deduced from its definition. Furthermore, we add an angle error correction function to the method, based on the theory of dynamic optics, which measures and corrects the angle error caused by posture changes of the measurement system; the rifling angle measurement accuracy is thereby effectively improved. Finally, we simulate and analyze the influence of parameter changes of the measurement system on rifling angle measurement accuracy. The simulation results show that the method has high measurement accuracy and can be applied to different types of rifling. The method provides a theoretical basis for the development of a high-precision rifling measurement system in the future.
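
As a rough illustration of the geometric relationship the abstract describes (not the authors' code; the constant-twist assumption and all parameter values are made up), the sketch below uses the standard helix relation tan(alpha) = r * dtheta/dz to recover a rifling angle from sampled image-rotation data.

```python
# Hypothetical illustration: for constant-twist rifling, theta(z) = k*z and the
# rifling angle alpha satisfies tan(alpha) = r * dtheta/dz, where r is the bore
# radius and z the travel distance of the measurement system.
import numpy as np

def rifling_angle_deg(z_mm, theta_rad, bore_radius_mm):
    """Estimate the rifling angle from sampled image-rotation data.

    z_mm           : travel distances of the measurement system (1-D array)
    theta_rad      : rotation angles of the rifling image at those positions
    bore_radius_mm : radius of the barrel bore
    """
    # Least-squares slope dtheta/dz (valid for constant-twist rifling)
    k = np.polyfit(z_mm, theta_rad, 1)[0]
    return np.degrees(np.arctan(bore_radius_mm * k))

# Example with invented numbers: 6 degrees of image rotation over 100 mm of
# travel and a 77.5 mm bore radius.
z = np.linspace(0.0, 100.0, 11)
theta = np.radians(6.0) * z / 100.0
print(rifling_angle_deg(z, theta, 77.5))
```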

SPECKLE IMAGING TECHNIQUE FOR LUNAR SURFACES

  • Kim, Jinkyu; Sim, Chae Kyung; Jeong, Minsup; Moon, Hong-Kyu; Choi, Young-Jun; Kim, Sungsoo S.; Jin, Ho
    • Journal of The Korean Astronomical Society, v.55 no.4, pp.87-97, 2022
  • Polarimetric measurements of the lunar surface from lunar orbit will soon be available via the Wide-Field Polarimetric Camera (PolCam) onboard the Korea Pathfinder Lunar Orbiter (KPLO), which is planned to be launched in mid-2022. To provide calibration data for PolCam, we are conducting speckle polarimetric measurements of the nearside of the Moon from the ground. Speckle imaging of the Moon for scientific purposes appears not to have been attempted before, so a procedure is needed to create a "lucky image" from a number of observed speckle images. As a first step toward obtaining calibration data for PolCam from the ground, we search for the best sharpness measure for lunar surfaces. We then calculate the minimum number of speckle images and the number of images to be shifted and added to obtain higher resolution (sharpness) and signal-to-noise ratio.
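
A minimal sketch of the generic lucky-imaging procedure the abstract refers to; the gradient-energy sharpness measure and FFT cross-correlation alignment used here are assumptions for illustration, not the measure the paper ultimately selects.

```python
# Rank speckle frames by a sharpness measure, keep the sharpest fraction,
# align them to the sharpest frame by integer-pixel cross-correlation, and
# shift-and-add the aligned frames.
import numpy as np

def sharpness(img):
    # Gradient energy: one of several possible sharpness measures.
    gy, gx = np.gradient(img.astype(float))
    return np.mean(gx**2 + gy**2)

def shift_and_add(frames, keep_fraction=0.1):
    frames = [f.astype(float) for f in frames]
    ranked = sorted(frames, key=sharpness, reverse=True)
    best = ranked[: max(1, int(len(ranked) * keep_fraction))]
    ref = best[0]
    stack = np.zeros_like(ref)
    for f in best:
        # Circular cross-correlation via FFT; the peak gives the integer shift
        # that aligns f with the reference frame.
        corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(f)))
        dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
        stack += np.roll(f, shift=(dy, dx), axis=(0, 1))
    return stack / len(best)
```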

Traffic Signal Detection and Recognition Using a Color Segmentation in a HSI Color Model (HSI 색상 모델에서 색상 분할을 이용한 교통 신호등 검출과 인식)

  • Jung, Min Chul
    • Journal of the Semiconductor & Display Technology, v.21 no.4, pp.92-98, 2022
  • This paper proposes a new method for traffic signal detection and recognition in the HSI color model. The proposed method first converts an ROI image from the RGB model to the HSI model to segment the colors of a traffic signal. Second, the segmented colors are dilated by morphological processing to connect the traffic signal light with its case, and finally the signal light and case are extracted by their aspect ratio using connected component analysis. The extracted components yield the detection and recognition of the traffic signal lights. The proposed method is implemented in C on a Raspberry Pi 4 system with a camera module for real-time image processing. The system was mounted in a moving vehicle and recorded video like a vehicle black box. Each frame of the recorded video was extracted, and the proposed method was tested on it. The results show that the proposed method successfully detects and recognizes traffic signals.
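
A hedged sketch of the described pipeline, using OpenCV's HSV space as a stand-in for the paper's HSI model; the color thresholds, area, and aspect-ratio limits are illustrative assumptions that would need tuning per camera.

```python
# Segment signal colors, dilate to join the lamp with its housing, then filter
# connected components by size and aspect ratio.
import cv2
import numpy as np

def detect_red_green_signals(bgr_roi):
    hsv = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2HSV)
    # Illustrative thresholds (OpenCV hue range is 0-179); red wraps around 0.
    red = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255)) | \
          cv2.inRange(hsv, (170, 120, 120), (180, 255, 255))
    green = cv2.inRange(hsv, (45, 100, 100), (90, 255, 255))
    boxes = []
    for mask, label in ((red, "red"), (green, "green")):
        # Morphological dilation connects the lamp region with its case.
        mask = cv2.dilate(mask, np.ones((7, 7), np.uint8), iterations=2)
        n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
        for i in range(1, n):  # label 0 is the background
            x, y, w, h, area = stats[i]
            if area > 100 and 0.5 < w / h < 2.0:  # roughly square lamp blob
                boxes.append((label, (x, y, w, h)))
    return boxes
```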

Anomaly detection of isolating switch based on single shot multibox detector and improved frame differencing

  • Duan, Yuanfeng; Zhu, Qi; Zhang, Hongmei; Wei, Wei; Yun, Chung Bang
    • Smart Structures and Systems, v.28 no.6, pp.811-825, 2021
  • High-voltage isolating switches play a paramount role in ensuring the safety of power supply systems. However, their exposure to outdoor environmental conditions may cause serious physical defects, which pose great risk to power supply systems and society. Image processing-based methods have been used for anomaly detection, but their accuracy is affected by numerous uncertainties arising from manually extracted features, which makes anomaly detection of isolating switches still challenging. In this paper, a vision-based anomaly detection method for isolating switches is proposed that uses the rotational angle of the switch system for more accurate and direct anomaly detection, combining deep learning (DL) and image processing methods: a Single Shot Multibox Detector (SSD), an improved frame differencing method, and the Hough transform. The SSD is a deep learning method for object classification and localization; the improved frame differencing method is introduced for better feature extraction; and the Hough transform is adopted for calculating the rotational angle. A number of experiments are conducted for anomaly detection of single and multiple switches using video frames. The results demonstrate that the SSD outperforms the You-Only-Look-Once network. The effectiveness and robustness of the proposed method are verified under various conditions, such as different illumination levels and camera locations, using 96 videos from the experiments.
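
An illustrative sketch (not the authors' implementation) of the frame-differencing and Hough-transform stage: the difference between two frames isolates the moving switch arm, and the strongest Hough line in each masked frame gives an angle whose change approximates the arm's rotation. Thresholds are assumptions.

```python
import cv2
import numpy as np

def arm_angle_deg(gray_frame):
    # Angle of the strongest straight line (the switch arm) in the frame.
    edges = cv2.Canny(gray_frame, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 80)
    if lines is None:
        return None
    theta = lines[0][0][1]            # angle of the line's normal, in radians
    return np.degrees(theta)

def rotation_between(frame_a, frame_b, diff_threshold=25):
    a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    # Frame differencing keeps only the moving part (the rotating arm).
    moving = cv2.threshold(cv2.absdiff(a, b), diff_threshold, 255,
                           cv2.THRESH_BINARY)[1]
    masked_a = cv2.bitwise_and(a, a, mask=moving)
    masked_b = cv2.bitwise_and(b, b, mask=moving)
    angle_a, angle_b = arm_angle_deg(masked_a), arm_angle_deg(masked_b)
    if angle_a is None or angle_b is None:
        return None
    return angle_b - angle_a
```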

Vignetting Dimensional Geometric Models and a Downhill Simplex Search

  • Kim, Hyung Tae; Lee, Duk Yeon; Choi, Dongwoon; Kang, Jaehyeon; Lee, Dong-Wook
    • Current Optics and Photonics, v.6 no.2, pp.161-170, 2022
  • Three-dimensional (3D) geometric models are introduced to correct vignetting, and a downhill simplex search is applied to determine the coefficients of a 3D model used in digital microscopy. Vignetting is nonuniform illuminance with a geometric regularity on a two-dimensional (2D) image plane, which allows the illuminance distribution to be estimated using 3D models. The 3D models are defined using generalized polynomials and arbitrary coefficients. Because the 3D models are nonlinear, their coefficients are determined using a simplex search. The cost function of the simplex search is defined to minimize the error between the 3D model and the reference image of a standard white board. The conventional and proposed methods for correcting the vignetting are used in experiments on four inspection systems based on machine vision and microscopy. The methods are investigated using various performance indices, including the coefficient of determination, the mean absolute error, and the uniformity after correction. The proposed method is intuitive and shows performance similar to the conventional approach, using a smaller number of coefficients.
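
A minimal sketch of the fitting step, assuming a second-order polynomial surface as the vignetting model and SciPy's Nelder-Mead routine as the downhill simplex search; the actual model family and cost function in the paper may differ.

```python
# Fit a polynomial illuminance surface to a flat-field reference image of a
# standard white board, then use the fitted surface to flatten new images.
import numpy as np
from scipy.optimize import minimize

def poly_surface(coeffs, x, y):
    c0, c1, c2, c3, c4, c5 = coeffs
    return c0 + c1 * x + c2 * y + c3 * x * x + c4 * x * y + c5 * y * y

def fit_vignetting(reference):
    h, w = reference.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    x, y = x / w, y / h                      # normalized coordinates
    target = reference.astype(float)

    def cost(coeffs):
        # Mean squared error between the model and the reference image.
        return np.mean((poly_surface(coeffs, x, y) - target) ** 2)

    init = np.array([target.mean(), 0, 0, 0, 0, 0])
    res = minimize(cost, init, method="Nelder-Mead",
                   options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-6})
    return res.x

def correct(image, coeffs):
    h, w = image.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    model = poly_surface(coeffs, x / w, y / h)
    # Rescale so that the darkest-modelled regions are boosted toward the peak.
    return image.astype(float) * model.max() / np.maximum(model, 1e-6)
```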

Automated Print Quality Assessment Method for 3D Printing AI Data Construction

  • Yoo, Hyun-Ju; Moon, Nammee
    • Journal of Information Processing Systems, v.18 no.2, pp.223-234, 2022
  • The evaluation of print quality in 3D printing has traditionally relied on manual dimensional measurements. However, dimensional measurement yields errors that depend on the person performing the measurement. Therefore, we propose the design of a new print quality measurement method that can be applied automatically using the field-of-view (FOV) model and the intersection over union (IoU) technique. First, the height information of the model is acquired from a camera, the printed output is measured by a sensor, and images of the top and isometric views are acquired from the FOV model. From the height information, the height ratio is calculated as the percentage of the printed output relative to the model, and the 2D contours of the object are compared on the images obtained from the FOV model. The contour of the object is extracted from each image for 2D contour comparison, and the IoU is calculated by comparing the areas of the contour regions. The print quality value of the automated measurement technique is obtained by averaging the IoU value, corrected for the measurement error, and the height ratio value.
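
A hedged sketch of the two measures described above: an IoU between the contour regions extracted from two grayscale views and a height ratio, combined here by a simple average (the equal weighting and the Otsu thresholding are assumptions).

```python
import cv2
import numpy as np

def contour_mask(gray):
    # Otsu threshold the 8-bit grayscale view, then fill the outer contours.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros(gray.shape, np.uint8)
    cv2.drawContours(mask, contours, -1, 255, thickness=cv2.FILLED)
    return mask > 0

def print_quality(model_view, printed_view, model_height, printed_height):
    m = contour_mask(model_view)
    p = contour_mask(printed_view)
    iou = np.logical_and(m, p).sum() / max(np.logical_or(m, p).sum(), 1)
    height_ratio = min(printed_height, model_height) / max(printed_height,
                                                           model_height)
    return 0.5 * (iou + height_ratio)   # assumed equal weighting
```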

Design and Implementation of Smart Pen based User Interface System for U-learning (U-Learning 을 위한 스마트펜 인터페이스 시스템 디자인 및 개발)

  • Shim, Jae-Youen; Kim, Seong-Whan
    • Annual Conference of KIPS, 2010.11a, pp.1388-1391, 2010
  • In this paper, we present the design and implementation of a U-learning system using a pen-based augmented reality approach. Each student is given a smart pen and a smart study book, which is similar to printed material already in service; however, we print the study book using CMY inks and embed perceptually invisible dot patterns using K ink. The smart pen includes (1) an IR LED for illumination, (2) an IR-pass filter for extracting the dot patterns, and (3) a camera for image capture. From the image sequences, we perform topology analysis, which determines the topological distance between dot pixels, and perform error-correction decoding using four position symbols and five CRC symbols. When a student touches a smart study book with the smart pen, we show him/her multimedia (visual/audio) information that is exactly related to the selected region. Our scheme can embed 16 bits of information, more than twice the capacity of previous schemes, which support 7 or 8 bits.
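
A hypothetical sketch of the first decoding stage described above: extracting dot centres from an IR frame and measuring nearest-neighbour distances, which a decoder would then map to data, position, and CRC symbols (the symbol mapping and CRC check themselves are not shown).

```python
import cv2
import numpy as np

def dot_centres(ir_frame, threshold=60):
    # Bright dots against a dark background in the IR-filtered frame.
    _, binary = cv2.threshold(ir_frame, threshold, 255, cv2.THRESH_BINARY)
    n, _, _, centroids = cv2.connectedComponentsWithStats(binary)
    return centroids[1:]               # drop the background component

def neighbour_distances(centres):
    # Topological distance proxy: each dot's distance to its nearest neighbour.
    d = np.linalg.norm(centres[:, None, :] - centres[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1)
```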

Sensor System for Autonomous Mobile Robot Capable of Floor-to-floor Self-navigation by Taking On/off an Elevator (엘리베이터를 통한 층간 이동이 가능한 실내 자율주행 로봇용 센서 시스템)

  • Min-ho Lee; Kun-woo Na; Seungoh Han
    • Journal of Sensor Science and Technology, v.32 no.2, pp.118-123, 2023
  • This study presents a sensor system for an autonomous mobile robot capable of floor-to-floor self-navigation. The robot was modified from the Turtlebot3 hardware platform and runs ROS2 (Robot Operating System 2). It uses the Navigation2 package to estimate and calibrate its path while acquiring a map with SLAM (simultaneous localization and mapping). For elevator boarding, ultrasonic sensor data are compared against a threshold distance to determine whether the elevator door is open. The current floor of the elevator is determined by image processing of a ceiling-fixed camera capturing the elevator's LCD (liquid crystal display)/LED (light emitting diode) display. To realize seamless communication at any spot in the building, a LoRa (long-range) communication module was installed on the robot to support it in deciding whether the elevator door is open, when to get off the elevator, and how to reach the destination.
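
A minimal sketch of the door-state decision described above, comparing ultrasonic range readings against a threshold distance; the threshold value and the debounce count are assumed, not taken from the paper.

```python
from collections import deque

DOOR_OPEN_THRESHOLD_M = 1.5     # assumed threshold distance
CONSECUTIVE_READINGS = 5        # debounce against single noisy samples

recent = deque(maxlen=CONSECUTIVE_READINGS)

def door_is_open(range_m: float) -> bool:
    """Feed each new ultrasonic reading; True once the door is judged open."""
    recent.append(range_m)
    return (len(recent) == CONSECUTIVE_READINGS and
            all(r > DOOR_OPEN_THRESHOLD_M for r in recent))
```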

Generating 3D Digital Twins of Real Indoor Spaces based on Real-World Point Cloud Data

  • Wonseop Shin; Jaeseok Yoo; Bumsoo Kim; Yonghoon Jung; Muhammad Sajjad; Youngsup Park; Sanghyun Seo
    • KSII Transactions on Internet and Information Systems (TIIS), v.18 no.8, pp.2381-2398, 2024
  • The construction of virtual indoor spaces is crucial for the development of metaverses, virtual production, and other 3D content domains. Traditional methods for creating these spaces are often cost-prohibitive and labor-intensive. To address these challenges, we present a pipeline for generating digital twins of real indoor environments from RGB-D camera-scanned data. Our pipeline combines space structure estimation, 3D object detection, and the inpainting of missing areas, utilizing deep learning technologies to automate the creation process. Specifically, we apply deep learning models for object recognition and area inpainting, significantly enhancing the accuracy and efficiency of virtual space construction. Our approach minimizes manual labor and reduces costs, paving the way for the creation of metaverse spaces that closely mimic real-world environments. Experimental results demonstrate the effectiveness of our deep learning applications in overcoming traditional obstacles in digital twin creation, offering high-fidelity digital replicas of indoor spaces. This advancement opens the way for immersive and realistic virtual content creation, showcasing the potential of deep learning in the field of virtual space construction.
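
As an illustrative sketch of the kind of first step such a pipeline relies on (standard pinhole back-projection, not the authors' code), the function below converts an RGB-D frame into a colored point cloud given assumed camera intrinsics fx, fy, cx, cy.

```python
import numpy as np

def depth_to_point_cloud(depth_m, rgb, fx, fy, cx, cy):
    """Back-project a depth image (meters) and its RGB frame into 3D points."""
    h, w = depth_m.shape
    v, u = np.mgrid[0:h, 0:w]
    z = depth_m.astype(float)
    valid = z > 0                       # skip pixels with no depth reading
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x[valid], y[valid], z[valid]], axis=-1)
    colors = rgb[valid].reshape(-1, 3)
    return points, colors
```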

Estimation of Local Strain Distribution of Shear-Compressive Failure Type Beam Using Digital Image Processing Technology (화상계측기법에 의한 전단압축파괴형 보의 국부변형률분포 추정)

  • Kwon, Yong-Gil; Han, Sang-Hoon; Hong, Ki-Nam
    • Journal of the Korea Concrete Institute, v.21 no.2, pp.121-127, 2009
  • The failure behavior of RC structures is strongly affected by the size of the failure zone and the local strain distribution within it, owing to strain localization in tension-softening materials. However, it is very difficult to quantify and assess the local strain occurring in the failure zone with conventional test methods. In this study, image processing technology, which can measure strain up to the complete failure of RC structures, was used to estimate the local strain distribution and the size of the failure zone. To verify the reliability and validity of the image processing technology, the strain transition acquired by image processing was compared with strain values measured by concrete gauges on uniaxial compressive specimens. Based on this verification, the size and local strain distribution of the failure zone of a deep beam were measured using image processing. From the test results, the principal tensile/compressive strain contours were drawn. Using these strain contours, the size of the failure zone and the local strain distribution at failure of the deep beam were evaluated. The strain contours show that image processing technology can be used to assess the failure behavior of deep beams and to obtain local strain values in the post-peak failure regime.
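
A hedged sketch of the post-processing implied above: given displacement fields u and v measured on an image grid (for example, by image-based correlation of successive frames), the standard small-strain formulas give the strain components and the principal tensile/compressive strains used for contour plotting. This is generic strain algebra, not the authors' code.

```python
import numpy as np

def principal_strains(u, v, spacing=1.0):
    """Small-strain principal values from displacement fields on a grid.

    u, v    : displacement components (2-D arrays, image coordinates)
    spacing : grid spacing in the same length unit as the displacements
    """
    du_dy, du_dx = np.gradient(u, spacing)
    dv_dy, dv_dx = np.gradient(v, spacing)
    exx, eyy = du_dx, dv_dy
    exy = 0.5 * (du_dy + dv_dx)
    centre = 0.5 * (exx + eyy)
    radius = np.sqrt((0.5 * (exx - eyy)) ** 2 + exy ** 2)
    # Principal tensile and compressive strains (Mohr's circle).
    return centre + radius, centre - radius
```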