• Title/Summary/Keyword: 연속 촬영 영상


Visualization and Analysis of the Dynamic Behavior of Splashes and Residuals of Droplets Continuously Colliding with a Vertical Wall (수직벽으로 연속 충돌하는 액적들의 비산/잔류 동적 거동 가시화 및 분석 연구)

  • Jaehyeon Noh;Hoonseok Lee;Taeyeong Park;Seungho Kim
    • Journal of the Korean Society of Visualization
    • /
    • v.22 no.2
    • /
    • pp.82-89
    • /
    • 2024
  • In this study, experiments were conducted to visualize and analyze the dynamic characteristics of splash and residual liquid film formation during and after the injection of water droplets onto vertically situated solid substrates with varying surface wettability, elasticity, and microtexture. As wettability decreased (higher contact angle), more splash droplets formed, and the residual liquid film decreased. Low contact angles resulted in thin residual films and less splash. Surface elasticity absorbed the impact forces of droplets, thereby decreasing splash phenomena and significantly reducing the formation of residual liquid films due to surface vibration. Surfaces with microtextures demonstrated control over droplet splash direction, guiding the liquid along desired pathways. High-speed imaging provided detailed insights, showing that surface properties critically influence splash dynamics and residual liquid film formation.

Segmentation of Target Objects Based on Feature Clustering in Stereoscopic Images (입체영상에서 특징의 군집화를 통한 대상객체 분할)

  • Jang, Seok-Woo;Choi, Hyun-Jun;Huh, Moon-Haeng
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.13 no.10
    • /
    • pp.4807-4813
    • /
    • 2012
  • Since existing methods of segmenting target objects from various images mainly use 2-dimensional features, they suffer from several constraints due to the lack of 3-dimensional information. In this paper, we therefore propose a new method of accurately segmenting target objects from stereoscopic images using 2D and 3D feature clustering. The suggested method first estimates depth features from the left and right images with a stereo matching technique; these features represent the distance between the camera and an object. It then eliminates background areas and detects foreground areas, namely target objects, by effectively clustering depth and color features. To verify the performance of the proposed method, we applied our approach to various stereoscopic images and found that it detects target objects more accurately than existing 2-dimensional methods.
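As a rough illustration of the clustering step described above (not the authors' implementation; the function name, deterministic initialization, and two-cluster assumption are ours), the sketch below runs k-means over per-pixel depth/color features and takes the cluster with the smaller mean depth as the foreground:

```python
import numpy as np

def segment_nearest_cluster(depth, color, iters=20):
    """Two-cluster k-means over per-pixel (depth, color) features.
    The cluster with the smaller mean depth is returned as the
    foreground (target object) mask. Illustrative sketch only."""
    h, w = depth.shape
    feats = np.column_stack([depth.ravel().astype(float),
                             color.reshape(-1, color.shape[-1]).astype(float)])
    # deterministic init: the nearest and farthest pixels seed the clusters
    centers = feats[[feats[:, 0].argmin(), feats[:, 0].argmax()]].copy()
    for _ in range(iters):
        dist = np.linalg.norm(feats[:, None, :] - centers[None], axis=2)
        labels = dist.argmin(axis=1)
        for c in (0, 1):
            if np.any(labels == c):
                centers[c] = feats[labels == c].mean(axis=0)
    fg_cluster = centers[:, 0].argmin()  # smaller depth = closer to the camera
    return labels.reshape(h, w) == fg_cluster
```

A real pipeline would first compute the depth map with stereo matching; here it is taken as given.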

Water Segmentation Based on Morphologic and Edge-enhanced U-Net Using Sentinel-1 SAR Images (형태학적 연산과 경계추출 학습이 강화된 U-Net을 활용한 Sentinel-1 영상 기반 수체탐지)

  • Kim, Hwisong;Kim, Duk-jin;Kim, Junwoo
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.5_2
    • /
    • pp.793-810
    • /
    • 2022
  • Synthetic Aperture Radar (SAR) is considered suitable for near real-time inundation monitoring. The distinctly different backscatter intensity of water and land makes SAR adequate for waterbody detection, but the intrinsic speckle noise and variable intensity of SAR images decrease detection accuracy. In this study, we propose two modules, a 'morphology module' and an 'edge-enhanced module', built from combinations of pooling and convolutional layers, which improve the accuracy of waterbody detection. The morphology module is composed of min-pooling and max-pooling layers, which reproduce the effect of morphological transformations. The edge-enhanced module is composed of convolution layers with the fixed weights of a traditional edge detection operator. After comparing the accuracy of various versions of each module on U-Net, we found the optimal combination to be a morphology module of min-pooling followed by successive min-pooling and max-pooling layers, together with an edge-enhanced module using the Scharr filter, fed as inputs to conv9. This morphologic and edge-enhanced U-Net improved the F1-score by 9.81% over the original U-Net. Qualitative inspection showed that our model can detect small waterbodies and detailed water edges, which are the distinct advances of the model presented in this research compared to the original U-Net.
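The pooling-based morphology idea can be sketched in NumPy (our own naming and a stride-1, 3x3 configuration chosen for illustration, not the paper's code): min-pooling acts as grayscale erosion, max-pooling as dilation, and a fixed-weight Scharr convolution supplies the edge map.

```python
import numpy as np

def max_pool3(img):
    """3x3 max-pooling with stride 1 (grayscale dilation)."""
    p = np.pad(img, 1, mode="edge")
    return np.max([p[i:i + img.shape[0], j:j + img.shape[1]]
                   for i in range(3) for j in range(3)], axis=0)

def min_pool3(img):
    """3x3 min-pooling (grayscale erosion) = max-pooling on the negated map."""
    return -max_pool3(-img)

def opening(img):
    """Morphological opening: erosion then dilation, removing small speckles."""
    return max_pool3(min_pool3(img))

def scharr_edges(img):
    """Fixed-weight 3x3 Scharr convolution, as in an edge-enhanced module."""
    kx = np.array([[3, 0, -3], [10, 0, -10], [3, 0, -3]], dtype=float)
    ky = kx.T
    p = np.pad(img, 1, mode="edge")
    win = np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                    for i in range(3) for j in range(3)], axis=-1)
    gx = win @ kx.ravel()
    gy = win @ ky.ravel()
    return np.hypot(gx, gy)
```

Because the pooling layers have no learnable weights and the Scharr kernel is frozen, both modules add prior knowledge to the network without adding trainable parameters.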

A Vision System for Traffic Sign Recognition (교통표지판 인식을 위한 비젼시스템)

  • 남기환;배철수
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.8 no.2
    • /
    • pp.471-476
    • /
    • 2004
  • This paper presents an active vision system for on-line traffic sign recognition. The system is composed of two cameras, one equipped with a wide-angle lens and the other with a telephoto lens, and a PC with an image processing board. The system first detects candidates for traffic signs in the wide-angle image using color, intensity, and shape information. For each candidate, the telephoto camera is directed to its predicted position to capture the candidate at a large size in the image. The recognition algorithm makes intensive use of the built-in functions of an off-the-shelf image processing board to realize both easy implementation and fast recognition. The results of on-road experiments show the feasibility of the system.
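A crude sketch of the color-based candidate step (the thresholds, the red-only focus, and the function name are our assumptions; the paper also uses intensity and shape cues):

```python
import numpy as np

def red_sign_candidates(rgb):
    """Flag strongly red pixels and return the bounding box of the
    candidate region as (y0, x0, y1, x1), or None if nothing is found.
    Thresholds are illustrative, not from the paper."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    mask = (r > 120) & (r > 1.5 * g) & (r > 1.5 * b)
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return (int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max()))
```

The box found in the wide-angle image would then give the direction in which to point the telephoto camera.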

Pseudo Image Composition and Sensor Models Analysis of SPOT Satellite Imagery of Non-Accessible Area (비접근 지역에 대한 SPOT 위성영상의 Pseudo영상 구성 및 센서모델 분석)

  • 방기인;조우석
    • Proceedings of the KSRS Conference
    • /
    • 2001.03a
    • /
    • pp.140-148
    • /
    • 2001
  • The satellite sensor model is typically established using ground control points acquired by ground survey or from existing topographic maps. In cases where the target area cannot be accessed and topographic maps are not available, it is difficult to obtain ground control points, so geospatial information cannot be derived from the satellite image. This paper presents several satellite sensor models and satellite image decomposition methods for non-accessible areas where ground control points can hardly be acquired in conventional ways. First, 10 different satellite sensor models, extended from the collinearity condition equations, were developed, and the behavior of each sensor model was investigated. Secondly, satellite images were decomposed and pseudo images were generated. The satellite sensor model extended from the collinearity equations represents the six exterior orientation parameters as 1st-, 2nd-, and 3rd-order functions of satellite image row. Among them, the rotational angle parameters ω (omega) and φ (phi), which correlate highly with the positional parameters, could be assigned constant values. For non-accessible areas, satellite images were decomposed, meaning that two consecutive images were combined into one image consisting of one satellite image with ground control points and another without. In addition, a pseudo image, an imaginary image bridging two consecutive images, was prepared from one satellite image with ground control points and one without. For the experiments, SPOT satellite images covering similar areas on different passes were used. In conclusion, the 10 satellite sensor models and 5 decomposition methods delivered different levels of accuracy. Among them, the satellite camera model with 1st-order functions of image row for the positional orientation parameters and the rotational angle kappa, and constant rotational angles omega and phi, provided the best result, a 60 m maximum error at check points with the pseudo image arrangement.
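Schematically, the best-performing configuration can be written as follows (parameter names and the coefficient layout are our own; this is not the paper's code):

```python
def exterior_orientation(row, c):
    """Sensor-model sketch: positional parameters (Xs, Ys, Zs) and kappa
    vary as 1st-order functions of image row, while omega and phi are
    held constant -- the configuration reported to give the best accuracy.
    `c` maps each linear parameter to an (intercept, slope) pair and
    omega/phi to constants."""
    eo = {p: c[p][0] + c[p][1] * row for p in ("Xs", "Ys", "Zs", "kappa")}
    eo["omega"], eo["phi"] = c["omega"], c["phi"]
    return eo
```

Fixing omega and phi removes the two parameters that correlate most strongly with the positional terms, which is what makes the remaining adjustment stable.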


Development of Intelligent Multiple Camera System for High-Speed Impact Experiment (고속충돌 시험용 지능형 다중 카메라 시스템 개발)

  • Chung, Dong Teak;Park, Chi Young;Jin, Doo Han;Kim, Tae Yeon;Lee, Joo Yeon;Rhee, Ihnseok
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.37 no.9
    • /
    • pp.1093-1098
    • /
    • 2013
  • A single-crystal sapphire is used as a transparent bulletproof window material; however, few studies have investigated its dynamic behavior and fracture properties under high-speed impact. High-speed, high-resolution sequential images are required to study the interaction of a bullet with brittle ceramic materials. In this study, a device was developed to capture the sequence of high-speed impact/penetration phenomena. The system consists of a speed measurement device, a microprocessor-based camera controller, and multiple CCD cameras. Using a linear array sensor, the speed-measuring device can measure a small (diameter: 1~2 mm) and fast (up to Mach 3) bullet. Once a bullet is launched, it passes through the speed measurement device, where its passage time and speed are recorded; the camera controller then computes the exact time of arrival at the target during flight. It then sends trigger signals to the cameras and flashes with specific delays to capture the impact images sequentially. It is almost impossible to capture high-speed images without this estimation of the time of arrival. We were able to capture high-speed images with the new system with precise accuracy.
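The timing computation at the heart of the controller is simple arithmetic; a minimal sketch (names and the staggering scheme are our assumptions, not the paper's firmware):

```python
def camera_trigger_delays(speed_mps, gate_to_target_m, frame_offsets_s):
    """Estimate the bullet's time of arrival at the target and derive
    per-camera trigger delays measured from the speed-gate crossing.

    speed_mps        bullet speed measured at the light gate (m/s)
    gate_to_target_m distance from the speed gate to the target (m)
    frame_offsets_s  desired capture instants relative to impact (s)
    """
    t_arrival = gate_to_target_m / speed_mps
    return [t_arrival + dt for dt in frame_offsets_s]

# e.g. a roughly Mach-3 bullet (~1029 m/s), 2 m from gate to target,
# four cameras staggered 20 us apart starting at impact
delays = camera_trigger_delays(1029.0, 2.0, [0.0, 20e-6, 40e-6, 60e-6])
```

Because the flight time from gate to target is only a couple of milliseconds, each camera must be armed before launch and fired from these precomputed delays.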

Color2Gray using Conventional Approaches in Black-and-White Photography (전통적 사진 기법에 기반한 컬러 영상의 흑백 변환)

  • Jang, Hyuk-Su;Choi, Min-Gyu
    • Journal of the Korea Computer Graphics Society
    • /
    • v.14 no.3
    • /
    • pp.1-9
    • /
    • 2008
  • This paper presents a novel optimization-based, saliency-preserving method for converting color images to grayscale in a manner consistent with the conventional approaches of black-and-white photographers. In black-and-white photography, a colored filter called a contrast filter is commonly mounted on the camera to lighten or darken selected colors. In addition, local exposure controls such as dodging and burning are typically employed in the darkroom to change the exposure of local areas within the print without affecting the overall exposure. Our method seeks a digital version of a conventional contrast filter that preserves visually important image features. Furthermore, conventional burning and dodging techniques are addressed, together with image similarity weights, to give edge-aware local exposure control over the image space. Our method can be efficiently optimized on the GPU: according to the experiments, a CUDA implementation converts 1-megapixel color images to grayscale at interactive frame rates.
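In its simplest form, a contrast filter reduces to a weighted channel mix (a sketch with hypothetical weights, not the paper's optimized filter, which is found by saliency-preserving optimization):

```python
import numpy as np

def filtered_grayscale(rgb, weights):
    """Grayscale conversion through a digital 'contrast filter':
    per-channel weights lighten or darken selected colors, like a
    colored filter mounted on a black-and-white film camera."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()          # normalize so overall exposure is preserved
    return np.clip(rgb.astype(float) @ w, 0.0, 255.0)

# a 'red filter' (heavy red weight) renders red regions light, blue dark
img = np.zeros((2, 2, 3))
img[0, 0] = [200.0, 0.0, 0.0]   # red pixel
img[1, 1] = [0.0, 0.0, 200.0]   # blue pixel
gray = filtered_grayscale(img, [0.8, 0.1, 0.1])
```

The paper's contribution is choosing such weights automatically (plus local dodging/burning); the fixed weights here only illustrate the effect.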


Camera Monitoring of Topographical Changes of Daehang-ri Intertidal Flat Outside Semangeum Sea Dike No.1. (새만금 1호 방조제 외측 대항리 조간대 갯벌 지형 변화에 대한 영상 관측)

  • Kim, Tae-Rim;Park, Seoc-Kwang
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.21 no.6
    • /
    • pp.453-461
    • /
    • 2009
  • Camera monitoring of topographical changes of the intertidal flat was performed at the Daehang-ri mud flat outside Semangeum sea dike No. 1, where the creation of a mud flat was reported after sea dike construction. Ground survey on the mud flat is often limited to point or line measurements because of the difficulty of walking and the limitation of working hours imposed by the flood/ebb cycle. This study exploits the nature of the tide: the water lines moving across the intertidal flat during a flood delineate depth contours between low and high tide. Ground coordinates of the water lines extracted from consecutive images of the intertidal flat are calculated, and topographic information is acquired by integrating all the water line data. Analysis of six camera monitoring datasets between September 2005 and September 2009 shows 0.127 m of deposition per year on average, with spatial and temporal variation of deposition/erosion.
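The integration step above can be sketched very simply: each image-derived water line inherits the tide level at its capture time, and stacking the lines yields a contour map (names and data layout are our assumptions):

```python
def waterlines_to_contours(waterlines, tide_levels):
    """Assign each image-derived water line the tide height at its
    capture time; together the tagged lines form depth contours of
    the intertidal flat.

    waterlines  list of lists of (x, y) ground coordinates, one per image
    tide_levels tide height (m) at each image's capture time
    """
    contours = []
    for line, level in zip(waterlines, tide_levels):
        contours.extend((x, y, level) for (x, y) in line)
    return contours
```

The hard part in practice is the photogrammetric rectification from pixel to ground coordinates, which is assumed done here.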

An Effective Detection Algorithm of Shot Boundaries in Animations (애니메이션의 효과적인 장면경계 검출 알고리즘)

  • Jang, Seok-Woo;Jung, Myung-Hee
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.12 no.8
    • /
    • pp.3670-3676
    • /
    • 2011
  • A cel animation is drawn over a single background cel, so there is a large difference between images when its shot changes; it also contains relatively few colors, since it is drawn by hand. To effectively detect shot transitions in cel animations while fully considering these intrinsic characteristics, in this paper we propose an animation shot boundary detection algorithm that utilizes color and block-based histograms step by step. The suggested algorithm first converts the RGB color space into the HSI color space and coarsely decides whether adjacent frames contain a shot transition by computing the color difference between the two images. If they are considered to contain a shot transition candidate, we calculate color histograms for 9 sub-regions of the adjacent images and apply weights to them. Finally, we determine whether there is a real shot transition by analyzing the weighted sum of the histogram differences. In experiments, we show that our method is superior to others.
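The two-step decision can be sketched as follows (a grayscale simplification under our own thresholds; the paper works in HSI color and tunes its weights, so everything here is illustrative):

```python
import numpy as np

def is_shot_boundary(frame_a, frame_b, n_blocks=3,
                     coarse_thr=40.0, fine_thr=0.4, weights=None):
    """Step 1: coarse mean absolute difference between adjacent frames.
    Step 2 (only for candidates): weighted sum of per-block (3x3 = 9
    sub-regions) histogram differences."""
    if np.abs(frame_a.astype(float) - frame_b.astype(float)).mean() < coarse_thr:
        return False                      # cheap rejection of most frame pairs
    h, w = frame_a.shape[:2]
    if weights is None:
        weights = np.ones((n_blocks, n_blocks)) / n_blocks**2
    score = 0.0
    for i in range(n_blocks):
        for j in range(n_blocks):
            sl = (slice(i * h // n_blocks, (i + 1) * h // n_blocks),
                  slice(j * w // n_blocks, (j + 1) * w // n_blocks))
            ha, _ = np.histogram(frame_a[sl], bins=16, range=(0, 256), density=True)
            hb, _ = np.histogram(frame_b[sl], bins=16, range=(0, 256), density=True)
            # normalized L1 histogram distance in [0, 1] per block
            score += weights[i, j] * 0.5 * np.abs(ha - hb).sum() * (256 / 16)
    return bool(score > fine_thr)
```

The block weights let central regions, where animation action usually happens, count more than the corners.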

Moving Object Extraction and Relative Depth Estimation of Background Regions in Video Sequences (동영상에서 물체의 추출과 배경영역의 상대적인 깊이 추정)

  • Park Young-Min;Chang Chu-Seok
    • The KIPS Transactions:PartB
    • /
    • v.12B no.3 s.99
    • /
    • pp.247-256
    • /
    • 2005
  • One of the classic research problems in computer vision is that of stereo, i.e., the reconstruction of three dimensional shape from two or more images. This paper deals with the problem of extracting depth information of non-rigid dynamic 3D scenes from general 2D video sequences taken by monocular camera, such as movies, documentaries, and dramas. Depth of the blocks are extracted from the resultant block motions throughout following two steps: (i) calculation of global parameters concerned with camera translations and focal length using the locations of blocks and their motions, (ii) calculation of each block depth relative to average image depth using the global parameters and the location of the block and its motion, Both singular and non-singular cases are experimented with various video sequences. The resultant relative depths and ego-motion object shapes are virtually identical to human vision.