• Title/Summary/Keyword: Image Edge


Smart Mirror for Facial Expression Recognition Based on Convolution Neural Network (컨볼루션 신경망 기반 표정인식 스마트 미러)

  • Choi, Sung Hwan;Yu, Yun Seop
• Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.05a
    • /
    • pp.200-203
    • /
    • 2021
  • This paper introduces a smart mirror technology that recognizes a person's facial expression through image classification, one of several artificial intelligence technologies, and presents the result in the mirror. Five types of facial expression images are used to train the model. When someone looks at the smart mirror, it recognizes the user's expression and displays the recognized result in the mirror. The fer2013 dataset provided by Kaggle, which contains many people's faces labeled by facial expression, was used. For image classification, the network is trained using a convolutional neural network (CNN). The face is recognized and the result is presented on the screen of the smart mirror using an embedded board such as a Raspberry Pi 4.
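The classification pipeline described above can be sketched in miniature. The block below is an illustrative numpy-only forward pass (convolution → ReLU → global average pooling → softmax over 5 expression classes); the paper's actual trained CNN architecture and weights are not given here, so all kernels and weights are random placeholders:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution of a single-channel image: the core CNN operation."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def classify_expression(img, kernels, weights):
    """Toy forward pass: conv -> ReLU -> global average pool -> dense softmax."""
    feats = np.array([np.maximum(conv2d(img, k), 0.0).mean() for k in kernels])
    return softmax(weights @ feats)   # probabilities over the 5 expressions

rng = np.random.default_rng(0)
img = rng.random((48, 48))                # fer2013 images are 48x48 grayscale
kernels = rng.standard_normal((8, 3, 3))  # placeholder filters (untrained)
weights = rng.standard_normal((5, 8))     # 5 expression classes
probs = classify_expression(img, kernels, weights)
```

In a real system the kernels and weights would be learned from fer2013 by backpropagation; this sketch only shows the shape of the inference path.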


The Construction Method of Precise DTM of UAV Images Using Sobel-median Filtering (소벨-메디언 필터링을 이용한 UAV 영상의 정밀 DTM 구축 방법에 관한 연구)

  • Na, Young-Woo
    • Journal of Urban Science
    • /
    • v.12 no.2
    • /
    • pp.43-52
    • /
    • 2023
  • UAVs have the disadvantage of being vulnerable to rainfall and wind because of their light platforms, so the Scale-Invariant Feature Transform (SIFT) method, which extracts keypoints, is used in the image matching process. To find an efficient filtering method for constructing a precise Digital Terrain Model (DTM) from UAV images, Sobel and Difference of Gaussian (DoG) filtering were comparatively analyzed, and Sobel was found to be the more efficient way to extract buildings, trees, and similar objects. Edges are extracted more clearly when a median filter, which preserves edges while eliminating noise, is additionally applied. In this study, Sobel-median filtering, which adds a median filter to the Sobel filter, was applied to construct a first filtered DTM that removes buildings and trees and a second filtered DTM that removes cars by thresholding the gradient. Analysis of the accuracy improvement showed that the standard deviations of the first and second filtered DTMs are 0.32 m and 0.287 m respectively; both are within the 0.33 m tolerance for elevation points of a 1/1,000 digital map, and the accuracy was increased by about 10% by filtering out automobiles. In addition, moving objects change their position and direction in every image, and they need not be filtered because the SIFT method already excludes them.
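A minimal sketch of the Sobel-median idea follows, assuming one plausible ordering (median smoothing for edge-preserving noise removal, then Sobel gradient magnitude, then a gradient threshold to mask non-ground objects); the paper's exact ordering, window sizes, and thresholds are not stated here:

```python
import numpy as np
from scipy import ndimage

def sobel_median_edges(dsm, median_size=3):
    """Median filter (edge-preserving noise removal), then Sobel gradient magnitude."""
    smoothed = ndimage.median_filter(dsm, size=median_size)
    gx = ndimage.sobel(smoothed, axis=1)
    gy = ndimage.sobel(smoothed, axis=0)
    return np.hypot(gx, gy)

def remove_objects(dsm, grad, threshold):
    """Mask cells whose gradient exceeds the threshold (buildings, trees, cars) as no-data."""
    return np.where(grad > threshold, np.nan, dsm)
```

Applying `remove_objects` once with a high threshold (buildings, trees) and again with a lower one (cars) mirrors the paper's two-stage first/second filtered DTM idea.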

Experimental study of turbulent flow in a scaled RPV model by PIV technology

  • Luguo Liu;Wenhai Qu;Yu Liu;Jinbiao Xiong;Songwei Li;Guangming Jiang
    • Nuclear Engineering and Technology
    • /
    • v.56 no.7
    • /
    • pp.2458-2473
    • /
    • 2024
  • The turbulent flow in the reactor pressure vessel (RPV) of a pressurized water reactor (PWR) is important for the flow rate distribution at the core inlet, so it is vital to study turbulent flow phenomena in the RPV. However, the complicated fluid channels formed by the inner structures of the RPV block or refract the laser sheet of particle image velocimetry (PIV). In this work, matched index of refraction (MIR) between sodium iodide (NaI) solution and acrylic was applied to maintain the optical path for PIV flow field measurements in a 1/10th scaled-down RPV model. The experimental results show the detailed velocity field at different locations inside the model. Some interesting phenomena are observed, including non-negligible counterflow at the corner of the nozzle edge, a strong downward stream in the downcomer, and large vortices above the vortex suppression plate in the lower plenum; the intensity of the counterflow and the strength of the vortices increase as the inlet flow rate increases. Finally, a case of asymmetric flow was also studied. Its turbulent flow pattern differs from the case of symmetrical inlet flow rates, which may affect the uniformity of the flow distribution at the core inlet.

A Study on the Extraction of a River from the RapidEye Image Using ISODATA Algorithm (ISODATA 기법을 이용한 RapidEye 영상으로부터 하천의 추출에 관한 연구)

  • Jo, Myung-Hee
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.15 no.4
    • /
    • pp.1-14
    • /
    • 2012
  • A river is defined as the watercourse flowing through its channel, and mapping a river plays an important role in research on topographic changes in riparian zones and on monitoring of flooding in its floodplain. However, ground surveying technologies are not efficient for river mapping tasks because of the irregular surfaces of riparian zones and the dynamic changes in river water level. Recently, spatial information datasets have been widely used for such mapping tasks because they provide topographic information without requiring human access. In this research, we tried to extract a river from RapidEye imagery by using the ISODATA (Iterative Self-Organizing Data Analysis) classification algorithm with two different inputs: the NIR (Near Infra-Red) band and the NDVI (Normalized Difference Vegetation Index). First, the NIR band image and the NDVI image were generated from the RapidEye imagery. Second, the ISODATA algorithm was applied to each image, and a river was extracted from each through post-processing steps. River boundaries were also extracted from each classified image using the Sobel edge detection algorithm. Ground truth determined by an experienced expert was used to assess the accuracy of each extracted river. Statistical results show that the river extracted using the NIR band has higher accuracy than the river extracted using the NDVI.
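The NDVI input and the unsupervised water/land split can be sketched as follows. Note that full ISODATA adds class splitting and merging heuristics on top of the k-means-style iteration shown, so this two-class version is only a simplified stand-in:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red): water is strongly negative, vegetation positive."""
    return (nir - red) / (nir + red + 1e-12)

def two_class_cluster(values, iters=20):
    """Minimal unsupervised two-class split (k-means style). Full ISODATA also
    splits and merges classes between iterations based on variance and distance."""
    centers = np.array([values.min(), values.max()], dtype=float)
    for _ in range(iters):
        labels = (np.abs(values - centers[0]) > np.abs(values - centers[1])).astype(int)
        for c in (0, 1):
            if np.any(labels == c):
                centers[c] = values[labels == c].mean()
    return labels, centers
```

On NDVI values, the low cluster corresponds to water and the high cluster to vegetated land; a Sobel operator on the resulting class map would then trace the river boundary, as in the paper.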

Image Separation of Talker from a Background by Differential Image and Contours Information (차영상 및 윤곽선에 의한 배경에서 화자분리)

  • Park Jong-Il;Park Young-Bum;Yoo Hyun-Joong
    • The KIPS Transactions:PartB
    • /
    • v.12B no.6 s.102
    • /
    • pp.671-678
    • /
    • 2005
  • In this paper, we suggest an algorithm that extracts the important object from motion pictures and then replaces the background with arbitrary images. The suggested technique can be used not only for protecting privacy and reducing the size of the data to be transferred by removing the background of each frame, but also for replacing the background with a user-selected image in video communication systems, including mobile phones. Because of the relatively large size of image data, digital image processing usually consumes considerable resources such as memory and CPU time, which can cause trouble especially for mobile video phones with restricted resources. In our experiments, we could reduce the time and memory required for processing the images by restricting the search area to the vicinity of the major object's contour found in the previous frame, based on the fact that the movement of the major object is in general neither wide nor rapid. Specifically, we detected edges and used the edge image of the initial frame to locate candidate object areas. Then, over the located areas, we computed the difference image between adjacent frames and used it to determine and trace the major object that might be moving. We then computed the contour of the major object and used it to separate the major object from the background. We could successfully separate the major object from the background and replace the background with arbitrary images.
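The adjacent-frame differencing and contour-vicinity search restriction can be sketched roughly as below; the bounding-box ROI is a simplification of the paper's contour-vicinity search, and the threshold value is an arbitrary placeholder:

```python
import numpy as np

def difference_mask(prev_frame, curr_frame, thresh=25):
    """Per-pixel absolute difference between adjacent frames; moving pixels exceed thresh."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > thresh

def restrict_to_roi(mask, bbox, margin=10):
    """Limit the search to the vicinity of the previous frame's object contour,
    here simplified to a bounding box (y0, y1, x0, x1) grown by a margin."""
    y0, y1, x0, x1 = bbox
    roi = np.zeros_like(mask)
    roi[max(0, y0 - margin):y1 + margin, max(0, x0 - margin):x1 + margin] = True
    return mask & roi
```

Restricting the difference computation to the ROI is what saves the time and memory the abstract mentions: pixels far from the previous contour are never examined.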

Depth Map Generation Using Infocused and Defocused Images (초점 영상 및 비초점 영상으로부터 깊이맵을 생성하는 방법)

  • Mahmoudpour, Saeed;Kim, Manbae
    • Journal of Broadcast Engineering
    • /
    • v.19 no.3
    • /
    • pp.362-371
    • /
    • 2014
  • Blur variation caused by camera defocus provides a useful cue for depth estimation. The Depth from Defocus (DFD) technique calculates the blur amount present in an image, given that blur amount is directly related to scene depth. Conventional DFD methods use two defocused images, which may yield a low-quality estimated depth map as well as a low-quality reconstructed infocused image. To solve this, a new DFD methodology based on an infocused and a defocused image is proposed in this paper. In the proposed method, the outcome of Subbarao's DFD is combined with a novel edge blur estimation method so that improved blur estimation can be achieved. In addition, a saliency map mitigates the ill-posed problem of blur estimation in regions with low intensity variation. To validate the feasibility of the proposed method, twenty image sets of infocused and defocused images with 2K FHD resolution were acquired from a camera with focus control. The 3D stereoscopic images generated from an estimated depth map and an input infocused image delivered satisfactory 3D perception in terms of the spatial depth of scene objects.
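The underlying cue (defocus blur suppresses local high-frequency content) can be illustrated with a simple focus measure. Note this is a generic variance-of-Laplacian sketch, not Subbarao's DFD or the paper's edge blur estimator:

```python
import numpy as np
from scipy import ndimage

def focus_measure(img, size=9):
    """Local variance of the Laplacian: defocus blur attenuates high frequencies,
    so blurred regions score low on this measure."""
    lap = ndimage.laplace(img.astype(float))
    mean = ndimage.uniform_filter(lap, size)
    return ndimage.uniform_filter(lap ** 2, size) - mean ** 2

def relative_blur(infocused, defocused, size=9, eps=1e-9):
    """Crude per-pixel blur cue: sharpness drop of the defocused image relative to
    the infocused one. Larger values suggest points farther from the focal plane."""
    return focus_measure(infocused, size) / (focus_measure(defocused, size) + eps)
```

A real DFD pipeline relates this blur amount to depth through the lens parameters; the ratio above only shows why an infocused/defocused pair carries depth information at all.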

Analysis of Effect on Camera Distortion for Measuring Velocity Using Surface Image Velocimeter (표면영상유속측정법을 이용한 유속 측정 시 카메라 왜곡 영향 분석)

  • Lee, Jun Hyeong;Yoon, Byung Man;Kim, Seo Jun
    • Ecology and Resilient Infrastructure
    • /
    • v.8 no.1
    • /
    • pp.1-8
    • /
    • 2021
  • A surface image velocimeter (SIV) measures the velocity of a particle group by calculating the intensity distribution of the particle group in two consecutive images of the water surface using a cross-correlation method. Therefore, to increase the accuracy of the flow velocity calculated by an SIV, it is important to accurately calculate the displacement of the particle group in the images. In other words, the change in the physical distance of the particle group between the two images to be analyzed must be accurately calculated. In images of an actual river taken with a camera, lens distortion inevitably occurs, which affects the displacement calculation. In this study, we analyzed the effect of camera lens distortion on the displacement calculation using a dense and uniformly spaced grid board. The results showed that the camera lens distortion gradually increases in the radial direction from the center of the image. The displacement calculation error reached 8.10% at the outer edge of the image and was within 5% at the center. In the future, camera lens distortion correction can be applied to improve the accuracy of river surface flow velocity measurements.
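The radial growth of lens distortion can be illustrated with the standard radial (Brown-Conrady) model. The coefficient k1 below is an arbitrary illustrative value, not one measured in the paper; it is chosen so that the error at a normalized image corner happens to come out near the 8.1% magnitude the study reports:

```python
import numpy as np

def radial_distort(x, y, k1, k2=0.0):
    """Brown-Conrady radial model: the distorted position scales with r^2 from the center."""
    r2 = x ** 2 + y ** 2
    factor = 1.0 + k1 * r2 + k2 * r2 ** 2
    return x * factor, y * factor

def displacement_error_pct(x, y, k1, k2=0.0):
    """Percentage radial position error the distortion introduces at normalized (x, y)."""
    xd, yd = radial_distort(x, y, k1, k2)
    r = np.hypot(x, y)
    return 100.0 * (np.hypot(xd, yd) - r) / r
```

Because the error grows with r squared, displacement measurements near the image edges are affected far more than those at the center, which matches the paper's grid-board finding.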

Makeup transfer by applying a loss function based on facial segmentation combining edge with color information (에지와 컬러 정보를 결합한 안면 분할 기반의 손실 함수를 적용한 메이크업 변환)

  • Lim, So-hyun;Chun, Jun-chul
    • Journal of Internet Computing and Services
    • /
    • v.23 no.4
    • /
    • pp.35-43
    • /
    • 2022
  • Makeup is the most common way to improve a person's appearance. However, since makeup styles are very diverse, applying makeup directly to oneself costs considerable time and money. Accordingly, the need for makeup automation is increasing, and makeup transfer is being studied for this purpose. Makeup transfer is the task of applying a makeup style to a face image without makeup. Makeup transfer methods can be divided into traditional image processing-based methods and deep learning-based methods; among the latter, many studies based on Generative Adversarial Networks have been performed. However, both kinds of methods have disadvantages: the resulting image can be unnatural, the transferred makeup may be unclear or smeared, and the result may be heavily influenced by the makeup-style face image. In order to express clear makeup boundaries and to alleviate the influence of the makeup-style face image, this study segments the makeup area and calculates a loss function using HoG (Histogram of Oriented Gradients). HoG is a method of extracting image features from the magnitude and orientation of the edges present in the image. Through this, we propose a makeup transfer network that learns robustly on edges. By comparing images generated by the proposed model with images generated by BeautyGAN, used as the base model, we confirmed that the performance of the proposed model is superior; using additional facial information is suggested as future work.
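The HoG descriptor and an edge-aware loss built on it can be sketched as below. This single-cell, 8-bin version is a simplification (real HoG uses a grid of cells with overlapping block normalization), and the squared-distance loss form is illustrative rather than the paper's exact formulation:

```python
import numpy as np

def hog_cell(img, nbins=8):
    """HoG for one cell: histogram of unsigned gradient orientations,
    weighted by gradient magnitude and L2-normalized."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)               # unsigned orientation in [0, pi)
    bins = np.minimum((ang / np.pi * nbins).astype(int), nbins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=nbins)
    return hist / (np.linalg.norm(hist) + 1e-12)

def hog_loss(region_a, region_b):
    """Edge-aware loss: squared distance between HoG descriptors of two face regions."""
    return float(np.sum((hog_cell(region_a) - hog_cell(region_b)) ** 2))
```

Penalizing this distance between corresponding segmented face regions pushes the generator to preserve edge structure, which is the intuition behind the paper's edge-plus-color segmentation loss.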

Incremental Image Noise Reduction in Coronary CT Angiography Using a Deep Learning-Based Technique with Iterative Reconstruction

  • Jung Hee Hong;Eun-Ah Park;Whal Lee;Chulkyun Ahn;Jong-Hyo Kim
    • Korean Journal of Radiology
    • /
    • v.21 no.10
    • /
    • pp.1165-1177
    • /
    • 2020
  • Objective: To assess the feasibility of applying a deep learning-based denoising technique to coronary CT angiography (CCTA) along with iterative reconstruction for additional noise reduction. Materials and Methods: We retrospectively enrolled 82 consecutive patients (male:female = 60:22; mean age, 67.0 ± 10.8 years) who had undergone both CCTA and invasive coronary artery angiography from March 2017 to June 2018. All included patients underwent CCTA with iterative reconstruction (ADMIRE level 3, Siemens Healthineers). We developed a deep learning-based denoising technique (ClariCT.AI, ClariPI), based on a modified U-net type convolutional neural network model designed to predict the low-dose noise present in the original images. Denoised images were obtained by subtracting the predicted noise from the originals. Image noise, CT attenuation, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were objectively calculated. The edge rise distance (ERD) was measured as an indicator of image sharpness. Two blinded readers subjectively graded the image quality using a 5-point scale. Diagnostic performance of CCTA was evaluated based on the presence or absence of significant stenosis (≥ 50% lumen reduction). Results: Objective image quality (original vs. denoised: image noise, 67.22 ± 25.74 vs. 52.64 ± 27.40; SNR [left main], 21.91 ± 6.38 vs. 30.35 ± 10.46; CNR [left main], 23.24 ± 6.52 vs. 31.93 ± 10.72; all p < 0.001) and subjective image quality (2.45 ± 0.62 vs. 3.65 ± 0.60, p < 0.001) improved significantly in the denoised images. The average ERDs of the denoised images were significantly smaller than those of the originals (0.98 ± 0.08 vs. 0.09 ± 0.08, p < 0.001). With regard to diagnostic accuracy, no significant differences were observed among the paired comparisons.
Conclusion: Applying the deep learning technique along with iterative reconstruction can enhance noise reduction performance, with a significant improvement in the objective and subjective image quality of CCTA images.
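The residual denoising scheme (subtracting predicted noise from the original) can be sketched as follows. The noise predictor here is a hypothetical stand-in callable, since the actual ClariCT.AI U-net and its weights are not available:

```python
import numpy as np

def denoise_residual(image, noise_predictor):
    """Residual scheme used by the study: denoised = original - predicted low-dose noise."""
    return image - noise_predictor(image)

def snr(signal_roi, background_roi):
    """Signal-to-noise ratio: mean signal over the standard deviation of background noise."""
    return signal_roi.mean() / background_roi.std()
```

The appeal of the residual formulation is that the network only has to model the noise component, which is statistically simpler than reconstructing the full anatomy.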

A Study on the Observation of Soil Moisture Conditions and its Applied Possibility in Agriculture Using Land Surface Temperature and NDVI from Landsat-8 OLI/TIRS Satellite Image (Landsat-8 OLI/TIRS 위성영상의 지표온도와 식생지수를 이용한 토양의 수분 상태 관측 및 농업분야에의 응용 가능성 연구)

  • Chae, Sung-Ho;Park, Sung-Hwan;Lee, Moung-Jin
    • Korean Journal of Remote Sensing
    • /
    • v.33 no.6_1
    • /
    • pp.931-946
    • /
    • 2017
  • The purpose of this study is to observe and analyze soil moisture conditions at high resolution and to evaluate their applicability to agriculture. For this purpose, we used three Landsat-8 OLI (Operational Land Imager)/TIRS (Thermal Infrared Sensor) optical and thermal infrared satellite images taken from May to June of 2015, 2016, and 2017, covering the rural areas of Jeollabuk-do, where 46% of the area is agricultural. The soil moisture condition on each date can be effectively characterized by the SPI3 (Standardized Precipitation Index) drought index; the three images correspond to near normal, moderately wet, and moderately dry conditions. The temperature vegetation dryness index (TVDI) was calculated to observe the soil moisture status from the Landsat-8 OLI/TIRS images under these different conditions and to compare it with the conditions obtained from the SPI3 drought index. TVDI is estimated from the relationship between LST (Land Surface Temperature) and NDVI (Normalized Difference Vegetation Index) calculated from the Landsat-8 OLI/TIRS satellite images. The maximum and minimum values of LST as a function of NDVI are extracted from the distribution of pixels in the LST-NDVI feature space, and the dry and wet edges of LST can be determined by linear regression. The TVDI value is obtained from the ratio of the LST value between the two edges. We classified the relative soil moisture condition from the TVDI values into five stages (very wet, wet, normal, dry, and very dry) and compared them with the soil moisture conditions obtained from SPI3. Because May to June is the rice-planting season, 62% of each image was classified as wet or very wet, owing to the paddy fields that occupy the largest proportion of the image; the pixels classified as normal were attributed to the influence of the field areas in the image.
The TVDI classification results for the whole image roughly corresponded to the SPI3 soil moisture condition, but they did not correspond at the subdivision level of very dry, wet, and very wet. In addition, after extracting and classifying the agricultural areas into paddy fields and fields, the paddy field areas did not correspond to the SPI3 drought index in the very dry, normal, and very wet classes, and the field areas did not correspond in the normal class. This is considered to be a problem of dry/wet edge estimation caused by outliers such as extremely dry bare soil, very wet paddy fields, water, clouds, and mountain topography effects (shadow). Nevertheless, in agricultural areas, especially field areas, the soil moisture condition could be observed effectively at the subdivision level from May to June. It is expected that this method can be applied to observing the temporal and spatial changes of soil moisture status in agricultural areas using optical satellites with high spatial resolution and to forecasting agricultural production.
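The dry/wet-edge fitting and TVDI computation described above can be sketched as below; the bin count and the synthetic edge parameters in the usage example are illustrative choices, not values from the paper:

```python
import numpy as np

def dry_wet_edges(ndvi, lst, nbins=10):
    """Fit the dry edge (max LST per NDVI bin) and the wet edge (min LST per bin)
    by linear regression over the LST-NDVI feature space."""
    edges = np.linspace(ndvi.min(), ndvi.max(), nbins + 1)
    mids, hi, lo = [], [], []
    for a, b in zip(edges[:-1], edges[1:]):
        sel = (ndvi >= a) & (ndvi <= b)
        if sel.any():
            mids.append((a + b) / 2)
            hi.append(lst[sel].max())
            lo.append(lst[sel].min())
    dry = np.polyfit(mids, hi, 1)   # (slope, intercept) of the dry edge
    wet = np.polyfit(mids, lo, 1)   # (slope, intercept) of the wet edge
    return dry, wet

def tvdi(ndvi, lst, dry, wet):
    """TVDI = (LST - LST_wet) / (LST_dry - LST_wet): 0 = wettest, 1 = driest."""
    lst_dry = np.polyval(dry, ndvi)
    lst_wet = np.polyval(wet, ndvi)
    return (lst - lst_wet) / (lst_dry - lst_wet)
```

The abstract's outlier problem is visible in this sketch: a single cloud or water pixel pulls a bin's max or min away from the true edge, skewing the regression and hence every TVDI value.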