• Title/Summary/Keyword: Reference Images

1,046 search results

Fast and Accurate Visual Place Recognition Using Street-View Images

  • Lee, Keundong;Lee, Seungjae;Jung, Won Jo;Kim, Kee Tae
    • ETRI Journal
    • /
    • v.39 no.1
    • /
    • pp.97-107
    • /
    • 2017
  • A fast and accurate building-level visual place recognition method built on an image-retrieval scheme using street-view images is proposed. Reference images generated from street-view images usually depict multiple buildings and confusing regions, such as roads, sky, and vehicles, which degrade retrieval accuracy and cause matching ambiguity. The proposed practical database refinement method uses informative reference image and keypoint selection. For database refinement, the method uses the spatial layout of the buildings in the reference image, specifically a building-identification mask image, which is obtained from a prebuilt three-dimensional model of the site. A global-positioning-system-aware retrieval structure is also incorporated. To evaluate the method, we constructed a dataset over an area of $0.26km^2$, comprising 38,700 reference images and corresponding building-identification mask images. The proposed method removed 25% of the database images using informative reference image selection. It achieved 85.6% recall of the top five candidates in 1.25 s of full processing. The method thus achieves high accuracy at low computational complexity.
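The abstract does not detail the GPS-aware retrieval structure; a minimal sketch of the usual approach (restricting descriptor matching to geographically nearby reference images) might look like the following, where all identifiers, coordinates, and the 200 m radius are assumptions for illustration:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two GPS fixes.
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def gps_filter(query_fix, database, radius_m=200.0):
    # Keep only reference images whose GPS tag lies within radius_m of the
    # query fix; the expensive descriptor matching then runs on this subset.
    lat, lon = query_fix
    return [entry for entry in database
            if haversine_m(lat, lon, entry["lat"], entry["lon"]) <= radius_m]

db = [
    {"id": "ref_001", "lat": 37.5665, "lon": 126.9780},
    {"id": "ref_002", "lat": 37.5670, "lon": 126.9785},
    {"id": "ref_003", "lat": 37.6000, "lon": 127.0500},  # several km away
]
candidates = gps_filter((37.5666, 126.9781), db)
```

Pruning candidates by GPS before visual comparison is one plausible way such a structure keeps full processing near the reported 1.25 s.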

Diagnostic performance of dental students in identifying mandibular condyle fractures by panoramic radiography and the usefulness of reference images

  • Cho, Bong-Hae
    • Imaging Science in Dentistry
    • /
    • v.41 no.2
    • /
    • pp.53-57
    • /
    • 2011
  • Purpose: The purpose of this study was to evaluate the diagnostic performance of dental students in the detection of mandibular condyle fractures and the effectiveness of reference panoramic images. Materials and Methods: Forty-six undergraduates evaluated 25 panoramic radiographs for condylar fractures, and the data were analyzed through receiver operating characteristic (ROC) analysis. After a month, they were divided into two homogeneous groups based on the first results and re-evaluated the images with (group A) or without (group B) reference images. The eight reference images included indications showing either typical condylar fractures or anatomic structures that could be confused with fractures. A paired t-test was used for statistical analysis of the difference between the first and second evaluations for each group, and Student's t-test was used between the two groups in the second evaluation. The intra- and inter-observer agreements were evaluated with kappa statistics. Results: Intra- and inter-observer agreements were substantial (k=0.66) and moderate (k=0.53), respectively. The area under the ROC curve (Az) in the first evaluation was 0.802. In the second evaluation, it increased to 0.823 for group A and 0.814 for group B. The difference between the first and second evaluations for group A was statistically significant (p<0.05); however, there was no statistically significant difference between the two groups in the second evaluation. Conclusion: Providing reference images to less experienced clinicians would be a good way to improve diagnostic ability in detecting condylar fractures.
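The agreement figures above (k=0.66 intra-observer, k=0.53 inter-observer) are Cohen's kappa values. A short sketch of that computation on two readings of the same radiographs, with all data invented for illustration (1 = fracture called, 0 = not called):

```python
def cohens_kappa(rater_a, rater_b):
    # Cohen's kappa: chance-corrected agreement between two ratings.
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    labels = sorted(set(rater_a) | set(rater_b))
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    expected = sum((rater_a.count(l) / n) * (rater_b.count(l) / n)
                   for l in labels)
    return (observed - expected) / (1 - expected)

# Toy first and second readings of ten radiographs.
first_read  = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]
second_read = [1, 1, 0, 0, 1, 0, 0, 0, 1, 0]
kappa = cohens_kappa(first_read, second_read)
```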

Cody Recommendation System Using Deep Learning and User Preferences

  • Kwak, Naejoung;Kim, Doyun;Kim, Minho;Kim, Jongseo;Myung, Sangha;Yoon, Youngbin;Choi, Jihye
    • International Journal of Advanced Culture Technology
    • /
    • v.7 no.4
    • /
    • pp.321-326
    • /
    • 2019
  • As AI technology is introduced into various fields, it is also being applied to fashion. This paper proposes a system for recommending cody (outfit-coordination) clothes suitable for a user's selected clothing. The proposed system consists of a user app, a cody recommendation module, and a server interworking the modules and managing the database. The cody recommendation system classifies clothing images into 80 categories composed of feature combinations, selects multiple representative reference images for each category, and selects 3 full-body cody images for each representative reference image. The cody images for each representative reference image were determined by analyzing user preferences using a Google survey app. The proposed algorithm classifies the clothing image selected by the user into a category, recognizes the most similar image among that category's reference images, and transmits the linked cody images to the user's app. The proposed system uses the ResNet-50 model to categorize the input image and measures similarity using ORB and HOG features to select a reference image in the category. We tested the proposed algorithm in an Android app, and the results show that the recommendation system runs well.
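The selection step described above (classify into a category, then pick the most similar reference image within it) can be sketched as follows. Plain cosine similarity over generic feature vectors stands in for the paper's ORB/HOG measures, and every name and value here is an assumption:

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def pick_reference(query_feat, category, references):
    # references: {category: {ref_id: feature_vector}}.
    # The classifier (ResNet-50 in the paper) supplies `category`;
    # the most similar reference image in that category is returned.
    best_id, best_sim = None, -2.0
    for ref_id, feat in references[category].items():
        sim = cosine_similarity(query_feat, feat)
        if sim > best_sim:
            best_id, best_sim = ref_id, sim
    return best_id, best_sim

refs = {
    "denim_jacket": {
        "ref_a": [0.9, 0.1, 0.0],
        "ref_b": [0.1, 0.8, 0.3],
    }
}
ref_id, sim = pick_reference([0.95, 0.05, 0.0], "denim_jacket", refs)
```

The cody images linked to the winning reference would then be sent to the user's app.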

Color assessment of resin composite by using cellphone images compared with a spectrophotometer

  • Rafaella Mariana Fontes de Braganca;Rafael Ratto Moraes;Andre Luis Faria-e-Silva
    • Restorative Dentistry and Endodontics
    • /
    • v.46 no.2
    • /
    • pp.23.1-23.11
    • /
    • 2021
  • Objectives: This study assessed the reliability of digital color measurements using images of resin composite specimens captured with a cellphone. Materials and Methods: The reference color of cylindrical specimens built up using resin composite (shades A1, A2, A3, and A4) was measured with a portable spectrophotometer (CIELab). Images of the specimens were obtained individually or pairwise (compared shades in the same photograph) under standardized parameters. The color of the specimens was measured in the images using the RGB system and converted to the CIELab system using image-processing software. The whiteness index (WID) and color differences (ΔE00) were calculated for each color measurement method. For the cellphone, ΔE00 was calculated between the pairs of shades in separate images and in the same image. Data were analyzed using 2-way repeated-measures analysis of variance (α = 0.05). Linear regression models were used to predict the reference ΔE00 values from those calculated using colors measured in the images. Results: Images captured with the cellphone resulted in WID values different from the spectrophotometer only for shades A3 and A4. No difference from the reference ΔE00 was observed when individual images were used. In general, a similar ranking of ΔE00 among resin composite shades was observed for all methods. Stronger correlation coefficients with the reference ΔE00 were observed using individual rather than pairwise images. Conclusions: This study showed that using cellphone images to measure color differences seems to be a feasible alternative, providing outcomes similar to those obtained with the spectrophotometer.
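The RGB-to-CIELab conversion performed in image-processing software typically follows the standard sRGB → XYZ (D65) → L*a*b* chain; a self-contained sketch of that chain is below. ΔE00 itself is omitted (the CIEDE2000 formula is lengthy) but is computed from pairs of these Lab triplets:

```python
def srgb_to_lab(r, g, b):
    # sRGB (0-255) -> linear RGB -> CIEXYZ (D65) -> CIELAB.
    def inv_gamma(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = inv_gamma(r), inv_gamma(g), inv_gamma(b)
    # Linear RGB -> XYZ under the D65 illuminant.
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    # Normalise by the D65 white point, then map to L*, a*, b*.
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

L, a, b = srgb_to_lab(255, 255, 255)  # white: L* near 100, a*, b* near 0
```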

Image Similarity Analysis in Generative AI

  • Choi Haerin;Lee Hyunseok
    • International Journal of Advanced Culture Technology
    • /
    • v.12 no.4
    • /
    • pp.208-214
    • /
    • 2024
  • In Consciousness Explained, Daniel Dennett argued that consciousness is a phenomenon emerging from the complex flow of information in the brain, and that an objective approach is necessary to understand it. While AI increasingly mimics human functions, it is difficult to say that AI possesses consciousness similar to humans'. Consciousness is closely tied to perception, yet perception does not necessarily require consciousness. Therefore, this study analyzes how similar the way AI, particularly the DALL-E model developed by OpenAI, processes visual information is to the structure of human perception. In the study, new images were generated using the GPT-4 DALL-E model based on five sets of reference images, and the structural similarity between the generated images and the reference images was analyzed using the Structural Similarity Index Measure (SSIM). The SSIM scores of the images generated by DALL-E from the reference images ranged between 0.131 and 0.63. This confirmed that the AI learned the visual patterns of the reference images to some degree. However, the AI did not generate images that perfectly aligned with human perception, and images containing complex shapes or fine textures recorded lower SSIM scores. Notably, the AI showed limitations in depicting human portraits, suggesting that its perception system is simplified compared to the complexity of human perception structures. This study demonstrated that while the DALL-E model has potential in processing visual information, there remains a clear difference from the complex human perception system. These results suggest that AI still has limitations in mimicking the way humans process visual information, indicating a need for further in-depth research into the independent characteristics of AI perception.
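SSIM, the measure used in the study, compares luminance, contrast, and structure between two images. A simplified single-window version can be sketched as follows; standard implementations instead average the statistic over many local windows, so this is an illustration of the formula, not the study's exact tool:

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    # Single-window SSIM over whole images (standard SSIM averages
    # this statistic over sliding local windows).
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2   # stabilizer for the luminance term
    c2 = (0.03 * data_range) ** 2   # stabilizer for contrast/structure
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(img + rng.normal(0, 25, size=img.shape), 0, 255)
identical_score = global_ssim(img, img)    # 1.0 for identical images
noisy_score = global_ssim(img, noisy)      # strictly below 1.0
```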

Super-Resolution Image Reconstruction Using Multi-View Cameras (다시점 카메라를 이용한 초고해상도 영상 복원)

  • Ahn, Jae-Kyun;Lee, Jun-Tae;Kim, Chang-Su
    • Journal of Broadcast Engineering
    • /
    • v.18 no.3
    • /
    • pp.463-473
    • /
    • 2013
  • In this paper, we propose a super-resolution (SR) image reconstruction algorithm using multi-view images. We acquire 25 images from multi-view cameras, arranged in a $5{\times}5$ array, and then reconstruct an SR image of the center view using one low-resolution (LR) input image and the other 24 LR reference images. First, we estimate disparity maps from the input image to each of the 24 reference images. Then, we interpolate an SR image by employing the LR image and matching points in the reference images. Finally, we refine the SR image using an iterative regularization scheme. Experimental results demonstrate that the proposed algorithm provides higher-quality SR images than conventional algorithms.
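Why multiple low-resolution views carry high-resolution information can be shown with an idealized shift-and-add toy example: when the sub-pixel shifts are known exactly (the paper estimates them as disparity maps and then applies iterative regularization), decimated views tile the high-resolution grid. This is an illustration of the principle, not the paper's algorithm:

```python
import numpy as np

# Toy shift-and-add reconstruction: four 2x-decimated views of an
# 8x8 "high-resolution" image, each taken at a different integer offset.
hr = np.arange(64, dtype=np.float64).reshape(8, 8)
views = {(dy, dx): hr[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)}

recon = np.zeros_like(hr)
for (dy, dx), lr in views.items():
    recon[dy::2, dx::2] = lr   # place each LR sample at its HR position
```

With exact integer shifts the four views recover the HR image perfectly; real disparities are fractional and noisy, which is why the paper adds interpolation and regularization.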

The Estimation of the Transform Parameters Using the Pattern Matching with 2D Images (2차원 영상에서 패턴매칭을 이용한 3차원 물체의 변환정보 추정)

  • 조택동;이호영;양상민
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.21 no.7
    • /
    • pp.83-91
    • /
    • 2004
  • The determination of camera position and orientation from known correspondences between 3D reference points and their images is known as pose estimation in computer vision and space resection in photogrammetry. This paper discusses the estimation of transform parameters using a pattern-matching method with 2D images only. In general, 3D reference points or lines are needed to find the 3D transform parameters, but this method is applied without them: it uses only two images to determine the transform parameters between them. The algorithm is simulated using Visual C++ on Windows 98.
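The abstract does not give the estimation algorithm itself; as a reference point, the classic least-squares fit of a 2D similarity transform to matched image points (Umeyama's method) can be sketched as below. This is a standard technique, not necessarily the paper's pattern-matching procedure:

```python
import numpy as np

def estimate_similarity(src, dst):
    # Least-squares similarity transform (scale s, rotation R, translation t)
    # such that dst ~= s * R @ src + t, via Umeyama's SVD-based method.
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    cs, cd = src - mu_s, dst - mu_d
    cov = cd.T @ cs / n
    u, sing, vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(u) * np.linalg.det(vt))
    s_mat = np.diag([1.0, d])          # guards against a reflection solution
    rot = u @ s_mat @ vt
    var_src = (cs ** 2).sum() / n
    scale = (sing * np.diag(s_mat)).sum() / var_src
    t = mu_d - scale * rot @ mu_s
    return scale, rot, t

# Synthetic check: rotate by 30 degrees, scale by 2, translate by (5, -3).
theta = np.deg2rad(30)
r_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
pts = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 0.5]], float)
moved = 2.0 * pts @ r_true.T + np.array([5.0, -3.0])
scale, rot, t = estimate_similarity(pts, moved)
```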

Compression Artifact Reduction for 360-degree Images using Reference-based Deformable Convolutional Neural Network

  • Kim, Hee-Jae;Kang, Je-Won;Lee, Byung-Uk
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • fall
    • /
    • pp.41-44
    • /
    • 2021
  • In this paper, we propose an efficient reference-based compression artifact reduction network for 360-degree images in the equi-rectangular projection (ERP) domain. Our insight is that conventional image restoration methods cannot be applied straightforwardly to 360-degree images due to spherical distortion. To address this problem, we propose an adaptive disparity estimator using a deformable convolution to exploit correlation among 360-degree images. With the help of the proposed convolution, the disparity estimator successfully establishes the spatial correspondence between the ERPs and extracts matched textures to be used for image restoration. The experimental results demonstrate that the proposed algorithm provides reliable high-quality textures from the reference and improves the quality of the restored image compared to state-of-the-art single-image restoration methods.
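The core of a deformable convolution is that each filter tap samples the input at a base grid position plus a learned fractional offset (here, supplied by the disparity estimator) rather than on a fixed grid. A numpy sketch of just that sampling step, with all values invented for illustration:

```python
import numpy as np

def bilinear_sample(img, y, x):
    # Bilinear interpolation at a fractional location (y, x).
    h, w = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * img[y0, x0] + (1 - wy) * wx * img[y0, x1]
            + wy * (1 - wx) * img[y1, x0] + wy * wx * img[y1, x1])

def deformable_fetch(reference, base_yx, offsets):
    # Offset-guided sampling: each tap reads the reference image at
    # base position + learned offset, with bilinear interpolation.
    return [bilinear_sample(reference, by + dy, bx + dx)
            for (by, bx), (dy, dx) in zip(base_yx, offsets)]

ref = np.arange(25, dtype=np.float64).reshape(5, 5)
taps = deformable_fetch(ref, base_yx=[(2, 2), (2, 2)],
                        offsets=[(0.0, 0.0), (0.5, 0.5)])
```

In the actual network these samples feed a learned convolution; this sketch only illustrates why fractional offsets let the filter follow the spherical distortion of the ERP.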

Exaggerated Cartooning using a Reference Image (참조 이미지를 이용한 과장된 카투닝)

  • Han, Myoung-Hun;Seo, Sang-Hyun;Ryoo, Seung-Taek;Yoon, Kyung-Hyun
    • Journal of the Korea Computer Graphics Society
    • /
    • v.17 no.1
    • /
    • pp.33-38
    • /
    • 2011
  • This paper proposes a method of image cartooning that makes cartoon-like images of a target using reference images. We deform a target image using pre-defined reference images. For this deformation, we extract feature points from the target image with an Active Appearance Model (AAM) and apply warping to the target, using the feature points of the target and of the reference image as the basis of the warping function. We then create simplified cartoon-like images by abstracting the deformed target image, drawing its edges, and quantizing the luminance of the abstracted image. The two main concepts of cartooning (exaggeration and simplification) are inherent in this method when an exaggerated cartoon image is used as the reference. The method can create various results by controlling the warping and changing the reference image.
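Of the steps listed (AAM feature extraction, warping, abstraction, edge drawing, luminance quantization), the quantization step is the simplest to illustrate. A sketch that posterizes luminance into a few flat bands, which is what gives the deformed image its cartoon-like shading; the band count is an arbitrary choice:

```python
import numpy as np

def quantize_luminance(img, levels=4):
    # Posterize luminance into `levels` flat bands (cartoon-like shading):
    # each pixel is replaced by the centre value of its band.
    img = np.asarray(img, dtype=np.float64)
    step = 256.0 / levels
    bands = np.floor(img / step)                   # band index 0..levels-1
    return np.clip((bands + 0.5) * step, 0, 255)   # band centre value

gradient = np.tile(np.arange(256, dtype=np.float64), (4, 1))
toon = quantize_luminance(gradient, levels=4)      # 4 flat grey bands
```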

Reference Functions for Synthesis and Analysis of Multiview and Integral Images

  • Saveljev, Vladimir;Kim, Sung-Kyu
    • Journal of the Optical Society of Korea
    • /
    • v.17 no.2
    • /
    • pp.148-161
    • /
    • 2013
  • We propose one- and two-dimensional reference functions for the processing of integral/multiview imaging. The functions provide synthesis/analysis of the integral image by distance, as an alternative to composition/decomposition by view images (directions). The synthesized image was observed experimentally. In the analysis, confirmed qualitatively by simulation, the distance was obtained by convolution of the integral image with the reference functions.
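The distance-by-convolution analysis can be illustrated generically: correlating a 1-D slice of an image with a reference function produces a response whose peak marks the matching position. This toy sketch uses an invented box pattern, not the paper's integral-imaging reference functions:

```python
import numpy as np

# A 1-D slice containing a known pattern at position 40.
signal = np.zeros(64)
signal[40:44] = 1.0                  # pattern embedded in the slice

# Matched reference function; flipping it makes np.convolve
# compute cross-correlation with the signal.
reference = np.ones(4)
response = np.convolve(signal, reference[::-1], mode="valid")
peak = int(np.argmax(response))      # index of best match
```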