• Title/Summary/Keyword: Images of Seoul


3D Coordinates Acquisition by using Multi-view X-ray Images (다시점 X선 영상을 이용한 3차원 좌표 획득)

  • Yi, Sooyeong;Rhi, Jaeyoung;Kim, Soonchul;Lee, Jeonggyu
    • Journal of Institute of Control, Robotics and Systems / v.19 no.10 / pp.886-890 / 2013
  • In this paper, a 3D coordinate acquisition method for a mechanical assembly is developed using multi-view X-ray images. The multi-view X-ray images of an object are obtained with a rotary table. From the rotation transformation, the 3D coordinates of corresponding edge points in the multi-view X-ray images can be obtained by triangulation. The edge detection algorithm in this paper is based on the attenuation characteristic of X-rays. The 3D coordinates of the object points are presented on a graphic display, which is used for the inspection of a mechanical assembly.
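
As a rough illustration of the triangulation step described in this abstract, the sketch below recovers a 3D point from corresponding image coordinates in two views related by a rotary-table rotation. The intrinsic matrix, rotation angle, and source-object distance are placeholder values, not those of the paper's X-ray setup.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 projection matrices of the two views.
    x1, x2 : (u, v) coordinates of the corresponding edge point in each view.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# The second view is the first view with the object rotated by the table angle.
K = np.array([[1200.0, 0, 320], [0, 1200.0, 240], [0, 0, 1]])  # placeholder intrinsics
theta = np.deg2rad(30.0)                                       # placeholder table rotation
Ry = np.array([[np.cos(theta), 0, np.sin(theta)],
               [0, 1, 0],
               [-np.sin(theta), 0, np.cos(theta)]])
t = np.array([[0.0], [0.0], [800.0]])                          # placeholder object distance
P1 = K @ np.hstack([np.eye(3), t])
P2 = K @ np.hstack([Ry, t])

X_true = np.array([10.0, -5.0, 20.0])                          # hypothetical edge point
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate_point(P1, P2, x1, x2))                       # recovers ~[10, -5, 20]
```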

Image Analysis of Black Female Fashion Models (흑인 여성 패션모델의 이미지 분석)

  • Rhew, Soo-Hyeon;Kim, Min-Ja
    • Journal of the Korean Society of Costume / v.59 no.2 / pp.87-100 / 2009
  • This study examines black images as 'the other' in history and aims to analyze the images of black female fashion models that have changed in modern society, particularly in the 21st-century post-modern world. Black images, established historically as illustrated in 19th-century paintings, were disseminated throughout the world in the 20th century, especially by way of TV and movies, as several typical images: 'Coon', the clown as an object of entertainment; 'Buck', the wild and resistant black rascal; and 'Mammy', the obedient and fat black woman servant. The results of the image analysis of black female fashion models can be summarized as the following five images. The first is the image of 'powerful': black female models frequently represent a healthy image that reflects black people's excellence in sports, as well as the traditional reading of black skin color as strength. The second is the image of 'sexy': they are adored as having a perfect, ideal body shape and show off their sex appeal with their bodies. The third is the image of 'multicultural': black models represent cultures other than the Western. The fourth is the image of 'fantastic': in contrast to real, reasonable things, black female models represent wild, fanciful, ghostly things. The fifth is the image of 'racial discrimination': by arranging them in contrast with whites, a metaphoric image of racial discrimination can be displayed. The results show that some of the racial images still remain, in another form.

Land cover classification of a non-accessible area using multi-sensor images and GIS data (다중센서와 GIS 자료를 이용한 접근불능지역의 토지피복 분류)

  • Kim, Yong-Min;Park, Wan-Yong;Eo, Yang-Dam;Kim, Yong-Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.28 no.5 / pp.493-504 / 2010
  • This study proposes a classification method based on an automated training extraction procedure that may be used with very high resolution (VHR) images of non-accessible areas. The proposed method overcomes the problem of scale difference between VHR images and geographic information system (GIS) data through filtering and use of a Landsat image. In order to automate maximum likelihood classification (MLC), GIS data were used as an input to the MLC of a Landsat image, and a binary edge and a normalized difference vegetation index (NDVI) were used to increase the purity of the training samples. We identified the thresholds of an NDVI and binary edge appropriate to obtain pure samples of each class. The proposed method was then applied to QuickBird and SPOT-5 images. In order to validate the method, visual interpretation and quantitative assessment of the results were compared with products of a manual method. The results showed that the proposed method could classify VHR images and efficiently update GIS data.
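
The sketch below illustrates, under assumed band names and thresholds, how GIS-labelled training pixels might be purified with an NDVI threshold and an edge mask and then used to fit a Gaussian maximum likelihood classifier. It is a minimal sketch of the general technique, not the authors' implementation.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized difference vegetation index per pixel."""
    return (nir - red) / (nir + red + eps)

def train_mlc(samples_by_class):
    """Fit per-class mean and covariance for maximum likelihood classification."""
    params = {}
    for label, X in samples_by_class.items():      # X: (n_pixels, n_bands)
        params[label] = (X.mean(axis=0), np.cov(X, rowvar=False))
    return params

def classify_mlc(pixels, params):
    """Assign each pixel to the class with the highest Gaussian log-likelihood."""
    labels, scores = list(params), []
    for label in labels:
        mu, cov = params[label]
        inv, logdet = np.linalg.inv(cov), np.linalg.slogdet(cov)[1]
        d = pixels - mu
        scores.append(-0.5 * (np.einsum('ij,jk,ik->i', d, inv, d) + logdet))
    return np.array(labels)[np.argmax(scores, axis=0)]

# Purifying training samples for a hypothetical 'forest' class (thresholds illustrative):
# keep GIS-labelled pixels with a high NDVI that do not lie on binary edges.
# pure_mask = gis_forest_mask & (ndvi(nir_band, red_band) > 0.4) & (~edge_mask)
```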

Computer vision-based remote displacement monitoring system for in-situ bridge bearings robust to large displacement induced by temperature change

  • Kim, Byunghyun;Lee, Junhwa;Sim, Sung-Han;Cho, Soojin;Park, Byung Ho
    • Smart Structures and Systems / v.30 no.5 / pp.521-535 / 2022
  • Efficient management of deteriorating civil infrastructure is one of the most important research topics in many developed countries. In particular, remote displacement measurement of bridges using linear variable differential transformers, global positioning systems, laser Doppler vibrometers, and computer vision technologies has been attempted extensively. This paper proposes a remote displacement measurement system using closed-circuit televisions (CCTVs) and a computer-vision-based method for in-situ bridge bearings, which undergo relatively large displacements due to temperature change over the long term. The hardware of the system is composed of a reference target for displacement measurement, a CCTV to capture target images, a gateway to transmit images via a mobile network, and a central server to store and process the transmitted images. The use of CCTVs capable of night-vision capture and wireless data communication enables long-term, 24-hour monitoring over a wide area of the bridge. The computer vision algorithm that estimates displacement from the images involves image preprocessing to enhance the circular features of the target, circular Hough transformation to detect the circles on the target in the whole field-of-view (FOV), and homography transformation to convert the movement of the target in the images into an actual expansion displacement. The simple target design and robust circle detection algorithm help measure displacement from target images even when the targets are far apart from each other. The proposed system is installed at the Tancheon Overpass located in Seoul, and field experiments are performed to evaluate the accuracy of circle detection and displacement measurement. The circle detection accuracy is evaluated using 28,542 images captured from 71 CCTVs installed at the testbed; only 48 images (0.168%) fail to detect the circles on the target because of subpar imaging conditions. The accuracy of displacement measurement is evaluated using images captured for 17 days from three CCTVs; the average and root-mean-square errors are 0.10 and 0.131 mm, respectively, compared with a similar displacement measurement. The long-term operation of the system, evaluated using 8 months of data, shows the high accuracy and stability of the proposed system.
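
A minimal sketch of the circle-detection and homography steps is given below using OpenCV. All parameter values (blur size, Hough thresholds, radii) and the target geometry are illustrative assumptions, not the values used in the deployed system.

```python
import cv2
import numpy as np

def detect_target_circles(image_bgr):
    """Detect circular markers on the target with a circular Hough transform.

    The Hough parameters below are illustrative; they would need tuning to
    the actual CCTV resolution and target size.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)                    # preprocessing to enhance circles
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=30,
                               param1=100, param2=30, minRadius=5, maxRadius=60)
    return None if circles is None else circles[0, :, :2]   # (N, 2) circle centres

def image_to_plane(H, pts):
    """Map image points to target-plane (metric) coordinates with homography H."""
    pts32 = pts.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(pts32, H).reshape(-1, 2)

# H would be estimated once from the known layout of the circles on the target
# (in mm) and their detected image positions in a reference frame, e.g.:
#   H, _ = cv2.findHomography(ref_image_pts, target_plane_pts_mm)
# The bearing displacement between a reference frame and a new frame is then
# the shift of the mapped target centre:
#   displacement = image_to_plane(H, pts_now).mean(0) - image_to_plane(H, pts_ref).mean(0)
```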

Differences in Clothing Selection Criteria of Regional Subculture Groups

  • Youn, Cho-Rong;Choo, Ho-Jung
    • International Journal of Costume and Fashion / v.10 no.2 / pp.51-59 / 2010
  • This study regarded fashion selection criteria as clothing consumption values and desired fashion images, and examined differences in selection according to regional subculture groups. Clothing consumption value is a direct value that people seek in clothing products and a perceived value, divided into emotional, social, price, and quality values. Fashion image, a feeling communicated to others by wearing a certain fashion style, is the most superficial value. Multivariate analysis of variance (MANOVA) was performed to test the differences between regional subculture groups in clothing consumption values and desired fashion images. We found some differences in clothing consumption values, specifically in the emotional and social values. The group differences were especially significant in the fashion image comparison. The 'Kang-nam' group pursued 'lively', 'sophisticated', 'charming', 'feminine', and 'gorgeous' images more than the 'Kang-buk' group. While the 'Kang-buk' group produced lower scores on ideal fashion images overall, it showed significantly higher pursuit of the 'sportive' image compared with the 'Kang-nam' group.
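
For readers unfamiliar with the test, the sketch below shows how a MANOVA of several clothing-value scores on a two-level region factor can be run with statsmodels. The data frame, column names, and values are hypothetical, not the study's data.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical respondents: region group plus four consumption-value scores.
rng = np.random.default_rng(0)
n = 60
df = pd.DataFrame({
    "region":    np.repeat(["Kangnam", "Kangbuk"], n // 2),
    "emotional": rng.normal(3.5, 0.5, n),
    "social":    rng.normal(3.5, 0.5, n),
    "price":     rng.normal(3.5, 0.5, n),
    "quality":   rng.normal(3.5, 0.5, n),
})

# MANOVA of the four value scores on the region factor.
fit = MANOVA.from_formula("emotional + social + price + quality ~ region", data=df)
print(fit.mv_test())   # Wilks' lambda, Pillai's trace, etc. for the region effect
```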

A Novel Fundus Image Reading Tool for Efficient Generation of a Multi-dimensional Categorical Image Database for Machine Learning Algorithm Training

  • Park, Sang Jun;Shin, Joo Young;Kim, Sangkeun;Son, Jaemin;Jung, Kyu-Hwan;Park, Kyu Hyung
    • Journal of Korean Medical Science / v.33 no.43 / pp.239.1-239.12 / 2018
  • Background: We described a novel multi-step retinal fundus image reading system for providing high-quality, large-scale data for machine learning algorithms, and assessed the grader variability in the large-scale dataset generated with this system. Methods: A 5-step retinal fundus image reading tool was developed that rates image quality, presence of abnormality, findings with location information, diagnoses, and clinical significance. Each image was evaluated by 3 different graders. Agreements among graders for each decision were evaluated. Results: A total of 234,242 readings of 79,458 images were collected from 55 licensed ophthalmologists over 6 months. Of the images, 34,364 were graded as abnormal by at least one rater. Among these, all three raters agreed on abnormality in 46.6%, while 69.9% were rated as abnormal by two or more raters. The rate of agreement of at least two raters on a given finding was 26.7%-65.2%, and the rate of complete agreement of all three raters was 5.7%-43.3%. As for diagnoses, agreement of at least two raters was 35.6%-65.6%, and complete agreement was 11.0%-40.0%. Agreement on findings and diagnoses was higher when restricted to images with prior complete agreement on abnormality. Retinal/glaucoma specialists showed higher agreement on findings and diagnoses within their corresponding subspecialties. Conclusion: This novel reading tool for retinal fundus images generated a large-scale dataset with a high level of information, which can be utilized in the future development of machine-learning-based algorithms for automated identification of abnormal conditions and clinical decision support systems. These results emphasize the importance of addressing grader variability in algorithm development.
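
Agreement statistics of the kind reported above can be computed from raw grades along the lines of the sketch below. The boolean per-grader layout and the example values are assumptions for illustration, not the paper's data format.

```python
import numpy as np

def agreement_rates(grades):
    """Agreement among three graders on the abnormality decision.

    grades : (n_images, 3) boolean array, True if a grader marked the image
             as abnormal (illustrative layout).
    Returns the fractions with complete (3/3) and majority (>=2/3) agreement
    among images rated abnormal by at least one grader.
    """
    votes = grades.sum(axis=1)
    any_abnormal = votes >= 1
    complete = (votes == 3)[any_abnormal].mean()
    majority = (votes >= 2)[any_abnormal].mean()
    return complete, majority

# Hypothetical example with 5 images and 3 graders:
grades = np.array([[1, 1, 1], [1, 0, 0], [1, 1, 0], [0, 0, 0], [1, 1, 1]], dtype=bool)
print(agreement_rates(grades))   # (complete, majority) among images with any abnormal vote
```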

The Software Development for Diffusion Tensor Imaging

  • Song, In-Chan;Chang, Kee-Hyun;Han, Moon-Hee
    • Proceedings of the KSMRM Conference / 2001.11a / pp.112-112 / 2001
  • Purpose: We developed software for diffusion tensor imaging and evaluated its feasibility in normal brains. Method: Five normal volunteers, aged from 25 to 29 years, were examined on a 1.5 T MR system. The diffusion tensor pulse sequence used SE-EPI with 6 diffusion gradient directions of (1, 1, 0), (-1, 1, 0), (1, 0, 1), (-1, 0, 1), (0, 1, 1), (0, 1, -1), and also with no diffusion gradient. A b-factor of 500 sec/mm² was used. Measurement parameters were as follows: TR/TE = 10000 ms/99 ms, FOV = 240 mm, matrix = 128 × 128, slice thickness/gap = 6 mm/0 mm, bandwidth = 91 kHz, and total number of slices = 20. Four repeated axial diffusion images were averaged for diffusion tensor imaging. A total scan time of 4 min 30 sec was used. The six full diffusion tensor components Dxx, Dyy, Dzz, Dxy, Dxz, and Dyz were obtained at each pixel using a two-point linear regression model from the 7 diffusion-weighted images, and fractional anisotropy and lattice index images were estimated from their eigenvectors and eigenvalues. Our program was written on the IDL platform. We evaluated the quality of the fractional anisotropy and lattice index images of normal brains and examined whether our software for diffusion tensor imaging is feasible.
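
The tensor-fitting step described in the method can be sketched as follows for a single voxel: the six gradient directions listed in the abstract define a linear system whose solution gives the six tensor components, from which fractional anisotropy follows by eigendecomposition. This sketch uses an ordinary least-squares solve rather than the paper's two-point regression, and the example signals are hypothetical.

```python
import numpy as np

# The six diffusion gradient directions from the abstract (normalised).
dirs = np.array([[1, 1, 0], [-1, 1, 0], [1, 0, 1],
                 [-1, 0, 1], [0, 1, 1], [0, 1, -1]], dtype=float)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
b = 500.0  # s/mm^2, as in the abstract

def fit_tensor(s0, dwi):
    """Fit [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz] for one voxel.

    s0  : signal of the b = 0 image
    dwi : signals of the six diffusion-weighted images (same order as `dirs`)
    """
    gx, gy, gz = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    # Design matrix of the linearised Stejskal-Tanner equation.
    A = np.column_stack([gx**2, gy**2, gz**2, 2*gx*gy, 2*gx*gz, 2*gy*gz])
    y = np.log(s0 / np.asarray(dwi)) / b
    return np.linalg.lstsq(A, y, rcond=None)[0]

def fractional_anisotropy(d6):
    """Fractional anisotropy from the six tensor components."""
    Dxx, Dyy, Dzz, Dxy, Dxz, Dyz = d6
    D = np.array([[Dxx, Dxy, Dxz], [Dxy, Dyy, Dyz], [Dxz, Dyz, Dzz]])
    ev = np.linalg.eigvalsh(D)
    md = ev.mean()
    return np.sqrt(1.5 * np.sum((ev - md) ** 2) / np.sum(ev ** 2))

# Hypothetical voxel: isotropic diffusion of 0.7e-3 mm^2/s gives FA close to 0.
s0 = 1000.0
dwi = s0 * np.exp(-b * 0.7e-3 * np.ones(6))
print(fractional_anisotropy(fit_tensor(s0, dwi)))   # ~0
```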


Detecting and Restoring the Occlusion Area for Generating the True Orthoimage Using IKONOS Image (IKONOS 정사영상제작을 위한 폐색 영역의 탐지와 복원)

  • Seo Min-Ho;Lee Byoung-Kil;Kim Yong-Il;Han Dong-Yeob
    • Korean Journal of Remote Sensing / v.22 no.2 / pp.131-139 / 2006
  • IKONOS images have perspective geometry along the CCD sensor line, like aerial images with central perspective geometry, so occlusions caused by buildings, terrain, or other objects exist in the image. It is difficult to detect these occlusions with the RPCs (rational polynomial coefficients) used for ortho-rectification of the image. Therefore, in this study, we detected the occlusion areas in IKONOS images using the nominal collection elevation/azimuth angles and restored the hidden areas using other stereo images, from which the true orthoimage could be produced. The algorithm's validity was evaluated using the geometric accuracy of the generated orthoimage.
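
The geometric idea behind detecting occlusions from the nominal collection elevation/azimuth angles can be sketched as below: a structure of known height hides the ground up to a distance of height / tan(elevation) along the azimuth direction. The coordinate convention and the example numbers are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def occlusion_footprint(x, y, height, elevation_deg, azimuth_deg):
    """Ground endpoint of the area hidden behind a vertical structure.

    A structure of the given height at (x, y) occludes the ground up to a
    distance of height / tan(elevation) on its far side as seen from the
    sensor, whose direction is given by the nominal collection azimuth
    (assumed here to be measured clockwise from north, toward the sensor).
    """
    length = height / np.tan(np.deg2rad(elevation_deg))
    az = np.deg2rad(azimuth_deg)
    dx, dy = length * np.sin(az), length * np.cos(az)
    return x - dx, y - dy, length   # occlusion extends away from the sensor

# Hypothetical 30 m building viewed at 65 deg elevation, 120 deg azimuth:
print(occlusion_footprint(0.0, 0.0, 30.0, 65.0, 120.0))
```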

Image Quality and Lesion Detectability of Lower-Dose Abdominopelvic CT Obtained Using Deep Learning Image Reconstruction

  • June Park;Jaeseung Shin;In Kyung Min;Heejin Bae;Yeo-Eun Kim;Yong Eun Chung
    • Korean Journal of Radiology / v.23 no.4 / pp.402-412 / 2022
  • Objective: To evaluate the image quality and lesion detectability of lower-dose CT (LDCT) of the abdomen and pelvis obtained using a deep learning image reconstruction (DLIR) algorithm, compared with those of standard-dose CT (SDCT) images. Materials and Methods: This retrospective study included 123 patients (mean age ± standard deviation, 63 ± 11 years; male:female, 70:53) who underwent contrast-enhanced abdominopelvic LDCT between May and August 2020 and had prior SDCT obtained using the same CT scanner within a year. LDCT images were reconstructed with hybrid iterative reconstruction (h-IR) and DLIR at medium and high strengths (DLIR-M and DLIR-H), while SDCT images were reconstructed with h-IR. For quantitative image quality analysis, image noise, signal-to-noise ratio, and contrast-to-noise ratio were measured in the liver, muscle, and aorta. Among the three LDCT reconstruction algorithms, the one showing the smallest difference in quantitative parameters from SDCT images was selected for qualitative image quality analysis and lesion detectability evaluation. For qualitative analysis, overall image quality, image noise, image sharpness, image texture, and lesion conspicuity were graded on a 5-point scale by two radiologists. Observer performance in focal liver lesion detection was evaluated by comparing the jackknife free-response receiver operating characteristic figures-of-merit (FOM). Results: LDCT (35.1% dose reduction compared with SDCT) images obtained using DLIR-M showed quantitative measures similar to those of SDCT with h-IR images. All qualitative parameters of LDCT with DLIR-M images, except image texture, were similar to or significantly better than those of SDCT with h-IR images. Lesion detectability on LDCT with DLIR-M images was not significantly different from that of SDCT with h-IR images (reader-averaged FOM, 0.887 vs. 0.874, respectively; p = 0.581). Conclusion: Overall image quality and the detectability of focal liver lesions are preserved in contrast-enhanced abdominopelvic LDCT obtained with DLIR-M relative to SDCT with h-IR.
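
The quantitative image-quality measures mentioned above (noise, SNR, and CNR from ROIs in the liver, muscle, and aorta) can be computed along the lines of the sketch below. The ROI placement, synthetic image, and exact ratio definitions are assumptions; the study's definitions may differ.

```python
import numpy as np

def roi_stats(image_hu, roi_mask):
    """Mean attenuation and noise (standard deviation, in HU) inside an ROI."""
    vals = image_hu[roi_mask]
    return vals.mean(), vals.std(ddof=1)

def snr_cnr(image_hu, organ_mask, background_mask):
    """Signal-to-noise and contrast-to-noise ratios from two ROIs.

    SNR = mean(organ) / SD(organ)
    CNR = (mean(organ) - mean(background)) / SD(background)
    These are common conventions, not necessarily the paper's exact ones.
    """
    mu_o, sd_o = roi_stats(image_hu, organ_mask)
    mu_b, sd_b = roi_stats(image_hu, background_mask)
    return mu_o / sd_o, (mu_o - mu_b) / sd_b

# Hypothetical 100x100 image: an organ region near 60 HU, background near 10 HU.
rng = np.random.default_rng(1)
img = rng.normal(10, 8, (100, 100))
img[40:60, 40:60] = rng.normal(60, 8, (20, 20))
organ = np.zeros_like(img, dtype=bool); organ[45:55, 45:55] = True
backgr = np.zeros_like(img, dtype=bool); backgr[5:15, 5:15] = True
print(snr_cnr(img, organ, backgr))
```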