• Title/Summary/Keyword: Learned images


Lung Segmentation Considering Global and Local Properties in Chest X-ray Images (흉부 X선 영상에서의 전역 및 지역 특성을 고려한 폐 영역 분할 연구)

  • Jeon, Woong-Gi;Kim, Tae-Yun;Kim, Sung Jun;Choi, Heung-Kuk;Kim, Kwang Gi
    • Journal of Korea Multimedia Society / v.16 no.7 / pp.829-840 / 2013
  • In this paper, we propose a new lung segmentation method for chest x-ray images which takes both global and local properties into account. Firstly, an initial lung segmentation is computed by applying the active shape model (ASM), which constrains the deformable model to the pre-learned shape while searching the image boundaries. At the second segmentation stage, we applied the localizing region-based active contour model (LRACM) to correct various regional errors in the initial segmentation. Finally, to measure similarity, we calculated the Dice coefficient between the area segmented by each semiautomatic method and the area manually segmented by a radiologist. The comparison experiments were performed using 5 lung x-ray images. In our experiment, the Dice coefficient against the manually segmented area was 95.33% ± 0.93% for the proposed method. Effective segmentation methods will be essential for the development of computer-aided diagnosis systems for more accurate early diagnosis and prognosis of lung cancer in chest x-ray images.
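The Dice coefficient used above as the similarity measure is the standard overlap score 2|A∩B| / (|A| + |B|). A minimal sketch, assuming the automatic and manual segmentations are available as binary NumPy masks (the array names and toy masks are illustrative):

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = seg_a.astype(bool)
    b = seg_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: a 4-pixel automatic mask vs. a 6-pixel manual mask
auto = np.zeros((4, 4), dtype=int); auto[1:3, 1:3] = 1
manual = np.zeros((4, 4), dtype=int); manual[1:3, 1:4] = 1
print(dice_coefficient(auto, manual))  # 2*4/(4+6) = 0.8
```

A Dice value of 1.0 means perfect overlap, 0.0 means none; the 95.33% reported above corresponds to near-complete agreement with the radiologist's contour.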

Active Shape Model-based Objectionable Image Detection (활동적 형태 모델을 이용한 유해영상 탐지)

  • Jang, Seok-Woo;Joo, Seong-Il;Kim, Gye-Young
    • Journal of Internet Computing and Services / v.10 no.5 / pp.183-194 / 2009
  • In this paper, we propose a new method for detecting objectionable images with an active shape model. Our method first learns the shape of breast lines through principal component analysis and alignment, as well as the distribution of intensity values at the corresponding landmarks, and then extracts breast lines using the learned shape and intensity distribution. To accurately select the initial position of the active shape model, we obtain parameters for scale, rotation, and translation. After positioning the initial location of the active shape model using the scale and rotation information, iterative searches are performed. We can identify adult images by calculating the average distance between each landmark and a candidate breast line. Experimental results on various images show that the proposed method can detect adult images effectively.
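The matching score described above, the average distance between model landmarks and a candidate breast line, can be sketched generically; the landmark and curve arrays below are illustrative toy values, not the paper's data, and the nearest-point distance is one common way to realise such a score:

```python
import numpy as np

def mean_landmark_distance(landmarks, curve_points):
    """Average distance from each model landmark to its nearest point
    on a candidate curve (both given as Nx2 arrays of (x, y) points)."""
    landmarks = np.asarray(landmarks, dtype=float)
    curve = np.asarray(curve_points, dtype=float)
    # pairwise distances: landmarks x curve points
    dists = np.linalg.norm(landmarks[:, None, :] - curve[None, :, :], axis=2)
    return float(dists.min(axis=1).mean())

model = [[0, 0], [1, 0], [2, 0]]
candidate = [[0, 1], [1, 1], [2, 1]]  # the same line shifted by 1 in y
print(mean_landmark_distance(model, candidate))  # 1.0
```

A small average distance indicates the candidate line fits the learned shape, which is the basis of the adult-image decision described above.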


An Efficient Hand Gesture Recognition Method using Two-Stream 3D Convolutional Neural Network Structure (이중흐름 3차원 합성곱 신경망 구조를 이용한 효율적인 손 제스처 인식 방법)

  • Choi, Hyeon-Jong;Noh, Dae-Cheol;Kim, Tae-Young
    • The Journal of Korean Institute of Next Generation Computing / v.14 no.6 / pp.66-74 / 2018
  • Recently, there have been active studies on hand gesture recognition to increase immersion and provide user-friendly interaction in virtual reality environments. However, most studies require specialized sensors or equipment, or show low recognition rates. This paper proposes a hand gesture recognition method that uses deep learning to recognize static and dynamic hand gestures with no sensor or equipment other than a camera. First, a series of hand gesture input images is converted into high-frequency images; then the RGB images and their high-frequency counterparts are each learned through a DenseNet-based three-dimensional convolutional neural network. Experiments on 6 static and 9 dynamic hand gestures showed an average recognition rate of 92.6%, a 4.6% improvement over the previous DenseNet. A 3D defense game was implemented to verify the results, and gesture recognition took an average of 30 ms, showing that the method can serve as a real-time user interface for virtual reality applications.
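The high-frequency conversion step above is not specified in detail; one common choice is a Laplacian high-pass filter, which suppresses smooth regions and keeps edge detail for the second stream. A minimal sketch under that assumption (the 4-neighbour kernel and wrap-around handling are illustrative):

```python
import numpy as np

def high_frequency_image(gray):
    """High-pass response via a 4-neighbour Laplacian (edges emphasised).
    The exact filter used in the paper is not specified; this is one
    common choice for producing a 'high-frequency image' from a frame."""
    g = gray.astype(float)
    lap = (np.roll(g, 1, axis=0) + np.roll(g, -1, axis=0) +
           np.roll(g, 1, axis=1) + np.roll(g, -1, axis=1) - 4.0 * g)
    return np.abs(lap)

# A flat frame has no high-frequency content at all
flat = np.full((8, 8), 10.0)
print(high_frequency_image(flat).max())  # 0.0
```

Each frame of the gesture sequence would be converted this way, giving the second input stream alongside the raw RGB frames.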

Analysis of Building Object Detection Based on the YOLO Neural Network Using UAV Images (YOLO 신경망 기반의 UAV 영상을 이용한 건물 객체 탐지 분석)

  • Kim, June Seok;Hong, Il Young
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.6 / pp.381-392 / 2021
  • In this study, we perform deep learning-based object detection analysis on eight types of buildings defined by the digital map topography standard code, leveraging images taken with a UAV (Unmanned Aerial Vehicle). Image labeling was done for 509 images taken by UAVs, and the YOLO (You Only Look Once) v5 model was applied for training and inference. For the experiments and analysis, an open-source analysis platform and algorithm were applied, and building objects were detected with a prediction probability of 88% to 98%. In addition, we analyzed the training method and model construction needed for high-accuracy building object detection during the construction and repetitive learning of training data, and explored how to apply the learned model to other images. Through this study, a model fusing high-efficiency deep neural networks with spatial information data is proposed, and the fusion of spatial information data and deep learning technology is expected to greatly improve the efficiency of spatial information data construction, analysis, and prediction in the future.
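Labeling the 509 UAV images for YOLO v5 means writing one text line per building box in the form `class x_center y_center width height`, with coordinates normalised to 0–1. A small helper for converting such a label back to pixel coordinates, useful when spot-checking annotations (the class id and values below are illustrative):

```python
def yolo_to_pixel_box(label_line, img_w, img_h):
    """Convert one YOLO-format label line
    ('class x_center y_center width height', normalised to 0-1)
    to (class_id, x_min, y_min, x_max, y_max) in pixels."""
    cls, xc, yc, w, h = label_line.split()
    cls = int(cls)
    xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
    x_min = int((xc - w / 2) * img_w)
    y_min = int((yc - h / 2) * img_h)
    x_max = int((xc + w / 2) * img_w)
    y_max = int((yc + h / 2) * img_h)
    return cls, x_min, y_min, x_max, y_max

# A building box covering the centre half of a 1000x800 image
print(yolo_to_pixel_box("3 0.5 0.5 0.5 0.5", 1000, 800))
# (3, 250, 200, 750, 600)
```

The class id would map to one of the eight building types from the digital map topography standard code.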

Putting Images into Second Language: Do They Survive in the Written Drafts?

  • Huh, Myung-Hye
    • Journal of English Language & Literature / v.56 no.6 / pp.1255-1279 / 2010
  • Much has already been learned about what goes on in the minds of second language writers as they compose, yet, oddly enough, until recently little in the L2 research literature has addressed writing and mental imagery together. However, images and imaging (visual thinking) play a crucial role in perception (the basis of mental imagery), in turn affecting language, thinking, and writing. Many theorists of mental imagery also agree that more than just language accounts for how we think and that imagery is at least as crucial as language. All of these demands, to be sure, are compounded for EFL students, which is why I investigate EFL students' writing process, focusing on the use of mental imagery and its relationship to writing. First I speculate upon some ways that imagery influences EFL students' composing processes and products. Next, I explore how, and whether, the images in a writer's mind can be shaped effectively into a linear piece of written English. I studied two university undergraduate EFL students, L and J. They had fairly advanced levels of English proficiency and exhibited a high level of writing ability, as measured by the TOEFL iBT Test. Each student wrote two comparison-and-contrast essays: one written under specified time limitations and the other written without the pressure of time. To investigate whether the amount of time in itself causes within-individual differences in imagery ability, the students were placed under strict time constraints for Topic 1; for Topic 2, they were encouraged to take as much time as necessary. Immediately after the students completed their essays, I conducted face-to-face retrospective interviews to prompt them for information about the role of imagery as they wrote. Both L and J spent more time on their second (untimed) essays and produced longer texts (149 vs. 170 and 186 vs. 284 words). However, despite the relatively long period of time spent writing, these students neither described their images nor detailed them in their essays. Although their mental imagery generated an explosion of ideas for their writing, most visual thinking appears to have been merely a means toward an end: pictures the writers drew on in searching for the right words or ideas.

A Car Plate Area Detection System Using Deep Convolution Neural Network (딥 컨볼루션 신경망을 이용한 자동차 번호판 영역 검출 시스템)

  • Jeong, Yunju;Ansari, Israfil;Shim, Jaechang;Lee, Jeonghwan
    • Journal of Korea Multimedia Society / v.20 no.8 / pp.1166-1174 / 2017
  • In general, detection of the vehicle license plate is a preliminary step in license plate recognition and has been actively studied for several decades. In this paper, we propose an algorithm that uses Convolutional Neural Network (CNN) technology to detect the license plate area of a moving vehicle in video captured by a fixed camera installed on the road. First, license plate images and non-license plate images are fed to a previously learned CNN model (AlexNet) to extract and classify features. Then, after detecting the moving vehicle in the video, the CNN detects the license plate area by comparing the features of candidate regions with the learned license plate features. Experimental results show relatively good performance in various environments such as incomplete lighting, noise due to rain, and low resolution. In addition, the proposed system can also be used independently to detect and mask the license plate area in order to protect the public's personal information.
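The feature-comparison step above can be illustrated with a generic nearest-prototype rule: a candidate region is labeled a plate if its feature vector is closer to the learned plate features than to the non-plate features. The cosine-similarity classifier below is a sketch of that idea, not the paper's exact decision step, and all vectors are toy values standing in for CNN features:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify_region(feat, plate_proto, nonplate_proto):
    """Label a candidate region by whichever prototype it resembles more."""
    return ("plate"
            if cosine_similarity(feat, plate_proto)
            >= cosine_similarity(feat, nonplate_proto)
            else "non-plate")

plate_proto = [1.0, 0.0, 1.0]      # stand-in for learned plate features
nonplate_proto = [0.0, 1.0, 0.0]   # stand-in for learned non-plate features
print(classify_region([0.9, 0.1, 1.1], plate_proto, nonplate_proto))  # plate
```

In the actual system the feature vectors would come from the AlexNet model's activations rather than hand-set values.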

Quantitative Visualization of Supersonic Jet Flows (초음속 제트 유동의 정량적 가시화)

  • Lee, Jae Hyeok;Zhang, Guang;Kim, Heuy Dong
    • Journal of the Korean Society of Visualization / v.15 no.1 / pp.53-63 / 2017
  • Sonic and supersonic jets involve much complicated flow physics associated with shock waves, shear layers, and vortices, as well as strong interactions among them, and have a variety of engineering applications. Much has been learned from previous research on sonic and supersonic jets, but quantitative assessment of these jets remains difficult due to the high flow velocity, compressibility effects, and, at times, flow unsteadiness. In the present study, sonic jets issuing from a convergent nozzle were measured by PIV and Schlieren optical techniques. Particle Image Velocimetry (PIV) with olive oil particles of 1 μm was employed to obtain the velocity field of the jets, and black-white and color Schlieren images were obtained using a Xe lamp. A Blue-Green-Red color filter was designed for the color Schlieren and printed with an inkjet printer. In the experiments, two types of sonic nozzles were used at different operating pressure ratios (NPR). The obtained images clearly showed the major features of the jets, such as the Mach disk, barrel shock waves, and jet boundaries.
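The core of PIV processing is cross-correlating interrogation windows from two successive particle images to find the local displacement, from which velocity follows via the frame interval. A minimal FFT-based sketch (window size, synthetic data, and the wrap-around shift handling are illustrative, not the study's processing chain):

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the pixel displacement between two interrogation windows
    via FFT-based circular cross-correlation (the core of PIV analysis)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b),
                         s=a.shape)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrap-around peak indices to signed shifts
    dy = peak[0] if peak[0] <= a.shape[0] // 2 else peak[0] - a.shape[0]
    dx = peak[1] if peak[1] <= a.shape[1] // 2 else peak[1] - a.shape[1]
    return dy, dx

rng = np.random.default_rng(1)
a = rng.random((32, 32))                   # synthetic particle pattern
b = np.roll(a, (3, 5), axis=(0, 1))        # same pattern shifted by (3, 5)
print(piv_displacement(a, b))              # recovers the (3, 5) shift
```

Dividing the recovered displacement by the inter-frame time and multiplying by the image scale gives the local flow velocity.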

Automatic melody extraction algorithm using a convolutional neural network

  • Lee, Jongseol;Jang, Dalwon;Yoon, Kyoungro
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.12 / pp.6038-6053 / 2017
  • In this study, we propose an automatic melody extraction algorithm using deep learning. In this algorithm, feature images generated from the energy of frequency bands are extracted from polyphonic audio files, and a deep learning technique, a convolutional neural network (CNN), is applied to the feature images. In the training data, a short frame of polyphonic music is labeled as a musical note, and a CNN-based classifier is trained to determine the pitch value of a short frame of the audio signal. Aiming to build a novel melody extraction structure, the proposed algorithm is kept simple: instead of using various signal processing techniques for melody extraction, we use only a CNN to find the melody in polyphonic audio. Despite this simple structure, promising results are obtained in the experiments. Compared with state-of-the-art algorithms, the proposed algorithm did not give the best result, but comparable results were obtained, and we believe they could be improved with appropriate training data. In this paper, melody extraction and the proposed algorithm are introduced first, the proposed algorithm is then explained in detail, and finally our experiment and a comparison of results follow.
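A feature image built from the energy of frequency bands, as described above, can be sketched as a short-time power spectrum pooled into bands and stacked over time. The frame length, hop size, band count, and uniform band edges below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def band_energy_image(signal, frame_len=256, hop=128, n_bands=8):
    """Stack per-frame band energies into a (n_bands x n_frames) feature
    image. Band edges here are uniform; the paper's exact banding and
    frame parameters are not specified in the abstract."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        spec = np.abs(np.fft.rfft(frame)) ** 2      # per-frame power spectrum
        bands = np.array_split(spec, n_bands)       # uniform frequency bands
        frames.append([band.sum() for band in bands])
    return np.array(frames).T                       # bands (rows) x time (cols)

rng = np.random.default_rng(0)
img = band_energy_image(rng.standard_normal(2048))
print(img.shape)  # (8, 15)
```

Each column of the resulting image summarises one short audio frame, so a CNN can treat the time-frequency energy layout as an ordinary image when predicting the frame's pitch label.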

Local Binary Pattern Based Defocus Blur Detection Using Adaptive Threshold

  • Mahmood, Muhammad Tariq;Choi, Young Kyu
    • Journal of the Semiconductor & Display Technology / v.19 no.3 / pp.7-11 / 2020
  • Numerous methods have been proposed for the detection and segmentation of blurred and non-blurred regions of images. Due to the limited information available about the blur type, scenario, and level of blurriness, detection and segmentation are challenging tasks. Hence, the performance of the blur measure operator is an essential factor and needs improvement. In this paper, we propose an effective blur measure based on the local binary pattern (LBP) with an adaptive threshold for blur detection. The existing LBP-based sharpness metric uses a fixed threshold irrespective of the blur type and level, which may not be suitable for images with large variations in imaging conditions and in blur type and level. In contrast, the proposed measure uses an adaptive threshold for each image, based on the image and blur properties, to generate an improved sharpness metric. The adaptive threshold is computed from a model learned through a support vector machine (SVM). The performance of the proposed method is evaluated on a well-known dataset and compared with five state-of-the-art methods. The comparative analysis reveals that the proposed method performs significantly better than all of these methods, both qualitatively and quantitatively.
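The 8-neighbour LBP code underlying such sharpness metrics can be sketched as follows. This is the standard form, thresholding each neighbour against the centre pixel; the paper's contribution, the SVM-learned adaptive threshold, is not reproduced here:

```python
import numpy as np

def lbp_8neighbour(gray):
    """Basic 8-neighbour local binary pattern code for each interior pixel:
    each neighbour >= centre contributes one bit of an 8-bit code."""
    g = gray.astype(int)
    h, w = g.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # clockwise neighbour offsets starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = g[1:h-1, 1:w-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = g[1+dy:h-1+dy, 1+dx:w-1+dx]
        codes |= ((neigh >= centre).astype(np.uint8) << bit)
    return codes

# A flat patch: every neighbour equals the centre, so all 8 bits are set
print(lbp_8neighbour(np.full((3, 3), 7)))  # [[255]]
```

Blur smooths local intensity differences, which changes the distribution of LBP codes; a sharpness metric aggregates these codes per region, and the adaptive threshold replaces the fixed comparison used here.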

Inadvertent Self-Detachment of Solitaire AB Stent during the Mechanical Thrombectomy for Recanalization of Acute Ischemic Stroke : Lessons Learned from the Removal of Stent via Surgical Embolectomy

  • Kang, Dong-Hun;Park, Jaechan;Hwang, Yang-Ha;Kim, Yong-Sun
    • Journal of Korean Neurosurgical Society / v.53 no.6 / pp.360-363 / 2013
  • We recently experienced self-detachment of the Solitaire stent during mechanical thrombectomy for acute ischemic stroke. We then tried to remove the detached stent and recanalize the occlusion by endovascular means, but failed. Follow-up diffusion-weighted MRI revealed no significant increase in infarction size; therefore, we performed surgical removal of the stent to rescue the patient and to elucidate why the self-detachment occurred. Based upon the operative findings, the stent grabbed the main thrombi but inadvertently detached at a severely tortuous, acutely angled, and circumferentially calcified segment of the internal carotid artery. Postoperative angiography demonstrated complete recanalization of the internal carotid artery. The patient's neurological deficits gradually improved, and the modified Rankin scale score was 2 at three months after surgery. In a retrospective case review, bone window images of the baseline computed tomography (CT) scan corresponded to the operative findings. Based on this finding, we hypothesize that bone window images of a baseline CT scan can help anticipate difficult stent retrieval before the procedure.