• Title/Summary/Keyword: Visual Feature Extraction


Generating Radiology Reports via Multi-feature Optimization Transformer

  • Rui Wang;Rong Hua
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.10
    • /
    • pp.2768-2787
    • /
    • 2023
  • As an important research direction in the application of computer science to the medical field, the automatic generation of radiology reports has attracted wide attention in the academic community. Because the proportion of normal regions in radiology images is much larger than that of abnormal regions, words describing diseases are often masked by other words, resulting in significant feature loss during computation, which affects the quality of generated reports. In addition, the large difference between visual features and semantic features causes traditional multi-modal fusion methods to fail to generate the long narrative structures, consisting of multiple sentences, that medical reports require. To address these challenges, we propose a multi-feature optimization Transformer (MFOT) for generating radiology reports. In detail, a multi-dimensional mapping attention (MDMA) module is designed to encode the visual grid features from different dimensions, reducing the loss of primary features during encoding; a feature pre-fusion (FP) module is constructed to enhance the interaction between multi-modal features, so as to generate a reasonably structured radiology report; and a detail enhanced attention (DEA) module is proposed to enhance the extraction and utilization of key features and reduce their loss. We evaluate our model against prevailing mainstream models on the widely recognized radiology report datasets IU X-Ray and MIMIC-CXR. The experimental outcomes demonstrate that our model achieves SOTA performance on both datasets; compared with the base model, the average improvement across six key indicators is 19.9% and 18.0%, respectively. These findings substantiate the efficacy of our model in the domain of automated radiology report generation.
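The abstract does not spell out the MDMA or DEA modules, so as background here is only a minimal NumPy sketch of the scaled dot-product attention that Transformer modules of this kind build on; all shapes and values are illustrative, not taken from the paper:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V -- the core operation behind
    Transformer attention modules."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # (n_q, n_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Toy example: 3 query features attending over 4 key/value features.
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out, w = scaled_dot_product_attention(Q, K, V)
```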

A Spatial Filtering Neural Network Extracting Feature Information Of Handwritten Character (필기체 문자 인식에서 특징 추출을 위한 공간 필터링 신경회로망)

  • Hong, Keong-Ho;Jeong, Eun-Hwa
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.38 no.1
    • /
    • pp.19-25
    • /
    • 2001
  • A novel approach to feature extraction for handwritten characters is proposed, using a four-layer spatial filtering neural network. The proposed system first removes rough pixels, which commonly occur in handwritten characters. The system then extracts and removes boundary information that has no influence on character recognition. Finally, the system extracts feature information and removes noise from it. The spatial filters adopted in the system correspond to the receptive fields of ganglion cells in the retina and of simple cells in the visual cortex. Using the PE2 Hangul database, we perform feature extraction experiments for handwritten character recognition and show that the network can successfully extract feature information from handwritten characters.

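As a rough illustration of the kind of spatial filter the abstract alludes to: a difference-of-Gaussians kernel is the textbook model of the center-surround receptive field of retinal ganglion cells (the kernel size and sigmas below are assumed, not taken from the paper):

```python
import numpy as np

def dog_kernel(size=9, sigma_c=1.0, sigma_s=2.0):
    """Difference-of-Gaussians kernel: a classic model of the
    center-surround receptive field of retinal ganglion cells."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    g = lambda s: np.exp(-r2 / (2 * s * s)) / (2 * np.pi * s * s)
    return g(sigma_c) - g(sigma_s)   # excitatory center, inhibitory surround

def spatial_filter(img, kernel):
    """'Valid' 2D correlation of an image with a kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * kernel)
    return out

k = dog_kernel()
response = spatial_filter(np.ones((12, 12)), k)
```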

Automatic Registration between Multiple IR Images Using Simple Pre-processing Method and Modified Local Features Extraction Algorithm (단순 전처리 방법과 수정된 지역적 피쳐 추출기법을 이용한 다중 적외선영상 자동 기하보정)

  • Kim, Dae Sung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.35 no.6
    • /
    • pp.485-494
    • /
    • 2017
  • This study focuses on automatic image registration between multiple IR images using a simple preprocessing method and a modified local feature extraction algorithm. The input images were preprocessed using the median and absolute value after histogram equalization, which effectively reduced the brightness differences between images; the similarity of extracted features was measured by angle rather than distance. The results were evaluated using visual inspection and inverse RMSE. Matches that could not be achieved with the existing local feature extraction technique were obtained, with high image matching reliability and convenient application. This method is expected to serve as one of the automatic registration methods between multi-sensor images under specific conditions.
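Measuring feature similarity "by angle instead of distance", as described above, is commonly done with cosine similarity, which is unaffected by a global brightness gain that would inflate a Euclidean distance; a minimal sketch (the descriptor values are made up):

```python
import numpy as np

def cosine_similarity(a, b):
    """Angle-based similarity: 1.0 when vectors point the same way,
    regardless of magnitude (hence robust to brightness scaling)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

desc = np.array([0.2, 0.5, 0.1, 0.9])
bright = 3.0 * desc                    # same descriptor under a brightness gain
sim = cosine_similarity(desc, bright)  # angle unchanged -> similarity 1.0
dist = np.linalg.norm(desc - bright)   # Euclidean distance is large
```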

Enhancement on 3 DoF Image Stitching Using Inertia Sensor Data (관성 센서 데이터를 활용한 3 DoF 이미지 스티칭 향상)

  • Kim, Minwoo;Kim, Sang-Kyun
    • Journal of Broadcast Engineering
    • /
    • v.22 no.1
    • /
    • pp.51-61
    • /
    • 2017
  • This paper proposes a method for generating panoramic images by combining conventional feature extraction algorithms (e.g., SIFT, SURF, MPEG-7 CDVS) with data sensed by an inertia sensor to enhance the stitching results. The challenge of image stitching increases when the images are taken by two different mobile phones with no posture calibration. Using the inertia sensor data obtained by the mobile phones, images with different yaw, pitch, and roll angles are preprocessed and adjusted before the stitching process. Stitching performance (e.g., feature extraction time, number of inlier points, stitching accuracy) is reported for the conventional feature extraction algorithms, with and without the inertia sensor data.
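Pre-adjusting images for yaw, pitch, and roll differences can be sketched with Euler-angle rotation matrices built from the inertia sensor readings; the ZYX composition order below is an assumption, since the paper does not state its convention:

```python
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """R = Rz(yaw) @ Ry(pitch) @ Rx(roll), angles in radians (ZYX order)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def relative_rotation(angles_a, angles_b):
    """Rotation aligning camera A's orientation with camera B's."""
    Ra = rotation_matrix(*angles_a)
    Rb = rotation_matrix(*angles_b)
    return Rb @ Ra.T
```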

Using Keystroke Dynamics for Implicit Authentication on Smartphone

  • Do, Son;Hoang, Thang;Luong, Chuyen;Choi, Seungchan;Lee, Dokyeong;Bang, Kihyun;Choi, Deokjai
    • Journal of Korea Multimedia Society
    • /
    • v.17 no.8
    • /
    • pp.968-976
    • /
    • 2014
  • Authentication methods on smartphones should be implicit, demanding minimal interaction from users. Existing authentication methods (e.g. PINs, passwords, visual patterns) do not effectively address memorability and privacy issues. Behavioral biometrics such as keystroke dynamics and gait can be acquired easily and implicitly using the sensors integrated in a smartphone. We propose a biometric model involving keystroke dynamics for implicit authentication on smartphones. We first design a feature extraction method for keystroke dynamics. We then build a fusion model of keystroke dynamics and gait to improve on the authentication performance of a single behavioral biometric. We operate the fusion at both the feature extraction level and the matching score level. Experiments using a linear Support Vector Machine (SVM) classifier reveal that the best results are achieved with score fusion: a recognition rate of approximately 97.86% in identification mode and an error rate of approximately 1.11% in authentication mode.
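A common way to realize the keystroke feature extraction and score-level fusion mentioned above is hold/flight timing plus a weighted score sum; the exact features and fusion weights used in the paper are not given in the abstract, so the ones below are illustrative:

```python
import numpy as np

def keystroke_features(down_times, up_times):
    """Hold times (release - press) and flight times (next press - release)
    from one typing sample -- a common keystroke-dynamics feature set."""
    down = np.asarray(down_times, dtype=float)
    up = np.asarray(up_times, dtype=float)
    hold = up - down
    flight = down[1:] - up[:-1]
    return np.concatenate([hold, flight])

def fuse_scores(keystroke_score, gait_score, w=0.5):
    """Matching-score-level fusion as a weighted sum of modality scores."""
    return w * keystroke_score + (1 - w) * gait_score

# Three keypresses with their press/release timestamps (seconds).
feats = keystroke_features([0.00, 0.30, 0.55], [0.12, 0.41, 0.66])
```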

Texture Analysis and Classification Using Wavelet Extension and Gray Level Co-occurrence Matrix for Defect Detection in Small Dimension Images

  • Agani, Nazori;Al-Attas, Syed Abd Rahman;Salleh, Sheikh Hussain Sheikh
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.2059-2064
    • /
    • 2004
  • Texture analysis plays an important role in automatic visual inspection. This paper presents an application of wavelet extension and the gray level co-occurrence matrix (GLCM) to the detection of defects in textured images. Characterizing texture in low-quality images is not an easy task because of noise, low frequency content, and small image dimensions. To solve this problem, we have developed a procedure called wavelet image extension. The wavelet extension procedure determines the frequency bands carrying the most information about the texture by decomposing images into multiple frequency bands and forming an image approximation at a higher resolution; it thus enables robust feature extraction from such images. Features are then extracted from the co-occurrence matrices computed on the sub-bands, which are obtained by partitioning the texture image into sub-windows. In the detection stage, a Mahalanobis distance classifier decides whether a test image is defective or non-defective.

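A minimal from-scratch sketch of the GLCM features and Mahalanobis-distance decision described above, assuming a single co-occurrence offset and three Haralick-style statistics (the paper's exact feature set is not specified in the abstract):

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Gray level co-occurrence matrix for one (dx, dy) offset, normalized.
    img must hold integer gray levels in [0, levels)."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g / g.sum()

def glcm_features(g):
    """Classic Haralick-style statistics: contrast, energy, homogeneity."""
    i, j = np.indices(g.shape)
    contrast = np.sum((i - j) ** 2 * g)
    energy = np.sum(g ** 2)
    homogeneity = np.sum(g / (1.0 + np.abs(i - j)))
    return np.array([contrast, energy, homogeneity])

def mahalanobis(x, mean, cov):
    """Distance of a feature vector from a (mean, cov) reference model."""
    d = x - mean
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

# Example: a perfectly uniform image has zero contrast and unit energy.
g = glcm(np.zeros((4, 4), dtype=int))
contrast, energy, homogeneity = glcm_features(g)
```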

Spatial-temporal texture features for 3D human activity recognition using laser-based RGB-D videos

  • Ming, Yue;Wang, Guangchao;Hong, Xiaopeng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.3
    • /
    • pp.1595-1613
    • /
    • 2017
  • The IR camera and laser-based IR projector provide an effective solution for real-time capture of moving targets in RGB-D videos. Unlike traditional RGB videos, the captured depth videos are not affected by illumination variation. In this paper, we propose a novel feature extraction framework, spatial-temporal texture features for 3D human activity recognition, that describes human activities based on the above optical video capturing method. Spatial-temporal texture features with depth information are insensitive to illumination and occlusions, and efficient for fine-motion description. The framework begins with video acquisition based on laser projection and video preprocessing with visual background extraction, obtaining spatial-temporal key images. The texture features encoded from the key images are then used to generate discriminative features for human activity information. Experimental results on different databases and practical scenarios demonstrate the effectiveness of our proposed algorithm on large-scale data sets.
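The paper's visual background extraction step is not detailed in the abstract; a running-average background model is a simple stand-in that illustrates the idea of separating moving foreground from a static background:

```python
import numpy as np

def running_background(frames, alpha=0.1, thresh=0.5):
    """Simple running-average background model (a stand-in for the paper's
    visual background extraction): foreground = |frame - bg| > thresh."""
    bg = frames[0].astype(float)
    masks = []
    for f in frames[1:]:
        masks.append(np.abs(f - bg) > thresh)   # boolean foreground mask
        bg = (1 - alpha) * bg + alpha * f       # slowly absorb the scene
    return masks

# Toy sequence: a single bright pixel moving one step to the right.
frames = [np.zeros((5, 5)) for _ in range(3)]
frames[1][2, 2] = 1.0
frames[2][2, 3] = 1.0
masks = running_background(frames)
```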

The Impacts of Decomposition Levels in Wavelet Transform on Anomaly Detection from Hyperspectral Imagery

  • Yoo, Hee Young;Park, No-Wook
    • Korean Journal of Remote Sensing
    • /
    • v.28 no.6
    • /
    • pp.623-632
    • /
    • 2012
  • In this paper, we analyzed the effect of wavelet decomposition level on feature extraction for anomaly detection from hyperspectral imagery. After wavelet analysis, anomaly detection was performed using the RX detector algorithm to evaluate detection capability. In the experiment on CASI imagery, the characteristics of the extracted features and the changes in their patterns showed that radiance curves were simplified as the wavelet transform progressed, and that H bands did not show significant differences between the target anomaly and the background at earlier levels. The anomaly detection results and their ROC curves showed the best performance when using the appropriate sub-band selected through visual interpretation of the wavelet analysis, namely the L band at the decomposition level where the overall shape of the profile was preserved. The results of this study can serve as fundamental information or guidelines when applying the wavelet transform to feature extraction and selection from hyperspectral imagery. However, further research on various anomaly targets and on the quantitative selection of optimal decomposition levels is needed for generalization.
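The RX detector used above scores each pixel spectrum by its Mahalanobis distance from the global background statistics; a minimal NumPy sketch (the synthetic cube is illustrative only):

```python
import numpy as np

def rx_detector(cube):
    """Global RX anomaly detector: Mahalanobis distance of every pixel
    spectrum from the mean/covariance of the whole image.
    cube: (rows, cols, bands) hyperspectral array."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    cov_inv = np.linalg.pinv(cov)   # pinv guards against a singular covariance
    d = X - mu
    scores = np.einsum('ij,jk,ik->i', d, cov_inv, d)
    return scores.reshape(h, w)

# Synthetic cube with one planted anomalous pixel at (3, 5).
rng = np.random.default_rng(1)
cube = rng.standard_normal((8, 8, 4))
cube[3, 5] += 10.0
scores = rx_detector(cube)
```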

A Method for Identifying Tubercle Bacilli using Neural Networks

  • Lin, Sheng-Fuu;Chen, Hsien-Tse
    • Journal of Biomedical Engineering Research
    • /
    • v.30 no.3
    • /
    • pp.191-198
    • /
    • 2009
  • Sputum smear testing for acid-fast bacilli (AFB) requires careful examination of tubercle bacilli under a microscope to distinguish between positive and negative findings. The biggest weakness of this method is the visual limitation of the examiners; it is also time-consuming, and mistakes occur easily. This paper proposes a method of identifying tubercle bacilli that uses a computer instead of a human. To address the challenges of AFB testing, this study designs and investigates an image system for identifying tubercle bacilli. The proposed system uses an electronic microscope to capture digital images that are then processed through feature extraction, image segmentation, image recognition, and neural networks to analyze tubercle bacilli. The proposed system can count tubercle bacilli and find their locations. This paper analyzes 184 tubercle bacilli images: fifty images are used to train the artificial neural network, and the rest are used for testing. The proposed system achieves a 95.6% identification rate and takes only 0.8 seconds to identify an image.
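The segmentation-and-counting stage of such a system can be illustrated with a minimal connected-component count on a binary mask; this is a stand-in only, since the paper's actual pipeline uses neural networks and richer features:

```python
import numpy as np

def count_regions(mask):
    """Count 4-connected foreground regions in a binary mask via flood fill --
    a minimal stand-in for a segmentation/counting stage."""
    mask = mask.copy()   # do not destroy the caller's mask
    h, w = mask.shape
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y, x]:
                count += 1
                stack = [(y, x)]
                while stack:             # flood-fill this region to False
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and mask[cy, cx]:
                        mask[cy, cx] = False
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return count

# Toy mask with one 2x2 blob and one isolated pixel -> two candidate regions.
m = np.zeros((6, 6), dtype=bool)
m[1:3, 1:3] = True
m[4, 4] = True
n = count_regions(m)
```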

MPEG-7 Homogeneous Texture Descriptor

  • Ro, Yong-Man;Kim, Mun-Churl;Kang, Ho-Kyung;Manjunath, B.S.;Kim, Jin-Woong
    • ETRI Journal
    • /
    • v.23 no.2
    • /
    • pp.41-51
    • /
    • 2001
  • MPEG-7 standardization work started with the aim of providing fundamental tools for describing multimedia content. MPEG-7 defines the syntax and semantics of descriptors and description schemes so that they may be used as fundamental tools for multimedia content description. In this paper, we introduce a texture-based image description and retrieval method that was adopted as the homogeneous texture descriptor in the visual part of the MPEG-7 final committee draft. The current MPEG-7 homogeneous texture descriptor consists of the mean and standard deviation of an image, together with the energy and energy deviation values of the Fourier transform of the image, extracted from frequency channels partitioned according to the human visual system (HVS). For reliable extraction of the texture descriptor, the Radon transform is employed, which suits HVS behavior. We also introduce various matching methods, such as intensity-invariant, rotation-invariant, and/or scale-invariant matching, which retrieve relevant texture images when the user supplies a query texture image. To show the promising performance of the texture descriptor, we present experimental results on the MPEG-7 test sets. The results show that the MPEG-7 texture descriptor achieves efficient and effective retrieval rates, and that feature extraction for constructing the descriptor is fast.

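A simplified sketch of the descriptor's ingredients: image mean and standard deviation plus per-frequency-band energies of the 2D FFT. The real MPEG-7 HTD uses 30 HVS-based angular/radial channels and a Radon transform; the plain radial-ring partition below is a simplification:

```python
import numpy as np

def homogeneous_texture_sketch(img, n_rings=4):
    """Simplified sketch of the HTD idea: image mean, standard deviation,
    and log-energy per radial frequency band of the 2D FFT power spectrum.
    (The actual MPEG-7 HTD uses 30 HVS-based angular/radial channels.)"""
    img = np.asarray(img, dtype=float)
    mean, std = img.mean(), img.std()
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spec.shape
    yy, xx = np.indices(spec.shape)
    r = np.hypot(yy - h / 2, xx - w / 2)   # radial frequency of each bin
    edges = np.linspace(0.0, r.max(), n_rings + 1)
    edges[-1] += 1e-9                      # include the outermost radius
    energies = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = (r >= lo) & (r < hi)
        energies.append(np.log1p(spec[band].sum()))
    return np.array([mean, std] + energies)

# A constant image: all spectral energy sits at DC, in the innermost ring.
desc = homogeneous_texture_sketch(np.ones((16, 16)))
```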