• Title/Abstract/Keywords: Image Annotation

A Survey on VR-Based Annotation of Medical Images

  • Mika Anttonen;Dongwann Kang
    • Journal of Information Processing Systems / Vol.20 No.4 / pp.418-431 / 2024
  • The use of virtual reality (VR) in the healthcare field has been gaining attention lately. The main use cases revolve around medical imaging and clinical skill training, and healthcare professionals have reported substantial benefits when these tasks are performed in VR. While desktop medical imaging software offers a wide range of tools, VR versions are mostly stripped down to the basics. One tool group that is conspicuously missing is annotation. In this paper, we survey the current state of medical imaging software both on the desktop and in the VR environment. We discuss general information on medical imaging and provide examples of both desktop and VR applications. We also discuss the current status of annotation in VR, the problems that need to be overcome, and possible solutions to them. These findings should help developers of future medical image annotation tools choose which problems to tackle and which methods to apply, and they will inform our own future work on developing annotation tools.

AgeCAPTCHA: an Image-based CAPTCHA that Annotates Images of Human Faces with their Age Groups

  • Kim, Jonghak;Yang, Joonhyuk;Wohn, Kwangyun
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol.8 No.3 / pp.1071-1092 / 2014
  • Annotating images with tags that describe their content facilitates image retrieval, but the task is challenging for both humans and computers. In response, an approach has been proposed that converts the manual image annotation task into CAPTCHA challenges. It has not been widely adopted, however, because of its weak security and because it can only annotate attributes that separate cleanly into mutually exclusive categories (e.g., gender). In this paper, we propose a novel image annotation CAPTCHA scheme that successfully differentiates between humans and computers, annotates image content that is difficult to separate into mutually exclusive categories, and generates verified test images that are difficult for computers to identify but easy for humans. To test its feasibility, we applied our scheme to annotating images of human faces with their age groups and conducted user studies. The results showed that our proposed system, called AgeCAPTCHA, annotated face images with high reliability, while subjects completed the challenges quickly and accurately enough for practical use. We have thus not only verified the effectiveness of our scheme but also broadened the applicability of image annotation CAPTCHAs.
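
The abstract does not spell out how individual CAPTCHA responses become a trusted label; below is a minimal sketch of one plausible aggregation rule (the function name and both thresholds are hypothetical, not taken from the paper):

```python
from collections import Counter

def aggregate_age_votes(votes, min_votes=5, min_agreement=0.8):
    """Assign an age-group label once enough independent CAPTCHA
    responses agree; below either threshold the image stays unlabeled."""
    if len(votes) < min_votes:
        return None                                  # not enough responses yet
    label, count = Counter(votes).most_common(1)[0]  # most frequent answer
    return label if count / len(votes) >= min_agreement else None

# Responses collected from five separate CAPTCHA challenges:
print(aggregate_age_votes(["20s", "20s", "30s", "20s", "20s"]))  # -> 20s
```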

SHAP를 이용한 이미지 어노테이션 자동화 프로세스 연구 (A Study on Image Annotation Automation Process using SHAP for Defect Detection)

  • 정진형;심현수;김용수
    • 산업경영시스템학회지 / Vol.46 No.1 / pp.76-83 / 2023
  • Recent advances in deep learning-based computer vision have made image-based object detection applicable to diverse fields such as medical care, manufacturing, and transportation. The manufacturing industry saves time and money by applying computer vision to detect defects and issues that arise during manufacturing and inspection. Computer vision models require annotations for the collected images together with location information. However, manually labeling large numbers of images is time-consuming and expensive, and labels can vary among workers, degrading annotation quality and leading to inaccurate model performance. This paper proposes a process that automatically collects annotations and location information for images using eXplainable AI, without manual annotation. Applied in the manufacturing industry, this process should save the time and cost of collecting image annotations while producing relatively high-quality annotation information.
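
As a rough illustration of how an eXplainable-AI attribution map could yield location information without manual labeling, here is a minimal sketch; the quantile threshold and helper name are assumptions, and the paper's actual SHAP pipeline may differ:

```python
import numpy as np

def bbox_from_attribution(attr_map, quantile=0.99):
    """Derive a bounding box from a per-pixel attribution map (e.g., SHAP
    values for the predicted class) by keeping the top-attributed pixels."""
    pos = np.maximum(attr_map, 0.0)           # keep evidence *for* the class
    mask = pos >= np.quantile(pos, quantile)  # most strongly attributed pixels
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Toy attribution map with a "defect" blob in the lower-right region.
attr = np.zeros((64, 64))
attr[40:52, 45:60] = 1.0
print(bbox_from_attribution(attr))  # (45, 40, 59, 51)
```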

다중 클래스 SVM과 주석 코드 배열을 이용한 의료 영상 자동 주석 생성 (Medical Image Automatic Annotation Using Multi-class SVM and Annotation Code Array)

  • 박기희;고병철;남재열
    • 정보처리학회논문지B / Vol.16B No.4 / pp.281-288 / 2009
  • This paper proposes a method for the effective classification and automatic annotation of X-ray medical images. Unlike ordinary natural images, an X-ray image consists of a semantically important region of interest against a dark, monochromatic background. We therefore extract color features from the salient regions of the image using a Harris corner detector-based color structure descriptor (H-CSD), and use the edge histogram descriptor (EHD) for texture features. Each of the two extracted feature vectors is fed to a multi-class Support Vector Machine, which classifies the image into one of 20 categories. Finally, based on the hierarchical relationships and priorities of the predefined categories, the image is assigned an Annotation Code Array, from which multiple optimal keywords are obtained. Experiments show that the proposed annotation method improves performance compared with related methods.
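
A hedged sketch of the final annotation step described above: a multi-class SVM prediction is mapped to ranked keywords through a per-category code array. The categories, keywords, and toy features below are placeholders; the paper's H-CSD/EHD features and 20 categories are not reproduced:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical annotation code array: keywords per category, in priority order.
CODE_ARRAY = {
    0: ["chest", "x-ray", "frontal"],
    1: ["skull", "x-ray", "lateral"],
}

def annotate(clf, feature_vec, top_k=2):
    """Classify one feature vector and return its top-priority keywords."""
    category = int(clf.predict(feature_vec.reshape(1, -1))[0])
    return CODE_ARRAY[category][:top_k]

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 8))        # stand-ins for H-CSD/EHD feature vectors
y = rng.integers(0, 2, size=40)     # two toy categories
clf = SVC(kernel="rbf").fit(X, y)   # sklearn SVC is multi-class by default
print(annotate(clf, X[0]))
```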

GLCM을 이용한 다중 베르누이 확률 변수 기반 자동 영상 동적 키워드 추출 방법 (Automatically Dynamic Image Annotation Method Based on Multiple Bernoulli Relevance Models Using GLCM Feature)

  • 박태준
    • 한국정보처리학회:학술대회논문집 / 한국정보처리학회 2009년도 추계학술발표대회 / pp.335-336 / 2009
  • In this paper, I propose an automatic approach to annotating images dynamically based on MBRM (Multiple Bernoulli Relevance Models) using GLCM (Grey Level Co-occurrence Matrix) features. MBRM is more appropriate for annotating images than a multinomial distribution. The model is evaluated on a limited test set, MSRC-v2 (Microsoft Research Cambridge Image Database). The results show that this model significantly outperforms previously reported results on the tasks of image annotation and retrieval.
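
The GLCM features themselves are standard; a minimal sketch of extracting them with scikit-image (the distances, angles, and chosen properties are assumptions):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_img):
    """Build a small GLCM texture descriptor from an 8-bit grayscale image."""
    glcm = graycomatrix(gray_img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

img = np.random.default_rng(0).integers(0, 256, (32, 32)).astype(np.uint8)
print(glcm_features(img).shape)  # (8,): 4 properties x 2 angles
```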

Efficient Semi-automatic Annotation System based on Deep Learning

  • Hyunseok Lee;Hwa Hui Shin;Soohoon Maeng;Dae Gwan Kim;Hyojeong Moon
    • 대한임베디드공학회논문지 / Vol.18 No.6 / pp.267-275 / 2023
  • This paper presents specialized software for annotating volumes of interest on 18F-FDG PET/CT images, with the goal of facilitating the study and diagnosis of head and neck cancer (HNC). To achieve an efficient annotation process, we employed an SE-Norm-Residual Layer-based U-Net model, which segments cancerous regions within 18F-FDG PET/CT scans of HNC cases with outstanding proficiency. A manual annotation function was also integrated, allowing researchers and clinicians to validate and refine annotations based on dataset characteristics. The workspace displays a fusion of the PET and CT images, enhancing user convenience through simultaneous visualization. The performance of the deep learning model was validated on the Hecktor 2021 dataset, after which the semi-automatic annotation functionality was developed. We began with image preprocessing, including resampling, normalization, and co-registration, followed by an evaluation of the deep learning model's performance. The model was integrated into the software as an initial automatic segmentation step; users can then manually refine the pre-segmented regions to correct false positives and false negatives. Annotation images are saved along with their corresponding 18F-FDG PET/CT fusion images, enabling their application across various domains. The findings indicate that this software surpasses conventional tools, particularly for HNC-specific annotation with 18F-FDG PET/CT data, and thus offers a robust solution for producing annotated datasets, driving advances in the study and diagnosis of HNC.
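
The paper's SE-Norm-Residual layer is not detailed in the abstract; as a reference point, here is a minimal PyTorch sketch of the squeeze-and-excitation gating that such a layer builds on (2D for brevity, although PET/CT segmentation is typically volumetric):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation channel gating (reduction ratio assumed)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global spatial context
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                         # excite: reweight channels

x = torch.randn(2, 16, 32, 32)
print(SEBlock(16)(x).shape)  # torch.Size([2, 16, 32, 32])
```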

Active Learning on Sparse Graph for Image Annotation

  • Li, Minxian;Tang, Jinhui;Zhao, Chunxia
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol.6 No.10 / pp.2650-2662 / 2012
  • Due to the semantic gap, the performance of automatic image annotation is still far from satisfactory. Active learning offers a possible remedy by selecting the most effective samples for users to label for training, and a key research question is how to select those samples. In this paper, we propose a novel active learning approach based on a sparse graph. Compared with existing active learning approaches, the proposed method selects samples using two criteria: uncertainty and representativeness. Representativeness indicates how much a sample's label would propagate to the other samples, a factor existing approaches do not take into consideration. Extensive experiments show that bringing the representativeness criterion into the sample selection process significantly improves active learning effectiveness.
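
A minimal sketch of the two-criteria selection idea, assuming entropy for uncertainty and total outgoing sparse-graph weight for representativeness; the combination rule is an assumption, not the paper's exact formulation:

```python
import numpy as np

def select_samples(probs, graph_w, k=10, alpha=0.5):
    """Rank unlabeled samples by mixing prediction uncertainty with
    graph-based representativeness, then return the k best queries."""
    eps = 1e-12
    uncertainty = -(probs * np.log(probs + eps)).sum(axis=1)  # entropy
    representativeness = np.abs(graph_w).sum(axis=1)          # propagation mass
    u = uncertainty / (uncertainty.max() + eps)               # scale to [0, 1]
    r = representativeness / (representativeness.max() + eps)
    return np.argsort(alpha * u + (1 - alpha) * r)[::-1][:k]

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(5), size=100)                 # classifier outputs
graph_w = rng.random((100, 100)) * (rng.random((100, 100)) < 0.05)  # sparse graph
print(select_samples(probs, graph_w, k=5))
```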

멀티-큐 통합을 기반으로 WWW 영상의 자동 주석 (A WWW Images Automatic Annotation Based On Multi-cues Integration)

  • 신성윤;문형윤;이양원
    • 한국컴퓨터정보학회논문지 / Vol.13 No.4 / pp.79-86 / 2008
  • With the rapid development of the Internet, images embedded in HTML web pages have become prominent. Because of their remarkable ability to describe content and attract attention, images have become de facto important elements of web pages, and together they constitute a formidable database. Moreover, the semantic meaning of an image is often well expressed by its surrounding text and links. However, only a few of these images are accurately assigned key phrases, and manually assigning key phrases to existing images is very difficult, so automating the key-phrase extraction procedure is highly desirable. In this paper, we first introduce a WWW image annotation method based on low-level features, page tags, and global and local word frequencies, and then develop a multi-cue integration method for image annotation. Experiments show that the multi-cue annotation method outperforms the alternatives.
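
A hedged sketch of multi-cue integration as a weighted sum of per-cue keyword scores (the cue names and weights are illustrative; the paper's actual fusion may differ):

```python
def rank_keywords(cue_scores, weights):
    """Fuse per-cue keyword scores (low-level features, page tags,
    global/local word frequency) into a single ranked keyword list."""
    fused = {}
    for cue, scores in cue_scores.items():
        for word, s in scores.items():
            fused[word] = fused.get(word, 0.0) + weights[cue] * s
    return sorted(fused, key=fused.get, reverse=True)

cues = {
    "page_tags":  {"beach": 0.9, "hotel": 0.4},
    "local_freq": {"beach": 0.6, "sunset": 0.8},
    "low_level":  {"sunset": 0.7, "sea": 0.5},
}
print(rank_keywords(cues, {"page_tags": 0.5, "local_freq": 0.3, "low_level": 0.2}))
```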

강건한 CNN기반 수중 물체 인식을 위한 이미지 합성과 자동화된 Annotation Tool (Synthesizing Image and Automated Annotation Tool for CNN based Under Water Object Detection)

  • 전명환;이영준;신영식;장혜수;여태경;김아영
    • 로봇학회논문지 / Vol.14 No.2 / pp.139-149 / 2019
  • In this paper, we present an automated annotation tool and a synthetic dataset built from 3D CAD models for deep learning-based object detection. To serve as training data for deep learning methods, class, segmentation, bounding-box, contour, and pose annotations of each object are needed. We propose an automated annotation tool together with synthetic image generation. The resulting synthetic dataset reflects occlusion between objects and is applicable to both underwater and in-air environments. To verify the dataset, we use Mask R-CNN, a state-of-the-art deep learning object detection model. For the experiments, we built an environment reflecting actual underwater conditions. We show that an object detection model trained on our dataset produces significantly accurate and robust results in the underwater environment, verifying that the synthetic dataset is suitable for deep learning models in underwater settings.
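
Deriving contour and bounding-box annotations from a rendered object mask is the mechanical core of such a tool; a minimal OpenCV sketch (single object, no occlusion handling):

```python
import cv2
import numpy as np

def mask_to_annotations(mask):
    """From a binary object mask (e.g., a 3D CAD model rendered onto a
    background), derive the contour and bounding-box annotations that
    detectors such as Mask R-CNN are trained on."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)   # keep the main object
    x, y, w, h = cv2.boundingRect(largest)
    return largest.squeeze(1), (x, y, x + w, y + h)

mask = np.zeros((64, 64), np.uint8)
cv2.circle(mask, (32, 32), 10, 1, thickness=-1)    # toy rendered object
contour, bbox = mask_to_annotations(mask)
print(bbox)  # tight box around the circle
```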

Using Context Information to Improve Retrieval Accuracy in Content-Based Image Retrieval Systems

  • Hejazi, Mahmoud R.;Woo, Woon-Tack;Ho, Yo-Sung
    • 한국HCI학회:학술대회논문집 / 한국HCI학회 2006년도 학술대회 1부 / pp.926-930 / 2006
  • Current image retrieval techniques have shortcomings that make it difficult to search for images based on a semantic understanding of what an image is about. Since an image is normally associated with multiple contexts (e.g., when and where a picture was taken), knowledge of these contexts can enhance the semantic understanding of the image. In this paper, we present a context-aware image retrieval system that uses context information to infer metadata for captured images as well as for images in other collections and databases. Experimental results show that using this kind of information not only significantly increases retrieval accuracy in conventional content-based image retrieval systems but also reduces the problems arising from manual annotation in text-based image retrieval systems.
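
A minimal sketch of blending content similarity with context similarity (Jaccard overlap of metadata tags here; the context representation and blending weight are assumptions):

```python
import numpy as np

def retrieve(query_feat, query_ctx, feats, ctxs, beta=0.3, k=5):
    """Rank database images by a weighted mix of content similarity
    (cosine over features) and context similarity (metadata overlap)."""
    feats_n = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    content = feats_n @ (query_feat / np.linalg.norm(query_feat))
    context = np.array([len(query_ctx & c) / max(len(query_ctx | c), 1)
                        for c in ctxs])
    return np.argsort((1 - beta) * content + beta * context)[::-1][:k]

rng = np.random.default_rng(0)
feats = rng.normal(size=(20, 16))
ctxs = [{"outdoor", "2006"} if i % 2 else {"indoor"} for i in range(20)]
print(retrieve(feats[3], {"outdoor", "2006"}, feats, ctxs))
```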
