• Title/Abstract/Keywords: Image Labeling


HSI 색상 모델에서 색상 분할을 이용한 교통 신호등 검출과 인식 (Traffic Signal Detection and Recognition Using a Color Segmentation in a HSI Color Model)

  • 정민철
    • 반도체디스플레이기술학회지
    • /
    • Vol. 21 No. 4
    • /
    • pp.92-98
    • /
    • 2022
  • This paper proposes a new method for traffic signal detection and recognition in the HSI color model. The proposed method first converts an ROI image from the RGB model to the HSI model to segment the colors of a traffic signal. Second, the segmented colors are dilated by morphological processing to connect the traffic signal light with the signal light case, and finally the signal light and its case are extracted by their aspect ratio using connected component analysis. The extracted components constitute the detection and recognition of the traffic signal lights. The proposed method is implemented in C on a Raspberry Pi 4 system with a camera module for real-time image processing. The system was fixed in a moving vehicle and recorded video like a vehicle black box; each frame of the recorded video was extracted and the proposed method was tested on it. The results show that the proposed method successfully detects and recognizes traffic signals.
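
To make the pipeline in this abstract concrete, the following is a minimal illustrative sketch (not the authors' C implementation): color segmentation, morphological dilation, and connected-component analysis with an aspect-ratio test. OpenCV exposes HSV rather than HSI, so HSV stands in for the HSI step, and all threshold values are assumptions.

```python
# Minimal sketch: traffic-light color segmentation with dilation and
# connected-component filtering. HSV is used as a stand-in for HSI,
# and all ranges/thresholds are illustrative assumptions.
import cv2
import numpy as np

def detect_traffic_lights(roi_bgr):
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)

    # Rough hue ranges for red and green signal lights (illustrative values).
    red1 = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255))
    red2 = cv2.inRange(hsv, (170, 120, 120), (180, 255, 255))
    green = cv2.inRange(hsv, (45, 100, 100), (90, 255, 255))
    mask = cv2.bitwise_or(cv2.bitwise_or(red1, red2), green)

    # Dilation connects the lit lamp with the dark signal case around it.
    mask = cv2.dilate(mask, np.ones((9, 9), np.uint8), iterations=2)

    # Connected-component analysis, then an aspect-ratio test on each blob.
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = []
    for i in range(1, n):                      # label 0 is the background
        x, y, w, h, area = stats[i]
        if area < 100:
            continue
        ratio = w / float(h)
        if 0.3 <= ratio <= 3.5:                # assumed plausible signal shape
            boxes.append((x, y, w, h))
    return boxes
```

In a setup like the one described, each video frame would be cropped to an ROI before being passed to such a function.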

Development of Dataset Items for Commercial Space Design Applying AI

  • Jung Hwa SEO;Segeun CHUN;Ki-Pyeong KIM
    • 한국인공지능학회지
    • /
    • Vol. 11 No. 1
    • /
    • pp.25-29
    • /
    • 2023
  • The purpose of this paper is to create a standard type of AI training dataset for commercial space design. The market for space design keeps growing, time spent indoors has increased since COVID-19, and interest in space is expanding throughout society; in addition, more and more consumers are accustomed to the digital environment. Therefore, if trends are identified and the atmosphere and specifications customers require are proposed preemptively, quickly, and easily, customer trust can be increased and sales conducted effectively. For the dataset type, commercial districts were divided into a total of 8 categories, and images that could be processed were derived by refining 4,009 JPG format images (30 MB) collected through web crawling. Then, by performing bounding and labeling operations, we developed a 'Dataset for AI Training' of 3,356 commercial space image data records in CSV format with a size of 2.08 MB. Through this study, elements of spatial images such as place type, space classification, and furniture can be extracted and used when developing AI algorithms, and it is expected that images requested by clients can be collected easily and quickly through spatial image input information.
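
As a rough illustration of what one row of such a CSV-format training dataset might look like, here is a minimal sketch; the field names (place type, space classification, furniture, bounding box) follow the elements mentioned in the abstract, but the exact schema and values are assumptions, not the published dataset.

```python
# Minimal sketch of a labelled-image record written to CSV; every field name
# and value here is a hypothetical placeholder, not the paper's schema.
import csv

FIELDS = ["image_file", "place_type", "space_class", "furniture",
          "x_min", "y_min", "x_max", "y_max"]

rows = [
    {"image_file": "cafe_0001.jpg", "place_type": "cafe",
     "space_class": "hall", "furniture": "table",
     "x_min": 120, "y_min": 80, "x_max": 640, "y_max": 420},
]

with open("commercial_space_dataset.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()   # header row describing the labelled attributes
    writer.writerows(rows)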

다중 신경망을 이용한 객체 탐지 효율성 개선방안 (Improving Efficiency of Object Detection using Multiple Neural Networks)

  • 박대흠;임종훈;장시웅
    • 한국정보통신학회:학술대회논문집
    • /
    • 한국정보통신학회 2022년도 춘계학술대회
    • /
    • pp.154-157
    • /
    • 2022
  • In the conventional TensorFlow CNN environment, object detection is performed by having TensorFlow itself carry out object labeling and detection. With the advent of YOLO, however, the efficiency of image object detection has increased: deeper layers can be built than with conventional neural networks, and the image object recognition rate can be raised. In this paper, we therefore design an object detection system based on Darknet and YOLO, and compare and analyze its detection capability and speed against multi-layer construction and training based on the previously used convolutional neural network. From this, we present a neural network methodology that makes efficient use of Darknet training.
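
A minimal sketch of Darknet/YOLO inference of the kind compared in this paper, run here through OpenCV's dnn module; the cfg, weights, and image file names are placeholders, not artifacts from the paper.

```python
# Minimal sketch: load Darknet-format YOLO weights with OpenCV and collect
# detections above a confidence threshold. File names are placeholders.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
layer_names = net.getLayerNames()
out_layers = [layer_names[i - 1] for i in net.getUnconnectedOutLayers().flatten()]

img = cv2.imread("sample.jpg")
h, w = img.shape[:2]
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)

detections = []
for output in net.forward(out_layers):
    for det in output:              # det = [cx, cy, bw, bh, obj, class scores...]
        scores = det[5:]
        class_id = int(np.argmax(scores))
        conf = float(scores[class_id])
        if conf > 0.5:
            cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
            detections.append((class_id, conf,
                               int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)))
print(detections)
```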


기계시각을 이용한 고단 직립식 산란계 케이지의 유선 감시시스템 개발 (Development of Wired Monitoring System for Layers Rearing in Muti-tier Layers Battery by Machine Vision)

  • 정쌍양;장동일;이승주;소재광
    • Journal of Biosystems Engineering
    • /
    • Vol. 31 No. 5
    • /
    • pp.436-442
    • /
    • 2006
  • This research was conducted to design and develop a wired monitoring system that uses machine vision to judge whether sick or dead layers (SDL) exist in a multi-tier layers battery (MLB), and to analyze its performance. In this study, 20 Brown Leghorn (Hi-Brown) layers, 37 weeks old, were used as the experimental animals. The layers' attention to feed exceeded 90% during the 5 minutes and 30 seconds after feed was provided, and normal layers (NL) remained standing to take feed during that period; the optimal judging time was therefore set from this test result. The wired monitoring system consisted of a driving device for carrying the machine vision system, a control program, an RS232-to-RS485 converter, an automatic positioning system, and an image capture system. An image processing algorithm was developed to find SDL in the MLB through binary processing, erosion, dilation, labeling, and computation of the central coordinates of the captured images. The optimal velocity of the driving unit was set to 0.13 m/s based on the wired monitoring system tests, and the proximity switch was controlled not to operate for 1.0 second after the first image was captured. The system was tested to evaluate its remote monitoring performance in a lab-scale laying hen house. Results showed that its judgment success rate was 87% for normal cages (without SDL) and 90% for abnormal cages (with SDL). It is therefore concluded that the wired monitoring system developed in this study is well suited to its purpose.
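
The binarization-erosion-dilation-labeling-centroid chain described above can be sketched as follows; this is an illustration under assumed threshold and kernel sizes, not the authors' code.

```python
# Minimal sketch of the image-processing chain: binarization, erosion,
# dilation, labeling, and centroid computation for each remaining blob.
import cv2
import numpy as np

def find_blob_centroids(frame_bgr, thresh=128):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)

    kernel = np.ones((5, 5), np.uint8)
    cleaned = cv2.erode(binary, kernel, iterations=1)    # remove small noise
    cleaned = cv2.dilate(cleaned, kernel, iterations=2)  # restore blob size

    n, _, stats, centroids = cv2.connectedComponentsWithStats(cleaned)
    # Skip label 0 (background); return (cx, cy, area) per labelled blob.
    return [(float(cx), float(cy), int(stats[i, cv2.CC_STAT_AREA]))
            for i, (cx, cy) in enumerate(centroids) if i > 0]
```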

SVM(Support Vector Machine)을 이용한 묘삼 자동등급 판정 알고리즘 개발에 관한 연구 (Study on the Development of Auto-classification Algorithm for Ginseng Seedling using SVM (Support Vector Machine))

  • 오현근;이훈수;정선옥;조병관
    • Journal of Biosystems Engineering
    • /
    • Vol. 36 No. 1
    • /
    • pp.40-47
    • /
    • 2011
  • An image analysis algorithm for the quality evaluation of ginseng seedlings was investigated. Images of ginseng seedlings were acquired with a color CCD camera and processed with image analysis methods such as binary conversion, labeling, and thinning. The processed images were used to calculate the length and weight of the seedlings, which could be predicted with standard errors of 0.343 mm and 0.0214 g and $R^2$ values of 0.8738 and 0.9835, respectively. To evaluate the three quality grades of Gab, Eul, and abnormal ginseng seedlings, features were extracted from the processed images. The features, combined with the ratios of the lengths and areas of the seedlings, efficiently differentiate abnormal shapes from normal ones. The grade levels were evaluated with an efficient pattern recognition method, support vector machine analysis. The quality grade of ginseng seedlings could be evaluated with accuracies of 95% and 97% for training and validation, respectively. The results indicate that color image analysis with a support vector machine algorithm has good potential for the development of an automatic sorting system for ginseng seedlings.
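
A minimal sketch of the grading step with scikit-learn's SVM on shape features such as length, area, and their ratio; the feature matrix and labels below are synthetic placeholders, not the paper's ginseng-seedling data.

```python
# Minimal sketch: three-class SVM grading on shape features.
# The data below is random placeholder data for illustration only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Each row: [length_mm, area_px, length_to_area_ratio]; labels: 0=Gab, 1=Eul, 2=abnormal
X = np.random.rand(300, 3)
y = np.random.randint(0, 3, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_tr, y_tr)
print("training accuracy:", model.score(X_tr, y_tr))
print("validation accuracy:", model.score(X_te, y_te))
```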

조립체 결함 분석 지원을 위한 영상 영역과 부품 정보의 병합 (Integration of Image Regions and Product Components Information to Support Fault Analysis)

  • 김선희;김경윤;이형재;권오법;양형정
    • 한국콘텐츠학회논문지
    • /
    • Vol. 6 No. 11
    • /
    • pp.266-275
    • /
    • 2006
  • Although much of the assembly process has been automated, fault diagnosis has not been, because supporting comprehensive decision-making requires expertise and knowledge from diverse fields. This paper proposes a fault-analysis support system for assemblies that uses image information, which experts from various fields can access easily and understand intuitively. The system supports effective fault analysis in assemblies by integrating image information, product design information, and fault-detection information. The proposed method segments the assembly image into component-level regions using labeling and represents design information and fault-analysis information consistently with an extended attributed relational graph (eARG), so that fault information can be accessed from the image information.
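
To make the labeling-plus-graph idea concrete, here is a minimal sketch that labels an assembly image into component regions and stores each region as a graph node with attributes to which design and fault records could later be attached; it is an illustration, not the paper's eARG implementation, and the bounding-box adjacency rule is an assumption.

```python
# Minimal sketch: label component regions, then build a small attributed
# graph (nodes = regions with attributes, edges = adjacent regions).
import cv2
import numpy as np

def build_region_graph(assembly_bgr):
    gray = cv2.cvtColor(assembly_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)

    # Nodes: one per labelled region, with geometric attributes plus empty
    # slots where design and fault-analysis records could be merged in.
    nodes = {}
    for i in range(1, n):
        x, y, w, h, area = stats[i]
        nodes[i] = {"bbox": (int(x), int(y), int(w), int(h)),
                    "area": int(area),
                    "centroid": tuple(map(float, centroids[i])),
                    "design_info": None, "fault_info": None}

    # Edges: connect regions whose bounding boxes touch or overlap.
    edges = [(a, b) for a in nodes for b in nodes
             if a < b and _boxes_touch(nodes[a]["bbox"], nodes[b]["bbox"])]
    return nodes, edges

def _boxes_touch(b1, b2, margin=2):
    x1, y1, w1, h1 = b1
    x2, y2, w2, h2 = b2
    return not (x1 + w1 + margin < x2 or x2 + w2 + margin < x1 or
                y1 + h1 + margin < y2 or y2 + h2 + margin < y1)
```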


제초로봇 개발을 위한 2차원 콩 작물 위치 자동검출 (Estimation of two-dimensional position of soybean crop for developing weeding robot)

  • 조수현;이충열;정희종;강승우;이대현
    • 드라이브 ㆍ 컨트롤
    • /
    • Vol. 20 No. 2
    • /
    • pp.15-23
    • /
    • 2023
  • In this study, the two-dimensional locations of crops for automatic weeding were detected using deep learning. To construct a dataset for soybean detection, an image-capturing system was developed using a mono camera and a single-board computer, and the system was mounted on a weeding robot to collect soybean images. A dataset was constructed by extracting RoIs (regions of interest) from the raw images, and each sample was labeled as soybean or background for classification learning. The deep learning model consisted of four convolutional layers and was trained with a weakly supervised learning method that can provide object localization using only image-level labels. The soybean area is localized via a class activation map (CAM), and the two-dimensional position of the soybean was estimated by clustering the pixels associated with the soybean area and transforming the pixel coordinates to world coordinates. Accuracy was evaluated against actual positions determined manually as pixel coordinates in the image; in world coordinates, the errors were 6.6 (X-axis) and 5.1 (Y-axis) for MSE and 1.2 (X-axis) and 2.2 (Y-axis) for RMSE, respectively. From these results, we confirmed that the center position of the soybean area derived through deep learning is sufficient for use in automatic weeding systems.
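
The last two steps, clustering the CAM pixels and converting each cluster center from pixel to world coordinates, can be sketched as follows; the homography and the CAM threshold are placeholders that would come from camera calibration, and DBSCAN is used only as one plausible clustering choice, not necessarily the paper's.

```python
# Minimal sketch: threshold a class-activation map, cluster the activated
# pixels into plants, then map each cluster centre to world coordinates.
import numpy as np
from sklearn.cluster import DBSCAN

def soybean_world_positions(cam, H, cam_thresh=0.6):
    # cam: 2-D activation map scaled to [0, 1]; H: 3x3 pixel-to-world homography.
    ys, xs = np.where(cam > cam_thresh)
    if len(xs) == 0:
        return []
    pixels = np.stack([xs, ys], axis=1).astype(float)

    # Group activated pixels into one cluster per plant.
    labels = DBSCAN(eps=10, min_samples=20).fit_predict(pixels)

    positions = []
    for k in set(labels) - {-1}:               # -1 is DBSCAN noise
        cx, cy = pixels[labels == k].mean(axis=0)
        world = H @ np.array([cx, cy, 1.0])
        positions.append((world[0] / world[2], world[1] / world[2]))
    return positions
```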

머신 러닝을 사용한 이미지 클러스터링: K-means 방법을 사용한 InceptionV3 연구 (Image Clustering Using Machine Learning : Study of InceptionV3 with K-means Methods.)

  • 닌담 솜사우트;이효종
    • 한국정보처리학회:학술대회논문집
    • /
    • 한국정보처리학회 2021년도 추계학술발표대회
    • /
    • pp.681-684
    • /
    • 2021
  • In this paper, we study image clustering without labels using machine learning techniques. We propose an unsupervised machine learning approach to design an image clustering model that automatically categorizes images into groups. Our experiments focus on the Inception convolutional neural network (InceptionV3) combined with the k-means method to cluster images. For this, we collected the public datasets Food-K5, Flowers, Handwritten Digit, and Cats-dogs, our own Rice Germination dataset, and a proprietary Palm print dataset. The experiment has three parts: first, all images are stripped of their labels and merged into whole datasets; second, the datasets are fed into InceptionV3 to extract image features, which are passed to the k-means algorithm to form six cluster groups; lastly, clustering accuracy is evaluated with a confusion matrix based on precision, recall, and F1. With this method we obtained: 1) Handwritten Digit (precision = 1.000, recall = 1.000, F1 = 1.00), 2) Food-K5 (precision = 0.975, recall = 0.945, F1 = 0.96), 3) Palm print (precision = 1.000, recall = 0.999, F1 = 1.00), 4) Cats-dogs (precision = 0.997, recall = 0.475, F1 = 0.64), 5) Flowers (precision = 0.610, recall = 0.982, F1 = 0.75), and 6) our Rice Germination dataset (precision = 0.997, recall = 0.943, F1 = 0.97). Overall, the model achieved an accuracy of 0.8908, indicating that the proposed model is strong enough to differentiate the different images and classify them into clusters.
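
A minimal sketch of the described pipeline: InceptionV3 (ImageNet weights, no classification head) as a feature extractor, followed by k-means with six clusters; the image paths and batch handling are placeholders, not the paper's setup.

```python
# Minimal sketch: InceptionV3 feature extraction + k-means into 6 clusters.
import numpy as np
import tensorflow as tf
from sklearn.cluster import KMeans

extractor = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", pooling="avg")  # 2048-D features

def embed(paths):
    feats = []
    for p in paths:
        img = tf.keras.utils.load_img(p, target_size=(299, 299))
        x = tf.keras.applications.inception_v3.preprocess_input(
            np.expand_dims(tf.keras.utils.img_to_array(img), 0))
        feats.append(extractor.predict(x, verbose=0)[0])
    return np.array(feats)

image_paths = ["img_%03d.jpg" % i for i in range(10)]   # placeholder file names
features = embed(image_paths)
clusters = KMeans(n_clusters=6, random_state=0, n_init=10).fit_predict(features)
print(clusters)
```

Cluster assignments could then be compared against the withheld labels to build the confusion matrix the abstract mentions.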

Damage Detection and Damage Quantification of Temporary works Equipment based on Explainable Artificial Intelligence (XAI)

  • Cheolhee Lee;Taehoe Koo;Namwook Park;Nakhoon Lim
    • 인터넷정보학회논문지
    • /
    • Vol. 25 No. 2
    • /
    • pp.11-19
    • /
    • 2024
  • This paper studies a technology for detecting damage to temporary works equipment used at construction sites with explainable artificial intelligence (XAI). Temporary works equipment is mostly made of steel or aluminum and, owing to the nature of these materials, is reused several times. However, because the regulations restricting such reuse are not strict, low-quality or degraded temporary works equipment sometimes causes accidents at construction sites. Safety rules such as government laws, standards, and regulations for quality control of temporary works equipment have not yet been established, and inspection results often differ according to the inspector's level of training. To overcome these limitations, a method based on AI and image processing technology was developed. Explainable artificial intelligence (XAI) was applied so that inspectors can make more accurate decisions from the damage-detection results produced by the image analysis of the AI model developed for temporary works equipment. In the experiments, temporary works equipment was photographed with a 4K camera, the artificial intelligence model was trained with 610 labeled data, and its accuracy was tested by analyzing recorded images of temporary works equipment. The damage detection accuracy of the XAI model was 95.0% on the training dataset, 92.0% on the validation dataset, and 90.0% on the test dataset, demonstrating the reliability of the developed artificial intelligence. The experiments verified the usability of explainable artificial intelligence for detecting damage in temporary works equipment. To reach the level of commercial software, however, the XAI model needs to be trained further on real datasets, and its damage detection ability must be maintained or improved when real data are applied.
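
The abstract does not name the specific XAI technique, so the following Grad-CAM sketch is only a representative example of how a heat map can show an inspector which image regions drive a damage classifier's decision; the model and layer names are assumptions, not the paper's implementation.

```python
# Minimal Grad-CAM sketch (representative XAI example, not the paper's method):
# weight the last convolutional feature maps by the gradient of the class score
# to obtain a normalized heat map over the input photo.
import tensorflow as tf

def grad_cam(model, image, conv_layer_name):
    # image: (1, H, W, 3) preprocessed batch containing one photo.
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image)
        class_index = int(tf.argmax(preds[0]))   # explain the top prediction
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_out)            # d(score)/d(feature map)
    weights = tf.reduce_mean(grads, axis=(1, 2))      # global-average-pool grads
    cam = tf.reduce_sum(weights[:, None, None, :] * conv_out, axis=-1)[0]
    cam = tf.nn.relu(cam)
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()  # heat map in [0, 1]
```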

SURF 알고리즘을 이용한 파노라마 영상 재구성 (Panoramic Image Reconstruction using SURF Algorithm)

  • 김광백
    • 한국컴퓨터정보학회논문지
    • /
    • Vol. 18 No. 4
    • /
    • pp.13-18
    • /
    • 2013
  • With the spread of digital cameras, anyone with a camera can easily take panoramic photos. A panoramic photo is produced by fixing the camera on a tripod, rotating it so that successive shots partly overlap, and combining the images by shifting them horizontally. When the photos are taken by hand, however, the angle changes, making it difficult to blend the overlapping regions naturally. The conventional method compares objects using labeling and then combines the images, but it is time-consuming, and mismatches between objects during the labeling of each image can prevent the images from being combined accurately. In this paper, to improve processing speed, only one third of each image is labeled before objects are compared and combined. When the angles differ, the SURF algorithm is applied to find feature points; one center point is computed from the four corner points of the labeled rectangle in each image, and the two images are stitched naturally using a homography. Experiments on various images to evaluate the proposed panoramic image reconstruction method showed that it is more effective than the conventional method at reconstructing images.
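
A minimal sketch of the SURF-plus-homography stitching step for two overlapping photos; SURF requires an opencv-contrib build with non-free modules enabled (ORB is a drop-in substitute otherwise), and this is an illustration rather than the author's implementation.

```python
# Minimal sketch: SURF keypoint matching, homography estimation with RANSAC,
# and warping the right image into the left image's frame.
import cv2
import numpy as np

def stitch_pair(left_bgr, right_bgr):
    gray1 = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)

    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # needs opencv-contrib
    kp1, des1 = surf.detectAndCompute(gray1, None)
    kp2, des2 = surf.detectAndCompute(gray2, None)

    # Match descriptors and keep the clearly better matches (Lowe's ratio test).
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des2, des1, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]

    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the right image into the left image's frame, then paste the left image.
    h, w = left_bgr.shape[:2]
    pano = cv2.warpPerspective(right_bgr, H, (w * 2, h))
    pano[0:h, 0:w] = left_bgr
    return pano
```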