• Title/Summary/Keyword: learning object


Style-Generative Adversarial Networks for Data Augmentation of Human Images at Homecare Environments (조호환경 내 사람 이미지 데이터 증강을 위한 Style-Generative Adversarial Networks 기법)

  • Park, Changjoon; Kim, Beomjun; Kim, Inki; Gwak, Jeonghwan
    • Annual Conference of KIPS / 2022.11a / pp.565-567 / 2022
  • Patients suffering from illness must, depending on their condition, be continuously tracked and observed by medical staff while living in care environments such as hospital rooms, residences, and nursing homes, so that any physical abnormality can be detected and promptly addressed. Having medical staff check on patients directly requires repetitive labor, and because patients must be monitored in real time, staff must remain on site, which leads to shortages and waste of medical personnel. To address this problem, deep learning models that can monitor the condition of patients in care environments in real time in place of medical staff are being studied. Deep learning models become more robust as the amount of training data grows, and since they are affected by conditions such as the dataset's background and the distribution of object features, a large amount of preprocessed data from the target domain must be collected. A dataset of patients in care environments is therefore needed, but publicly available datasets are very small. Flipping, rotation, and similar techniques can increase the number of samples, but they generate data with the same feature distribution, so naively applying such augmentation causes the deep learning model to overfit. In addition, image datasets from care environments may contain personal information such as exposed faces, which must be de-identified. Therefore, this paper applies a Style-Generative Adversarial Networks technique to augment data collected in care environments and proposes an augmentation method that is effective for building care-environment datasets.
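
A minimal sketch of the augmentation idea described above, not the authors' implementation: synthetic images sampled from a generator are mixed into a small real dataset. The generator here is a placeholder stand-in for a pretrained StyleGAN-style model, and the latent size, image size, and dataset sizes are assumptions.

```python
import torch
from torch import nn

# Placeholder for a pretrained StyleGAN-style generator (latent vector in, image out).
# Latent size and image size are assumptions, not values from the paper.
latent_dim, image_size = 512, 64

generator = nn.Sequential(                 # stand-in for a trained generator network
    nn.Linear(latent_dim, 3 * image_size * image_size),
    nn.Tanh(),
    nn.Unflatten(1, (3, image_size, image_size)),
)
generator.eval()

num_synthetic = 1000                       # number of synthetic patient images to add
with torch.no_grad():
    z = torch.randn(num_synthetic, latent_dim)
    synthetic = generator(z)               # (N, 3, H, W), values in [-1, 1]

# Because the faces are synthetic, they carry no real personal information,
# which relates to the de-identification concern raised in the abstract.
real = torch.rand(200, 3, image_size, image_size) * 2 - 1   # stand-in for the small real dataset
augmented = torch.cat([real, synthetic], dim=0)
print("augmented dataset size:", augmented.shape[0])
```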

A Study on Automatic Vehicle Extraction within Drone Image Bounding Box Using Unsupervised SVM Classification Technique (무감독 SVM 분류 기법을 통한 드론 영상 경계 박스 내 차량 자동 추출 연구)

  • Junho Yeom
    • Land and Housing Review / v.14 no.4 / pp.95-102 / 2023
  • Numerous investigations have explored the integration of machine learning algorithms with high-resolution drone imagery for object detection in urban settings. However, a prevalent limitation in vehicle extraction studies is the reliance on bounding boxes rather than instance segmentation, which hinders the precise determination of vehicle direction and exact boundaries. Instance segmentation, while providing detailed object boundaries, necessitates labour-intensive labelling of individual objects, prompting the need for research on automating unsupervised instance segmentation in vehicle extraction. In this study, a novel approach was proposed for vehicle extraction utilizing unsupervised SVM classification applied to vehicle bounding boxes in drone images. The method aims to address the challenges associated with bounding box-based approaches and provide a more accurate representation of vehicle boundaries. The study showed promising results, demonstrating 89% accuracy in vehicle extraction. Notably, the proposed technique proved effective even when dealing with significant variations in spectral characteristics within the vehicles. This research contributes to advancing the field by offering a viable solution for automatic and unsupervised instance segmentation in the context of vehicle extraction from images.
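
The abstract does not spell out how the unsupervised SVM classification is set up; one plausible reading is that unsupervised clustering provides pseudo-labels inside each bounding box, which then train a per-box SVM. The hypothetical helper below sketches that reading; the K-means pseudo-labeling, the border heuristic, and the function name are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def extract_vehicle_mask(box_pixels: np.ndarray) -> np.ndarray:
    """box_pixels: (H, W, 3) RGB crop of a single vehicle bounding box.

    Sketch of an unsupervised SVM per-pixel classification: K-means supplies
    pseudo-labels (vehicle vs. background), and an SVM refines the decision.
    """
    h, w, _ = box_pixels.shape
    X = box_pixels.reshape(-1, 3).astype(float)

    # Step 1: pseudo-label the pixels without any manual annotation.
    pseudo = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    # Heuristic assumption: the cluster covering fewer border pixels is the
    # vehicle, since the vehicle tends to sit in the box center.
    border = np.zeros((h, w), dtype=bool)
    border[0, :] = border[-1, :] = border[:, 0] = border[:, -1] = True
    border_labels = pseudo.reshape(h, w)[border]
    vehicle_cluster = np.argmin([np.mean(border_labels == c) for c in (0, 1)])

    # Step 2: train an SVM on the pseudo-labels and classify every pixel.
    svm = SVC(kernel="rbf", gamma="scale").fit(X, pseudo == vehicle_cluster)
    return svm.predict(X).reshape(h, w)
```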

Automatic Fracture Detection in CT Scan Images of Rocks Using Modified Faster R-CNN Deep-Learning Algorithm with Rotated Bounding Box (회전 경계박스 기능의 변형 FASTER R-CNN 딥러닝 알고리즘을 이용한 암석 CT 영상 내 자동 균열 탐지)

  • Pham, Chuyen; Zhuang, Li; Yeom, Sun; Shin, Hyu-Soung
    • Tunnel and Underground Space / v.31 no.5 / pp.374-384 / 2021
  • In this study, we propose a new approach for automatic fracture detection in CT scan images of rock specimens. The approach is built on top of the two-stage object detection deep learning algorithm Faster R-CNN, with the major modification of using rotated bounding boxes. The rotated bounding box plays a key role in future work to overcome several inherent difficulties of fracture segmentation related to the heterogeneity of the uninteresting background (i.e., minerals) and the variation in fracture size and shape. Compared to the commonly used axis-aligned bounding box, the rotated bounding box adapts better to the elongated shape of a fracture, thereby minimizing the ratio of background within the box. An additional benefit of the rotated bounding box is that it provides relative information on the orientation and length of a fracture without a further segmentation and measurement step. To validate the applicability of the proposed approach, we train and test it on a number of CT image sets of fractured granite specimens with highly heterogeneous backgrounds and of other rocks such as sandstone and shale. The results demonstrate that our approach achieves encouraging fracture detection performance, with a mean average precision (mAP) of up to 0.89, and also outperforms the conventional approach in terms of the background-to-object ratio within the bounding box.
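
As a small illustration of the rotated-box representation (not of the modified Faster R-CNN itself), the sketch below fits a rotated rectangle to a synthetic binary fracture mask with OpenCV; the long side and angle of that rectangle directly yield fracture length and orientation, as the abstract notes. The synthetic mask and thresholds are assumptions.

```python
import cv2
import numpy as np

# Synthetic "fracture": a thin diagonal line in a binary mask.
mask = np.zeros((256, 256), dtype=np.uint8)
cv2.line(mask, (30, 200), (220, 60), color=255, thickness=3)

# Fit the minimum-area rotated rectangle around the fracture pixels.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
(cx, cy), (w, h), angle = cv2.minAreaRect(contours[0])

length = max(w, h)                               # fracture length ~ long side of the box
orientation = angle if w >= h else angle + 90.0  # rough orientation of the long side
print(f"center=({cx:.1f},{cy:.1f}) length={length:.1f}px orientation={orientation:.1f} deg")
```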

Improvement of Mid-Wave Infrared Image Visibility Using Edge Information of KOMPSAT-3A Panchromatic Image (KOMPSAT-3A 전정색 영상의 윤곽 정보를 이용한 중적외선 영상 시인성 개선)

  • Jinmin Lee; Taeheon Kim; Hanul Kim; Hongtak Lee; Youkyung Han
    • Korean Journal of Remote Sensing / v.39 no.6_1 / pp.1283-1297 / 2023
  • Mid-wave infrared (MWIR) imagery, owing to its ability to capture the temperature of land cover and objects, serves as a crucial data source in various fields including environmental monitoring and defense. The KOMPSAT-3A satellite acquires MWIR imagery with high spatial resolution compared to other satellites. However, the limited spatial resolution of MWIR imagery, in comparison to electro-optical (EO) imagery, constrains the optimal utilization of the KOMPSAT-3A data. This study aims to create a highly visible MWIR fusion image by leveraging the edge information from the KOMPSAT-3A panchromatic (PAN) image. Preprocessing is implemented to mitigate the relative geometric errors between the PAN and MWIR images. Subsequently, we employ a pre-trained pixel difference network (PiDiNet), a deep learning-based edge extraction technique, to extract the boundaries of objects from the preprocessed PAN images. The MWIR fusion imagery is then generated by emphasizing the brightness values corresponding to the edge information of the PAN image. To evaluate the proposed method, MWIR fusion images were generated at three different sites. As a result, the boundaries of terrain and objects in the MWIR fusion images were emphasized, providing detailed thermal information of the areas of interest. In particular, the MWIR fusion image provided thermal information on objects such as airplanes and ships that are hard to detect in the original MWIR images. This study demonstrates that the proposed method can generate a single image combining visible details from an EO image with thermal information from an MWIR image, which contributes to increasing the usage of MWIR imagery.
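
A minimal sketch of the "emphasize MWIR brightness along PAN edges" step, assuming the images are already co-registered and resampled to the same grid. Canny stands in for PiDiNet, the additive fusion rule is a simplification of whatever emphasis the paper actually uses, and the file names, thresholds, and weight are assumptions.

```python
import cv2
import numpy as np

# Load co-registered PAN and MWIR images (assumed file names).
pan = cv2.imread("pan.tif", cv2.IMREAD_GRAYSCALE).astype(np.float32)
mwir = cv2.imread("mwir.tif", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Edge map from the PAN image (Canny as a stand-in for PiDiNet), scaled to [0, 1].
edges = cv2.Canny(pan.astype(np.uint8), 100, 200).astype(np.float32) / 255.0

alpha = 40.0                                    # edge emphasis strength (assumed)
fused = np.clip(mwir + alpha * edges, 0, 255)   # brighten MWIR pixels along PAN edges

cv2.imwrite("mwir_fused.png", fused.astype(np.uint8))
```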

Automated Analyses of Ground-Penetrating Radar Images to Determine Spatial Distribution of Buried Cultural Heritage (매장 문화재 공간 분포 결정을 위한 지하투과레이더 영상 분석 자동화 기법 탐색)

  • Kwon, Moonhee; Kim, Seung-Sep
    • Economic and Environmental Geology / v.55 no.5 / pp.551-561 / 2022
  • Geophysical exploration methods are very useful for generating high-resolution images of underground structures, and such methods can be applied to the investigation of buried cultural properties and the determination of their exact locations. In this study, image feature extraction and image segmentation methods were applied to automatically distinguish the structures of buried relics in high-resolution ground-penetrating radar (GPR) images obtained at the center of the Silla Kingdom, Gyeongju, South Korea. The main purpose of the image feature extraction analyses is to identify the circular features of building remains and the linear features of ancient roads and fences. Feature extraction is implemented by applying the Canny edge detection and Hough transform algorithms. We applied the Hough transform to the edge image resulting from the Canny algorithm in order to determine the locations of the target features. However, the Hough transform requires different parameter settings for each survey sector. As for image segmentation, we applied the connected-component labeling algorithm and object-based image analysis using the Orfeo Toolbox (OTB) in QGIS. The connected-component labeled image shows that the signals associated with the target buried relics are effectively connected and labeled. However, we often find multiple labels assigned to a single structure in the given GPR data. Object-based image analysis was conducted using Large-Scale Mean-Shift (LSMS) image segmentation. In this analysis, a vector layer containing pixel values for each segmented polygon was estimated first and then used to build a train-validation dataset by assigning the polygons to one class associated with the buried relics and another class for the background field. With the Random Forest classifier, we find that the polygons of the LSMS segmentation layer can be successfully classified into polygons of the buried relics and polygons of the background. Thus, we propose that the automatic classification methods applied to the GPR images of buried cultural heritage in this study can be useful for obtaining consistent analysis results when planning excavation processes.
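
A short sketch of the Canny + Hough feature extraction step described above. The thresholds and Hough parameters below are placeholders, which matches the abstract's observation that each survey sector needs its own parameter settings; the input file name is an assumption.

```python
import cv2
import numpy as np

# Load a GPR depth slice (assumed file name).
gpr = cv2.imread("gpr_depth_slice.png", cv2.IMREAD_GRAYSCALE)

# Canny edge detection on the GPR slice.
edges = cv2.Canny(gpr, 50, 150)

# Linear features (candidate ancient roads and fences) from the edge image.
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=40, maxLineGap=10)

# Circular features (candidate building remains).
circles = cv2.HoughCircles(gpr, cv2.HOUGH_GRADIENT, 1.2, 50,
                           param1=150, param2=40, minRadius=10, maxRadius=80)

print("lines:", 0 if lines is None else len(lines))
print("circles:", 0 if circles is None else circles.shape[1])
```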

A Performance Comparison of Land-Based Floating Debris Detection Based on Deep Learning and Its Field Applications (딥러닝 기반 육상기인 부유쓰레기 탐지 모델 성능 비교 및 현장 적용성 평가)

  • Suho Bak; Seon Woong Jang; Heung-Min Kim; Tak-Young Kim; Geon Hui Ye
    • Korean Journal of Remote Sensing / v.39 no.2 / pp.193-205 / 2023
  • A large amount of floating debris from land-based sources during heavy rainfall has negative social, economic, and environmental impacts, but there is a lack of monitoring systems for floating debris accumulation areas and amounts. With the recent development of artificial intelligence technology, there is a need to quickly and efficiently survey large areas of water systems using drone imagery and deep learning-based object detection models. In this study, we acquired various images as well as drone images and trained You Only Look Once (YOLO)v5s and the more recently developed YOLOv7 and YOLOv8s to compare the performance of each model and propose an efficient detection technique for land-based floating debris. The qualitative performance evaluation showed that all three models detect floating debris well under normal circumstances, but the YOLOv8s model missed or duplicated objects when the image was overexposed or the water surface strongly reflected sunlight. The quantitative performance evaluation showed that YOLOv7 had the best performance, with a mean Average Precision (intersection over union, IoU 0.5) of 0.940, better than YOLOv5s (0.922) and YOLOv8s (0.922). When distortions of the color and high-frequency components were generated to compare the performance of the models according to data quality, the performance degradation of the YOLOv8s model was the most obvious, and the YOLOv7 model showed the lowest degradation. This study confirms that the YOLOv7 model is more robust than the YOLOv5s and YOLOv8s models in detecting land-based floating debris. The deep learning-based floating debris detection technique proposed in this study can identify the spatial distribution of floating debris by category, which can contribute to the planning of future cleanup work.
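
The abstract mentions color and high-frequency distortions for the data-quality comparison without specifying them; the sketch below uses a hue shift and a Gaussian blur as simple stand-ins for those two distortion types. The function names, parameters, and file names are assumptions; the distorted images would then be fed back through each detector for re-evaluation.

```python
import cv2
import numpy as np

def distort_color(img: np.ndarray, hue_shift: int = 15) -> np.ndarray:
    """Shift the hue channel as a simple stand-in for a color distortion."""
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hsv[..., 0] = (hsv[..., 0].astype(int) + hue_shift) % 180
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

def distort_high_frequency(img: np.ndarray, ksize: int = 7) -> np.ndarray:
    """Suppress fine detail with a Gaussian blur as a high-frequency distortion."""
    return cv2.GaussianBlur(img, (ksize, ksize), 0)

image = cv2.imread("drone_frame.jpg")               # assumed test frame
for name, distorted in [("color", distort_color(image)),
                        ("highfreq", distort_high_frequency(image))]:
    cv2.imwrite(f"test_{name}.jpg", distorted)      # re-run each YOLO model on these
```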

Region of Interest Extraction and Bilinear Interpolation Application for Preprocessing of Lipreading Systems (입 모양 인식 시스템 전처리를 위한 관심 영역 추출과 이중 선형 보간법 적용)

  • Jae Hyeok Han; Yong Ki Kim; Mi Hye Kim
    • The Transactions of the Korea Information Processing Society / v.13 no.4 / pp.189-198 / 2024
  • Lipreading is an important part of speech recognition, and several studies have been conducted to improve lipreading performance in lipreading systems for speech recognition. Recent studies have improved recognition performance by modifying the model architecture of the lipreading system. Unlike previous research that improves recognition performance by modifying the model architecture, we aim to improve recognition performance without any change to the model architecture. To do so, we refer to the cues used in human lipreading and set other regions such as the chin and cheeks as regions of interest, in addition to the lip region that is the conventional region of interest of lipreading systems, and compare the recognition rate of each region of interest to propose the best-performing one. In addition, assuming that differences in normalization results caused by the interpolation method used when normalizing the size of the region of interest affect recognition performance, we interpolate the same region of interest using nearest-neighbor, bilinear, and bicubic interpolation and compare the recognition rate of each method to propose the best-performing interpolation method. Each region of interest was detected by training an object detection neural network, and dynamic time warping templates were generated by normalizing each region of interest, extracting and combining features, and mapping the dimensionality-reduced combined features into a low-dimensional space. The recognition rate was evaluated by comparing the distance between the generated dynamic time warping templates and the data mapped to the low-dimensional space. In the comparison of regions of interest, the region of interest containing only the lip region showed an average recognition rate of 97.36%, which is 3.44% higher than the average recognition rate of 93.92% in the previous study. In the comparison of interpolation methods, bilinear interpolation achieved 97.36%, which is 14.65% higher than nearest-neighbor interpolation and 5.55% higher than bicubic interpolation. The code used in this study can be found at https://github.com/haraisi2/Lipreading-Systems.
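
A minimal sketch of the ROI-size normalization step compared in the paper: the same cropped region is resized to a fixed shape with the three interpolation methods under comparison. The target size and the file names are assumptions; downstream, each normalized ROI would go through feature extraction and the dynamic time warping template comparison described in the abstract.

```python
import cv2

roi = cv2.imread("lip_roi.png", cv2.IMREAD_GRAYSCALE)   # assumed cropped ROI
target = (64, 64)                                        # assumed normalized size

# Resize the same ROI with the three interpolation methods compared in the paper.
normalized = {
    "nearest": cv2.resize(roi, target, interpolation=cv2.INTER_NEAREST),
    "bilinear": cv2.resize(roi, target, interpolation=cv2.INTER_LINEAR),
    "bicubic": cv2.resize(roi, target, interpolation=cv2.INTER_CUBIC),
}

for name, img in normalized.items():
    cv2.imwrite(f"roi_{name}.png", img)
```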

Development of deep learning network based low-quality image enhancement techniques for improving foreign object detection performance (이물 객체 탐지 성능 개선을 위한 딥러닝 네트워크 기반 저품질 영상 개선 기법 개발)

  • Ki-Yeol Eom; Byeong-Seok Min
    • Journal of Internet Computing and Services / v.25 no.1 / pp.99-107 / 2024
  • Along with economic growth and industrial development, there is an increasing demand for the production of various electronic components and devices such as semiconductors, SMT components, and electric battery products. However, these products may contain foreign substances introduced during the manufacturing process, such as iron, aluminum, and plastic, which can lead to serious problems or product malfunction, and even fires in electric vehicles. To solve these problems, it is necessary to determine whether there are foreign materials inside the product, and many tests have been done by means of non-destructive testing methodologies such as ultrasound or X-ray. Nevertheless, there are technical challenges and limitations in acquiring X-ray images and determining the presence of foreign materials. In particular, small or low-density foreign materials may not be visible even when X-ray equipment is used, and noise can also make it difficult to detect foreign objects. Moreover, in order to meet manufacturing speed requirements, the X-ray acquisition time should be reduced, which can result in a very low signal-to-noise ratio (SNR) and lower foreign material detection accuracy. Therefore, in this paper, we propose a five-step approach to overcome the limitations of low-resolution images, which make it challenging to detect foreign substances. First, the global contrast of the X-ray images is increased through histogram stretching. Second, to strengthen the high-frequency signal and local contrast, a local contrast enhancement technique is applied. Third, to improve edge clearness, unsharp masking is applied to enhance edges and make objects more visible. Fourth, the super-resolution method based on the Residual Dense Block (RDB) is used for noise reduction and image enhancement. Last, the YOLOv5 algorithm is employed to train on and detect foreign objects. Using the proposed method, experimental results show an improvement of more than 10% in performance metrics such as precision compared to low-density images.
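
A sketch of the first three enhancement steps named in the abstract (the RDB super-resolution and YOLOv5 stages are omitted). CLAHE is an assumption for the unspecified "local contrast enhancement technique", and all parameters and file names are placeholders rather than values from the paper.

```python
import cv2

xray = cv2.imread("xray.png", cv2.IMREAD_GRAYSCALE)   # assumed input image

# 1. Histogram stretching: map the observed intensity range to the full 0-255 range.
stretched = cv2.normalize(xray, None, 0, 255, cv2.NORM_MINMAX)

# 2. Local contrast enhancement (assumed to be CLAHE here).
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
local = clahe.apply(stretched)

# 3. Unsharp masking: subtract a blurred copy to boost edges and fine detail.
blurred = cv2.GaussianBlur(local, (5, 5), 0)
sharpened = cv2.addWeighted(local, 1.5, blurred, -0.5, 0)

cv2.imwrite("xray_enhanced.png", sharpened)
```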

Recognition method using stereo images-based 3D information for improvement of face recognition (얼굴인식의 향상을 위한 스테레오 영상기반의 3차원 정보를 이용한 인식)

  • Park Chang-Han; Paik Joon-Ki
    • Journal of the Institute of Electronics Engineers of Korea CI / v.43 no.3 s.309 / pp.30-38 / 2006
  • In this paper, we improve the face recognition rate, which drops with distance, by using distance and depth information obtained from stereo face images. A monocular face image causes the recognition rate to drop because of uncertain information such as the object's distance, size, movement, rotation, and depth. Recognition also suffers in many cases when variations such as rotation, illumination, and pose change are not captured in the acquired image. We aim to solve these problems. The proposed method consists of an eye detection algorithm, face pose analysis, and principal component analysis (PCA). We also convert from RGB to YCbCr color space for fast face detection in a limited region. We create a multi-layered relative intensity map in the face candidate region and decide whether it is a face based on facial geometry. Depth information for the distance, eyes, and mouth is acquired from the stereo face images. The proposed method detects faces under changes in scale, movement, and rotation by using distance and depth. The face detected in the left image and the estimated direction difference are trained using PCA. Simulation results show a face recognition rate of 95.83% for frontal faces at 100 cm and 98.3% under pose change. Therefore, the proposed method can achieve a high recognition rate with appropriate scaling and pose change according to distance.
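
A minimal sketch of the color-space step in the abstract: convert the camera frame to YCbCr and keep a rough skin-tone region as the face candidate on which later steps (relative intensity map, eye detection, PCA) would run. The Cb/Cr thresholds are common heuristic values and the file name is an assumption, not values from the paper.

```python
import cv2
import numpy as np

bgr = cv2.imread("left_frame.jpg")                 # OpenCV loads images as BGR
ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)     # note OpenCV's YCrCb channel order

_, cr, cb = cv2.split(ycrcb)
# Heuristic skin-tone window in the Cr/Cb plane (assumed thresholds).
skin_mask = ((cr >= 133) & (cr <= 173) & (cb >= 77) & (cb <= 127)).astype(np.uint8) * 255

# Limit further processing to the bounding box of the skin-colored region.
x, y, w, h = cv2.boundingRect(cv2.findNonZero(skin_mask))
face_candidate = bgr[y:y + h, x:x + w]
print("face candidate region:", face_candidate.shape)
```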

A Study on Enhancing the Performance of Detecting Lip Feature Points for Facial Expression Recognition Based on AAM (AAM 기반 얼굴 표정 인식을 위한 입술 특징점 검출 성능 향상 연구)

  • Han, Eun-Jung; Kang, Byung-Jun; Park, Kang-Ryoung
    • The KIPS Transactions:PartB / v.16B no.4 / pp.299-308 / 2009
  • AAM (Active Appearance Model) is an algorithm that extracts facial feature points using statistical models of shape and texture information based on PCA (Principal Component Analysis). This method is widely used for face recognition, face modeling, and expression recognition. However, the detection performance of the AAM algorithm is sensitive to the initial value, and the detection error increases when an input image differs substantially from the training data. In particular, the algorithm shows high accuracy for closed lips, but the detection error increases for opened or deformed lips according to the user's facial expression. To solve these problems, we propose an improved AAM algorithm using lip feature points extracted by a new lip detection algorithm. In this paper, we select a search region based on the face feature points detected by the AAM algorithm. Lip corner points are then extracted by using Canny edge detection and a histogram projection method in the selected search region. The lip region is accurately detected by combining the color and edge information of the lip in a search region adjusted according to the positions of the detected lip corners. Based on this, the accuracy and processing speed of lip detection are improved. Experimental results showed that the RMS (Root Mean Square) error of the proposed method was reduced by as much as 4.21 pixels compared to that of the method using only the AAM algorithm.
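
A minimal sketch of the lip-corner step described above: Canny edges are computed inside an assumed search region, and a column-wise histogram projection of the edge pixels bounds the mouth horizontally, giving candidate left and right lip corners. The region coordinates, thresholds, and file name are assumptions rather than values from the paper.

```python
import cv2
import numpy as np

frame = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)   # assumed input face image
x, y, w, h = 120, 200, 160, 80                          # assumed lip search region from AAM points
region = frame[y:y + h, x:x + w]

edges = cv2.Canny(region, 50, 150)

# Column-wise projection of edge pixels: the leftmost and rightmost columns with
# edge evidence approximate the horizontal extent of the mouth.
column_profile = edges.sum(axis=0)
columns = np.nonzero(column_profile)[0]
if columns.size:
    left_corner = (x + int(columns[0]), y + int(np.argmax(edges[:, columns[0]])))
    right_corner = (x + int(columns[-1]), y + int(np.argmax(edges[:, columns[-1]])))
    print("candidate lip corners:", left_corner, right_corner)
```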