• Title/Summary/Keyword: Automatic Data Extraction

Automatic 3D data extraction method of fashion image with mannequin using watershed and U-net (워터쉐드와 U-net을 이용한 마네킹 패션 이미지의 자동 3D 데이터 추출 방법)

  • Youngmin Park
    • The Journal of the Convergence on Culture Technology / v.9 no.3 / pp.825-834 / 2023
  • Demand from people who purchase fashion products through Internet shopping is gradually increasing, and attempts are being made to provide user-friendly images with 3D content and web 3D software instead of the pictures and videos currently provided. The main driver of this shift, which has become the most important issue in the fashion web shopping industry, is growing complaints that the product received differs from the image shown at the time of purchase. Various image processing technologies have been introduced to solve this problem, but 2D images have inherent quality limits. In this study, we propose an automatic conversion technology that converts 2D images into 3D, grafts them onto web 3D technology so that customers can inspect products from various viewpoints, and reduces the cost and computation time required for conversion. We developed a system that photographs a mannequin placed on a rotating turntable using only eight cameras. To extract only the clothing part from the images taken by this system, markers are removed using U-net, and we propose an algorithm that extracts only the clothing area by identifying the color feature information of the background and mannequin areas. With this algorithm, extracting the clothing area takes 2.25 seconds per image, or 144 seconds (2 minutes and 24 seconds) in total for the 64 images of one piece of clothing. The system extracts 3D objects with very good performance compared to existing approaches.
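
A minimal sketch of the color-based clothing extraction step described in this abstract, assuming hypothetical HSV ranges for the background and mannequin areas (the paper derives these from color feature information but does not publish thresholds; its U-net marker-removal stage is omitted here):

```python
import cv2
import numpy as np

# Illustrative HSV bounds only; the paper's actual color feature
# information for the background and mannequin areas is not given.
BG_LOW, BG_HIGH = np.array([0, 0, 200]), np.array([180, 30, 255])
MANNEQUIN_LOW, MANNEQUIN_HIGH = np.array([0, 0, 60]), np.array([180, 40, 170])

def extract_clothing(image_bgr: np.ndarray) -> np.ndarray:
    """Keep only pixels belonging to neither background nor mannequin."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    bg_mask = cv2.inRange(hsv, BG_LOW, BG_HIGH)
    mk_mask = cv2.inRange(hsv, MANNEQUIN_LOW, MANNEQUIN_HIGH)
    clothing_mask = cv2.bitwise_not(cv2.bitwise_or(bg_mask, mk_mask))
    # Remove small speckles before masking the original image.
    kernel = np.ones((5, 5), np.uint8)
    clothing_mask = cv2.morphologyEx(clothing_mask, cv2.MORPH_OPEN, kernel)
    return cv2.bitwise_and(image_bgr, image_bgr, mask=clothing_mask)
```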

Automatic Extraction of Eye and Mouth Fields from Face Images using MultiLayer Perceptrons and Eigenfeatures (고유특징과 다층 신경망을 이용한 얼굴 영상에서의 눈과 입 영역 자동 추출)

  • Ryu, Yeon-Sik; O, Se-Yeong
    • Journal of the Institute of Electronics Engineers of Korea CI / v.37 no.2 / pp.31-43 / 2000
  • This paper presents a novel algorithm for extracting the eye and mouth fields (facial features) from 2D gray-level face images. First, it has been found that eigenfeatures, derived from the eigenvalues and eigenvectors of the binary edge data set constructed from the eye and mouth fields, are very good features for locating these fields. The eigenfeatures, extracted from positive and negative training samples of the facial features, are used to train a multilayer perceptron (MLP) whose output indicates the degree to which a particular image window contains the eye or the mouth. Second, to ensure robustness, an ensemble network consisting of multiple MLPs is used instead of a single MLP; the ensemble output is the average of the field locations found by the constituent MLPs. Finally, to reduce computation time, a coarse search region for the eyes and mouth is extracted using prior information about face images. The advantages of the proposed approach are that only a small number of frontal faces are sufficient to train the networks, and that the networks generalize well to non-frontal poses and even to other people's faces. It was also experimentally verified that the proposed algorithm is robust against slight variations in facial size and pose, owing to the generalization characteristics of neural networks.
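
A hedged scikit-learn sketch of the eigenfeature/MLP-ensemble idea, assuming flattened binary edge windows as input; the window size, component count, and network shape are illustrative, and this version averages window scores where the paper averages the detected field locations:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def train_ensemble(windows: np.ndarray, labels: np.ndarray, n_nets: int = 5):
    """windows: (N, D) flattened binary edge patches; labels: 1 for
    eye/mouth windows, 0 for negative samples (sizes are assumptions)."""
    pca = PCA(n_components=20).fit(windows)          # eigenfeatures
    feats = pca.transform(windows)
    nets = [MLPClassifier(hidden_layer_sizes=(30,), max_iter=500,
                          random_state=seed).fit(feats, labels)
            for seed in range(n_nets)]
    return pca, nets

def score_window(pca, nets, window: np.ndarray) -> float:
    """Average the constituent MLP outputs over the ensemble."""
    feat = pca.transform(window.reshape(1, -1))
    return float(np.mean([net.predict_proba(feat)[0, 1] for net in nets]))
```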

Adaptive Image Content-Based Retrieval Techniques for Multiple Queries (다중 질의를 위한 적응적 영상 내용 기반 검색 기법)

  • Hong Jong-Sun; Kang Dae-Seong
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.3 s.303 / pp.73-80 / 2005
  • Recently there have been many efforts to support searching and browsing based on the visual content of image and multimedia data. Most existing approaches to content-based image retrieval rely on query-by-example or on user-supplied low-level features such as color, shape, and texture, but these query methods are not easy to use and are restrictive. In this paper we propose a method for automatic color object extraction and labeling to support multiple queries in a content-based image retrieval system. The approach simplifies the regions within images using a single colorizing algorithm and extracts color objects using the proposed Color and Spatial based Binary tree map (CSB tree map). By searching over a large number of processed regions, an index for the database is created using the proposed labeling method, which allows very fast indexing of images by color content and spatial attributes. Furthermore, information about the labeled regions, such as color set, size, and location, enables flexible multiple queries that combine both the color content and the spatial relationships of regions. Experiments on the 'Washington' image database showed that the proposed system achieves high performance compared with another algorithm.
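
The CSB tree map itself is not detailed in the abstract; as an illustrative stand-in, the sketch below quantizes colors, labels connected regions with OpenCV, and records the color, size, and location attributes that the multiple-query index relies on:

```python
import cv2
import numpy as np

def label_color_regions(image_bgr: np.ndarray, levels: int = 4):
    """Quantize colors, then record attributes of each connected region."""
    step = 256 // levels
    quantized = (image_bgr // step) * step + step // 2   # simplify regions
    regions = []
    for color in np.unique(quantized.reshape(-1, 3), axis=0):
        mask = cv2.inRange(quantized, color, color)
        n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
        for i in range(1, n):                            # label 0 = rest
            if stats[i, cv2.CC_STAT_AREA] > 100:         # skip specks
                regions.append({"color": color.tolist(),
                                "size": int(stats[i, cv2.CC_STAT_AREA]),
                                "location": centroids[i].tolist()})
    return regions
```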

Automated Analyses of Ground-Penetrating Radar Images to Determine Spatial Distribution of Buried Cultural Heritage (매장 문화재 공간 분포 결정을 위한 지하투과레이더 영상 분석 자동화 기법 탐색)

  • Kwon, Moonhee; Kim, Seung-Sep
    • Economic and Environmental Geology / v.55 no.5 / pp.551-561 / 2022
  • Geophysical exploration methods are very useful for generating high-resolution images of underground structures, and such methods can be applied to the investigation of buried cultural properties and the determination of their exact locations. In this study, image feature extraction and image segmentation methods were applied to automatically distinguish the structures of buried relics in high-resolution ground-penetrating radar (GPR) images obtained at the center of the Silla Kingdom, Gyeongju, South Korea. The major purpose of the feature extraction analyses is to identify circular features from building remains and linear features from ancient roads and fences. Feature extraction is implemented by applying the Canny edge detection and Hough transform algorithms: we applied Hough transforms to the edge image produced by the Canny algorithm to determine the locations of the target features, although the Hough transform requires different parameter settings for each survey sector. For image segmentation, we applied a connected-component labeling algorithm and object-based image analysis using the Orfeo Toolbox (OTB) in QGIS. The connected-component labeled image shows that the signals associated with the target buried relics are effectively connected and labeled, although multiple labels are often assigned to a single structure in the given GPR data. Object-based image analysis was conducted using Large-Scale Mean-Shift (LSMS) image segmentation: a vector layer containing pixel values for each segmented polygon was estimated first and then used to build a training-validation dataset by assigning polygons either to a class associated with the buried relics or to a background class. With a random forest classifier, we find that the polygons in the LSMS segmentation layer can be successfully classified into buried-relic and background polygons. We therefore propose that the automatic classification methods applied to GPR images of buried cultural heritage in this study can be useful for obtaining consistent analysis results when planning excavation processes.
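
A minimal OpenCV sketch of the Canny-plus-Hough feature extraction named above, assuming an 8-bit grayscale GPR depth slice; all thresholds are illustrative, which matches the paper's observation that parameters must be re-tuned for each survey sector:

```python
import cv2
import numpy as np

def detect_gpr_features(gpr_slice: np.ndarray):
    """Linear features (roads, fences) via HoughLinesP and circular
    features (building remains) via HoughCircles on Canny edges."""
    edges = cv2.Canny(gpr_slice, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=80, minLineLength=40, maxLineGap=5)
    circles = cv2.HoughCircles(gpr_slice, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=30, param1=150, param2=40,
                               minRadius=10, maxRadius=60)
    return lines, circles
```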

An Accuracy Evaluation of Algorithm for Shoreline Change by using RTK-GPS (RTK-GPS를 이용한 해안선 변화 자동추출 알고리즘의 정확도 평가)

  • Lee, Jae One; Kim, Yong Suk; Lee, In Su
    • KSCE Journal of Civil and Environmental Engineering Research / v.32 no.1D / pp.81-88 / 2012
  • This research was carried out in two parts, field surveying and data processing, in order to analyze patterns of shoreline change. First, shoreline information measured by precise GPS positioning over a long period was collected. Second, an algorithm for automatically detecting shoreline boundaries from multi-temporal image data was developed, and a comparative analysis was conducted. Haeundae Beach, one of the most famous beaches in Korea, was selected as the test site. RTK-GPS surveying was performed eight times from September 2005 to September 2009, and aerial LiDAR surveys were conducted twice, in December 2006 and March 2009. The results from the two sensors differ slightly: the average shoreline length from RTK-GPS is approximately 1,364.6 m, while that from aerial LiDAR is about 1,402.5 m. The shoreline detection algorithm was implemented in Visual C++ with MFC (Microsoft Foundation Classes). The length estimated from aerial photos and satellite images was 1,391.0 m, and the automatic boundary detection reached a reliability of 98.1% when compared with the field survey data.
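
A small numpy sketch of the shoreline-length comparison reported above, assuming an ordered polyline of RTK-GPS points in projected (easting/northing) coordinates; the boundary-detection algorithm itself, implemented in Visual C++ in the paper, is not reproduced:

```python
import numpy as np

def shoreline_length(points_en: np.ndarray) -> float:
    """Sum of segment lengths along an (N, 2) shoreline polyline in meters."""
    diffs = np.diff(points_en, axis=0)
    return float(np.sum(np.hypot(diffs[:, 0], diffs[:, 1])))

# Applying this to each sensor's polyline would reproduce the
# 1,364.6 m (RTK-GPS) vs. 1,402.5 m (aerial LiDAR) comparison.
```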

Automatic Extraction of River Levee Slope Using MMS Point Cloud Data (MMS 포인트 클라우드를 활용한 하천제방 경사도 자동 추출에 관한 연구)

  • Kim, Cheolhwan; Lee, Jisang; Choi, Wonjun; Kim, Wondae; Sohn, Hong-Gyoo
    • Korean Journal of Remote Sensing / v.37 no.5_3 / pp.1425-1434 / 2021
  • Continuous and periodic data acquisition is a prerequisite for maintaining and managing river facilities effectively. Existing river surveying methods, such as terrestrial laser scanners, total stations, and Global Navigation Satellite System (GNSS) receivers, are limited in terms of cost, manpower, and time because river facilities are distributed across wide and long areas. The Mobile Mapping System (MMS), by contrast, has a comparative advantage in acquiring river facility data because it constructs three-dimensional spatial information while moving. Using MMS, 184,646,009 points were acquired for a 4 km stretch of Anyang Stream in only 20 minutes. Levee points were divided at 10 m intervals, so that about 378 levee cross sections were generated. In addition, the maximum and average waterside slopes were calculated automatically by separating the slope plane from the levee point cloud, and their accuracy was confirmed in terms of RMSE by comparison with manually calculated slopes. The reference slope was calculated manually by plotting the point cloud of the levee slope plane and selecting two points whose location information was used to compute the slope. A comparison of the waterside slopes with the slope standard in the basic river plan for Anyang Stream confirms that inspecting river facilities with MMS point clouds is preferable to existing river survey methods.
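
A hedged numpy sketch of the automatic slope calculation, assuming the slope-plane points of one 10 m levee section have already been separated from the levee point cloud; the paper's manual reference slope instead uses two selected points:

```python
import numpy as np

def waterside_slope(slope_points: np.ndarray) -> float:
    """Least-squares plane fit z = ax + by + c to (N, 3) slope-plane
    points in meters; returns rise per unit horizontal run."""
    A = np.c_[slope_points[:, 0], slope_points[:, 1],
              np.ones(len(slope_points))]
    (a, b, _), *_ = np.linalg.lstsq(A, slope_points[:, 2], rcond=None)
    return float(np.hypot(a, b))
```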

Detection of Plastic Greenhouses by Using Deep Learning Model for Aerial Orthoimages (딥러닝 모델을 이용한 항공정사영상의 비닐하우스 탐지)

  • Byunghyun Yoon; Seonkyeong Seong; Jaewan Choi
    • Korean Journal of Remote Sensing / v.39 no.2 / pp.183-192 / 2023
  • Remotely sensed data, such as satellite imagery and aerial photos, can be used to extract and detect objects through image interpretation and processing techniques. In particular, the potential for digital map updating and land monitoring through automatic object detection has increased as the spatial resolution of remotely sensed data has improved and deep learning technologies have developed. In this paper, we extracted plastic greenhouses from aerial orthophotos using a fully convolutional densely connected convolutional network (FC-DenseNet), one of the representative deep learning models for semantic segmentation, and then performed a quantitative analysis of the extraction results. Using the farm map of the Ministry of Agriculture, Food and Rural Affairs in Korea, training data was generated by labeling plastic greenhouses in the Damyang and Miryang areas, and FC-DenseNet was trained on this dataset. To apply the deep learning model to remotely sensed imagery, instance normalization, which can maintain the spectral characteristics of the bands, was used, and optimal weights for each band were determined by adding attention modules to the model. The experiments showed that the deep learning model can extract plastic greenhouses, and the results can be applied to digital map updating of the farm map and land-cover maps.
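
A short PyTorch sketch of the two design choices mentioned above, instance normalization and per-band attention; the attention block and the four-band stem are illustrative assumptions, not the paper's FC-DenseNet architecture:

```python
import torch
import torch.nn as nn

class BandAttention(nn.Module):
    """Learns a weight per input band via global pooling, echoing the
    paper's attention modules for band weighting (placement assumed)."""
    def __init__(self, n_bands: int):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(n_bands, n_bands), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.fc(x.mean(dim=(2, 3)))      # (N, C) global average pool
        return x * w[:, :, None, None]       # reweight each band

# Instance norm keeps per-image band statistics, unlike batch norm,
# which is why it suits the spectral characteristics of the imagery.
stem = nn.Sequential(BandAttention(4),
                     nn.Conv2d(4, 32, kernel_size=3, padding=1),
                     nn.InstanceNorm2d(32),
                     nn.ReLU())
```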

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk; Kim, Taeyeon; Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.1-25 / 2020
  • In this paper, we present an application system architecture that provides an accurate, fast, and efficient automatic gasometer reading function. The system captures a gasometer image using a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount by selective optical character recognition based on deep learning. In general, an image contains many types of characters and optical character recognition extracts all of them, but some applications need to ignore characters that are not of interest and focus only on specific types. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount from gasometer images in order to bill users; strings such as the device type, manufacturer, manufacturing date, and specification are not valuable to the application, which therefore has to analyze only the region of interest and specific character types. We adopted CNN (convolutional neural network) based object detection and CRNN (convolutional recurrent neural network) technology for selective optical character recognition that analyzes only the region of interest. We built three neural networks for the system: the first is a convolutional network that detects the regions of interest containing the gas usage amount and device ID strings; the second is a convolutional network that transforms the spatial information of a region of interest into sequential feature vectors; and the third is a bidirectional long short-term memory network that converts those sequential feature vectors into character strings. The device ID consists of 12 Arabic numerals and the gas usage amount of 4-5 Arabic numerals. All system components are implemented in the Amazon Web Services cloud with Intel Xeon E5-2686 v4 CPUs and NVIDIA Tesla V100 GPUs. The architecture adopts a master-slave processing structure for efficient and fast parallel processing, coping with about 700,000 requests per day. A mobile device captures a gasometer image and transmits it to the master process in the cloud. The master process runs on the Xeon CPU and pushes each reading request into a FIFO (first in, first out) input queue. The slave process consists of the three deep neural networks, runs on the GPU module, and continuously polls the input queue for recognition requests. When a request arrives, the slave process converts the image into the device ID string, the gas usage amount string, and the position information of the strings, returns the information to an output queue, and resumes polling the input queue. The master process takes the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three networks: 22,985 images were used for training and validation, randomly split 8:2 for each training epoch, and 4,135 images were used for testing. The test images were categorized into 5 types: normal (clean images), noise (images with noise signal), reflex (images with light reflection in the gasometer region), scale (images with small object size due to long-distance capture), and slant (images that are not horizontally level). The final character string recognition accuracies for the device ID and gas usage amount on normal data are 0.960 and 0.864, respectively.
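
A toy Python sketch of the master-slave queue structure described above; `recognize` is a hypothetical stand-in for the three-network recognition pipeline, and in-process queues replace the actual cloud deployment:

```python
import queue
import threading

input_q: "queue.Queue[bytes]" = queue.Queue()   # FIFO reading requests
output_q: "queue.Queue[dict]" = queue.Queue()   # recognition results

def recognize(image: bytes) -> dict:
    # Placeholder for detection + CRNN inference on the GPU slave.
    return {"device_id": "000000000000", "usage": "0000"}

def slave_worker():
    while True:
        image = input_q.get()            # poll the input queue
        output_q.put(recognize(image))   # device ID + usage amount
        input_q.task_done()

threading.Thread(target=slave_worker, daemon=True).start()
input_q.put(b"<gasometer image bytes>")  # master pushes a request
print(output_q.get())                    # master reads the result
```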

Validation of Extreme Rainfall Estimation in an Urban Area derived from Satellite Data : A Case Study on the Heavy Rainfall Event in July, 2011 (위성 자료를 이용한 도시지역 극치강우 모니터링: 2011년 7월 집중호우를 중심으로)

  • Yoon, Sun-Kwon; Park, Kyung-Won; Kim, Jong Pil; Jung, Il-Won
    • Journal of Korea Water Resources Association / v.47 no.4 / pp.371-384 / 2014
  • This study developed a new algorithm for extreme rainfall extraction based on Communication, Ocean and Meteorological Satellite (COMS) and Tropical Rainfall Measuring Mission (TRMM) satellite image data and evaluated its applicability for the July 2011 heavy rainfall event in Seoul, South Korea. A power-series-regression-based Z-R relationship was employed to capture the empirical relationships between TRMM/PR, TRMM/VIRS, COMS, and Automatic Weather System (AWS) data at each elevation. The estimated Z-R relationship ($Z=303R^{0.72}$) agreed well with AWS observations (correlation coefficient = 0.57). The 10-minute rainfall intensities estimated from the COMS satellite using this Z-R relationship underestimated the observed intensities, while for small rainfall events the relationship tended to overestimate them; nevertheless, the overall patterns of estimated rainfall were very comparable with the observed data. The correlation coefficient and root mean square error (RMSE) of the 10-minute rainfall series from COMS against AWS were 0.517 and 3.146, respectively. In addition, the averaged error values of the spatial correlation matrix ranged from -0.530 to -0.228, indicating negative correlation. To reduce the error in extreme rainfall estimation from satellite data, more extreme factors need to be taken into account and the algorithm improved through further study. This study shows the potential utility of multi-geostationary satellite data for building sub-daily rainfall records and establishing real-time flood alert systems in ungauged watersheds.
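
Solving the fitted relationship $Z=303R^{0.72}$ for rain rate gives $R=(Z/303)^{1/0.72}$. The sketch below assumes the conventional dBZ-to-linear-reflectivity conversion, which the abstract does not spell out:

```python
import numpy as np

def rain_rate(z_dbz: np.ndarray) -> np.ndarray:
    """Invert Z = 303 * R**0.72; input in dBZ, output in mm/h."""
    z_linear = 10.0 ** (z_dbz / 10.0)     # dBZ -> mm^6/m^3 (assumed)
    return (z_linear / 303.0) ** (1.0 / 0.72)

print(rain_rate(np.array([20.0, 30.0, 40.0])))
```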

Dynamic ontology construction algorithm from Wikipedia and its application toward real-time nation image analysis (국가이미지 분석을 위한 위키피디아 실시간 동적 온톨로지 구축 알고리즘 및 적용)

  • Lee, Youngwhan
    • Journal of the Korean Data and Information Science Society / v.27 no.4 / pp.979-991 / 2016
  • Measuring nation images was a challenging task when offline surveys were the only option: they were not only prohibitively expensive but also time-consuming, and therefore ill-suited to a rapidly changing world. Although demand for real-time monitoring of nation images has been ever-increasing, an affordable and reliable way to measure them has not been available to date. In this study, the researcher developed a semi-automatic ontology construction algorithm, named "double-crossing double keyword collection" (DCDKC), to measure nation images from Wikipedia in real time. The resulting ontology, WikiOnto, can be used to reflect dynamic image changes. An instance of WikiOnto was constructed by applying the algorithm to the three big exporting countries in East Asia: Korea, Japan, and China. The numbers of page views for the words in the WikiOnto instance were then counted, and the counts for each country were compared to assess their suitability for tracking dynamic nation images. In conclusion, the results show how the images of the three countries changed over the study period, confirming that DCDKC can be used for a real-time nation-image monitoring system.
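
A hedged sketch of the page-view counting step, using the public Wikimedia pageviews REST API as a stand-in (the DCDKC keyword-collection algorithm itself is not reproduced; the project, dates, and example usage are illustrative):

```python
import requests

API = ("https://wikimedia.org/api/rest_v1/metrics/pageviews/"
       "per-article/en.wikipedia/all-access/user/{title}/daily/{start}/{end}")

def page_views(title: str, start: str, end: str) -> int:
    """Total views for one article title between YYYYMMDD dates."""
    url = API.format(title=title, start=start, end=end)
    resp = requests.get(url, headers={"User-Agent": "wikionto-sketch/0.1"})
    resp.raise_for_status()
    return sum(item["views"] for item in resp.json()["items"])

# Summing such counts over each country's WikiOnto terms would give
# the per-country totals the study compares.
```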