• Title/Summary/Keyword: Super-Resolution Algorithms (초해상화 알고리즘)


An Efficient Super Resolution Method for Time-Series Remotely Sensed Image (시계열 위성영상을 위한 효과적인 Super Resolution 기법)

  • Jung, Seung-Kyoon;Choi, Yun-Soo;Jung, Hyung-Sup
    • Spatial Information Research
    • /
    • v.19 no.1
    • /
    • pp.29-40
    • /
    • 2011
  • GOCI, the world's first ocean color imager in geostationary orbit, can acquire a total of eight images of the same region per day; however, its spatial resolution (500 m) is not sufficient for accurate land applications. Super Resolution (SR), a technique from the computer vision field that reconstructs a high-resolution (HR) image from multiple low-resolution (LR) images, can be applied to time-series remotely sensed images such as GOCI data: a higher-resolution image can be reconstructed from the multiple images, and cloud-masked areas of the images can be recovered. As a precedent study toward an efficient SR method for GOCI images, this research reproduced simulated data following the acquisition process of remotely sensed data and applied the simulated images to the proposed algorithm. The results on the simulated data showed that the LR images could be registered with sub-pixel accuracy, and the reconstructed HR image achieved an RMSE of 0.5763, a PSNR of 52.9183 dB, and an SSIM index of 0.9486 compared with the original HR image.
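The three quality metrics reported above can be sketched with their standard definitions. A minimal sketch: the SSIM here is a simplified single-window (global) form rather than the windowed index usually reported, and the test images are synthetic stand-ins.

```python
import numpy as np

def rmse(ref, img):
    # Root-mean-square error between the reference HR image and a reconstruction.
    return float(np.sqrt(np.mean((ref - img) ** 2)))

def psnr(ref, img, peak=255.0):
    # Peak signal-to-noise ratio in dB; higher means closer to the reference.
    e = rmse(ref, img)
    return float("inf") if e == 0 else 20.0 * np.log10(peak / e)

def ssim(ref, img, peak=255.0):
    # Simplified global SSIM; the standard index averages many local windows.
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mu_x, mu_y = ref.mean(), img.mean()
    vx, vy = ref.var(), img.var()
    cov = ((ref - mu_x) * (img - mu_y)).mean()
    return float(((2 * mu_x * mu_y + c1) * (2 * cov + c2)) /
                 ((mu_x ** 2 + mu_y ** 2 + c1) * (vx + vy + c2)))

rng = np.random.default_rng(0)
hr = rng.uniform(0, 255, (64, 64))              # stand-in "original HR" image
rec = hr + rng.normal(0, 0.5, hr.shape)         # a near-perfect reconstruction
print(round(rmse(hr, rec), 3), round(psnr(hr, rec), 1), round(ssim(hr, rec), 4))
```

A reconstruction as close as the one reported (RMSE ≈ 0.58 on 8-bit data) lands in the same PSNR regime (~53 dB) under these definitions.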

Visualization methods of Terra MODIS and GPM satellite orbits for Water Hazard Information System Monitoring (수재해 정보시스템 모니터링을 위한 Terra MODIS, GPM 궤도의 시각화 방안)

  • PARK, Gwang-Ha;CHAE, Hyo-Sok;HWANG, Eui-Ho;LEE, Jeong-Ju
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2016.05a
    • /
    • pp.318-318
    • /
    • 2016
  • Satellites can observe the entire country in near real time, including ungauged and inaccessible areas, so satellite data are used in analyses of water hazards such as drought and flood, and studies on the applicability of satellite-based water-hazard monitoring are also being conducted. Observed data are processed with algorithms at satellite operations centers such as NASA and JAXA and distributed over the Internet, and K-water has recently built a satellite data collection system for water-resources applications, collecting data from Aqua/Terra MODIS, GPM, GCOM-W1, and other satellites. The data are provided at various intervals ranging from 5 minutes to 16 days, but only simple information such as data type and measurement time is encoded in the file name, so obtaining the satellite's position (longitude/latitude) and the data for a given point requires the inconvenience of inspecting the data files themselves. In this study, the spatio-temporal attributes of sequentially observed satellite data were extracted and mapped together with the imagery, providing a method to visualize the satellite orbit over time. The data used for the orbit visualization were the 'MOD02SSH' product of Terra MODIS and the 'GPROF' product of the GPM GMI sensor. One 'MOD02SSH' file covers 5 minutes of observation at 5 km spatial resolution, and 'GPROF' covers 5 minutes at 4 km resolution. To verify the orbital period, Kepler's third law gives a period of 98.75 minutes for the Terra satellite, while the period derived from the satellite data is 98.87 minutes. The verification thus shows an error of about 0.12 minutes, which could be reduced with an accurate satellite altitude and higher-resolution data. The resulting animated time-series imagery allows orbit information to be extracted over time. This can be used for monitoring in the water hazard information system: with orbit information over time, the satellite position at a required time and the observations for a given point can be collected efficiently, shortening data collection time and enabling efficient monitoring operation for users and administrators.

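The Kepler's-third-law check above can be reproduced in a few lines. A sketch under stated assumptions: Terra's nominal altitude of about 705 km and a mean Earth radius of 6371 km are assumed values, not taken from the paper.

```python
import math

MU = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0    # mean Earth radius, m (assumed)
ALT_TERRA = 705_000.0    # Terra's nominal orbit altitude, m (assumed)

def orbital_period_min(altitude_m):
    # Kepler's third law for a circular orbit: T = 2*pi*sqrt(a^3 / mu),
    # with semi-major axis a = Earth radius + altitude.
    a = R_EARTH + altitude_m
    return 2.0 * math.pi * math.sqrt(a ** 3 / MU) / 60.0

print(f"{orbital_period_min(ALT_TERRA):.2f} min")
```

With these inputs the computed period comes out near the 98.75 minutes quoted in the abstract, and the sensitivity to altitude explains why an accurate altitude reduces the error against the data-derived 98.87 minutes.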

Development of compound eye image quality improvement based on ESRGAN (ESRGAN 기반의 복안영상 품질 향상 알고리즘 개발)

  • Taeyoon Lim;Yongjin Jo;Seokhaeng Heo;Jaekwan Ryu
    • Journal of the Korea Computer Graphics Society
    • /
    • v.30 no.2
    • /
    • pp.11-19
    • /
    • 2024
  • Demand for small biomimetic robots that can carry out reconnaissance missions in underground spaces and narrow passages without being exposed to the enemy is increasing in order to improve the fighting power and survivability of soldiers in wartime situations. A small compound-eye image sensor for environmental recognition offers advantages such as small size, low aberration, wide angle of view, depth estimation, and HDR, which can be exploited in various vision applications. However, because of the small lens size the resolution is low, and this resolution problem persists in the fused image obtained from an actual compound-eye sensor. This paper proposes a compound-eye image quality enhancement algorithm based on image enhancement and ESRGAN to overcome the low-resolution problem. Applying the proposed algorithm to fused compound-eye images improves image resolution and quality, so performance gains can be expected in various studies using compound-eye cameras.

Analysis of Intra Prediction for Digital Watermarking based on HEVC (HEVC기반의 디지털 워터마킹을 위한 인트라 예측의 분석)

  • Seo, Young-Ho;Kim, Bora;Kim, Dong-Wook
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.5
    • /
    • pp.1189-1198
    • /
    • 2015
  • Recently, with the rapid development of digital broadcasting technology, interest in and demand for high-definition video services have increased. Mobile and imaging devices now support resolutions 4 to 16 times that of existing Full HD. To supply such high-definition content, the high-efficiency video codec (HEVC) was proposed. Watermarking technology applicable to HEVC is therefore necessary for protecting ownership and intellectual property. In this paper, we analyze the prediction modes of intra frames and study the feasibility of watermarking under HEVC-based re-encoding. Using the results of the prediction-mode analysis, we propose a method to detect unchanged blocks in intra frames.

Cell-Based Wavelet Compression Method for Volume Data (볼륨 데이터를 위한 셀 기반 웨이브릿 압축 기법)

  • Kim, Tae-Yeong;Sin, Yeong-Gil
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.26 no.11
    • /
    • pp.1285-1295
    • /
    • 1999
  • This paper presents an efficient cell-based wavelet compression method for large volume data. The volume is divided into individual cells of {{}} voxels, and a wavelet transform is applied to each cell. The transformed cell is run-length encoded according to the reconstruction order, resulting in a fairly good compression ratio and fast reconstruction. A cache structure is used to speed up reconstruction, and a threshold-normalization scheme is presented that produces rendered images of the same high quality as normalized compression at the speed of unnormalized compression. We combined our compression method with shear-warp factorization, an accelerated volume rendering algorithm. Experimental results show a compression ratio of about 27:1 and a rendering time of about 3 seconds for {{}} data sets while preserving image quality close to that of the original data.
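The per-cell transform-then-encode idea above can be sketched in miniature. A toy sketch, not the paper's implementation: a one-level Haar transform (the paper does not specify the basis here) on a single small cell, followed by run-length encoding of the coefficients in a fixed order.

```python
import numpy as np

def haar2d(cell):
    # One-level 2-D Haar transform of a square cell with even side length.
    a = (cell[0::2] + cell[1::2]) / 2.0      # row pairs: averages
    d = (cell[0::2] - cell[1::2]) / 2.0      # row pairs: details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0     # column pass on averages
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def run_length_encode(values):
    # Collapse runs of equal values into [value, count] pairs.
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return out

cell = np.ones((8, 8))                       # a flat cell: details all vanish
ll, lh, hl, hh = haar2d(cell)
coeffs = np.concatenate([c.ravel() for c in (ll, lh, hl, hh)])
encoded = run_length_encode(coeffs.tolist())
print(encoded)
```

Smooth cells produce long zero runs in the detail subbands, which is exactly what makes run-length coding after the wavelet transform effective.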

A license plate area segmentation algorithm using statistical processing on color and edge information (색상과 에지에 대한 통계 처리를 이용한 번호판 영역 분할 알고리즘)

  • Seok Jung-Chul;Kim Ku-Jin;Baek Nak-Hoon
    • The KIPS Transactions:PartB
    • /
    • v.13B no.4 s.107
    • /
    • pp.353-360
    • /
    • 2006
  • This paper presents a robust algorithm for segmenting a vehicle license plate area from a road image. We consider the features of license plates in three aspects: 1) edges due to the characters in the plate, 2) colors in the plate, and 3) geometric properties of the plate. In the preprocessing step, we compute thresholds based on each feature to decide whether a pixel is inside a plate or not; a statistical approach over sample images is used to compute the thresholds. For a given road image, our algorithm binarizes it using the thresholds. Then, we select three candidate plate regions by searching the binary image with a moving window, and the plate area is selected among the candidates with simple heuristics. The algorithm detects the plate robustly against geometric transformation or differences in the color intensity of the plate in the input image. Moreover, the preprocessing step requires only a small number of sample images for the statistical processing. The experimental results show that the algorithm successfully segments the plate in 97.8% of 228 input images. Our prototype implementation shows an average processing time of 0.676 seconds per image for a set of $1280{\times}960$ images, executed on a 3 GHz Pentium 4 PC with 512 MB of memory.
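The binarize-then-scan step described above can be sketched on synthetic data. A toy illustration: a single intensity band stands in for the paper's per-feature statistical thresholds, and only the best-scoring window is kept rather than three candidates.

```python
import numpy as np

def binarize(image, low, high):
    # Mark pixels whose intensity falls inside the plate-like threshold band.
    return ((image >= low) & (image <= high)).astype(np.uint8)

def best_window(binary, win_h, win_w, step=4):
    # Slide a fixed-size window over the binary image and keep the
    # position containing the most plate-candidate pixels.
    best, best_pos = -1, (0, 0)
    h, w = binary.shape
    for y in range(0, h - win_h + 1, step):
        for x in range(0, w - win_w + 1, step):
            score = int(binary[y:y + win_h, x:x + win_w].sum())
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best

img = np.zeros((96, 128))
img[40:56, 60:100] = 180          # synthetic bright "plate" region
mask = binarize(img, 150, 220)
pos, score = best_window(mask, 16, 40)
print(pos, score)
```

In the real algorithm the window's aspect ratio encodes the plate's geometric properties, and heuristics then rank the top candidates.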

Comparison of Seismic Data Interpolation Performance using U-Net and cWGAN (U-Net과 cWGAN을 이용한 탄성파 탐사 자료 보간 성능 평가)

  • Yu, Jiyun;Yoon, Daeung
    • Geophysics and Geophysical Exploration
    • /
    • v.25 no.3
    • /
    • pp.140-161
    • /
    • 2022
  • Seismic data with missing traces are often obtained, regularly or irregularly, due to environmental and economic constraints in acquisition. Accordingly, seismic data interpolation is an essential step in seismic data processing. Recently, research on machine learning-based seismic data interpolation has been flourishing. In particular, the convolutional neural network (CNN) and the generative adversarial network (GAN), widely used for super-resolution problems in image processing, are also used for seismic data interpolation. In this study, the CNN-based U-Net and the GAN-based conditional Wasserstein GAN (cWGAN) were used as seismic data interpolation methods, and their results and performance were evaluated thoroughly to find an optimal method that reconstructs missing seismic data with high accuracy. The work process for model training and performance evaluation was divided into two cases (Cases I and II). In Case I, we trained the model using only regularly sampled data with 50% missing traces and evaluated it on six different test datasets covering combinations of regular and irregular sampling at various sampling ratios. In Case II, six different models were generated using training datasets sampled in the same way as the six test datasets, and the models were applied to the same test datasets used in Case I to compare the results. We found that cWGAN showed better prediction performance than U-Net, with higher PSNR and SSIM. However, cWGAN added noise to the prediction results, so an ensemble technique was applied to remove the noise and improve accuracy. The cWGAN ensemble model successfully removed the noise and showed improved PSNR and SSIM compared with the individual models.
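The ensemble step mentioned above can be illustrated in its simplest form. A generic sketch, not the authors' exact ensemble: averaging predictions from several independently trained models cancels zero-mean noise while preserving the shared signal.

```python
import numpy as np

def ensemble_mean(predictions):
    # Average predictions from independently trained models; independent
    # zero-mean noise components cancel while the shared signal remains.
    return np.mean(np.stack(predictions), axis=0)

rng = np.random.default_rng(42)
signal = np.sin(np.linspace(0, 4 * np.pi, 500))    # stand-in for a clean trace
preds = [signal + rng.normal(0, 0.3, signal.shape) for _ in range(10)]

single_err = float(np.sqrt(np.mean((preds[0] - signal) ** 2)))
ens_err = float(np.sqrt(np.mean((ensemble_mean(preds) - signal) ** 2)))
print(round(single_err, 3), round(ens_err, 3))
```

With k models and independent noise, the ensemble's noise standard deviation drops by roughly 1/sqrt(k), which is the effect behind the improved PSNR and SSIM of the cWGAN ensemble.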

UHD Video Stitching Method for Enhanced User Experience (사용자 경험을 극대화한 UHD 영상 합성 기술)

  • Gankhuyag, Ganzorig;Hong, Eun Gi;Kim, Giyeol;Choe, Yoonsik
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.40 no.7
    • /
    • pp.1387-1394
    • /
    • 2015
  • Along with the development of network transmission technology, the IPTV market is growing at a fast pace. The UHD broadcasting system, together with user experience (UX) features that provide better service to users, has recently attracted attention, since little research has yet been done on differentiated features that can enhance the UX. We therefore propose a low-complexity, syntax-level image stitching technique for multi-view services, which makes it possible to view multiple channels or video contents on the screen at the same time. Simulation results demonstrate the reliability and effectiveness of the proposed algorithm, showing that it can generate more than 80 frames per second while stitching four Full HD videos into a UHD frame.
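The geometry of the stitch above is simple to sketch. Note the hedge: the paper's contribution is a syntax-level merge inside the compressed bitstream that avoids full decode/re-encode; the pixel-level 2x2 composition below only illustrates the resulting frame layout.

```python
import numpy as np

def stitch_2x2(tl, tr, bl, br):
    # Place four equally sized frames in a 2x2 grid. Four Full HD
    # (1080x1920) frames tile exactly into one UHD (2160x3840) frame.
    top = np.concatenate([tl, tr], axis=1)
    bottom = np.concatenate([bl, br], axis=1)
    return np.concatenate([top, bottom], axis=0)

# Four single-channel Full HD frames with distinct fill values as stand-ins.
frames = [np.full((1080, 1920), v, dtype=np.uint8) for v in (10, 20, 30, 40)]
uhd = stitch_2x2(*frames)
print(uhd.shape)
```

Operating at syntax level instead of on raw pixels is what lets the method sustain the reported 80+ stitched frames per second.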

A Proposed Algorithm and Sampling Conditions for Nonlinear Analysis of EEG (뇌파의 비선형 분석을 위한 신호추출조건 및 계산 알고리즘)

  • Shin, Chul-Jin;Lee, Kwang-Ho;Choi, Sung-Ku;Yoon, In-Young
    • Sleep Medicine and Psychophysiology
    • /
    • v.6 no.1
    • /
    • pp.52-60
    • /
    • 1999
  • Objectives: With the object of finding appropriate conditions and algorithms for dimensional analysis of human EEG, we calculated correlation dimensions under various conditions of sampling rate and data acquisition time, and improved the computation algorithm by using bit operations instead of log operations. Methods: EEG signals from 13 scalp leads of one subject were digitized with an A-D converter at 12-bit resolution and a sampling rate of 1000 Hz for 32 seconds. From the original data, we made 15 time-series datasets with sampling rates of 62.5, 125, 250, 500, and 1000 Hz and data acquisition times of 10, 20, and 30 seconds, respectively. A new algorithm that shortens calculation time using bit operations, together with the Least Trimmed Squares (LTS) estimator for the optimal slope, was applied to these data. Results: The correlation dimension increased as the data acquisition time became longer. The data with a sampling rate of 62.5 Hz showed the highest correlation dimension regardless of acquisition time, while the other sampling rates yielded similar values. Computing with bit operations instead of log operations shortened the calculation time by a statistically significant margin, and the LTS method estimated the slope of the correlation dimension more stably than the least-squares estimator. Conclusion: Bit operations and the LTS method were successfully utilized for time-saving and efficient calculation of the correlation dimension. In addition, a 20-second time series sampled at 125 Hz was adequate to estimate the dimensional complexity of human EEG.
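The quantity estimated above can be illustrated with a minimal Grassberger-Procaccia correlation-sum sketch. This uses plain floating-point arithmetic on a synthetic series; the paper's bit-operation speed-up and LTS slope fit are not reproduced here.

```python
import numpy as np

def embed(x, dim, delay):
    # Time-delay embedding of a 1-D series into dim-dimensional vectors.
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])

def correlation_sum(points, r):
    # Grassberger-Procaccia correlation sum: fraction of distinct point
    # pairs closer than radius r. The correlation dimension is the slope
    # of log C(r) versus log r over a suitable range of r.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    n = len(points)
    return (np.count_nonzero(d < r) - n) / (n * (n - 1))

t = np.arange(1000) * 0.1
x = np.sin(t)                          # smooth periodic test series
pts = embed(x, dim=2, delay=16)        # delay of roughly a quarter period
c1, c2 = correlation_sum(pts, 0.1), correlation_sum(pts, 0.2)
slope = (np.log(c2) - np.log(c1)) / np.log(2.0)
print(round(c1, 4), round(c2, 4), round(slope, 2))
```

For a smooth periodic signal the embedded attractor is a closed curve, so the two-radius slope estimate comes out near 1; chaotic or noisy EEG yields higher, often fractional, values.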


Development of deep learning network based low-quality image enhancement techniques for improving foreign object detection performance (이물 객체 탐지 성능 개선을 위한 딥러닝 네트워크 기반 저품질 영상 개선 기법 개발)

  • Ki-Yeol Eom;Byeong-Seok Min
    • Journal of Internet Computing and Services
    • /
    • v.25 no.1
    • /
    • pp.99-107
    • /
    • 2024
  • Along with economic growth and industrial development, demand is increasing for the production of various electronic components and devices such as semiconductors, SMT components, and electric battery products. However, these products may contain foreign substances introduced during manufacturing, such as iron, aluminum, or plastic, which can lead to serious problems or product malfunction, and even to fire in an electric vehicle. To solve these problems, it is necessary to determine whether foreign materials are present inside the product, and many tests have been performed with non-destructive methods such as ultrasound or X-ray. Nevertheless, there are technical challenges and limitations in acquiring X-ray images and determining the presence of foreign materials. In particular, small or low-density foreign materials may not be visible even with X-ray equipment, and noise can also make detection difficult. Moreover, to meet manufacturing speed requirements, the X-ray acquisition time must be reduced, which can result in a very low signal-to-noise ratio (SNR) and lower detection accuracy. Therefore, in this paper, we propose a five-step approach to overcome these limitations. First, the global contrast of the X-ray image is increased through histogram stretching. Second, a local contrast enhancement technique is applied to strengthen high-frequency signals and local contrast. Third, unsharp masking is applied to sharpen edges and make objects more visible. Fourth, the Residual Dense Block (RDB) super-resolution method is used for noise reduction and image enhancement. Last, the YOLOv5 algorithm is trained and used to detect foreign objects. Experimental results show that the proposed method improves performance metrics such as precision by more than 10% compared with low-quality images.
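The first and third preprocessing steps of a pipeline like the one above can be sketched directly. A rough illustration under assumed parameters: the percentile cut-offs and the 3x3 box blur are illustrative choices, not the paper's settings.

```python
import numpy as np

def histogram_stretch(img, low_pct=1, high_pct=99):
    # Global contrast: stretch intensities between the chosen
    # percentiles to the full [0, 255] range.
    lo, hi = np.percentile(img, [low_pct, high_pct])
    out = (img.astype(np.float64) - lo) / max(hi - lo, 1e-9) * 255.0
    return np.clip(out, 0, 255)

def unsharp_mask(img, amount=1.0):
    # Edge clearness: add back the difference between the image and a
    # 3x3 box-blurred copy, boosting high-frequency detail.
    pad = np.pad(img, 1, mode="edge")
    blur = sum(pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0
    return np.clip(img + amount * (img - blur), 0, 255)

rng = np.random.default_rng(1)
xray = rng.uniform(100, 140, (64, 64))        # low-contrast synthetic frame
stretched = histogram_stretch(xray)
sharpened = unsharp_mask(stretched)
print(round(float(stretched.max() - stretched.min()), 1),
      round(float(xray.max() - xray.min()), 1))
```

The RDB super-resolution and YOLOv5 detection stages require trained networks and are omitted from this sketch.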