• Title/Abstract/Keywords: Multi Resolution

Search results: 1,480

특징형상 변환을 이용한 B-rep모델의 다중해상도 구현 (Multi-resolutional Representation of B-rep Model Using Feature Conversion)

  • 최동혁;김태완;이건우
    • 한국CDE학회논문집 / Vol. 7, No. 2 / pp.121-130 / 2002
  • The concept of Level of Detail (LOD) was introduced and has been used to enhance display performance and to carry out certain engineering analyses effectively. We would like to use an adequate complexity level for each geometric model depending on specific engineering needs and purposes. Solid modeling systems are widely used in industry and are applied to advanced applications such as virtual assembly. In addition, as the demand to share these engineering tasks through networks is emerging, building a solid model at a resolution appropriate to a given application has become a pressing need. However, current research is mostly focused on triangular mesh models and on operators that reduce the number of triangles. We therefore address multi-resolution of the solid model itself, rather than of the triangular mesh model. In this paper, we propose a multi-resolution representation of B-rep models obtained by reordering design features and converting them into an enclosing volume and subtractive features.
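
The core idea in this abstract is that, once design features are converted into an enclosing volume plus subtractive features and reordered, a coarser model is obtained by applying only the most significant subtractions. A minimal, hypothetical sketch of that extraction step follows; the `Feature` record, the volume-based ordering criterion, and `extract_lod` are illustrative assumptions, not the authors' data structures.

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    volume: float           # volume removed from the enclosing volume
    kind: str = "subtract"  # after conversion, all detail features are subtractive

def extract_lod(enclosing_volume: float, features: list[Feature], lod: int) -> float:
    """Return the solid volume at a given LOD.

    Features are assumed to have been converted to subtractive form and are
    reordered by decreasing removed volume, so low LODs keep only the most
    significant subtractions and finer LODs add the smaller ones back in.
    """
    ordered = sorted(features, key=lambda f: f.volume, reverse=True)
    volume = enclosing_volume
    for f in ordered[:lod]:          # apply only the first `lod` subtractive features
        volume -= f.volume
    return volume

# Toy usage: a block with three holes of different sizes.
features = [Feature("small_hole", 0.5), Feature("pocket", 12.0), Feature("slot", 4.0)]
for lod in range(len(features) + 1):
    print(lod, extract_lod(100.0, features, lod))
```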

Low Resolution Rate Face Recognition Based on Multi-scale CNN

  • Wang, Ji-Yuan;Lee, Eung-Joo
    • 한국멀티미디어학회논문지 / Vol. 21, No. 12 / pp.1467-1472 / 2018
  • To address the problem that face images in surveillance video cannot be accurately identified because of their low resolution, this paper proposes a low-resolution face recognition solution based on a convolutional neural network (CNN) model with multi-scale input. The model improves on the existing "two-step method": low-resolution images are first up-sampled with simple bicubic interpolation, and the up-sampled images are then mixed with high-resolution images as training samples. The CNN learns a common feature space for the high- and low-resolution images and measures feature similarity with the cosine distance, from which the recognition result is obtained. Experiments on the CMU PIE and Extended Yale B datasets show that the accuracy of the model is better than that of the comparison methods; compared with CMDA_BGE, the comparison algorithm with the highest recognition rate, the accuracy is higher by 2.5% to 9.9%.
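
As a rough illustration of the pipeline sketched in this abstract (bicubic up-sampling of the low-resolution probe, a shared embedding for both resolutions, and cosine-distance matching), here is a hedged PyTorch sketch; the tiny `embed_net` is only a stand-in for the authors' multi-scale CNN, which is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in embedding network; the paper's actual multi-scale CNN is not reproduced here.
embed_net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(8), nn.Flatten(),
    nn.Linear(16 * 8 * 8, 128),
)

def embed(face: torch.Tensor) -> torch.Tensor:
    """Map a (1, H, W) face image to a 128-D feature in the shared space."""
    return embed_net(face.unsqueeze(0)).squeeze(0)

def match_low_res(probe_lr: torch.Tensor, gallery_hr: list[torch.Tensor]) -> int:
    """Up-sample the low-res probe with bicubic interpolation, then pick the
    gallery face with the highest cosine similarity in the learned feature space."""
    probe_up = F.interpolate(probe_lr.unsqueeze(0), size=gallery_hr[0].shape[-2:],
                             mode="bicubic", align_corners=False).squeeze(0)
    probe_feat = embed(probe_up)
    sims = [F.cosine_similarity(probe_feat, embed(g), dim=0) for g in gallery_hr]
    return int(torch.stack(sims).argmax())

# Toy usage with random tensors standing in for face crops.
gallery = [torch.rand(1, 64, 64) for _ in range(3)]
probe = torch.rand(1, 16, 16)      # low-resolution surveillance crop
print(match_low_res(probe, gallery))
```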

Real Scene Text Image Super-Resolution Based on Multi-Scale and Attention Fusion

  • Xinhua Lu;Haihai Wei;Li Ma;Qingji Xue;Yonghui Fu
    • Journal of Information Processing Systems / Vol. 19, No. 4 / pp.427-438 / 2023
  • Many works have indicated that single-image super-resolution (SISR) models relying on synthetic datasets are difficult to apply to real scene text image super-resolution (STISR) because of its more complex degradation. The most recent dataset for realistic STISR is TextZoom, but current methods trained on it have not considered the effect of the multi-scale features of text images. In this paper, a multi-scale and attention fusion model for realistic STISR is proposed. A multi-scale learning mechanism is introduced to acquire sophisticated feature representations of text images; spatial and channel attention are introduced to capture local information and inter-channel interactions; finally, a multi-scale residual attention module is designed by fusing the multi-scale learning and attention mechanisms. Experiments on TextZoom demonstrate that the proposed model increases the average recognition accuracy of the scene text recognizer ASTER by 1.2% compared with the text super-resolution network.
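
To make the "multi-scale residual attention" idea concrete, below is a hedged PyTorch sketch of one residual block combining multi-scale convolutions with channel and spatial attention; the layer sizes and the exact fusion order are assumptions for illustration, not the module from the paper.

```python
import torch
import torch.nn as nn

class MultiScaleResidualAttention(nn.Module):
    """Toy residual block: parallel 3x3/5x5 branches + channel and spatial attention."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 5, padding=2)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)
        # Channel attention: squeeze spatially, excite per channel.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(),
            nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid(),
        )
        # Spatial attention: one map over H x W from channel-pooled features.
        self.spatial_att = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.fuse(torch.cat([self.branch3(x), self.branch5(x)], dim=1))
        feat = feat * self.channel_att(feat)
        pooled = torch.cat([feat.mean(dim=1, keepdim=True),
                            feat.amax(dim=1, keepdim=True)], dim=1)
        feat = feat * self.spatial_att(pooled)
        return x + feat           # residual connection

block = MultiScaleResidualAttention(32)
print(block(torch.rand(1, 32, 16, 64)).shape)   # torch.Size([1, 32, 16, 64])
```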

MODIS영상의 고해상도화 수법을 이용한 오창평야 NDVI의 평가 (Assessment of the Ochang Plain NDVI using Improved Resolution Method from MODIS Images)

  • 박종화;나상일
    • 한국환경복원기술학회지 / Vol. 9, No. 6 / pp.1-12 / 2006
  • Remote sensing cannot provide a direct measurement of a vegetation index (VI), but it can provide a reasonably good estimate, defined as a ratio of satellite bands. Monitoring vegetation near urban regions is made difficult by the low spatial and temporal resolution of the available imagery. In this study, a spatial-resolution enhancement method is adopted to improve the low spatial resolution. Recent studies have successfully estimated the normalized difference vegetation index (NDVI) using such resolution-enhancement methods with data from the Moderate Resolution Imaging Spectroradiometer (MODIS) on board the EOS Terra satellite. Spatial-resolution enhancement is an important tool in remote sensing, as many Earth observation satellites provide both high-resolution and low-resolution multi-spectral images. Examples of enhancing a MODIS multi-spectral image and a MODIS NDVI image of Cheongju using a Landsat TM high-resolution multi-spectral image are presented, and the results are compared with those of the IHS technique for enhancing the spatial resolution of multi-spectral bands using a higher-resolution data set. To provide a continuous monitoring capability for NDVI, in situ measurements of NDVI in a paddy field were carried out in 2004 for comparison with the remotely sensed MODIS data. NDVI estimates from the MODIS sensor and from in situ spectroradiometer data over the Ochang plain region are compared and discussed. The results indicate that the MODIS NDVI is underestimated by approximately 50%.
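
For reference, the NDVI mentioned above is the standard band ratio NDVI = (NIR − Red) / (NIR + Red). A minimal NumPy sketch is given below; the array names and the small epsilon guard are illustrative and not tied to any particular MODIS or Landsat product.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red).

    `eps` avoids division by zero over water or no-data pixels.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Toy reflectance values: vegetation reflects strongly in NIR, weakly in red.
nir_band = np.array([[0.45, 0.50], [0.30, 0.05]])
red_band = np.array([[0.08, 0.10], [0.12, 0.04]])
print(ndvi(nir_band, red_band))   # values near +0.7 indicate dense vegetation
```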

CUDA를 이용한 초해상도 기법의 영상처리 속도개선 방법 (An Image Processing Speed Enhancement in a Multi-Frame Super Resolution Algorithm by a CUDA Method)

  • 김미정
    • 한국군사과학기술학회지 / Vol. 14, No. 4 / pp.663-668 / 2011
  • Although the multi-frame super-resolution algorithm has many merits, it demands a large amount of calculation time. Previous research has shown that image processing time can be reduced using CUDA (Compute Unified Device Architecture), one of the GPGPU (general-purpose computing on graphics processing units) models. In this paper, we show that the processing time of a multi-frame super-resolution algorithm can be reduced by employing CUDA. CUDA was applied not to the whole program but to its most time-consuming parts. The simulation results show that using CUDA reduces the operation time dramatically. Therefore, the multi-frame super-resolution algorithm can be implemented in real time by using image processing libraries built with CUDA.
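
As an illustration of the "accelerate only the hotspot" strategy described in this abstract, here is a hedged sketch using CuPy as a drop-in GPU array library. The shift-and-add fusion step is a simplified stand-in for a multi-frame super-resolution hotspot, not the paper's implementation, and the code assumes a CUDA-capable GPU with CuPy installed.

```python
import numpy as np
import cupy as cp   # assumes a CUDA GPU and CuPy are available

def fuse_frames_gpu(frames: np.ndarray, shifts: list[tuple[int, int]]) -> np.ndarray:
    """Shift-and-add fusion of registered low-resolution frames on the GPU.

    Only this time-consuming accumulation loop is moved to the GPU; the rest of
    the (hypothetical) super-resolution pipeline stays on the CPU.
    """
    gpu_frames = cp.asarray(frames)                 # host -> device copy
    acc = cp.zeros_like(gpu_frames[0])
    for frame, (dy, dx) in zip(gpu_frames, shifts):
        acc += cp.roll(frame, shift=(dy, dx), axis=(0, 1))
    acc /= len(shifts)
    return cp.asnumpy(acc)                          # device -> host copy

# Toy usage: 8 noisy-free frames of the same scene with known integer shifts.
rng = np.random.default_rng(0)
scene = rng.random((256, 256), dtype=np.float64)
shifts = [(i % 3 - 1, i % 2) for i in range(8)]
frames = np.stack([np.roll(scene, (-dy, -dx), axis=(0, 1)) for dy, dx in shifts])
print(fuse_frames_gpu(frames, shifts).shape)
```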

선택적 불리안 연산자를 이용한 솔리드 모델의 다중해상도 구현 (Multi-Resolution Representation of Solid Models using the Selective Boolean Operations)

  • 이상헌;이강수;박상근
    • 한국정밀공학회 2002년도 춘계학술대회 논문집 / pp.833-835 / 2002
  • In this paper, we propose a multi-resolution representation of B-rep solid models using selective Boolean operations on non-manifold geometric models. Since the union and subtraction operations of the selective Boolean operations are commutative, the integrity of the model is guaranteed when design features are reordered. A multi-resolution representation is established using a non-manifold merged-set model and a feature modeling tree reordered according to a criterion of level of detail (LOD). A solid model for a specified LOD can then be extracted from this multi-resolution model using the selective Boolean operations.

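A hedged, set-based sketch of the reordering idea in the abstract above: because membership of each cell in the merged set can be decided from the selected features as a whole, union and subtraction commute and the reordered feature tree can simply be truncated at the desired LOD. The cell-set representation below is a toy stand-in, not the authors' non-manifold data structure.

```python
# Toy "merged set": the model is a set of cell ids; each design feature either
# unions cells into the result or subtracts cells from it.
FEATURES = [
    # (name, operation, cells, significance score used as the LOD criterion)
    ("base_block", "union",    {0, 1, 2, 3, 4, 5}, 100.0),
    ("big_pocket", "subtract", {2, 3},              25.0),
    ("rib",        "union",    {6, 7},              10.0),
    ("tiny_hole",  "subtract", {5},                  1.0),
]

def extract_lod(features, lod: int) -> set[int]:
    """Evaluate the `lod` most significant features of the reordered tree.

    A cell is in the result if some selected union feature adds it and no
    selected subtract feature removes it, so the evaluation is order-independent.
    """
    selected = sorted(features, key=lambda f: f[3], reverse=True)[:lod]
    added, removed = set(), set()
    for _, op, cells, _ in selected:
        (added if op == "union" else removed).update(cells)
    return added - removed

for lod in range(1, len(FEATURES) + 1):
    print(lod, sorted(extract_lod(FEATURES, lod)))
```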

Interpolation based Single-path Sub-pixel Convolution for Super-Resolution Multi-Scale Networks

  • Alao, Honnang;Kim, Jin-Sung;Kim, Tae Sung;Oh, Juhyen;Lee, Kyujoong
    • Journal of Multimedia Information System / Vol. 8, No. 4 / pp.203-210 / 2021
  • Deep learning convolutional neural networks (CNN) have successfully been applied to image super-resolution (SR). Despite their strong performance, SR techniques tend to focus on a single upscale factor when training a particular model. Algorithms for single-model multi-scale networks can easily be constructed if images are upscaled prior to input, but sub-pixel convolution upsampling works differently for each scale factor. Recent SR methods employ multi-scale and multi-path learning as a solution; however, this causes unshared parameters and an unbalanced parameter distribution across scale factors. We present a multi-scale single-path upsample module as a solution, exploiting the advantages of sub-pixel convolution and interpolation algorithms. The proposed model employs sub-pixel convolution for the highest of the learned upscale factors and then uses one-dimensional interpolation to compress the learned features along the channel axis so that they match the desired output image size. Experiments are performed on the single-path upsample module and compared with the multi-path upsample module. Based on the experimental results, the proposed algorithm reduces the upsample module's parameters by 24% while delivering comparable or slightly better performance than the previous algorithm.
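
A hedged PyTorch sketch of the single-path idea as described: one convolution sized for the largest scale factor, 1-D linear interpolation along the channel axis to shrink the feature count for smaller scales, then a pixel shuffle. Layer widths and interpolation settings are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SinglePathUpsample(nn.Module):
    """One upsample path shared by all scale factors (toy version).

    The convolution produces enough channels for the largest scale factor; for a
    smaller scale s, features are compressed along the channel axis with 1-D
    linear interpolation to s*s*out_channels before sub-pixel shuffling.
    """
    def __init__(self, in_channels: int = 64, out_channels: int = 3, max_scale: int = 4):
        super().__init__()
        self.out_channels = out_channels
        self.conv = nn.Conv2d(in_channels, out_channels * max_scale ** 2, 3, padding=1)

    def forward(self, x: torch.Tensor, scale: int) -> torch.Tensor:
        feat = self.conv(x)                                  # (N, out*max_scale^2, H, W)
        target_c = self.out_channels * scale ** 2
        if target_c != feat.shape[1]:
            n, c, h, w = feat.shape
            flat = feat.permute(0, 2, 3, 1).reshape(n * h * w, 1, c)
            flat = F.interpolate(flat, size=target_c, mode="linear", align_corners=False)
            feat = flat.reshape(n, h, w, target_c).permute(0, 3, 1, 2)
        return F.pixel_shuffle(feat, scale)                  # (N, out, H*scale, W*scale)

module = SinglePathUpsample()
lr_features = torch.rand(1, 64, 24, 24)
for s in (2, 3, 4):
    print(s, module(lr_features, s).shape)
```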

Mobile Robot Navigation Using a Multi-resolution Electrostatic Potential Field

  • Kim, Cheol-Taek;Lee, Ju-Jang
    • 제어로봇시스템학회 2004년도 ICCAS / pp.690-693 / 2004
  • This paper proposes a multi-resolution electrostatic potential field (MREPF) based solution to the mobile robot path planning and collision avoidance problem in a 2D dynamic environment. The MREPF is efficient in terms of calculation time and field-map updating. A large-scale (coarse) resolution map is added to the electrostatic potential field (EPF), and this map interacts with the small-scale (fine) resolution map to find an optimal solution in real time. The approach can be interpreted in terms of the Atlantis model. Simulation studies show the efficiency of the proposed algorithm.

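A hedged NumPy sketch of the coarse-to-fine potential-field idea from the abstract above: the potential is relaxed on a coarse grid first, the result seeds the fine grid, and only a few extra relaxation sweeps are needed at full resolution. The harmonic-potential relaxation used here is a generic stand-in; the paper's specific electrostatic formulation and the Atlantis-style integration are not reproduced.

```python
import numpy as np

def relax(potential, obstacles, goal, sweeps):
    """Jacobi relaxation of a harmonic potential: goal fixed at 0, obstacles/walls at 1."""
    for _ in range(sweeps):
        potential[obstacles] = 1.0
        potential[goal] = 0.0
        inner = 0.25 * (potential[:-2, 1:-1] + potential[2:, 1:-1] +
                        potential[1:-1, :-2] + potential[1:-1, 2:])
        potential[1:-1, 1:-1] = inner
    potential[obstacles] = 1.0
    potential[goal] = 0.0
    return potential

def multi_resolution_field(obstacles, goal, coarse_sweeps=200, fine_sweeps=30):
    """Relax on a 2x-downsampled grid, upsample, then refine on the full grid."""
    coarse_obs = obstacles[::2, ::2]
    coarse_goal = (goal[0] // 2, goal[1] // 2)
    coarse = relax(np.ones(coarse_obs.shape), coarse_obs, coarse_goal, coarse_sweeps)
    fine = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)[:obstacles.shape[0],
                                                              :obstacles.shape[1]]
    return relax(fine, obstacles, goal, fine_sweeps)

# Toy 64x64 world: border walls, one rectangular obstacle, goal near a corner.
obs = np.zeros((64, 64), dtype=bool)
obs[0, :] = obs[-1, :] = obs[:, 0] = obs[:, -1] = True
obs[20:40, 30:34] = True
field = multi_resolution_field(obs, goal=(60, 60))
# A robot can descend the gradient of `field` toward the goal.
print(field.shape, field[60, 60], field[32, 32])
```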

다중 해상도 영상에서 페이싯 모델을 이용한 초소형 표적 검출 (Small Target Detection in Multi-Resolution Image Using Facet Model)

  • 박지환;이민우;이철원;주재흠;남기곤
    • 융합신호처리학회논문지 / Vol. 12, No. 2 / pp.76-82 / 2011
  • This paper proposes a method for detecting the position and size of very small targets located at long range in infrared images, using a cubic facet model on multi-resolution images. First, the original image is progressively reduced to construct multi-resolution images at several levels. At each level, the facet model and a local-maximum condition are applied to the multi-resolution images to detect the position of the small target. Among the $D_2$ values, which represent the local maxima of the facet model in the multi-resolution images, the position with the largest value is taken as the target position. In this way, targets of different sizes can be detected across the multi-resolution levels. The proposed small-target detection method was tested on various infrared images containing very small targets. Whereas the conventional facet-model-based method applies only a single mask, the proposed method applies a single mask to the multi-resolution images. Applying a fixed mask to the multi-resolution images produces the effect of varying the mask size, and accordingly targets of different sizes are detected. This confirms that not only the position but also the size of the target can be detected.
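
A hedged sketch of the scale-search loop described above: build a pyramid by repeated 2x2 averaging, compute a local second-derivative response at each level (a simple Laplacian operator stands in for the cubic facet model's $D_2$ term, which is not reproduced here), and take the strongest local maximum across levels as the target position and approximate size.

```python
import numpy as np
from scipy.ndimage import convolve, maximum_filter

def downsample(img: np.ndarray) -> np.ndarray:
    """Halve resolution by averaging 2x2 blocks."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def facet_like_response(img: np.ndarray) -> np.ndarray:
    """Stand-in for the facet-model D_2 response: negative Laplacian of the image,
    which is large at small bright blobs."""
    kernel = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
    return convolve(img, kernel, mode="nearest")

def detect_small_target(image: np.ndarray, levels: int = 3):
    """Return (row, col, approx_size) of the strongest multi-resolution response."""
    best = (-np.inf, None)
    img, scale = image.astype(float), 1
    for _ in range(levels):
        resp = facet_like_response(img)
        local_max = (resp == maximum_filter(resp, size=3))   # local-maximum condition
        r, c = np.unravel_index(np.argmax(np.where(local_max, resp, -np.inf)), resp.shape)
        if resp[r, c] > best[0]:
            best = (resp[r, c], (r * scale, c * scale, scale))
        img, scale = downsample(img), scale * 2
    return best[1]

# Toy infrared frame: flat background plus one 4x4 "target" blob.
frame = np.full((128, 128), 0.2)
frame[60:64, 90:94] += 1.0
print(detect_small_target(frame))   # position near (60, 90), size ~ a few pixels
```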

An Improved Multi-resolution image fusion framework using image enhancement technique

  • Jhee, Hojin;Jang, Chulhee;Jin, Sanghun;Hong, Yonghee
    • 한국컴퓨터정보학회논문지 / Vol. 22, No. 12 / pp.69-77 / 2017
  • This paper presents a novel framework for multi-scale image fusion. The multi-scale Kalman smoothing (MKS) algorithm with a quad-tree structure provides a powerful multi-resolution image fusion scheme by exploiting the Markov property. In general, this approach delivers outstanding fusion performance in terms of accuracy and efficiency; however, the quad-tree based method is often limited in practice by its stair-like covariance structure, which results in unrealistic blocky artifacts in the fusion result wherever finest-scale data are void or missing. To mitigate this structural artifact, a new multi-scale fusion framework is proposed. By employing a super-resolution (SR) technique within the MKS algorithm, a finely resolved measurement is generated and blended through the tree structure, so that the missing detail in data-void regions of the fine-scale image is properly inferred and the blocky artifacts are successfully suppressed in the fusion result. Simulation results show that the proposed method provides significantly improved fusion results, in terms of both root mean square error (RMSE) and visual quality, over the conventional MKS algorithm.
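
A hedged sketch of the fusion step described above: a coarse estimate is propagated to the fine scale, and where the fine-scale measurement is missing, an interpolation-based "super-resolved" pseudo-measurement fills the gap before a variance-weighted (Kalman-style) update. The two-scale structure and the cubic-interpolation stand-in for the SR module are simplifications of the quad-tree MKS framework, not the paper's algorithm.

```python
import numpy as np
from scipy.ndimage import zoom

def fuse_two_scales(coarse, fine, fine_valid, coarse_var=0.20, fine_var=0.05):
    """Variance-weighted fusion of a coarse image and a partially observed fine image.

    Missing fine-scale pixels are filled with a cubic-interpolated ("super-resolved")
    version of the coarse image, but are treated as less reliable than real data.
    """
    predicted = zoom(coarse, 2, order=1)              # coarse-to-fine prediction
    sr_fill = zoom(coarse, 2, order=3)                # stand-in for the SR module
    measurement = np.where(fine_valid, fine, sr_fill)
    meas_var = np.where(fine_valid, fine_var, 4 * fine_var)   # filled pixels trusted less
    gain = coarse_var / (coarse_var + meas_var)       # Kalman-style gain per pixel
    return predicted + gain * (measurement - predicted)

# Toy data: a smooth scene observed coarsely everywhere and finely on the left half.
rng = np.random.default_rng(1)
truth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
coarse_obs = truth[::2, ::2] + 0.05 * rng.standard_normal((32, 32))
fine_obs = truth + 0.02 * rng.standard_normal((64, 64))
valid = np.zeros((64, 64), dtype=bool)
valid[:, :32] = True                                   # right half has no fine data
fused = fuse_two_scales(coarse_obs, fine_obs, valid)
print(float(np.sqrt(np.mean((fused - truth) ** 2))))   # RMSE of the fused estimate
```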