• Title/Summary/Keyword: 공간 분할 기법 (spatial partitioning techniques)

Search Result 654

A Study of BWE-Prediction-Based Split-Band Coding Scheme (BWE 예측기반 대역분할 부호화기에 대한 연구)

  • Song, Geun-Bae;Kim, Austin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.27 no.6
    • /
    • pp.309-318
    • /
    • 2008
  • In this paper, we discuss a method for efficiently coding the high-band signal in the split-band coding approach, where an input signal is divided into two bands and each band may then be encoded separately. Generally, and especially through research on artificial bandwidth extension (BWE), it is well known that the two bands are correlated to some degree, so some coding gain can be achieved by exploiting that correlation. In the BWE-prediction-based coding approach, a simple linear BWE function may not yield optimal results because the correlation is non-linear in character. In this paper, we investigate this new coding scheme in more detail. A few representative BWE functions, including linear and non-linear ones, are investigated and compared to find one suitable for coding. In addition, we also discuss whether additional gains can be obtained by combining the BWE coder with a predictive vector quantizer that exploits temporal correlation.
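
A minimal sketch of the BWE-prediction idea described in this abstract: a high-band representation is predicted from low-band features through a learned mapping, and only the prediction residual would then need to be coded. The feature choice, dimensions, and toy data below are illustrative assumptions, and only the simple linear BWE function is shown, not the paper's non-linear variants.

```python
# Sketch: predict a high-band spectral envelope from low-band features with a
# linear mapping learned by least squares (toy data, illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: low-band feature vectors X and high-band envelope targets Y.
X = rng.standard_normal((500, 10))                      # 500 frames, 10 low-band features
Y = X @ rng.standard_normal((10, 4)) + 0.1 * rng.standard_normal((500, 4))

# Linear BWE function: Y ~ X W, solved by least squares.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Predict the high band for a new frame; in a split-band coder the residual
# Y - X W is what would be quantized, which is where the coding gain comes from.
x_new = rng.standard_normal((1, 10))
y_hat = x_new @ W
print(y_hat.shape)
```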

Automated Analyses of Ground-Penetrating Radar Images to Determine Spatial Distribution of Buried Cultural Heritage (매장 문화재 공간 분포 결정을 위한 지하투과레이더 영상 분석 자동화 기법 탐색)

  • Kwon, Moonhee;Kim, Seung-Sep
    • Economic and Environmental Geology
    • /
    • v.55 no.5
    • /
    • pp.551-561
    • /
    • 2022
  • Geophysical exploration methods are very useful for generating high-resolution images of underground structures, and such methods can be applied to the investigation of buried cultural properties and to determining their exact locations. In this study, image feature extraction and image segmentation methods were applied to automatically distinguish the structures of buried relics from high-resolution ground-penetrating radar (GPR) images obtained at the center of the Silla Kingdom, Gyeongju, South Korea. The major purpose of the image feature extraction analyses is to identify circular features from building remains and linear features from ancient roads and fences. Feature extraction is implemented by applying the Canny edge detection and Hough transform algorithms. We applied the Hough transform to the edge image resulting from the Canny algorithm in order to determine the locations of the target features. However, the Hough transform requires different parameter settings for each survey sector. As for image segmentation, we applied the connected-component labeling algorithm and object-based image analysis using Orfeo Toolbox (OTB) in QGIS. The connected-component labeled image shows that the signals associated with the target buried relics are effectively connected and labeled. However, we often find that multiple labels are assigned to a single structure in the given GPR data. Object-based image analysis was conducted using Large-Scale Mean-Shift (LSMS) image segmentation. In this analysis, a vector layer containing pixel values for each segmented polygon was estimated first and then used to build a train-validation dataset by assigning the polygons to one class associated with the buried relics and another class for the background field. With the Random Forest classifier, we find that the polygons on the LSMS image segmentation layer can be successfully classified into those of the buried relics and those of the background. Thus, we propose that the automatic classification methods applied to the GPR images of buried cultural heritage in this study can be useful for obtaining consistent analysis results for planning excavation processes.
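
A minimal sketch of the Canny + Hough feature-extraction step described above, using OpenCV. The input file name and every threshold value are illustrative assumptions and, as the abstract notes, would need retuning for each survey sector.

```python
# Sketch: edge detection plus line/circle Hough transforms on a GPR depth slice.
import cv2
import numpy as np

img = cv2.imread("gpr_slice.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image

# 1) Edge image from the Canny detector.
edges = cv2.Canny(img, threshold1=50, threshold2=150)

# 2) Linear features (roads, fences) via the probabilistic Hough transform.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=40, maxLineGap=10)

# 3) Circular features (building remains) via the Hough circle transform.
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1.2, minDist=30,
                           param1=150, param2=40, minRadius=10, maxRadius=60)

print(0 if lines is None else len(lines), "line segments detected")
print(0 if circles is None else circles.shape[1], "circles detected")
```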

Motion-Compensated Layered Video Coding for Dynamic Adaptation (동적 적응을 위한 움직임 보상 계층형 동영상 부호화)

  • 이재용;박희라;고성제
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.24 no.10B
    • /
    • pp.1912-1920
    • /
    • 1999
  • In this paper, we propose a layered video coding scheme which can generate a multi-layered bitstream for heterogeneous environments. A new motion prediction structure with a temporal hierarchy of frames is developed to afford temporal resolution scalability, and wavelet decomposition is adopted to offer spatial scalability. The proposed scheme can achieve a higher compression ratio than replenishment schemes by using motion estimation and compensation, which further reduce temporal redundancy, and it works effectively under dynamic adaptation or errors using dispersive intra-subband update (DISU). Moreover, data rate scalability can be attained by employing the embedded zerotree wavelet (EZW) technique, which produces an embedded bitstream. Therefore, the proposed scheme is expected to be used effectively in heterogeneous environments such as the Internet, ATM, and mobile networks where interoperability is required.
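
A minimal sketch of the spatial-scalability idea behind the wavelet decomposition mentioned above: the coarse approximation subband serves a low-resolution layer, and the detail subbands refine it. The frame data, wavelet choice, and use of PyWavelets are illustrative assumptions, not the paper's actual codec.

```python
# Sketch: 2-level 2D wavelet decomposition of a frame for layered coding.
import numpy as np
import pywt

frame = np.random.rand(288, 352)  # toy luminance frame (CIF-sized)

# Multi-level decomposition: coeffs = [cA2, (cH2, cV2, cD2), (cH1, cV1, cD1)]
coeffs = pywt.wavedec2(frame, wavelet="haar", level=2)
base_layer = coeffs[0]            # low-resolution approximation subband
print("base layer:", base_layer.shape, "full frame:", frame.shape)

# A decoder with enough bandwidth adds the detail subbands back to recover
# full resolution; otherwise it keeps only the base layer.
full = pywt.waverec2(coeffs, wavelet="haar")
print("reconstruction error:", float(np.max(np.abs(full - frame))))
```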

SVDD based Scene Understanding using Color Space Information (색 공간 정보를 이용한 지지벡터 영역 묘사 기반의 장면 이해)

  • Kim, Soo-Wan;Chang, Hyung-Jin;Kang, Woo-Sung;Choi, Jin-Young
    • Proceedings of the KIEE Conference
    • /
    • 2008.10b
    • /
    • pp.264-265
    • /
    • 2008
  • Object detection algorithms in conventional video surveillance systems are mostly based on background modeling techniques. Although these techniques perform better than simple frame differencing, they can still only be used with stationary cameras and have the limitation that many algorithm thresholds must be tuned individually to the current surroundings. This paper therefore proposes a method that detects objects of interest by directly classifying the various targets in an image using the color information of the input frames, without background modeling. The proposed algorithm first performs a simple segmentation of the current image so that each region presumed to correspond to a single object forms one segment. A representative color value is then computed for each segment and classified with the Support Vector Domain Description (SVDD) algorithm against pre-trained data, and the identity of each region is determined from the result. The method can be used with moving as well as stationary cameras, and because it relies on only a few kinds of thresholds, it can be applied generally in many situations.
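
A minimal sketch of the per-region color classification step: each segmented region is reduced to a representative color and tested against a learned domain description. scikit-learn's OneClassSVM with an RBF kernel is used here as an SVDD-style stand-in, and the training colors and nu/gamma values are illustrative assumptions.

```python
# Sketch: describe the color domain of a class of interest, then test new segments.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)

# Toy training data: representative (R, G, B) colors of the class of interest.
train_colors = rng.normal(loc=[0.2, 0.5, 0.3], scale=0.05, size=(200, 3))

svdd_like = OneClassSVM(kernel="rbf", nu=0.05, gamma=20.0).fit(train_colors)

# Representative colors of two new segments: one similar, one very different.
segments = np.array([[0.22, 0.48, 0.31],
                     [0.90, 0.10, 0.05]])
print(svdd_like.predict(segments))   # +1: inside the described domain, -1: outside
```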

Process Annotation for Recording the Manipulation of 3D Structured Models (3D 구조물의 조작과정 기록을 위한 어노테이션 기법)

  • Lee, Gui-Hyun;Lim, Soon-Bum
    • Journal of Korea Multimedia Society
    • /
    • v.10 no.3
    • /
    • pp.381-390
    • /
    • 2007
  • 3D object contents are used for various applications in the Web virtual space, where the main concerns are navigating the 3D virtual space and visualizing 3D objects. Techniques for manipulating 3D objects, such as disassembling and assembling them, and for recording the manipulation process are the essential first step. Until now, only the result of a 3D object manipulation could be recorded. We have therefore studied a representation technique to meaningfully record and replay the manipulation process of 3D structured objects. We analyzed the structures of 3D objects described in XML or VRML and the relations between their components. Compared to the previous method, we studied an XML-based annotation technique that lets the user record and store the process selectively. This selective recording and replaying allows 3D structured objects to be used in various applications.
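
A minimal sketch of recording one manipulation step as an XML process annotation. The element and attribute names (ProcessAnnotation, Step, Target, Transform) are hypothetical illustrations, not the paper's actual schema.

```python
# Sketch: build and serialize a small process-annotation document.
import xml.etree.ElementTree as ET

root = ET.Element("ProcessAnnotation", {"model": "engine.wrl"})

step = ET.SubElement(root, "Step", {"index": "1", "action": "disassemble"})
ET.SubElement(step, "Target", {"component": "bolt_01"})
ET.SubElement(step, "Transform",
              {"translation": "0 0.05 0", "rotation": "0 0 1 0"})

# Serializing the annotation lets the manipulation process be stored,
# filtered, and replayed selectively later.
print(ET.tostring(root, encoding="unicode"))
```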

A Study on Multi-Object Data Split Technique for Deep Learning Model Efficiency (딥러닝 효율화를 위한 다중 객체 데이터 분할 학습 기법)

  • Jong-Ho Na;Jun-Ho Gong;Hyu-Soung Shin;Il-Dong Yun
    • Tunnel and Underground Space
    • /
    • v.34 no.3
    • /
    • pp.218-230
    • /
    • 2024
  • Recently, many studies have been conducted on safety management at construction sites by incorporating computer vision. Anchor box parameters are used in state-of-the-art deep-learning-based object detection and segmentation, and optimized parameters are critical in the training process to ensure consistent accuracy. Those parameters are generally tuned heuristically by the user, who fixes the anchor shape and size, and a single parameter set governs training of the model. However, anchor box parameters are sensitive to the type and size of the objects, and as the amount of training data increases, a single parameter set cannot reflect all the characteristics of the training data. Therefore, this paper suggests a method of applying multiple parameters optimized through data splitting to solve the above-mentioned problem. Criteria for efficiently splitting the integrated training data according to object size, number of objects, and object shape were established, and the effectiveness of the proposed data split method was verified through a comparative study of the conventional scheme and the proposed method.
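
A minimal sketch of the data-split idea: annotated boxes are divided by a size criterion and anchor (width, height) prototypes are estimated per subset by k-means, in the spirit of YOLO-style anchor clustering. The median-area split and cluster count are illustrative assumptions, not the criteria established in the paper.

```python
# Sketch: split boxes by size, then derive per-subset anchor dimensions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
boxes = rng.uniform(8, 256, size=(1000, 2))      # toy (width, height) pairs in pixels

areas = boxes[:, 0] * boxes[:, 1]
small = boxes[areas <= np.median(areas)]
large = boxes[areas > np.median(areas)]

def anchors(wh, k=3):
    """Cluster box dimensions into k anchor (w, h) prototypes."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(wh).cluster_centers_

print("anchors for small-object subset:\n", anchors(small))
print("anchors for large-object subset:\n", anchors(large))
```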

Elasto-plastic Post-buckling Analysis of Spatial Framed Structures using Improved Plastic Hinge Theory (개선된 소성힌지이론을 이용한 공간 뼈대구조물의 탄-소성 후좌굴 해석)

  • Kim, Sung Bo;Ji, Tae Sug;Jung, Kyoung Hwan
    • Journal of Korean Society of Steel Construction
    • /
    • v.18 no.6
    • /
    • pp.687-696
    • /
    • 2006
  • An efficient numerical method is developed to estimate the elasto-plastic post-buckling strength of space-framed structures. The inelastic ultimate strength of beam-columns and frames is evaluated by a parametric study. Applying the improved plastic hinge analysis, which evaluates the gradual stiffness-decrease effects due to the spread of plasticity, the elasto-plastic post-buckling behavior of steel frames is investigated considering various residual stress distributions. Introducing the plastification parameter that represents the spread of plasticity in the element and performing a parametric study of the equivalent element forces and member idealization, finite-element solutions for the elasto-plastic analysis of space frames are compared with the results of plastic region analysis, shell elements, and experiments.
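
A minimal sketch of the gradual stiffness-reduction idea behind improved plastic-hinge analysis: a hinge-stiffness factor falls from 1 toward 0 as the member force state approaches the full plastic strength. The parabolic form used here is one commonly quoted choice in refined plastic-hinge analysis and is an illustrative assumption, not necessarily the function adopted in this paper.

```python
# Sketch: stiffness degradation as a function of a plastification/force-state parameter.
def hinge_stiffness_factor(alpha: float) -> float:
    """alpha: force-state parameter (0 = elastic, 1 = fully plastic)."""
    if alpha <= 0.5:          # essentially elastic: no reduction
        return 1.0
    if alpha >= 1.0:          # fully plastic hinge: no residual stiffness
        return 0.0
    return 4.0 * alpha * (1.0 - alpha)   # gradual degradation in between

for a in (0.3, 0.6, 0.8, 0.95):
    print(a, hinge_stiffness_factor(a))
```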

Development of Automatized Quantitative Analysis Method in CT Images Evaluation using AAPM Phantom (AAPM Phantom을 이용한 CT 영상 평가 시 자동화된 정량적 분석 방법 개발)

  • Noh, Sung Sun;Um, Hyo Sik;Kim, Ho Chul
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.12
    • /
    • pp.163-173
    • /
    • 2014
  • This study presents an automated quantitative evaluation method that minimizes errors caused by the subjective judgment of the evaluator when spatial resolution and low-contrast resolution are assessed with a standard CT phantom, and evaluates its usefulness. Using the Nuclear Associates AAPM CT Performance Phantom (Model 76-410), 24 passing images and 20 failing images acquired with a standard reconstruction algorithm were evaluated under exposure conditions of 120 kVp, 250 mAs, 10 mm collimation, an SFOV (scan field of view) of 25 cm or more, and a DFOV (display field of view) of 25 cm. The quantitative evaluation of low-contrast resolution and spatial resolution used a self-developed evaluation program written in MathWorks MATLAB (Ver. 7.6, R2008a). With this program, evaluation items that had previously been judged qualitatively could be scored as objective numerical values. First, for contrast resolution, quantitative evaluation with EI values of 0.50, 0.51, 0.52, and 0.53 agreed with the qualitative results. Second, quantitative evaluation with CNR values of -0.0018 to -0.0010 agreed with the qualitative results. Third, for spatial resolution, automatically extracting the contour boundaries of the holes with an image segmentation technique produced quantitative results that agreed with the qualitative evaluation.
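
A minimal sketch of one such quantitative measurement: a contrast-to-noise ratio (CNR) computed between an object ROI and a background ROI in a phantom slice. The ROI placement and the exact CNR definition are illustrative assumptions; the paper's program may define its metrics differently.

```python
# Sketch: CNR between a low-contrast insert ROI and a background ROI (toy data).
import numpy as np

rng = np.random.default_rng(3)
slice_img = rng.normal(100.0, 5.0, size=(512, 512))       # toy CT slice values
slice_img[200:260, 200:260] += 4.0                         # low-contrast insert

roi_obj = slice_img[210:250, 210:250]                      # inside the insert
roi_bg = slice_img[50:90, 50:90]                           # uniform background

cnr = (roi_obj.mean() - roi_bg.mean()) / roi_bg.std()
print(f"CNR = {cnr:.3f}")
```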

Divide and conquer kernel quantile regression for massive dataset (대용량 자료의 분석을 위한 분할정복 커널 분위수 회귀모형)

  • Bang, Sungwan;Kim, Jaeoh
    • The Korean Journal of Applied Statistics
    • /
    • v.33 no.5
    • /
    • pp.569-578
    • /
    • 2020
  • By estimating conditional quantile functions of the response, quantile regression (QR) can provide comprehensive information on the relationship between the response and the predictors. In addition, kernel quantile regression (KQR) estimates a nonlinear conditional quantile function in a reproducing kernel Hilbert space generated by a positive definite kernel function. However, it is infeasible to use KQR for analyzing massive data due to the limitations of computer primary memory. We propose a divide-and-conquer based KQR (DC-KQR) method to overcome this limitation. The proposed DC-KQR divides the entire data into a few subsets, applies KQR to each subset, and derives a final estimator by aggregating the results from all subsets. Simulation studies are presented to demonstrate the satisfactory performance of the proposed method.
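
A minimal sketch of the divide-and-conquer scheme: split the data into subsets, fit a quantile regressor on each subset in an approximate RBF feature space, and aggregate by averaging the subset predictions. RBFSampler plus QuantileRegressor is a kernel-approximation stand-in for the exact RKHS-based KQR in the paper; the toy data, quantile level, and hyperparameters are illustrative assumptions.

```python
# Sketch: divide-and-conquer quantile regression with random Fourier features.
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(4)
n, tau, n_subsets = 6000, 0.9, 4
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + rng.standard_normal(n) * 0.3        # toy nonlinear data

rbf = RBFSampler(gamma=1.0, n_components=100, random_state=0)
Z = rbf.fit_transform(X)

X_test = np.linspace(-3, 3, 200).reshape(-1, 1)
Z_test = rbf.transform(X_test)

# Fit one quantile regressor per random subset, then average the predictions.
idx = rng.permutation(n)
preds = []
for part in np.array_split(idx, n_subsets):
    model = QuantileRegressor(quantile=tau, alpha=1e-3).fit(Z[part], y[part])
    preds.append(model.predict(Z_test))
dc_kqr_estimate = np.mean(preds, axis=0)
print(dc_kqr_estimate[:5])
```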

A Dynamic Map Partition for Load Balancing of MMORPG based on Virtual Area Information (MMORPG에서의 부하 분산을 위한 가상 영역 정보 기반 동적 지역 분할)

  • Kim Beob-Kyun;An Dong-Un;Chung Seung-Jong
    • The KIPS Transactions:PartA
    • /
    • v.13A no.3 s.100
    • /
    • pp.223-230
    • /
    • 2006
  • A MMORPG (Massively Multiplayer Online Role-Playing Game) is an online role-playing game in which a large number of players can interact with each other in the same world at the same time. Most such games require significant hardware resources (e.g., servers and bandwidth) and dedicated support staff. Despite the efforts of developers, users often cite overpopulation, lag, and poor support as problems. In this paper, a dynamic load balancing method for MMORPGs is proposed. It adapts to dynamic changes in population by using a dynamic map-partition method with a VML (Virtual Map Layer), which consists of fields, sector groups, sectors, and cells. From the experimental results, our approach achieves about 23~67% lower loads for each field server. By modifying the virtual area layer, we can easily manage problems that arise from changes in map data, resource status, and users' behavior patterns.
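
A minimal sketch of the sector-level load-balancing idea: sectors with their current player counts are reassigned so that the load per field server is evened out. This greedy assignment is a simplified illustration under assumed names and data, not the paper's actual VML-based algorithm.

```python
# Sketch: greedy reassignment of map sectors to field servers by player count.
from collections import defaultdict

def rebalance(sector_load: dict, n_servers: int) -> dict:
    """Assign each sector to a server, always picking the least-loaded server."""
    assignment = defaultdict(list)
    loads = [0] * n_servers
    # Place heavy sectors first so they spread across servers.
    for sector, players in sorted(sector_load.items(), key=lambda kv: -kv[1]):
        target = loads.index(min(loads))
        assignment[target].append(sector)
        loads[target] += players
    return dict(assignment)

sectors = {"town": 900, "dungeon_a": 450, "field_1": 300, "field_2": 120, "cave": 80}
print(rebalance(sectors, n_servers=2))
```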