• Title/Summary/Keyword: 중복제거기법 (deduplication techniques)


Design of Parallel Rasterizer for 3D Graphics Accelerators (3D 그래픽 가속엔진을 위한 병렬 Rasterizer 설계)

  • O, In-Heung;Park, Jae-Seong;Kim, Sin-Deok
    • Journal of KIISE:Computer Systems and Theory, v.26 no.1, pp.82-97, 1999
  • 3D graphics rendering requires an enormous amount of computation, memory access, and data transfer, because not only the color but also the depth must be computed for every pixel on the screen. Parallel processing is therefore introduced for real-time 3D graphics. However, conventional graphics accelerators use screen partitioning, a parallelization scheme that exploits image parallelism, and this causes two main drawbacks: first, redundant computation for polygons lying on the boundaries between screen regions; second, low utilization of the processing elements (PEs). To solve these problems, this paper proposes a graphics accelerator based on Object-Based Rendering (OBR). The goals of the OBR system are to improve performance by removing the unnecessary overhead of screen partitioning and to reduce hardware cost by using resources efficiently. Through simulation, the performance of the OBR system was compared with PixelFlow, a representative screen-partitioning graphics accelerator. In conclusion, the OBR system performed rendering more efficiently with fewer hardware resources than the screen-partitioning approach.

Wider Depth Dynamic Range Using Occupancy Map Correction for Immersive Video Coding (몰입형 비디오 부호화를 위한 점유맵 보정을 사용한 깊이의 동적 범위 확장)

  • Lim, Sung-Gyun;Hwang, Hyeon-Jong;Oh, Kwan-Jung;Jeong, Jun Young;Lee, Gwangsoon;Kim, Jae-Gon
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2022.06a, pp.1213-1215, 2022
  • The MIV (MPEG Immersive Video) standard for immersive video coding efficiently compresses views captured at various positions in a limited 3D space to provide users with six-degrees-of-freedom (6DoF) immersion at arbitrary positions and orientations. The MIV reference software, TMIV (Test Model for Immersive Video), removes regions that overlap between multiple views to reduce the number of pixels to transmit, so the occupancy information of each pixel must also be transmitted for rendering at the decoder. TMIV embeds the occupancy map in the depth atlas for compressed transmission, and allocates part of the dynamic range used to represent depth values as a guard band to prevent loss of occupancy information due to coding errors. Reducing this guard band so that a wider dynamic range is available for depth values can improve rendering quality. This paper therefore analyzes the occupancy-information errors of the current TMIV, proposes a technique to correct them, and analyzes the coding performance obtained by extending the depth dynamic range. The proposed technique shows an average BD-rate gain of 1.3% over the existing TMIV.
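The guard-band trade-off described in the abstract above can be illustrated with a small sketch. The bit depth (10) and guard-band size (64) below are illustrative assumptions, not the exact TMIV configuration, and the helper names are hypothetical:

```python
# Illustrative: mapping normalized depth to integer codes when the low end
# of the code range is reserved as an occupancy guard band.

def usable_levels(bits: int = 10, guard_band: int = 64) -> int:
    """Depth levels left after reserving `guard_band` codes for occupancy."""
    return (1 << bits) - guard_band

def depth_to_code(depth_norm: float, bits: int = 10, guard_band: int = 64) -> int:
    """Map depth in [0, 1] to a code in [guard_band, 2**bits - 1].

    Codes below `guard_band` signal 'unoccupied'; shrinking the guard band
    leaves a wider dynamic range for depth, which is what the paper exploits.
    """
    max_code = (1 << bits) - 1
    return guard_band + round(depth_norm * (max_code - guard_band))
```

With 10-bit codes, shrinking the guard band from 64 to 16 raises the usable depth levels from 960 to 1008, which is the kind of dynamic-range gain the paper trades against occupancy robustness.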


Anomaly Detection Technique of Log Data Using Hadoop Ecosystem (하둡 에코시스템을 활용한 로그 데이터의 이상 탐지 기법)

  • Son, Siwoon;Gil, Myeong-Seon;Moon, Yang-Sae
    • KIISE Transactions on Computing Practices, v.23 no.2, pp.128-133, 2017
  • In recent years, the number of systems for the analysis of large volumes of data has been increasing. Hadoop, a representative big data system, stores and processes large data in a distributed environment of multiple servers, where system-resource management is very important. The authors attempted to detect anomalies in rapidly changing log data collected from multiple servers using simple but efficient anomaly-detection techniques. Accordingly, an Apache Hive storage architecture was designed to store the log data collected from the multiple servers in the Hadoop ecosystem. In addition, three anomaly-detection techniques were designed based on the moving-average and 3-sigma concepts. It was confirmed that all three techniques detected the abnormal intervals correctly, and that the weighted anomaly-detection technique is more precise than the basic techniques. These results show that log-data anomalies can be detected effectively with simple techniques in the Hadoop ecosystem.
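The moving-average / 3-sigma idea described above can be sketched as follows. The window size, threshold factor, and function name are illustrative assumptions, and the weighted variant mentioned in the abstract is omitted:

```python
# Sketch: flag a value as anomalous when it deviates from the mean of the
# preceding window by more than k standard deviations (the 3-sigma rule).
from statistics import mean, stdev

def detect_anomalies(series, window=5, k=3.0):
    """Return indices whose value lies outside mean +/- k*sigma of the
    preceding `window` values."""
    anomalies = []
    for i in range(window, len(series)):
        w = series[i - window:i]
        m, s = mean(w), stdev(w)
        if s > 0 and abs(series[i] - m) > k * s:
            anomalies.append(i)
    return anomalies
```

For example, in a stable series with one spike, only the spike's index is returned; a constant series yields no anomalies because its standard deviation is zero.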

Incremental Batch Update of Spatial Data Cube with Multi-dimensional Concept Hierarchies (다차원 개념 계층을 지원하는 공간 데이터 큐브의 점진적 일괄 갱신 기법)

  • Ok, Geun-Hyoung;Lee, Dong-Wook;You, Byeong-Seob;Lee, Jae-Dong;Bae, Hae-Young
    • Journal of Korea Multimedia Society, v.9 no.11, pp.1395-1409, 2006
  • A spatial data warehouse has a spatial data cube composed of multi-dimensional data for efficient OLAP (On-Line Analytical Processing) operations. A spatial data cube supporting concept hierarchies holds a huge amount of data, so many studies have investigated incremental update methods that minimize modification of the cube. However, a cube compressed by eliminating prefix and suffix redundancy has coalescing paths that cause update inconsistencies: an update can affect the aggregate value of a coalesced cell that has no relationship with the update. In this paper, we propose an incremental batch update method for a spatial data cube. The proposed method uses duplicated nodes and an extended node structure to avoid update inconsistencies. If a collision is detected during the update procedure, the shared node is duplicated and the duplicate is updated. As a result, a compressed spatial data cube that includes concept hierarchies can be updated incrementally with no inconsistency. In a performance evaluation, we show that the proposed method is more efficient than naive update methods.


An Adaptive Motion Vector Estimation Method for Multi-view Video Coding Based on Spatio-temporal Correlations among Motion Vectors (움직임 벡터들의 시·공간적 상관성을 이용한 다시점 비디오 부호화를 위한 적응적 움직임 벡터 추정 기법)

  • Yoon, Hyo-Sun;Kim, Mi-Young
    • The Journal of the Korea Contents Association, v.18 no.12, pp.35-45, 2018
  • Motion estimation (ME) reduces redundant data in a digital video signal and is an important part of a video encoding system. However, it accounts for a huge share of the encoder's computational complexity, so fast motion search methods have been proposed to reduce this complexity. Multi-view video is obtained by capturing a three-dimensional scene with many cameras at different positions, and its complexity increases in proportion to the number of cameras. In this paper, we propose an efficient motion estimation method that chooses a search pattern adaptively by using the spatio-temporal correlation of the block's motion vectors and the characteristics of the block. Experimental results show that the proposed method reduces computational complexity by up to 70~75% compared with the TZ search method and by up to 99% compared with full search (FS), while keeping similar image quality and bit rates.
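A minimal sketch of choosing a search pattern from neighbour motion vectors, in the spirit of the adaptive method described above. The median predictor, threshold, and pattern names are assumptions for illustration, not the paper's exact method:

```python
# Sketch: predict a block's motion from spatio-temporal neighbour motion
# vectors, then pick a small search pattern for quasi-stationary blocks and
# a large one for fast-moving blocks.

def predict_mv(neighbour_mvs):
    """Component-wise median predictor over neighbour (x, y) motion vectors."""
    xs = sorted(mv[0] for mv in neighbour_mvs)
    ys = sorted(mv[1] for mv in neighbour_mvs)
    mid = len(neighbour_mvs) // 2
    return xs[mid], ys[mid]

def choose_pattern(neighbour_mvs, threshold=2):
    """Small pattern when predicted motion is small, large pattern otherwise."""
    px, py = predict_mv(neighbour_mvs)
    return "small_diamond" if abs(px) + abs(py) <= threshold else "large_diamond"
```

The design intuition is the one the abstract relies on: motion fields are locally coherent, so neighbours with near-zero vectors suggest a narrow search suffices, saving most of the search-point evaluations.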

Data value extraction through comparison of online big data analysis results and water supply statistics (온라인 빅 데이터 분석 결과와 상수도 통계 비교를 통한 데이터 가치 추출)

  • Hong, Sungjin;Yoo, Do Guen
    • Proceedings of the Korea Water Resources Association Conference, 2021.06a, pp.431-431, 2021
  • With the advent of the Fourth Industrial Revolution, there is strong interest in extracting value from data for the planning, operation, and management of infrastructure. In the public-data openness index, which evaluates data availability, accessibility, and government support, Korea scored 0.93 out of 1, ranking first among OECD member countries (as of 2019), far above the average of 0.60. However, access to officially published infrastructure information, and to the information needed for in-depth analysis, is still limited. In particular, water supply systems, a representative type of infrastructure, are mostly designated as critical national facilities, which restricts obtaining and analyzing diverse information, and the national water supply statistics do not provide details such as the location and cause of abnormal events like leakage accidents. In this study, web crawling and big-data analysis techniques were used to survey all news articles on municipal water-leakage accidents over a given past period, and the number of accidents derived was compared with the officially certified water supply statistics. Extracting independent leakage-accident articles requires steps such as removing duplicate articles, establishing leakage-related keywords, and filtering out articles unrelated to water supply; these steps were implemented in R. In addition, natural-language-processing-based information extraction from the news articles obtained not only the number of leakage accidents but also the accident date, location, cause, extent of damage, and the size of the affected pipe, suggesting a way to extract and link more value than the information presented in the water supply statistics. Applying the proposed methodology to metropolitan city A in Korea, the news-article analysis yielded a number of leakage accidents similar in scale to that reported in the water supply statistics. Because it enables the extraction of additional information, the proposed methodology is expected to be highly applicable in the future.
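The study implements its duplicate-article removal step in R; a minimal Python sketch of one common approach to that step, token-set Jaccard similarity with an assumed threshold, is:

```python
# Sketch: keep a news item only if its token set does not overlap any
# already-kept item beyond a similarity threshold (an assumption; the
# paper's R implementation may use different criteria).

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity |a & b| / |a | b| of two token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def dedup_articles(titles, threshold=0.6):
    """Greedy first-seen-wins deduplication over tokenized titles."""
    kept, kept_tokens = [], []
    for t in titles:
        tokens = set(t.split())
        if all(jaccard(tokens, k) < threshold for k in kept_tokens):
            kept.append(t)
            kept_tokens.append(tokens)
    return kept
```

Two rewordings of the same leak report share most of their tokens and collapse to one entry, while an unrelated article survives, which is the behaviour needed before counting independent accidents.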


Reengineering Template-Based Web Applications to Single Page AJAX Applications (단일 페이지 AJAX 애플리케이션을 위한 템플릿 기반 웹 애플리케이션 재공학 기법)

  • Oh, Jaewon;Choi, Hyeon Cheol;Lim, Seung Ho;Ahn, Woo Hyun
    • KIPS Transactions on Software and Data Engineering, v.1 no.1, pp.1-6, 2012
  • Web pages in a template-based web application (TWA) are automatically populated by combining a template shared across the pages with contents specific to each page. Users can thus easily obtain information guided by the consistent structure of the template, and the reduced code duplication also improves maintainability. However, TWA still has the interaction problem of classic web applications: each time a user clicks a hyperlink, a whole new page is loaded even when a partial update of the page would suffice. This paper proposes a reengineering technique that transforms the multi-page structure of legacy Java-based TWA into a single-page structure with partial page refresh. In this approach, hyperlinks in HTML code are refactored into AJAX-enabled event handlers to achieve the single-page structure. In addition, JSP and Servlet code is transformed so as not to send data unnecessary for the partial update. The new single page consists of individual components that can be updated independently when interacting with a user. Therefore, our approach can improve interactivity and responsiveness while reducing CPU and network usage. Measurements of our technique applied to a typical TWA show that it improves the response time of user requests over the original TWA by 1 to 87%.

Efficient Management of Statistical Information of Keywords on E-Catalogs (전자 카탈로그에 대한 효율적인 색인어 통계 정보 관리 방법)

  • Lee, Dong-Joo;Hwang, In-Beom;Lee, Sang-Goo
    • The Journal of Society for e-Business Studies, v.14 no.4, pp.1-17, 2009
  • E-catalogs, which describe products or services, are among the most important data for electronic commerce. E-catalogs are created, updated, and removed to keep the e-catalog database up to date. However, as the number of catalogs increases, information integrity is violated for several reasons, such as catalog duplication and abnormal classification. Catalog search, duplication checking, and automatic classification are important functions for utilizing e-catalogs and keeping the integrity of the e-catalog database. To implement these functions, probabilistic models that use statistics of index words extracted from e-catalogs have been suggested, and their feasibility has been shown in several papers. However, even though these functions are used together in an e-catalog management system, there has not been enough consideration of how to share the common data used by each function and how to manage index-word statistics effectively. In this paper, we suggest a method to implement these three functions using simple SQL supported by a relational database management system. In addition, we use materialized views to reduce the load of maintaining index-word statistics; this makes the management efficient by letting the database management system optimize statistics updates. An empirical evaluation showed that our method is feasible for implementing the three functions and effective for managing index-word statistics.
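A minimal sketch of keeping index-word statistics inside the DBMS via a view, in the spirit of the approach above. SQLite is used here only to keep the example self-contained; it supports plain (not materialized) views, and the schema and names are illustrative assumptions:

```python
# Sketch: per-keyword document frequency and term frequency maintained by
# the database itself through a view over the index table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE catalog_index(catalog_id INTEGER, keyword TEXT);
CREATE VIEW keyword_stats AS
  SELECT keyword,
         COUNT(DISTINCT catalog_id) AS doc_freq,
         COUNT(*)                   AS term_freq
  FROM catalog_index
  GROUP BY keyword;
""")
conn.executemany("INSERT INTO catalog_index VALUES (?, ?)",
                 [(1, "laptop"), (1, "laptop"), (2, "laptop"), (2, "mouse")])
rows = dict((kw, (df, tf)) for kw, df, tf in
            conn.execute("SELECT * FROM keyword_stats"))
```

Because the statistics are defined declaratively, inserts and deletes on `catalog_index` never leave them stale; in a DBMS with materialized views, the same definition would also let the engine decide when to refresh the precomputed results.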


The Method of Digital Copyright Authentication for Contents of Collective Intelligence (집단지성 콘텐츠에 적합한 저작권 인증 기법)

  • Yun, Sunghyun;Lee, Keunho;Lim, Heuiseok;Kim, Daeryong;Kim, Jung-hoon
    • Journal of the Korea Convergence Society, v.6 no.6, pp.185-193, 2015
  • Wisdom contents consist of ordinary people's ideas and experience. The Wisdom Market [1] is an online business model where wisdom contents are traded, so the general public can easily carry out business activities in the Wisdom Market. Because wisdom contents are themselves the thoughts of persons, many similar or duplicated contents exist. Existing copyright protection schemes mainly focus on the primary author's rights and are therefore not appropriate for protecting contents of collective intelligence, which require protecting the rights of collaborators. A new method is needed that can dynamically combine and delete the rights of individual collaborators. In this study, we propose a collective copyright authentication scheme suitable for contents of collective intelligence. The proposed scheme consists of collective copyright registration, addition, and verification protocols. It can be applied to various business models that require combining multiple rights to similar contents or representing multiple authorships of the same contents.

Automatic Prostate Segmentation in MR Images based on Active Shape Model Using Intensity Distribution and Gradient Information (MR 영상에서 밝기값 분포 및 기울기 정보를 이용한 활성형상모델 기반 전립선 자동 분할)

  • Jang, Yu-Jin;Hong, Helen
    • Journal of KIISE:Software and Applications, v.37 no.2, pp.110-119, 2010
  • In this paper, we propose an automatic segmentation of the prostate in MR images using intensity distribution and gradient information. First, an active shape model using an adaptive intensity profile and a multi-resolution technique is used to extract the prostate surface. Second, hole elimination using geometric information is performed to prevent the holes that occur when the surface shape converges to local optima. Third, surface shapes with large anatomical variation are corrected using 2D gradient information; because the corrected surface is often rugged due to the limited number of vertices, it is then reconstructed using surface modelling and smoothing. To evaluate our method, we performed visual inspection, accuracy measurement, and processing-time measurement. For the accuracy evaluation, the average distance difference and the overlapping volume ratio between automatic segmentation and manual segmentation by two radiologists were calculated. Experimental results show that the average distance difference was 0.3±0.21 mm and the overlapping volume ratio was 96.31±2.71%. The total processing time for twenty patient datasets was 16 seconds on average.