• Title/Summary/Keyword: Partitioning Technique

The Efficient Cut Detection Algorithm Using the Weight in News Video Data (뉴스 비디오 데이터에서의 가중치를 이용한 효율적 장면변환 검출 알고리즘)

  • Jeong, Yeong-Eun;Lee, Dong-Seop;Sin, Seong-Yun;Jeon, Geun-Hwan;Bae, Seok-Chan;Lee, Yang-Won
    • The Transactions of the Korea Information Processing Society, v.6 no.2, pp.282-291, 1999
  • In order to construct a news video database system, cut detection is very important. In general, the color histogram, $\chi^2$ histogram, or bin-to-bin difference (B2B) techniques are mainly used for scene partitioning. In this paper, we propose an efficient cut detection algorithm that applies weights based on the NTSC standard. The algorithm reduces the time needed to acquire and compare histograms by calculating R, G, and B separately in the color histogram technique. It also provides an efficient method for selecting the threshold value, and news videos from KBS, MBC, SBS, CNN, and NHK are used as experimental domains. The experimental results show that the proposed algorithm is more efficient for cut detection than previous methods and provides a basis for the automatic selection of threshold values.
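
The weighting mentioned above can be illustrated with a short sketch. The snippet below is not the authors' code: it computes per-channel bin-to-bin histogram differences and combines them with NTSC-style luminance weights (0.299 for R, 0.587 for G, 0.114 for B); the bin count and the fixed threshold are placeholder assumptions rather than the paper's values.

```python
import numpy as np

# Illustrative sketch: per-channel histogram differencing with NTSC-style
# luminance weights for cut detection. The weights are the standard NTSC
# luminance coefficients; bins and threshold are placeholder assumptions.
NTSC_WEIGHTS = {"R": 0.299, "G": 0.587, "B": 0.114}

def channel_histogram(frame, channel, bins=64):
    """Normalized histogram of one color channel of an HxWx3 uint8 frame."""
    idx = {"R": 0, "G": 1, "B": 2}[channel]
    hist, _ = np.histogram(frame[:, :, idx], bins=bins, range=(0, 256))
    return hist / hist.sum()

def weighted_difference(prev_frame, curr_frame, bins=64):
    """Weighted sum of per-channel bin-to-bin histogram differences."""
    diff = 0.0
    for ch, w in NTSC_WEIGHTS.items():
        h_prev = channel_histogram(prev_frame, ch, bins)
        h_curr = channel_histogram(curr_frame, ch, bins)
        diff += w * np.abs(h_prev - h_curr).sum()
    return diff

def detect_cuts(frames, threshold=0.4):
    """Report frame indices whose difference to the previous frame exceeds the threshold."""
    return [i for i in range(1, len(frames))
            if weighted_difference(frames[i - 1], frames[i]) > threshold]
```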

Shot Boundary Detection of Video Sequence Using Hierarchical Hidden Markov Models (계층적 은닉 마코프 모델을 이용한 비디오 시퀀스의 셧 경계 검출)

  • Park, Jong-Hyun;Cho, Wan-Hyun;Park, Soon-Young
    • The Journal of Korean Institute of Communications and Information Sciences, v.27 no.8A, pp.786-795, 2002
  • In this paper, we present a histogram- and moment-based video scene change detection technique using hierarchical Hidden Markov Models (HMMs). The proposed method extracts histograms from the low-frequency subband and moments of edge components from the high-frequency subbands of wavelet-transformed images. Each HMM is then trained using the histogram differences and directional moment differences, respectively, extracted from manually labeled video. The video segmentation process consists of two steps. A histogram-based HMM is first used to segment the input video sequence into three categories: shot, cut, and gradual scene change. In the second stage, a moment-based HMM is used to further segment the gradual changes into fades and dissolves. The experimental results show that the proposed technique is more effective in partitioning video frames than previous threshold-based methods.
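
A minimal sketch of the hierarchical decoding idea follows, assuming a discrete-observation HMM with hand-picked (not trained) parameters; the paper's wavelet features and trained models are not reproduced here. A second HMM of the same form would be run over directional-moment differences inside each "gradual" run to separate fades from dissolves.

```python
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """Most likely state path of a discrete-observation HMM (log-domain)."""
    T, S = len(obs), trans_p.shape[0]
    logp = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    logp[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        for s in range(S):
            scores = logp[t - 1] + np.log(trans_p[:, s])
            back[t, s] = int(np.argmax(scores))
            logp[t, s] = scores[back[t, s]] + np.log(emit_p[s, obs[t]])
    path = [int(np.argmax(logp[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Stage-1 HMM (illustrative numbers): states 0=shot, 1=cut, 2=gradual;
# observations are histogram differences quantised as 0=small, 1=medium, 2=large.
start = np.array([0.98, 0.01, 0.01])
trans = np.array([[0.97, 0.02, 0.01],
                  [0.90, 0.05, 0.05],
                  [0.10, 0.05, 0.85]])
emit = np.array([[0.90, 0.08, 0.02],   # within a shot: mostly small differences
                 [0.05, 0.15, 0.80],   # at a cut: mostly large differences
                 [0.20, 0.60, 0.20]])  # gradual change: mostly medium differences

labels = viterbi([0, 0, 2, 0, 1, 1, 1, 0, 0], start, trans, emit)
print(labels)  # mostly shot states, with a cut spike and a gradual run
```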

An Efficient Video Management Technique using Forward Timeline on Multimedia Local Server (전방향 시간 경계선을 활용한 멀티미디어 지역 서버에서의 효율적인 동영상 관리 기법)

  • Lee, Jun-Pyo;Woo, Soon
    • Journal of the Korea Society of Computer and Information, v.16 no.10, pp.147-153, 2011
  • In this paper, we present a new video management technique that uses a forward timeline to efficiently store and delete videos on a local server. The proposed method captures the changing preference for videos according to the recentness, frequency, and playback length of the requested videos. For this purpose, we utilize the forward timeline, which represents the time area within a number of predefined intervals. The local server periodically measures the time popularity and request segment of every video. Based on these measurements, the local server calculates the mean time popularity and mean request segment of a video over the forward timeline. Using these means, we estimate the ranking and allocated storage space of a video. The ranking represents the deletion priority when the storage area of the local server is running out of space, and the allocated storage space is the maximum amount of storage to be allocated to a video. In addition, we propose an efficient storage space partitioning technique for storing videos stably, and present a time-based free-up storage technique that uses the expected variation of video data to avoid overflow on the local server in advance. The simulation results show that the proposed method performs better than other methods in terms of hit rate and number of deletions. Therefore, our video management technique for local servers provides the lowest user start-up latency and the highest bandwidth saving.
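
One plausible reading of the bookkeeping described above is sketched below; the class name, the window length, and the ranking and allocation formulas are assumptions for illustration, not the paper's exact model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VideoStats:
    """Per-video statistics collected once per predefined interval (assumed layout)."""
    name: str
    popularity: List[float] = field(default_factory=list)       # requests per interval
    request_segment: List[float] = field(default_factory=list)  # avg. requested length per interval

    def record_interval(self, requests, avg_segment):
        self.popularity.append(requests)
        self.request_segment.append(avg_segment)

    def mean_popularity(self, window):
        recent = self.popularity[-window:]
        return sum(recent) / len(recent) if recent else 0.0

    def mean_segment(self, window):
        recent = self.request_segment[-window:]
        return sum(recent) / len(recent) if recent else 0.0

def deletion_order(videos, window=4):
    """Least popular videos first: candidates for removal when space runs out."""
    return sorted(videos, key=lambda v: v.mean_popularity(window))

def allocated_space(video, total_space, videos, window=4):
    """Storage share proportional to mean popularity times mean requested segment (assumed rule)."""
    weight = video.mean_popularity(window) * video.mean_segment(window)
    total = sum(v.mean_popularity(window) * v.mean_segment(window) for v in videos) or 1.0
    return total_space * weight / total
```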

Preference Prediction System using Similarity Weight granted Bayesian estimated value and Associative User Clustering (베이지안 추정치가 부여된 유사도 가중치와 연관 사용자 군집을 이용한 선호도 예측 시스템)

  • 정경용;최성용;임기욱;이정현
    • Journal of KIISE:Software and Applications, v.30 no.3_4, pp.316-325, 2003
  • Existing user preference prediction methods based on collaborative filtering use the nearest-neighborhood method built on user preferences for items and compute user similarity with the Pearson correlation coefficient. Therefore, they do not reflect any content information about the items and do not solve the sparsity problem. This study proposes a preference prediction system that uses a similarity weight granted a Bayesian estimated value together with associative user clustering to complement the problems of existing collaborative preference prediction methods. The proposed method groups users by genre using the Association Rule Hypergraph Partitioning algorithm, and a new user is classified into one of these genres by a Naive Bayes classifier to solve the sparsity problem in the collaborative filtering system. In addition, to obtain the similarity between new users and users belonging to the classified genre, this study assigns different estimated values to the items a user has voted on through Naive Bayes learning. Applying the preference with the estimated value to the existing Pearson correlation coefficient improves prediction precision by reducing the error caused by missing values. To evaluate the performance of the proposed method, it is compared with existing collaborative filtering techniques. The results show that the proposed method is effective in improving prediction accuracy by solving the problems of existing collaborative filtering techniques.
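
A hedged sketch of two of the ingredients named above: a Bayesian-style shrinkage of sparse rating means and the Pearson correlation used for neighbor weighting. The smoothing constant and the global mean are illustrative values, not taken from the paper.

```python
import math

def bayesian_estimate(ratings, prior_mean, m=5.0):
    """Shrink a user's mean rating toward a prior when few ratings exist (m is assumed)."""
    n = len(ratings)
    return (sum(ratings) + m * prior_mean) / (n + m)

def pearson(u_ratings, v_ratings):
    """Pearson correlation over items rated by both users (dicts of item -> rating)."""
    common = set(u_ratings) & set(v_ratings)
    if len(common) < 2:
        return 0.0
    mu_u = sum(u_ratings[i] for i in common) / len(common)
    mu_v = sum(v_ratings[i] for i in common) / len(common)
    num = sum((u_ratings[i] - mu_u) * (v_ratings[i] - mu_v) for i in common)
    den_u = math.sqrt(sum((u_ratings[i] - mu_u) ** 2 for i in common))
    den_v = math.sqrt(sum((v_ratings[i] - mu_v) ** 2 for i in common))
    return num / (den_u * den_v) if den_u and den_v else 0.0

def predict(target, item, neighbors, ratings, global_mean=3.0):
    """Weighted-average prediction of target's rating for an item from similar neighbors."""
    mu_t = bayesian_estimate(list(ratings[target].values()), global_mean)
    num = den = 0.0
    for v in neighbors:
        if item in ratings[v]:
            w = pearson(ratings[target], ratings[v])
            mu_v = bayesian_estimate(list(ratings[v].values()), global_mean)
            num += w * (ratings[v][item] - mu_v)
            den += abs(w)
    return mu_t + (num / den if den else 0.0)
```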

Performance Improvement of Collaborative Filtering System Using Associative User's Clustering Analysis for the Recalculation of Preference and Representative Attribute-Neighborhood (선호도 재계산을 위한 연관 사용자 군집 분석과 Representative Attribute-Neighborhood를 이용한 협력적 필터링 시스템의 성능향상)

  • Jung, Kyung-Yong;Kim, Jin-Su;Kim, Tae-Yong;Lee, Jung-Hyun
    • The KIPS Transactions:PartB, v.10B no.3, pp.287-296, 2003
  • Much research on recommender systems has focused on the collaborative filtering technique. However, these studies suffer from the first-rater problem and the sparsity problem, and the main purpose of this paper is to solve them. In this paper, we suggest a method for predicting user preference that uses a Bayesian estimated value and associative user clustering to recalculate preferences. In addition, to complement the shortcoming that item attributes are not considered, we use the Representative Attribute-Neighborhood method, which finds similar neighbors for prediction by extracting the representative attribute that most affects the preference. We improve efficiency by using associative user clustering analysis to calculate the preference of a specific item within the cluster item vector in the collaborative filtering algorithm. To address the sparsity and first-rater problems, associative users are clustered by genre using the Association Rule Hypergraph Partitioning algorithm, and new users are classified into one of these genres by a Naive Bayes classifier. Furthermore, to obtain the similarity between new users and users belonging to the classified genre, this paper assigns different estimated values to the items a user has evaluated through Naive Bayes learning. Applying the preference with the estimated value to the Pearson correlation coefficient yields higher accuracy because the errors caused by missing values are reduced. We evaluate our method on a large collaborative filtering database of user ratings, and it significantly outperforms previously proposed methods.
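
Both this abstract and the previous one classify new users into genre clusters with Naive Bayes. The sketch below shows a minimal Laplace-smoothed classifier of that kind; the feature choice (genres of items the user rated) and all counts are assumptions for illustration, not the papers' trained models.

```python
import math
from collections import defaultdict

class GenreNaiveBayes:
    """Assign a new user to one of the genre-based user clusters (illustrative sketch)."""

    def __init__(self):
        self.cluster_counts = defaultdict(int)                        # cluster -> users seen
        self.feature_counts = defaultdict(lambda: defaultdict(int))   # cluster -> genre -> count

    def train(self, labeled_users):
        """labeled_users: iterable of (cluster_label, list_of_genres_the_user_rated)."""
        for cluster, genres in labeled_users:
            self.cluster_counts[cluster] += 1
            for g in genres:
                self.feature_counts[cluster][g] += 1

    def classify(self, genres):
        """Return the cluster with the highest Laplace-smoothed posterior."""
        total_users = sum(self.cluster_counts.values())
        vocab = {g for counts in self.feature_counts.values() for g in counts}
        best, best_logp = None, -math.inf
        for cluster, n in self.cluster_counts.items():
            logp = math.log(n / total_users)  # cluster prior
            denom = sum(self.feature_counts[cluster].values()) + len(vocab)
            for g in genres:
                logp += math.log((self.feature_counts[cluster][g] + 1) / denom)
            if logp > best_logp:
                best, best_logp = cluster, logp
        return best
```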

The Contact and Parallel Analysis of Smoothed Particle Hydrodynamics (SPH) Using Polyhedral Domain Decomposition (다면체영역분할을 이용한 SPH의 충돌 및 병렬해석)

  • Moonho Tak
    • Journal of the Korean GEO-environmental Society, v.25 no.4, pp.21-28, 2024
  • In this study, a polyhedral domain decomposition method for Smoothed Particle Hydrodynamics (SPH) analysis is introduced. SPH, one of the meshless methods, is a numerical analysis method for fluid flow simulation. It is useful for analyzing fluidic soil or fluid-structure interaction problems. SPH is a particle-based method in which an increased particle count generally improves accuracy but diminishes numerical efficiency. To enhance numerical efficiency, parallel processing algorithms are commonly employed with a Cartesian coordinate-based domain decomposition method. However, for parallel analysis of complex geometric shapes or fluidic problems under dynamic boundary conditions, the Cartesian coordinate-based domain decomposition method may not be suitable. The introduced polyhedral domain decomposition technique offers advantages in enhancing parallel efficiency for such problems, as it allows partitioning into various forms of 3D polyhedral elements that better fit the problem. The physical properties of SPH particles are calculated using information from neighboring particles within the smoothing length. Methods are presented for sharing the information of particles that become physically separated at partitioning and for sharing information at cross-points where parallel efficiency might otherwise diminish. In the numerical examples, the proposed method's parallel efficiency approached 95% for up to 12 cores; as the number of cores increases further, parallel efficiency decreases due to increased information sharing among cores.
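
A rough sketch of how convex polyhedral subdomains and shared (ghost) particles might be represented; the half-space description and the sharing rule based on the smoothing length h are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

class PolyhedralDomain:
    """Convex subdomain given by face planes: inside means dot(x - p_i, n_i) <= 0 for all faces."""

    def __init__(self, plane_points, plane_normals):
        self.p = np.asarray(plane_points, dtype=float)
        self.n = np.asarray(plane_normals, dtype=float)
        self.n /= np.linalg.norm(self.n, axis=1, keepdims=True)

    def signed_distances(self, x):
        """Signed distance of point x to every face plane (negative = inside side)."""
        return np.einsum("ij,ij->i", x - self.p, self.n)

    def contains(self, x):
        return np.all(self.signed_distances(x) <= 0.0)

    def needs_sharing(self, x, h):
        """Inside the subdomain but within smoothing length h of some face."""
        d = self.signed_distances(x)
        return np.all(d <= 0.0) and np.any(d > -h)

def partition(particles, domains, h):
    """Assign each particle to its owner subdomain and collect particles to share with neighbors."""
    owned = {i: [] for i in range(len(domains))}
    shared = {i: [] for i in range(len(domains))}
    for idx, x in enumerate(particles):
        for i, dom in enumerate(domains):
            if dom.contains(x):
                owned[i].append(idx)
                if dom.needs_sharing(x, h):
                    shared[i].append(idx)  # exchanged with neighboring cores each step
                break
    return owned, shared
```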

A Non-Shared Metadata Management Scheme for Large Distributed File Systems (대용량 분산파일시스템을 위한 비공유 메타데이타 관리 기법)

  • Yun, Jong-Byeon;Park, Yang-Bun;Lee, Seok-Jae;Jang, Su-Min;Yoo, Jae-Soo;Kim, Hong-Yeon;Kim, Young-Kyun
    • Journal of KIISE:Computer Systems and Theory, v.36 no.4, pp.259-273, 2009
  • Most large-scale distributed file systems decouple metadata operations from file read and write operations. In these systems, a dedicated server called a metadata server (MDS) maintains the file system's metadata, such as the access information for a file, the position of a file in the repository, and the namespace of the file system. However, existing systems use restrictive metadata management schemes because most distributed file systems are designed to focus on distributed management and data input/output performance rather than on metadata. Therefore, in existing systems, the metadata throughput and scalability of the metadata server are limited. In this paper, we propose a new non-shared metadata management scheme that provides high metadata throughput and scalability for a cluster of MDSs. First, we derive a dictionary partitioning scheme as a new metadata distribution technique. Then, we present a load balancing technique based on this distribution technique. It is shown through various experiments that our scheme outperforms existing metadata management schemes in terms of scalability and load balancing.
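
One plausible reading of a non-shared "dictionary partitioning" is a range partition of the namespace keys across metadata servers, sketched below; the split keys and the load-based rebalancing hint are illustrative assumptions, not the paper's exact scheme.

```python
import bisect

class MetadataCluster:
    """Each MDS owns a disjoint, sorted key range of the namespace, so no inter-server locking is needed."""

    def __init__(self, split_keys, server_names):
        # split_keys sorted; server i owns keys in [split_keys[i-1], split_keys[i]).
        assert len(server_names) == len(split_keys) + 1
        self.split_keys = list(split_keys)
        self.servers = server_names
        self.load = {s: 0 for s in server_names}

    def server_for(self, path):
        """Route a namespace key (e.g. a full path) to the MDS that owns its range."""
        i = bisect.bisect_right(self.split_keys, path)
        return self.servers[i]

    def record_op(self, path):
        self.load[self.server_for(path)] += 1

    def most_loaded(self):
        """Candidate whose range would be split when load becomes unbalanced."""
        return max(self.load, key=self.load.get)

# Example: three servers partitioning the namespace at '/g' and '/p'.
cluster = MetadataCluster(["/g", "/p"], ["mds0", "mds1", "mds2"])
print(cluster.server_for("/data/img001.raw"))   # -> mds0
print(cluster.server_for("/home/alice/a.txt"))  # -> mds1
print(cluster.server_for("/var/log/sys.log"))   # -> mds2
```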

An Energy-Efficient Clustering Using Division of Cluster in Wireless Sensor Network (무선 센서 네트워크에서 클러스터의 분할을 이용한 에너지 효율적 클러스터링)

  • Kim, Jong-Ki;Kim, Yoeng-Won
    • Journal of Internet Computing and Services, v.9 no.4, pp.43-50, 2008
  • Various studies have been conducted to achieve efficient routing and reduce energy consumption in wireless sensor networks, where replacing a node's energy supply is difficult. Among routing mechanisms, the clustering technique is known to be the most efficient. The clustering technique consists of cluster construction and data transmission. The steps that construct a cluster are repeated at regular intervals in order to equalize energy consumption among the sensor nodes in the cluster. The algorithms for selecting a cluster head node and assigning cluster member nodes optimized for that head are complex and require high energy consumption. Furthermore, the energy consumed for data transmission is proportional to $d^2$ below the crossover distance and to $d^4$ above it. This paper proposes a means of reducing energy consumption by increasing the efficiency of the cluster construction steps that are regularly repeated in the clustering technique. The proposed approach keeps the number of sensor nodes in a cluster constant by equally partitioning the region in which nodes are allocated, taking density into account, and it reduces energy consumption by selecting head nodes near the center of the cluster. Simulation experiments confirm that the proposed approach consumes less energy than the LEACH algorithm.
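
The $d^2$/$d^4$ remark corresponds to the first-order radio model commonly used in LEACH-style simulations, sketched below; the constants are the values usually quoted in that literature and are assumptions here, not taken from this paper.

```python
import math

# First-order radio model: free-space loss below the crossover distance d0,
# multipath loss above it. Constants are conventional simulation values.
E_ELEC = 50e-9       # J per bit for transmitter/receiver electronics
EPS_FS = 10e-12      # J per bit per m^2 (free-space amplifier)
EPS_MP = 0.0013e-12  # J per bit per m^4 (multipath amplifier)
D0 = math.sqrt(EPS_FS / EPS_MP)  # crossover distance (~87 m with these constants)

def tx_energy(bits, distance):
    """Energy to transmit `bits` over `distance` meters."""
    if distance < D0:
        return E_ELEC * bits + EPS_FS * bits * distance ** 2
    return E_ELEC * bits + EPS_MP * bits * distance ** 4

def rx_energy(bits):
    """Energy to receive `bits`."""
    return E_ELEC * bits

def centroid_head(nodes):
    """Pick the node nearest the cluster centroid, which keeps member-to-head distances small."""
    cx = sum(x for x, _ in nodes) / len(nodes)
    cy = sum(y for _, y in nodes) / len(nodes)
    return min(nodes, key=lambda n: (n[0] - cx) ** 2 + (n[1] - cy) ** 2)
```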

Performance Evaluation of Bit Error Resilience for Pixel-domain Wyner-Ziv Video Codec with Frame Difference Residual Signal (화면 간 차이 신호에 대한 화소 영역 위너-지브 비디오 코덱의 비트 에러 내성 성능 평가)

  • Kim, Jin-Soo
    • The Journal of the Korea Contents Association, v.12 no.8, pp.20-28, 2012
  • Distributed Video Coding (DVC) is a new paradigm based on the Slepian-Wolf and Wyner-Ziv theorems. DVC offers not only flexible partitioning of complexity between the encoder and decoder, but also robustness to channel errors due to its intrinsic joint source-channel coding. Most conventional research has focused on a lightweight video encoder and on improving its rate-distortion performance. In this paper, however, we propose a new DVC codec that is effective in error-prone environments. The proposed method adopts a quantiser without a dead zone and a Gray code that is symmetric around zero. Through computer simulations, the proposed method is evaluated with respect to the positions of bit errors as well as the number of burst bit errors. Additionally, it is shown that the maximum and minimum transmission rates for a given application can be determined linearly from the number of bit errors.
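
A small sketch of the quantisation idea: a uniform quantiser with no dead zone whose level indices are mapped to a binary-reflected Gray code, so neighbouring levels differ in a single bit and one bit error moves the reconstruction by at most a few levels. The step size and bit depth are illustrative assumptions.

```python
import numpy as np

def quantize(residual, step=16, bits=4):
    """Uniform quantisation of a signed frame-difference residual, no dead zone."""
    levels = 1 << bits
    # Offset so the level range is symmetric around zero.
    idx = np.floor(residual / step).astype(int) + levels // 2
    return np.clip(idx, 0, levels - 1)

def to_gray(index):
    """Binary-reflected Gray code of a quantiser level index."""
    return index ^ (index >> 1)

def from_gray(gray):
    """Invert the Gray code: binary = g ^ (g>>1) ^ (g>>2) ^ ..."""
    index = gray.copy()
    shift = 1
    while np.any(gray >> shift):
        index ^= gray >> shift
        shift += 1
    return index

def bitplanes(gray, bits=4):
    """Split Gray-coded indices into bitplanes, most significant first."""
    return [(gray >> b) & 1 for b in reversed(range(bits))]

# Example: residuals around zero map to adjacent Gray codes differing in one bit.
res = np.array([-24, -8, 0, 8, 24])
print(to_gray(quantize(res)))
```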

A Real-time SoC Design of Foreground Object Segmentation (Foreground 객체 추출을 위한 실시간 SoC 설계)

  • Kim Ji-Su;Lee Tae-Ho;Lee Hyuk-Jae
    • Journal of the Institute of Electronics Engineers of Korea SD, v.43 no.9 s.351, pp.44-52, 2006
  • The recently developed MPEG-4 Part 2 compression standard provides a novel capability to handle arbitrary video objects. To support this capability, an efficient object segmentation technique is required. This paper proposes a real-time algorithm for foreground object segmentation in video sequences. The proposed algorithm consists of two steps: the first segments a video frame into multiple sub-regions using a Spatio-Temporal Watershed Transform, and the second extracts a foreground object segment from the sub-regions generated in the first step. For real-time processing, the algorithm is partitioned into hardware and software parts so that the computationally expensive parts are off-loaded from the processor and executed by hardware accelerators. Simulation results show that the proposed implementation can handle QCIF-size video at 15 fps and extracts an accurate foreground object.
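
For illustration only, the sketch below replaces the paper's spatio-temporal watershed transform with plain frame differencing plus connected components, keeping just the two-step structure (split into regions, then keep the foreground region); it is not the SoC algorithm, and the threshold and area parameters are assumptions.

```python
import cv2
import numpy as np

def foreground_mask(prev_gray, curr_gray, diff_thresh=25, min_area=500):
    """Return a binary mask of the largest moving region between two grayscale frames."""
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, binary = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    mask = np.zeros_like(binary)
    if n_labels > 1:
        # Label 0 is the background; keep the largest remaining component as foreground.
        areas = stats[1:, cv2.CC_STAT_AREA]
        best = 1 + int(np.argmax(areas))
        if areas[best - 1] >= min_area:
            mask[labels == best] = 255
    return mask
```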