• Title/Summary/Keyword: Search Techniques (검색기법)

Search Result 2,750, Processing Time 0.028 seconds

A Design of 4×4 Block Parallel Interpolation Motion Compensation Architecture for 4K UHD H.264/AVC Decoder (4K UHD급 H.264/AVC 복호화기를 위한 4×4 블록 병렬 보간 움직임보상기 아키텍처 설계)

  • Lee, Kyung-Ho;Kong, Jin-Hyeung
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.5
    • /
    • pp.102-111
    • /
    • 2013
  • In this paper, we propose a 4×4 block parallel interpolation architecture for high-performance H.264/AVC motion compensation in real-time 4K UHD (3840×2160) video processing. To improve throughput, we design 4×4 block parallel interpolation. To supply the 9×9 reference data needed for interpolation, we design a 2D cache buffer consisting of 9×9 memory arrays. We minimize redundant storage of reference pixels by applying the Search Area Stripe Reuse (SASR) scheme, and implement a high-speed plane interpolator with a 3-stage pipeline (horizontal/vertical 1/2 interpolation, diagonal 1/2 interpolation, and 1/4 interpolation). The proposed architecture was simulated with a 0.13um standard cell library; the maximum operating frequency is 150MHz and the gate count is 161K gates. Running at 150MHz, the proposed H.264/AVC motion compensation unit can support 4K UHD at 72 frames per second.
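The half-pel interpolation step mentioned in this abstract can be sketched with the standard H.264/AVC 6-tap luma filter (1, -5, 20, 20, -5, 1); the function name and sample values below are illustrative only, not the paper's implementation.

```python
def half_pel(samples):
    """Interpolate the half-sample position between samples[2] and samples[3]
    from six neighboring integer samples, as in H.264/AVC luma interpolation."""
    taps = (1, -5, 20, 20, -5, 1)
    acc = sum(t * s for t, s in zip(taps, samples))
    # Round, normalize by 32 (the taps sum to 32), and clip to 8-bit range.
    return max(0, min(255, (acc + 16) >> 5))

print(half_pel([10, 10, 10, 10, 10, 10]))  # → 10 (a flat signal stays flat)
```

A hardware pipeline like the one described would evaluate many such filters in parallel across the 4×4 block rather than sequentially as here.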

An Efficient Top-k Query Processing Algorithm over Encrypted Outsourced-Data in the Cloud (아웃소싱 암호화 데이터에 대한 효율적인 Top-k 질의 처리 알고리즘)

  • Kim, Jong Wook;Suh, Young-Kyoon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.4 no.12
    • /
    • pp.543-548
    • /
    • 2015
  • Recently, top-k query processing has become extremely important along with the explosion of data produced by a variety of applications. Top-k queries return the best k results ordered by a user-provided monotone scoring function. As cloud computing services have become more popular than ever, much attention has been paid to cloud-based data outsourcing, in which clients' data are stored and managed by the cloud. Cloud-based data outsourcing, however, exposes a critical security concern for sensitive data, which can result in misuse by unauthorized users. Hence it is essential to encrypt sensitive data before outsourcing it to the cloud. However, little attention has been paid to efficient top-k processing on encrypted cloud data. In this paper we propose a novel top-k processing algorithm that can efficiently process a large amount of encrypted data in the cloud. The main idea of the algorithm is to prune unpromising intermediate results at an early phase, without decrypting the encrypted data, by leveraging an order-preserving encryption technique. Experimental results show that the proposed top-k processing algorithm reduces the overhead of client systems by 10X to 10000X.
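The pruning idea rests on a simple property: an order-preserving encryption scheme lets the server compare ciphertexts as if they were plaintexts. The sketch below uses a toy strictly increasing affine map as a stand-in for a real OPE scheme (it preserves order, which is all the pruning step needs, but it is in no way secure); all names and values are illustrative.

```python
def ope_encrypt(score):
    # Stand-in for a real order-preserving encryption scheme:
    # strictly increasing, so ciphertext order matches plaintext order.
    return 3 * score + 7

def server_top_k(encrypted_scores, k):
    """Server side: keep the k largest items by comparing ciphertexts only,
    never decrypting; the rest are pruned early."""
    return sorted(encrypted_scores, reverse=True)[:k]

scores = [90, 10, 75, 30, 60]
enc = [ope_encrypt(s) for s in scores]
top2 = server_top_k(enc, 2)
print(top2)  # → [277, 232], the ciphertexts of 90 and 75
```

The client then decrypts only the k returned ciphertexts, which is where the reported client-side savings would come from.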

A Study on the Change of the View of Love using Text Mining and Sentiment Analysis (텍스트 마이닝과 감성 분석을 통한 연애관의 변화 연구 : <공항가는 길>과 <이번 주 아내가 바람을 핍니다>를 중심으로)

  • Kim, Kyung-Ae;Ku, Jin-Hee
    • Journal of Digital Convergence
    • /
    • v.15 no.2
    • /
    • pp.285-294
    • /
    • 2017
  • In this study, changes in the view of love were analyzed through big data analysis of TV dramas about married people's love. Two dramas with opposite takes on the love-story theme were selected for analysis. Audience reactions during the one-month period after each drama ended were analyzed by text mining and sentiment analysis. In particular, changes in the meaning of 'home' were identified: home is no longer 'a place where a husband and wife play social roles' but 'a place where they can share real sympathy and be happy', and if individuals are not happy, they may break up their homes. The current divorce rate and the questions surrounding it should be considered in this light. Based on Google Trends, however, interest in marriage in Korean society was still higher than interest in romance, which means that people in modern Korean society prefer 'love leading to marriage' over 'love for love's sake'. This seems to reflect a change in perception: marriage should be based on true love. This study is expected to be applicable to the study of trend changes through social media.
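The sentiment-analysis step described above can be illustrated with a minimal lexicon-based scorer of the kind commonly applied to viewer comments; the tiny lexicon and sample comments below are made up for illustration and are not the study's data.

```python
# Hypothetical sentiment lexicons; a real study would use a full
# (here, Korean-language) sentiment dictionary.
POSITIVE = {"love", "happy", "touching"}
NEGATIVE = {"divorce", "betrayal", "sad"}

def sentiment(comment):
    """Score a comment as (# positive words) - (# negative words)."""
    words = comment.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

comments = ["such a touching love story", "the betrayal made me sad"]
print([sentiment(c) for c in comments])  # → [2, -2]
```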

A PageRank based Data Indexing Method for Designing Natural Language Interface to CRM Databases (분석 CRM 실무자의 자연어 질의 처리를 위한 기업 데이터베이스 구성요소 인덱싱 방법론)

  • Park, Sung-Hyuk;Hwang, Kyeong-Seo;Lee, Dong-Won
    • CRM연구
    • /
    • v.2 no.2
    • /
    • pp.53-70
    • /
    • 2009
  • Understanding consumer behavior through analysis of customer data is an essential part of analytic CRM. To do this, skills in data extraction and data processing are required of users. Since users have various kinds of questions about consumer data, they need to use a database language such as SQL. However, generating SQL statements is not easy for practitioners, because the accuracy of a query result depends heavily on knowledge of both work-site operations and the firm's database. This paper proposes a natural language based database search framework for finding relevant database elements. Specifically, we describe how our TableRank method can understand a user's natural language query and provide the proper relations and attributes of data records to the user. Through several experiments, we show that TableRank provides accurate database elements related to the user's natural language query. We also show that close distance among relations in the database represents high data connectivity, which guarantees matching with a user's search query.
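Since TableRank is PageRank-based, the core scoring step can be sketched as power iteration over a graph whose nodes are database relations; the relation names and links below are hypothetical, not the paper's schema.

```python
def pagerank(links, d=0.85, iters=50):
    """Plain power-iteration PageRank over a directed edge list."""
    nodes = sorted({n for e in links for n in e})
    rank = {n: 1.0 / len(nodes) for n in nodes}
    out = {n: [b for a, b in links if a == n] for n in nodes}
    for _ in range(iters):
        new = {n: (1 - d) / len(nodes) for n in nodes}
        for n in nodes:
            for m in out[n]:
                new[m] += d * rank[n] / len(out[n])
        rank = new
    return rank

# Toy relation graph: foreign-key style links between relations.
links = [("Customer", "Order"), ("Order", "Customer"), ("Product", "Order")]
r = pagerank(links)
print(max(r, key=r.get))  # → Order (it receives links from both other relations)
```

In the paper's setting, the ranking would be combined with matching between query terms and relation/attribute names to pick the elements returned to the user.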


Convergence Effects of Treadmill Training on Plantar Pressure, Lower Limb Muscle Function, and Balance in Chronic Stroke : A Meta-Analysis (만성 뇌졸중 환자의 트레드밀 훈련이 족저압, 하지 근 기능, 균형에 미치는 융복합적 효과 : 메타분석)

  • Choi, Ki-Bok;Cho, Sung-Hyoun
    • Journal of the Korea Convergence Society
    • /
    • v.11 no.5
    • /
    • pp.87-96
    • /
    • 2020
  • The purpose of this study is to evaluate the convergence effectiveness of treadmill training in patients with chronic stroke through a meta-analysis. After searching the literature based on the patients, intervention, comparison, outcome criteria, and study design, a total of 22 studies related to "stroke" and "treadmill" were eligible for inclusion. Effect sizes were calculated using a comprehensive meta-analysis program. Based on the forest plot results, the overall effect size of treadmill training was 0.661 (95% confidence interval: 0.456-0.865), which was statistically significant with a medium effect size (p < 0.05). The effects of treadmill training on patients with stroke were separated by the dependent variables of interest: plantar pressure (1.147), lower limb muscle function (0.875), and balance (0.664). For balance, effect sizes were also evaluated for the subdomains of the timed up and go test (0.553), Berg Balance Scale (0.760), and static balance index (0.654). Therefore, treadmill training can be expected to have a positive impact on improving the quality of life of patients with chronic stroke. This meta-analysis of treadmill training may lead to an industry paradigm shift toward healthcare convergence of information, communication, and medical technology.
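The pooling step behind such an overall effect size can be illustrated with fixed-effect inverse-variance weighting; the per-study effect sizes and variances below are made up for illustration, not the 22 studies analyzed here.

```python
import math

def pooled_effect(effects, variances):
    """Inverse-variance weighted mean effect size with a 95% CI."""
    weights = [1.0 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return est, (est - 1.96 * se, est + 1.96 * se)

# Hypothetical per-study standardized mean differences and variances.
effects = [0.55, 0.80, 0.62]
variances = [0.04, 0.09, 0.05]
est, ci = pooled_effect(effects, variances)
print(round(est, 3))
```

A random-effects model (as typically reported alongside a forest plot) would additionally widen the weights by a between-study variance estimate.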

A Secure Mobile Message Authentication Over VANET (VANET 상에서의 이동성을 고려한 안전한 메시지 인증기법)

  • Seo, Hwa-Jeong;Kim, Ho-Won
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.5
    • /
    • pp.1087-1096
    • /
    • 2011
  • A Vehicular Ad Hoc Network (VANET) using wireless networking offers communication between vehicles (V2V) or between a vehicle and infrastructure (V2I). VANETs are being actively researched in industry and academia because of rapid developments in the automotive industry and vehicular automation. Information collected from a VANET, such as velocity, acceleration, and road and environmental conditions, provides drivers with various safe-driving services, so network security is indispensable. A number of schemes have been proposed for secure message authentication. Among them, the scheme proposed by Jung, which applies a database search structure, the Bloom filter, to the RAISE scheme, is an efficient authentication algorithm in dense traffic. However, the k-anonymity approach used in that paper to obtain the correct vehicle identity has a weak point: whenever the correct identity is requested, the hash values of all messages must be calculated, so the amount of hash computation grows rapidly as the number of vehicles increases. Moreover, the paper does not provide a complete key exchange algorithm for the hand-over operation. In this paper, we use a Received Signal Strength Indicator (RSSI) based velocity and distance estimation algorithm to localize identification, and provide a secure and efficient algorithm that corrects the hand-over problem.
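The RSSI-based distance estimation mentioned above is commonly done by inverting a log-distance path-loss model; the reference power and path-loss exponent below are assumed defaults for illustration, not values from this paper.

```python
def rssi_to_distance(rssi_dbm, ref_dbm=-40.0, path_loss_exp=2.0):
    """Invert RSSI(d) = ref_dbm - 10 * n * log10(d) to estimate distance (m),
    where ref_dbm is the received power at the 1 m reference distance and
    n is the path-loss exponent (2.0 for free space)."""
    return 10 ** ((ref_dbm - rssi_dbm) / (10.0 * path_loss_exp))

print(rssi_to_distance(-40.0))  # → 1.0 (at the 1 m reference power)
print(rssi_to_distance(-60.0))  # → 10.0 (20 dB weaker with n = 2)
```

Velocity can then be estimated from successive distance estimates over known time intervals, which is what would localize a vehicle's identity without hashing every message.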

AST-AET Data Migration Strategy considering Characteristics of Temporal Data (시간지원 데이터의 특성을 고려한 AST-AET 데이터 이동 기법)

  • Yun, Hong-Won;Gim, Gyong-Sok
    • Journal of KIISE:Databases
    • /
    • v.28 no.3
    • /
    • pp.384-394
    • /
    • 2001
  • In this paper, we propose the AST-AET (Average valid Start Time-Average valid End Time) data migration strategy, based on a storage structure in which temporal data is divided into a past segment, a current segment, and a future segment. We define the AST and AET used in the AST-AET data migration strategy, and also define the entity versions to be stored in each segment. We describe methods to compute the AST and AET, and the processes for finding entity versions to migrate and moving them. We compare average response times for user queries between the AST-AET strategy and the existing LST-GET (Least valid Start Time-Greatest valid End Time) data migration strategy. The experimental results show that, when there are no LLTs (Long-Lived Tuples), there is little difference in performance between the two strategies because their current segments are of nearly equal size. However, when there are LLTs, the average response time of the AST-AET strategy is smaller than that of LST-GET because the current segment of LST-GET becomes larger. In addition, when we vary the average interarrival times of temporal queries, the average response time of AST-AET generally remains smaller than that of LST-GET.
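A simplified sketch of the AST-AET idea: compute the average valid start time (AST) and average valid end time (AET) over entity versions, then assign each version to a past, current, or future segment by comparing its valid interval against those averages. The (start, end) tuple layout, the segmentation rule, and the sample data below are illustrative assumptions, not the paper's exact definitions.

```python
def partition(versions):
    """Split (start, end) valid-time versions into three segments
    using the average start (AST) and average end (AET) times."""
    ast = sum(s for s, _ in versions) / len(versions)
    aet = sum(e for _, e in versions) / len(versions)
    segments = {"past": [], "current": [], "future": []}
    for s, e in versions:
        if e < ast:                 # ended before the average start time
            segments["past"].append((s, e))
        elif s > aet:               # starts after the average end time
            segments["future"].append((s, e))
        else:
            segments["current"].append((s, e))
    return segments

versions = [(1, 3), (2, 9), (8, 12), (15, 20)]  # AST = 6.5, AET = 11.0
print(partition(versions))
```

The point of the averages (versus LST-GET's extremes) is that a few long-lived tuples cannot drag the segment boundaries outward and bloat the current segment.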


Inductive Inverse Kinematics Algorithm for the Natural Posture Control (자연스러운 자세 제어를 위한 귀납적 역운동학 알고리즘)

  • Lee, Bum-Ro;Chung, Chin-Hyun
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.8 no.4
    • /
    • pp.367-375
    • /
    • 2002
  • Inverse kinematics is a very useful method for controlling the posture of an articulated body. In most inverse kinematics processes, the main concern is not the posture of the articulated body itself but the position and direction of the end effector. In some applications such as 3D character animation, however, it is more important to generate an overall natural posture for the character than to place the end effector in the exact position. Indeed, when an animator wants to modify the posture of a human-like 3D character with many physical constraints, he has to undergo considerable trial and error to generate a realistic posture. In this paper, the Inductive Inverse Kinematics (IIK) algorithm using a Uniform Posture Map (UPM) is proposed to control the posture of a human-like 3D character. The proposed algorithm quantizes human motions without distortion to generate a UPM, and then generates a natural posture by searching the UPM. If necessary, the resulting posture can be refined with traditional Cyclic Coordinate Descent (CCD). The proposed method can be applied to key-frame-based 3D character animation, 3D games, and virtual reality.
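The CCD refinement step mentioned above can be sketched for a planar two-link chain: each pass rotates every joint, from the end of the chain to the base, so that the end effector swings toward the target. The link lengths, initial angles, and target below are made up.

```python
import math

def forward(lengths, angles):
    """Joint positions of a planar chain rooted at the origin."""
    pts, x, y, a = [(0.0, 0.0)], 0.0, 0.0, 0.0
    for L, t in zip(lengths, angles):
        a += t
        x, y = x + L * math.cos(a), y + L * math.sin(a)
        pts.append((x, y))
    return pts

def ccd(lengths, angles, target, iters=100):
    for _ in range(iters):
        for i in reversed(range(len(angles))):
            pts = forward(lengths, angles)
            jx, jy = pts[i]              # joint being adjusted
            ex, ey = pts[-1]             # current end effector
            # Rotate joint i so the end effector swings toward the target.
            cur = math.atan2(ey - jy, ex - jx)
            want = math.atan2(target[1] - jy, target[0] - jx)
            angles[i] += want - cur
    return angles

angles = ccd([1.0, 1.0], [0.3, 0.3], (1.2, 0.8))
ex, ey = forward([1.0, 1.0], angles)[-1]
print(round(math.hypot(ex - 1.2, ey - 0.8), 3))  # near 0: target reached
```

Plain CCD like this reaches the target but says nothing about naturalness, which is exactly the gap the UPM-based posture search is meant to fill.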

Research in the Direction of Improvement of the Web Site Utilizing Google Analytics (구글 애널리틱스를 활용한 웹 사이트의 개선방안 연구 : 앱팩토리를 대상으로)

  • Kim, Donglim;Lim, Younghwan
    • Cartoon and Animation Studies
    • /
    • s.36
    • /
    • pp.553-572
    • /
    • 2014
  • In this paper, to evaluate the usability of a particular Web site (www.appbelt.net), we inserted the Google Analytics log tracking code into the site's pages to collect visitor behavior data, and studied measures to improve the site's problems after evaluating its overall quality with Coolcheck. The findings suggest that when a company sets the target values for its priorities correctly, the collected data on users' needs and behavior can appropriately guide business decisions and service improvements.

Multiple Cause Model-based Topic Extraction and Semantic Kernel Construction from Text Documents (다중요인모델에 기반한 텍스트 문서에서의 토픽 추출 및 의미 커널 구축)

  • Jang, Jeong-Ho;Zhang, Byoung-Tak
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.5
    • /
    • pp.595-604
    • /
    • 2004
  • Automatic analysis of concepts or semantic relations in text documents enables not only efficient acquisition of relevant information, but also comparison of documents at the concept level. We present a multiple cause model-based approach to text analysis, where latent topics are automatically extracted from document sets and similarity between documents is measured by semantic kernels constructed from the extracted topics. In our approach, a document is assumed to be generated by various combinations of underlying topics. A topic is defined by a set of words that are related to the same theme or co-occur frequently within a document. In a network representing a multiple-cause model, each topic is identified by a group of words having high connection weights from a latent node. In order to facilitate learning and inference in multiple-cause models, approximation methods are required, and we utilize an approximation by Helmholtz machines. In an experiment on the TDT-2 data set, we extract sets of meaningful words, where each set contains theme-specific terms. Using semantic kernels constructed from latent topics extracted by multiple-cause models, we also achieve significant improvements over the basic vector space model in terms of retrieval effectiveness.
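The semantic-kernel idea can be sketched as follows: given a topic-word matrix T, documents are compared after projection onto the topics (effectively via K = TᵀT) instead of by raw term overlap. The two-topic vocabulary and documents below are made up for illustration.

```python
# Rows: topics; columns: words [game, team, stock, market].
# A real T would come from the learned multiple-cause model.
T = [[1.0, 1.0, 0.0, 0.0],   # "sports" topic
     [0.0, 0.0, 1.0, 1.0]]   # "finance" topic

def semantic_similarity(d1, d2):
    """Project bag-of-words vectors onto the topics, then dot the projections."""
    p1 = [sum(t[w] * d1[w] for w in range(len(d1))) for t in T]
    p2 = [sum(t[w] * d2[w] for w in range(len(d2))) for t in T]
    return sum(a * b for a, b in zip(p1, p2))

doc_a = [1, 0, 0, 0]   # mentions only "game"
doc_b = [0, 1, 0, 0]   # mentions only "team" — no word overlap with doc_a
print(semantic_similarity(doc_a, doc_b))  # → 1.0 (same latent topic)
```

This is the payoff over the basic vector space model: doc_a and doc_b share no terms, so their raw dot product is 0, yet the kernel still scores them as related.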