• Title/Summary/Keyword: IS usage performance


A Real Time Processing Technique for Content-Aware Video Scaling (내용기반 동영상 기하학적 변환을 위한 실시간 처리 기법)

  • Lee, Kang-Hee;Yoo, Jae-Wook;Park, Dae-Hyun;Kim, Yoon
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.1
    • /
    • pp.80-89
    • /
    • 2011
  • In this paper, a new real-time video scaling technique that preserves the content of a video is proposed. Because consecutive frames of a video are correlated, the seam of the current frame is determined with reference to the seam of the previous frame, yielding a real-time scaling technique that avoids content jitter even though the entire video is not analyzed. To this end, frames with similar features are grouped into a scene, and the first frame of each scene is resized by still-image seam carving so that the image content is preserved as much as possible. The seam information extracted during this resizing is stored, and the sizes of subsequent frames are adjusted frame by frame with reference to the stored seam of the previous frame. The proposed algorithm is nearly as fast as bilinear scaling while preserving the main content of the image. Moreover, because its memory usage is remarkably small compared with conventional seam carving, the proposed algorithm is usable on mobile terminals with tight memory restrictions. Computer simulation results indicate that the proposed technique provides better objective performance and subjective image quality than conventional algorithms with respect to real-time processing, jitter removal, and content preservation.
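The scene-level idea above, full dynamic-programming seam carving on a scene's first frame followed by a cheap per-frame refinement around the stored seam, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the gradient energy, the one-column search window, and the greedy refinement are assumptions.

```python
def energy(frame):
    """Simple gradient-magnitude energy of a 2D grayscale frame (list of rows)."""
    h, w = len(frame), len(frame[0])
    e = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx = abs(frame[y][min(x + 1, w - 1)] - frame[y][max(x - 1, 0)])
            dy = abs(frame[min(y + 1, h - 1)][x] - frame[max(y - 1, 0)][x])
            e[y][x] = dx + dy
    return e

def full_seam(frame):
    """Classic DP vertical seam (one column per row): used on a scene's first frame."""
    e = energy(frame)
    h, w = len(e), len(e[0])
    cost = [row[:] for row in e]
    for y in range(1, h):
        for x in range(w):
            cost[y][x] += min(cost[y - 1][max(x - 1, 0):min(x + 2, w)])
    seam = [min(range(w), key=lambda x: cost[h - 1][x])]
    for y in range(h - 1, 0, -1):
        x = seam[-1]
        seam.append(min(range(max(x - 1, 0), min(x + 2, w)),
                        key=lambda c: cost[y - 1][c]))
    seam.reverse()
    return seam

def tracked_seam(frame, prev_seam):
    """Per-frame refinement: stay within one column of the previous frame's seam,
    which suppresses seam jitter without re-running the full DP."""
    e = energy(frame)
    w = len(e[0])
    return [min(range(max(x - 1, 0), min(x + 2, w)), key=lambda c: e[y][c])
            for y, x in enumerate(prev_seam)]

def remove_seam(frame, seam):
    """Drop one pixel per row along the seam, narrowing the frame by one column."""
    return [row[:x] + row[x + 1:] for row, x in zip(frame, seam)]
```

Because the tracked seam only examines a three-column window per row, its cost per frame is linear in the number of pixels on the seam, which is what makes the per-frame step comparable to bilinear scaling in speed.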

A Study on Analysis Method to Evaluate Influence of Damage on Composite Layer in Type3 Composite Cylinder (Type3 복합재료 압력용기의 복합재층 손상에 따른 영향성 평가를 위한 해석기법에 관한 연구)

  • Lee, Kyo-Min;Park, Ji-Sang;Lee, Hak-Gu;Kim, Yeong-Seop
    • Composites Research
    • /
    • v.23 no.6
    • /
    • pp.7-13
    • /
    • 2010
  • A Type3 cylinder is a composite pressure vessel fully over-wrapped with carbon/epoxy composite layers over an aluminum liner; thanks to its light weight and leak-before-burst characteristics, it is the most suitable and safe high-pressure gas container for CNG vehicles. If fiber-cut damage occurs in the outer composite layers during service in a CNG vehicle, it can degrade structural performance and reduce the cycling life below the original design life. In this study, a finite element modeling and analysis technique for a composite cylinder with fiber-cut crack damage is presented. Because FE analysis of a Type3 cylinder is path-dependent due to plastic deformation of the aluminum liner during the autofrettage process, the way a crack is introduced into the FE model affects the analysis result. The crack should be introduced after the autofrettage step of the analysis, reflecting the real circumstance that cracks occur during service. For a realistic simulation of this situation, an FE modeling and analysis technique that introduces a crack in the middle of the analysis is presented, and its results are compared with a conventional FE analysis in which the crack exists in the model from the beginning. The proposed technique can be used effectively to evaluate the influence of damage on the composite layers of Type3 cylinders and to establish inspection criteria for composite cylinders in service.

File System Support for Multimedia Streaming in Internet Home Appliances (인터넷 홈서버를 위한 스트리밍 전용 파일 시스템)

  • 박진연;송승호;진종현;원유집;박승민;김정기
    • Journal of Broadcast Engineering
    • /
    • v.6 no.3
    • /
    • pp.246-259
    • /
    • 2001
  • With the recent rapid deployment of Internet streaming and digital broadcasting services, the question of how to efficiently support streaming workloads on so-called "Internet home appliances" has drawn strong interest from industry as well as academia. The underlying dilemma is that putting cutting-edge CPUs, boards, disks, and other peripherals into this type of device is usually infeasible, primarily because of cost. An Internet home appliance typically has a dedicated usage, e.g. Internet radio, and thus requires neither a high-end CPU nor other high-end subsystems. The same reasoning applies to the I/O subsystem: an Internet home appliance dedicated to handling compressed moving pictures is not equipped with a fast, high-end SCSI disk. It is therefore mandatory to devise an elaborate software algorithm that exploits the available hardware resources and maximizes the efficiency of the system. This paper presents our experience in the design and implementation of a new multimedia file system that can efficiently deliver the disk bandwidth required by a periodic I/O workload. We implemented the file system on the Linux operating system and examined its performance under a streaming I/O workload. The results of the study show that the proposed file system outperforms the Linux Ext2 file system under streaming I/O workloads. This work not only advances the state of the art in file system technology for multimedia streaming but also puts forth software that is readily available for deployment.


A Study on the Development of Sharing Taxi Service Platform and Economic Value Estimation (공유택시 서비스 플랫폼 개발과 경제적 가치추정에 관한 연구)

  • Kim, Min Jae
    • Journal of the Korean Regional Science Association
    • /
    • v.38 no.1
    • /
    • pp.21-32
    • /
    • 2022
  • The purpose of this study is twofold. The first is to develop and demonstrate a sharing taxi platform. To this end, implications for platform development were derived by analyzing consumers' perceptions of existing taxi services using IPA (importance-performance analysis). As a result, curbing abnormal business practices and safe service fell into the maintenance area; safe and easy boarding into the key improvement area; factors such as the usage fee level and driver information provision into the area subject to gradual improvement; and friendly response and interior/exterior cleanliness into the area of excessive investment. The second purpose is to estimate the value users place on the sharing taxi service platform using the CVM (contingent valuation method). The estimated willingness to pay (WTP) for the demonstration service of the platform developed in this study was 3,621 won per household per year, and 2,515 won per household per year for an expansion throughout Gimhae-si; the WTP for the Gimhae-wide spread project is thus only 69.5% of that for the demonstration service. This results from a combination of the service spreading to an unspecified public and concerns about service quality under spatial expansion. It suggests the need to accumulate data through continuous demonstration and to carefully build a roadmap for diffusion by upgrading the service on that basis.

Analysis of Distributed Computational Loads in Large-scale AC/DC Power System using Real-Time EMT Simulation (대규모 AC/DC 전력 시스템 실시간 EMT 시뮬레이션의 부하 분산 연구)

  • In Kwon, Park;Yi, Zhong Hu;Yi, Zhang;Hyun Keun, Ku;Yong Han, Kwon
    • KEPCO Journal on Electric Power and Energy
    • /
    • v.8 no.2
    • /
    • pp.159-179
    • /
    • 2022
  • As a network becomes complex, multiple entities often take charge of managing parts of the whole. A utility grid is an example: while the entire grid is the responsibility of a single utility company, the network is often split into multiple subsections, each assigned to a corresponding sub-organization within the company. The issue of forming subsystems of adequate size with a minimum number of interconnections becomes especially critical in real-time simulation. Any single computation unit, whether a high-speed conventional CPU core or an FPGA computational engine, has a maximum amount of computation it can complete within a given execution time. The issue worsens in real-time simulation, where the computation must remain in precise synchronization with the real-world clock. When the subject of the computation allows a longer execution time, i.e., a larger time-step size, a larger portion of the network can be placed on one computation unit. This translates into a larger allowable margin between the worst (largest) and the best (smallest) computational burden: even if the worst burden is orders of magnitude larger than the best, all necessary computation can still finish within the given time. The real-time requirement, however, makes this margin much smaller, so the difference between the worst and the best loads must be kept as small as possible to ensure an even distribution of the computational load. In addition, the data exchange essential to parallel computation takes time and affects overall performance, so it must be considered together with the computational load distribution among the calculation units.
A satisfactory distribution raises the possibility of completing the necessary computation within the given amount of time, which may be on the order of microseconds. This paper presents an effective way to split a given electrical network according to multiple criteria so as to distribute the entire computational load into a set of even (or close to even) portions. Based on the proposed splitting method, the heavy computational burden of a large-scale electrical network can be distributed to multiple calculation units, such as an RTDS real-time simulator, achieving more efficient usage of the calculation units, a reduction of the necessary simulation time-step size, or both.
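The even-load objective described above can be illustrated with the classic longest-processing-time (LPT) heuristic: assign each subsystem's per-step computation cost to the currently least-loaded calculation unit. The costs here are hypothetical numbers, and the paper's actual multi-criteria method also weighs interconnection counts, which this sketch omits.

```python
import heapq

def lpt_split(costs, n_units):
    """Greedy balance of subsystem costs over n_units calculation units.

    Returns the per-unit total load and the groups of costs assigned to each
    unit. Largest tasks are placed first, each on the least-loaded unit.
    """
    units = [(0.0, i) for i in range(n_units)]   # (load, unit id) min-heap
    heapq.heapify(units)
    groups = [[] for _ in range(n_units)]
    for c in sorted(costs, reverse=True):        # largest costs first
        load, i = heapq.heappop(units)           # least-loaded unit
        groups[i].append(c)
        heapq.heappush(units, (load + c, i))
    loads = [sum(g) for g in groups]
    return loads, groups
```

In a real-time setting the quantity to watch is `max(loads) - min(loads)`: the unit carrying `max(loads)` must still finish inside the fixed simulation time step, so a small spread directly translates into a smaller feasible step size.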

A New Item Recommendation Procedure Using Preference Boundary

  • Kim, Hyea-Kyeong;Jang, Moon-Kyoung;Kim, Jae-Kyeong;Cho, Yoon-Ho
    • Asia pacific journal of information systems
    • /
    • v.20 no.1
    • /
    • pp.81-99
    • /
    • 2010
  • Lately, the number of new items in consumer markets is increasing at an overwhelming rate, while consumers have limited access to information about those new products when trying to make a sensible, well-informed purchase. Item providers and customers therefore need a system that recommends the right items to the right customers. Whenever new items are released, a recommender system specializing in new items can also help item providers locate and identify potential customers. Currently, new items are added to existing systems without being specially noted to consumers, making it difficult for consumers to identify and evaluate products newly introduced in the market. Most previous approaches to recommender systems rely on customers' usage history, which is simply not available for new items. Although the collaborative filtering (CF) approach is not directly applicable to the new-item problem, its basic principle, identifying similar customers, i.e., neighbors, and recommending items that similar customers liked in the past, can still be exploited. This research suggests a hybrid recommendation procedure based on the preference boundary of the target customer in the feature space, for recommending new items only. The basic principle is that if a new item falls within the preference boundary of a target customer, it is evaluated as preferred by that customer. Customers' preferences and the characteristics of items, including new items, are represented in a feature space, and the scope or boundary of the target customer's preference is extended by those of the neighbors. The new-item recommendation procedure consists of three steps.
The first step analyzes item profiles, which are represented as k-dimensional feature vectors. The second step determines the representative point of the target customer's preference boundary, the centroid, based on a personal information set. Three algorithms for determining the centroid are developed in this research: one uses the target customer only (TC); another uses the centroid of a (dummy) big customer composed of the target customer and his/her neighbors (BC); and the third uses the centroids of the target customer and of each neighbor (NC). The third step determines the range of the preference boundary, the radius; the suggested algorithm uses the average distance (AD) between the centroid and all purchased items. We test whether the CF-based determination of the centroid improves recommendation quality. For this purpose we develop the two hybrid algorithms, BC and NC, which use neighbors when deciding the centroid of the preference boundary, and, to test their validity, the CB algorithm TC, which uses the target customer only. We measured the effectiveness of the suggested algorithms and compared them through a series of experiments on a set of real mobile image transaction data. The period from 1 June 2004 to 31 July 2004 was used as a training set and the period from 1 August 2004 to 31 August 2004 as a test set. The training set is used to build the preference boundary, and the test set is used to evaluate the performance of the suggested hybrid recommendation procedure. The main aim of this research is to compare the hybrid recommendation algorithms with the CB algorithm. To evaluate each algorithm, we compare the list of new items purchased in the test period with the list of items recommended by the suggested algorithms.
We employ the hit ratio as the evaluation metric for our algorithms. The hit ratio is defined as the ratio of the hit set size to the recommended set size, where the hit set size is the number of successful recommendations in our experiment. Experimental results show that the hit ratios of BC and NC are higher than that of TC. This means that using neighbors is more effective for recommending new items; that is, the hybrid algorithms using CF are more effective than the CB-only algorithm when recommending new items to consumers. The hit ratio of BC is lower than that of NC because BC is defined as a dummy or virtual customer who purchased all items of the target customer and the neighbors: the centroid of BC often shifts away from that of TC and thus tends to distort the characteristics of the target customer. The recommendation algorithm using NC shows the best hit ratio, because NC carries sufficient information about the target customer and the neighbors without damaging the information about the target customer.
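The TC/AD boundary test and the hit-ratio metric described above can be sketched as follows. The feature vectors and item sets are illustrative only; the BC and NC variants would change how the centroid is computed (pooling or averaging neighbors' purchases), not the boundary test itself.

```python
import math

def centroid(points):
    """Component-wise mean of k-dimensional feature vectors."""
    k = len(points[0])
    return [sum(p[d] for p in points) / len(points) for d in range(k)]

def dist(a, b):
    """Euclidean distance in the feature space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def preference_boundary(purchased):
    """TC centroid plus AD radius: the average distance from the centroid
    to the items the target customer purchased."""
    c = centroid(purchased)
    r = sum(dist(c, p) for p in purchased) / len(purchased)
    return c, r

def recommend(new_items, c, r):
    """A new item is recommended iff it falls inside the preference boundary."""
    return [item for item, feats in new_items.items() if dist(feats, c) <= r]

def hit_ratio(recommended, purchased_in_test):
    """Hit set size over recommended set size."""
    hits = set(recommended) & set(purchased_in_test)
    return len(hits) / len(recommended) if recommended else 0.0
```

Note that no usage history of the new item itself is needed: only its feature vector and the boundary built from past purchases, which is what makes the procedure applicable to the new-item cold-start problem.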

A Study On Web Shopping Attitude and Purchasing Intention of Internet Self-Efficacy -Focus on Intrinsic and Extrinsic Motivation- (인터넷 자기효능감으로 인한 웹쇼핑에 대한 태도와 구매행동의도에 관한 연구 -내재적 동기와 외재적 동기를 중심으로-)

  • Lee, Jong-Ho;Sin, Jong-Kuk;Kim, Mi-Hye;Kong, Hye-Kyung
    • Journal of Global Scholars of Marketing Science
    • /
    • v.10
    • /
    • pp.1-26
    • /
    • 2002
  • The present study examines the role of subjectively perceived factors in the attitude toward web shopping in forming an intention to use web shopping. An integrative research model is presented and tested empirically. It includes the following three beliefs from Davis' TAM: perceived usefulness, perceived ease of use, and perceived enjoyment. In particular, Internet self-efficacy, the belief in one's capability to organize and execute the courses of Internet action required to produce given attainments, is a potentially important factor both in fostering a more favorable attitude toward web shopping and in closing the digital divide that separates experienced Internet users from novices. Prior research on Internet self-efficacy has been limited to specific task performance and narrow behavioral domains rather than overall attainments in general Internet use, and has not yielded evidence of reliability and construct validity. Survey data were collected to develop a reliable operational measure of Internet self-efficacy and to examine its construct validity. Much previous research has also established that perceived ease of use is an important factor influencing user acceptance and usage behavior of information technologies, but very little research has examined how that perception forms and changes over time. The present study finds that higher Internet self-efficacy leads to a more favorable attitude toward web shopping and a stronger web shopping intention through greater perceived usefulness and enjoyment.


Enhancing Query Efficiency for Huge 3D Point Clouds Based on Isometric Spatial Partitioning and Independent Octree Generation (등축형 공간 분할과 독립적 옥트리 생성을 통한 대용량 3차원 포인트 클라우드의 탐색 효율 향상)

  • Han, Soohee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.32 no.5
    • /
    • pp.481-486
    • /
    • 2014
  • This study aims to enhance the performance of the file-referring octree suggested by Han (2014) for efficiently querying huge 3D point clouds acquired by terrestrial 3D laser scanning. Han's (2014) method showed a severe decline in query speed when applied to a very long tunnel, a lengthy, narrow, anisometric structure. Accordingly, the influence of octree shape on query efficiency was analyzed by generating an independent octree in each isometric subdivision of the 3D object boundary. The method was tested against the conventional single-octree approach for query speed and main-memory usage using about 300 million points captured in a very long tunnel. The tested method achieved twice the query speed while using a similar amount of memory. The results also confirm that the decisive factor influencing query speed is the destination level, but that query speed still increases as the bounding shape of the octree approaches an isometric one. While an excessive imbalance of the octree shape along the axes can heavily degrade query speed, improving the octree shape enhances query speed more effectively than increasing the destination level.
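The isometric-subdivision idea can be sketched roughly as follows, using a uniform cubic grid as a stand-in for the independent per-cell octrees (the cell edge length and the query box are hypothetical). A range query then visits only the cells it overlaps, which is why a near-cubic subdivision of a long, narrow volume avoids the degenerate traversals a single anisometric tree suffers.

```python
import math
from collections import defaultdict

def isometric_cells(points, cell):
    """Bucket 3D points into cubic cells of edge `cell`.

    Each bucket stands in for an independently generated octree over an
    isometric subdivision of the object boundary.
    """
    grid = defaultdict(list)
    for p in points:
        key = tuple(math.floor(c / cell) for c in p)
        grid[key].append(p)
    return grid

def range_query(grid, cell, lo, hi):
    """Collect points inside the axis-aligned box [lo, hi], visiting only
    the cells that overlap the box."""
    out = []
    kmin = [math.floor(v / cell) for v in lo]
    kmax = [math.floor(v / cell) for v in hi]
    for kx in range(kmin[0], kmax[0] + 1):
        for ky in range(kmin[1], kmax[1] + 1):
            for kz in range(kmin[2], kmax[2] + 1):
                for p in grid.get((kx, ky, kz), ()):
                    if all(l <= c <= h for c, l, h in zip(p, lo, hi)):
                        out.append(p)
    return out
```

For a tunnel-shaped cloud, almost all cells fall outside a local query box, so the work per query scales with the queried volume rather than with the full extent of the structure.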

Adaptive Hard Decision Aided Fast Decoding Method using Parity Request Estimation in Distributed Video Coding (패리티 요구량 예측을 이용한 적응적 경판정 출력 기반 고속 분산 비디오 복호화 기술)

  • Shim, Hiuk-Jae;Oh, Ryang-Geun;Jeon, Byeung-Woo
    • Journal of Broadcast Engineering
    • /
    • v.16 no.4
    • /
    • pp.635-646
    • /
    • 2011
  • In distributed video coding, a low-complexity encoder can be realized by shifting complex encoder-side processes to the decoder. However, not only the motion estimation/compensation processes but also the complex LDPC decoding process are thereby imposed on the Wyner-Ziv decoder, so decoder-side complexity has become an important issue. LDPC decoding consists of numerous iterative decoding passes, and its complexity grows with the number of iterations. This iterative LDPC decoding accounts for more than 60% of the whole WZ decoding complexity and is therefore a main target for complexity reduction. Previously, the HDA (Hard Decision Aided) method was introduced for fast LDPC decoding. For the currently received parity bits, the HDA method certainly reduces the complexity of the decoding process; however, LDPC decoding is still performed even when the amount of requested parity is too small to allow successful decoding. Complexity can therefore be reduced further by avoiding decoding attempts on insufficient parity bits. In this paper, a parity request estimation method is proposed that uses bit-plane-wise correlation and temporal correlation. The joint use of the HDA method and the proposed method achieves about 72% complexity reduction in the LDPC decoding process, while rate-distortion performance degrades by only -0.0275 dB in BDPSNR.
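The decode-gating idea can be sketched generically. Note the hedge: the paper estimates the parity request from bit-plane-wise and temporal correlation, whereas this sketch derives a lower bound from a single assumed crossover probability between the bit plane and its side information, via the binary entropy function (the Slepian-Wolf style bound n·H(p)).

```python
import math

def binary_entropy(p):
    """H(p) in bits; zero at p = 0 or p = 1."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def estimated_parity(n_bits, crossover):
    """Rough lower bound on the parity needed for a bit plane of n_bits
    whose crossover probability against the side information is `crossover`."""
    return math.ceil(n_bits * binary_entropy(crossover))

def should_decode(received_parity, estimate):
    """Gate the LDPC decoder: skip the (expensive, iterative) decoding
    attempt until the accumulated parity reaches the estimate."""
    return received_parity >= estimate
```

The saving comes from the attempts that are never made: every skipped attempt on an under-provisioned parity set removes a full round of LDPC iterations from the decoder's budget.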

An Electric Load Forecasting Scheme for University Campus Buildings Using Artificial Neural Network and Support Vector Regression (인공 신경망과 지지 벡터 회귀분석을 이용한 대학 캠퍼스 건물의 전력 사용량 예측 기법)

  • Moon, Jihoon;Jun, Sanghoon;Park, Jinwoong;Choi, Young-Hwan;Hwang, Eenjun
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.5 no.10
    • /
    • pp.293-302
    • /
    • 2016
  • Since electricity is produced and consumed simultaneously, predicting the electric load and securing affordable electric power are necessary for a reliable power supply. A university campus, in particular, is among the highest power-consuming institutions and tends to show wide variation in electric load depending on time and environment. For these reasons, an accurate electric load forecasting method that can predict power consumption in real time is required for efficient power supply and management. Although various factors influencing the power consumption of educational institutions have been identified by analyzing consumption patterns and usage cases, further studies are required for quantitative prediction of the electric load. In this paper, we build an electric load forecasting model by implementing and evaluating various machine learning algorithms. To do so, we consider three building clusters on a campus and collect their power consumption every 15 minutes for more than one year. In preprocessing, features are constructed to reflect the periodic characteristics of the data, and principal component analysis is applied to the features. To train the electric load forecasting model, we employ both an artificial neural network and support vector regression. We evaluate the prediction performance of each forecasting model by 5-fold cross-validation and compare the predictions with the real electric load.
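The evaluation protocol described above, periodic encoding of the 15-minute timestamp followed by 5-fold cross-validation, can be sketched as follows. The nearest-neighbor regressor is a stand-in for the paper's ANN/SVR models, the PCA step is omitted, and the data in the usage example are synthetic.

```python
import math

def periodic_features(slot, period=96):
    """Encode a 15-minute slot index on the unit circle (96 slots per day),
    so slot 95 and slot 0 come out adjacent rather than far apart."""
    ang = 2 * math.pi * (slot % period) / period
    return (math.sin(ang), math.cos(ang))

def knn_predict(train, x, k=3):
    """Average load of the k training points with the closest features
    (a simple stand-in model, not the paper's ANN/SVR)."""
    near = sorted(train, key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], x)))
    return sum(y for _, y in near[:k]) / k

def cross_validate(data, folds=5):
    """Mean absolute error averaged over `folds` contiguous train/test splits."""
    n = len(data)
    errs = []
    for f in range(folds):
        lo, hi = f * n // folds, (f + 1) * n // folds
        test, train = data[lo:hi], data[:lo] + data[hi:]
        e = [abs(knn_predict(train, x) - y) for x, y in test]
        errs.append(sum(e) / len(e))
    return sum(errs) / folds
```

As a usage example, two days of a purely sinusoidal daily load profile cross-validate to a small error, since each test slot has an exact periodic twin in the training folds:

```python
data = [(periodic_features(s), 100 + 50 * math.sin(2 * math.pi * s / 96))
        for s in range(192)]
mae = cross_validate(data)   # small, since the pattern repeats daily
```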