• Title/Summary/Keyword: time domain data


Social Network Analysis for the Effective Adoption of Recommender Systems (추천시스템의 효과적 도입을 위한 소셜네트워크 분석)

  • Park, Jong-Hak;Cho, Yoon-Ho
    • Journal of Intelligence and Information Systems, v.17 no.4, pp.305-316, 2011
  • A recommender system uses automated information filtering technology to recommend products or services to customers who are likely to be interested in them. Such systems are widely used by many Web retailers such as Amazon.com, Netflix.com, and CDNow.com. Various recommender systems have been developed. Among them, Collaborative Filtering (CF) has been known as the most successful and commonly used approach. CF identifies customers whose tastes are similar to those of a given customer, and recommends items those customers have liked in the past. Numerous CF algorithms have been developed to increase the performance of recommender systems. However, the relative performances of CF algorithms are known to be domain- and data-dependent. It is very time-consuming and expensive to implement and launch a CF recommender system, and a system unsuited to the given domain provides customers with poor-quality recommendations that easily annoy them. Therefore, predicting in advance whether the performance of a CF recommender system will be acceptable is practically important. In this study, we propose a decision-making guideline which helps decide whether CF is adoptable for a given application with certain transaction data characteristics. Several previous studies reported that sparsity, gray sheep, cold-start, coverage, and serendipity could affect the performance of CF, but theoretical and empirical justification of such factors is lacking. Recently, many studies have paid attention to Social Network Analysis (SNA) as a method to analyze social relationships among people. SNA is a method to measure and visualize the linkage structure and status, focusing on interaction among objects within a communication group. CF analyzes the similarity among the previous ratings or purchases of each customer, finds the relationships among the customers who have similarities, and then uses those relationships for recommendations. Thus CF can be modeled as a social network in which customers are nodes and purchase relationships between customers are links. Under the assumption that SNA can facilitate an exploration of the topological properties of the network structure implicit in transaction data for CF recommendations, we focus on density, clustering coefficient, and centralization, which are among the most commonly used measures for capturing topological properties of a social network structure. While network density, expressed as a proportion of the maximum possible number of links, captures the density of the whole network, the clustering coefficient captures the degree to which the overall network contains localized pockets of dense connectivity. Centralization reflects the extent to which connections are concentrated in a small number of nodes rather than distributed equally among all nodes. We explore how these SNA measures affect CF performance and how they interact with each other. Our experiments used sales transaction data from H department store, one of the well-known department stores in Korea. A total of 396 data sets were sampled to construct various types of social networks. The SNA measures were obtained in three steps: analysis of customer similarities, construction of a social network, and analysis of social network patterns. We used UCINET 6.0 for SNA.
The experiments conducted a 3-way ANOVA which employs the three SNA measures as independent variables and the recommendation accuracy, measured by the F1-measure, as the dependent variable. The experiments report that 1) each of the three SNA measures affects the recommendation accuracy, 2) the density's effect on the performance overrides those of the clustering coefficient and centralization (i.e., CF adoption is not a good decision if the density is low), and 3) even when the density is low, the performance of CF is comparatively good if the clustering coefficient is low. We expect that these experimental results help firms decide whether a CF recommender system is adoptable for their business domain with certain transaction data characteristics.
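
To make the three topological measures concrete, here is a minimal sketch (not taken from the paper) that builds a customer network from toy purchase data and computes density, average clustering coefficient, and Freeman degree centralization with the networkx library; the Jaccard-similarity thresholding used to form links is an assumption for illustration.

```python
# Sketch: the three SNA measures used in the study, on a toy customer
# network. Links are formed when purchase-history similarity exceeds a
# threshold (a simplification, not the paper's exact procedure).
import itertools
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
n_customers = 50
# Toy purchase matrix: rows = customers, columns = items (1 = purchased)
purchases = rng.integers(0, 2, size=(n_customers, 100))

G = nx.Graph()
G.add_nodes_from(range(n_customers))
for i, j in itertools.combinations(range(n_customers), 2):
    # Jaccard similarity of the two customers' purchase sets
    inter = np.logical_and(purchases[i], purchases[j]).sum()
    union = np.logical_or(purchases[i], purchases[j]).sum()
    if union > 0 and inter / union > 0.35:   # threshold is arbitrary
        G.add_edge(i, j)

density = nx.density(G)                # links / max possible links
clustering = nx.average_clustering(G)  # localized pockets of connectivity
# Freeman degree centralization: concentration of links on few nodes
degrees = np.array([d for _, d in G.degree()])
n = len(degrees)
centralization = (degrees.max() - degrees).sum() / ((n - 1) * (n - 2))

print(f"density={density:.3f}, clustering={clustering:.3f}, "
      f"centralization={centralization:.3f}")
```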

An Iterative, Interactive and Unified Seismic Velocity Analysis (반복적 대화식 통합 탄성파 속도분석)

  • Suh Sayng-Yong;Chung Bu-Heung;Jang Seong-Hyung
    • Geophysics and Geophysical Exploration, v.2 no.1, pp.26-32, 1999
  • Among the various seismic data processing sequences, velocity analysis is the most time-consuming and man-hour-intensive processing step. For production seismic data processing, a good velocity analysis tool as well as a high-performance computer is required. The tool must give fast and accurate velocity analysis. There are two different approaches to velocity analysis: batch and interactive. In batch processing, a velocity plot is made at every analysis point. Generally, the plot consists of a semblance contour, a super gather, and a stack panel. The interpreter chooses the velocity function by analyzing the velocity plot. The technique is highly dependent on the interpreter's skill and requires considerable human effort. As high-speed graphic workstations have become more popular, various interactive velocity analysis programs have been developed. Although these programs enable faster picking of velocity nodes using the mouse, their main improvement is simply the replacement of the paper plot by the graphic screen. The velocity spectrum is highly sensitive to the presence of noise, especially the coherent noise often found in the shallow region of marine seismic data. For accurate velocity analysis, this noise must be removed before the spectrum is computed. Also, the velocity analysis must be carried out by carefully choosing the location of the analysis point and accurately computing the spectrum. The analyzed velocity function must be verified by mute and stack, and the sequence usually must be repeated. Therefore an iterative, interactive, and unified velocity analysis tool is highly desirable. An interactive velocity analysis program, xva (X-Window based Velocity Analysis), was developed. The program handles all processes required in velocity analysis, such as composing the super gather, computing the velocity spectrum, NMO correction, mute, and stack. Most parameter changes yield the final stack via a few mouse clicks, thereby enabling iterative and interactive processing. A simple trace indexing scheme is introduced, and a program to make the index of the Geobit seismic disk file was developed. The index is used to reference the original input, i.e., the CDP sort, directly. A transformation technique for the mute function between the T-X domain and the NMOC domain is introduced and adopted in the program. The result of the transform is similar to the remove-NMO technique in suppressing shallow noise such as the direct wave and the refracted wave. However, it has two improvements: no interpolation error and very fast computing time. By introducing this technique, mute times can be easily designed in the NMOC domain and applied to the super gather in the T-X domain, thereby producing a more accurate velocity spectrum interactively. The xva program consists of 28 files, 12,029 lines, 34,990 words and 304,073 characters. The program references the Geobit utility libraries and can be installed in a Geobit-preinstalled environment. The program runs in an X-Window/Motif environment, and its menu is designed according to the Motif style guide. A brief usage of the program is discussed. The program allows fast and accurate seismic velocity analysis, which is necessary for computing AVO (Amplitude Versus Offset) based DHI (Direct Hydrocarbon Indicator) and for making high-quality seismic sections.
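
For readers unfamiliar with the velocity spectrum discussed above, the following numpy sketch illustrates NMO-based semblance scanning over a toy CDP gather. It is a generic illustration, not xva's implementation; the gather geometry, sample rate, and scan ranges are assumed values.

```python
# Sketch: semblance velocity spectrum by NMO scanning, the core quantity
# behind the "velocity plot" described above. Illustrative only.
import numpy as np

def semblance(gather, offsets, dt, t0, v, half=5):
    """Semblance of a CDP gather (n_samples x n_traces) along the NMO
    hyperbola t(x) = sqrt(t0^2 + (x/v)^2), over a 2*half+1 sample window."""
    n_samples, n_traces = gather.shape
    base = np.sqrt(t0**2 + (offsets / v) ** 2)
    num = den = 0.0
    for k in range(-half, half + 1):
        idx = np.round(base / dt).astype(int) + k
        valid = (idx >= 0) & (idx < n_samples)
        a = gather[idx[valid], np.nonzero(valid)[0]]
        num += a.sum() ** 2
        den += (a ** 2).sum()
    return num / (n_traces * den) if den > 0 else 0.0

# Toy CDP gather: a single reflection at t0 = 0.8 s, v = 2000 m/s
dt, n_samples = 0.004, 500
offsets = np.arange(12) * 100.0                 # 12 traces, 100 m apart
gather = np.zeros((n_samples, offsets.size))
t_refl = np.sqrt(0.8**2 + (offsets / 2000.0) ** 2)
gather[np.round(t_refl / dt).astype(int), np.arange(offsets.size)] = 1.0

t0s = np.arange(0.5, 1.2, 0.02)
vels = np.arange(1500.0, 2600.0, 50.0)
spec = np.array([[semblance(gather, offsets, dt, t0, v)
                  for v in vels] for t0 in t0s])
it, iv = np.unravel_index(spec.argmax(), spec.shape)
print(f"semblance peak near t0 = {t0s[it]:.2f} s, v = {vels[iv]:.0f} m/s")
```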

Vibration-based structural health monitoring for offshore wind turbines - Experimental validation of stochastic subspace algorithms

  • Kraemer, Peter;Friedmann, Herbert
    • Wind and Structures, v.21 no.6, pp.693-707, 2015
  • The efficiency of wind turbines (WT) is primarily reflected in their ability to generate electricity at any time. Downtimes of WTs due to "conventional" inspections are cost-intensive and undesirable for investors. For this reason, there is a need for structural health monitoring (SHM) systems, to enable service and maintenance on demand and to increase inspection intervals. In general, monitoring increases the cost-effectiveness of WTs. This publication concentrates on the application of two vibration-based SHM algorithms for stability and structural change monitoring of offshore WTs. Only data-driven, output-only algorithms based on stochastic subspace identification (SSI) in the time domain are considered. The centerpiece of this paper is a rough mathematical description of the dynamic behavior of offshore WTs and a basic presentation of stochastic subspace-based algorithms and their application to these structures. Due to the early stage of industrial application of SHM on offshore WTs on the one hand, and the confidentiality required by the plant manufacturer and operator on the other, it has so far not been possible to analyze different isolated structural damages or changes in a systematic manner directly by means of in-situ measurements and to make these findings publicly available. For this reason, the sensitivity of the methods for monitoring purposes is demonstrated through their application to long-term measurements from a 1:10 large-scale test rig of an offshore WT under different conditions: undamaged, different levels of loosened bolt connections between tower parts, and different levels of fouling, scouring, and structure inclination. The limitations and further requirements of the approaches and their applicability to real foundations are discussed throughout the paper.
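
To give a concrete picture of output-only SSI in the time domain, the sketch below implements a minimal covariance-driven variant on simulated data: output correlations are stacked into a block Hankel matrix, an SVD yields the observability matrix, and the eigenvalues of the estimated state matrix give frequencies and damping. This is an illustration under simplifying assumptions, not the authors' algorithm or parameters.

```python
# Sketch: covariance-driven stochastic subspace identification (SSI-cov)
# on simulated output-only data. Illustrative; parameters are assumed.
import numpy as np

def ssi_cov(y, fs, order, n_blocks=20):
    """Covariance-driven SSI: y is (n_samples, n_channels) output data.
    Returns natural frequencies (Hz) and damping ratios."""
    n, l = y.shape
    # Output correlations R_k = E[y(t+k) y(t)^T] for lags 0..2*n_blocks-1
    R = [y[k:].T @ y[:n - k] / (n - k) for k in range(2 * n_blocks)]
    # Block Hankel matrix of correlations (lags 1..2*n_blocks-1)
    H = np.block([[R[i + j + 1] for j in range(n_blocks)]
                  for i in range(n_blocks)])
    U, s, _ = np.linalg.svd(H, full_matrices=False)
    O = U[:, :order] * np.sqrt(s[:order])      # observability matrix
    # Shift invariance of O gives the discrete-time state matrix A
    A = np.linalg.lstsq(O[:-l], O[l:], rcond=None)[0]
    lam = np.linalg.eigvals(A).astype(complex)
    mu = np.log(lam) * fs                      # continuous-time poles
    keep = mu.imag > 0                         # one of each conjugate pair
    freqs = np.abs(mu[keep]) / (2 * np.pi)
    damping = -mu[keep].real / np.abs(mu[keep])
    return freqs, damping

# Toy output-only data: a 1 Hz, 2 %-damped oscillator under white noise
rng = np.random.default_rng(0)
fs, n = 50.0, 30000
wn, zeta = 2 * np.pi * 1.0, 0.02
pos = vel = 0.0
y = np.empty((n, 1))
for i in range(n):
    acc = -2 * zeta * wn * vel - wn**2 * pos + rng.standard_normal()
    vel += acc / fs
    pos += vel / fs
    y[i, 0] = pos
f, d = ssi_cov(y, fs, order=2)
print("frequencies (Hz):", np.round(f, 2), "damping:", np.round(d, 3))
```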

Gunnery Detection Method Using Reference Frame Modeling and Frame Difference (참조 프레임 모델링과 차영상을 이용한 포격 탐지 기법)

  • Kim, Jae-Hyup;Song, Tae-Eun;Ko, Jin-Shin;Moon, Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea CI, v.49 no.4, pp.62-70, 2012
  • In this paper, we propose a gunnery detection method based on reference frame modeling and the frame difference method. The frame difference method is a basic method in target detection and is applicable to the detection of moving targets. The goal of the proposed method is the detection of gunnery targets, which show large variations of energy and size in the time domain. The proposed method is therefore based on frame difference, and it guarantees real-time processing and high detection performance. In frame difference methods, it is important to generate a good reference frame. In the proposed method, the reference frame is modeled and updated in real time using statistical values for each pixel. We performed simulations on 73 IR video sequences containing gunnery targets, and the experimental results showed that the proposed method achieves a 95.7% detection ratio at a false alarm rate of one per hour.
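
The reference frame modeling described above can be pictured with per-pixel running statistics; the sketch below is a hypothetical illustration (the update rate and k-sigma threshold are assumed values, not the paper's) in which pixels deviating strongly from the modeled reference are flagged and excluded from the background update.

```python
# Sketch: frame differencing against a statistically modeled reference
# frame. Update rate and threshold are assumed values for illustration.
import numpy as np

class ReferenceFrameDetector:
    """Per-pixel reference frame with running mean/variance statistics."""
    def __init__(self, first_frame, alpha=0.05, k=5.0, init_var=4.0):
        self.mean = first_frame.astype(float).copy()   # reference frame
        self.var = np.full(first_frame.shape, init_var)
        self.alpha = alpha        # exponential update rate (assumed)
        self.k = k                # detection threshold in sigmas (assumed)

    def step(self, frame):
        """Return the detection mask, then refresh the reference frame."""
        diff = frame - self.mean
        mask = np.abs(diff) > self.k * np.sqrt(self.var)
        upd = ~mask               # keep flashes out of the background model
        self.mean[upd] += self.alpha * diff[upd]
        self.var[upd] = ((1 - self.alpha) * self.var[upd]
                         + self.alpha * diff[upd] ** 2)
        return mask

# Toy IR sequence: static noisy background plus one bright transient
rng = np.random.default_rng(1)
background = lambda: 100 + rng.normal(0, 2, size=(64, 64))
det = ReferenceFrameDetector(background())
for i in range(100):
    frame = background()
    if i == 80:
        frame[30:34, 30:34] += 500.0      # simulated gunnery flash
    mask = det.step(frame)
    if mask.any():
        print(f"frame {i}: {mask.sum()} pixels detected")
```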

Division of Household Labor between Married Female Clerical Workers and Their Husbands (사무직 기혼여성 부부의 가사노동 분담 실태 및 영향요인)

  • 조희금
    • Journal of Family Resource Management and Policy Review, v.2 no.2, pp.147-159, 1998
  • Given the dramatic increase in the percentage of married women working in clerical occupations and the inflexibility of work commitments for employees working in this domain, this paper investigates the division of household labor between married female clerical workers and their husbands, and their sources of external help. The total housework time of couples, the percentage of total housework done by husbands, and a scale measuring the wife's perception of the frequency with which her husband does specific household tasks are all used to measure the division of household labor between couples. Data for 143 couples were gathered using structured questionnaires and time diaries covering one weekday and one weekend day. The findings of this study are as follows: 1) The couples receive substantial support in housework from their mothers. 2) Wives spend an average of 23 hours and 26 minutes per week on household labor, whereas husbands spend an average of 7 hours and 7 minutes per week. Husbands do an average of 20.9% of all housework done by the couples. Wives typically perceive that their husbands do not frequently participate in a variety of household tasks (mean = 2.88 on a 5-point Likert scale where 1 = never and 5 = always). 3) Multivariate analysis reveals that working hours are negatively related to total housework, while the presence of a child under 6 years old is positively related to it. Time-availability variables (e.g., working hours and the presence of a child under 6 years old) and relative-resource variables (e.g., the ratio of the wife's income to that of the husband) are related to the percentage of total housework done by husbands. The sex-role attitude variables are related to the wife's perceptions of the frequency with which her husband does specific household tasks.

A Fast Multiresolution Motion Estimation Algorithm in the Adaptive Wavelet Transform Domain (적응적 웨이브렛 영역에서의 고속의 다해상도 움직임 예측방법)

  • 신종홍;김상준;지인호
    • Journal of Broadcast Engineering, v.7 no.1, pp.55-65, 2002
  • Wavelet transform has recently emerged as a promising technique for video processing applications due to its flexibility in representing non-stationary video signals. Motion estimation using the octave-band wavelet transform is applied in many settings, but if a motion estimation error occurs in the lowest frequency band, the error accumulates at the next time step, and both the computation time and the amount of data required at each step increase. On the other hand, the wavelet packet transform, which achieves the best image quality at a given bit rate in a rate-distortion sense, has been suggested; this method, however, has the disadvantage of the time cost of designing the wavelet packet. We address this cost by introducing a top-down method, although it does not find the optimum solution at a given bit rate. This paper considers that image variance can represent image complexity. We propose a fast multiresolution motion estimation scheme based on the adaptive wavelet transform for video compression.
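
As a rough illustration of the coarse-to-fine idea behind multiresolution motion estimation, the sketch below seeds a cheap full search on a 2x downsampled image pair and refines the vectors at full resolution. It uses a simple averaging pyramid rather than the adaptive wavelet decomposition proposed in the paper, and all sizes are assumed values.

```python
# Sketch: generic coarse-to-fine block matching. A full search on a 2x
# downsampled pair seeds a +/-1 pixel refinement at full resolution.
from collections import Counter
import numpy as np

def downsample(img):
    """2x2 averaging, a stand-in for a wavelet approximation band."""
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4

def search(ref, cur, y, x, bs, cy, cx, r):
    """Best (dy, dx) within radius r of candidate (cy, cx) by SAD."""
    block = cur[y:y + bs, x:x + bs]
    best, best_v = (cy, cx), np.inf
    for dy in range(cy - r, cy + r + 1):
        for dx in range(cx - r, cx + r + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy <= ref.shape[0] - bs and 0 <= xx <= ref.shape[1] - bs:
                v = np.abs(ref[yy:yy + bs, xx:xx + bs] - block).sum()
                if v < best_v:
                    best, best_v = (dy, dx), v
    return best

def multires_me(ref, cur, bs=16, r_coarse=4):
    refc, curc = downsample(ref), downsample(cur)
    vectors = {}
    for y in range(0, cur.shape[0] - bs + 1, bs):
        for x in range(0, cur.shape[1] - bs + 1, bs):
            # Full search at the coarse level (half block, radius r_coarse)
            cdy, cdx = search(refc, curc, y // 2, x // 2, bs // 2,
                              0, 0, r_coarse)
            # Refine the doubled coarse vector by +/-1 at full resolution
            vectors[(y, x)] = search(ref, cur, y, x, bs,
                                     2 * cdy, 2 * cdx, 1)
    return vectors

# Toy pair: the second frame is the first shifted by (3, -2)
rng = np.random.default_rng(2)
f0 = rng.random((64, 64))
f1 = np.roll(f0, (3, -2), axis=(0, 1))
vecs = multires_me(f0, f1)
print(Counter(vecs.values()).most_common(1))   # expect ((-3, 2), ...)
```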

Strongly Coupled Method for 2DOF Flutter Analysis (강성 결합 기법을 통한 2계 자유도 플러터 해석)

  • Ju, Wan-Don;Lee, Gwan-Jung;Lee, Dong-Ho;Lee, Gi-Hak
    • Journal of the Korean Society for Aeronautical & Space Sciences, v.34 no.1, pp.24-31, 2006
  • In the present study, a strongly coupled analysis code is developed for transonic flutter analysis. For the aerodynamic analysis, the two-dimensional Reynolds-averaged Navier-Stokes equations were used as the governing equations, the ε-SST turbulence model was used, and the DP-SGS (Data Parallel Symmetric Gauss-Seidel) algorithm was used for parallelization. A 2-degree-of-freedom pitch-and-plunge model was used for the structural analysis. To obtain the flutter response in the time domain, the dual time stepping method was applied to both the flow and structure solvers. The strongly coupled method was implemented by successive iteration of the fluid-structure interaction within each pseudo time step. Computed results show flutter speed boundaries and limit cycle oscillation phenomena in addition to the typical flutter responses: damped, divergent, and neutral. It is also found that the accuracy of transonic flutter analysis depends strongly on the methodology of the fluid-structure interaction as well as on the choice of turbulence model.
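
The strongly coupled subiteration can be pictured with the 2-DOF typical-section equations and a placeholder aerodynamic model. The sketch below substitutes a quasi-steady lift formula for the RANS solver and uses assumed structural parameters, so it illustrates only the coupling loop, not the paper's solver.

```python
# Sketch: strong coupling of a 2-DOF pitch/plunge section by subiterating
# the fluid-structure exchange inside each time step. All parameter
# values are assumed for illustration.
import numpy as np

# Assumed structural parameters (typical-section model)
m, S_a, I_a = 1.0, 0.05, 0.25     # mass, static imbalance, inertia
k_h, k_a = 50.0, 80.0             # plunge and pitch stiffness
M_s = np.array([[m, S_a], [S_a, I_a]])
K_s = np.diag([k_h, k_a])

def aero_loads(q, qdot, U, rho=1.2, chord=1.0, CLa=2*np.pi, e=0.25):
    """Quasi-steady lift and pitching moment (placeholder fluid solver)."""
    h_dot, alpha = qdot[0], q[1]
    aoa_eff = alpha + h_dot / U            # effective angle of attack
    L = 0.5 * rho * U**2 * chord * CLa * aoa_eff
    return np.array([-L, e * chord * L])   # generalized forces on (h, alpha)

def step_strong(q, qdot, U, dt, n_sub=10):
    """One physical step; subiterations emulate pseudo-time coupling."""
    q_new, qdot_new = q.copy(), qdot.copy()
    for _ in range(n_sub):                 # fluid <-> structure exchange
        F = aero_loads(q_new, qdot_new, U) # 'fluid' solve at latest state
        # Backward-Euler structural update with the current loads:
        # (M/dt^2 + K) q_new = F + M (q/dt^2 + qdot/dt)
        A = M_s / dt**2 + K_s
        b = F + M_s @ (q / dt**2 + qdot / dt)
        q_prev = q_new
        q_new = np.linalg.solve(A, b)
        qdot_new = (q_new - q) / dt
        if np.max(np.abs(q_new - q_prev)) < 1e-10:
            break                          # subiterations converged
    return q_new, qdot_new

# Time-march from a small pitch disturbance and watch growth or decay
q, qdot = np.array([0.0, 0.05]), np.zeros(2)
dt, U = 0.01, 20.0
amps = []
for _ in range(2000):
    q, qdot = step_strong(q, qdot, U, dt)
    amps.append(abs(q[1]))
print("pitch amplitude trend:",
      "growing (flutter)" if amps[-1] > amps[100] else "decaying (damped)")
```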

Mapping Tool for Semantic Interoperability of Clinical Terms (임상용어의 의미적 상호운영성을 위한 매핑 도구)

  • Lee, In-Keun;Hong, Sung-Jung;Cho, Hune;Kim, Hwa-Sun
    • The Transactions of The Korean Institute of Electrical Engineers, v.60 no.1, pp.167-173, 2011
  • Most of the terminologies used in the medical domain are not intended to be applied directly in clinical settings but are developed to integrate terms by defining a reference terminology or concept relations between terms. Therefore, to utilize clinical terms conveniently and efficiently, it is necessary to develop subsets of the terminology that classify categories properly for the purpose of use, and to extract and organize terms with high utility based on the classified categories. Moreover, it is also necessary to develop and upgrade the terminology constantly to meet users' new demands by changing or correcting the system. This study has developed a mapping tool that allows accurate expression and interpretation of the clinical terms used for medical records in electronic medical record systems, secures semantic interoperability among the terms used in the medical information model, and generates common terms as well. The system is designed to execute both 1:1 and N:M mappings between term concepts at a time, and to search for and compare various terms at once. Also, in order to enhance consistency and reliability between task performers, it allows work in parallel and observation of work processes. Since it is developed in Java, new terms can be added in the form of plug-ins. It also reinforces database access security with Remote Method Invocation (RMI). This research still has tasks to be done, such as complementing and refining the tool and establishing management procedures for registered data. However, it will be effective in reducing the time and expense of generating terms in individual medical institutions and in improving the quality of medicine, by providing consistent concepts and representative terms for the terminologies used in medical records and by inducing proper selection of terms according to their meaning.
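
A minimal data structure for the 1:1 and N:M mappings mentioned above might look like the following Python sketch. It is a hypothetical design for illustration (the tool itself is implemented in Java with an RMI-secured database backend); the example codes are common SNOMED CT and ICD-10 identifiers used only as sample data.

```python
# Sketch: a minimal structure for 1:1 and N:M mappings between local
# clinical terms and reference-terminology concepts. Hypothetical design.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class MappingTable:
    # term -> set of reference concept ids, and the reverse direction
    term_to_concepts: dict = field(default_factory=lambda: defaultdict(set))
    concept_to_terms: dict = field(default_factory=lambda: defaultdict(set))

    def add(self, terms, concepts):
        """Record a mapping; 1:1 and N:M are the same operation here."""
        for t in terms:
            for c in concepts:
                self.term_to_concepts[t].add(c)
                self.concept_to_terms[c].add(t)

    def lookup(self, term):
        return sorted(self.term_to_concepts.get(term, set()))

    def cardinality(self, term):
        """'1:1' if the term maps to a single concept, else '1:N'."""
        n = len(self.term_to_concepts.get(term, set()))
        return "unmapped" if n == 0 else ("1:1" if n == 1 else "1:N")

tbl = MappingTable()
tbl.add(["heart attack"], ["SNOMED:22298006"])            # 1:1 mapping
tbl.add(["HTN", "high blood pressure"],                   # N:M mapping
        ["SNOMED:38341003", "ICD10:I10"])
print(tbl.lookup("HTN"))                # ['ICD10:I10', 'SNOMED:38341003']
print(tbl.cardinality("heart attack"))  # 1:1
```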

A Study on Reliability Prediction of System with Degrading Performance Parameter (열화되는 성능 파라메터를 가지는 시스템의 신뢰성 예측에 관한 연구)

  • Kim, Yon Soo;Chung, Young-Bae
    • Journal of Korean Society of Industrial and Systems Engineering, v.38 no.4, pp.142-148, 2015
  • Due to advancements in technology and manufacturing capability, it is not uncommon that life tests yield no or few failures at low stress levels. In these situations it is difficult to analyse lifetime data and make meaningful inferences about product or system reliability. For some products or systems whose performance characteristics degrade over time, a failure is said to have occurred when a performance characteristic crosses a critical threshold. Measurements of the degradation characteristic contain much useful and credible information about product or system reliability. Degradation measurements of the performance characteristics of an unfailed unit at different times can directly relate reliability measures to physical characteristics. Reliability prediction based on physical performance measures can be an efficient alternative method for estimating the reliability of highly reliable parts or systems. If the degradation process and the distance between the last measurement and a specified threshold can be established, the remaining useful life can be predicted in advance. In turn, this prediction leads to just-in-time maintenance decisions to protect systems. In this paper, we describe techniques for mapping a product or system that has a degrading performance parameter to the associated classical reliability measures in the performance domain. The paper describes a general modeling and analysis procedure for reliability prediction based on one dominant degradation performance characteristic, using a pseudo degradation performance life trend model. This pseudo degradation trend model is based on probability modeling of a failure-mechanism degradation trend and comparison of a projected distribution to a pre-defined critical soft failure point in time or cycles.
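
The threshold-crossing idea can be shown with a deliberately simple stand-in for the pseudo degradation trend model: fit a linear trend to noisy degradation measurements and project it to the critical soft-failure level. All data and the threshold in the sketch below are invented for illustration.

```python
# Sketch: predicting remaining useful life (RUL) by projecting a fitted
# degradation trend to a critical threshold. A simple linear-trend
# stand-in for the model described above; data are invented.
import numpy as np

rng = np.random.default_rng(3)
threshold = 30.0                         # critical soft-failure level
t = np.arange(0, 500, 10.0)              # inspection times (cycles)
# Simulated degrading performance parameter with measurement noise
true_rate = 0.04
y = 10.0 + true_rate * t + rng.normal(0, 0.5, t.size)

# Fit the degradation path and project the threshold crossing
slope, intercept = np.polyfit(t, y, 1)
t_cross = (threshold - intercept) / slope
rul = t_cross - t[-1]                    # remaining life past last check

# Crude uncertainty: residual spread mapped through the slope
resid = y - (intercept + slope * t)
t_sigma = resid.std() / slope
print(f"predicted crossing at ~{t_cross:.0f} cycles "
      f"(+/- {t_sigma:.0f}), RUL ~ {rul:.0f} cycles")
```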

High Quality Multi-Channel Audio System for Karaoke Using DSP (DSP를 이용한 가라오케용 고음질 멀티채널 오디오 시스템)

  • Kim, Tae-Hoon;Park, Yang-Su;Shin, Kyung-Chul;Park, Jong-In;Moon, Tae-Jung
    • The Journal of the Acoustical Society of Korea, v.28 no.1, pp.1-9, 2009
  • This paper deals with the realization of multi-channel live karaoke. In this study, 6-channel MP3 decoding and tempo/key scaling were performed in real time using the TMS320C6713, a 32-bit floating-point DSP made by Texas Instruments. The six channels consist of front L/R instruments, rear L/R instruments, melody, and woofer. In the 4-channel case, the rear L/R instrument channels can be replaced with drum L/R channels. The final output is adjusted to a 5.1-channel speaker configuration. The SOLA algorithm was applied for tempo scaling, and key scaling was done with interpolation and decimation in the time domain. The drum channel was excluded from key scaling by separating instruments into drums and non-drums, and high-quality tempo scaling was made possible by varying the SOLA frame size, optimized for real-time processing. The use of six channels allows the composition of various channel configurations, and the multi-channel audio system of this study can be effectively applied anywhere live music is needed.
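
For reference, a minimal SOLA time-scale modification can be sketched as below in numpy: overlapping frames are re-spaced by the stretch factor and aligned by cross-correlation before a linear crossfade. This fixed-frame sketch is illustrative only; the paper's DSP implementation varies the SOLA frame size per channel and runs on the TMS320C6713.

```python
# Sketch: SOLA (Synchronized Overlap-Add) tempo scaling. Fixed frame
# size for simplicity; all parameters are assumed values.
import numpy as np

def sola(x, alpha, frame=1024, sa=256, tol=128):
    """Time-stretch x by factor alpha (alpha > 1 slows the tempo).
    Analysis hop sa, synthesis hop round(alpha*sa), +/-tol alignment."""
    ss = int(round(alpha * sa))
    y = np.copy(x[:frame]).astype(float)
    pos_in, pos_out = sa, ss
    while pos_in + frame + tol <= len(x):
        grab = x[pos_in - tol : pos_in + frame + tol].astype(float)
        overlap = len(y) - pos_out         # samples shared with output
        if overlap <= 0:
            break
        tail = y[pos_out:]
        # Find the shift in [-tol, tol] maximizing cross-correlation
        corr = [np.dot(tail, grab[k : k + overlap])
                for k in range(2 * tol + 1)]
        k = int(np.argmax(corr))
        seg = grab[k : k + frame]
        # Linear crossfade over the overlapping region
        fade = np.linspace(0.0, 1.0, overlap)
        y[pos_out:] = tail * (1 - fade) + seg[:overlap] * fade
        y = np.concatenate([y, seg[overlap:]])
        pos_in += sa
        pos_out += ss
    return y

# A 440 Hz tone stretched 1.25x keeps its pitch, unlike plain resampling
fs = 44100
tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
out = sola(tone, 1.25)
print(len(tone), "->", len(out), "samples")
```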