• Title/Summary/Keyword: dynamic analysis method


SLUMPING TENDENCY AND RHEOLOGICAL PROPERTY OF FLOWABLE COMPOSITES (Flowable 복합레진의 slumping 경향과 유변학적 성질)

  • Lee, In-Bog;Min, Sun-Hong;Kim, Sun-Young;Cho, Byung-Hoon;Back, Seung-Ho
    • Restorative Dentistry and Endodontics
    • /
    • v.34 no.2
    • /
    • pp.130-136
    • /
    • 2009
  • The aim of this study was to develop a method for measuring the slumping resistance of flowable resin composites and to evaluate its efficacy using rheological methodology. Five commercial flowable composites (Aelitefil flow: AF, Filtek flow: FF, DenFil flow: DF, Tetric flow: TF and Revolution: RV) were used. The same volume of each composite was extruded from a syringe onto a glass slide using a custom-made loading device. The resin composites were allowed to slump for 10 seconds at 25°C and then light cured. The aspect ratio (height/diameter) of the cone- or dome-shaped specimen was measured to estimate the slumping tendency of the composites. The complex viscosity of each composite was measured by a dynamic oscillatory shear test as a function of angular frequency using a rheometer. To compare the slumping tendency of the composites, one-way ANOVA and Tukey's post hoc test were performed on the aspect ratio at the 95% confidence level. Regression analysis was performed to investigate the relationship between the complex viscosity and the aspect ratio. The results were as follows. 1. Slumping tendency based on the aspect ratio varied among the five materials (AF
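
A minimal sketch of the regression step described above, using hypothetical viscosity and aspect-ratio values (not the study's data) and SciPy's linregress:

```python
import numpy as np
from scipy import stats

# Hypothetical measurements for five flowable composites (illustrative values only)
complex_viscosity = np.array([180.0, 95.0, 60.0, 35.0, 12.0])  # Pa.s at a chosen angular frequency
aspect_ratio      = np.array([0.42, 0.35, 0.30, 0.24, 0.15])   # height / diameter after 10 s of slumping

# Regress aspect ratio on log10(complex viscosity), mirroring the study's regression analysis
slope, intercept, r_value, p_value, stderr = stats.linregress(np.log10(complex_viscosity), aspect_ratio)
print(f"aspect_ratio = {slope:.3f}*log10(eta*) + {intercept:.3f}, R^2 = {r_value**2:.3f}")
```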

Estimation of the Korean Yield Curve via Bayesian Variable Selection (베이지안 변수선택을 이용한 한국 수익률곡선 추정)

  • Koo, Byungsoo
    • Economic Analysis
    • /
    • v.26 no.1
    • /
    • pp.84-132
    • /
    • 2020
  • A central bank infers market expectations of future yields based on yield curves. The central bank needs to precisely understand the changes in market expectations of future yields in order to have a more effective monetary policy. This need explains why a range of models have attempted to produce yield curves and market expectations that are as accurate as possible. Alongside the development of bond markets, the interconnectedness between them and macroeconomic factors has deepened, and this has rendered understanding of which macroeconomic variables affect yield curves even more important. However, the existence of various theories about the determinants of yields inevitably means that previous studies have applied different macroeconomic variables when estimating yield curves. This indicates model uncertainty and naturally poses a question: Which model better estimates yield curves? Put differently, which variables should be applied to better estimate yield curves? This study employs the Dynamic Nelson-Siegel Model and takes a Bayesian approach to variable selection in order to ensure precision in estimating yield curves and market expectations of future yields. Bayesian variable selection may be an effective estimation method because it is expected to alleviate problems arising from the a priori selection of the key variables comprising a model, and because it is a comprehensive approach that efficiently reflects model uncertainties in the estimation. A comparison of Bayesian variable selection with the models of previous studies finds that the question of which macroeconomic variables are applied to a model has a considerable impact on market expectations of future yields. This shows that model uncertainties exert great influence on the resultant estimates, and that it is reasonable to reflect model uncertainties in the estimation. These implications are underscored by the superior forecasting performance of the Bayesian variable selection models over the models used in previous studies. Therefore, the use of a Bayesian variable selection model is advisable in estimating yield curves and market expectations of yield curves with greater exactitude, in consideration of the impact of model uncertainties on the estimation.
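
For reference, the Nelson-Siegel yield curve underlying the Dynamic Nelson-Siegel model expresses the yield at each maturity through level, slope, and curvature factors with loadings governed by a decay parameter λ. The sketch below shows this standard loading structure; the factor values are illustrative, not estimates from the paper:

```python
import numpy as np

def nelson_siegel_yield(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel yield for maturity tau (years) given level, slope and curvature factors."""
    x = lam * tau
    slope_loading = (1 - np.exp(-x)) / x
    curvature_loading = slope_loading - np.exp(-x)
    return beta0 + beta1 * slope_loading + beta2 * curvature_loading

maturities = np.array([0.25, 1, 3, 5, 10, 20])  # years
# Illustrative factor values; in the dynamic model the factors evolve over time.
print(nelson_siegel_yield(maturities, beta0=0.03, beta1=-0.01, beta2=0.005, lam=0.6))
```

In the dynamic version, the three factors follow a time series process, and the Bayesian variable selection in the paper decides which macroeconomic variables enter that process.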

Technical Efficiency in Korea: Interindustry Determinants and Dynamic Stability (기술적(技術的) 효율성(效率性)의 결정요인(決定要因)과 동태적(動態的) 변화(變化))

  • Yoo, Seong-min
    • KDI Journal of Economic Policy
    • /
    • v.12 no.4
    • /
    • pp.21-46
    • /
    • 1990
  • This paper, a sequel to Yoo and Lee (1990), attempts to investigate the interindustry determinants of technical efficiency in Korea's manufacturing industries, and also to conduct an exploratory analysis of the stability of technical efficiency over time. The hypotheses set forth in this paper are mostly drawn from the existing literature on technical efficiency. They are, however, revised and viewed in a new light, whenever possible, to accommodate Korea-specific conditions. The set of regressors used in the cross-sectional analysis is chosen, and the hypotheses are posed, in such a way that our results can be made comparable to those of similar studies conducted for the U.S. and Japan by Caves and Barton (1990) and Uekusa and Torii (1987), respectively. It is interesting to observe a certain degree of similarity as well as differentiation between the cross-section evidence on Korea's manufacturing industries and that on the U.S. and Japanese industries. As for the similarities, we find positive and significant effects on technical efficiency of the relative size of production and the extent of specialization in production, and a negative and significant effect of the variation in the capital-labor ratio within industries. The curvature influence of the concentration ratio on technical efficiency is also confirmed in the Korean case. There are differences, too. We cannot find any significant effects of capital vintage, R&D and foreign competition on technical efficiency, all of which were shown to be robust determinants of technical efficiency in the U.S. case. We note, however, that the variables measuring the capital vintage effect, R&D and the degree of foreign competition in Korean markets are suspected to suffer from serious measurement errors incurred in data collection and/or in the conversion of the industrial classification system into the KSIC (Korea Standard Industrial Classification) system. Thus, we are reluctant to accept the findings on the effects of these variables as definitive conclusions on Korea's industrial organization. Another finding that interests us is that the cross-industry evidence becomes consistently stronger when we use the efficiency estimates based on gross output instead of value added, which provides us with an ex post empirical criterion for choosing an output measure between the two in estimating the production frontier. We also conduct exploratory analyses of the stability of the estimates of technical efficiency in Korea's manufacturing industries. Though the method of testing stability employed in this paper is by no means a complete one, we cannot find strong evidence that our efficiency estimates are stable over time. The outcome is both surprising and disappointing. We can also show that the instability of technical efficiency over time is partly explained by the way we constructed our measures of technical efficiency. To the extent that our efficiency estimates depend on the shape of the empirical distribution of plants in the input-output space, any movements of the production frontier over time are not reflected in the estimates, and possibilities exist of associating a higher level of technical efficiency with a downward movement of the production frontier over time, and so on. Thus, we find that efficiency measures that take into account not only the distributional changes but also the shifts of the production frontier over time increase the extent of stability and are more appropriate for use in a dynamic context.
The remaining portion of the instability of technical efficiency over time is not explained satisfactorily in this paper, and future research should address this question.
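
As a generic illustration of how frontier-based technical efficiency scores of the kind analyzed above can be constructed, the sketch below applies corrected OLS (COLS) to a hypothetical log-linear production function; this is a textbook approach, not necessarily the estimator used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
log_k, log_l = rng.normal(size=(2, n))                       # hypothetical log capital and log labor
log_y = 0.4 * log_k + 0.6 * log_l - rng.exponential(0.2, n)  # output lies below the frontier by an inefficiency term

# OLS of log output on log inputs
X = np.column_stack([np.ones(n), log_k, log_l])
coef, *_ = np.linalg.lstsq(X, log_y, rcond=None)
residuals = log_y - X @ coef

# Corrected OLS: shift the estimated function up to the best-practice plant,
# then measure each plant's efficiency as its distance from that frontier.
efficiency = np.exp(residuals - residuals.max())  # in (0, 1]; 1 = most efficient plant
print(efficiency.mean())
```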


Improved Original Entry Point Detection Method Based on PinDemonium (PinDemonium 기반 Original Entry Point 탐지 방법 개선)

  • Kim, Gyeong Min;Park, Yong Su
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.7 no.6
    • /
    • pp.155-164
    • /
    • 2018
  • Many malicious programs have been compressed or encrypted using various commercial packers to prevent reverse engineering, so malicious code analysts must decompress or decrypt them first. The OEP (Original Entry Point) is the address of the first instruction executed after the encrypted or compressed executable file has been restored to its original binary state. Several unpackers, including PinDemonium, execute the packed file, keep track of the addresses until the OEP appears, and search for the OEP among those addresses. However, instead of finding exactly one OEP, unpackers provide a relatively large set of OEP candidates, and sometimes the OEP is missing from the candidates. In other words, existing unpackers have difficulty in finding the correct OEP. We have developed a new tool that provides smaller OEP candidate sets by adding two methods based on properties of the OEP. In this paper, we propose two methods that provide fewer OEP candidates by using the property that the function call sequence and parameters are the same in the packed program and the original program. The first method is based on function calls. Programs written in C/C++ are compiled into binary code, and compiler-specific system functions are added to the compiled program. After examining these functions, we added a method to PinDemonium that detects the completion of unpacking by matching the patterns of system functions called in packed and unpacked programs. The second method is based on parameters. The parameters include not only user-entered inputs but also system inputs. We added a method to PinDemonium that finds the OEP using the system parameters of a particular function in stack memory. OEP detection experiments were performed on sample programs packed by 16 commercial packers. The proposed tool reduces the number of OEP candidates by more than 40% on average compared with PinDemonium, excluding two commercial packers that could not be executed due to anti-debugging techniques.
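
An illustrative sketch of the first idea, keeping only those OEP candidates whose subsequent system-function calls match the startup pattern of the original program; the trace format and reference API names are hypothetical, and this is not the PinDemonium implementation:

```python
# Hypothetical compiler startup functions observed right after the true OEP of the original binary
REFERENCE_SEQUENCE = ["GetSystemTimeAsFileTime", "GetCurrentThreadId",
                      "GetCurrentProcessId", "QueryPerformanceCounter"]

def filter_candidates(candidates, call_trace):
    """Keep OEP candidates whose following API calls match the reference startup pattern.

    candidates: list of (address, trace_index) pairs reported by the unpacker
    call_trace: list of (trace_index, api_name) tuples recorded while running the packed file
    """
    kept = []
    for address, idx in candidates:
        calls_after = [name for i, name in call_trace if i > idx][:len(REFERENCE_SEQUENCE)]
        if calls_after == REFERENCE_SEQUENCE:
            kept.append(address)
    return kept
```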

Context Prediction Using Right and Wrong Patterns to Improve Sequential Matching Performance for More Accurate Dynamic Context-Aware Recommendation (보다 정확한 동적 상황인식 추천을 위해 정확 및 오류 패턴을 활용하여 순차적 매칭 성능이 개선된 상황 예측 방법)

  • Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • v.19 no.3
    • /
    • pp.51-67
    • /
    • 2009
  • Developing an agile recommender system for nomadic users has been regarded as a promising application in mobile and ubiquitous settings. To increase the quality of personalized recommendation in terms of accuracy and elapsed time, estimating the future context of the user correctly is highly crucial. Traditionally, time series analysis and the Markovian process have been adopted for such forecasting. However, these methods are not adequate for predicting context data, because most context data are represented on a nominal scale. To resolve these limitations, the alignment-prediction algorithm has been suggested for context prediction, especially for predicting future context from low-level context. Recently, an ontological approach has been proposed for guided context prediction without context history. However, due to the variety of context information, acquiring sufficient context prediction knowledge a priori is not easy in most service domains. Hence, the purpose of this paper is to propose a novel context prediction methodology which does not require a priori knowledge, and to increase accuracy and decrease elapsed time for service response. To do so, we have newly developed a pattern-based context prediction approach. First of all, a set of individual rules is derived from each context attribute using context history. Then a pattern, consisting of the results of reasoning over the individual rules, is developed for pattern learning. If at least one context property matches, say R, the pattern is regarded as right. If the pattern is new, the right pattern is added, the value of mismatched properties is set to 0, freq = 1 and w(R, 1). Otherwise, the frequency of the matched right pattern is increased by 1 and w(R, freq) is set. After finishing training, if the frequency is greater than a threshold value, the right pattern is saved in the knowledge base. On the other hand, if at least one context property does not match, say W, the pattern is regarded as wrong. If the pattern is new, the result is marked as a wrong answer, the wrong pattern is added, and the frequency is set to 1 with w(W, 1). Otherwise, the matched wrong pattern's frequency is increased by 1 and w(W, freq) is set. After finishing training, if the frequency value is greater than a threshold level, the wrong pattern is saved in the knowledge base. Then, context prediction is performed with combinatorial rules as follows: first, identify the current context. Second, find matched patterns among the right patterns. If there is no matched pattern, then find a matching pattern among the wrong patterns. If a matching pattern is not found, then choose the context property whose predictability is higher than that of any other property. To show the feasibility of the methodology proposed in this paper, we collected actual context history from travelers who had visited the largest amusement park in Korea. As a result, 400 context records were collected in 2009. We then randomly selected 70% of the records as training data. The rest were selected as testing data. To examine the performance of the methodology, prediction accuracy and elapsed time were chosen as measures. We compared the performance with case-based reasoning and voting methods. Through a simulation test, we conclude that our methodology is clearly better than the CBR and voting methods in terms of accuracy and elapsed time. This shows that the methodology is relatively valid and scalable. As a second round of the experiment, we compared a full model to a partial model.
A full model indicates that both right and wrong patterns are used for reasoning about the future context. On the other hand, a partial model means that the reasoning is performed only with right patterns, which is generally adopted in the legacy alignment-prediction method. It turned out that the full model is better than the partial model in terms of accuracy, while the partial model is better in terms of elapsed time. As a last experiment, we took into consideration potential privacy problems that might arise among the users. To mitigate such concerns, we excluded context properties such as the date of the tour and user profile attributes such as gender and age. The outcome shows that the cost of preserving privacy is endurable. The contributions of this paper are as follows. First, academically, we have improved sequential matching methods in terms of prediction accuracy and service time by considering individual rules for each context property and learning from wrong patterns. Second, the proposed method is found to be quite effective for privacy-preserving applications, which are frequently required by B2C context-aware services; a privacy-preserving system applying the proposed method can also decrease elapsed time. Hence, the method is very practical for establishing privacy-preserving context-aware services. Our future research issues, taking into account some limitations of this paper, can be summarized as follows. First, user acceptance and usability will be tested with actual users in order to prove the value of the prototype system. Second, we will apply the proposed method to more general application domains, as this paper focused on tourism in an amusement park.
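
A simplified sketch of the right/wrong pattern mechanism described above. The data structures and threshold are illustrative, a pattern is counted as right here only when every property is predicted correctly, and the paper's per-property match tracking, weights w(R, freq), and single-rule fallback are not reproduced:

```python
from collections import defaultdict

THRESHOLD = 3  # minimum frequency before a learned pattern enters the knowledge base

def train(history, rules):
    """history: list of (context, actual_next_context) dict pairs; rules: {attribute: predictor}."""
    right, wrong = defaultdict(int), defaultdict(int)
    for context, actual in history:
        # One pattern = the tuple of individual-rule predictions for this context
        pattern = tuple((attr, rule(context)) for attr, rule in sorted(rules.items()))
        correct = all(value == actual[attr] for attr, value in pattern)
        (right if correct else wrong)[pattern] += 1
    return ({p for p, f in right.items() if f >= THRESHOLD},
            {p for p, f in wrong.items() if f >= THRESHOLD})

def predict(context, rules, right_patterns, wrong_patterns):
    """Trust the combined rule output for known-right patterns; flag known-wrong ones."""
    pattern = tuple((attr, rule(context)) for attr, rule in sorted(rules.items()))
    if pattern in right_patterns:
        return dict(pattern)
    if pattern in wrong_patterns:
        return None  # known-bad combination: defer to the single most predictable property instead
    return dict(pattern)
```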

Simultaneous Multiple Transmit Focusing Method with Orthogonal Chirp Signal for Ultrasound Imaging System (초음파 영상 장치에서 직교 쳐프 신호를 이용한 동시 다중 송신집속 기법)

  • 정영관;송태경
    • Journal of Biomedical Engineering Research
    • /
    • v.23 no.1
    • /
    • pp.49-60
    • /
    • 2002
  • Receive dynamic focusing with an array transducer can provide near-optimum resolution only in the vicinity of the transmit focal depth. A customary method to increase the depth of field is to combine several beams with different focal depths, with an accompanying decrease in the frame rate. In this paper, we present a simultaneous multiple transmit focusing method in which chirp signals focused at different depths are transmitted at the same time. These chirp signals are mutually orthogonal in the sense that the autocorrelation function of each signal has a narrow mainlobe width and low sidelobe levels, and the cross-correlation function of any pair of the signals has values smaller than the sidelobe levels of each autocorrelation function. This means that each chirp signal can be separated from the combined received signals and compressed into a short pulse, which is then individually focused on a separate receive beamformer. Next, the individually focused beams are combined to form a frame of the image. Theoretically, any two chirp signals defined over two non-overlapping frequency bands are mutually orthogonal. In the present work, however, a fractional overlap of adjacent frequency bands is permitted to design more chirp signals within a given transducer bandwidth. The elevation of the cross-correlation values due to the frequency overlap could be reduced by alternating the direction of the frequency sweep of adjacent chirp signals. We also observe that the proposed method provides better images when the low-frequency chirp is focused at a near point and the high-frequency chirp at a far point along the depth; better lateral resolution is obtained in the far field with reasonable SNR due to the SNR gain in pulse compression imaging.
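
A minimal sketch, not the authors' beamforming code, of generating two chirps on largely non-overlapping bands with opposite sweep directions and comparing their cross-correlation against the autocorrelation peak; the sampling rate, bands, and duration are illustrative:

```python
import numpy as np
from scipy.signal import chirp

fs = 40e6                        # sampling rate (Hz), illustrative
t = np.arange(0, 5e-6, 1 / fs)   # 5-microsecond chirps

# Low-band up-sweep and high-band down-sweep (reversed direction reduces cross-correlation)
s1 = chirp(t, f0=2e6, t1=t[-1], f1=5e6)
s2 = chirp(t, f0=9e6, t1=t[-1], f1=5.5e6)  # slight band overlap, swept downward

auto1 = np.correlate(s1, s1, mode="full")
cross = np.correlate(s1, s2, mode="full")
print("autocorrelation peak:", np.max(np.abs(auto1)))
print("max cross-correlation:", np.max(np.abs(cross)))
```

Sweeping the higher band in the opposite direction is the same trick the abstract mentions for keeping the cross-correlation low even when the bands partially overlap.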

Predictive Clustering-based Collaborative Filtering Technique for Performance-Stability of Recommendation System (추천 시스템의 성능 안정성을 위한 예측적 군집화 기반 협업 필터링 기법)

  • Lee, O-Joun;You, Eun-Soon
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.119-142
    • /
    • 2015
  • With the explosive growth in the volume of information, Internet users are experiencing considerable difficulties in obtaining necessary information online. Against this backdrop, ever-greater importance is being placed on recommender systems that provide information catered to user preferences and tastes in an attempt to address issues associated with information overload. To this end, a number of techniques have been proposed, including content-based filtering (CBF), demographic filtering (DF) and collaborative filtering (CF). Among them, CBF and DF require external information and thus cannot be applied to a variety of domains. CF, on the other hand, is widely used since it is relatively free from the domain constraint. The CF technique is broadly classified into memory-based CF, model-based CF and hybrid CF. Model-based CF addresses the drawbacks of CF by considering a Bayesian model, clustering model or dependency network model. This filtering technique not only mitigates the sparsity and scalability issues but also boosts predictive performance. However, it involves expensive model-building and results in a tradeoff between performance and scalability. Such a tradeoff is attributed to reduced coverage, which is a type of sparsity issue. In addition, expensive model-building may lead to performance instability, since changes in the domain environment cannot be immediately incorporated into the model due to the high costs involved. Cumulative changes in the domain environment that have failed to be reflected eventually undermine system performance. This study incorporates the Markov model of transition probabilities and the concept of fuzzy clustering with CBCF to propose predictive clustering-based CF (PCCF), which solves the issues of reduced coverage and unstable performance. The method improves performance instability by tracking the changes in user preferences and bridging the gap between the static model and dynamic users. Furthermore, the issue of reduced coverage is also mitigated by expanding the coverage based on transition probabilities and clustering probabilities. The proposed method consists of four processes. First, user preferences are normalized in preference clustering. Second, changes in user preferences are detected from review score entries during preference transition detection. Third, user propensities are normalized using patterns of changes (propensities) in user preferences in propensity clustering. Lastly, the preference prediction model is developed to predict user preferences for items during preference prediction. The proposed method has been validated by testing its robustness against performance instability and the scalability-performance tradeoff. The initial test compared and analyzed the performance of individual recommender systems each enabled by IBCF, CBCF, ICFEC and PCCF under an environment where data sparsity had been minimized. The following test adjusted the optimal number of clusters in CBCF, ICFEC and PCCF for a comparative analysis of subsequent changes in system performance. The test results revealed that the suggested method produced insignificant improvement in performance in comparison with the existing techniques. In addition, it failed to achieve significant improvement in the standard deviation, which indicates the degree of data fluctuation. Notwithstanding, it resulted in marked improvement over the existing techniques in terms of the range, which indicates the level of performance fluctuation.
The level of performance fluctuation before and after model generation improved by 51.31% in the initial test. In the following test, there was a 36.05% improvement in the level of performance fluctuation driven by the changes in the number of clusters. This signifies that the proposed method, despite the slight performance improvement, clearly offers better performance stability than the existing techniques. Further research will be directed toward enhancing the recommendation performance, which failed to demonstrate significant improvement over the existing techniques. Future work will consider the introduction of a high-dimensional, parameter-free clustering algorithm or a deep learning-based model in order to improve recommendation performance.
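
A minimal sketch of the transition-probability idea underlying PCCF: estimating a Markov transition matrix between preference clusters from users' cluster histories and using it to anticipate the next propensity. The cluster sequences are hypothetical, and the paper's fuzzy clustering and preference prediction steps are not shown:

```python
import numpy as np

def transition_matrix(histories, n_clusters):
    """Estimate Markov transition probabilities from sequences of preference-cluster labels."""
    counts = np.zeros((n_clusters, n_clusters))
    for seq in histories:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

histories = [[0, 0, 1, 2], [1, 2, 2, 0], [0, 1, 1, 2]]  # hypothetical per-user cluster sequences
P = transition_matrix(histories, n_clusters=3)
print("most likely next cluster after cluster 1:", P[1].argmax())
```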

Spatio-temporal Analysis of Population Distribution in Seoul via Integrating Transportation and Land Use Information, Based on Four-Dimensional Visualization Methods (교통과 토지이용 정보를 결합한 서울 인구분포의 시공간적 분석: 4차원 시각화 방법을 토대로)

  • Lee, Keumsook;Kim, Ho Sung
    • Journal of the Economic Geographical Society of Korea
    • /
    • v.21 no.1
    • /
    • pp.20-33
    • /
    • 2018
  • Population distribution in urban space varies with transportation flow, which changes along the time of day. Transportation flow is directly affected by the activities of urbanites and the distribution of related facilities, since the flow is the result of moving to the points where the facilities associated with their activities are located. It is thus necessary to analyze the spatio-temporal characteristics of the urban population distribution by integrating the distribution of activity spaces related to the daily life of urbanites and the flow of transportation. The purpose of this study is to analyze the population distribution in urban space on daily and weekly time bases using the building database and T-card database of the city of Seoul, which is rich in information on land use and transportation flow. For a time-based analysis that is difficult to grasp by general statistical techniques, a four-dimensional visualization method combining time and space using a Java program is devised. Dynamic visualization in the four dimensions of space and time allows intuitive analysis and makes it possible to understand more effectively the spatio-temporal characteristics of population distribution. For this purpose, buildings are classified into three activity groups according to their purpose: residential, working, and commercial, and the number of passengers traveling to and from each stop of the bus and subway networks in the T-card database for one week is calculated in one-minute increments. Visualizing these data and integrating transportation and land use, we analyze the spatio-temporal characteristics of the population distribution in Seoul. As a result, it is found that the population distribution of Seoul displays distinct spatio-temporal characteristics according to land use. In particular, there is a clear difference in the population distribution pattern along the time axis according to the mixture of working, commercial, and residential activities. The results of this study can be very useful for transportation planning and the location planning of city facilities.
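
A small sketch of the aggregation step described above, counting boardings and alightings per stop in one-minute bins with pandas; the record layout and column names are hypothetical, not the actual T-card schema:

```python
import pandas as pd

# Hypothetical smart-card (T-card) records; real data would span a full week of taps
records = pd.DataFrame({
    "stop_id":   ["S1", "S1", "S2", "S1"],
    "event":     ["board", "alight", "board", "board"],
    "timestamp": pd.to_datetime(["2018-03-05 08:01:10", "2018-03-05 08:01:40",
                                 "2018-03-05 08:02:05", "2018-03-05 08:02:30"]),
})

# Passengers per stop, per event type, in one-minute increments
per_minute = (records
              .groupby(["stop_id", "event", pd.Grouper(key="timestamp", freq="1min")])
              .size()
              .rename("passengers")
              .reset_index())
print(per_minute)
```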

Analysis of Landslide Occurrence Characteristics Based on the Root Cohesion of Vegetation and Flow Direction of Surface Runoff: A Case Study of Landslides in Jecheon-si, Chungcheongbuk-do, South Korea (식생의 뿌리 점착력과 지표유출의 흐름 조건을 고려한 산사태의 발생 특성 분석: 충청북도 제천지역의 사례를 중심으로)

  • Jae-Uk Lee;Yong-Chan Cho;Sukwoo Kim;Minseok Kim;Hyun-Joo Oh
    • Journal of Korean Society of Forest Science
    • /
    • v.112 no.4
    • /
    • pp.426-441
    • /
    • 2023
  • This study investigated the predictive accuracy of a model of landslide displacement in Jecheon-si, where a great number of landslides were triggered by heavy rain on both natural (non-clear-cut) and clear-cut slopes during August 2020. This was accomplished by applying three flow direction methods (single flow direction, SFD; multiple flow direction, MFD; infinite flow direction, IFD) and the degree of root cohesion to an infinite slope stability equation. The application assumed that soil saturation and any changes in root cohesion occurred following the timber harvest (clear-cutting). In the study area, 830 landslide locations were identified via landslide inventory mapping from satellite images and 25 cm resolution aerial photographs. The landslide modeling comparison showed that the accuracy of the models that considered changes in root cohesion following clear-cutting improved by 1.3% to 2.6%, in terms of the area under the receiver operating characteristic curve (AUROC), compared with the models that did not consider these changes. Furthermore, the accuracy of the models that used the MFD algorithm improved by up to 1.3% in AUROC compared with the models that used the other algorithms. These results suggest that the discriminatory application of root cohesion, which considers changes in the vegetation condition, and the selection of the flow direction method may influence the accuracy of landslide predictive modeling. In the future, the results of this study should be verified by examining root cohesion and its dynamic changes according to tree species using field hydrological monitoring techniques.
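
For reference, one commonly used form of the infinite slope stability model, with a root-cohesion term added to the soil cohesion, is sketched below; the parameter values are illustrative, and the exact formulation and coefficients used in the paper may differ:

```python
import numpy as np

def factor_of_safety(c_soil, c_root, slope_deg, soil_depth, wetness,
                     phi_deg=30.0, gamma_soil=18.0, gamma_water=9.81):
    """Infinite slope factor of safety (FS < 1 indicates potential instability).

    c_soil, c_root : soil and root cohesion (kPa)
    slope_deg      : slope angle (degrees)
    soil_depth     : vertical soil depth (m)
    wetness        : saturated fraction of the soil column (0..1), e.g. from a flow-direction model
    """
    beta = np.radians(slope_deg)
    phi = np.radians(phi_deg)
    resisting = (c_soil + c_root
                 + (gamma_soil - wetness * gamma_water) * soil_depth
                 * np.cos(beta) ** 2 * np.tan(phi))
    driving = gamma_soil * soil_depth * np.sin(beta) * np.cos(beta)
    return resisting / driving

# Clear-cut slope (reduced root cohesion) vs. forested slope, same terrain and wetness
print(factor_of_safety(c_soil=5.0, c_root=1.0, slope_deg=35, soil_depth=1.2, wetness=0.8))
print(factor_of_safety(c_soil=5.0, c_root=8.0, slope_deg=35, soil_depth=1.2, wetness=0.8))
```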

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • Corporate defaults have a ripple effect on the local and national economy, in addition to affecting stakeholders such as managers, employees, creditors, and investors of the bankrupt companies. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model, rather than developing various corporate default models. As a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even after that, the analysis of past corporate defaults has been focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it focused only on certain main variables such as the 'debt ratio'. A multifaceted study of corporate default prediction models is essential to reflect diverse interests, to avoid situations like the 'Lehman Brothers case' of the global financial crisis, and to avoid a total collapse in a single moment. The key variables used in corporate defaults vary over time. This is confirmed by comparing the analyses of Beaver (1967, 1968) and Altman (1968) with Deakin's (1972) study, which shows that the major factors affecting corporate failure have changed. Grice's (2001) study also found changes in the importance of predictive variables through Zmijewski's (1984) and Ohlson's (1980) models. However, the studies that have been carried out in the past use static models. Most of them do not consider the changes that occur over time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for the time-dependent bias by means of a time series analysis algorithm reflecting dynamic change. Based on the global financial crisis, which had a significant impact on Korea, this study is conducted using 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets, covering seven, two, and one years, respectively. In order to construct a consistent bankruptcy model in the flow of time change, we first train a time series deep learning model using the data before the financial crisis (2000-2006). The parameter tuning of the existing models and the deep learning time series algorithm is conducted with validation data including the financial crisis period (2007-2008). As a result, we construct a model that shows a similar pattern to the results on the learning data and shows excellent prediction power. After that, each bankruptcy prediction model is restructured by integrating the learning data and validation data again (2000-2008), applying the optimal parameters as in the previous validation. Finally, each corporate default prediction model is evaluated and compared using test data (2009) based on the models trained over nine years. The usefulness of the corporate default prediction model based on the deep learning time series algorithm is thereby demonstrated. In addition, by adding Lasso regression analysis to the existing variable-selection methods (multiple discriminant analysis, logit model), it is shown that the deep learning time series algorithm model based on the three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups.
The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms are compared. In the case of corporate data, there are limitations of 'nonlinear variables', 'multi-collinearity' among variables, and 'lack of data'. While the logit model is nonlinear, the Lasso regression model solves the multi-collinearity problem, and the deep learning time series algorithm, using the variable data generation method, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis to automated AI analysis, and finally toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis at corporate default prediction modeling and is more effective in terms of prediction power. With the Fourth Industrial Revolution, the Korean government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies. Yet deep learning time series research for the financial industry is still insufficient. This is an initial study on deep learning time series analysis of corporate defaults. It is therefore hoped that this work will serve as comparative analysis material for non-specialists starting studies that combine financial data with deep learning time series algorithms.
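
As a generic illustration, not the authors' architecture or data, of applying an LSTM to the time series default-prediction task described above, a minimal Keras sketch with hypothetical firm-year panels:

```python
import numpy as np
from tensorflow import keras

timesteps, n_features = 7, 20  # e.g. 7 annual observations of 20 financial ratios per firm
X = np.random.rand(500, timesteps, n_features).astype("float32")  # hypothetical firm-year panels
y = np.random.randint(0, 2, size=(500, 1))                        # 1 = default, 0 = non-default

model = keras.Sequential([
    keras.layers.Input(shape=(timesteps, n_features)),
    keras.layers.LSTM(32),                      # summarizes each firm's financial-ratio history
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
model.fit(X, y, validation_split=0.2, epochs=5, batch_size=32, verbose=0)
```

In practice, a rolling split by year, as in the paper (2000-2006 for training, 2007-2008 for validation, 2009 for testing), would replace the random arrays and the random validation split used here.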