• Title/Summary/Keyword: fuzzy number data


Real-time Fault Detection and Classification of Reactive Ion Etching Using Neural Networks (Neural Networks을 이용한 Reactive Ion Etching 공정의 실시간 오류 검출에 관한 연구)

  • Ryu Kyung-Han;Lee Song-Jae;Soh Dea-Wha;Hong Sang-Jeen
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.9 no.7
    • /
    • pp.1588-1593
    • /
    • 2005
  • In coagulant control of water treatment plants, rule extraction, one of the categories of data mining, was performed to extract control rules from data by applying clustering methods. These control rules can be used for the full automation of water treatment plants in place of the operator's knowledge of plant control. Fuzzy clustering requires several coefficients to be determined, and studies of this kind, such as those on clustering indices, have been performed over decades. In this study, statistical indices were used to calculate the number of clusters, and seed points were simultaneously found based on hierarchical clustering. These statistical approaches give information about the features of the clusters, so they can reduce computing cost and increase clustering accuracy. The proposed algorithm can play an important role in data mining and knowledge discovery.
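The seed-point idea in this abstract - hierarchical agglomeration to locate cluster centres, then iterative refinement - can be sketched in a few lines (a minimal 1-D illustration with invented data; the paper's actual statistical indices and fuzzy memberships are not reproduced):

```python
def hierarchical_seeds(data, k):
    # Agglomerate sorted 1-D points: repeatedly merge the two adjacent
    # clusters with the closest centroids until only k clusters remain.
    clusters = [[x] for x in sorted(data)]
    while len(clusters) > k:
        dists = [abs(sum(clusters[i + 1]) / len(clusters[i + 1])
                     - sum(clusters[i]) / len(clusters[i]))
                 for i in range(len(clusters) - 1)]
        i = dists.index(min(dists))
        clusters[i:i + 2] = [clusters[i] + clusters[i + 1]]
    return [sum(c) / len(c) for c in clusters]

def kmeans_1d(data, seeds, iters=20):
    # Plain k-means refinement started from the hierarchical seeds.
    centers = list(seeds)
    for _ in range(iters):
        groups = [[] for _ in centers]
        for x in data:
            idx = min(range(len(centers)), key=lambda c: abs(x - centers[c]))
            groups[idx].append(x)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers

data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9, 9.1, 8.8, 9.0]
centers = kmeans_1d(data, hierarchical_seeds(data, k=3))
print(sorted(round(c, 1) for c in centers))  # → [1.0, 5.1, 9.0]
```

Starting k-means from hierarchically derived seeds, rather than random ones, is what lets the statistical pre-analysis cut iteration cost, as the abstract claims.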

Personal Information Leakage Prevention Scheme of Smartphone Users in the Mobile Office Environment (모바일 오피스 환경에서 스마트폰 사용자의 개인정보 유출 방지 기법)

  • Jeong, Yoon-Su;Lee, Sang-Ho
    • Journal of Digital Convergence
    • /
    • v.13 no.5
    • /
    • pp.205-211
    • /
    • 2015
  • Recently, with the rapid development of mobile communication networks and wireless terminals, mobile office services are increasingly in the spotlight. However, users may be attacked by a malicious third party when uploading or downloading data remotely to perform work in a mobile office environment. In this paper, we propose a scheme to prevent the leakage of the personal and corporate information contained on a smartphone (call history, incoming messages, phonebook, calendar, location information, banking information, documents, etc.) when the device is lost or stolen in the mobile office environment. The proposed scheme uses triangular fuzzy numbers describing the state of the personal and corporate information to implement a pair-wise comparison matrix. In particular, by pairing the values obtained from the pair-wise comparison matrix with the smartphone, the proposed scheme prevents a third party from accessing the personal information and leaking the corporate information to the outside when the smartphone is lost.
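A minimal sketch of the triangular-fuzzy pair-wise comparison idea (the item weights, the fuzzy ratio operation, and the geometric-mean aggregation below are illustrative assumptions in the style of fuzzy AHP, not the paper's actual scheme):

```python
import math

def tfn_div(a, b):
    # Fuzzy ratio a/b for triangular fuzzy numbers (l, m, u).
    return (a[0] / b[2], a[1] / b[1], a[2] / b[0])

def centroid(t):
    # Defuzzify a triangular number by its centroid.
    return sum(t) / 3.0

# Hypothetical sensitivity of information categories as TFNs (l, m, u).
items = {"call_history": (1, 2, 3), "banking": (6, 7, 8), "documents": (3, 4, 5)}
names = list(items)

# Pair-wise comparison matrix of fuzzy ratios between categories.
matrix = {(i, j): tfn_div(items[i], items[j]) for i in names for j in names}

# Geometric-mean row weight per category (a common fuzzy-AHP style step).
weights = {i: math.prod(centroid(matrix[(i, j)]) for j in names) ** (1 / len(names))
           for i in names}
ranking = sorted(weights, key=weights.get, reverse=True)
print(ranking[0])  # the most sensitive category gets protected first
```

Ranking categories this way could decide which data the scheme locks down first on a lost device.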

Parameter Extraction Based on AR and Arrhythmia Classification through Deep Learning (AR 기반의 특징점 추출과 딥러닝을 통한 부정맥 분류)

  • Cho, Ik-sung;Kwon, Hyeog-soong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.10
    • /
    • pp.1341-1347
    • /
    • 2020
  • Legacy studies on arrhythmia classification have applied neural networks, fuzzy logic, machine learning, and other methods to improve classification accuracy. In particular, deep learning is most frequently used for arrhythmia classification because the error back-propagation algorithm overcomes the limit on the number of hidden layers, a known problem of earlier neural networks. To apply a deep learning model to an ECG signal, an optimal model and parameters must be selected. In this paper, we propose AR-based parameter extraction and arrhythmia classification through deep learning. For this purpose, the R-wave is detected in the noise-removed ECG signal, and the QRS and RR intervals are modelled. The weights were then learned by a supervised deep learning method, and the model was evaluated on verification data. The PVC classification rate was evaluated on the MIT-BIH arrhythmia database; the achieved scores indicate an arrhythmia classification rate of over 97%.
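The AR modelling step can be illustrated with a small Yule-Walker fit (a toy AR(2) series stands in for the QRS/RR-interval features; all values are synthetic):

```python
import random

def autocov(x, lag):
    # Sample autocovariance at the given lag (divided by n throughout).
    n = len(x)
    m = sum(x) / n
    return sum((x[t] - m) * (x[t - lag] - m) for t in range(lag, n)) / n

def ar2_yule_walker(x):
    # Solve the order-2 Yule-Walker equations for
    # x[t] = a1*x[t-1] + a2*x[t-2] + e[t].
    r0, r1, r2 = (autocov(x, k) for k in (0, 1, 2))
    det = r0 * r0 - r1 * r1
    a1 = (r0 * r1 - r1 * r2) / det
    a2 = (r0 * r2 - r1 * r1) / det
    return a1, a2

# Synthetic AR(2) process with known coefficients (0.6, -0.2).
random.seed(42)
x = [0.0, 0.0]
for _ in range(1000):
    x.append(0.6 * x[-1] - 0.2 * x[-2] + random.gauss(0, 1))

a1, a2 = ar2_yule_walker(x)
print(round(a1, 2), round(a2, 2))  # estimates close to (0.6, -0.2)
```

The fitted coefficients, computed per heartbeat segment, are the kind of compact feature vector a deep network can then classify.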

A Study on the Development of Dynamic Models under Inter Port Competition (항만의 경쟁상황을 고려한 동적모형 개발에 관한 연구)

  • 여기태;이철영
    • Journal of the Korean Institute of Navigation
    • /
    • v.23 no.1
    • /
    • pp.75-84
    • /
    • 1999
  • Although many studies on modelling of port competition have been conducted, both the theoretical frame and the methodology are still very weak. In this study, therefore, a new algorithm called ESD (Extensional System Dynamics) for the evaluation of port competition was presented and applied to simulate port systems in Northeast Asia. The detailed objectives of this paper are to develop a Unit Port Model by using the SD (System Dynamics) method; to develop a Competitive Port Model by the ESD method; to perform sensitivity analysis by altering parameters; and to propose port development strategies. For these, the algorithm for the evaluation of port competition was developed in two steps. Firstly, the SD method was adopted to develop the Unit Port models; secondly, the HFP (Hierarchical Fuzzy Process) method was introduced to expand the SD method. The proposed models were then developed and applied to five ports - Pusan, Kobe, Yokohama, Kaoshiung, Keelung - with real data on each port, and several findings were derived. Firstly, the extraction of factors for the Unit Port was accomplished by consulting experts such as researchers, professors and research fellows related to harbors, and five factor groups - location, facility, service, cargo volumes, and port charge - were finally obtained. Secondly, the system's structure consisting of feedback loops was easily found by locating representative and detailed factors on the keyword network of an STGB map. Thirdly, for the target year of 2003, the simulation for Pusan port revealed that the number of liners would increase from 829 ships to 1,450 ships and container cargo volumes from 4.56 million TEU to 7.74 million TEU. It also revealed that, because of the increased liners and container cargo volumes, the length of berth should be expanded from 2,162m to 4,729m. This berth expansion resulted in a decrease in the number of congested ships from 97 to 11.
It was also found that port charges fluctuated. Results of simulations for Kobe, Yokohama, Kaoshiung and Keelung in Northeast Asia were also acquired. Finally, the inter-port competition models developed by the ESD method were used to simulate container cargo volumes for Pusan port. The results revealed that under the competitive situation the container cargo volume was smaller than in the non-competitive situation, which means Pusan port lacks competitive power relative to other ports. The developed models were then applied to estimate changes in container cargo volumes under competition by altering several parameters. The results were found to be very helpful for port managers who are in charge of planning port development.
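The stock-flow feedback idea behind an SD Unit Port model can be sketched as a toy loop in which cargo growth raises congestion and congestion damps further growth (all coefficients are invented; only the 4.56 million TEU starting figure comes from the abstract):

```python
# Toy stock-flow loop: one stock (cargo), one capacity constraint,
# one negative feedback through congestion. Units and rates are assumed.
cargo, berth_capacity = 4.56, 6.0   # million TEU; capacity is hypothetical
history = []
for year in range(5):
    congestion = max(0.0, cargo / berth_capacity - 0.7)  # pressure term
    growth = 0.12 * (1.0 - congestion)                   # damped inflow rate
    cargo *= 1.0 + growth
    history.append(round(cargo, 2))
print(history)  # → [5.07, 5.59, 6.11, 6.61, 7.08]
```

Cargo keeps rising but its growth rate shrinks each year as congestion builds, which is the qualitative behaviour that motivates the berth-expansion result in the abstract.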


A Strategic Approach to Competitiveness of ASEAN's Container Ports in International Logistics (국제물류전략에 있어서 ASEAN의 컨데이너항만 경쟁력에 관한 연구)

  • 김진구;이종인
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2003.05a
    • /
    • pp.273-280
    • /
    • 2003
  • The purpose of this study is to identify and evaluate the competitiveness of ports in ASEAN (Association of Southeast Asian Nations), a region that plays a leading role as a hub of international logistics strategies amid changes in the international logistics environment. This region represents the most severe competition among mega hub ports in the world in terms of container cargo throughput at the onset of the 21st century. The research method in this study accounted for overlapping between attributes and introduced the HFP method, which can perform mathematical operations. The scope of this study was strictly confined to the ports of ASEAN, which cover the top 100 of the 350 container ports presented in the Containerisation International Yearbook 2002 with reference to container throughput. The results of this study show Singapore in the number one position. Even compared with major ports in Korea (after obtaining comparative ratings and applying the same data and evaluation structure), the number one position still goes to Singapore, then Busan(2) and Manila(2), followed by Port Klang(4), Tanjung Priok(5), Tanjung Perak(6), Bangkok(7), Inchon(8), Laem Chabang(9) and Penang(9). In terms of its main contributions, this is the first empirical study to apply the combined detailed and representative attributes to the advanced HFP model, enhanced by the KJ method, to evaluate port competitiveness in ASEAN. Up to now, no research has comprehensively applied such a sophisticated port methodology while discussing the variety of changes in port development and the terminal transfers of major shipping lines. Moreover, the comparative evaluation between major ports in Korea and ASEAN, and the resulting presentation of the comparative competitiveness of Korean ports, is a notable achievement of this study.
To reinforce this study, further complementary research is needed, including cost factors, which could not be applied to modelling the subject ports owing to the lack of consistently qualified data across ASEAN.
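The weighted-aggregation idea behind HFP-style port ranking can be sketched as follows (the attribute weights and scores are illustrative assumptions, not the study's data; a crisp weighted sum stands in for the fuzzy aggregation):

```python
# Hypothetical attribute weights (summing to 1) and per-port scores in [0, 1].
weights = {"location": 0.3, "facility": 0.25, "service": 0.2,
           "cargo": 0.15, "charge": 0.1}
ports = {
    "Singapore": {"location": 0.95, "facility": 0.9, "service": 0.95,
                  "cargo": 1.0, "charge": 0.7},
    "Busan":     {"location": 0.85, "facility": 0.8, "service": 0.8,
                  "cargo": 0.85, "charge": 0.8},
    "Manila":    {"location": 0.7, "facility": 0.6, "service": 0.65,
                  "cargo": 0.6, "charge": 0.9},
}

# Aggregate each port's attribute scores into one competitiveness index.
score = {p: sum(weights[a] * v[a] for a in weights) for p, v in ports.items()}
ranking = sorted(score, key=score.get, reverse=True)
print(ranking)  # → ['Singapore', 'Busan', 'Manila']
```

The real HFP method additionally handles overlap between attributes and fuzzy ratings, but the ranking step reduces to an aggregation of this shape.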


A Desirability Function-Based Multi-Characteristic Robust Design Optimization Technique (호감도 함수 기반 다특성 강건설계 최적화 기법)

  • Jong Pil Park;Jae Hun Jo;Yoon Eui Nahm
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.46 no.4
    • /
    • pp.199-208
    • /
    • 2023
  • The Taguchi method is one of the most popular approaches for design optimization such that performance characteristics become robust to uncontrollable noise variables. However, most previous Taguchi method applications have addressed a single-characteristic problem, while problems with multiple characteristics are more common in practice. The multi-criteria decision making (MCDM) problem is to select the optimal one among multiple alternatives by integrating a number of criteria that may conflict with each other. Representative MCDM methods include TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution), GRA (Grey Relational Analysis), PCA (Principal Component Analysis), fuzzy logic systems, and so on. Therefore, numerous studies have combined the original Taguchi method with MCDM methods to deal with the multi-characteristic design problem. In the MCDM problem, multiple criteria generally have different measurement units, so their physical values may differ greatly, making it difficult to integrate the measurements for the criteria. Therefore, normalization techniques are usually utilized to convert the different units of the criteria into one identical unit. Four normalization techniques are commonly used in MCDM problems: vector normalization and the max-min, max, and sum variants of linear scale transformation. However, these normalization techniques have several shortcomings and do not adequately incorporate practical considerations. For example, if a certain alternative has the maximum data value for a certain criterion, that alternative is selected as the solution in the original process; however, if that maximum value does not satisfy the degree of fulfillment required by the designer or customer, the alternative may not be an acceptable solution. To solve this problem, this paper employs the desirability function proposed in our previous research. 
The desirability function uses an upper limit and a lower limit in the normalization process. The threshold points establishing the upper and lower limits express the degree of fulfillment required by the designer or customer. This paper proposes a new design optimization technique for the multi-characteristic design problem by integrating the Taguchi method and our desirability functions. Finally, the proposed technique obtains an optimal solution that is robust across multiple performance characteristics.
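The limit-based normalization described above can be sketched with a Derringer-style desirability (an assumed functional form; the limits, exponents and responses are illustrative, not the paper's):

```python
import math

def desirability(y, lower, upper, r=1.0):
    # Larger-the-better desirability: 0 below the lower threshold,
    # 1 above the upper threshold, a power curve in between.
    if y <= lower:
        return 0.0
    if y >= upper:
        return 1.0
    return ((y - lower) / (upper - lower)) ** r

def overall(ds):
    # Composite desirability: geometric mean across characteristics,
    # so any characteristic at zero vetoes the whole design.
    return math.prod(ds) ** (1.0 / len(ds))

d1 = desirability(7.2, lower=5.0, upper=10.0)   # strength-like response
d2 = desirability(0.8, lower=0.0, upper=1.0)    # yield-like response
print(round(overall([d1, d2]), 3))  # → 0.593
```

Unlike max-min normalization, a response below the lower threshold drives its desirability, and hence the whole composite, to zero, which encodes the required degree of fulfillment directly.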

A Brief Empirical Verification Using Multiple Regression Analysis on the Measurement Results of Seaport Efficiency of AHP/DEA-AR (다중회귀분석을 이용한 AHP/DEA-AR 항만효율성 측정결과의 실증적 검증소고)

  • Park, Ro-kyung
    • Journal of Korea Port Economic Association
    • /
    • v.32 no.4
    • /
    • pp.73-87
    • /
    • 2016
  • The purpose of this study is to investigate the empirical results of Analytic Hierarchy Process/Data Envelopment Analysis-Assurance Region(AHP/DEA-AR) by using multiple regression analysis during the period of 2009-2012 with 5 inputs (number of gantry cranes, number of berth, berth length, terminal yard, and mean depth) and 2 outputs (container TEU, and number of direct calling shipping companies). Assurance Region(AR) is the most important tool to measure the efficiency of seaports, because individual seaports are characterized in terms of inputs and outputs. Traditional AHP and multiple regression analysis techniques have been used for measuring the AR. However, few previous studies exist in the field of seaport efficiency measurement. The main empirical results of this study are as follows. First, the efficiency ranking comparison between the two models (AHP/DEA-AR and multiple regression) using the Wilcoxon signed-rank test and Mann-Whitney signed-rank sum test were matched with the average level of 84.5 % and 96.3% respectively. When data for four years are used, the ratios of the significant probability are decreased to 61.4% and 92.5%. The policy implication of this study is that the policy planners of Korean port should introduce AHP/DEA-AR and multiple regression analysis when they measure the seaport efficiency and consider the port investment for enhancing the efficiency of inputs and outputs. The next study will deal with the subjects introducing the Fuzzy method, non-radial DEA, and the mixed analysis between AHP/DEA-AR and multiple regression analysis.
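As a simpler stand-in for the paper's rank-agreement checks, a Spearman rank correlation between two hypothetical efficiency rankings (not the Wilcoxon or Mann-Whitney tests actually used in the study) looks like this:

```python
def spearman(ranks_a, ranks_b):
    # Spearman's rho for two rankings without ties:
    # rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)).
    n = len(ranks_a)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - 6 * d2 / (n * (n * n - 1))

dea_ar = [1, 2, 3, 4, 5, 6]    # illustrative ranks from AHP/DEA-AR
regress = [2, 1, 3, 4, 6, 5]   # illustrative ranks from multiple regression
print(round(spearman(dea_ar, regress), 3))  # → 0.886
```

A rho near 1 corresponds to the high agreement levels (84.5%, 96.3%) the study reports between the two measurement models.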

Predictive Clustering-based Collaborative Filtering Technique for Performance-Stability of Recommendation System (추천 시스템의 성능 안정성을 위한 예측적 군집화 기반 협업 필터링 기법)

  • Lee, O-Joun;You, Eun-Soon
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.119-142
    • /
    • 2015
  • With the explosive growth in the volume of information, Internet users are experiencing considerable difficulties in obtaining necessary information online. Against this backdrop, ever-greater importance is being placed on recommender systems that provide information catered to user preferences and tastes in an attempt to address issues associated with information overload. To this end, a number of techniques have been proposed, including content-based filtering (CBF), demographic filtering (DF) and collaborative filtering (CF). Among them, CBF and DF require external information and thus cannot be applied to a variety of domains. CF, on the other hand, is widely used since it is relatively free from the domain constraint. The CF technique is broadly classified into memory-based CF, model-based CF and hybrid CF. Model-based CF addresses the drawbacks of CF by considering the Bayesian model, clustering model or dependency network model. This filtering technique not only improves the sparsity and scalability issues but also boosts predictive performance. However, it involves expensive model-building and results in a tradeoff between performance and scalability. Such a tradeoff is attributed to reduced coverage, which is a type of sparsity issue. In addition, expensive model-building may lead to performance instability, since changes in the domain environment cannot be immediately incorporated into the model due to the high costs involved. Cumulative changes in the domain environment that have failed to be reflected eventually undermine system performance. This study incorporates the Markov model of transition probabilities and the concept of fuzzy clustering with CBCF to propose predictive clustering-based CF (PCCF) that solves the issues of reduced coverage and unstable performance. The method mitigates performance instability by tracking changes in user preferences and bridging the gap between the static model and dynamic users.
Furthermore, the issue of reduced coverage is also addressed by expanding coverage based on transition probabilities and clustering probabilities. The proposed method consists of four processes. First, user preferences are normalized in preference clustering. Second, changes in user preferences are detected from review score entries during preference transition detection. Third, user propensities are normalized using patterns of changes (propensities) in user preferences in propensity clustering. Lastly, the preference prediction model is developed to predict user preferences for items during preference prediction. The proposed method has been validated by testing robustness against performance instability and the scalability-performance tradeoff. The initial test compared and analyzed the performance of individual recommender systems enabled by IBCF, CBCF, ICFEC and PCCF under an environment where data sparsity had been minimized. The following test adjusted the optimal number of clusters in CBCF, ICFEC and PCCF for a comparative analysis of subsequent changes in system performance. The test results revealed that the suggested method produced only insignificant improvement in performance in comparison with the existing techniques. In addition, it failed to achieve significant improvement in the standard deviation, which indicates the degree of data fluctuation. Notwithstanding, it resulted in marked improvement over the existing techniques in terms of range, which indicates the level of performance fluctuation. The level of performance fluctuation before and after model generation improved by 51.31% in the initial test. In the following test, there was a 36.05% improvement in the level of performance fluctuation driven by the changes in the number of clusters. This signifies that the proposed method, despite the slight performance improvement, clearly offers better performance stability compared to the existing techniques.
Further research on this study will be directed toward enhancing the recommendation performance that failed to demonstrate significant improvement over the existing techniques. The future research will consider the introduction of a high-dimensional parameter-free clustering algorithm or deep learning-based model in order to improve performance in recommendations.
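The transition-probability idea at the core of PCCF can be sketched with a toy first-order Markov estimate over invented preference-cluster histories (cluster labels and sequences are hypothetical, not the paper's data):

```python
from collections import Counter, defaultdict

# Each user's history is a sequence of preference-cluster labels over time.
histories = [
    ["A", "A", "B", "B", "C"],
    ["A", "B", "B", "C", "C"],
    ["A", "A", "B", "C"],
]

# Count observed transitions to estimate P(next cluster | current cluster).
counts = defaultdict(Counter)
for seq in histories:
    for cur, nxt in zip(seq, seq[1:]):
        counts[cur][nxt] += 1

def predict(current):
    # Most likely next preference cluster under the estimated transitions.
    return max(counts[current], key=counts[current].get)

print(predict("B"))  # → C
```

Predicting a user's next cluster, rather than assuming a static one, is what lets the model track drifting preferences and extend coverage beyond the clusters a user has already rated in.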

Analysis of Trading Performance on Intelligent Trading System for Directional Trading (방향성매매를 위한 지능형 매매시스템의 투자성과분석)

  • Choi, Heung-Sik;Kim, Sun-Woong;Park, Sung-Cheol
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.3
    • /
    • pp.187-201
    • /
    • 2011
  • The KOSPI200 index is the Korean stock price index consisting of 200 actively traded stocks in the Korean stock market. Its base value of 100 was set on January 3, 1990. The Korea Exchange (KRX) developed derivatives markets on the KOSPI200 index. The KOSPI200 index futures market, introduced in 1996, has become one of the most actively traded index markets in the world. Traders can profit by entering a long position in a KOSPI200 index futures contract if they expect the index to rise, and likewise by entering a short position if they expect it to decline. Basically, KOSPI200 index futures trading is a short-term zero-sum game, and therefore most futures traders use technical indicators. Advanced traders make stable profits by using the system trading technique, also known as algorithmic trading. Algorithmic trading uses computer programs to receive real-time stock market data, analyze stock price movements with various technical indicators, and automatically enter trading orders - timing, price and quantity - without any human intervention. Recent studies have shown the usefulness of artificial intelligence systems in forecasting stock prices and investment risk. KOSPI200 index data is numerical time-series data: a sequence of data points measured at successive uniform time intervals such as minute, day, week or month. KOSPI200 index futures traders use technical analysis to find patterns on the time-series chart. Although there are many technical indicators, their results indicate one of three market states: bull, bear or flat. Most strategies based on technical analysis are divided into trend-following and non-trend-following strategies. Both decide the market state based on the patterns of the KOSPI200 index time-series data. This fits well with a Markov model (MM).
The next price will be higher than, lower than, or similar to the last price, and it is influenced by the last price; however, nobody knows in advance whether it will go up, go down or stay flat. Hence, a hidden Markov model (HMM) is better fitted than an MM. HMMs are divided into discrete HMMs (DHMM) and continuous HMMs (CHMM); the only difference lies in their representation of state probabilities. A DHMM uses a discrete probability density function, while a CHMM uses a continuous probability density function such as a Gaussian mixture model. KOSPI200 index values are real numbers that follow a continuous probability density function, so a CHMM is more appropriate than a DHMM for the KOSPI200 index. In this paper, we present an artificial-intelligence trading system based on CHMM for KOSPI200 index futures system traders. Traders have accumulated experience in technical trading ever since the introduction of the KOSPI200 index futures market, applying many strategies to profit from trading KOSPI200 index futures. Some strategies are based on technical indicators such as moving averages or stochastics, and others on candlestick patterns such as three outside up, three outside down, harami or doji star. We present a trading system using a moving-average cross strategy based on CHMM and compare it to a traditional algorithmic trading system. We set the moving-average parameter values to common values used by market practitioners. Empirical results compare the simulation performance with the traditional algorithmic trading system using more than 20 years of daily KOSPI200 index data. Our suggested trading system shows higher trading performance than naive system trading.
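The baseline moving-average cross rule that the CHMM system is compared against can be sketched as follows (window lengths and prices are made up, not the practitioners' values used in the paper):

```python
def sma(prices, n):
    # Simple moving average over a window of n prices.
    return [sum(prices[i - n + 1:i + 1]) / n for i in range(n - 1, len(prices))]

prices = [100, 101, 103, 102, 104, 107, 106, 108, 110, 109]
fast, slow = 2, 4

# Align both averaged series so each pair refers to the same trading day.
f = sma(prices, fast)[slow - fast:]
s = sma(prices, slow)

# Cross rule: go long while the fast average sits above the slow one.
signals = ["long" if a > b else "short" for a, b in zip(f, s)]
print(signals[-1])  # → long
```

The CHMM layer in the paper then gates these raw signals by the inferred hidden market state, rather than acting on every cross.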

Web Cogmulator : The Web Design Simulator Using Fuzzy Cognitive Map (Web Cogmulator : 퍼지 인식도를 이용한 웹 디자인 시뮬레이터에 관한 연구)

  • 이건창;정남호;조형래
    • Proceedings of the Korea Inteligent Information System Society Conference
    • /
    • 2000.04a
    • /
    • pp.357-364
    • /
    • 2000
  • Although design elements are critical to the Web as a medium, existing Web design lacks a concrete design methodology. This is especially true of Internet shopping malls, which must attract many consumers and trigger purchases, yet strategic methodologies for designing them remain scarce. Previous studies have identified product variety, service, promotion, navigation volume, convenience and user interface as important factors, but these findings are hard to apply when actually designing a shopping mall, because the factors influence one another: a complex user interface increases navigation and reduces convenience, while adding a search engine reduces navigation and increases convenience even as the number of products grows. Building a shopping mall on these factors therefore requires a close examination of the causal relationships among them and of how they affect consumers' purchasing behavior. This study uses fuzzy cognitive maps to extract the factors influencing consumers' purchasing behavior in Internet shopping malls and to derive the causal relationships among those factors, and proposes Web-Cogmulator as a more concrete and strategic method for designing Internet shopping malls. Because Web-Cogmulator converts consumers' tacit purchasing knowledge into explicit knowledge held in a knowledge base, it can simulate and infer consumers' purchasing behavior under changes in the various shopping-mall factors. The usefulness of Web-Cogmulator was verified through inference simulations based on basic Internet shopping-mall scenarios.
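The inference loop of a fuzzy cognitive map like the one behind Web-Cogmulator can be sketched as follows (the concepts, edge weights, and sigmoid squashing are illustrative assumptions, not the paper's elicited map):

```python
import math

# Signed causal edges W[(src, dst)]: e.g. a more complex UI increases
# navigation, and more navigation reduces convenience (weights invented).
concepts = ["ui_complexity", "navigation", "convenience", "purchase"]
W = {
    ("ui_complexity", "navigation"): 0.8,
    ("navigation", "convenience"): -0.7,
    ("convenience", "purchase"): 0.9,
}
state = {"ui_complexity": 1.0, "navigation": 0.0,
         "convenience": 0.5, "purchase": 0.5}

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Iterate the map: each concept's new activation is a squashed sum of
# its own state plus the weighted activations flowing into it.
for _ in range(10):
    state = {c: sigmoid(state[c] + sum(state[src] * w
                                       for (src, dst), w in W.items()
                                       if dst == c))
             for c in concepts}
print(round(state["purchase"], 3))
```

Running such a loop under different scenario inputs (e.g. raising `ui_complexity`) is the kind of what-if simulation of purchasing behavior the abstract describes.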
