• Title/Summary/Keyword: NN techniques

Enhancement of Classification Accuracy and Environmental Information Extraction Ability for KOMPSAT-1 EOC using Image Fusion (영상합성을 통한 KOMPSAT-1 EOC의 분류정확도 및 환경정보 추출능력 향상)

  • Ha, Sung Ryong;Park, Dae Hee;Park, Sang Young
    • Journal of the Korean Association of Geographic Information Studies / v.5 no.2 / pp.16-24 / 2002
  • Classification of land cover characteristics is a major application of remote sensing. The goal of this study is to propose an optimal classification process for the electro-optical camera (EOC) of the Korea Multi-Purpose Satellite (KOMPSAT). The study was carried out on Landsat TM, a high-spectral-resolution image, and KOMPSAT EOC, a high-spatial-resolution image, of the Miho river basin, Korea, and proceeded in two stages: image fusion of TM and EOC to obtain an image with both high spectral and high spatial resolution, followed by land cover classification on the fused image. Four fusion techniques, IHS, HPF, CN, and wavelet transform, were applied and compared for topographic interpretation. The fused images were classified with a radial basis function neural network (RBF-NN) and an artificial neural network (ANN) classification model. The proposed RBF-NN was validated for the study area, and the optimal model structure and parameters were identified for each input band combination. The results propose an optimal classification process for KOMPSAT EOC that improves thematic mapping and the extraction of environmental information.
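
As an illustration of the RBF-NN classifier named in this abstract (not the authors' exact model), the sketch below builds radial-basis features from K-means centers and trains a linear output layer on synthetic multi-band pixel data; the band values, labels, and hyperparameters are placeholders.

```python
# Minimal RBF neural-network classifier sketch, assuming scikit-learn is available;
# "fused image" pixels and land-cover labels below are synthetic stand-ins.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def rbf_features(X, centers, sigma):
    """Gaussian radial-basis activations for every sample/center pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                    # e.g. a fused TM/EOC band combination
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # stand-in land-cover labels

# Hidden layer: RBF centers from K-means; output layer: linear (logistic) classifier.
centers = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X).cluster_centers_
H = rbf_features(X, centers, sigma=1.0)
clf = LogisticRegression(max_iter=1000).fit(H, y)
print("training accuracy:", clf.score(H, y))
```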

A New Memory-based Learning using Dynamic Partition Averaging (동적 분할 평균을 이용한 새로운 메모리 기반 학습기법)

  • Yih, Hyeong-Il
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.4 / pp.456-462 / 2008
  • Classification, in which a new datum is assigned to one of a set of given classes, is one of the most widely used data mining techniques. Memory-Based Reasoning (MBR) is a reasoning method for classification problems. MBR simply stores many patterns, represented by their original feature vectors, in memory without deriving rules, and uses a distance function to classify a test pattern. As the number of training patterns grows, however, both the memory requirement and the amount of computation needed for reasoning grow with it. NGE, FPA, and RPA are well-known MBR algorithms that have been shown to perform satisfactorily, but they suffer from heavy memory usage and lengthy computation. In this paper, we propose the DPA (Dynamic Partition Averaging) algorithm. It chooses partition points by computing the Gini index over the entire pattern space and partitions that space dynamically. If the patterns in a partition all belong to a single class, a representative pattern is generated from the partition; otherwise, the relevant partitions are partitioned again by the same method. The proposed method exhibits performance comparable to k-NN while storing far fewer patterns, and gives better results than the EACH system, which implements the NGE theory, as well as FPA and RPA.
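
The following is a rough, simplified rendering of the Dynamic Partition Averaging idea: split the pattern space at the feature and threshold minimizing the Gini index, keep splitting impure partitions, average pure partitions into representative patterns, and classify by nearest representative. The split-point choice (median per feature) and the data are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def gini(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def dpa(X, y, min_size=2):
    """Return a list of (representative_vector, class_label) pairs."""
    if gini(y) == 0.0 or len(y) <= min_size:            # pure (or tiny) partition
        return [(X.mean(axis=0), np.bincount(y).argmax())]
    best = None
    for f in range(X.shape[1]):                          # pick the best Gini split
        t = np.median(X[:, f])
        left = X[:, f] <= t
        if left.all() or not left.any():
            continue
        score = left.mean() * gini(y[left]) + (~left).mean() * gini(y[~left])
        if best is None or score < best[0]:
            best = (score, f, t)
    if best is None:
        return [(X.mean(axis=0), np.bincount(y).argmax())]
    _, f, t = best
    left = X[:, f] <= t
    return dpa(X[left], y[left], min_size) + dpa(X[~left], y[~left], min_size)

def classify(x, representatives):
    """1-NN over representative patterns instead of all training patterns."""
    reps = np.array([r for r, _ in representatives])
    labels = np.array([c for _, c in representatives])
    return labels[np.argmin(np.linalg.norm(reps - x, axis=1))]

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2)); y = (X[:, 0] > 0).astype(int)
reps = dpa(X, y)
print(len(reps), "representatives; predicted class:", classify(np.array([1.0, 0.0]), reps))
```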

Advancing Process Plant Design: A Framework for Design Automation Using Generative Neural Network Models

  • Minhyuk JUNG;Jaemook CHOI;Seonu JOO;Wonseok CHOI;Hwikyung Chun
    • International conference on construction engineering and project management / 2024.07a / pp.1285-1285 / 2024
  • In process plant construction, the implementation of design automation technologies is pivotal in reducing the timeframes associated with the design phase and in enabling the generation and evaluation of a variety of design alternatives, thereby facilitating the identification of optimal solutions. These technologies can play a crucial role in ensuring the successful delivery of projects. Previous research in the domain of design automation has primarily focused on parametric design in architectural contexts and on the automation of equipment layout and pipe routing within plant engineering, predominantly employing rule-based algorithms. Nevertheless, these studies are constrained by the limited flexibility of their models, which narrows the scope for generating alternative solutions and complicates the process of exploring comprehensive solutions using nonlinear optimization techniques as the number of design and engineering parameters increases. This research introduces a framework for automating plant design through the use of generative neural network models to overcome these challenges. The framework is applicable to the layout problems of process plants, covering the equipment necessary for production processes and the facilities for essential resources and their interconnections. The development of the proposed Neural-network (NN) based Generative Design Model unfolds in four stages: (a) Rule-based Model Development: This initial phase involves the development of rule-based models for layout generation and evaluation, where the generation model produces layouts based on predefined parameters, and the evaluation model assesses these layouts using various performance metrics. (b) Neural Network Model Development: This phase transitions towards neural network models, establishing a NN-based layout generation model utilizing Generative Adversarial Network (GAN)-based methods and a NN-based layout evaluation model. (c) Model Optimization: The third phase is dedicated to optimizing the models through Bayesian Optimization, aiming to extend the exploration space beyond the limitations of rule-based models. (d) Inverse Design Model Development: The concluding phase employs an inverse design method to merge the generative and evaluative networks, resulting in a model that outputs layout designs to meet specific performance objectives. This study aims to augment the efficiency and effectiveness of the design process in process plant construction, transcending the limitations of conventional rule-based approaches and contributing to the achievement of successful project outcomes.
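
A minimal PyTorch sketch of the kind of GAN-based layout generator described in stage (b) is given below; the "layout" is just a flat vector of equipment (x, y) coordinates, and the network sizes, losses, and rule-based stand-in data are assumptions for illustration, not the authors' model.

```python
import torch
import torch.nn as nn

N_EQUIP, LATENT = 8, 16   # assumed number of equipment items and latent size

G = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, 2 * N_EQUIP))
D = nn.Sequential(nn.Linear(2 * N_EQUIP, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_layouts(batch):
    """Stand-in for stage (a) rule-based layouts: equipment on a line plus noise."""
    grid = torch.linspace(0, 1, N_EQUIP).repeat(batch, 2, 1).transpose(1, 2)
    return (grid + 0.02 * torch.randn(batch, N_EQUIP, 2)).reshape(batch, -1)

for step in range(200):
    real = real_layouts(32)
    fake = G(torch.randn(32, LATENT))
    # Discriminator: rule-based layouts -> 1, generated layouts -> 0
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: try to fool the discriminator
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("sample layout:\n", G(torch.randn(1, LATENT)).detach().reshape(N_EQUIP, 2))
```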

Estimation of Creep Cavities Using Neural Network and Progressive Damage Modeling (신경회로망과 점진적 손상 모델링을 이용한 크리프 기공의 평가)

  • Jo, Seok-Je;Jeong, Hyeon-Jo
    • Transactions of the Korean Society of Mechanical Engineers A / v.24 no.2 s.173 / pp.455-463 / 2000
  • In order to develop nondestructive techniques for the quantitative estimation of creep damage, a series of crept copper samples was prepared and their ultrasonic velocities were measured. Velocities measured in three directions with respect to the loading axis decreased nonlinearly, and their anisotropy increased, as a function of creep-induced porosity. A progressive damage model was described to explain the void-velocity relationship, including the anisotropy. Comparison with the model showed that the creep voids evolve from spheres toward flat oblate spheroids with the minor axis aligned along the stress direction. The model allowed us to determine the average aspect ratio of the voids for a given porosity content. A novel technique, the back-propagation neural network (BPNN), was applied to estimate the porosity content due to creep damage. The measured velocities were used to train the BP classifier, and its accuracy was tested on another set of creep samples containing 0 to 0.7 % void content. When the void aspect ratio was used as an input parameter together with the velocity data, the NN algorithm provided a much better estimate of void content.
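
As a hedged illustration of the back-propagation estimation step, the sketch below maps three directional ultrasonic velocities plus a void aspect ratio to porosity with a small scikit-learn network; the data are synthetic placeholders with a toy velocity-porosity trend, not the paper's measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
porosity = rng.uniform(0.0, 0.007, size=200)               # 0 to 0.7 % void content
aspect = 1.0 - 60.0 * porosity                              # toy flattening trend of voids
velocities = np.column_stack([
    4.7 - 20.0 * porosity + 0.01 * rng.normal(size=200),   # along the stress axis
    4.7 - 12.0 * porosity + 0.01 * rng.normal(size=200),   # transverse direction 1
    4.7 - 12.0 * porosity + 0.01 * rng.normal(size=200),   # transverse direction 2
])
X = np.column_stack([velocities, aspect])                   # velocities + aspect ratio as input
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
net.fit(X, porosity)
print("R^2 on training data:", net.score(X, porosity))
```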

Deep learning-based sensor fault detection using S-Long Short Term Memory Networks

  • Li, Lili;Liu, Gang;Zhang, Liangliang;Li, Qing
    • Structural Monitoring and Maintenance / v.5 no.1 / pp.51-65 / 2018
  • A number of sensing techniques have been implemented for detecting defects in civil infrastructure in place of on-site human inspection in structural health monitoring. However, the issue of faults in the sensors themselves has received little attention, even though it may lead to incorrect interpretation of data and false alarms. To overcome these challenges, this article presents a deep learning-based method with a new architecture of Stateful Long Short Term Memory Neural Networks (S-LSTM NN) for detecting sensor faults without going into details of the fault features. Because LSTMs are capable of learning data features automatically, the proposed method works without an accurate mathematical model. The detection of four types of sensor faults is studied using the non-stationary acceleration responses of a three-span continuous bridge under operational conditions. A deep network model is applied to the measured bridge data to detect sensor faults by estimation. Another set of sensor output data is used to supervise the network parameters, and the backpropagation algorithm fine-tunes the parameters to establish a deep self-coding network model. The residuals between the true values and the values predicted by the deep S-LSTM network are statistically analyzed to determine the fault threshold of a sensor. An experimental study with a cable-stayed bridge further indicated that the proposed method is robust in detecting sensor faults.
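
The residual-thresholding idea can be sketched as follows: train an LSTM to predict a sensor's next value, estimate a threshold from the prediction residuals on healthy data, and flag samples whose residual exceeds it. The architecture, the simulated signal, and the mean-plus-three-sigma rule are illustrative choices, not the paper's exact setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
t = torch.linspace(0, 20, 500)
signal = torch.sin(t) + 0.05 * torch.randn_like(t)          # healthy acceleration-like signal

win = 20
X = torch.stack([signal[i:i + win] for i in range(len(signal) - win)]).unsqueeze(-1)
y = signal[win:]

class Predictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=32, batch_first=True)
        self.out = nn.Linear(32, 1)
    def forward(self, x):
        h, _ = self.lstm(x)
        return self.out(h[:, -1, :]).squeeze(-1)

model = Predictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(100):                                         # train on healthy data
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward(); opt.step()

with torch.no_grad():
    resid = (model(X) - y).abs()
threshold = resid.mean() + 3 * resid.std()                   # fault threshold from residuals

faulty = signal.clone(); faulty[300:] += 1.0                 # simulated bias fault
Xf = torch.stack([faulty[i:i + win] for i in range(len(faulty) - win)]).unsqueeze(-1)
with torch.no_grad():
    flags = (model(Xf) - faulty[win:]).abs() > threshold
print("flagged samples:", int(flags.sum().item()))
```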

Application of an Adaptive Autopilot Design and Stability Analysis to an Anti-Ship Missile

  • Han, Kwang-Ho;Sung, Jae-Min;Kim, Byoung-Soo
    • International Journal of Aeronautical and Space Sciences / v.12 no.1 / pp.78-83 / 2011
  • Traditional autopilot design requires an accurate aerodynamic model and relies on a gain schedule to account for system nonlinearities. This paper presents a control architecture based on dynamic model inversion at a single flight condition with an on-line neural network (NN) that regulates the errors caused by the approximate inversion. This eliminates the need for an extensive design process and accurate aerodynamic data. Simulation results using a full nonlinear six-degree-of-freedom model are presented. The paper also presents a stability evaluation for control systems to which NNs are applied. Although feedback can accommodate uncertainty to meet system performance specifications, uncertainty can also affect the stability of the control system. The importance of robustness has long been recognized, and stability margins were developed to quantify it. However, traditional stability margin techniques based on linear control theory cannot be applied to control systems that use a representative nonlinear control method such as NNs. This paper therefore presents an alternative stability margin technique for control systems with NNs, based on the system responses to an inserted gain multiplier or time delay element.
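
The inserted-gain-multiplier idea can be illustrated with a toy loop: sweep an extra loop gain in a simple simulated control loop and record the largest multiplier for which the response stays bounded. The plant, controller, and transport delay below are placeholders, not the missile model or the adaptive NN controller from the paper.

```python
import numpy as np

def simulate(loop_gain, delay_steps=10, steps=2000, dt=0.01):
    """PD loop around a double integrator with a transport delay and an
    inserted gain multiplier; returns True if the response stays bounded."""
    x, v = 1.0, 0.0                        # position error and rate
    buffer = [0.0] * delay_steps           # delayed control path
    for _ in range(steps):
        u_now = loop_gain * (-8.0 * x - 4.0 * v)   # control with inserted multiplier
        buffer.append(u_now)
        u = buffer.pop(0)                  # control actually applied (delayed)
        v += u * dt
        x += v * dt
        if abs(x) > 100.0:                 # treat large divergence as instability
            return False
    return True

gains = np.linspace(0.1, 50.0, 200)
stable = [k for k in gains if simulate(k)]
print("approximate gain-margin multiplier:", max(stable))
```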

Alphabetical Gesture Recognition using HMM (HMM을 이용한 알파벳 제스처 인식)

  • Yoon, Ho-Sub;Soh, Jung;Min, Byung-Woo
    • Proceedings of the Korean Information Science Society Conference / 1998.10c / pp.384-386 / 1998
  • The use of hand gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction (HCI). Many methods for hand gesture recognition using visual analysis have been proposed, such as syntactic analysis, neural networks (NN), and Hidden Markov Models (HMM). In our research, HMMs are proposed for alphabetical hand gesture recognition. The preprocessing stage consists of three procedures: hand localization, hand tracking, and gesture spotting. The hand localization procedure detects candidate regions on the basis of skin color and motion in an image, using color histogram matching and time-varying edge difference techniques. The hand tracking algorithm finds the centroid of the moving hand region and connects those centroids to produce a trajectory. For the spotting feature database, the proposed approach uses mesh feature codes as the HMM codebook. In our experiments, 1300 alphabetical gestures and 1300 untrained gestures are used for training and testing, respectively. The experimental results demonstrate that the proposed approach yields a high and satisfying recognition rate for images with different sizes, shapes, and skew angles.
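
A minimal sketch of the HMM classification step is shown below: each gesture class gets its own discrete HMM, and a spotted sequence of mesh feature codes is assigned to the model with the highest forward likelihood. The toy transition and emission parameters are invented for illustration; the paper's trained models and codebook are not reproduced.

```python
import numpy as np

def forward_log_likelihood(obs, start, trans, emit):
    """Log P(observation sequence | HMM) via the scaled forward algorithm."""
    alpha = start * emit[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        s = alpha.sum()
        log_lik += np.log(s)
        alpha /= s                         # scaling avoids numerical underflow
    return log_lik

states, codes = 3, 4
start = np.full(states, 1.0 / states)
# Left-to-right transition matrix shared by both toy gesture models.
trans = np.array([[0.7, 0.3, 0.0],
                  [0.0, 0.7, 0.3],
                  [0.0, 0.0, 1.0]])
emit_A = np.array([[0.7, 0.1, 0.1, 0.1],
                   [0.1, 0.7, 0.1, 0.1],
                   [0.1, 0.1, 0.7, 0.1]])
emit_B = emit_A[:, ::-1].copy()            # gesture "B" favours the codes in reverse order

sequence = [0, 0, 1, 1, 2, 2]              # spotted mesh-feature-code trajectory
scores = {
    "A": forward_log_likelihood(sequence, start, trans, emit_A),
    "B": forward_log_likelihood(sequence, start, trans, emit_B),
}
print("recognized gesture:", max(scores, key=scores.get))
```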

Pruning and Matching Scheme for Rotation Invariant Leaf Image Retrieval

  • Tak, Yoon-Sik;Hwang, Een-Jun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.2 no.6 / pp.280-298 / 2008
  • For efficient content-based image retrieval, diverse visual features such as color, texture, and shape have been widely used. In the case of leaf images, further improvement can be achieved based on the following observations. Most plants have uniquely shaped leaves that consist of one or more blades. Hence, blade-based matching can be more efficient than whole-shape matching, since the number and shape of the blades are very effective for filtering out dissimilar leaves. Guaranteeing rotational invariance is also critical for matching accuracy. In this paper, we propose a new shape representation, indexing, and matching scheme for leaf image retrieval. For leaf shape representation, we generate a distance curve, a sequence of distances between the leaf's center and all of its contour points. For matching, we developed a blade-based matching algorithm called rotation invariant-partial dynamic time warping (RI-PDTW). To speed up matching, we suggest two additional techniques: i) priority queue-based pruning of unnecessary blade sequences for rotational invariance, and ii) lower bound-based pruning of unnecessary partial dynamic time warping (PDTW) calculations. We implemented a prototype system on the GEMINI framework [1][2]. Experimental results show that our scheme achieves excellent performance compared to competitive schemes.
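
The matching idea can be sketched, in simplified form, as follows: represent each leaf by its centroid-to-contour distance curve and compare curves with dynamic time warping, taking the minimum over cyclic shifts to approximate rotation invariance. The full RI-PDTW with blade segmentation and the two pruning techniques is not reproduced here.

```python
import numpy as np

def dtw(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def rotation_invariant_distance(curve_a, curve_b, step=4):
    """Minimum DTW distance over cyclic shifts of the query curve."""
    return min(dtw(np.roll(curve_a, s), curve_b)
               for s in range(0, len(curve_a), step))

# Toy distance curves: a 3-bladed leaf and the same leaf rotated by 120 degrees.
theta = np.linspace(0, 2 * np.pi, 90, endpoint=False)
leaf = 1.0 + 0.4 * np.cos(3 * theta)
rotated = np.roll(leaf, 30)
print("RI distance (same leaf, rotated):", rotation_invariant_distance(rotated, leaf))
```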

Modeling with Thin Film Thickness using Machine Learning

  • Kim, Dong Hwan;Choi, Jeong Eun;Ha, Tae Min;Hong, Sang Jeen
    • Journal of the Semiconductor & Display Technology / v.18 no.2 / pp.48-52 / 2019
  • Virtual metrology (VM), one of the advanced process control (APC) techniques, predicts the characteristics of manufactured films using machine learning, saving time and resources. As photoresist can no longer serve as a mask material at the high aspect ratios that accompany reduced CDs, hard masks have been introduced to solve this problem. Among the many hard mask materials, the amorphous carbon layer (ACL) is widely investigated because of its higher etch selectivity than conventional photoresist, high optical transmittance, easy deposition process, and removability by oxygen plasma. In this study, VM with different machine learning algorithms is applied to predict the thickness of ACL, and the trained models are evaluated to determine which shows the best prediction performance. ACL specimens are deposited by plasma-enhanced chemical vapor deposition (PECVD) with four process parameters (pressure, RF power, $C_3H_6$ gas flow, and $N_2$ gas flow). The gradient boosting regression (GBR), random forest regression (RFR), and neural network (NN) algorithms are selected for modeling. The gradient boosting model shows the best performance, with the highest R-squared value. A model for predicting the thickness of the ACL film within the above conditions has been successfully constructed.
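
A hedged sketch of the virtual-metrology comparison follows: fit GBR, RFR, and a small NN to the four process parameters and compare R-squared on held-out data. The data below are synthetic, with an invented thickness response; the study itself uses PECVD measurements.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 4))                       # pressure, RF power, C3H6, N2 (scaled)
thickness = (200 + 80 * X[:, 1] - 40 * X[:, 0] + 30 * X[:, 2] * X[:, 3]
             + 5 * rng.normal(size=300))             # toy process response
X_tr, X_te, y_tr, y_te = train_test_split(X, thickness, random_state=0)

models = {
    "GBR": GradientBoostingRegressor(random_state=0),
    "RFR": RandomForestRegressor(random_state=0),
    "NN": MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "R^2:", round(r2_score(y_te, model.predict(X_te)), 3))
```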

A Review of Machine Learning Algorithms for Fraud Detection in Credit Card Transaction

  • Lim, Kha Shing;Lee, Lam Hong;Sim, Yee-Wai
    • International Journal of Computer Science & Network Security / v.21 no.9 / pp.31-40 / 2021
  • The increasing number of credit card fraud cases has become a considerable problem over the past decades. This phenomenon is due to the expansion of new technologies, including the increased popularity and volume of online banking transactions and e-commerce. To address credit card fraud detection, rule-based approaches have been widely utilized to detect and guard against fraudulent activities. However, they require huge computational power and involve high complexity in defining and building the rule base for pattern matching in order to identify fraud patterns precisely. In addition, they lack the intelligence to predict or analyse transaction data in search of new fraud patterns and strategies. Data mining and machine learning algorithms are therefore proposed in this paper to overcome these shortcomings. The aim of this paper is to highlight the important techniques and methodologies employed in fraud detection while focusing on the existing literature. Methods such as Artificial Neural Networks (ANNs), Support Vector Machines (SVMs), naïve Bayes, k-Nearest Neighbour (k-NN), Decision Tree, and Frequent Pattern Mining algorithms are reviewed and evaluated for their performance in detecting fraudulent transactions.
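
To illustrate how the reviewed classifiers are commonly benchmarked (this is not the authors' experimental setup), the sketch below trains several of the named models on a synthetic, imbalanced "transaction" dataset and reports per-class precision and recall.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for transaction data: roughly 3 % of samples are "fraud".
X, y = make_classification(n_samples=3000, n_features=10, weights=[0.97, 0.03],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    "SVM": SVC(),
    "NaiveBayes": GaussianNB(),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name)
    print(classification_report(y_te, model.predict(X_te), digits=3))
```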