• Title/Summary/Keyword: hybrid techniques


Combination of Brain Cancer with Hybrid K-NN Algorithm using Statistical of Cerebrospinal Fluid (CSF) Surgery

  • Saeed, Soobia;Abdullah, Afnizanfaizal;Jhanjhi, NZ
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.2
    • /
    • pp.120-130
    • /
    • 2021
  • Spinal cord or CSF surgery is a very complex process. It requires continuous pre- and post-operative evaluation to diagnose the disease more reliably, and automatic detection of suspected tumor areas and symptoms of CSF leakage as a tumor develops inside the brain. We propose a new method based on computer software that generates statistical results from data gathered during surgeries and operations; statistical computation and data collection were performed through the Google source for the UK National Cancer Database. The purpose of this study is to address the problems of handling missing values in the hybrid KNN algorithm and of finding the distance to a tumor in brain cancer or CSF images. This research aims to create a framework that can classify the damaged area of a cancer or tumor using high-dimensional image segmentation and the Laplace transformation method. The high-dimensional image segmentation method is implemented through software modelling techniques that measure the width, percentage, and size of cells within the brain; the Laplace transformation, together with the Frobenius matrix, improves the efficiency of the hybrid KNN algorithm by converting missing values into non-zero values. Our proposed algorithm evaluates a wide range of KNN values (K = 1-100) and is demonstrated in a 4-dimensional modulation method that monitors the light field and can be used in the field of light emission. Conclusion: this approach dramatically improves the efficiency of the hybrid KNN method and the detection of the tumor region using the 4-D segmentation method. The simulation results verified that the proposed method achieves 92% sensitivity, 60% specificity, and 70.50% accuracy.
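The abstract does not specify the hybrid pipeline (Laplace transformation, Frobenius matrix) in reproducible detail, but the core k-NN classification step it builds on can be sketched minimally. The toy data, Euclidean distance, and majority vote below are illustrative assumptions, not the authors' implementation:

```python
from collections import Counter
import math

def knn_predict(train, labels, query, k=3):
    """Plain k-NN: majority vote among the k nearest training points."""
    # Euclidean distance from the query to every training point
    dists = sorted((math.dist(x, query), y) for x, y in zip(train, labels))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# toy 2-D data: class 0 clustered near the origin, class 1 near (5, 5)
X = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
y = [0, 0, 0, 1, 1, 1]
print(knn_predict(X, y, (0.5, 0.5), k=3))  # -> 0
print(knn_predict(X, y, (5.5, 5.5), k=3))  # -> 1
```

Sweeping K over a range (here, K = 1-100 as in the paper) would simply repeat this call with different `k` and keep the best-validated value.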

Using Bayesian tree-based model integrated with genetic algorithm for streamflow forecasting in an urban basin

  • Nguyen, Duc Hai;Bae, Deg-Hyo
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2021.06a
    • /
    • pp.140-140
    • /
    • 2021
  • Urban flood management is a crucial and challenging task, particularly in developed cities. Therefore, accurate prediction of urban flooding under heavy precipitation is critically important to address such a challenge. In recent years, machine learning techniques have received considerable attention for their strong learning ability and suitability for modeling complex and nonlinear hydrological processes. Moreover, a survey of the published literature finds that hybrid computational intelligence methods using nature-inspired algorithms have been increasingly employed to predict or simulate streamflow with high reliability. The present study aims to propose a novel approach, an ensemble-tree Bayesian Additive Regression Trees (BART) model incorporating a nature-inspired algorithm, to predict hourly multi-step-ahead streamflow. To this end, a hybrid intelligent model was developed, namely GA-BART, combining the BART model with a genetic algorithm (GA). The Jungrang urban basin located in Seoul, South Korea, was selected as the case study. A database was established from 39 heavy rainfall events between 2003 and 2020, collected from the rain gauges and monitoring station system in the basin. For the goal of this study, separate models will be developed for 1-hour through 6-hour-ahead streamflow predictions. In addition, the hybrid BART model is compared with a baseline model, support vector regression. It is expected that the hybrid BART model has robust performance and can be an optional choice for streamflow forecasting in urban basins.
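Fitting BART itself requires a dedicated package, but the genetic-algorithm wrapper that tunes the model can be sketched generically. Here the objective is a stand-in for a validation error surface (assumed minimised at hyperparameters (3.0, 0.5)); the population size, crossover, and mutation settings are illustrative assumptions:

```python
import random

def genetic_search(objective, bounds, pop_size=20, generations=30, seed=0):
    """Minimal real-coded GA: elitist survival, blend crossover, random mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=objective)[: pop_size // 2]  # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]  # blend crossover
            if rng.random() < 0.2:                              # occasional mutation
                i = rng.randrange(len(child))
                lo, hi = bounds[i]
                child[i] = rng.uniform(lo, hi)
            children.append(child)
        pop = elite + children
    return min(pop, key=objective)

# stand-in objective: pretend validation RMSE is minimised at (3.0, 0.5)
best = genetic_search(lambda p: (p[0] - 3) ** 2 + (p[1] - 0.5) ** 2,
                      bounds=[(0, 10), (0, 1)])
print(best)  # a point near (3.0, 0.5)
```

In a GA-BART setup, `objective` would retrain BART with the candidate hyperparameters and return its validation error on held-out rainfall events.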


Developing a Book Recommendation System Using Filtering Techniques (필터링 기법을 이용한 도서 추천 시스템 구축)

  • Chung, Young-Mee;Lee, Yong-Gu
    • Journal of Information Management
    • /
    • v.33 no.1
    • /
    • pp.1-17
    • /
    • 2002
  • This study examined several recommendation techniques to construct an effective book recommender system for a library. Experiments revealed that a hybrid recommendation technique is more effective than either the collaborative filtering or the content-based filtering technique alone in recommending books to be borrowed in an academic library setting. The recommendation technique based on association rules turned out to have the lowest performance.
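A common way to hybridize the two filtering approaches is a weighted blend of their scores. The score dictionaries, book identifiers, and weight below are hypothetical illustrations, not data from the study:

```python
def hybrid_scores(collab, content, alpha=0.5):
    """Weighted hybrid: blend collaborative and content-based scores per item."""
    books = set(collab) | set(content)
    return {b: alpha * collab.get(b, 0.0) + (1 - alpha) * content.get(b, 0.0)
            for b in books}

collab = {"book_a": 0.9, "book_b": 0.4}    # e.g. from user-user borrowing similarity
content = {"book_b": 0.8, "book_c": 0.7}   # e.g. from subject-heading overlap
ranked = sorted(hybrid_scores(collab, content, alpha=0.6).items(),
                key=lambda kv: -kv[1])
print(ranked[0][0])  # -> "book_b" (0.6*0.4 + 0.4*0.8 = 0.56, beats book_a at 0.54)
```

The blend lets an item recommended moderately by both sources outrank an item favored by only one, which is one intuition behind the hybrid's stronger results.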

A Study on the Performance Comparison of Optimization Techniques on the Selection of Control Source Positions in an Active Noise Barrier System (능동방음벽 시스템의 제어 음원 위치 선정에 미치는 최적화 기법 성능 비교 연구)

  • Im, Hyoung-Jin;Baek, Kwang-Hyun
    • Transactions of the Korean Society for Noise and Vibration Engineering
    • /
    • v.15 no.8 s.101
    • /
    • pp.911-917
    • /
    • 2005
  • There have been many attempts to reduce noise behind a noise barrier using active control techniques. Omoto (1993), Shao (1997), and Yang (2001) tried to actively control the diffracted noise behind the barrier, with the arrangement of the control sources as the main concern. Baek (2004) tried to obtain better results using the simulated annealing method and a sequential searching technique. The main goal of this study is to develop and compare the performance of several optimization techniques, including those mentioned above and a hybrid of simulated annealing and a genetic algorithm, for selecting optimal control source positions in an active noise barrier system. The simulation results show fairly similar performance for small search problems. However, as the number of control sources increases, the simulated annealing algorithm and the genetic algorithm outperform the others. Simulations were also made to evaluate the selected optimal control source positions not only at the receiver position but also in the surrounding volume, and the noise reduction level was plotted in 3-D.
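The position-selection problem the study optimizes can be sketched as simulated annealing over a discrete candidate set. The cost function below is a stand-in (residual noise assumed lowest near x = 0), and all parameters are illustrative, not the paper's acoustic model:

```python
import math
import random

def simulated_annealing(cost, candidates, n_select, t0=1.0, cooling=0.95,
                        steps=2000, seed=1):
    """Pick n_select control-source positions from `candidates` minimising `cost`."""
    rng = random.Random(seed)
    current = rng.sample(candidates, n_select)
    best, t = list(current), t0
    for _ in range(steps):
        neighbour = list(current)
        i = rng.randrange(n_select)          # swap one selected position at random
        neighbour[i] = rng.choice([c for c in candidates if c not in neighbour])
        d = cost(neighbour) - cost(current)
        if d < 0 or rng.random() < math.exp(-d / t):  # accept worse moves early on
            current = neighbour
            if cost(current) < cost(best):
                best = list(current)
        t *= cooling                          # cool the temperature each step
    return best

# stand-in cost: residual noise is lowest when sources sit near x = 0
positions = list(range(-10, 11))
chosen = simulated_annealing(lambda s: sum(abs(x) for x in s), positions, n_select=3)
print(sorted(chosen))  # positions clustered near 0
```

In the study's setting, `cost` would instead evaluate the diffracted sound field behind the barrier for a candidate source arrangement.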

Computing and Reducing Transient Error Propagation in Registers

  • Yan, Jun;Zhang, Wei
    • Journal of Computing Science and Engineering
    • /
    • v.5 no.2
    • /
    • pp.121-130
    • /
    • 2011
  • Recent research indicates that transient errors will increasingly become a critical concern in microprocessor design. As embedded processors are widely used in reliability-critical or noisy environments, it is necessary to develop cost-effective fault-tolerant techniques to protect processors against transient errors. The register file is one of the critical components that can significantly affect microprocessor system reliability, since registers are typically accessed very frequently, and transient errors in registers can be easily propagated to functional units or the memory system, leading to silent data corruption (SDC) or system crash. This paper focuses on investigating the impact of register file soft errors on system reliability and developing cost-effective techniques to improve the register file immunity to soft errors. This paper proposes the register vulnerability factor (RVF) concept to characterize the probability that register transient errors can escape the register file and thus potentially affect system reliability. We propose an approach to compute the RVF based on register access patterns. In this paper, we also propose two compiler-directed techniques and a hybrid approach to improve register file reliability cost-effectively by lowering the RVF value. Our experiments indicate that on average, RVF can be reduced to 9.1% and 9.5% by hyperblock-based instruction re-scheduling and reliability-oriented register assignment respectively, which can potentially lower the reliability cost significantly, without sacrificing register value integrity.
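The access-pattern idea behind RVF can be sketched directly: a register is vulnerable between a write and the last read of that value, because an upset in that window propagates to a consumer. The trace format and the exact interval definition below are simplifying assumptions, not the paper's full model:

```python
def register_vulnerability(trace, total_cycles):
    """Sketch of an RVF-style metric: fraction of cycles in write-to-last-read
    intervals, during which a bit flip in the register would be consumed."""
    vulnerable = 0
    last_write = None
    last_read = None
    for cycle, op in trace:                  # trace is sorted by cycle
        if op == "W":
            if last_write is not None and last_read is not None:
                vulnerable += last_read - last_write   # close previous interval
            last_write, last_read = cycle, None
        else:  # "R"
            last_read = cycle
    if last_write is not None and last_read is not None:
        vulnerable += last_read - last_write
    return vulnerable / total_cycles

# written at cycle 0, read at 10 and 30, rewritten at 40, read at 45
trace = [(0, "W"), (10, "R"), (30, "R"), (40, "W"), (45, "R")]
print(register_vulnerability(trace, 100))  # (30-0) + (45-40) = 35 cycles -> 0.35
```

The compiler techniques in the paper shrink exactly these intervals, e.g. by scheduling the last read closer to the write.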

An Algorithm of Short-Term Load Forecasting (단기수요예측 알고리즘)

  • Song Kyung-Bin;Ha Seong-Kwan
    • The Transactions of the Korean Institute of Electrical Engineers A
    • /
    • v.53 no.10
    • /
    • pp.529-535
    • /
    • 2004
  • Load forecasting is essential in the electricity market for the participants to manage the market efficiently and stably. A wide variety of load forecasting techniques and algorithms have been reported in the literature, including multiple linear regression, stochastic time series, general exponential smoothing, state space and Kalman filter methods, and knowledge-based expert system approaches (fuzzy methods and artificial neural networks). These techniques have improved the accuracy of load forecasting. In the last ten years, many researchers have focused on artificial neural networks and fuzzy methods. In this paper, we propose a hybrid load forecasting algorithm that uses fuzzy linear regression and general exponential smoothing and considers temperature sensitivities. Fuzzy linear regression is used to account for the lower loads of weekends and Mondays compared with weekdays. The temperature sensitivity improves forecasting accuracy through the relation between daily load and temperature, while the normal weekday load is readily forecasted by the general exponential smoothing method. Test results show that the proposed algorithm improves the accuracy of load forecasting for 1996.
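The exponential smoothing half of the hybrid can be sketched in a few lines; the load series and smoothing constant below are hypothetical, not the paper's 1996 data:

```python
def exponential_smoothing(series, alpha=0.3):
    """Simple exponential smoothing: each update blends the latest observation
    with the previous smoothed value; the final value is the one-step forecast."""
    s = series[0]
    for x in series[1:]:
        s = alpha * x + (1 - alpha) * s
    return s

loads = [100.0, 102.0, 101.0, 105.0, 107.0]  # hypothetical daily peak loads (MW)
print(exponential_smoothing(loads, alpha=0.5))  # -> 105.0
```

In the proposed hybrid, this handles normal weekdays, while fuzzy linear regression and a temperature-sensitivity term adjust for weekends, Mondays, and weather.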

Depth Acquisition Techniques for 3D Contents Generation (3차원 콘텐츠 제작을 위한 깊이 정보 획득 기술)

  • Jang, Woo-Seok;Ho, Yo-Sung
    • Smart Media Journal
    • /
    • v.1 no.3
    • /
    • pp.15-21
    • /
    • 2012
  • Depth information is necessary for generating various kinds of three-dimensional content. Depth acquisition techniques can be categorized broadly into two approaches, active and passive depth sensing, depending on how the depth information is obtained. In this paper, we review several depth acquisition methods from both categories, as well as hybrid methods that combine the two approaches to compensate for the drawbacks of each. Furthermore, we introduce several matching cost functions and post-processing techniques that enhance temporal consistency and reduce the flickering artifacts and viewer discomfort caused by inaccurate depth estimation in 3D video.
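A representative matching cost for passive (stereo) depth estimation is the sum of absolute differences (SAD) over a window. The toy images below are constructed so the right view is the left view shifted by one pixel; window size and image values are illustrative assumptions:

```python
def sad_cost(left, right, row, col, disparity, window=1):
    """SAD between a window in the left image and the same window shifted
    left by `disparity` pixels in the right image."""
    cost = 0
    for dr in range(-window, window + 1):
        for dc in range(-window, window + 1):
            cost += abs(left[row + dr][col + dc]
                        - right[row + dr][col + dc - disparity])
    return cost

def best_disparity(left, right, row, col, max_disp, window=1):
    """Winner-takes-all: the disparity with the lowest matching cost."""
    return min(range(max_disp + 1),
               key=lambda d: sad_cost(left, right, row, col, d, window))

# toy 5x5 intensity images: right view is the left view shifted by 1 pixel
left = [[r * 10 + c * 3 for c in range(5)] for r in range(5)]
right = [[left[r][min(c + 1, 4)] for c in range(5)] for r in range(5)]
print(best_disparity(left, right, row=2, col=3, max_disp=2))  # -> 1
```

The post-processing techniques surveyed in the paper then smooth such per-pixel winners over space and time to suppress flicker.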


A Comparative Study of Estimation by Analogy using Data Mining Techniques

  • Nagpal, Geeta;Uddin, Moin;Kaur, Arvinder
    • Journal of Information Processing Systems
    • /
    • v.8 no.4
    • /
    • pp.621-652
    • /
    • 2012
  • Software estimation provides an inclusive set of directives for software project developers, project managers, and management in order to produce more realistic estimates based on deficient, uncertain, and noisy data. A range of estimation models is being explored in industry, as well as in academia for research purposes, but choosing the best model is quite intricate. Estimation by Analogy (EbA) is a form of case-based reasoning that uses fuzzy logic, grey system theory, machine-learning techniques, and the like for optimization. This research compares the estimation accuracy of some conventional data mining models with that of a hybrid model. Different data mining models are considered, including linear regression models such as ordinary least squares and ridge regression, and nonlinear models such as neural networks, support vector machines, and multivariate adaptive regression splines. A precise and comprehensible predictive model based on the integration of grey relational analysis (GRA) and regression is introduced and compared. Empirical results show that regression, when used with GRA, gives outstanding results, indicating that the methodology has great potential and can be used as a candidate approach for software effort estimation.
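The GRA step that underpins the hybrid can be sketched as computing grey relational grades against a reference project profile. The reference, the candidate series, and the assumption that features are pre-normalised to [0, 1] are all illustrative:

```python
def grey_relational_grades(reference, alternatives, zeta=0.5):
    """Grey relational grade of each alternative series against the reference;
    zeta is the conventional distinguishing coefficient."""
    deltas = [[abs(r - a) for r, a in zip(reference, alt)] for alt in alternatives]
    dmin = min(min(row) for row in deltas)          # global minimum deviation
    dmax = max(max(row) for row in deltas)          # global maximum deviation
    coeff = lambda d: (dmin + zeta * dmax) / (d + zeta * dmax)
    return [sum(coeff(d) for d in row) / len(row) for row in deltas]

ref = [1.0, 1.0, 1.0]                       # ideal (normalised) project profile
alts = [[0.9, 0.8, 1.0], [0.2, 0.4, 0.3]]   # two candidate analogue projects
g = grey_relational_grades(ref, alts)
print(g[0] > g[1])  # -> True: the closer analogue gets the higher grade
```

In an EbA pipeline, the grades would weight or select historical projects before the regression stage estimates effort.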

Optimization of ferrochrome slag as coarse aggregate in concretes

  • Yaragal, Subhash C.;Kumar, B. Chethan;Mate, Krishna
    • Computers and Concrete
    • /
    • v.23 no.6
    • /
    • pp.421-431
    • /
    • 2019
  • The alarming rate of depletion of natural stone-based coarse aggregates is a cause of great concern, as coarse aggregates occupy nearly 60-70% of concrete by volume. Research efforts are underway to find alternatives to stone-based coarse aggregates from a sustainability point of view. Response surface methodology (RSM) is adopted to study the effect of replacing coarse aggregate with ferrochrome slag (FCS) in ordinary Portland cement (OPC) based concretes. The RSM design involves three factors (ground granulated blast furnace slag (GGBS) as binder, fly ash (FA) as binder, and FCS as coarse aggregate), each at three levels (GGBS: 0, 15, and 30%; FA: 0, 15, and 30%; FCS: 0, 50, and 100%). Experiments were carried out to measure responses such as workability, density, and compressive strength of the FCS-based concretes. To optimize the FCS replacement level, three traditional optimization techniques were used: grey relational analysis (GRA), the technique for order of preference by similarity to ideal solution (TOPSIS), and the desirability function approach (DFA). These were combined with principal component analysis (PCA) to compute the weights of the measured responses and arrive at a final ranking of the replacement levels of GGBS, FA, and FCS. The hybrid PCA-TOPSIS combination was found to perform best among the techniques used, and 30% GGBS with 50% FCS replacement in OPC-based concrete was arrived at as optimal.
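The TOPSIS ranking step used here can be sketched compactly. The mix responses, criteria weights (in practice supplied by PCA), and benefit directions below are hypothetical, not the study's measured data:

```python
import math

def topsis(matrix, weights, benefit):
    """TOPSIS: score alternatives by relative closeness to the ideal solution.
    Rows of `matrix` are alternatives, columns are criteria; benefit[j] is True
    when larger values of criterion j are better."""
    cols = list(zip(*matrix))
    norms = [math.sqrt(sum(v * v for v in col)) for col in cols]
    weighted = [[w * v / n for v, w, n in zip(row, weights, norms)]
                for row in matrix]
    wcols = list(zip(*weighted))
    ideal = [max(c) if b else min(c) for c, b in zip(wcols, benefit)]
    worst = [min(c) if b else max(c) for c, b in zip(wcols, benefit)]
    def dist(row, ref):
        return math.sqrt(sum((v - r) ** 2 for v, r in zip(row, ref)))
    return [dist(r, worst) / (dist(r, worst) + dist(r, ideal)) for r in weighted]

# hypothetical mixes scored on (workability, density, compressive strength)
mixes = [[90, 2400, 35], [110, 2500, 40], [100, 2350, 30]]
scores = topsis(mixes, weights=[0.2, 0.3, 0.5], benefit=[True, True, True])
print(max(range(3), key=lambda i: scores[i]))  # -> 1: mix 1 dominates on all criteria
```

In the PCA-TOPSIS hybrid, the `weights` vector would come from the principal component loadings of the measured responses rather than being chosen by hand.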

An insight into the prediction of mechanical properties of concrete using machine learning techniques

  • Neeraj Kumar Shukla;Aman Garg;Javed Bhutto;Mona Aggarwal;M.Ramkumar Raja;Hany S. Hussein;T.M. Yunus Khan;Pooja Sabherwal
    • Computers and Concrete
    • /
    • v.32 no.3
    • /
    • pp.263-286
    • /
    • 2023
  • Experimenting with concrete to determine its compressive and tensile strengths is a laborious and time-consuming operation that requires a great deal of attention to detail. Researchers from all around the world have spent the better part of the last several decades attempting to use machine learning algorithms to make accurate predictions about the technical qualities of various kinds of concrete. The research currently available on estimating the strength of concrete draws attention to the applicability and precision of various machine learning techniques. This article summarizes prior research on estimating the strength of concrete using a variety of machine learning methods, and presents a classification of the existing literature according to the machine learning technique used. Since determining the compressive strength of concrete experimentally is laborious and time-consuming, the present review will open the horizon for researchers working on machine learning based prediction of compressive strength by providing recommendations along with the benefits and drawbacks of each model.