• Title/Summary/Keyword: artificial intelligence techniques

Advances, challenges, and prospects of electroencephalography-based biomarkers for psychiatric disorders: a narrative review

  • Seokho Yun
    • Journal of Yeungnam Medical Science
    • /
    • v.41 no.4
    • /
    • pp.261-268
    • /
    • 2024
  • Owing to the lack of appropriate biomarkers for accurate diagnosis and treatment, psychiatric disorders cause significant distress and functional impairment, leading to social and economic losses. Biomarkers are essential for diagnosing, predicting, treating, and monitoring various diseases, but their absence in psychiatry is linked to the complex structure of the brain and the lack of direct monitoring modalities. This review examines the potential of electroencephalography (EEG) as a neurophysiological tool for identifying psychiatric biomarkers. EEG noninvasively measures the brain's electrophysiological activity; it is used to diagnose neurological disorders and has been studied as a source of biomarkers for psychiatric disorders such as depression, bipolar disorder (BD), and schizophrenia. Despite extensive research, EEG-based biomarkers have not been adopted clinically owing to measurement and analysis constraints. EEG studies have revealed spectral and complexity measures for depression, brainwave abnormalities in BD, and power spectral abnormalities in schizophrenia, yet no EEG-based biomarker is currently used clinically in the treatment of psychiatric disorders. The advantages of EEG include real-time data acquisition, noninvasiveness, cost-effectiveness, and high temporal resolution; challenges such as low spatial resolution, susceptibility to interference, and complexity of data interpretation limit its clinical application. Integrating EEG with other neuroimaging techniques, advanced signal processing, and standardized protocols is essential to overcome these limitations. Artificial intelligence may enhance EEG analysis and biomarker discovery, potentially transforming psychiatric care through early diagnosis, personalized treatment, and improved monitoring of disease progression.
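As a rough illustration of the spectral measures this review surveys, the sketch below computes per-band EEG power with SciPy's Welch estimator; the sampling rate, channel count, and band edges are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: band-power extraction of the kind used as a candidate
# EEG biomarker. Data is synthetic; fs, channels, and bands are assumed.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 250                                  # sampling rate in Hz (assumed)
eeg = np.random.randn(8, 30 * fs)         # 8 channels, 30 s of synthetic "EEG"

bands = {"delta": (1, 4), "theta": (4, 8),
         "alpha": (8, 13), "beta": (13, 30)}

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)    # PSD per channel

for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    # integrate the PSD over the band, then average across channels
    band_power = trapezoid(psd[:, mask], freqs[mask], axis=1).mean()
    print(f"{name:>5}: {band_power:.4f}")
```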

A Study on Improving License Plate Recognition Performance Using Super-Resolution Techniques

  • Kyeongseok JANG;Kwangchul SON
    • Korean Journal of Artificial Intelligence
    • /
    • v.12 no.3
    • /
    • pp.1-7
    • /
    • 2024
  • In this paper, we propose a super-resolution technique to address the reduced accuracy of license plate recognition caused by low-resolution images. Conventional license plate recognition systems rely on images from fixed surveillance cameras to perform vehicle detection, tracking, and plate recognition. In this process, image quality degrades because of the physical distance between the camera and the vehicle, vehicle movement, and external environmental factors such as weather and lighting; in particular, low-resolution images caused by camera performance limitations have been a major cause of significantly reduced recognition accuracy. To solve this problem, we propose a Single Image Super-Resolution (SISR) model with a parallel structure that combines multi-scale feature extraction with an attention mechanism, allowing it to extract features at various scales and focus on important areas. Specifically, the model generates feature maps of various sizes through a multi-branch structure and emphasizes the key features of license plates using the attention mechanism. Experimental results show that the proposed model achieves significantly better recognition accuracy than existing license plate super-resolution methods based on bicubic interpolation.
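A minimal PyTorch sketch of the idea described in this abstract (not the authors' code): parallel convolution branches with different receptive fields provide the multi-scale feature maps, and a squeeze-and-excitation style gate supplies the channel attention. All layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class MultiScaleAttentionBlock(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # parallel branches with different receptive fields (multi-scale)
        self.b3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.b5 = nn.Conv2d(channels, channels, 5, padding=2)
        self.b7 = nn.Conv2d(channels, channels, 7, padding=3)
        self.fuse = nn.Conv2d(3 * channels, channels, 1)
        # squeeze-and-excitation style channel attention
        self.att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(),
            nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid())

    def forward(self, x):
        feats = torch.cat([self.b3(x), self.b5(x), self.b7(x)], dim=1)
        fused = self.fuse(feats)
        return x + fused * self.att(fused)   # residual, attention-weighted

x = torch.randn(1, 64, 24, 48)               # e.g. a low-res plate feature map
print(MultiScaleAttentionBlock()(x).shape)   # torch.Size([1, 64, 24, 48])
```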

A Study on the Development of LDA Algorithm-Based Financial Technology Roadmap Using Patent Data

  • Koopo KWON;Kyounghak LEE
    • Korean Journal of Artificial Intelligence
    • /
    • v.12 no.3
    • /
    • pp.17-24
    • /
    • 2024
  • This study derives a technology development roadmap for financial technology by analyzing patent documents. Patent documents were retrieved using technical keywords drawn from prior research and related reports on financial technology. The TF-IDF (Term Frequency-Inverse Document Frequency) text mining technique was applied to the retrieved documents to weight keywords, and the Latent Dirichlet Allocation (LDA) algorithm was then applied to identify the topics of the core financial technologies. Based on the yearly proportion of topics produced by LDA, promising technology fields and convergence fields were identified through trend analysis and inter-topic similarity analysis. Network analysis then yielded a first-stage technology development roadmap for technology field development and a second-stage roadmap for convergence, covering the identified financial technology fields: a data-based integrated management system for high-dimensional payment systems using RF and intelligent cards, and security processing methodologies for data information and network payments. The proposed method can serve as a sound basis for developing financial technology R&D strategies and technology roadmaps.
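A hedged sketch of this kind of pipeline using scikit-learn: the study weights keywords with TF-IDF, but since scikit-learn's LDA operates on raw term counts, the sketch below vectorizes a toy corpus with CountVectorizer before fitting LDA. The corpus, topic count, and vocabulary settings are illustrative assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# toy stand-ins for patent abstracts (illustrative, not the study's corpus)
docs = [
    "payment system security using intelligent card and RF",
    "network payment data encryption and security processing",
    "patent data management for financial technology roadmap",
]

vec = CountVectorizer(stop_words="english")   # LDA expects term counts
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")

# topic proportions per document: the quantity trended by year in the study
print(lda.transform(X).round(2))
```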

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, the Go-playing artificial intelligence program by Google DeepMind, won a decisive victory against Lee Sedol. Many people thought that machines could not beat a human at Go because, unlike chess, the number of possible moves exceeds the number of atoms in the universe, but the result was the opposite of what was predicted. After the match, artificial intelligence was spotlighted as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning, the core technique behind the AlphaGo algorithm, has drawn wide interest. Deep learning is already being applied to many problems and performs especially well in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled to achieve good performance. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigate whether existing deep learning techniques can be used not only for recognizing high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compare their performance with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer opened an account. To evaluate the applicability of deep learning to binary classification, we compared models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, against MLP models, a traditional artificial neural network. Since not all network design alternatives could be tested, the experiment was conducted under restricted settings on the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of dropout. The F1 score was used to evaluate the models, since it shows how well a model classifies the class of interest rather than overall accuracy. The deep learning techniques were applied as follows. The CNN algorithm recognizes features by reading values adjacent to a specific value, but because business data fields are usually independent, the distance between fields carries little meaning; we therefore set the CNN filter size to the number of fields so that the whole record is read at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first to reduce the influence of field position. For the dropout technique, neurons in each hidden layer were dropped with a probability of 0.5. The experimental results show that the model with the highest F1 score was the CNN model with dropout, followed by the MLP model with two hidden layers and dropout. Several findings emerged from the experiment. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models; this is interesting because CNN performed well on binary classification problems to which it has rarely been applied, as well as in the fields where its effectiveness is proven. Third, the LSTM algorithm appears unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we confirm that some deep learning algorithms can be applied to solve business binary classification problems.
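A minimal Keras sketch (assumed details, not the authors' code) of the best-performing setup the abstract reports: a Conv1D whose kernel spans all input fields at once, a 0.5 dropout layer, and the F1 score as the evaluation measure. The field count, layer sizes, and data are illustrative.

```python
import numpy as np
from tensorflow import keras
from sklearn.metrics import f1_score

n_fields = 16                                    # number of input variables (assumed)
X = np.random.rand(1000, n_fields, 1)            # toy stand-in for the bank data
y = np.random.randint(0, 2, size=1000)

model = keras.Sequential([
    keras.Input(shape=(n_fields, 1)),
    # kernel_size == n_fields: one filter reads the whole record at once
    keras.layers.Conv1D(32, kernel_size=n_fields, activation="relu"),
    keras.layers.Flatten(),
    keras.layers.Dense(16, activation="relu"),   # extra hidden decision layer
    keras.layers.Dropout(0.5),                   # dropout rate used in the study
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=3, verbose=0)

pred = (model.predict(X, verbose=0) > 0.5).astype(int).ravel()
print("F1:", f1_score(y, pred))                  # metric used in the paper
```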

Predicting concrete's compressive strength through three hybrid swarm intelligent methods

  • Zhang Chengquan;Hamidreza Aghajanirefah;Kseniya I. Zykova;Hossein Moayedi;Binh Nguyen Le
    • Computers and Concrete
    • /
    • v.32 no.2
    • /
    • pp.149-163
    • /
    • 2023
  • The uniaxial compressive strength is one of the main design parameters traditionally used in geotechnical engineering projects. The present paper employed three artificial intelligence methods, i.e., the stochastic fractal search (SFS), multi-verse optimization (MVO), and the vortex search algorithm (VSA), to determine the compressive strength of concrete (CSC). To this end, 1030 concrete specimens were subjected to compressive strength tests. Based on the laboratory results, the fly ash, cement, water, slag, coarse aggregate, fine aggregate, and SP contents were tested as model inputs in order to determine the optimum input configuration for estimating the compressive strength. Performance was evaluated using three criteria: the root mean square error (RMSE), the mean absolute error (MAE), and the coefficient of determination (R²). The error criteria and determination coefficients obtained from the three techniques indicate that SFS-MLP outperformed the MVO-MLP and VSA-MLP methods, while the plain artificial neural network models exhibited larger errors and lower correlation coefficients than the hybrid models. The stochastic fractal search algorithm considerably enhanced the precision and accuracy of the evaluations conducted through the artificial neural network. According to the results, the SFS-MLP technique performed best in estimating the compressive strength of concrete (R² = 0.99932 and 0.99942, RMSE = 0.32611 and 0.24922). The novelty of our study lies in the use of a large dataset of 1030 entries and the optimization of the prediction model's learning scheme with a 20:80 testing-to-training data split.
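The sketch below illustrates the evaluation protocol described above with plain scikit-learn: an MLP regressor trained on an 80:20 training:testing split and scored with RMSE, MAE, and R². The swarm optimizers (SFS/MVO/VSA) that tune the network in the paper are omitted, and the synthetic data merely stands in for the 1030-specimen dataset.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

rng = np.random.default_rng(0)
X = rng.random((1030, 7))          # cement, slag, fly ash, water, SP, coarse, fine
y = X @ rng.random(7) + 0.1 * rng.standard_normal(1030)   # toy strength target

# 20:80 testing-to-training split, as in the study
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)
pred = mlp.predict(X_te)

print("RMSE:", np.sqrt(mean_squared_error(y_te, pred)))
print("MAE :", mean_absolute_error(y_te, pred))
print("R2  :", r2_score(y_te, pred))
```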

Feature Selection Algorithms in Intrusion Detection System: A Survey

  • MAZA, Sofiane;TOUAHRIA, Mohamed
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.10
    • /
    • pp.5079-5099
    • /
    • 2018
  • Given the huge number of connections and the large flow of data on the Internet, Intrusion Detection Systems (IDS) have difficulty detecting attacks. Moreover, irrelevant and redundant features degrade the quality of an IDS, specifically its detection rate and processing cost. Feature Selection (FS) is an important technique for enhancing detection performance. Many works have been proposed, but a map for understanding and constructing the state of FS in IDS still needs more investigation. In this paper, we introduce a survey of feature selection algorithms for intrusion detection systems. We describe the well-known approaches that have been proposed for FS in IDS, provide a classification with a comparative study of the different contributions according to their techniques and results, and identify a new taxonomy for future trends and existing challenges.
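As one concrete example of the filter-style methods such surveys cover, the sketch below ranks features by mutual information with the class label and keeps the top k; the synthetic data stands in for an IDS dataset, and k is an arbitrary choice.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

rng = np.random.default_rng(1)
X = rng.random((500, 20))                  # 20 candidate connection features
y = (X[:, 3] + X[:, 7] > 1.0).astype(int)  # label depends on features 3 and 7

# filter method: score each feature against the label, keep the top 5
selector = SelectKBest(mutual_info_classif, k=5).fit(X, y)
print("selected feature indices:", np.flatnonzero(selector.get_support()))
```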

Ensemble techniques and hybrid intelligence algorithms for shear strength prediction of squat reinforced concrete walls

  • Mohammad Sadegh Barkhordari;Leonardo M. Massone
    • Advances in Computational Design
    • /
    • v.8 no.1
    • /
    • pp.37-59
    • /
    • 2023
  • Squat reinforced concrete (SRC) shear walls are a critical part of both office/residential buildings and nuclear structures owing to their significant role in withstanding seismic loads. Despite this, empirical formulae in current design standards and published studies show considerable disparity in predicting SRC wall shear strength. The goal of this research is to develop and evaluate hybrid and ensemble artificial neural network (ANN) models, using state-of-the-art population-based algorithms for the hybrid intelligence models. Six models are developed: the Honey Badger Algorithm with ANN (HBA-ANN), Hunger Games Search with ANN (HGS-ANN), the fitness-distance balance coyote optimization algorithm with ANN (FDB-COA-ANN), an Averaging Ensemble (AE) neural network, a Snapshot Ensemble (SE) neural network, and a Stacked Generalization (SG) ensemble neural network. A total of 434 test results of SRC walls are utilized to train and assess the models. The results reveal that the SG model not only minimizes prediction variance but also produces predictions (R² = 0.99) superior to those of the other models.
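A hedged sketch of stacked generalization, the SG approach reported best above: base learners produce cross-validated predictions that a meta-learner combines. scikit-learn MLPs and synthetic data stand in for the paper's networks and the 434 wall tests.

```python
import numpy as np
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
X = rng.random((434, 8))                  # stand-in for the 434 SRC wall tests
y = X @ rng.random(8)                     # toy shear-strength target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[("mlp1", MLPRegressor((16,), max_iter=2000, random_state=0)),
                ("mlp2", MLPRegressor((32,), max_iter=2000, random_state=1))],
    final_estimator=Ridge())              # meta-learner over base predictions
stack.fit(X_tr, y_tr)
print("R2:", r2_score(y_te, stack.predict(X_te)))
```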

Data Preprocessing Techniques for Visualizing Gas Sensor Datasets (가스 센서 데이터셋 시각화를 위한 데이터 전처리 기법)

  • Kim, Junsu;Park, Kyungwon;Lim, Taebum;Park, Gooman
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • fall
    • /
    • pp.21-22
    • /
    • 2021
  • Research on olfactory intelligence technology for precise gas component detection based on AI (Artificial Intelligence) has recently been active. Olfactory intelligence training data are multimodal, with gas sensors of different sensing methods applied simultaneously, and they have multidimensional time-series characteristics acquired from sensor arrays distributed in space. Accurate understanding and analysis of such large volumes of multidimensional data therefore require techniques for preprocessing and visualizing the data. In this paper, we present preprocessing methods for visualizing complex multidimensional gas data for olfactory intelligence training: removing unnecessary values such as noise, making the data consistent, and reducing the dimensionality of the data so that it can be visualized.
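A minimal sketch of the preprocessing chain just described, under assumed array shapes: smooth away noise, standardize each sensor channel for consistency, and project to two dimensions with PCA so the data can be plotted.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
X = rng.random((1000, 16))                        # 1000 samples x 16 gas sensors

X_smooth = uniform_filter1d(X, size=5, axis=0)    # simple noise suppression
X_norm = StandardScaler().fit_transform(X_smooth) # per-sensor consistency
X_2d = PCA(n_components=2).fit_transform(X_norm)  # ready for a 2-D scatter plot

print(X_2d.shape)   # (1000, 2)
```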

An Integrated Model based on Genetic Algorithms for Implementing Cost-Effective Intelligent Intrusion Detection Systems (비용효율적 지능형 침입탐지시스템 구현을 위한 유전자 알고리즘 기반 통합 모형)

  • Lee, Hyeon-Uk;Kim, Ji-Hun;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.1
    • /
    • pp.125-141
    • /
    • 2012
  • These days, malicious attacks and hacks on networked systems are increasing dramatically, and their patterns are changing rapidly. It has therefore become more important to handle these attacks appropriately, and there is considerable interest in and demand for effective network security systems such as intrusion detection systems. Intrusion detection systems are network security systems for detecting, identifying, and responding appropriately to unauthorized or abnormal activities. Conventional intrusion detection systems have generally been designed using experts' implicit knowledge of network intrusions or hackers' abnormal behaviors; although they perform well under normal conditions, they cannot handle new or unknown attack patterns. As a result, recent studies on intrusion detection systems use artificial intelligence techniques, which can respond proactively to unknown threats. Researchers have long adopted and tested various artificial intelligence techniques, such as artificial neural networks, decision trees, and support vector machines, to detect network intrusions, but most have applied these techniques singly, even though combining them may lead to better detection. For this reason, we propose a new integrated model for intrusion detection. Our model combines the prediction results of four binary classification models that may be complementary to each other: logistic regression (LOGIT), decision trees (DT), artificial neural networks (ANN), and support vector machines (SVM). Genetic algorithms (GA) are used to find the optimal combining weights. The proposed model is built in two steps. In the first step, the integrated model with the lowest prediction error (i.e., erroneous classification rate) is generated. In the second step, the model explores the optimal classification threshold for determining intrusions, which minimizes the total misclassification cost. Calculating the total misclassification cost of an intrusion detection system requires understanding its asymmetric error cost scheme. There are two common types of error in intrusion detection. The first is the False-Positive Error (FPE), in which normal activity is misjudged as an intrusion, possibly resulting in unnecessary corrective action. The second is the False-Negative Error (FNE), in which malicious activity is misjudged as normal. Compared to an FPE, an FNE is more fatal, so the total misclassification cost is affected more by FNEs than by FPEs. To validate the practical applicability of our model, we applied it to a real-world network intrusion detection dataset collected from the IDS sensor of an official institution in Korea from January to June 2010. We collected 15,000 log entries in total and selected 10,000 samples from them by random sampling. We also compared the results of our model with those of the single techniques to confirm its superiority. LOGIT and DT were tested using PASW Statistics v18.0, ANN using Neuroshell R4.0, and SVM using LIBSVM v2.90, a freeware tool for training SVM classifiers. Empirical results showed that the proposed GA-based model outperformed all comparative models in detecting network intrusions, both from the accuracy perspective and from the total misclassification cost perspective. Consequently, our study may contribute to building cost-effective intelligent intrusion detection systems.
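A toy sketch of the two-step idea described above: a simple genetic loop evolves combining weights for four classifiers' probability outputs, and a threshold sweep then minimizes an asymmetric misclassification cost in which an FNE is costlier than an FPE. The probabilities, cost ratio, and GA settings are all illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(4)
y = rng.integers(0, 2, 300)                        # 1 = intrusion
# stand-ins for LOGIT/DT/ANN/SVM probability outputs, correlated with y
probs = np.clip(y[:, None] * 0.6 + rng.random((300, 4)) * 0.5, 0, 1)

def error(w):                                      # step 1 fitness: error rate
    w = w / w.sum()
    return np.mean((probs @ w > 0.5).astype(int) != y)

pop = rng.random((40, 4))                          # random initial population
for _ in range(50):                                # simple evolutionary loop
    fit = np.array([error(w) for w in pop])
    parents = pop[np.argsort(fit)[:20]]            # truncation selection
    i, j = rng.integers(0, 20, (2, 20))
    children = (parents[i] + parents[j]) / 2       # arithmetic crossover
    children += rng.normal(0, 0.05, children.shape)   # mutation
    pop = np.vstack([parents, np.clip(children, 1e-6, None)])

best = pop[np.argmin([error(w) for w in pop])]
best = best / best.sum()

# step 2: choose the threshold minimizing total cost, FNE 5x FPE (assumed)
score = probs @ best
thresholds = np.linspace(0.1, 0.9, 17)
costs = [1 * np.sum((score > t) & (y == 0))        # false positives
         + 5 * np.sum((score <= t) & (y == 1))     # false negatives
         for t in thresholds]
print("weights:", best.round(2),
      "best threshold:", thresholds[int(np.argmin(costs))])
```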

An Efficient Artificial Intelligence Hybrid Approach for Energy Management in Intelligent Buildings

  • Wahid, Fazli;Ismail, Lokman Hakim;Ghazali, Rozaida;Aamir, Muhammad
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.12
    • /
    • pp.5904-5927
    • /
    • 2019
  • Many artificial intelligence (AI) techniques have been embedded into various engineering technologies to help achieve different goals. Integrating modern technologies with energy consumption management and occupant comfort inside buildings gives rise to the intelligent building concept, whose major aim is to manage energy consumption effectively while keeping occupants satisfied with the building's internal environment. Recent years have seen many applications of AI techniques for optimizing energy consumption while maximizing user comfort in smart buildings, but there is still much room for improvement in this area. In this paper, a hybrid of two AI algorithms, the firefly algorithm (FA) and the genetic algorithm (GA), is used to maximize user comfort with minimum energy consumption inside a smart building. A complete, user-friendly system with data from various sensors, the user, processes, a power control system, and different actuators is developed to reduce power consumption and increase user comfort. The inputs of the optimization algorithms are the illumination, temperature, and air quality sensor data together with user-set parameters, and their outputs are optimized parameters. These optimized parameters are the inputs of different fuzzy controllers, which change the status of the actuators according to user satisfaction.
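A minimal firefly-algorithm sketch, covering only the FA half of the hybrid described above: fireflies are candidate setpoints for illumination, temperature, and air quality, and brightness is an assumed discomfort proxy (distance to user-set targets). The objective, bounds, and FA constants are illustrative, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(5)
target = np.array([500.0, 22.0, 400.0])   # user-set lux, deg C, CO2 ppm (assumed)
lo = np.array([0.0, 16.0, 300.0])         # lower bounds (assumed)
hi = np.array([1000.0, 30.0, 1000.0])     # upper bounds (assumed)

def cost(u):                               # u in [0,1]^3, mapped to real units
    x = lo + u * (hi - lo)
    return np.sum(((x - target) / (hi - lo)) ** 2)   # discomfort proxy

n, beta0, gamma, alpha = 20, 1.0, 1.0, 0.05
pop = rng.random((n, 3))                   # fireflies = normalized setpoints

for _ in range(100):
    f = np.array([cost(u) for u in pop])
    for i in range(n):
        for j in range(n):
            if f[j] < f[i]:                # move dimmer firefly toward brighter
                r2 = np.sum((pop[i] - pop[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                pop[i] = np.clip(pop[i] + beta * (pop[j] - pop[i])
                                 + alpha * rng.normal(size=3), 0, 1)
                f[i] = cost(pop[i])

best = lo + pop[np.argmin([cost(u) for u in pop])] * (hi - lo)
print("optimized setpoints (lux, degC, ppm):", best.round(1))
# these setpoints would then feed the fuzzy controllers driving the actuators
```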