• Title/Summary/Keyword: artificial intelligence techniques


Classification Model and Crime Occurrence City Forecasting Based on Random Forest Algorithm

  • KANG, Sea-Am;CHOI, Jeong-Hyun;KANG, Min-soo
    • Korean Journal of Artificial Intelligence / v.10 no.1 / pp.21-25 / 2022
  • Korea has relatively less crime than other countries, but the crime rate is steadily increasing. Many people think the crime rate is decreasing, yet the crime arrest rate has increased. The goal is to examine the relationship between CCTV and the crime rate as a way to lower crime, and to identify the correlation between areas with CCTV and areas without it. Since crime can happen at any time, a random forest algorithm is considered appropriate. We also plan to use the machine learning random forest algorithm to reduce the risk of overfitting, reduce the required training time, and verify high-level accuracy. The goal is to identify the relationship between CCTV and crime occurrence by creating a crime prevention algorithm using machine learning random forest techniques. Assuming that no crime occurs in areas without CCTV, the study compares the crime rate between the areas where the most crimes occur and the areas where there are none, and predicts areas with many crimes. The impact of CCTV on crime prevention and arrest can be interpreted in part as a comprehensive effect, and the purpose is to identify the areas and frequency of frequent crimes by comparing periods with and without CCTV.
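The random-forest setup this abstract describes can be sketched with scikit-learn. The district features below (CCTV count, population density, lighting) and the synthetic labels are illustrative assumptions, not the paper's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical district features: CCTV count, population density, lighting index
X = rng.random((n, 3))
# Toy label: districts with few CCTVs are marked "high crime" in this synthetic data
y = (X[:, 0] < 0.4).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_tr, y_tr)
accuracy = model.score(X_te, y_te)  # fraction of test districts classified correctly
```

Averaging many decorrelated trees is what gives the random forest its resistance to overfitting relative to a single decision tree.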

A Study on Image Labeling Technique for Deep-Learning-Based Multinational Tanks Detection Model

  • Kim, Taehoon;Lim, Dongkyun
    • International Journal of Internet, Broadcasting and Communication / v.14 no.4 / pp.58-63 / 2022
  • Recently, the improvement in computational processing ability due to the rapid development of computing technology has greatly advanced the field of artificial intelligence, and research applying it in various domains is active. In particular, in the national defense field, attention is paid to intelligent recognition among machine learning techniques, and efforts are being made to develop object identification and monitoring systems using artificial intelligence. To this end, various image processing technologies and object identification algorithms are applied to create a model that can identify friendly and enemy weapon systems and personnel in real time. In this paper, we conducted image processing and object identification focused on tanks among various weapon systems. We first processed the tank images using a convolutional neural network, a deep learning technique, examined the resulting feature maps, and derived the characteristics of the tanks crucial for learning. Then, using the YOLOv5 network, a CNN-based object detection network, a model trained by labeling the entire tank and a model trained by labeling only the turret were created and the results were compared. The model and labeling technique proposed in this paper can identify the type of tank more accurately and contribute to the intelligent recognition systems to be developed in the future.
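The feature-map step mentioned above can be illustrated with a minimal 2D convolution in NumPy. The Sobel-style kernel and the toy 8x8 image are assumptions for illustration, not the paper's YOLOv5 pipeline:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution producing one feature map."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 8x8 "image" with a vertical edge at column 4
img = np.zeros((8, 8))
img[:, 4:] = 1.0
# Sobel-style kernel that responds to vertical edges
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
fmap = conv2d(img, sobel_x)  # strongest responses where the edge sits
```

In a trained CNN the kernels are learned rather than hand-chosen, but inspecting feature maps like this one is how the abstract's "important characteristics" of the tanks would be examined.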

Performance Analysis of Speech Recognition Model based on Neuromorphic Architecture of Speech Data Preprocessing Technique (음성 데이터 전처리 기법에 따른 뉴로모픽 아키텍처 기반 음성 인식 모델의 성능 분석)

  • Cho, Jinsung;Kim, Bongjae
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.3 / pp.69-74 / 2022
  • SNNs (Spiking Neural Networks) operating on neuromorphic architectures were created by mimicking human neural networks. Neuromorphic computing based on a neuromorphic architecture requires relatively less power than typical GPU-based deep learning techniques. For this reason, research to support various artificial intelligence models on neuromorphic architectures is actively taking place. This paper presents a performance analysis of a neuromorphic-architecture-based speech recognition model according to the speech data preprocessing technique used. In the experiments, the model achieved up to 84% speech recognition accuracy when the speech data were preprocessed using the Fourier transform. This confirms that speech recognition services based on a neuromorphic architecture can be utilized effectively.
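A sketch of the kind of Fourier-transform preprocessing the abstract evaluates: frame the waveform and take a magnitude FFT per frame. The frame length, hop size, sample rate, and the 440 Hz test tone are illustrative choices, not the paper's settings:

```python
import numpy as np

def fft_features(signal, frame_len=256, hop=128):
    """Frame a waveform and take the magnitude FFT of each frame."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

sr = 8000                                   # assumed sample rate
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)          # 1-second toy "speech" signal
spec = fft_features(tone)
peak_bin = int(spec[0].argmax())            # bin 14 ~ 437.5 Hz at this resolution
```

Features like these would then be encoded into spike trains before being fed to the SNN.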

Character Recognition and Search for Media Editing (미디어 편집을 위한 인물 식별 및 검색 기법)

  • Park, Yong-Suk;Kim, Hyun-Sik
    • Journal of Broadcast Engineering / v.27 no.4 / pp.519-526 / 2022
  • Identifying and searching for characters appearing in scenes during multimedia video editing is an arduous and time-consuming process. Applying artificial intelligence to labor-intensive media editing tasks can greatly reduce media production time and improve the efficiency of the creative process. In this paper, a method is proposed which combines existing artificial-intelligence-based techniques to automate character recognition and search tasks for video editing. Object detection, face detection, and pose estimation are used for character localization, while face recognition and color space analysis are used to extract unique representation information.
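One simple form of the color-space analysis mentioned above is a per-channel color histogram used as an appearance signature. The "clothing" patches below are synthetic stand-ins, not the paper's actual representation:

```python
import numpy as np

def color_signature(patch, bins=8):
    """Per-channel color histogram, normalized to sum to 1."""
    hist = np.concatenate([
        np.histogram(patch[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(1)
red_patch = np.stack([rng.integers(200, 256, (16, 16)),   # mostly-red pixels
                      rng.integers(0, 50, (16, 16)),
                      rng.integers(0, 50, (16, 16))], axis=-1)
blue_patch = np.stack([rng.integers(0, 50, (16, 16)),
                       rng.integers(0, 50, (16, 16)),
                       rng.integers(200, 256, (16, 16))], axis=-1)  # mostly blue
sig_red, sig_blue = color_signature(red_patch), color_signature(blue_patch)
distance = np.abs(sig_red - sig_blue).sum()  # large L1 distance: different characters
```

Comparing such signatures lets an editor search for every scene in which a visually similar character appears.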

Efficient Large Dataset Construction using Image Smoothing and Image Size Reduction

  • Jaemin HWANG;Sac LEE;Hyunwoo LEE;Seyun PARK;Jiyoung LIM
    • Korean Journal of Artificial Intelligence / v.11 no.1 / pp.17-24 / 2023
  • With the continuous growth in the amount of data collected and analyzed, deep learning has become increasingly popular for extracting meaningful insights from various fields. However, hardware limitations pose a challenge for achieving meaningful results with limited data. To address this challenge, this paper proposes an algorithm that leverages the characteristics of convolutional neural networks (CNNs) to reduce the size of image datasets by 20% by smoothing the images and shrinking their size using color elements. The proposed algorithm reduces the learning time and, as a result, the computational load on hardware. The experiments conducted in this study show that the proposed method achieves effective learning with similar or slightly higher accuracy than the original dataset while reducing computational and time costs. This color-centric dataset construction method using image smoothing techniques can lead to more efficient learning on CNNs. The method can be applied in various applications, such as image classification and recognition, and can contribute to more efficient and cost-effective deep learning. This paper presents a promising approach to reducing the computational load and time costs associated with deep learning and provides meaningful results with limited data, enabling practitioners to apply deep learning to a broader range of applications.
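A minimal sketch of a smoothing-then-shrinking pipeline of the kind the abstract describes. The box blur, the 0.8 scale factor, and the random stand-in image are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def smooth(img, k=3):
    """k x k box blur; the (k-1) edge rows/columns are trimmed."""
    out = np.zeros((img.shape[0] - k + 1, img.shape[1] - k + 1, img.shape[2]))
    for i in range(k):
        for j in range(k):
            out += img[i:i + out.shape[0], j:j + out.shape[1]]
    return out / (k * k)

def shrink(img, factor=0.8):
    """Nearest-neighbour downscale to `factor` of the original height/width."""
    h, w = int(img.shape[0] * factor), int(img.shape[1] * factor)
    rows = (np.arange(h) / factor).astype(int)
    cols = (np.arange(w) / factor).astype(int)
    return img[rows][:, cols]

img = np.random.default_rng(2).random((50, 50, 3))  # stand-in RGB image
small = shrink(smooth(img))                          # 50x50 -> 48x48 -> 38x38
```

Smoothing first suppresses high-frequency detail that downsampling would otherwise alias, so the shrunken dataset keeps the color structure a CNN learns from.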

Development and Evaluation of Flood Prediction Models Using Artificial Intelligence Techniques (인공지능 기법을 활용한 홍수예측모델 개발 및 평가 - 한강수계 댐을 중심으로 -)

  • Cho, Hemie;Uranchimeg, Sumiya;Yoo, Je-Ho;Kwon, Hyun-Han
    • Proceedings of the Korea Water Resources Association Conference / 2022.05a / pp.131-131 / 2022
  • Due to climate change, the variability of extreme rainfall is increasing, and damage from heavy rain exceeding design frequencies is growing. Existing physically based flood prediction models have conceptual and structural constraints and are limited in accounting for the uncertainty of the rainfall-runoff relationship arising from diverse basin and hydrometeorological conditions. In particular, because the observational record is built from a limited number of flood events, predictive capability for new flood events is inevitably poor. Therefore, alongside existing physics-based flood prediction, flood prediction models incorporating deep learning need to be developed and improved. In this study, artificial intelligence (AI) technologies used in various fields were comprehensively reviewed, and AI techniques were adopted in consideration of their applicability and reliability for flood prediction. Some of the dams in the Han River basin were selected, the hydrological and meteorological data of the target dams were preprocessed, and an AI-based flood prediction model was constructed and optimized. The reliability of the flood prediction model was enhanced by evaluating predictive performance from multiple angles using various predictors and model configurations. Overall, good results were obtained, and the smaller the basin area, the better the results. This is judged to be because larger basins embed more complex rainfall-runoff processes; for large basins, the model should be improved by introducing data beyond that used in this study. While statistical and machine learning models have often been applied in hydrological prediction research, the use of deep learning techniques is meaningful as a new attempt.


Application of adaptive neuro-fuzzy system in prediction of nanoscale and grain size effects on formability

  • Nan Yang;Meldi Suhatril;Khidhair Jasim Mohammed;H. Elhosiny Ali
    • Advances in nano research / v.14 no.2 / pp.155-164 / 2023
  • Grain size in sheet metals is one of the main parameters determining formability. Grain size control in industry requires delicate process control and equipment. In the present study, the effects of grain size on the formability of steel sheets are investigated. Experimental investigation of the effect of grain size is a cumbersome method which, due to the existence of many other effective parameters, is not conclusive in some cases. On the other hand, since the average grain size of a crystalline material is a statistical parameter, traditional methods are not sufficient for finding the optimum grain size to maximize formability. Therefore, design of experiments (DoE) and artificial intelligence (AI) methods are coupled in this study to find the optimum conditions for formability in terms of grain size and to predict the forming limits of sheet metals under bi-stretch loading conditions. In this regard, a set of experiments was conducted to provide initial data for training and testing the DoE and AI models. Afterwards, the optimum grain size is calculated using the response surface method (RSM). Moreover, the trained neural network is used to predict formability at the calculated optimum condition, and the results are compared to the experimental results. The findings of the present study show that DoE and AI can be a great aid in the design, determination, and prediction of the optimum grain size for maximizing sheet formability.
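The response-surface step can be illustrated by fitting a quadratic to grain-size/formability pairs and taking its vertex. The data points below are made up for illustration, not the study's measurements:

```python
import numpy as np

# Made-up (grain size, forming limit) measurements
grain = np.array([5.0, 10.0, 20.0, 40.0, 60.0])   # grain size, micrometres
limit = np.array([0.30, 0.42, 0.48, 0.40, 0.28])  # toy forming-limit values

c2, c1, c0 = np.polyfit(grain, limit, 2)  # quadratic response surface
optimum = -c1 / (2 * c2)                  # vertex: grain size maximizing formability
```

In the study itself a trained neural network then predicts formability at this optimum so the prediction can be checked against experiment.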

Network Traffic Measurement Analysis using Machine Learning

  • Hae-Duck Joshua Jeong
    • Korean Journal of Artificial Intelligence / v.11 no.2 / pp.19-27 / 2023
  • In recent times, an exponential increase in Internet traffic has been observed as a result of the advancing development of the Internet of Things, mobile networks with sensors, and communication functions within various devices. Further, the COVID-19 pandemic has inevitably led to an explosion of social network traffic. Within this context, considerable attention has been drawn to research on network traffic analysis based on machine learning. In this paper, we design and develop a new machine learning framework for network traffic analysis whereby normal and abnormal traffic is distinguished from one another. To achieve this, we combine well-known machine learning algorithms with network traffic analysis techniques. Using the KDD CUP'99 dataset, one of the most widely used, in the Weka and Apache Spark environments, we compare and investigate results obtained from time-series analysis of various aspects, including malicious code, feature extraction, data formalization, and network traffic measurement tool implementation. Experimental analysis showed that while both the logistic regression and support vector machine algorithms performed excellently, logistic regression performed better. The quantitative analysis results show that the proposed machine learning framework is reliable and practical, and its performance is compared and analyzed against prior work. In addition, we found that the framework exhibits much faster processing in the Apache Spark environment than in Weka when larger datasets are used to create and classify machine learning models.
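The logistic-regression vs. support-vector-machine comparison can be sketched with scikit-learn. The synthetic flow features and the labeling rule below stand in for KDD CUP'99 records and are assumptions, not the paper's data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
n = 1000
# Synthetic flow features (e.g. duration, bytes, error rate, connection count)
X = rng.random((n, 4))
# Toy rule: flows with high combined byte volume and error rate are "abnormal"
y = ((X[:, 1] + X[:, 2]) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
acc_lr = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)
acc_svm = LinearSVC().fit(X_tr, y_tr).score(X_te, y_te)
```

Comparing held-out accuracy of the two models on identical splits mirrors the paper's evaluation, where logistic regression came out ahead.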

Over the Rainbow: How to Fly over with ChatGPT in Tourism

  • Taekyung Kim
    • Journal of Smart Tourism / v.3 no.1 / pp.41-47 / 2023
  • Tourism and hospitality have encountered significant changes in recent years as a result of the rapid development of information technology (IT). Customers now expect more expedient services and customized travel experiences, which has intensified competition among service providers. To meet these demands, businesses have adopted sophisticated IT applications such as ChatGPT, which enables real-time interaction with consumers and provides recommendations based on their preferences. This paper focuses on the AI support-prompt middleware system, which functions as a mediator between generative AI and human users, and discusses two operational rules associated with it. The first rule is the Information Processing Rule, which requires the middleware system to determine appropriate responses based on the context of the conversation using natural language processing techniques. The second rule is the Information Presentation Rule, which requires the middleware system to choose an appropriate language style and conversational attitude based on the gravity of the topic or the conversational context. These rules are essential for guaranteeing that the middleware system can understand user intent and respond appropriately in various conversational contexts. This study contributes to the planning and analysis of service design by deriving design rules for middleware systems that incorporate artificial intelligence into tourism services. By comprehending the operation of AI support-prompt middleware systems, service providers can design more effective and efficient AI-driven tourism services, thereby improving the customer experience and obtaining a market advantage.
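The two operational rules could be prototyped as a pair of dispatch functions. The keywords, intent names, and style labels below are invented for illustration; a real system would use natural language processing rather than keyword matching:

```python
def information_processing_rule(message: str) -> str:
    """Information Processing Rule: pick a response intent from the
    conversational context (keyword matching stands in for real NLP here)."""
    text = message.lower()
    if "refund" in text or "complaint" in text:
        return "handle_complaint"
    if "recommend" in text:
        return "suggest_itinerary"
    return "small_talk"

def information_presentation_rule(intent: str) -> dict:
    """Information Presentation Rule: choose language style and attitude
    based on the gravity of the topic."""
    if intent == "handle_complaint":
        return {"style": "formal", "attitude": "empathetic"}
    return {"style": "casual", "attitude": "friendly"}

intent = information_processing_rule("Can you recommend a day trip near Seoul?")
presentation = information_presentation_rule(intent)
```

The middleware would then pass both the intent and the chosen presentation style to the generative model as part of its prompt.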

Deep Learning Based Radiographic Classification of Morphology and Severity of Peri-implantitis Bone Defects: A Preliminary Pilot Study

  • Jae-Hong Lee;Jeong-Ho Yun
    • Journal of Korean Dental Science / v.16 no.2 / pp.156-163 / 2023
  • Purpose: The aim of this study was to evaluate the feasibility of deep learning techniques for classifying the morphology and severity of peri-implantitis bone defects based on periapical radiographs. Materials and Methods: Using a pre-trained and fine-tuned ResNet-50 deep learning algorithm, the morphology and severity of peri-implantitis bone defects on periapical radiographs were classified into six groups (class I/II and slight/moderate/severe). Accuracy, precision, recall, and F1 scores were calculated to measure performance. Result: A total of 971 dental images were included in this study. Deep-learning-based classification achieved an accuracy of 86.0%, with precision, recall, and F1 score values of 84.45%, 81.22%, and 82.80%, respectively. The class II and moderate groups had the highest F1 scores (92.23%), whereas the class I and severe groups had the lowest F1 scores (69.33%). Conclusion: The artificial-intelligence-based deep learning technique is promising for classifying the morphology and severity of peri-implantitis. However, further studies are required to validate its feasibility in clinical practice.
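The per-class precision/recall/F1 bookkeeping the study reports can be reproduced in a few lines of plain Python. The six-group label names and the toy predictions below are invented for illustration:

```python
def per_class_scores(y_true, y_pred, label):
    """Precision, recall, and F1 for one class, computed from scratch."""
    tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
    fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
    fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Invented labels drawn from the six groups (class I/II x slight/moderate/severe)
y_true = ["I-slight", "I-slight", "II-moderate", "II-moderate", "I-severe", "I-severe"]
y_pred = ["I-slight", "II-moderate", "II-moderate", "II-moderate", "I-severe", "I-slight"]
p, r, f1 = per_class_scores(y_true, y_pred, "II-moderate")
```

Reporting F1 per group, as the study does, exposes class imbalances that a single overall accuracy figure would hide.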