• Title/Summary/Keyword: AI Image Recognition

Search Results: 135

Development of a Slope Condition Analysis System using IoT Sensors and AI Camera (IoT 센서와 AI 카메라를 융합한 급경사지 상태 분석 시스템 개발)

  • Seungjoo Lee;Kiyen Jeong;Taehoon Lee;YoungSeok Kim
    • Journal of the Korean Geosynthetics Society / v.23 no.2 / pp.43-52 / 2024
  • Recent abnormal climate conditions have increased the risk of slope collapses, which frequently cause significant loss of life and property because early prediction and warning dissemination are absent. In this paper, we develop a slope condition analysis system that combines IoT sensors with an AI-based camera to assess the condition of slopes. To build the system, we designed the hardware and firmware of measurement sensors suited to the ground conditions of slopes, designed AI-based image analysis algorithms, and developed prediction and warning solutions and systems. We aimed to minimize sensor data errors by integrating IoT sensor data with AI camera image analysis, ultimately enhancing data reliability. We then evaluated the accuracy (reliability) of the system on actual slopes. As a result, sensor measurement errors were kept within 0.1°, and the data transmission rate exceeded 95%. Moreover, the AI-based image analysis system demonstrated nighttime partial recognition rates of over 99%, indicating excellent performance even in low-light conditions. This research is expected to be applicable to slope condition analysis and smart maintenance management across various fields of Social Overhead Capital (SOC) facilities.
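The abstract does not give implementation details; as a rough sketch of the fusion idea (cross-checking IoT tilt-sensor readings against the AI camera's anomaly confidence before issuing a warning), the Python snippet below uses thresholds, field names, and decision logic that are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of fusing IoT tilt-sensor data with AI camera detections
# to reduce false alarms before issuing a slope-collapse warning.
# Thresholds and field names are illustrative, not taken from the paper.
from dataclasses import dataclass
from statistics import mean

TILT_WARNING_DEG = 0.5       # assumed tilt-change threshold (degrees)
CAMERA_CONF_THRESHOLD = 0.8  # assumed minimum confidence for a visual anomaly

@dataclass
class SensorReading:
    tilt_deg: float   # inclinometer tilt change relative to baseline
    ok: bool          # transmission/self-test flag

def issue_warning(readings: list[SensorReading], camera_anomaly_conf: float) -> bool:
    """Warn only when both modalities agree, so a single noisy sensor
    or a spurious detection does not trigger an alert on its own."""
    valid = [r.tilt_deg for r in readings if r.ok]
    if not valid:
        # No trustworthy sensor data: fall back to the camera alone.
        return camera_anomaly_conf >= CAMERA_CONF_THRESHOLD
    sensor_alarm = mean(valid) >= TILT_WARNING_DEG
    camera_alarm = camera_anomaly_conf >= CAMERA_CONF_THRESHOLD
    return sensor_alarm and camera_alarm

if __name__ == "__main__":
    readings = [SensorReading(0.62, True), SensorReading(0.55, True), SensorReading(9.9, False)]
    print(issue_warning(readings, camera_anomaly_conf=0.91))  # True
```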

High-Frequency Interchange Network for Multispectral Object Detection (다중 스펙트럼 객체 감지를 위한 고주파 교환 네트워크)

  • Park, Seon-Hoo;Yun, Jun-Seok;Yoo, Seok Bong;Han, Seunghwoi
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.8 / pp.1121-1129 / 2022
  • Many object recognition studies use RGB images. However, RGB images captured in dark environments, or where the target objects are occluded by other objects, lead to poor recognition performance. IR images, on the other hand, provide strong recognition performance in these environments because they sense infrared radiation rather than visible light. In this paper, we propose an RGB-IR fusion model, the high-frequency interchange network (HINet), which improves object recognition performance by combining only the strengths of RGB-IR image pairs. HINet connects two object detection models through a mutual high-frequency transfer (MHT) module that exchanges the advantages of the RGB and IR images. MHT converts each RGB-IR image pair into the discrete cosine transform (DCT) spectral domain to extract high-frequency information, which is then passed to the other network and used to improve recognition performance. Experimental results show the superiority of the proposed network and demonstrate performance improvements on the multispectral object recognition task.
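As a rough illustration of the DCT-based high-frequency extraction that MHT performs (not the authors' implementation; the low-frequency cutoff and input size are assumptions), the following sketch uses SciPy's 2-D DCT:

```python
# Minimal sketch of extracting a high-frequency component from an image
# via the 2-D discrete cosine transform, in the spirit of HINet's MHT module.
# The low-frequency cutoff is an assumption, not a value from the paper.
import numpy as np
from scipy.fft import dctn, idctn

def high_frequency_component(img: np.ndarray, cutoff: int = 8) -> np.ndarray:
    """Return the image with its lowest `cutoff` x `cutoff` DCT
    coefficients (the low-frequency content) removed."""
    spectrum = dctn(img, type=2, norm="ortho")
    spectrum[:cutoff, :cutoff] = 0.0          # suppress low frequencies
    return idctn(spectrum, type=2, norm="ortho")

if __name__ == "__main__":
    rgb_gray = np.random.rand(256, 256)   # stand-in for an RGB-derived grayscale patch
    ir_gray = np.random.rand(256, 256)    # stand-in for the paired IR patch
    # Each branch would pass its high-frequency map to the other network.
    hf_rgb = high_frequency_component(rgb_gray)
    hf_ir = high_frequency_component(ir_gray)
    print(hf_rgb.shape, hf_ir.shape)
```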

Presenting Practical Approaches for AI-specialized Fields in Gwangju Metro-city (광주광역시의 AI 특화분야를 위한 실용적인 접근 사례 제시)

  • Cha, ByungRae;Cha, YoonSeok;Park, Sun;Shin, Byeong-Chun;Kim, JongWon
    • Smart Media Journal / v.10 no.1 / pp.55-62 / 2021
  • We applied semi-supervised learning, transfer learning, and federated learning as examples of AI use cases applicable to the three major industries of Gwangju Metro-city (the automobile, energy, and AI/healthcare industries), and established an ML strategy for AI services in these industries. Based on this strategy, practical approaches are suggested: the semi-supervised learning approach is used for automobile image recognition, the transfer learning approach is used for diabetic retinopathy detection in the healthcare field, and the federated learning approach is used to predict electricity demand. These approaches were tested on hardware such as the Raspberry Pi single-board computer, Jetson Nano, and Intel i7, and the validity of the practical approaches was verified.
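As a hedged illustration of the transfer-learning approach mentioned above, the sketch below reuses an ImageNet-pretrained backbone and retrains only a new classification head. The backbone choice (ResNet-18), two-class head, and hyperparameters are assumptions for illustration and require a recent torchvision release; they are not the configuration used in the paper.

```python
# Illustrative transfer-learning setup: freeze a pretrained backbone and
# train only a new head for a small target dataset (e.g. a two-class
# diabetic-retinopathy screen). Model and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import models

def build_transfer_model(num_classes: int = 2) -> nn.Module:
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for param in model.parameters():       # freeze the pretrained backbone
        param.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head
    return model

if __name__ == "__main__":
    model = build_transfer_model()
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    dummy_batch = torch.randn(4, 3, 224, 224)
    logits = model(dummy_batch)
    print(logits.shape)  # torch.Size([4, 2])
```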

A Vehicle Recognition Method based on Radar and Camera Fusion in an Autonomous Driving Environment

  • Park, Mun-Yong;Lee, Suk-Ki;Shin, Dong-Jin
    • International journal of advanced smart convergence / v.10 no.4 / pp.263-272 / 2021
  • With driving safety being paramount in the development and commercialization of autonomous vehicles, AI and big data-based algorithms are being studied to enhance and optimize the recognition and detection of various static and dynamic vehicles. Many studies attempt to recognize the same vehicle by exploiting the complementary advantages of radar and cameras, but they either do not use deep learning-based image processing or, owing to radar performance limitations, can only associate targets at short range. Radar can detect vehicles reliably at night or in fog, but determining the object type from RCS values alone is inaccurate, so accurate classification requires camera images. Therefore, we propose a fusion-based vehicle recognition method that builds datasets collected from radar and camera devices, computes the error between them, and recognizes the detections as the same target.
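The abstract does not specify the association procedure; as a hypothetical sketch of one common radar-camera fusion step (projecting radar points into the image plane and gating them against camera bounding-box centers), the code below uses a toy projection matrix and pixel gate that are placeholders, not values from the paper.

```python
# Hypothetical sketch of radar-camera target association: project each radar
# point into the image plane and match it to the nearest camera bounding box
# whose center lies within a distance gate.
import numpy as np

GATE_PX = 50.0  # assumed association gate in pixels

def project_radar_point(xyz: np.ndarray, P: np.ndarray) -> np.ndarray:
    """Project a 3-D radar point to pixel coordinates with a 3x4 projection matrix P."""
    uvw = P @ np.append(xyz, 1.0)
    return uvw[:2] / uvw[2]

def associate(radar_points: np.ndarray, boxes: np.ndarray, P: np.ndarray) -> list[tuple[int, int]]:
    """Return (radar_index, box_index) pairs judged to be the same target."""
    matches = []
    centers = np.stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                        (boxes[:, 1] + boxes[:, 3]) / 2], axis=1)
    for i, point in enumerate(radar_points):
        uv = project_radar_point(point, P)
        dists = np.linalg.norm(centers - uv, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < GATE_PX:
            matches.append((i, j))
    return matches

if __name__ == "__main__":
    K = np.array([[800.0, 0.0, 640.0], [0.0, 800.0, 360.0], [0.0, 0.0, 1.0]])
    P = np.hstack([K, np.zeros((3, 1))])              # toy intrinsics, camera at radar origin
    radar = np.array([[0.1, 0.0, 20.0], [3.0, 0.0, 15.0]])
    boxes = np.array([[600, 320, 700, 420], [100, 300, 200, 380]])  # x1, y1, x2, y2
    print(associate(radar, boxes, P))                 # [(0, 0)]
```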

A Comparative Study on Artificial Intelligence Model Performance between Image and Video Recognition in the Fire Detection Area (화재 탐지 영역의 이미지와 동영상 인식 사이 인공지능 모델 성능 비교 연구)

  • Jeong Rok Lee;Dae Woong Lee;Sae Hyun Jeong;Sang Jeong
    • Journal of the Society of Disaster Information / v.19 no.4 / pp.968-975 / 2023
  • Purpose: We confirm that the false positive rate for flames/smoke is high when detecting fires from images, and propose a method and dataset for recognizing and classifying fire situations to reduce this false detection rate. Method: Using video as learning data, features of the fire situation were extracted and applied to a classification model. For evaluation, the performance of YOLOv8 and SlowFast was compared and analyzed using the fire dataset constructed by the National Information Society Agency (NIA). Result: YOLO's detection performance varies sensitively with the background, and it failed to detect fires properly when the fire was too large or too small. Because SlowFast learns along the time axis of the video, we confirmed that it detects fires well even when the shape of the atypical object cannot be clearly inferred because the surroundings are blurry or bright. Conclusion: A video-based artificial intelligence detection model was confirmed to be more appropriate for fire detection than an image-based one.
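To illustrate the clip-based side of this comparison, the hedged sketch below runs a 3-D CNN from torchvision (r3d_18, used here only as a stand-in for SlowFast, whose setup is more involved) on a stack of sampled frames, in contrast to frame-by-frame image detection. The frame count, input size, and binary fire/no-fire head are assumptions, not the paper's configuration.

```python
# Illustrative clip-based inference with a 3-D CNN (torchvision's r3d_18,
# a stand-in for SlowFast). A clip tensor is shaped (batch, channels,
# frames, height, width), so the model sees the time axis jointly.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18, R3D_18_Weights

def build_clip_classifier(num_classes: int = 2) -> nn.Module:
    model = r3d_18(weights=R3D_18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # fire / no fire
    return model

if __name__ == "__main__":
    model = build_clip_classifier().eval()
    clip = torch.randn(1, 3, 16, 112, 112)   # 16 sampled frames of a video segment
    with torch.no_grad():
        probs = torch.softmax(model(clip), dim=1)
    print(probs)
```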

Bioimage Analyses Using Artificial Intelligence and Future Ecological Research and Education Prospects: A Case Study of the Cichlid Fishes from Lake Malawi Using Deep Learning

  • Joo, Deokjin;You, Jungmin;Won, Yong-Jin
    • Proceedings of the National Institute of Ecology of the Republic of Korea / v.3 no.2 / pp.67-72 / 2022
  • Ecological research relies on interpreting large amounts of visual data obtained from extensive wildlife surveys, but such large-scale image interpretation is costly and time-consuming. Using artificial intelligence (AI) machine learning models, especially convolutional neural networks (CNNs), it is possible to streamline this manual image work and to protect wildlife while recording and predicting behavior. Ecological research using deep learning-based object recognition spans purposes such as detecting and identifying wild animal species and locating poachers in real time. These advances in applying AI can enable efficient management of endangered wildlife, animal detection in diverse environments, and real-time analysis of image data collected by unmanned aerial vehicles. Furthermore, there is a growing need for school education and public engagement on biodiversity and environmental issues using AI. School education and citizen science related to ecological activities using AI technology can raise environmental awareness and strengthen knowledge and problem-solving skills in scientific and research processes. Against this background, we compare the results of our early 2013 study, which automatically identified African cichlid fish species from photographs, with the results of a reanalysis using a CNN-based deep learning method. Using the PyTorch and PyTorch Lightning frameworks, we achieve an accuracy of 82.54% and an F1-score of 0.77 with minimal programming and data preprocessing effort. This is a significant improvement over our previous machine learning methods, which required heavy feature engineering and reached 78% accuracy.
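The abstract highlights how little boilerplate a PyTorch Lightning classifier needs; the minimal sketch below shows that style with a placeholder CNN, class count, and random stand-in data (the paper's cichlid dataset and exact model are not reproduced here).

```python
# Minimal PyTorch Lightning image-classification sketch. Backbone, class
# count, and data pipeline are placeholders, not the paper's configuration.
import pytorch_lightning as pl
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

class FishClassifier(pl.LightningModule):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, num_classes),
        )

    def forward(self, x):
        return self.net(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

if __name__ == "__main__":
    # Random stand-in data; replace with an image DataLoader for real use.
    x = torch.randn(64, 3, 64, 64)
    y = torch.randint(0, 10, (64,))
    loader = DataLoader(TensorDataset(x, y), batch_size=16)
    pl.Trainer(max_epochs=1, logger=False, enable_checkpointing=False).fit(FishClassifier(), loader)
```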

Autonomous Vehicles as Safety and Security Agents in Real-Life Environments

  • Al-Absi, Ahmed Abdulhakim
    • International journal of advanced smart convergence / v.11 no.2 / pp.7-12 / 2022
  • Safety and security are the topmost priorities in every environment. With the aid of Artificial Intelligence (AI), many objects are becoming more intelligent, conscious of, and curious about their surroundings. Recent scientific breakthroughs in autonomous vehicle design and development, powered by AI, networks of sensors, and the rapid growth of the Internet of Things (IoT), could be utilized to maintain safety and security in our environments. AI based on deep learning architectures and models, such as Deep Neural Networks (DNNs), is being applied worldwide in automotive fields such as computer vision, natural language processing, sensor fusion, object recognition, and autonomous driving projects. These capabilities are well known for their identification, detection, and tracking abilities. With sensors, cameras, GPS, RADAR, LIDAR, and on-board computers embedded in many of the autonomous vehicles being developed, these vehicles can accurately map their positions and their proximity to everything around them. In this paper, we explore in detail several ways in which the capabilities embedded in these autonomous vehicles, such as sensor fusion, computer vision and image processing, natural language processing, and activity awareness, could be tapped and utilized to safeguard our lives and environment.

Similarity Analysis Between SAR Target Images Based on Siamese Network (Siamese 네트워크 기반 SAR 표적영상 간 유사도 분석)

  • Park, Ji-Hoon
    • Journal of the Korea Institute of Military Science and Technology / v.25 no.5 / pp.462-475 / 2022
  • Unlike the field of electro-optical (EO) image analysis, there has been less interest in similarity metrics between synthetic aperture radar (SAR) target images. A reliable and objective similarity analysis for SAR target images is expected to enable verification of the SAR measurement process or to provide guidelines for target CAD modeling that can be used to simulate realistic SAR target images. For this purpose, this paper presents a similarity analysis method based on a Siamese network that quantifies subjective assessment through distance learning on similar and dissimilar SAR target image pairs. The proposed method is applied to MSTAR SAR target images at slightly different depression angles, and the resulting metrics are compared and analyzed against a qualitative evaluation. Since image similarity is related to recognition performance, the capacity of the proposed method for target recognition is further examined experimentally with a confusion matrix.
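As an illustrative sketch of the Siamese distance-learning idea described above (not the paper's architecture), the code below embeds two SAR chips with a shared encoder and applies a contrastive loss; the network size, margin, and chip size are assumptions.

```python
# Illustrative Siamese similarity model: a shared encoder embeds two SAR
# image chips; a contrastive loss pulls similar pairs together and pushes
# dissimilar pairs apart. All hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEncoder(nn.Module):
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )

    def forward(self, a: torch.Tensor, b: torch.Tensor):
        return self.features(a), self.features(b)

def contrastive_loss(za, zb, label, margin: float = 1.0):
    """label = 1 for similar pairs, 0 for dissimilar pairs."""
    dist = F.pairwise_distance(za, zb)
    return torch.mean(label * dist.pow(2) +
                      (1 - label) * F.relu(margin - dist).pow(2))

if __name__ == "__main__":
    model = SiameseEncoder()
    a, b = torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64)  # SAR chip stand-ins
    label = torch.randint(0, 2, (8,)).float()
    za, zb = model(a, b)
    print(contrastive_loss(za, zb, label))  # the learned distance serves as the similarity metric
```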

A Study on Interactive Talking Companion Doll Robot System Using Big Data for the Elderly Living Alone (빅데이터를 이용한 독거노인 돌봄 AI 대화형 말동무 아가야(AGAYA) 로봇 시스템에 관한 연구)

  • Song, Moon-Sun
    • The Journal of the Korea Contents Association / v.22 no.5 / pp.305-318 / 2022
  • We focused on the care effectiveness of interactive AI robots and developed an AI companion robot called 'Agaya' to contribute to more human-centered, personalized care. First, by applying P-TTS technology, intimacy can be maximized by letting the user select the voice of the person they want to hear. Second, the robot enables healing in one's own way through functions for storing and recalling good memories. Third, it is given five senses corresponding to the roles of eyes, nose, mouth, ears, and hands, in pursuit of better personalized services. Fourth, it attempts to incorporate technologies such as maintaining a warm temperature, aroma, sterilization and fine dust removal, and a convenient charging method. These features will expand the effective use of interactive robots by elderly people and contribute to building a positive image of the elderly, who can plan their remaining old age productively and independently.

A Study on the Development of AI-Based Fire Fighting Facility Design Technology through Image Recognition (이미지 인식을 통한 AI 기반 소방 시설 설계 기술 개발에 관한 연구)

  • Gi-Tae Nam;Seo-Ki Jun;Doo-Chan Choi
    • Journal of the Society of Disaster Information / v.18 no.4 / pp.883-890 / 2022
  • Purpose: In domestic fire fighting facility design, it is currently difficult to secure high-quality manpower due to low design fees and overheated competition between companies, which limits improvements in the fire safety performance of buildings. Accordingly, AI-based fire fighting design solutions were studied to solve these problems and secure leading fire engineering technologies. Method: The procedures required for basic design and implementation design were processed through AutoCAD, which is widely used in existing fire fighting design, and AI technology was applied through the YOLO v4 object recognition deep learning model. Result: Through the design process for fire fighting facilities, the required facilities were determined and drawing design was automated. In addition, by training on images of doors and pillars, the AI recognized these elements and implemented functions for selecting boundary areas and placing piping and fire fighting facilities. Conclusion: It was confirmed that artificial intelligence technology can reduce human and material resources when creating basic and implementation design drawings for building fire protection facilities, and that technology for AI-based fire fighting design was secured through this preliminary development.
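As a hypothetical sketch of the detection step and a simple placement rule, the code below runs a YOLO-family detector over a floor-plan image and proposes sprinkler points on a regular grid. The ultralytics YOLOv8 API is used only as a stand-in for the paper's YOLO v4; the "doors_pillars.pt" weights file, class names, drawing scale, and 3 m spacing are illustrative assumptions.

```python
# Hypothetical sketch: detect doors/pillars in a floor-plan image with a
# custom-trained detector, then lay out candidate sprinkler positions on a
# grid. Weights, scale, and spacing are placeholders, not the paper's values.
from ultralytics import YOLO

PIXELS_PER_METER = 50        # assumed drawing scale
SPRINKLER_SPACING_M = 3.0    # assumed coverage spacing

def detect_elements(image_path: str):
    model = YOLO("doors_pillars.pt")          # hypothetical custom-trained weights
    result = model(image_path)[0]
    return [(model.names[int(c)], box.tolist())
            for c, box in zip(result.boxes.cls, result.boxes.xyxy)]

def propose_sprinklers(width_px: int, height_px: int):
    """Lay out candidate sprinkler positions on a regular pixel grid."""
    step = int(SPRINKLER_SPACING_M * PIXELS_PER_METER)
    return [(x, y) for x in range(step // 2, width_px, step)
                   for y in range(step // 2, height_px, step)]

if __name__ == "__main__":
    print(detect_elements("floor_plan.png"))   # e.g. [("door", [x1, y1, x2, y2]), ...]
    print(len(propose_sprinklers(2000, 1500)))
```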