• Title/Summary/Keyword: AI machine vision

A technique for predicting the cutting points of fish for the target weight using AI machine vision

  • Jang, Yong-hun;Lee, Myung-sub
    • Journal of the Korea Society of Computer and Information / v.27 no.4 / pp.27-36 / 2022
  • In this paper, to improve working conditions at fish processing sites, we propose a method for predicting the cutting points of fish for a target weight using AI machine vision. The proposed method first photographs the top and front views of the input fish and performs image-based preprocessing. RANSAC (RANdom SAmple Consensus) is then used to extract the fish contour, and 3D external information about the fish is obtained through 3D modeling. Next, machine learning is performed on the extracted three-dimensional feature information and the measured weight information to generate a neural network model. The fish is then cut at the cutting point predicted by the proposed technique, and the weight of the cut piece is measured. We compared the measured weight with the target weight and evaluated the performance using metrics such as MAE (Mean Absolute Error) and MRE (Mean Relative Error); a sketch of this evaluation follows below. The obtained results indicate that an average error rate of less than 3% was achieved relative to the target weight. The proposed technique is expected to contribute greatly to the development of the fishery industry when linked to automation systems.
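
A minimal sketch of the MAE/MRE evaluation mentioned in the abstract above, assuming cut weights measured in grams; the function names and sample numbers are illustrative and not taken from the paper:

```python
# Hypothetical sketch of the target-weight evaluation; values are illustrative.
import numpy as np

def mae(target, measured):
    """Mean Absolute Error between target and measured cut weights (grams)."""
    target, measured = np.asarray(target, float), np.asarray(measured, float)
    return np.mean(np.abs(measured - target))

def mre(target, measured):
    """Mean Relative Error, expressed as a percentage of the target weight."""
    target, measured = np.asarray(target, float), np.asarray(measured, float)
    return np.mean(np.abs(measured - target) / target) * 100.0

# Illustrative numbers only (not from the paper).
target_weights   = [500.0, 500.0, 750.0, 1000.0]
measured_weights = [488.0, 512.0, 741.0, 1019.0]

print(f"MAE: {mae(target_weights, measured_weights):.1f} g")
print(f"MRE: {mre(target_weights, measured_weights):.2f} %")
```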

Future Trends of IoT, 5G Mobile Networks, and AI: Challenges, Opportunities, and Solutions

  • Park, Ji Su;Park, Jong Hyuk
    • Journal of Information Processing Systems / v.16 no.4 / pp.743-749 / 2020
  • The Internet of Things (IoT) is a growing technology, along with artificial intelligence (AI). Recently, a growing number of cases of developing knowledge services from information collected through sensor data have been reported. Communication is required to connect IoT and AI, and 5G mobile networks have spread widely in recent years. IoT, AI services, and 5G mobile networks can be configured as a sensor-mobile edge-server pipeline. The sensor does not send data directly to the server; instead, it sends data to the mobile edge for quick processing. The mobile edge then either processes the data immediately using AI technology or forwards it to the server for processing, with 5G mobile network technology used for this transmission (a minimal sketch of this flow follows below). This study therefore examines the challenges, opportunities, and solutions associated with each type of technology, addressing clustering, Hyperledger Fabric, data, security, machine vision, convolutional neural networks, IoT technology, and resource management of 5G mobile networks.
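
A minimal sketch of the sensor-mobile edge-server flow referenced above, where the edge handles confident inferences locally and forwards the rest to the server; the class names, threshold, and toy inference heuristic are assumptions for illustration, not details from the paper:

```python
# Illustrative sensor -> mobile edge -> server flow; all details are assumed.
from dataclasses import dataclass

@dataclass
class SensorReading:
    device_id: str
    payload: list  # raw sensor samples

class Server:
    def handle(self, reading: SensorReading) -> str:
        # Stand-in for full-scale, model-heavy processing in the cloud.
        return f"server: processed {len(reading.payload)} samples from {reading.device_id}"

class MobileEdge:
    """Processes data close to the sensor; falls back to the server when needed."""
    def __init__(self, server, confidence_threshold=0.8):
        self.server = server
        self.confidence_threshold = confidence_threshold

    def handle(self, reading: SensorReading) -> str:
        label, confidence = self._local_inference(reading)
        if confidence >= self.confidence_threshold:
            return f"edge: {label} ({confidence:.2f})"
        # Low confidence: forward over the (5G) backhaul for heavier processing.
        return self.server.handle(reading)

    def _local_inference(self, reading):
        # Stand-in for an on-device AI model.
        mean = sum(reading.payload) / len(reading.payload)
        return ("anomaly" if mean > 1.0 else "normal", 0.9 if abs(mean) > 2.0 else 0.5)

edge = MobileEdge(Server())
print(edge.handle(SensorReading("sensor-01", [2.5, 3.1, 2.8])))  # handled at the edge
print(edge.handle(SensorReading("sensor-02", [0.2, 0.4, 0.1])))  # forwarded to server
```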

Evaluation of Video Codec AI-based Multiple tasks (인공지능 기반 멀티태스크를 위한 비디오 코덱의 성능평가 방법)

  • Kim, Shin;Lee, Yegi;Yoon, Kyoungro;Choo, Hyon-Gon;Lim, Hanshin;Seo, Jeongil
    • Journal of Broadcast Engineering / v.27 no.3 / pp.273-282 / 2022
  • MPEG-VCM (Video Coding for Machines) aims to standardize video codecs for machines. VCM provides data sets and anchors, which serve as reference data for comparison, for several machine vision tasks including object detection, object segmentation, and object tracking. An evaluation template can be used to compare compression and machine vision task performance between the anchor data and proposed video codecs. However, this comparison is carried out separately for each machine vision task, and no method is currently provided for evaluating the performance of multiple machine vision tasks on a single bitstream. In this paper, we propose a performance evaluation method for a video codec for AI-based multi-tasks. Based on bits per pixel (BPP), which measures the size of a single bitstream, and mean average precision (mAP), which measures the accuracy of each task, we define three criteria for multi-task performance evaluation, namely the arithmetic average, weighted average, and harmonic average, and use them to calculate multi-task performance results from the mAP values (see the sketch below). In addition, because the dynamic range of mAP may differ greatly from task to task, the multi-task performance results are calculated and evaluated on normalized mAP values to prevent problems caused by these differences in dynamic range.
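
A sketch of the three aggregation criteria over normalized per-task mAP referenced above; the min-max normalization scheme, task weights, and sample mAP values are assumptions for illustration, not the paper's anchors:

```python
# Arithmetic, weighted, and harmonic averages over normalized per-task mAP.
import numpy as np

def normalize(map_values, map_min, map_max):
    """Min-max normalize per-task mAP so tasks with different dynamic ranges are comparable."""
    v = (np.asarray(map_values, float) - np.asarray(map_min, float)) / \
        (np.asarray(map_max, float) - np.asarray(map_min, float))
    return np.clip(v, 1e-6, 1.0)  # avoid division by zero in the harmonic mean

def arithmetic_avg(scores):
    return float(np.mean(scores))

def weighted_avg(scores, weights):
    weights = np.asarray(weights, float)
    return float(np.sum(scores * weights) / np.sum(weights))

def harmonic_avg(scores):
    return float(len(scores) / np.sum(1.0 / np.asarray(scores, float)))

# Illustrative mAP values for detection / segmentation / tracking on one bitstream.
map_values = [0.42, 0.31, 0.55]
scores = normalize(map_values, map_min=[0.0, 0.0, 0.0], map_max=[0.6, 0.5, 0.8])

print("arithmetic:", round(arithmetic_avg(scores), 3))
print("weighted:  ", round(weighted_avg(scores, [0.5, 0.25, 0.25]), 3))
print("harmonic:  ", round(harmonic_avg(scores), 3))
```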

Passive Ranging Based on Planar Homography in a Monocular Vision System

  • Wu, Xin-mei;Guan, Fang-li;Xu, Ai-jun
    • Journal of Information Processing Systems / v.16 no.1 / pp.155-170 / 2020
  • Passive ranging is a critical part of machine vision measurement. Most passive ranging methods based on machine vision use binocular technology, which requires strict hardware conditions and lacks universality. To measure the distance to an object placed on a horizontal plane, we present a passive ranging method based on a monocular vision system on a smartphone. Experiments show that, for a given abscissa, the ordinates of image points are linearly related to their actual imaging angles. Based on this principle, we first establish a depth extraction model by assuming a linear function and substituting the actual imaging angles and ordinates of special conjugate points into it (a sketch of this idea appears below). The vertical distance from the target object to the optical axis is then calculated according to the imaging principle of the camera, and the range is derived from the depth and this vertical distance. Experimental results show that ranging by this method is more accurate than methods based on binocular vision systems. The mean relative error of the depth measurement is 0.937% when the distance is within 3 m, and 1.71% when the distance is 3-10 m. Compared with other methods based on monocular vision systems, the proposed method does not require calibration before ranging and avoids errors caused by data fitting.
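
A sketch of the linear ordinate-to-angle depth idea referenced above, under assumed reference points and camera height; none of the numbers come from the paper:

```python
# Fit an assumed linear relation between image ordinate and imaging angle,
# then recover depth for a point on the ground plane. Values are illustrative.
import math
import numpy as np

# Reference (conjugate) points: ordinate in pixels, known imaging angle below horizontal.
calib_v     = np.array([400.0, 600.0, 800.0])
calib_angle = np.array([math.radians(10), math.radians(20), math.radians(30)])

# Fit angle = a * v + b by least squares (the assumed linear relation).
a, b = np.polyfit(calib_v, calib_angle, 1)

def depth_from_ordinate(v, camera_height_m):
    """Depth along the ground plane to a point imaged at ordinate v (point assumed on the ground)."""
    angle = a * v + b
    return camera_height_m / math.tan(angle)

print(f"estimated depth: {depth_from_ordinate(700.0, camera_height_m=1.5):.2f} m")
```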

DEVELOPMENT OF A MACHINE VISION SYSTEM FOR AN AUTOMOBILE PLASTIC PART INSPECTION

  • ANDRES N.S.;MARIMUTHU R.P.;EOM Y.K.;JANG B.C.
    • Proceedings of the Korean Society of Precision Engineering Conference / 2005.06a / pp.1131-1135 / 2005
  • Since humans are vulnerable to emotional, physical, and environmental distractions, most human inspectors cannot sustain consistent 8-hour inspection in a day, particularly for small components such as door locking levers. As an alternative to human inspection, this study presents the development of a machine vision inspection system (MVIS) specifically for door locking levers. The development comprises the structure of the MVIS components, designed to meet the demands, features, and specifications of door locking lever manufacturing companies seeking to increase production throughput while keeping quality assured. This computer-based MVIS is designed to perform quality checks such as detecting missing portions and defects like burrs on every door locking lever. NI Vision Builder for Automated Inspection (AI) software was found to be the optimum solution for configuring the needed quality measures. The software provides measurement techniques such as edge detection and pattern matching, which are capable of gauging, detecting missing portions, and checking alignment (an illustrative sketch of such checks follows below). Furthermore, this study demonstrates the incorporation of the optimized NI Vision Builder inspection environment into the pre-inspection and post-inspection subsystems.
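
The study configures these checks in NI Vision Builder AI; the sketch below illustrates comparable edge-detection and pattern-matching checks using OpenCV as a stand-in, with file names and thresholds chosen purely for illustration:

```python
# Illustrative inspection pass/fail check; not the paper's NI Vision Builder setup.
import cv2
import numpy as np

def inspect_lever(image_path, template_path, match_threshold=0.8, min_edge_pixels=5000):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    if gray is None or template is None:
        raise FileNotFoundError("could not load part image or template")

    # Edge detection: a part with a missing portion yields far fewer edge pixels.
    edges = cv2.Canny(gray, 50, 150)
    edge_pixels = int(np.count_nonzero(edges))

    # Pattern matching: normalized cross-correlation against a golden template.
    result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, _ = cv2.minMaxLoc(result)

    passed = edge_pixels >= min_edge_pixels and best_score >= match_threshold
    return {"edge_pixels": edge_pixels, "match_score": float(best_score), "pass": passed}

# Example (paths are placeholders):
# print(inspect_lever("lever_under_test.png", "lever_golden_template.png"))
```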

Artificial Intelligence Applications on Mobile Telecommunication Systems (AI의 이동통신시스템 적용)

  • Yeh, C.I.;Chang, K.S.;Ko, Y.J.
    • Electronics and Telecommunications Trends / v.37 no.4 / pp.60-69 / 2022
  • So far, artificial intelligence (AI) and machine learning (ML) have produced impressive results in speech recognition, computer vision, and natural language processing. AI/ML has recently begun to show promise as a viable means of improving the performance of 5G mobile telecommunication systems. This paper investigates standardization activities in 3GPP and the O-RAN Alliance regarding AI/ML applications in mobile telecommunication systems. Future trends in AI/ML technologies are also summarized. As an overarching technology in 6G, there appears to be no doubt that AI/ML could contribute to every part of mobile systems, including the core, RAN, and air interface, in terms of performance enhancement, automation, cost reduction, and energy consumption reduction.

AI Processor Technology Trends (인공지능 프로세서 기술 동향)

  • Kwon, Youngsu
    • Electronics and Telecommunications Trends / v.33 no.5 / pp.121-134 / 2018
  • The Von Neumann-based architecture of the modern computer has dominated the computing industry for the past 50 years, sparking the digital revolution and propelling us into today's information age. Recent research focus and market trends have shown significant effort toward the advancement and application of artificial intelligence technologies. Although artificial intelligence has been studied for decades since the Turing machine was first introduced, the field has recently emerged into the spotlight thanks to remarkable milestones such as AlexNet-CNN and AlphaGo, whose neural-network-based deep learning methods have achieved a ground-breaking performance superior to existing recognition, classification, and decision algorithms. Unprecedented results in a wide variety of applications (drones, autonomous driving, robots, stock markets, computer vision, voice, and so on) have signaled the beginning of a golden age for artificial intelligence after 40 years of relative dormancy. Algorithmic research continues to progress at a breathtaking pace, as evidenced by the rate at which new neural networks are being announced. However, traditional Von Neumann-based architectures have proven to be inadequate in terms of computation power and inherently inefficient in their processing of vastly parallel computations, which are characteristic of deep neural networks. Consequently, global conglomerates such as Intel, Huawei, and Google, as well as large domestic corporations and fabless companies, are developing dedicated semiconductor chips customized for artificial intelligence computations. The AI Processor Research Laboratory at ETRI is focusing on the research and development of super low-power AI processor chips. In this article, we present the current trends in computation platform, parallel processing, AI processor, and super-threaded AI processor research being conducted at ETRI.

An Improved Fast Camera Calibration Method for Mobile Terminals

  • Guan, Fang-li;Xu, Ai-jun;Jiang, Guang-yu
    • Journal of Information Processing Systems / v.15 no.5 / pp.1082-1095 / 2019
  • Camera calibration is an important part of machine vision and close-range photogrammetry. Since current calibration methods fail to obtain ideal internal and external camera parameters efficiently with the limited computing resources of mobile terminals, this paper proposes an improved fast camera calibration method for mobile terminals. Building on the traditional camera calibration method, the new method introduces second-order radial distortion and tangential distortion models to establish a camera model with nonlinear distortion terms. The nonlinear least-squares Levenberg-Marquardt (L-M) algorithm is used to optimize the parameter iteration, so the new method can quickly obtain highly precise internal and external camera parameters (a generic sketch of this kind of calibration follows below). Experimental results show that the new method improves the efficiency and precision of camera calibration. A terminal simulation experiment on a PC indicates that the time consumed by parameter iteration was reduced from 0.220 seconds to 0.063 seconds (0.234 seconds on mobile terminals) and the average reprojection error was reduced from 0.25 pixel to 0.15 pixel. Therefore, the new method is well suited to camera calibration on mobile terminals and can expand the application range of 3D reconstruction and close-range photogrammetry technology on mobile devices.
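
A generic calibration sketch in the same spirit as the method above (pinhole model with radial and tangential distortion terms, refined by Levenberg-Marquardt), using OpenCV's standard chessboard calibration rather than the paper's own implementation; the board dimensions and image paths are assumptions:

```python
# Standard chessboard calibration with OpenCV; not the paper's method.
import glob
import cv2
import numpy as np

BOARD = (9, 6)          # inner corners per chessboard row/column (assumed)
SQUARE_SIZE_MM = 25.0   # physical square size (assumed)

# 3D object points of the board corners in the board's own coordinate frame.
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE_SIZE_MM

object_points, image_points, image_size = [], [], None
for path in glob.glob("calib_images/*.jpg"):      # placeholder path
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if not found:
        continue
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1),
                               (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    object_points.append(objp)
    image_points.append(corners)
    image_size = gray.shape[::-1]

# calibrateCamera estimates the intrinsic matrix and distortion coefficients
# (k1, k2, p1, p2, k3), refining them by Levenberg-Marquardt, and reports the
# RMS reprojection error in pixels.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(object_points, image_points, image_size, None, None)
print("RMS reprojection error (pixels):", rms)
print("intrinsics:\n", K)
print("distortion (k1, k2, p1, p2, k3):", dist.ravel())
```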

Recent Trends and Prospects of 3D Content Using Artificial Intelligence Technology (인공지능을 이용한 3D 콘텐츠 기술 동향 및 향후 전망)

  • Lee, S.W.;Hwang, B.W.;Lim, S.J.;Yoon, S.U.;Kim, T.J.;Kim, K.N.;Kim, D.H;Park, C.J.
    • Electronics and Telecommunications Trends / v.34 no.4 / pp.15-22 / 2019
  • Recent technological advances in three-dimensional (3D) sensing devices and machine learning, such as deep learning, have enabled data-driven 3D applications. Research on artificial intelligence has advanced over the past few years, and 3D deep learning has been introduced. This is the result of the availability of high-quality big data, increases in computing power, and the development of new algorithms; before the introduction of 3D deep learning, the main targets of deep learning were one-dimensional (1D) audio files and two-dimensional (2D) images. The research field of deep learning has extended from discriminative models, such as classification, segmentation, and reconstruction models, to generative models, such as style transfer and the generation of non-existing data. Unlike 2D learning data, 3D learning data are not easy to acquire. Although low-cost 3D data acquisition sensors have become increasingly popular owing to advances in 3D vision technology, the generation and acquisition of 3D data remain very difficult. Even when 3D data can be acquired, post-processing remains a significant problem. Moreover, existing network models such as convolutional networks are not easy to apply directly, owing to the various ways in which 3D data can be represented. In this paper, we summarize technological trends in AI-based 3D content generation.