• Title/Abstract/Keywords: AI machine vision

Search results: 19 items (processing time: 0.027 seconds)

A technique for predicting the cutting points of fish for the target weight using AI machine vision

  • Jang, Yong-hun;Lee, Myung-sub
    • 한국컴퓨터정보학회논문지
    • /
    • Vol. 27 No. 4
    • /
    • pp.27-36
    • /
    • 2022
  • To address these problems at fish-processing sites, this paper proposes a technique that uses AI machine vision to predict the cutting points of fish for a target weight. The proposed method first captures top-view and front-view images of the input fish and performs image-based preprocessing. RANSAC (RANdom SAmple Consensus) is then used to extract the fish's contour, and 3D modeling is used to extract the fish's external 3D information. Next, the extracted 3D feature information and the measured weight are used to train a neural network model that predicts the cutting point for a target weight. Finally, fish were cut directly at the cutting points predicted by the proposed technique and the resulting weights were measured. The measured weights were compared with the target weights, and performance was evaluated using metrics such as MAE (Mean Absolute Error) and MRE (Mean Relative Error). As a result, an average error rate within 3% of the target weight was achieved. The proposed technique is expected to contribute greatly to the development of the fishing industry when linked with automation systems in the future.
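
The abstract's evaluation step, comparing measured cut weights against target weights via MAE and MRE, can be sketched as follows; the weight values below are illustrative assumptions, not data from the paper:

```python
def mean_absolute_error(targets, measured):
    """MAE: average absolute difference between target and measured weights."""
    return sum(abs(t - m) for t, m in zip(targets, measured)) / len(targets)

def mean_relative_error(targets, measured):
    """MRE: average absolute error expressed as a fraction of the target weight."""
    return sum(abs(t - m) / t for t, m in zip(targets, measured)) / len(targets)

# Hypothetical target vs. measured cut weights in grams (not from the paper).
targets  = [500.0, 500.0, 750.0, 750.0]
measured = [490.0, 512.0, 742.0, 765.0]

mae = mean_absolute_error(targets, measured)  # grams
mre = mean_relative_error(targets, measured)  # fraction; 0.03 == the 3% bound
```

An MRE below 0.03 here corresponds to the "average error rate within 3%" criterion reported in the abstract.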

Future Trends of IoT, 5G Mobile Networks, and AI: Challenges, Opportunities, and Solutions

  • Park, Ji Su;Park, Jong Hyuk
    • Journal of Information Processing Systems
    • /
    • Vol. 16 No. 4
    • /
    • pp.743-749
    • /
    • 2020
  • The Internet of Things (IoT) is a growing technology, along with artificial intelligence (AI). Recently, increasing numbers of knowledge services built on information collected from sensor data have been reported. Communication is required to connect IoT and AI, and 5G mobile networks have recently become widespread. IoT, AI services, and 5G mobile networks can be configured and used as a sensor-to-mobile-edge-to-server chain. The sensor does not send data directly to the server; instead, it sends data to the mobile edge for quick processing. The mobile edge then either processes the data immediately using AI technology or forwards it to the server for processing. 5G mobile network technology is used for this data transmission. Therefore, this study examines the challenges, opportunities, and solutions for each type of technology. To this end, it addresses clustering, Hyperledger Fabric, data, security, machine vision, convolutional neural networks, IoT technology, and resource management of 5G mobile networks.

인공지능 기반 멀티태스크를 위한 비디오 코덱의 성능평가 방법 (Evaluation of Video Codec AI-based Multiple tasks)

  • 김신;이예지;윤경로;추현곤;임한신;서정일
    • 방송공학회논문지
    • /
    • Vol. 27 No. 3
    • /
    • pp.273-282
    • /
    • 2022
  • The VCM group within MPEG aims to standardize a video codec for machines. The VCM group provides datasets covering three machine-vision tasks (object detection, object segmentation, and object tracking) together with per-dataset reference data (Anchors), and an evaluation template can be used to compare the machine-vision performance of candidate technologies against the Anchors relative to compression. However, performance comparisons are carried out separately for each machine-vision task, and no data is provided for generating a bitstream whose performance can be evaluated across multiple machine-vision tasks at once. This paper proposes a method for evaluating the performance of video codecs for AI-based multi-task scenarios. Based on bits per pixel (BPP), a size measure for a single bitstream, and mean average precision (mAP), the accuracy result for each task, three multi-task performance metrics are proposed: the arithmetic mean, the weighted mean, and the harmonic mean, and performance results are compared based on the mAP values. Because the ranges of per-task mAP values can differ in a multi-task setting, normalized mAP-based multi-task performance results are computed and evaluated to avoid the evaluation problems such differences can cause.
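
The three aggregates proposed over normalized per-task mAP scores can be sketched as follows; the mAP values, score ranges, and weights are illustrative assumptions, not VCM results:

```python
def arithmetic_mean(xs):
    """Plain average of the normalized per-task mAP scores."""
    return sum(xs) / len(xs)

def weighted_mean(xs, ws):
    """Weighted average; ws lets some machine-vision tasks count more."""
    return sum(x * w for x, w in zip(xs, ws)) / sum(ws)

def harmonic_mean(xs):
    """Harmonic mean; penalizes a codec that is weak on any single task."""
    return len(xs) / sum(1.0 / x for x in xs)

def normalize(v, lo, hi):
    """Min-max normalize a task's mAP so tasks whose raw mAP ranges
    differ do not dominate the aggregate."""
    return (v - lo) / (hi - lo)

# Illustrative per-task mAP for detection, segmentation, and tracking,
# each normalized by that task's assumed score range.
scores = [normalize(v, 0.0, 1.0) for v in (0.62, 0.48, 0.55)]

a = arithmetic_mean(scores)
w = weighted_mean(scores, [2, 1, 1])  # e.g. weight detection twice as much
h = harmonic_mean(scores)
```

Each aggregate would then be reported against the bitstream's BPP, mirroring the single-task mAP-vs-BPP comparison against the Anchors.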

Passive Ranging Based on Planar Homography in a Monocular Vision System

  • Wu, Xin-mei;Guan, Fang-li;Xu, Ai-jun
    • Journal of Information Processing Systems
    • /
    • Vol. 16 No. 1
    • /
    • pp.155-170
    • /
    • 2020
  • Passive ranging is a critical part of machine vision measurement. Most passive ranging methods based on machine vision use binocular technology, which requires strict hardware conditions and lacks universality. To measure the distance of an object placed on a horizontal plane, we present a passive ranging method based on a monocular vision system using a smartphone. Experimental results show that, for the same abscissa, the ordinates of the image points are linearly related to their actual imaging angles. Based on this principle, we first establish a depth extraction model by assuming a linear function and substituting the actual imaging angles and ordinates of special conjugate points into it. The vertical distance of the target object to the optical axis is then calculated according to the imaging principle of the camera, and the range can be derived from the depth and the vertical distance of the target object to the optical axis. Experimental results show that ranging by this method achieves higher accuracy than methods based on binocular vision systems. The mean relative error of the depth measurement is 0.937% when the distance is within 3 m; when the distance is 3-10 m, the mean relative error is 1.71%. Compared with other methods based on monocular vision systems, this method does not require calibration before ranging and avoids the error caused by data fitting.
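
The depth-extraction idea, fitting a linear function from image ordinates to imaging angles and converting the angle to depth, can be sketched as follows. The conjugate points, camera height, and the angle-to-depth step (a camera at known height viewing a horizontal ground plane) are illustrative assumptions, not the paper's exact formulation:

```python
import math

def fit_linear(ys, angles):
    """Least-squares fit of angle = a * y + b from calibration (conjugate)
    points, using the observation that, at a fixed abscissa, an image
    point's ordinate is linearly related to its actual imaging angle."""
    n = len(ys)
    my, ma = sum(ys) / n, sum(angles) / n
    a = (sum((y - my) * (t - ma) for y, t in zip(ys, angles))
         / sum((y - my) ** 2 for y in ys))
    return a, ma - a * my

def depth_from_ordinate(y, a, b, camera_height):
    """Depth of a ground-plane point: the imaging angle below the horizontal
    gives depth = h / tan(angle), with the camera height h assumed known."""
    return camera_height / math.tan(a * y + b)

# Synthetic conjugate points where angle = 0.001 * y + 0.2 (illustrative).
ys = [100.0, 200.0, 300.0]
angles = [0.001 * y + 0.2 for y in ys]
a, b = fit_linear(ys, angles)
depth = depth_from_ordinate(250.0, a, b, camera_height=1.5)  # meters
```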

DEVELOPMENT OF A MACHINE VISION SYSTEM FOR AN AUTOMOBILE PLASTIC PART INSPECTION

  • ANDRES N.S.;MARIMUTHU R.P.;EOM Y.K.;JANG B.C.
    • 한국정밀공학회:학술대회논문집
    • /
    • 한국정밀공학회 2005년도 춘계학술대회 논문집
    • /
    • pp.1131-1135
    • /
    • 2005
  • Since humans are vulnerable to emotional, physical, and environmental distractions, most human inspectors cannot sustain a consistent 8-hour inspection day, especially for small components such as door locking levers. As an alternative to human inspection, this study presents the development of a machine vision inspection system (MVIS) specifically for door locking levers. The development comprises the structure of the MVIS components, designed to meet the demands, features, and specifications of door locking lever manufacturers in increasing production throughput while keeping quality assured. This computer-based MVIS is designed to detect missing portions and defects such as burrs on every door locking lever. NI Vision Builder for Automated Inspection (AI) was found to be the optimal solution for configuring the needed quality measures. The software provides measurement techniques such as edge detection and pattern matching, which are capable of gauging, detecting missing portions, and checking alignment. Furthermore, this study exemplifies the incorporation of the optimized NI Vision Builder inspection environment into the pre-inspection and post-inspection subsystems.
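
A missing-portion check of the kind described, inspecting a region of the part image, can be sketched in plain Python. The ROI, threshold, and toy image below are illustrative assumptions; the actual system uses NI Vision Builder's built-in tools rather than custom code:

```python
def region_missing(image, roi, threshold, min_fill):
    """Flag a missing portion: inside the region of interest (x0, y0, x1, y1),
    the fraction of dark 'part' pixels (intensity below threshold) must reach
    min_fill; otherwise the portion is reported as missing."""
    x0, y0, x1, y1 = roi
    pixels = [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    fill = sum(1 for p in pixels if p < threshold) / len(pixels)
    return fill < min_fill  # True => missing portion detected

# Toy 4x4 grayscale image: a dark part (0) on a bright background (255),
# with the part absent from the lower-left region.
img = [
    [0,   0,   255, 255],
    [0,   0,   255, 255],
    [255, 255, 255, 255],
    [255, 255, 255, 255],
]
present = region_missing(img, (0, 0, 2, 2), 128, 0.9)  # part region intact
missing = region_missing(img, (0, 2, 2, 4), 128, 0.9)  # expected part absent
```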


AI의 이동통신시스템 적용 (Artificial Intelligence Applications on Mobile Telecommunication Systems)

  • 예충일;장갑석;고영조
    • 전자통신동향분석
    • /
    • Vol. 37 No. 4
    • /
    • pp.60-69
    • /
    • 2022
  • So far, artificial intelligence (AI) and machine learning (ML) have produced impressive results in speech recognition, computer vision, and natural language processing. AI/ML has recently begun to show promise as a viable means of improving the performance of 5G mobile telecommunication systems. This paper investigates standardization activities in 3GPP and the O-RAN Alliance regarding AI/ML applications on mobile telecommunication systems. Future trends in AI/ML technologies are also summarized. As an overarching technology in 6G, there appears to be no doubt that AI/ML could contribute to every part of mobile systems, including the core, RAN, and air interface, in terms of performance enhancement, automation, cost reduction, and energy consumption reduction.

인공지능 프로세서 기술 동향 (AI Processor Technology Trends)

  • 권영수
    • 전자통신동향분석
    • /
    • Vol. 33 No. 5
    • /
    • pp.121-134
    • /
    • 2018
  • The Von Neumann based architecture of the modern computer has dominated the computing industry for the past 50 years, sparking the digital revolution and propelling us into today's information age. Recent research focus and market trends have shown significant effort toward the advancement and application of artificial intelligence technologies. Although artificial intelligence has been studied for decades since the Turing machine was first introduced, the field has recently emerged into the spotlight thanks to remarkable milestones such as AlexNet-CNN and Alpha-Go, whose neural-network based deep learning methods have achieved a ground-breaking performance superior to existing recognition, classification, and decision algorithms. Unprecedented results in a wide variety of applications (drones, autonomous driving, robots, stock markets, computer vision, voice, and so on) have signaled the beginning of a golden age for artificial intelligence after 40 years of relative dormancy. Algorithmic research continues to progress at a breath-taking pace as evidenced by the rate of new neural networks being announced. However, traditional Von Neumann based architectures have proven to be inadequate in terms of computation power, and inherently inefficient in their processing of vastly parallel computations, which is a characteristic of deep neural networks. Consequently, global conglomerates such as Intel, Huawei, and Google, as well as large domestic corporations and fabless companies are developing dedicated semiconductor chips customized for artificial intelligence computations. The AI Processor Research Laboratory at ETRI is focusing on the research and development of super low-power AI processor chips. In this article, we present the current trends in computation platform, parallel processing, AI processor, and super-threaded AI processor research being conducted at ETRI.

An Improved Fast Camera Calibration Method for Mobile Terminals

  • Guan, Fang-li;Xu, Ai-jun;Jiang, Guang-yu
    • Journal of Information Processing Systems
    • /
    • Vol. 15 No. 5
    • /
    • pp.1082-1095
    • /
    • 2019
  • Camera calibration is an important part of machine vision and close-range photogrammetry. Since current calibration methods fail to efficiently obtain ideal internal and external camera parameters with the limited computing resources of mobile terminals, this paper proposes an improved fast camera calibration method for mobile terminals. Building on the traditional camera calibration method, the new method introduces two-order radial and tangential distortion models to establish a camera model with nonlinear distortion terms. Meanwhile, the nonlinear least-squares Levenberg-Marquardt (L-M) algorithm is used to optimize the parameter iteration, so the new method can quickly obtain highly precise internal and external camera parameters. The experimental results show that the new method improves the efficiency and precision of camera calibration. A terminal simulation experiment on a PC indicates that the time consumed by parameter iteration was reduced from 0.220 seconds to 0.063 seconds (0.234 seconds on mobile terminals) and the average reprojection error was reduced from 0.25 pixel to 0.15 pixel. Therefore, the new method is an ideal camera calibration method for mobile terminals that can expand the application range of 3D reconstruction and close-range photogrammetry technology on mobile terminals.
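
The two-order radial plus tangential distortion terms mentioned correspond to the standard Brown-Conrady camera model; a minimal sketch follows, with illustrative coefficient values (the paper's exact parameterization may differ):

```python
def distort(x, y, k1, k2, p1, p2):
    """Apply two-order radial (k1, k2) and tangential (p1, p2) distortion
    to a point (x, y) in normalized image coordinates."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

# With all coefficients zero the model is the identity; a small positive k1
# pushes points radially outward (pincushion-like behavior).
ident = distort(0.5, 0.0, 0.0, 0.0, 0.0, 0.0)
moved = distort(0.5, 0.0, 0.1, 0.0, 0.0, 0.0)
```

Calibration then amounts to finding the coefficients (plus the intrinsic and extrinsic parameters) that minimize the reprojection error, here via L-M iteration.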

인공지능을 이용한 3D 콘텐츠 기술 동향 및 향후 전망 (Recent Trends and Prospects of 3D Content Using Artificial Intelligence Technology)

  • 이승욱;황본우;임성재;윤승욱;김태준;김기남;김대희;박창준
    • 전자통신동향분석
    • /
    • Vol. 34 No. 4
    • /
    • pp.15-22
    • /
    • 2019
  • Recent technological advances in three-dimensional (3D) sensing devices and in machine learning techniques such as deep learning have enabled data-driven 3D applications. Research on artificial intelligence has advanced over the past few years, and 3D deep learning has been introduced. This is the result of the availability of high-quality big data, increases in computing power, and the development of new algorithms; before the introduction of 3D deep learning, the main targets for deep learning were one-dimensional (1D) audio files and two-dimensional (2D) images. The research field of deep learning has extended from discriminative models, such as classification, segmentation, and reconstruction models, to generative models, such as those for style transfer and the generation of non-existent data. Unlike 2D learning data, 3D learning data is not easy to acquire. Although low-cost 3D data acquisition sensors have become increasingly popular owing to advances in 3D vision technology, the generation and acquisition of 3D data remain very difficult. Even when 3D data can be acquired, post-processing remains a significant problem. Moreover, it is not easy to directly apply existing network models such as convolutional networks owing to the various ways in which 3D data can be represented. In this paper, we summarize technological trends in AI-based 3D content generation.