• Title/Summary/Keyword: 지능기계

Search Result 1,059, Processing Time 0.03 seconds

Development of Robot Arm Placing technology based on Artificial Intelligence using image data (영상을 적용한 인공지능을 이용한 Robot Arm Placing 기술 개발)

  • Baek, Young-Jin;Kim, Wonha
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2020.07a
    • /
    • pp.652-655
    • /
    • 2020
  • Research and development on smart factories that use deep learning to replace human labor with machines has recently been very active. However, progress on introducing machines into FPCB placing has been slow: placing with a robot arm currently relies on a person tuning the arm by hand. This paper therefore develops a technique that inserts FPCBs into trays without human intervention, using deep-learning-based image processing. Several algorithms were compared, their strengths and weaknesses weighed, and a suitable algorithm was proposed. The proposed technique applies no action to the FPCB and can automate the process using only images acquired from an RGB sensor (camera), without the help of force sensors, depth sensors, or other sensors. Moreover, because image capture and movement were carried out on the actual machine during development, the technique can be used in practice without being affected by conditions outside the algorithm, such as lighting and robot arm position.


Splash Detection Algorithm for Machine Learning-based Fluid Simulation (기계학습 기반 유체 시뮬레이션의 비말 검출 알고리즘)

  • Jae-Hyeong Kim;Su-Kyung Sung;Byeong-Seok Shin
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.05a
    • /
    • pp.427-429
    • /
    • 2023
  • With advances in artificial intelligence, machine learning is widely used in fluid simulation to reproduce complex liquid flows. The most important factor in improving the performance of such simulations is the training data. This paper proposes a method that detects splashes more efficiently than existing approaches during the training-data generation stage of machine-learning-based fluid simulation. The conventional method is slow because it runs a queue-based breadth-first search (BFS) on the CPU. The proposed method instead resolves collisions with an array-based hash table so that splashes can be detected quickly on the GPU, enabling fast generation of training data. The accuracy and execution time of the algorithm were measured to confirm its validity.
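The abstract only names the data structure, not its details; a minimal CPU sketch of an array-based spatial hash over particle cells can still illustrate the idea. The open-addressing collision scheme, the hash constants, and the splash criterion below are assumptions for illustration, not the paper's actual GPU implementation.

```python
# Sketch: array-based spatial hash for particle cells (flat arrays are the
# GPU-friendly layout the abstract alludes to). Linear probing resolves
# collisions; both it and the mixing constants are illustrative assumptions.
TABLE_SIZE = 1 << 12  # power of two so we can mask instead of mod
EMPTY = None

def cell_hash(cx, cy, cz):
    # Widely used spatial-hash mixing constants (assumed here).
    return ((cx * 73856093) ^ (cy * 19349663) ^ (cz * 83492791)) & (TABLE_SIZE - 1)

class SpatialHash:
    def __init__(self):
        self.keys = [EMPTY] * TABLE_SIZE   # flat array of cell coordinates
        self.vals = [0] * TABLE_SIZE       # particle count per cell

    def insert(self, cell):
        i = cell_hash(*cell)
        while self.keys[i] is not EMPTY and self.keys[i] != cell:
            i = (i + 1) & (TABLE_SIZE - 1)  # linear probing on collision
        self.keys[i] = cell
        self.vals[i] += 1

    def count(self, cell):
        i = cell_hash(*cell)
        while self.keys[i] is not EMPTY:
            if self.keys[i] == cell:
                return self.vals[i]
            i = (i + 1) & (TABLE_SIZE - 1)
        return 0

# A sparsely populated cell is a splash candidate (threshold is assumed).
def is_splash(h, cell, max_count=2):
    return 0 < h.count(cell) <= max_count
```

Unlike a BFS over a queue, every lookup here is an independent constant-time probe, which is what makes the structure amenable to one-thread-per-cell GPU execution.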

Development and Verification of Smart Greenhouse Internal Temperature Prediction Model Using Machine Learning Algorithm (기계학습 알고리즘을 이용한 스마트 온실 내부온도 예측 모델 개발 및 검증)

  • Oh, Kwang Cheol;Kim, Seok Jun;Park, Sun Yong;Lee, Chung Geon;Cho, La Hoon;Jeon, Young Kwang;Kim, Dae Hyun
    • Journal of Bio-Environment Control
    • /
    • v.31 no.3
    • /
    • pp.152-162
    • /
    • 2022
  • This study developed a simulation model that predicts the greenhouse interior environment using machine learning. Various methods have been studied for predicting the internal environment of greenhouse systems, but traditional simulation analysis suffers from low precision due to extraneous variables. To solve this problem, we developed a machine learning model that predicts the temperature inside the greenhouse. Machine learning models are developed through data collection, characteristic analysis, and training, and their accuracy varies greatly with the parameters and learning methods used, so a way to derive the optimal model for the characteristics of the data is required. In our experiments, model accuracy increased as the number of hidden units grew; the optimal model was a GRU with 6 hidden units (r2 = 0.9848, RMSE = 0.5857℃). This study confirmed that a predictive model for the interior temperature can be built from data collected outside the greenhouse, and that application to and comparative analysis of various greenhouse datasets are needed. Further research should advance the developed model to the forecasting stage to build an environmental control system.
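To make the paper's "hidden unit" parameter concrete, here is a single GRU cell stepped over a sequence in plain NumPy. The 6 hidden units match the paper's optimal model, but the input features, weights, and sequence length are placeholders, not the paper's data or trained model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell; n_hidden is the 'hidden unit' count the paper tunes."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        # One weight matrix each for update gate z, reset gate r, and the
        # candidate state, acting on [h, x] concatenated (weights are random
        # placeholders here, not trained values).
        self.Wz = rng.standard_normal((n_hidden, n_hidden + n_in)) * 0.1
        self.Wr = rng.standard_normal((n_hidden, n_hidden + n_in)) * 0.1
        self.Wh = rng.standard_normal((n_hidden, n_hidden + n_in)) * 0.1

    def step(self, h, x):
        hx = np.concatenate([h, x])
        z = sigmoid(self.Wz @ hx)             # update gate
        r = sigmoid(self.Wr @ hx)             # reset gate
        h_tilde = np.tanh(self.Wh @ np.concatenate([r * h, x]))
        return (1 - z) * h + z * h_tilde      # blend old and candidate state

# e.g. 3 exterior-weather features -> 6 hidden units, as in the paper's
# optimal configuration; 24 hourly inputs are synthetic.
cell = GRUCell(n_in=3, n_hidden=6)
h = np.zeros(6)
for x in np.random.default_rng(1).standard_normal((24, 3)):
    h = cell.step(h, x)
```

The gating is what lets the state carry slow thermal dynamics across time steps; increasing `n_hidden` enlarges the state and, as the paper reports, tends to raise accuracy up to a point.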

Effectiveness of Normalization Pre-Processing of Big Data to the Machine Learning Performance (빅데이터의 정규화 전처리과정이 기계학습의 성능에 미치는 영향)

  • Jo, Jun-Mo
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.14 no.3
    • /
    • pp.547-552
    • /
    • 2019
  • Recently, the massive growth in the scale of data has become a major issue in Big Data. Because Big Data also serves as the input to machine learning, it should be preprocessed with normalization to obtain high machine learning performance. That performance varies with many factors, such as the scope of the columns in the Big Data and the normalization preprocessing method. In this paper, various normalization preprocessing methods and column scopes are applied to an SVM (Support Vector Machine) to find an efficient setting for normalization preprocessing. The machine learning experiments were programmed in Python with the Jupyter Notebook.
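Two common column-wise normalization schemes of the kind such a study would compare can be sketched in a few lines. The synthetic data and the choice of exactly these two transforms (min-max scaling and z-score standardization) are assumptions; the paper's specific methods and column scopes are not given in the abstract.

```python
import numpy as np

def min_max(X):
    """Scale each column into [0, 1]."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / (hi - lo)

def z_score(X):
    """Shift/scale each column to zero mean and unit variance."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

# Columns on wildly different scales, as raw big data often has.
X = np.array([[1.0, 10_000.0],
              [2.0, 20_000.0],
              [3.0, 30_000.0]])
Xm = min_max(X)
Xz = z_score(X)
```

Without such preprocessing, the large-scale column dominates the distances inside an SVM kernel; after either transform both columns contribute comparably, which is why normalization choice measurably changes SVM performance.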

Comparison of Classification and Convolution algorithm in Condition assessment of the Failure Modes in Rotational equipments with varying speed (회전수가 변하는 기기의 상태 진단에 있어서 특성 기반 분류 알고리즘과 합성곱 기반 알고리즘의 예측 정확도 비교)

  • Ki-Yeong Moon;Se-Yun Hwang;Jang-Hyun Lee
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2022.06a
    • /
    • pp.301-301
    • /
    • 2022
  • This study addresses the application of artificial intelligence algorithms to determine whether equipment whose rotational speed changes with operating conditions is running normally and to classify its failure modes. Because the signals from condition-monitoring sensors on variable-speed equipment are non-stationary, a fixed threshold on the condition signal is a poor criterion for fault determination, and this study set out to resolve that problem. For detecting normal operation, an autoencoder and machine learning algorithms effective for anomaly detection were applied; for classifying failure modes, machine learning methods and convolution-based deep learning were applied. A worked example shows how feature vectors can be constructed in the time and frequency domains so that the non-stationary time series of frequencies tied to the varying rotational speed are represented by appropriate fault features. By extracting optimal feature vectors with dimensionality reduction and the chi-square technique, we showed that machine learning classification algorithms can be used to predict faults in equipment with non-stationary rotational signals; k-NN (k-Nearest Neighbor), SVM (Support Vector Machine), and Random Forest were applied. In addition, the results of anomaly detection and fault diagnosis with a time-series autoencoder and a CNN (Convolutional Neural Network) are compared and presented.
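The chi-square feature-selection step the abstract mentions can be sketched as a scoring function over a non-negative feature matrix and class labels. The features, labels, and data below are synthetic placeholders; only the scoring idea (observed vs. expected per-class feature sums) follows the named technique.

```python
import numpy as np

def chi2_scores(X, y):
    """Chi-square score per feature: how far the per-class feature sums
    deviate from what class frequencies alone would predict."""
    classes = np.unique(y)
    observed = np.array([X[y == c].sum(axis=0) for c in classes])
    class_freq = np.array([(y == c).mean() for c in classes])
    expected = np.outer(class_freq, X.sum(axis=0))
    return ((observed - expected) ** 2 / expected).sum(axis=0)

# Synthetic condition data: one feature tracks the fault label, one is noise.
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)                                   # normal vs. faulty
informative = np.where(y == 1, 5.0, 1.0) + rng.random(100)  # label-dependent
noise = rng.random(100) * 3.0                               # label-independent
X = np.column_stack([informative, noise])

scores = chi2_scores(X, y)
best = int(np.argmax(scores))   # index of the most discriminative feature
```

The highest-scoring features would then be kept as the fault feature vector and fed to a classifier such as k-NN, SVM, or Random Forest, as in the abstract.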


User Adaptation Using User Model in Intelligent Image Retrieval System (지능형 화상 검색 시스템에서의 사용자 모델을 이용한 사용자 적응)

  • Kim, Yong-Hwan;Rhee, Phill-Kyu
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.12
    • /
    • pp.3559-3568
    • /
    • 1999
  • Information overload across many information resources is an inevitable problem of modern electronic life. It is increasingly difficult to find information matching a user's needs in the uncontrolled flood of digital resources such as the rapidly growing internet, so many information retrieval systems have been researched and deployed. Text retrieval systems have met users' information needs, but image retrieval systems have not dealt with them properly. To resolve this problem, this paper proposes an intelligent user interface for image retrieval. It is based on the HCOS (Human-Computer Symmetry) model, a layered interaction model between human and computer, whose methodology reduces the user's information overhead and the semantic gap between user and system. It is implemented with two machine learning algorithms, a decision tree and a backpropagation neural network, to give the intelligent image retrieval system (IIRS) user adaptation capabilities.


Intelligent Navigation Safety Information System using Blackboard (블랙보드를 이용한 지능형 항행 안전 정보 시스템)

  • Kim, Do-Yeon;Yi, Mi-Ra
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.21 no.3
    • /
    • pp.307-316
    • /
    • 2011
  • The majority of maritime accidents are caused by human factors. For that reason, navigation experts want an intelligent support system for navigation safety that does not depend on officer involvement. An expert system, one artificial intelligence technique for navigation support, is an important tool by which a machine can substitute for an expert, through the design of a knowledge base and inference engine built on an expert's experience and knowledge. In the real world, however, a complex situation requires synthetic estimation with input from experts in various fields, not any single expert, and because of its diverse potential threats, such synthetic estimation matters even more for navigation than for other domains. This paper presents a method for fusing navigation safety knowledge from various expert systems using a blackboard system, and demonstrates the method's validity through the design and implementation of a test system.
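The blackboard architecture described above can be sketched as a shared store that several expert knowledge sources post partial assessments to, with a controller fusing them. The expert names, thresholds, and majority-vote fusion rule are illustrative assumptions, not the paper's actual design.

```python
class Blackboard:
    """Shared store that knowledge sources post their assessments to."""
    def __init__(self):
        self.assessments = {}          # expert name -> risk verdict

    def post(self, expert, verdict):
        self.assessments[expert] = verdict

# Three hypothetical expert systems, each judging one facet of the situation.
def collision_expert(bb, data):
    bb.post("collision", "danger" if data["cpa_nm"] < 0.5 else "safe")

def weather_expert(bb, data):
    bb.post("weather", "danger" if data["wave_m"] > 4.0 else "safe")

def traffic_expert(bb, data):
    bb.post("traffic", "danger" if data["vessels_nearby"] > 10 else "safe")

def fuse(bb):
    """Controller: fuse individual verdicts (majority vote, assumed)."""
    votes = list(bb.assessments.values())
    return "danger" if votes.count("danger") > len(votes) / 2 else "safe"

bb = Blackboard()
situation = {"cpa_nm": 0.3, "wave_m": 5.2, "vessels_nearby": 4}
for expert in (collision_expert, weather_expert, traffic_expert):
    expert(bb, situation)
verdict = fuse(bb)   # two of three experts flag danger
```

The point of the pattern is that each expert system stays independent and only the blackboard and controller need to know how verdicts are combined, which is what makes knowledge fusion across heterogeneous expert systems tractable.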

The Role and Collaboration Model of Human and Artificial Intelligence Considering Human Factor in Financial Security (금융 보안에서 휴먼팩터를 고려한 인간과 인공지능의 역할 및 협업 모델)

  • Lee, Bo-Ra;Kim, In-Seok
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.28 no.6
    • /
    • pp.1563-1583
    • /
    • 2018
  • With the deregulation of electronic finance, FinTech has been revitalized, and discussion of artificial intelligence is active in the financial industry. Behind the new technologies, however, lies the problem of growing security threats: vulnerabilities have increased because we are more connected than before and the channels and entities of the financial industry have diversified. Although there are technical and policy discussions on security, the essence of every discussion is the human. The fundamentals of finance are trust and security, so attention to human factors is important. This study presents the respective roles of humans and artificial intelligence in financial security, and derives a collaborative model in which humans and artificial intelligence complement each other's limitations. To support this, it first discusses the development of finance and IT, AI, human factors, and financial security threats. The study suggests that although security threats will intensify in the era of new technology, they can be overcome with machines and technology.

A Model Design for Enhancing the Efficiency of Smart Factory for Small and Medium-Sized Businesses Based on Artificial Intelligence (인공지능 기반의 중소기업 스마트팩토리 효율성 강화 모델 설계)

  • Jeong, Yoon-Su
    • Journal of Convergence for Information Technology
    • /
    • v.9 no.3
    • /
    • pp.16-21
    • /
    • 2019
  • Small and medium-sized Korean companies are currently changing their industrial structure faster than in the past due to various environmental factors, such as securing competitiveness and developing excellent products. In particular, as diverse devices related to artificial intelligence are deployed at manufacturing sites, collecting and utilizing the data produced in smart factory environments is becoming more important. This paper proposes an artificial-intelligence-based smart factory model that improves the processing of products produced at the manufacturing site. The proposed model aims to secure competitiveness in an increasingly demanding manufacturing environment and to minimize production costs. It manages not only information on the products produced at the smart factory site, but also the labor consumed in production, working hours, and the operation of plant machinery. In addition, the data produced by the model can be linked and shared with similar companies, enabling strategic cooperation between enterprises in manufacturing site operations.

Calculating Data and Artificial Neural Network Capability (데이터와 인공신경망 능력 계산)

  • Yi, Dokkyun;Park, Jieun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.1
    • /
    • pp.49-57
    • /
    • 2022
  • Recently, the deep neural network structures of machine learning have made various uses of artificial intelligence possible, demonstrating human-like capabilities. Unfortunately, the deep structure of artificial neural networks has not yet been accurately interpreted, which fuels anxiety about and rejection of artificial intelligence. Among these problems, we address the capability of artificial neural networks: we calculate the size of a neural network structure and the size of the data the network can process. The calculation applies the group method from mathematics, using the order, from which the structure and size of a group can be known, to compute the sizes of the data and the network. This makes the capability of an artificial neural network knowable and helps relieve anxiety about artificial intelligence. The sizes of the data and the deep neural network are calculated and verified through numerical experiments.