• Title/Summary/Keyword: Vision Technology

Recent Trends and Prospects of 3D Content Using Artificial Intelligence Technology (인공지능을 이용한 3D 콘텐츠 기술 동향 및 향후 전망)

  • Lee, S.W.;Hwang, B.W.;Lim, S.J.;Yoon, S.U.;Kim, T.J.;Kim, K.N.;Kim, D.H;Park, C.J.
    • Electronics and Telecommunications Trends
    • /
    • v.34 no.4
    • /
    • pp.15-22
    • /
    • 2019
  • Recent technological advances in three-dimensional (3D) sensing devices and in machine learning, notably deep learning, have enabled data-driven 3D applications. Research on artificial intelligence has advanced rapidly over the past few years, and 3D deep learning has been introduced. This is the result of the availability of high-quality big data, increases in computing power, and the development of new algorithms; before the introduction of 3D deep learning, the main targets for deep learning were one-dimensional (1D) audio files and two-dimensional (2D) images. The field of deep learning has extended from discriminative models, such as classification, segmentation, and reconstruction models, to generative models, such as style transfer and the generation of non-existent data. Unlike 2D learning data, 3D learning data are not easy to acquire. Although low-cost 3D data acquisition sensors have become increasingly popular owing to advances in 3D vision technology, the generation and acquisition of 3D data remain very difficult. Even when 3D data can be acquired, post-processing remains a significant problem. Moreover, it is not easy to directly apply existing network models such as convolutional networks owing to the various ways in which 3D data are represented. In this paper, we summarize technological trends in AI-based 3D content generation.
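
The abstract notes that 3D data can be represented in many ways (point clouds, voxel grids, meshes), which complicates the direct use of standard convolutional networks. As a minimal illustration of one common workaround, the sketch below voxelizes a point cloud into an occupancy grid that a 3D convolutional network could consume; this is an illustrative assumption, not code from the survey.

```python
import numpy as np

def voxelize(points, grid_size=32):
    """Convert an (N, 3) point cloud into a binary occupancy grid.

    A common preprocessing step that lets ordinary 3D convolutions be
    applied to irregular point-cloud data.
    """
    points = np.asarray(points, dtype=np.float64)
    # Normalize the cloud into the unit cube [0, 1).
    mins = points.min(axis=0)
    span = points.max(axis=0) - mins
    span[span == 0] = 1.0                      # avoid division by zero
    normalized = (points - mins) / span
    # Map each point to a voxel index and mark that voxel as occupied.
    idx = np.minimum((normalized * grid_size).astype(int), grid_size - 1)
    grid = np.zeros((grid_size,) * 3, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

# Example: a random cloud of 1,000 points becomes a 32x32x32 occupancy grid.
cloud = np.random.rand(1000, 3)
print(voxelize(cloud).shape)  # (32, 32, 32)
```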

An Experimental Study on the Optimal Arrangement of Cameras Used for the Robot's Vision Control Scheme (로봇 비젼 제어기법에 사용된 카메라의 최적 배치에 대한 실험적 연구)

  • Min, Kwan-Ung;Jang, Wan-Shik
    • Journal of the Korean Society of Manufacturing Technology Engineers
    • /
    • v.19 no.1
    • /
    • pp.15-25
    • /
    • 2010
  • The objective of this study is to investigate the optimal arrangement of cameras used for the robot's vision control scheme. The vision control scheme involves two estimation models: a camera parameter estimation model and a robot joint angle estimation model. The robot's working region is divided into three work spaces: left, central, and right. Cameras are positioned on circular arcs with radii of 1.5 m, 2.0 m, and 2.5 m, with seven cameras placed on each arc. For the experiment, nine camera-arrangement cases, each using three cameras, are selected in every work space. Six parameters are estimated for each camera using the developed parameter estimation model to verify the suitability of the vision system model in the nine cases of each work space. Finally, the robot's joint angles are estimated with the joint angle estimation model for each camera arrangement, and the effect of the camera arrangement on the robot's point-position control is demonstrated experimentally.
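
The abstract mentions a six-parameter estimation model for each camera but does not give the model itself. Purely as a hedged sketch, the code below fits a hypothetical six-parameter affine mapping from robot coordinates to image coordinates by linear least squares; the paper's actual estimation model may differ.

```python
import numpy as np

# Hypothetical six-parameter affine camera model (illustrative only):
#   u = c1*X + c2*Y + c3,   v = c4*X + c5*Y + c6
# Given matched robot positions (X, Y) and image points (u, v),
# the parameters are recovered with linear least squares.

def estimate_camera_parameters(robot_xy, image_uv):
    X, Y = robot_xy[:, 0], robot_xy[:, 1]
    A = np.column_stack([X, Y, np.ones_like(X)])
    cu, *_ = np.linalg.lstsq(A, image_uv[:, 0], rcond=None)
    cv, *_ = np.linalg.lstsq(A, image_uv[:, 1], rcond=None)
    return np.concatenate([cu, cv])            # [c1 ... c6]

# Synthetic check: recover known parameters from noisy observations.
rng = np.random.default_rng(0)
true_params = np.array([120.0, -30.0, 400.0, 25.0, 110.0, 240.0])
robot_xy = rng.uniform(-0.5, 0.5, size=(50, 2))
A = np.column_stack([robot_xy, np.ones(50)])
image_uv = np.column_stack([A @ true_params[:3], A @ true_params[3:]])
image_uv += rng.normal(scale=0.5, size=image_uv.shape)
print(estimate_camera_parameters(robot_xy, image_uv).round(1))
```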

Calibration for Color Measurement of Lean Tissue and Fat of the Beef

  • Lee, S.H.;Hwang, H.
    • Agricultural and Biosystems Engineering
    • /
    • v.4 no.1
    • /
    • pp.16-21
    • /
    • 2003
  • In agriculture, machine vision systems have been widely used to automate inspection processes, especially quality grading. Though machine vision is very effective in quantifying geometrical quality factors, it is deficient in quantifying color information. This study was conducted to evaluate the color of beef using a machine vision system. Though measuring the color of beef with machine vision has the advantage of covering the whole lean tissue area at once, unlike a colorimeter, it is sensitive to system components such as the type of camera and the lighting conditions. The effect of the camera's color-balancing control was investigated, and a color calibration process based on a multi-layer back-propagation (BP) neural network was developed. The calibration network was trained using reference color patches and showed high correlation with the L*a*b* coordinates of a colorimeter. The proposed calibration process adapted successfully to various measurement environments, such as different types of cameras and light sources. Results comparing the proposed calibration process with MLR-based calibration are also presented. The calibration network was successfully applied to measuring the color of beef. However, the reflectance properties of the calibration reference materials and of the test materials should be considered to achieve more accurate color measurement.
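
As a minimal sketch of the kind of network-based calibration described above, the code below trains a small multi-layer perceptron to map camera RGB readings of reference patches to colorimeter L*a*b* values. The layer sizes, the synthetic patch data, and the use of scikit-learn are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Training data: camera RGB readings of reference color patches (inputs)
# and the corresponding colorimeter L*a*b* measurements (targets).
# Values here are synthetic placeholders standing in for real patch data.
rng = np.random.default_rng(1)
camera_rgb = rng.uniform(0, 255, size=(120, 3))
true_lab = np.column_stack([
    0.35 * camera_rgb.sum(axis=1) / 3,              # pseudo L*
    0.5 * (camera_rgb[:, 0] - camera_rgb[:, 1]),    # pseudo a*
    0.5 * (camera_rgb[:, 1] - camera_rgb[:, 2]),    # pseudo b*
])

# Small back-propagation network mapping RGB -> L*a*b*.
calibration_net = MLPRegressor(hidden_layer_sizes=(16, 16),
                               max_iter=5000, random_state=0)
calibration_net.fit(camera_rgb / 255.0, true_lab)

# Calibrate a new camera reading (e.g., averaged over a lean-tissue region).
sample_rgb = np.array([[180.0, 60.0, 70.0]]) / 255.0
print(calibration_net.predict(sample_rgb))          # estimated L*, a*, b*
```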

Vision Chip for Edge and Motion Detection with a Function of Output Offset Cancellation (출력옵셋의 제거기능을 가지는 윤곽 및 움직임 검출용 시각칩)

  • Park, Jong-Ho;Kim, Jung-Hwan;Suh, Sung-Ho;Shin, Jang-Kyoo;Lee, Min-Ho
    • Journal of Sensor Science and Technology
    • /
    • v.13 no.3
    • /
    • pp.188-194
    • /
    • 2004
  • With the remarkable advances in CMOS (complementary metal-oxide-semiconductor) process technology, a variety of vision sensors with signal processing circuits for complicated functions are actively being developed. In particular, as the principles of signal processing in the human retina have been revealed, a series of vision chips imitating the retina have been reported. The human retina detects the edges and motion of an object effectively. Among the retina's several functions, edge detection is accomplished by the photoreceptor, horizontal, and bipolar cells. We designed a CMOS vision chip by modeling the retinal cells involved in edge and motion detection in hardware. The designed vision chip was fabricated using a 0.6 μm CMOS process, and its characteristics were measured. Having reliable output characteristics, this chip can be used at the input stage of many applications, such as target tracking, fingerprint recognition, and human-friendly robot systems.
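
The retinal edge-detection pathway described above (photoreceptor, horizontal cell, bipolar cell) is often modeled in software as a center-surround, difference-of-Gaussians operation. The sketch below is a software analogy of that model, not the chip's circuit; the filter widths are assumed for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retina_edge_response(image, center_sigma=1.0, surround_sigma=3.0):
    """Difference-of-Gaussians approximation of the retinal edge pathway.

    The narrow 'center' filter plays the role of the photoreceptor signal,
    the wide 'surround' filter the spatially averaged horizontal-cell
    signal, and their difference the bipolar-cell (edge) response.
    """
    image = image.astype(np.float64)
    center = gaussian_filter(image, center_sigma)
    surround = gaussian_filter(image, surround_sigma)
    return center - surround

# Example: a bright square on a dark background produces strong responses
# along its border and near-zero response in uniform regions.
frame = np.zeros((64, 64))
frame[16:48, 16:48] = 1.0
edges = retina_edge_response(frame)
print(edges.max(), edges.min())
```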

Force monitoring of steel cables using vision-based sensing technology: methodology and experimental verification

  • Ye, X.W.;Dong, C.Z.;Liu, T.
    • Smart Structures and Systems
    • /
    • v.18 no.3
    • /
    • pp.585-599
    • /
    • 2016
  • Steel cables serve as key structural components in long-span bridges, and the force state of a steel cable is one of the most important factors representing the safety condition of the bridge structure. The disadvantages of traditional cable force measurement methods are well recognized, and an effective alternative is still desired. In the last decade, vision-based sensing technology has developed rapidly and been broadly applied in the field of structural health monitoring (SHM). With the aid of a vision-based multi-point structural displacement measurement method, monitoring of the tensile force of a steel cable can be realized. In this paper, a novel cable force monitoring system integrated with a multi-point pattern matching algorithm is developed. The feasibility and accuracy of the developed vision-based force monitoring system have been validated by conducting uniaxial tensile tests of steel bars, steel wire ropes, and parallel strand cables on a universal testing machine (UTM), as well as a series of moving-load experiments on a scale arch bridge model. The comparative study of the experimental outcomes indicates that the results obtained by the vision-based system are consistent with those measured by the traditional cable force measurement method.
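
The system above derives cable force from vision-based multi-point displacement measurement. As a hedged sketch of the displacement-measurement step only, using OpenCV's standard normalized-correlation template matching rather than the paper's own multi-point pattern matching algorithm, the code below tracks a target pattern between two frames and reports its pixel displacement; converting pixels to physical displacement would additionally require a calibrated scale factor.

```python
import cv2
import numpy as np

def track_displacement(reference_frame, current_frame, template_box):
    """Track a target pattern between two frames by template matching.

    template_box is (x, y, w, h) of the target pattern in the reference
    frame; the return value is the (dx, dy) pixel displacement of the
    best match found in the current frame.
    """
    x, y, w, h = template_box
    template = reference_frame[y:y + h, x:x + w]
    result = cv2.matchTemplate(current_frame, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    return max_loc[0] - x, max_loc[1] - y

# Synthetic example: a bright blob shifted by (3, -2) pixels between frames.
ref = np.zeros((120, 160), dtype=np.uint8)
cv2.circle(ref, (80, 60), 10, 255, -1)
cur = np.zeros_like(ref)
cv2.circle(cur, (83, 58), 10, 255, -1)
print(track_displacement(ref, cur, (65, 45, 30, 30)))  # approx (3, -2)
```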

Egocentric Vision for Human Activity Recognition Using Deep Learning

  • Malika Douache;Badra Nawal Benmoussat
    • Journal of Information Processing Systems
    • /
    • v.19 no.6
    • /
    • pp.730-744
    • /
    • 2023
  • The topic of this paper is the recognition of human activities using egocentric vision, particularly video captured by body-worn cameras, which could be helpful for video surveillance, automatic search, and video indexing. It could also help assist elderly and frail persons and improve their daily lives. Human activity recognition remains problematic because of the large variations in how actions are executed, especially when recognition is realized through an external device, such as a robot acting as a personal assistant. The inferred information is used both online, to assist the person, and offline, to support the personal assistant. The main purpose of this paper is to provide an efficient and simple recognition method, based on convolutional neural networks and deep learning, that uses egocentric camera data only and is robust against the various sources of variability in action execution. In terms of accuracy improvement, the simulation results outperform the current state of the art by a significant margin: 61% when using egocentric camera data only, more than 44% when using egocentric camera data together with data from several stationary cameras, and more than 12% when using both inertial measurement unit (IMU) and egocentric camera data.
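
As a minimal, hedged sketch of a frame-level CNN activity classifier of the general kind the abstract describes (the architecture, input size, and class count are assumptions for illustration, not the authors' network):

```python
import torch
import torch.nn as nn

class EgocentricActivityCNN(nn.Module):
    """Small frame-level CNN for egocentric activity classification.

    Illustrative architecture only: two conv/pool stages followed by a
    fully connected classifier over `num_classes` activity labels.
    """
    def __init__(self, num_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, frames):          # frames: (batch, 3, 224, 224)
        return self.classifier(self.features(frames))

# One forward pass on a dummy batch of egocentric frames.
model = EgocentricActivityCNN(num_classes=8)
logits = model(torch.randn(4, 3, 224, 224))
print(logits.shape)                     # torch.Size([4, 8])
```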

Accurate Range-free Localization Based on Quantum Particle Swarm Optimization in Heterogeneous Wireless Sensor Networks

  • Wu, Wenlan;Wen, Xianbin;Xu, Haixia;Yuan, Liming;Meng, Qingxia
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.3
    • /
    • pp.1083-1097
    • /
    • 2018
  • This paper presents a novel range-free localization algorithm based on quantum particle swarm optimization (QPSO). The proposed algorithm is capable of estimating the distance between two non-neighboring sensors in multi-hop heterogeneous wireless sensor networks in which the nodes' communication ranges differ. Firstly, we construct a new cumulative distribution function of expected hop progress for sensor nodes with different transmission capabilities. Then, the distance between any two nodes can be computed accurately and effectively by deriving the mathematical expectation of the cumulative distribution function. Finally, the QPSO algorithm is used to improve the positioning accuracy. Simulation results show that the proposed algorithm is superior in localization accuracy and efficiency for both random and uniform node placement in heterogeneous wireless sensor networks.
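
The final QPSO refinement step can be sketched as follows: given estimated distances from an unknown node to several anchor nodes (in the paper, obtained from the expected-hop-progress model), QPSO searches for the position minimizing the squared distance residuals. The update rule below is the standard QPSO formulation; the parameter choices and the toy scenario are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy scenario: 4 anchor nodes and a node at an unknown position whose
# (noisy) distances to the anchors have already been estimated.
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_pos = np.array([62.0, 35.0])
est_dists = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 1.0, 4)

def cost(pos):
    """Squared error between estimated and implied anchor distances."""
    return np.sum((np.linalg.norm(anchors - pos, axis=1) - est_dists) ** 2)

# Standard QPSO: each particle is drawn toward a point between its personal
# best and the global best, with a contraction-expansion (beta) term.
n_particles, n_iters, beta = 30, 100, 0.75
x = rng.uniform(0, 100, size=(n_particles, 2))
pbest = x.copy()
pbest_cost = np.array([cost(p) for p in pbest])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(n_iters):
    mbest = pbest.mean(axis=0)                        # mean best position
    for i in range(n_particles):
        phi = rng.random(2)
        p = phi * pbest[i] + (1 - phi) * gbest        # local attractor
        u = rng.random(2)
        sign = np.where(rng.random(2) < 0.5, 1.0, -1.0)
        x[i] = p + sign * beta * np.abs(mbest - x[i]) * np.log(1.0 / u)
        c = cost(x[i])
        if c < pbest_cost[i]:
            pbest[i], pbest_cost[i] = x[i].copy(), c
    gbest = pbest[pbest_cost.argmin()].copy()

print(gbest.round(2))   # close to the true position (62, 35)
```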

Vision-based Localization for AUVs using Weighted Template Matching in a Structured Environment (구조화된 환경에서의 가중치 템플릿 매칭을 이용한 자율 수중 로봇의 비전 기반 위치 인식)

  • Kim, Donghoon;Lee, Donghwa;Myung, Hyun;Choi, Hyun-Taek
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.19 no.8
    • /
    • pp.667-675
    • /
    • 2013
  • This paper presents vision-based techniques for underwater landmark detection, map-based localization, and SLAM (Simultaneous Localization and Mapping) in structured underwater environments. A variety of underwater tasks require an underwater robot to perform autonomous navigation successfully, but the sensors available for accurate localization are limited. Among them, a vision sensor is very useful for short-range tasks, despite harsh underwater conditions including low visibility, noise, and large areas of featureless topography. To overcome these problems and to utilize a vision sensor for underwater localization, we propose a novel vision-based object detection technique applied to MCL (Monte Carlo Localization) and EKF (Extended Kalman Filter)-based SLAM algorithms. In the image processing step, a weighted correlation coefficient-based template matching method and a color-based image segmentation method are proposed to improve on the conventional approach. In the localization step, to apply the landmark detection results to MCL and EKF-SLAM, dead-reckoning information and the landmark detection results are used for the prediction and update phases, respectively. The performance of the proposed technique is evaluated in experiments with an underwater robot platform in an indoor water tank, and the results are discussed.
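
A hedged sketch of weighted correlation-based template matching of the general kind mentioned above: each template pixel carries a weight so that unreliable regions (e.g., glare or featureless areas) contribute less to the match score. The specific weighting scheme here is an assumption for illustration, not the paper's exact formulation.

```python
import numpy as np

def weighted_correlation(template, patch, weights):
    """Weighted normalized correlation coefficient between two image patches."""
    w = weights / weights.sum()
    t = template - (w * template).sum()     # weighted-mean-centered template
    s = patch - (w * patch).sum()           # weighted-mean-centered patch
    num = (w * t * s).sum()
    den = np.sqrt((w * t * t).sum() * (w * s * s).sum())
    return num / den if den > 0 else 0.0

def match_template_weighted(image, template, weights):
    """Slide the weighted template over the image; return best (row, col, score)."""
    th, tw = template.shape
    best = (-1, -1, -np.inf)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            score = weighted_correlation(template, image[r:r + th, c:c + tw], weights)
            if score > best[2]:
                best = (r, c, score)
    return best

# Example: find a landmark template in an image, down-weighting its border.
rng = np.random.default_rng(3)
image = rng.random((60, 80))
template = image[20:30, 40:52].copy()
weights = np.ones_like(template)
weights[0, :] = weights[-1, :] = weights[:, 0] = weights[:, -1] = 0.2
print(match_template_weighted(image, template, weights)[:2])   # (20, 40)
```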

A study on ways to make employment improve through Big Data analysis of university information public

  • Lim, Heon-Wook;Kim, Sun-Jib
    • International Journal of Advanced Culture Technology
    • /
    • v.9 no.3
    • /
    • pp.174-180
    • /
    • 2021
  • The necessity of this study is as follows: the decline in the number of newborns, the rise in youth unemployment, and the drop in the employment rate are having a serious impact on universities. To help increase university employment rates, we utilize big data from publicly disclosed university information. Big data here refers to the process of collecting and analyzing data and includes the business processes of finding data, reprocessing information in an easy-to-understand form, and selling that information to people and institutions. Big data technology can be divided into technologies for storing, refining, analyzing, and predicting from big data. The purpose of this study is to identify the vision and specialized departments of universities with high employment rates using big data technology. Data were collected from 227 universities on the www.academyinfo.go.kr site; 130 meaningful universities were selected, from which 25 universities with high employment rates and 25 with low employment rates were chosen. In conclusion, universities with a high employment rate can first be said to have a student-centered vision and university specialization: their stated visions emphasize fostering talent and specialization, whereas the visions of universities with low employment rates give precedence to serving as regional bases. Second, universities with a high employment rate show strong interest in specialized departments: when the presence or absence of a specialization plan was checked, such plans were three times as common among high-employment-rate universities (21 vs. 7). Third, universities with high employment rates promote specialization aligned with social needs: their specialized departments are concentrated in future technology and nursing/health, whereas universities with low employment rates promoted school-centered specialization in future technology, culture, tourism, and art. In summary, universities with high employment rates showed strong interest in a student-centered vision and in developing specialized departments that meet social needs.
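
A minimal sketch of the comparison described above, assuming a pre-collected table of university disclosure data with hypothetical column names (the actual fields scraped from www.academyinfo.go.kr are not specified in the abstract):

```python
import pandas as pd

# Hypothetical columns standing in for the scraped disclosure data:
# university, employment_rate (%), and has_specialization_plan (bool).
df = pd.DataFrame({
    "university": [f"Univ_{i}" for i in range(1, 131)],
    "employment_rate": [50 + (i % 40) for i in range(1, 131)],
    "has_specialization_plan": [(i % 3) != 0 for i in range(1, 131)],
})

# Take the 25 universities with the highest and the 25 with the lowest
# employment rates, then compare how often each group has a specialization plan.
ranked = df.sort_values("employment_rate", ascending=False)
high_group = ranked.head(25)
low_group = ranked.tail(25)

print("High-rate group with specialization plan:",
      int(high_group["has_specialization_plan"].sum()), "/ 25")
print("Low-rate group with specialization plan:",
      int(low_group["has_specialization_plan"].sum()), "/ 25")
```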