• Title/Abstract/Keyword: Low vision

Search results: 703 items

POSITION RECOGNITION AND QUALITY EVALUATION OF TOBACCO LEAVES VIA COLOR COMPUTER VISION

  • Lee, C. H.; H. Hwang
    • 한국농업기계학회:학술대회논문집 / 한국농업기계학회 2000년도 THE THIRD INTERNATIONAL CONFERENCE ON AGRICULTURAL MACHINERY ENGINEERING. V.III / pp.569-577 / 2000
  • The position at which a tobacco leaf is attached to the stalk strongly influences its quality. To evaluate quality, sample leaves were collected according to their attachment position. In Korea, the positions are divided into four classes: high, middle, low, and inside leaves. Until now, the grade of a standard sample has been determined by human experts from the Korea Ginseng and Tobacco Company. Much prior research has been based on chemical and spectral analysis using NIR and on computer vision. Tobacco leaves are classified mainly into five grades according to attachment position and chemical composition; high- and low-positioned leaves generally receive low grades (below grade 3), while inside- and middle-positioned leaves generally receive high grades. This is basic research toward a real-time tobacco leaf grading system combined with a portable NIR spectrum analysis system; here, only position recognition and grading using color machine vision are addressed. The RGB color information was converted to the HSI color space, and all samples were examined as bundles of tobacco leaves. Quality grading and position recognition were performed with the well-known error back-propagation neural network, and finally the relationship between attachment position and grade was analyzed. (An illustrative RGB-to-HSI conversion sketch follows this entry.)

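The abstract above converts RGB pixel data to the HSI color space before grading the leaf bundles. The paper's own conversion code is not given; as a minimal sketch, the NumPy function below implements the standard geometric RGB-to-HSI formulas, with the input range, epsilon guard, and hue normalization being assumptions.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB image (H x W x 3, values in [0, 1]) to HSI channels in [0, 1]."""
    eps = 1e-8                                   # guards divisions for gray/black pixels
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

    # Intensity: mean of the three channels.
    i = (r + g + b) / 3.0

    # Saturation: how far the pixel is from pure gray.
    s = 1.0 - np.minimum(np.minimum(r, g), b) / (i + eps)

    # Hue: angle of the color vector, folded to [0, 1].
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2.0 * np.pi - theta) / (2.0 * np.pi)

    return np.stack([h, s, i], axis=-1)
```

Per-bundle statistics of these channels (for example, channel means or histograms) would then form the input vector of the back-propagation classifier mentioned in the abstract.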

한국공학교육인증원의 2020 비전 달성도 및 사회적 위상 분석 (A Study on the Analysis of the Vision Achievement and Social Status of the ABEEK)

  • 한지영
    • 공학교육연구 / Vol. 27, No. 2 / pp.3-12 / 2024
  • The purpose of this study was to evaluate how well the 2020 vision presented by the Accreditation Board for Engineering Education of Korea (ABEEK) had been achieved, and to objectively examine its social status. Diagnosing how well the ABEEK, one of the major engineering education communities, was achieving its own vision and identifying room for improvement was considered essential for the development of engineering education in Korea. To achieve the objectives of the study, research methods such as literature review, survey research, and an expert advisory committee were used. The evaluation of the achievement of the ABEEK's Vision 2020 was based on the responses of 61 people with experience as members of the steering committee. In addition, the vision and mission of the 23 countries that are currently signatory members of the Washington Accord were surveyed, and the social responsibility and financial independence of the 20 countries that became signatory members before 2020 were compared. As a result of the analysis, securing international equivalence in engineering education received the most positive evaluation, while social compensation efforts for accredited graduates received the least positive evaluation. The ABEEK was evaluated as having a medium level of social responsibility and a low level of financial independence. Based on these results, we propose ways for the ABEEK to contribute to the improvement of Korean engineering education.

비전정보와 캐드 DB 의 매칭을 통한 웹기반 금형판별 시스템 개발 (Development of Web Based Die Discrimination System by matching the information of vision with CAD Database)

  • 김세원;김동우;전병철;조명우
    • 한국정밀공학회:학술대회논문집 / 한국정밀공학회 2004년도 추계학술대회 논문집 / pp.277-280 / 2004
  • In the recent die industry, web-based production control systems have been widely adopted owing to advances in IT, and many studies on long-distance remote monitoring have been published. The goal of this study is to develop a die discrimination system that combines web-based vision with a CAD API so that a remote client can identify the die currently in process. A distinctive feature of the system is that it matches 2D vision images against a CAD database. With the developed system, a satisfactory discrimination result can be obtained in a short time over web monitoring, at the cost of somewhat lower precision. (A hypothetical 2D shape-matching sketch follows this entry.)

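The system above matches a 2D camera image of a die against entries in a CAD database, but the abstract does not state the matching criterion. Purely as a hypothetical illustration of one common way to do 2D shape matching, the sketch below compares the Hu-moment signature of the largest contour in the image against precomputed signatures stored for each CAD model; the function names, signature choice, and distance metric are all assumptions, not the authors' method.

```python
import cv2
import numpy as np

def shape_signature(binary_image):
    """Log-scaled Hu moments of the largest contour in a binary (0/255) image."""
    # OpenCV >= 4 returns (contours, hierarchy).
    contours, _ = cv2.findContours(binary_image, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    hu = cv2.HuMoments(cv2.moments(largest)).flatten()
    # Log scaling puts the seven invariants on comparable scales.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def best_match(query_signature, db_signatures):
    """Key of the database entry whose stored signature is closest (L2 distance)."""
    return min(db_signatures,
               key=lambda name: np.linalg.norm(db_signatures[name] - query_signature))
```

In a web-monitoring setting, only the short signature vector would need to travel between client and server, which is consistent with the short response times reported above.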

로봇 비젼을 이용한 대형 2차원 물체의 인식과 가공 (Recognition and Machining for Large 2D Object using Robot Vision)

  • 조지승;정병묵
    • 한국정밀공학회지 / Vol. 16, No. 2 (Serial No. 95) / pp.68-73 / 1999
  • Generally, most machining processes are carried out according to the dimensions of a CAD drawing. In the machining of 2D objects, however, there are many cases in which only a physical sample is given, without a drawing, because of the simplicity of the shape. To cut the same shape as a given sample, this paper proposes a method that extracts the geometric information of a large sample using robot vision and produces a dimensioned drawing for machining. Because the resolution of a single frame of the vision system is too low, the camera must be set up according to the desired resolution and images must be captured while moving along the contour; the overall outline is then composed from the sequentially captured images. In the experiment, the cut product was compared with the original sample, and the two coincided in size within the allowed error bound. (An outline-composition sketch, under assumed calibration conditions, follows this entry.)

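The key idea above is to compose one large outline from frames captured while the camera moves along the contour. The paper's composition method is not given in the abstract; the sketch below assumes the robot reports the camera's planar offset for each frame and that a mm-per-pixel scale is known, and simply transforms per-frame edge points into a common world coordinate frame.

```python
import numpy as np
import cv2

def frame_edge_points(gray, threshold1=50, threshold2=150):
    """Detect edge pixels in one grayscale frame and return their (x, y) pixel coordinates."""
    edges = cv2.Canny(gray, threshold1, threshold2)
    ys, xs = np.nonzero(edges)
    return np.column_stack([xs, ys]).astype(float)

def compose_outline(frames, camera_offsets_mm, mm_per_pixel):
    """Merge edge points from all frames into one outline in world coordinates (mm).

    frames            : list of grayscale images captured along the contour
    camera_offsets_mm : list of (x, y) camera positions reported by the robot
    mm_per_pixel      : assumed calibrated image scale
    """
    outline = []
    for gray, (ox, oy) in zip(frames, camera_offsets_mm):
        pts = frame_edge_points(gray) * mm_per_pixel
        pts[:, 0] += ox                      # shift into the robot's world frame
        pts[:, 1] += oy
        outline.append(pts)
    return np.vstack(outline)
```

A dimensioned drawing could then be produced by fitting lines and arcs to the merged point set; the exact post-processing used in the paper is not described in the abstract.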

용접 로봇을 위한 비젼 시스템 응용 연구 (A Study on the Vision System Application for Welding Robot)

  • 박병호;정선환;노승훈;최성대;최환
    • 한국정밀공학회:학술대회논문집 / 한국정밀공학회 2000년도 추계학술대회 논문집 / pp.678-682 / 2000
  • The purpose of this study is to develop a general-purpose 6-axis welding robot utilizing a low-cost vision system. The developed vision system consists of a CCD camera, a PC running Windows 98, and a PC-robot communication program written in Visual C++. A test was carried out to verify that the welding torch can precisely follow the welding path, and the results show that the system can readily be applied to practical welding operations.


교통 신호등과 비전 센서의 위치 관계 분석을 통한 이미지에서 교통 신호등 검출 방법 (Traffic Light Detection Method in Image Using Geometric Analysis Between Traffic Light and Vision Sensor)

  • 최창환;유국열;박용완
    • 대한임베디드공학회논문지 / Vol. 10, No. 2 / pp.101-108 / 2015
  • In this paper, a robust traffic light detection method is proposed using a vision sensor and DGPS (Differential Global Positioning System). Conventional vision-based detection methods are very sensitive to illumination change, for instance low visibility at night or strong reflections from bright light. To overcome these limitations of the vision sensor, DGPS is incorporated to determine the location and shape of traffic lights, which are available from a traffic light database. The geometric relationship between the traffic light and the vision sensor, established from the DGPS information, is then used to locate the traffic light in the image. Empirical results show that the proposed method improves the detection rate by 51% at night, with a marginal improvement in the daytime environment. (A pinhole-projection sketch of this geometric step follows below.)
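
The core geometric step above projects a traffic light's known database position into the image given the vehicle's DGPS pose. As a hedged illustration only (the paper's exact camera model and coordinate conventions are not given in the abstract), the sketch below uses an ideal pinhole model: the light's position is expressed in the camera frame and projected with an assumed intrinsic matrix to obtain a search region.

```python
import numpy as np

def project_to_image(p_world, R_cam_from_world, t_cam_from_world, K):
    """Project a 3D world point into pixel coordinates with a pinhole camera model.

    p_world          : (3,) traffic light position from the map database
    R_cam_from_world : (3, 3) rotation from world frame to camera frame
    t_cam_from_world : (3,) translation (camera pose derived from DGPS and heading)
    K                : (3, 3) intrinsic matrix (assumed calibrated)
    """
    p_cam = R_cam_from_world @ p_world + t_cam_from_world
    if p_cam[2] <= 0:
        return None                      # behind the camera, not visible
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]              # pixel coordinates (u, v)

def roi_around(center_uv, half_size=40):
    """Square search ROI (u_min, v_min, u_max, v_max) around the projected point."""
    u, v = center_uv
    return (int(u - half_size), int(v - half_size),
            int(u + half_size), int(v + half_size))
```

Color and shape verification then only needs to run inside this ROI, which is what makes the approach less sensitive to night-time illumination.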

Vision을 이용한 자율주행 로봇의 라인 인식 및 주행방향 결정에 관한 연구 (A Study of Line Recognition and Driving Direction Control On Vision based AGV)

  • 김영숙;김태완;이창구
    • 대한전기학회:학술대회논문집 / 대한전기학회 2002년도 하계학술대회 논문집 D / pp.2341-2343 / 2002
  • This paper describes vision-based line recognition and driving-direction control for an AGV (autonomous guided vehicle). A black stripe attached to the corridor floor is used as the navigation guide, and a binary image of the guide stripe is obtained from a CCD camera. To detect the guideline quickly and accurately, a variable-thresholding algorithm is used. This low-cost line-tracking system runs efficiently on PC-based real-time vision processing. Steering control is realized with a controller driven by the guideline angle error. The method was tested on a typical AGV with a single camera in a laboratory environment. (An adaptive-thresholding sketch follows this entry.)

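The abstract relies on a "variable thresholding" step to binarize the guide stripe under changing corridor lighting, but does not specify the algorithm. The sketch below shows one common variant, local-mean adaptive thresholding with OpenCV, followed by estimating the line's angle error from the binary image; the block size, offset, and pixel-count guard are assumptions.

```python
import cv2
import numpy as np

def detect_guideline(gray):
    """Binarize a dark guide stripe with locally varying thresholds and
    return (binary image, angle error in degrees from the image's vertical axis)."""
    # Local-mean adaptive threshold; THRESH_BINARY_INV makes the dark stripe white.
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, blockSize=31, C=10)

    ys, xs = np.nonzero(binary)
    if len(xs) < 50:                       # not enough stripe pixels found
        return binary, None

    # Fit x = a*y + b, so near-vertical stripes stay numerically well-posed.
    a, b = np.polyfit(ys.astype(float), xs.astype(float), 1)
    angle_error_deg = np.degrees(np.arctan(a))   # 0 deg == stripe parallel to travel axis
    return binary, angle_error_deg
```

A simple proportional controller acting on the angle error would then produce the steering command studied in the paper.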

인공지능을 이용한 3D 콘텐츠 기술 동향 및 향후 전망 (Recent Trends and Prospects of 3D Content Using Artificial Intelligence Technology)

  • 이승욱;황본우;임성재;윤승욱;김태준;김기남;김대희;박창준
    • 전자통신동향분석 / Vol. 34, No. 4 / pp.15-22 / 2019
  • Recent technological advances in three-dimensional (3D) sensing devices and machine learning, such as deep learning, have enabled data-driven 3D applications. Research on artificial intelligence has advanced over the past few years, and 3D deep learning has been introduced. This is the result of the availability of high-quality big data, increases in computing power, and the development of new algorithms; before the introduction of 3D deep learning, the main targets for deep learning were one-dimensional (1D) audio files and two-dimensional (2D) images. The research field of deep learning has extended from discriminative models, such as classification/segmentation/reconstruction models, to generative models, such as style transfer and the generation of non-existing data. Unlike 2D learning data, 3D learning data are not easy to acquire. Although low-cost 3D data acquisition sensors have become increasingly popular owing to advances in 3D vision technology, the generation/acquisition of 3D data is still very difficult. Even when 3D data can be acquired, post-processing remains a significant problem. Moreover, it is not easy to directly apply existing network models such as convolutional networks, owing to the various ways in which 3D data are represented. In this paper, we summarize technological trends in AI-based 3D content generation.

PCA알고리즘을 이용한 최적 pRBFNNs 기반 나이트비전 얼굴인식 시스템 설계 (Design of Optimized pRBFNNs-based Night Vision Face Recognition System Using PCA Algorithm)

  • 오성권;장병희
    • 전자공학회논문지 / Vol. 50, No. 1 / pp.225-231 / 2013
  • In this study, we design an optimized pRBFNNs-based night-vision face recognition system using the PCA algorithm. In an unlit environment the illuminance is low, so it is difficult to acquire images with a CCD camera. In this paper, the quality of images degraded by low illumination is improved using a night-vision camera and histogram equalization. The Ada-Boost algorithm is then used to detect face regions, separating face from non-face image areas. Principal Component Analysis (PCA), a dimensionality-reduction technique, is used to transform the extracted high-dimensional feature data into low-dimensional feature data. A pRBFNNs (Polynomial-based Radial Basis Function Neural Networks) pattern classifier is introduced as the recognition module. The proposed polynomial-based RBFNNs consist of three functional modules: premise, consequence, and inference parts. The premise part partitions the input space using FCM (Fuzzy C-Means) clustering, and the consequence part represents each partitioned local region with a polynomial function. The parameters of the model are optimized using the Differential Evolution (DE) algorithm. (A minimal PCA projection sketch follows below.)
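
The pipeline above reduces high-dimensional face features with PCA before classification. As a minimal sketch of that step only (not the authors' pRBFNNs classifier), the code below computes a PCA projection with NumPy's SVD; the number of retained components is an assumption.

```python
import numpy as np

def fit_pca(X, n_components=50):
    """Fit PCA on row-vector samples X (n_samples x n_features) via SVD.

    Returns the feature mean and the top principal axes.
    """
    mean = X.mean(axis=0)
    Xc = X - mean
    # Rows of Vt are principal axes ordered by decreasing singular value.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:n_components]

def transform_pca(X, mean, components):
    """Project samples onto the retained principal axes (dimensionality reduction)."""
    return (X - mean) @ components.T
```

The reduced feature vectors would then serve as inputs to the pRBFNNs classifier described above.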

비색 MOF 가스센서 어레이 기반 고정밀 질환 VOCs 바이오마커 검출을 위한 머신비전 플랫폼 (Machine Vision Platform for High-Precision Detection of Disease VOC Biomarkers Using Colorimetric MOF-Based Gas Sensor Array)

  • 이준영;오승윤;김동민;김영웅;허정석;이대식
    • 센서학회지 / Vol. 33, No. 2 / pp.112-116 / 2024
  • Gas-sensor technology for detecting volatile organic compound (VOC) biomarkers offers significant advantages for noninvasive diagnostics, including rapid response time and low operational cost, and shows promising potential for disease diagnosis. Colorimetric gas sensors, which enable intuitive analysis of gas concentration through changes in color, offer additional benefits for the development of personal diagnostic kits. However, the traditional method of visually monitoring these sensors limits quantitative analysis and the consistency of detection-threshold evaluation, potentially affecting diagnostic accuracy. To address this, we developed a machine vision platform for metal-organic framework (MOF)-based colorimetric gas sensor arrays, designed to accurately detect disease-related VOC biomarkers. The platform integrates a CMOS camera module, a gas chamber, and a colorimetric MOF sensor jig to quantitatively assess color changes. A specialized machine vision algorithm identifies the color-change region of interest (ROI) in the captured images and monitors the color trends. Performance was evaluated through experiments with four types of low-concentration standard gases, and a limit of detection (LoD) at the 100 ppb level was observed. This approach significantly enhances the potential for non-invasive and accurate disease diagnosis by detecting low-concentration VOC biomarkers and offers a novel diagnostic tool. (A simple ROI color-monitoring sketch follows below.)
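
The platform above quantifies color change inside an automatically identified ROI for each sensor spot. As a hedged, simplified illustration (the abstract does not describe the ROI-detection algorithm itself), the sketch below simply averages the color inside fixed ROIs over a sequence of frames and reports each ROI's color distance from the first frame.

```python
import numpy as np

def mean_color(frame, roi):
    """Average color inside roi = (x_min, y_min, x_max, y_max) of an H x W x 3 frame."""
    x0, y0, x1, y1 = roi
    return frame[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)

def color_change_curves(frames, rois):
    """Per ROI, the Euclidean color distance of every frame from frame 0.

    frames : list of H x W x 3 images captured over the gas exposure
    rois   : list of (x_min, y_min, x_max, y_max) sensor-spot regions
    """
    baselines = [mean_color(frames[0], roi) for roi in rois]
    curves = []
    for roi, base in zip(rois, baselines):
        curve = [float(np.linalg.norm(mean_color(f, roi) - base)) for f in frames]
        curves.append(curve)
    return curves
```

Comparing these response curves against calibration data for each standard gas is what would ultimately support a ppb-level detection decision such as the one reported above.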