Title/Summary/Keyword: camera vision

Scholarly Assessment of Aruco Marker-Driven Worker Localization Techniques within Construction Environments (Aruco marker 기반 건설 현장 작업자 위치 파악 적용성 분석)

  • Choi, Tae-Hun; Kim, Do-Kuen; Jang, Se-Jun
    • Journal of the Korea Institute of Building Construction / v.23 no.5 / pp.629-638 / 2023
  • This study introduces an approach to monitoring the locations of workers in indoor construction settings. While traditional methods such as GPS and NTRIP have proven effective for outdoor localization, their precision degrades indoors. In response, this research advocates the adoption of ArUco markers: leveraging computer vision, the markers allow the distance between a worker and a marker to be quantified, pinpointing the worker's instantaneous location with improved accuracy. The method was evaluated in a real-world construction scenario, appraising system stability, the influence of lighting conditions, the maximum measurable distance, and the range of recognition angles. System stability was assessed by moving the camera at a uniform velocity and gauging marker recognition; the effect of illumination on marker detectability was examined by modulating the ambient lighting; and moving the camera through the space established both the distance at which marker recognition fails and the maximum angle at which markers remain detectable.
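
The detect-and-range step described in this abstract can be reproduced with standard tools. Below is a minimal sketch using the legacy cv2.aruco module from opencv-contrib-python (versions before 4.7; newer releases use cv2.aruco.ArucoDetector instead); the dictionary choice, marker side length, and camera intrinsics are illustrative placeholders, not values from the study.

```python
# A minimal sketch of ArUco-based camera-to-marker distance estimation,
# assuming the legacy cv2.aruco API from opencv-contrib-python (< 4.7).
import cv2
import numpy as np

MARKER_LENGTH_M = 0.15  # physical marker side length in meters (assumed)
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])  # hypothetical intrinsics
dist_coeffs = np.zeros(5)                    # hypothetical: no lens distortion

def marker_distances(frame):
    """Return {marker_id: distance_m} for every ArUco marker in the frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_100)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
    if ids is None:
        return {}
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_LENGTH_M, camera_matrix, dist_coeffs)
    # The norm of each translation vector is the camera-to-marker distance.
    return {int(i): float(np.linalg.norm(t))
            for i, t in zip(ids.flatten(), tvecs[:, 0])}
```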

Coating defect classification method for steel structures with vision-thermography imaging and zero-shot learning

  • Jun Lee; Kiyoung Kim; Hyeonjin Kim; Hoon Sohn
    • Smart Structures and Systems / v.33 no.1 / pp.55-64 / 2024
  • This paper proposes a fusion imaging-based coating-defect classification method for steel structures that uses zero-shot learning. In the proposed method, a halogen lamp generates heat energy on the coating surface of a steel structure; the resulting heat responses are measured by an infrared (IR) camera, while photos of the coating surface are captured by a charge-coupled device (CCD) camera. The measured heat responses and visual images are then analyzed using zero-shot learning to classify the coating defects, and the estimated defects are visualized across the inspection surface of the steel structure. In contrast to older approaches to coating-defect classification that relied on visual inspection and were limited to surface defects, and to older artificial neural network (ANN)-based methods that required large amounts of data for training and validation, the proposed method accurately classifies both internal and external defects and can classify coating defects of unobserved classes not included in the training. Additionally, the proposed model readily learns additional classification conditions, making it simple to add classes for problems of interest and for field application. In validation via field testing, fusing visual and thermal imaging improved defect-type classification accuracy by 22.7% compared to using the visual dataset alone. The classification accuracy of the proposed method on a test dataset containing only trained classes was 100%; with word-embedding vectors for the labels of untrained classes, the accuracy was 86.4%.
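
The zero-shot step can be illustrated compactly: an image feature from the fusion backbone is scored against word-embedding vectors for all class labels, including classes never seen in training. The sketch below assumes such embeddings are already available; the function names and vector conventions are hypothetical, not the authors' model.

```python
# A minimal numpy sketch of zero-shot classification via word-embedding
# compatibility; image_feature and label_embeddings are placeholders.
import numpy as np

def zero_shot_classify(image_feature, label_embeddings):
    """Pick the class whose word embedding is most similar to the image feature.

    image_feature   : (d,) vector from a (hypothetical) fusion backbone
    label_embeddings: {class_name: (d,) word-embedding vector}; may include
                      classes absent from the training set
    """
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    scores = {name: cosine(image_feature, vec)
              for name, vec in label_embeddings.items()}
    return max(scores, key=scores.get), scores
```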

Fusion of Gamma and Realistic Imaging (감마영상과 실사영상의 Fusion)

  • Kim, Yun-Cheol; Yu, Yeon-Uk; Seo, Young-Deok; Moon, Jong-Woon; Kim, Yeong-Seok; Won, Woo-Jae; Kim, Seok-Ki
    • The Korean Journal of Nuclear Medicine Technology / v.14 no.1 / pp.78-82 / 2010
  • Purpose: Recently, South Korea has seen a rapidly increasing incidence of both breast and thyroid cancers. As a result, the I-131 scan and lymphoscintigraphy have been performed more frequently. Although this type of diagnostic imaging is valuable in that it visualizes pathological conditions, like earlier nuclear diagnostic imaging techniques it yields little anatomical information. Transmission scans have been used in various ways to help locate anatomical positions, but the results were unsatisfactory. Therefore, this study aims to realize an imaging technique that shows more anatomical information through the fusion of gamma and realistic (photographic) imaging. Materials and Methods: We analyzed data from patients examined by lymphoscintigraphy and additional I-131 scans on a Symbia gamma camera (SIEMENS) in the nuclear medicine department of the National Cancer Center from April to July 2009. First, we imaged the same patient location using a miniature camera (R-2000, hyVISION); afterwards, we scanned with the gamma camera. The data were evaluated by measuring the agreement of the gamma and realistic images in the Gamma Ray Tool fusion program. Results: Radiation exposure to technologists and patients arose during production of the flood source and the transmission scan in which it was used. The exposure dose averaged 14.1743 μSv for technologists and 0.9037 μSv for patients. We also confirmed the matching of gamma and realistic markers in the fused images. Conclusion: We found that fusing gamma and realistic imaging can provide clinicians with images carrying more anatomical information, while simplifying the work process and eliminating the radiation exposure associated with the flood source. We hope this will be applicable in other nuclear medicine studies. To respect patient privacy, the procedure is performed only after the patient has consented following a detailed explanation of the process and its advantages.
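
The fusion itself can be illustrated as a registered overlay of the gamma image on the photograph. Below is a minimal OpenCV sketch, assuming the two images are already co-registered and 8-bit; the alpha weight and colormap are illustrative choices, not the Gamma Ray Tool's settings.

```python
# A minimal sketch of fusing a gamma image with a photographic image.
import cv2

def fuse_gamma_photo(gamma_gray, photo_bgr, alpha=0.4):
    """Overlay a color-mapped 8-bit gamma image on the realistic photo."""
    h, w = photo_bgr.shape[:2]
    gamma_resized = cv2.resize(gamma_gray, (w, h))          # match photo size
    gamma_color = cv2.applyColorMap(gamma_resized, cv2.COLORMAP_JET)
    return cv2.addWeighted(gamma_color, alpha, photo_bgr, 1.0 - alpha, 0)
```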

Thermographic Assessment in Dry Eye Syndrome, Compared with Normal Eyes by Using Thermography (열화상카메라를 이용한 정상안과 건성안의 서모그래피 비교)

  • Park, Chang Won; Lee, Ok Jin; Lee, Seung Won
    • Journal of Korean Ophthalmic Optics Society / v.20 no.2 / pp.247-253 / 2015
  • Purpose: The purpose of this study was to compare and analyze the ocular surface and palpebral conjunctiva of subjects divided into a normal eye group and a dry eye group, using a thermal camera. Methods: Subjects were 144 eyes of 72 university students without corneal disease, abnormal lacrimal ducts, a history of ocular surgery, or contact lens wear. Subjects were divided into the normal eye group and the dry eye group based on the results of TBUT, the Schirmer I test, and the McMonnies test. The temperature of each subject's ocular surface and palpebral conjunctiva was then measured and analyzed with a thermal camera (Cox CX series, Answer Co., Korea). Results: In the normal eye group, the rates of temperature change measured at Central Ar.1, Nasal Ar.2, Temporal Ar.3, Superior Ar.4, and Inferior Ar.5 were −0.13±0.08, −0.14±0.08, −0.12±0.08, −0.14±0.08, and −0.10±0.09 °C/sec, respectively. The dry eye group's results were −0.17±0.08, −0.16±0.07, −0.16±0.08, −0.17±0.09, and −0.15±0.08 °C/sec. Compared with the normal eye group, the values at Ar.1, Ar.3, and Ar.5 differed significantly in the dry eye group (p<0.05). The palpebral conjunctiva temperatures (Ar.1: central, Ar.2: nasal, Ar.3: temporal) of the normal eyes were 34.36±1.12, 34.17±1.10, and 34.07±1.12 °C, respectively; the corresponding values in the dry eye group were 33.55±0.94, 33.43±0.97, and 33.51±1.06 °C. The Ar.1 value of the dry eye group differed significantly from that of the normal eye group (p=0.05). Conclusion: The ocular surface temperature decreased faster in dry eyes than in normal eyes, and the palpebral conjunctiva temperature of dry eyes was also lower. Temperature changes on the ocular surface observed with a thermal camera are objective values for assessing tear film stability and may provide useful data for studies of dry eye syndrome.
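
The rates reported above (°C/sec) amount to the slope of the mean ROI temperature over time. Below is a minimal sketch, assuming the thermal camera yields per-pixel temperature maps; the ROI convention and fps parameter are placeholders, not the study's instrument settings.

```python
# A minimal sketch of a temperature-change rate in °C/sec: fit a line to
# the mean ROI temperature over time.
import numpy as np

def cooling_rate(thermal_frames, roi, fps):
    """Least-squares slope of mean ROI temperature, in °C per second.

    thermal_frames: sequence of 2-D temperature maps (°C)
    roi           : (row0, row1, col0, col1), e.g. an area like 'Central Ar.1'
    fps           : frame rate of the thermal camera
    """
    r0, r1, c0, c1 = roi
    means = np.array([f[r0:r1, c0:c1].mean() for f in thermal_frames])
    t = np.arange(len(means)) / fps
    slope, _intercept = np.polyfit(t, means, 1)  # negative slope = cooling
    return slope
```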

Forward Vehicle Detection Algorithm Using Column Detection and Bird's-Eye View Mapping Based on Stereo Vision (스테레오 비전기반의 컬럼 검출과 조감도 맵핑을 이용한 전방 차량 검출 알고리즘)

  • Lee, Chung-Hee; Lim, Young-Chul; Kwon, Soon; Kim, Jong-Hwan
    • The KIPS Transactions: Part B / v.18B no.5 / pp.255-264 / 2011
  • In this paper, we propose a forward vehicle detection algorithm using column detection and bird's-eye view mapping based on stereo vision. The algorithm detects forward vehicles robustly in real, complex traffic situations. It consists of three steps: road feature-based column detection, bird's-eye view mapping-based obstacle segmentation, and obstacle area remerging with vehicle verification. First, we extract a road feature using the maximum frequent values in the v-disparity map and perform column detection with the road feature as a new criterion. The road feature is a more appropriate criterion than the median value because it is not affected by the traffic situation, for example by changes in obstacle size or in the number of obstacles. Because multiple obstacles may remain within one obstacle area, we then perform bird's-eye view mapping-based obstacle segmentation to divide obstacles accurately; segmentation is straightforward because the bird's-eye view mapping represents obstacle positions on a planar plane using the depth map and camera information. Additionally, we remerge obstacle areas, since separately segmented areas may belong to the same obstacle. Finally, we verify whether each obstacle is a vehicle using the depth map and the gray image. Experiments on real, complex traffic situations demonstrate the vehicle detection performance of the algorithm.
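
The road-feature step can be sketched briefly: accumulate a v-disparity histogram (one histogram row per image row) and take the most frequent disparity in each row. The sketch assumes an integer-valued disparity map; max_disp is an assumed stereo-matcher setting, not a value from the paper.

```python
# A minimal sketch of road-feature extraction via maximum-frequency
# disparities in a v-disparity map.
import numpy as np

def road_feature_from_disparity(disparity, max_disp=64):
    """For each image row v, return the modal (most frequent) disparity."""
    h = disparity.shape[0]
    v_disp = np.zeros((h, max_disp), dtype=np.int32)
    for v in range(h):
        vals = disparity[v]
        vals = vals[(vals > 0) & (vals < max_disp)].astype(np.int32)
        np.add.at(v_disp[v], vals, 1)   # accumulate this row's disparity histogram
    return v_disp.argmax(axis=1)        # road profile: one disparity per row
```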

Accuracy Assessment on the Stereoscope based Digital Mapping Using Unmanned Aircraft Vehicle Image (무인항공기 영상을 이용한 입체시기반 수치도화 정확도 평가)

  • Yun, Kong-Hyun; Kim, Deok-In; Song, Yeong Sun
    • Journal of Cadastre & Land InformatiX / v.48 no.1 / pp.111-121 / 2018
  • In this research, digital elevation models, a true-ortho image, and 3-dimensional digital compiled data were generated and evaluated from unmanned aerial vehicle stereoscopic images by applying photogrammetric principles. Implementing stereoscopic vision requires a digital photogrammetric workstation; this study used GEOMAPPER 1.0, developed by the Ministry of Trade, Industry and Energy. To realize stereoscopic vision from two overlapping unmanned aerial vehicle images, the interior and exterior orientation parameters must be calculated; in particular, the lens distortion of the non-metric camera must be accurately compensated for stereoscopy. In this work, the photogrammetric orientation process was conducted using commercial software, PhotoScan 1.4. A fixed-wing KRobotics KD-2 was used to acquire the UAV images. A true-ortho photo was generated and a digital topographic map was partially produced. Finally, we present an error analysis of the generated digital compiled map. The results confirm that producing a digital terrain map at scales of 1:2,500 to 1:3,000 is feasible using the stereoscopic method.
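
The distortion-compensation step for the non-metric camera can be illustrated with OpenCV's Brown distortion model. The intrinsics and distortion coefficients below are hypothetical placeholders for values a self-calibration (such as the bundle adjustment PhotoScan performs) would estimate.

```python
# A minimal sketch of lens-distortion compensation for a non-metric camera.
import cv2
import numpy as np

K = np.array([[3600.0, 0.0, 2000.0],
              [0.0, 3600.0, 1500.0],
              [0.0, 0.0, 1.0]])  # hypothetical UAV camera intrinsics (pixels)
dist = np.array([-0.12, 0.05, 0.0002, -0.0001, 0.0])  # k1, k2, p1, p2, k3 (assumed)

def undistort_uav_image(image):
    """Resample the image so straight lines stay straight for stereoscopy."""
    return cv2.undistort(image, K, dist)
```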

Artificial Vision Project by Micro-Bio Technologies

  • Kim Sung June; Jung Hum; Yu Young Suk; Yu Hyeong Gon; Cho Dong il; Lee Byeong Ho; Ku Yong Sook; Kim Eun Mi; Seo Jong Mo; Kim Hyo kyum; Kim Eui tae; Paik Seung June; Yoon Il Young
    • Proceedings of the Korean Society of Visualization Conference / 2002.04a / pp.51-78 / 2002
  • A number of research groups worldwide are studying electronic implants that can be mounted on the retina, optic nerve, or visual cortex to restore vision in patients suffering from retinal degeneration. The implants consist of a neural interface made of biocompatible materials, one or more integrated circuits for stimulus generation, a camera, an image processor, and a telemetric channel. The realization of this class of neural prosthetic devices is largely due to the explosive development of micro- and nano-electronics technologies in the late 20th century and, more recently, of biotechnologies. Animal experiments have shown promise, and human experiments in progress indicate that recognition of images can be obtained and improved over time. We, at NBS-ERC of SNU, started our own retinal implant project in 2000. We selected polyimide as the biomaterial for an epi-retinal stimulator. In-vitro and in-vivo biocompatibility studies have been performed on the electrode arrays; we obtained good affinity to retinal pigment epithelial cells and no harmful effects. The implant also showed very good stability and safety in the rabbit eye for 12 weeks. We have also demonstrated that, through proper stimulation of the inner retina, meaningful vision can be obtained.

A study on measurement and compensation of automobile door gap using optical triangulation algorithm (광 삼각법 측정 알고리즘을 이용한 자동차 도어 간격 측정 및 보정에 관한 연구)

  • Kang, Dong-Sung; Lee, Jeong-woo; Ko, Kang-Ho; Kim, Tae-Min; Park, Kyu-Bag; Park, Jung Rae; Kim, Ji-Hun; Choi, Doo-Sun; Lim, Dong-Wook
    • Design & Manufacturing / v.14 no.1 / pp.8-14 / 2020
  • In general, an auto parts assembly line is operated by automated mounting robots. At such production sites, quality problems often arise, such as misalignment of the parts to be assembled with the vehicle body (doors, trunks, roofs, etc.) or collisions between assembly robots and components. To address these problems, part quality has been inspected manually with mechanical jig devices outside the automated production line. Machine vision is the most widely applied automotive inspection technology: it covers surface inspection such as mounting-hole spacing and defect detection, and body-panel dents and bends, and it is used for guiding, providing position information to the robot controller so the robot's path can be adjusted to improve process productivity and manufacturing flexibility. The most difficult measurement task is to calibrate the surface profile and the relative position and characteristics of parts from images of the measured part as it enters the field of view of a camera mounted beside or above the part. A problem for machine vision devices on automobile production lines is that the lighting conditions inside the factory change severely through the exterior windows of the assembly plant with the time of day and the weather (morning versus evening, rainy versus sunny days). In addition, since vehicle body parts are steel sheet, light reflection is very severe, so even a small lighting change greatly alters the quality of the captured image. In this study, the gap between the car body and the door is acquired by a measuring device combining a laser slit light source and an LED pattern light source. The result is transferred to an articulated robot, which adjusts its angle and step so that the parts are assembled at the optimal relative position.
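
The optical triangulation principle behind such a laser-slit measurement can be sketched for a single point: with the laser offset from the camera by a known baseline and tilted toward the optical axis, depth follows from the image column where the laser spot appears. All geometry values below are hypothetical, not the production rig's calibration.

```python
# A minimal sketch of single-point laser triangulation. The camera looks
# along +Z; the laser sits baseline_m to the side, tilted toward the optical
# axis by laser_angle_rad. Intersecting the camera ray x = z*tan(phi) with
# the laser line x = b - z*tan(theta) gives z = b / (tan(theta) + tan(phi)).
import math

def triangulate_depth(u_px, f_px=1400.0, cx_px=640.0,
                      baseline_m=0.10, laser_angle_rad=math.radians(20.0)):
    """Depth (m) of the laser spot imaged at column u_px."""
    tan_phi = (u_px - cx_px) / f_px        # camera ray angle from pixel position
    return baseline_m / (math.tan(laser_angle_rad) + tan_phi)
```

Sampling such depths on the door side and the body side of the slit line would then give the gap profile along the door edge.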

A study on vision system based on Generalized Hough Transform 2-D object recognition (Generalized Hough Transform을 이용한 이차원 물체인식 비젼 시스템 구현에 대한 연구)

  • Koo, Bon-Cheol; Park, Jin-Soo; Chien, Sung-Il
    • Journal of the Korean Institute of Telematics and Electronics B / v.33B no.1 / pp.67-78 / 1996
  • The purpose of this paper is object recognition, even in the presence of occlusion, using the generalized Hough transform (GHT). The GHT can be considered a model-based object recognition algorithm and is executed in two stages: the first stage stores the information of the model in the form of an R-table (reference table), and the next stage identifies the existence of objects in the image using that R-table. An improved GHT method is proposed for a practical vision system. First, in constructing the R-table, we extract a partial arc from a portion of the whole object boundary and use it to build the R-table; a clustering algorithm compensates for the error arising from digitizing the object image. Second, an efficient method is introduced that avoids Ballard's 4-D array, which would otherwise be needed to estimate the position, orientation, and scale change of an object: a 2-D array suffices for recognition. In particular, a scale-token method is introduced for calculating the scale change, which is easily affected by camera zoom. Our tests show that the improved hierarchical GHT method operates stably in realistic vision situations, even when objects are occluded.
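
The two GHT stages can be sketched directly: R-table construction from the model boundary, then accumulator voting for the object's reference point. This is a generic position-only sketch over a 2-D accumulator; the paper's partial-arc, clustering, and scale-token refinements are omitted, and the inputs (integer boundary points with gradient angles) are simplified assumptions.

```python
# A minimal sketch of the generalized Hough transform (GHT).
import numpy as np
from collections import defaultdict

def build_r_table(model_pts, model_angles, ref, n_bins=36):
    """R-table: gradient-angle bin -> offsets from boundary point to reference."""
    table = defaultdict(list)
    for (x, y), ang in zip(model_pts, model_angles):
        b = int(n_bins * (ang % (2 * np.pi)) / (2 * np.pi))
        table[b].append((ref[0] - x, ref[1] - y))
    return table

def ght_vote(image_pts, image_angles, table, shape, n_bins=36):
    """Accumulate votes; the peak marks the most likely reference point."""
    acc = np.zeros(shape, dtype=np.int32)
    for (x, y), ang in zip(image_pts, image_angles):
        b = int(n_bins * (ang % (2 * np.pi)) / (2 * np.pi))
        for dx, dy in table.get(b, ()):
            xr, yr = x + dx, y + dy
            if 0 <= xr < shape[1] and 0 <= yr < shape[0]:
                acc[yr, xr] += 1
    return acc
```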

A Study on Atmospheric Turbulence-Induced Errors in Vision Sensor based Structural Displacement Measurement (대기외란시 비전센서를 활용한 구조물 동적 변위 측정 성능에 관한 연구)

  • Junho Gong
    • Journal of the Korea Institute for Structural Maintenance and Inspection / v.28 no.3 / pp.1-9 / 2024
  • This study proposes a multi-scale template matching technique with image pyramids (TMI) for measuring structural dynamic displacement with a vision sensor under atmospheric turbulence, and evaluates its measurement performance. To evaluate displacement measurement as a function of distance, a three-story shear structure was designed and an FHD camera was used to measure the structural response. The measurement distance was initially 10 m and was increased in 10 m increments up to 40 m. Atmospheric disturbance was generated with a heating plate under indoor illuminance conditions, so the images were distorted by optical turbulence. Preliminary experiments compared the displacement measurement feasibility of a feature point-based method and the proposed method during atmospheric disturbance, and showed a lower measurement error rate for the proposed method. In the atmospheric disturbance environment, TMI with an artificial target showed no significant difference in displacement measurement performance with or without disturbance. With natural targets, however, the RMSE increased significantly at shooting distances of 20 m or more, revealing an operating limitation of the proposed technique: the resolution of the natural target decreases as the shooting distance increases, and image distortion due to atmospheric disturbance causes errors in template image estimation, resulting in high displacement measurement error.
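
Multi-scale template matching over an image pyramid can be sketched generically: match the template at each pyramid level with normalized cross-correlation, keep the best peak, and map its coordinates back to full resolution. This is a generic TMI-style sketch, not the authors' implementation; subpixel refinement and template updating are omitted.

```python
# A minimal sketch of multi-scale template matching with image pyramids.
import cv2

def match_template_pyramid(frame_gray, template_gray, levels=3):
    """Return (score, (x, y) at full resolution, scale) of the best match."""
    best = (-1.0, (0.0, 0.0), 1.0)
    frame, templ, scale = frame_gray, template_gray, 1.0
    for _ in range(levels):
        if templ.shape[0] < 8 or templ.shape[1] < 8:
            break  # template too small to match reliably at this level
        res = cv2.matchTemplate(frame, templ, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(res)
        if max_val > best[0]:
            # divide by scale to map the peak back to original coordinates
            best = (max_val, (max_loc[0] / scale, max_loc[1] / scale), scale)
        frame = cv2.pyrDown(frame)   # halve resolution for the next level
        templ = cv2.pyrDown(templ)
        scale *= 0.5
    return best
```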