Journal of the Institute of Electronics Engineers of Korea SC / v.49 no.3 / pp.53-61 / 2012
This paper proposes a real-time distance measurement system for high-temperature, high-speed targets using an infrared stereo camera. We construct an infrared stereo camera system that measures the difference between target and background temperatures for automatic target measurement. First, the proposed method detects the target region based on target motion and the intensity variation of local regions, using the difference between target and background temperatures. Second, stereo matching of the left and right target information is used to estimate the disparity from which the target's distance is computed in real time. To evaluate the proposed infrared stereo camera system, we compare the distances obtained from a three-dimensional trajectory measuring instrument with those measured by the infrared stereo camera. In experiments on three video sequences, the results show an average distance error rate of 9.68%. The proposed method is suitable for measuring the distance and position of various targets using an infrared stereo system.
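As a rough illustration of the stereo geometry behind the distance estimate, the sketch below applies the standard range equation Z = fB/d for a rectified stereo pair; the focal length, baseline, and disparity values are illustrative and are not taken from the paper.

# Minimal sketch of the standard stereo range equation; values are illustrative.
def stereo_distance(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Distance Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite distance")
    return focal_px * baseline_m / disparity_px

# Example: 1200 px focal length, 0.5 m baseline, 12 px disparity -> 50 m range.
print(stereo_distance(focal_px=1200.0, baseline_m=0.5, disparity_px=12.0))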
Journal of the Korean Association of Geographic Information Studies / v.7 no.4 / pp.143-154 / 2004
Researchers who seek geological and environmental information depend on remote sensing and aerial photographic data from various commercial satellites and aircraft. However, adverse weather conditions and expensive equipment can restrict where and when researchers can collect their data. To allow for better flexibility, we developed a compact, multi-spectral automatic aerial photographic system (PKNU 2). The system's multi-spectral camera captures visible (RGB) and near-infrared (NIR) band images (3032 × 2008 pixels). The visible and infrared band images were obtained from separate cameras and combined into color-infrared composite images for environmental monitoring, but the resulting data were not of high quality. Moreover, although the PKNU 2 system could capture large-capacity images, the 12 s storage time per image meant that the 60% stereoscopic overlap requirement could not be satisfied. Therefore, we have been developing an advanced version of PKNU 2 (PKNU 3), which consists of a color-infrared spectral camera that can photograph the visible and near-infrared bands with a single sensor, a thermal infrared camera, two 40 GB computers for image storage, and an MPEG board that compresses and transfers data to the computers in real time; the system can be attached to and detached from a helicopter. Verification and calibration of each sensor (REDLAKE MS 4000, Raytheon IRPro) were conducted before the aerial photographs were taken in order to obtain more valuable data, and corrections for the spectral characteristics and radial lens distortion of each sensor were carried out.
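For readers unfamiliar with color-infrared composites, the sketch below shows the conventional way such a composite is assembled from co-registered NIR and RGB bands (NIR to red, red to green, green to blue); the band names and array shapes are assumptions for illustration, not the PKNU 2 processing chain.

import numpy as np

# Conventional color-infrared (CIR) band assignment: NIR -> red, red -> green,
# green -> blue. Inputs are assumed to be co-registered 2-D arrays scaled to [0, 1].
def cir_composite(nir: np.ndarray, red: np.ndarray, green: np.ndarray) -> np.ndarray:
    return np.dstack([nir, red, green])

# Example with synthetic frames at the 3032 x 2008 sensor resolution cited above.
h, w = 2008, 3032
composite = cir_composite(np.random.rand(h, w), np.random.rand(h, w), np.random.rand(h, w))
print(composite.shape)   # (2008, 3032, 3)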
Journal of the Korea Academia-Industrial cooperation Society / v.17 no.9 / pp.292-301 / 2016
Because the damage arising from an outbreak of foot-and-mouth disease (FMD) is very great, a preemptive diagnosis is essential to minimize it. The main symptoms of foot-and-mouth disease in cattle are an increase in body temperature, loss of appetite, and the formation of blisters in the mouth and on the hooves and breasts, among which checking body temperature is the easiest and quickest way to detect the disease. In this paper, an algorithm to detect FMD from the hooves of cattle was developed and implemented for preemptive response to foot-and-mouth disease, and a hoof check test was conducted after installing a high-resolution camera module, a thermographic camera, and a temperature/humidity module in the cattle shed. The algorithm and system developed in this study make it possible to respond at an early stage when cattle are suspected of suffering from foot-and-mouth disease, creating an optimized growth environment for cattle. In particular, the proposed FMD response system does not use a portable thermographic camera, but a fixed camera attached to the cattle shed. It requires no additional personnel, measures the temperature of cattle hooves automatically through an image algorithm, and includes an automated smartphone alarm. This system enables the prediction of a possible occurrence of foot-and-mouth disease in real time and allows initial-stage disinfection to be performed without extra personnel.
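As a minimal sketch of the kind of automated temperature check described above, the snippet below averages the thermal values inside a hoof region of interest and flags the frame when a threshold is exceeded; the ROI coordinates, the 39.5 °C threshold, and the alarm hook are illustrative assumptions rather than the paper's actual parameters.

import numpy as np

FEVER_THRESHOLD_C = 39.5   # assumed alarm threshold, not taken from the paper

def hoof_temperature_alert(thermal_frame: np.ndarray, roi: tuple) -> bool:
    """thermal_frame: 2-D array of temperatures in Celsius; roi: (x, y, w, h)."""
    x, y, w, h = roi
    mean_temp = float(thermal_frame[y:y + h, x:x + w].mean())
    return mean_temp >= FEVER_THRESHOLD_C

# Example: a synthetic frame with an elevated-temperature patch triggers the alert.
frame = np.full((240, 320), 36.0)
frame[100:140, 150:200] = 40.2
if hoof_temperature_alert(frame, roi=(150, 100, 50, 40)):
    print("possible FMD symptom detected: send smartphone alarm")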
Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.4 / pp.267-277 / 2019
Photogrammetry and computer vision both determine the three-dimensional coordinates of points from images taken with a camera, but the two fields are not directly compatible with each other due to differences in camera lens distortion modeling and camera coordinate systems. In general, drone images are first processed by bundle block adjustment using computer vision-based software, and the plotting of the images for mapping is then performed by photogrammetry-based software. In this case, we face the problem of converting the camera lens distortion model into the formulation used in photogrammetry. Therefore, this study describes the differences between the coordinate systems and lens distortion models used in photogrammetry and computer vision and proposes a methodology for converting between them. To verify the conversion formulas for the camera lens distortion models, lens distortions were first added to distortion-free virtual coordinates using the computer vision-based lens distortion models. The distortion coefficients were then determined using the photogrammetry-based lens distortion models, the lens distortions were removed from the photo coordinates, and the results were compared with the original distortion-free virtual coordinates. The root mean square distance was within 0.5 pixels. In addition, epipolar images were generated by applying the lens distortion coefficients for photogrammetry to assess accuracy; the calculated root mean square error of the y-parallax was within 0.3 pixels.
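To make the kind of model being converted concrete, the sketch below applies the computer-vision (Brown-Conrady, OpenCV-style) radial and tangential distortion model to normalized image coordinates and reports the RMS displacement it introduces; the coefficient values are illustrative, and the paper's actual conversion formulas are not reproduced.

import numpy as np

def apply_cv_distortion(x, y, k1, k2, p1, p2):
    # radial (k1, k2) plus tangential (p1, p2) terms on normalized coordinates
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

# Distort an ideal grid, then measure how far the points moved; a photogrammetric
# model fitted to such points could be checked the same way via the RMS distance.
xs, ys = np.meshgrid(np.linspace(-0.3, 0.3, 5), np.linspace(-0.2, 0.2, 5))
xd, yd = apply_cv_distortion(xs, ys, k1=-0.12, k2=0.03, p1=1e-4, p2=-2e-4)
print(np.sqrt(np.mean((xd - xs) ** 2 + (yd - ys) ** 2)))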
Park, Byung-Seo;Kang, Ji-Won;Lee, Sol;Park, Jung-Tak;Choi, Jang-Hwan;Kim, Dong-Wook;Seo, Young-Ho
Journal of Broadcast Engineering / v.26 no.3 / pp.247-257 / 2021
This paper proposes a new technique for calibrating a multi-view RGB-D camera system using a 3D (three-dimensional) skeleton. Calibrating a multi-view camera system requires consistent feature points, and accurate feature points are needed to obtain a high-accuracy calibration result. We use the human skeleton as the source of feature points, since it can be obtained easily with state-of-the-art pose estimation algorithms. We propose an RGB-D-based calibration algorithm that uses the joint coordinates of the 3D skeleton obtained through the pose estimation algorithm as feature points. Since the human body information captured by each camera may be incomplete, the skeleton predicted from the acquired images may also be incomplete. After efficiently integrating a large number of incomplete skeletons into one skeleton, the multi-view cameras can be calibrated by using the integrated skeleton to obtain a camera transformation matrix. To increase the accuracy of the calibration, multiple skeletons are used for optimization through temporal iterations. We demonstrate through experiments that a multi-view camera system can be calibrated using a large number of incomplete skeletons.
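One common way to obtain a camera transformation matrix from matched 3D joints is the Kabsch (orthogonal Procrustes) solution sketched below; this illustrates only that standard step, and the paper's skeleton integration and temporal optimization are not reproduced here.

import numpy as np

def rigid_transform(joints_src: np.ndarray, joints_dst: np.ndarray):
    """Estimate R (3x3) and t (3,) such that joints_dst ~= joints_src @ R.T + t."""
    c_src, c_dst = joints_src.mean(axis=0), joints_dst.mean(axis=0)
    H = (joints_src - c_src).T @ (joints_dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Example: recover a known rotation and translation from 17 synthetic joints.
rng = np.random.default_rng(0)
src = rng.normal(size=(17, 3))
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.3, -0.1, 2.0])
dst = src @ R_true.T + t_true
R_est, t_est = rigid_transform(src, dst)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))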
Journal of the Korea Institute of Information Security & Cryptology / v.33 no.6 / pp.1099-1110 / 2023
Recognition systems for autonomous driving and robot navigation perform vision tasks such as object recognition, tracking, and lane detection after multi-sensor fusion in order to improve performance. Research on deep learning models based on the fusion of camera and LiDAR sensors is currently being conducted actively. However, deep learning models are vulnerable to adversarial attacks through modulation of the input data. Existing attacks on multi-sensor-based autonomous driving recognition systems focus on suppressing obstacle detection by lowering the confidence score of the object recognition model, but they are limited in that the attack works only on the targeted model. For attacks on the sensor fusion stage, errors can cascade into the vision tasks performed after fusion, and this risk needs to be considered. In addition, an attack on the LiDAR point cloud data, which is difficult to judge visually, makes it hard to determine whether an attack has occurred. In this study, we propose an image scaling-based attack method that reduces the accuracy of LCCNet, a camera-LiDAR fusion (calibration) model. The proposed method performs a scaling attack on the points of the input LiDAR data. Experiments on attack performance for different scaling sizes show that the attack induces fusion errors of more than 77% on average.
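As a hypothetical illustration only, the snippet below applies the simplest possible scaling perturbation, a uniform scale factor on the LiDAR point coordinates, so that the projected depth no longer registers with the camera image; the paper's image-scaling-based attack algorithm and LCCNet's actual input pipeline are more involved and are not reproduced here.

import numpy as np

def scale_attack(points_xyz: np.ndarray, scale: float = 0.9) -> np.ndarray:
    """points_xyz: (N, 3) LiDAR points. Returns a uniformly scaled copy."""
    return points_xyz * scale

# Example: a 10% shrink moves every point toward the sensor origin, misaligning
# the projected LiDAR depth map against the RGB image fed to the fusion model.
cloud = np.random.uniform(-50.0, 50.0, size=(1000, 3))
perturbed = scale_attack(cloud, scale=0.9)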
In this paper, we propose a deep learning network for quality inspection in a multi-camera inline inspection system for pharmaceutical containers. The proposed network is designed specifically for pharmaceutical containers using data produced in real manufacturing environments, leading to more accurate quality inspection, and its inline-capable design allows the inspection speed to be increased. The development of the deep learning network consists of three steps. First, a dataset of approximately 10,000 images is constructed from the production site using one line camera for foreign substance inspection and three area cameras for dimensional inspection. Second, the pharmaceutical container data are preprocessed by designating regions of interest (ROIs) in areas where defects are likely to occur, tailored to the foreign substance and dimensional inspections. Third, the preprocessed data are used to train the deep learning network. The network improves inference speed by reducing the number of channels and eliminating linear layers, while accuracy is enhanced by applying PReLU and residual learning. This results in four deep learning modules tailored to the datasets built from the four cameras. The performance of the proposed network was evaluated through experiments conducted by a certified testing agency. The results show that the deep learning modules achieved a classification accuracy of 99.4%, exceeding the world-class level of 95%, and an average classification speed of 0.947 seconds, which is better than the world-class level of 1 second. Therefore, the effectiveness of the proposed deep learning network for quality inspection in a multi-camera inline inspection system for pharmaceutical containers has been demonstrated.
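To make the stated design choices concrete (channel reduction, no linear layers, PReLU, residual learning), the sketch below shows a small residual block and classifier head in that style; the channel counts, input size, and layer arrangement are assumptions for illustration, not the paper's actual architecture.

import torch
import torch.nn as nn

class ResidualPReLUBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.act = nn.PReLU(channels)

    def forward(self, x):
        out = self.act(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.act(out + x)                     # residual connection

class InspectionNet(nn.Module):
    def __init__(self, num_classes: int = 2, channels: int = 16):
        super().__init__()
        self.stem = nn.Conv2d(1, channels, 3, stride=2, padding=1)
        self.blocks = nn.Sequential(ResidualPReLUBlock(channels),
                                    ResidualPReLUBlock(channels))
        self.head = nn.Conv2d(channels, num_classes, 1)   # 1x1 conv instead of a linear layer

    def forward(self, x):
        x = self.blocks(self.stem(x))
        return self.head(x).mean(dim=(2, 3))              # global average pooling

logits = InspectionNet()(torch.randn(1, 1, 128, 128))      # -> shape (1, 2)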
An animation movie is a non-photorealistic animated art form that consists of a formative language forming the frame based on a story and of cuts that connect the frames. Therefore, in expressing an image, artistic expression methods and devices for a formative space should be provided within a frame, while the cuts must carry the images between frames faithfully. Short animation movies are produced through various image experiments with unique image expressions, rather than narration, to express the subjective discourse of the writer; hence the image style that forms unique images and the variety of image directions are important factors. This study compared the experimental image directions of and , both of which used a production method based on film manipulation. First, while uses pixilation, producing images obtained from live footage through painting and many optical exposure processes on a cel mat, was made with diverse collage techniques such as tearing, cutting, pasting, and folding hundreds of scenes from action movies. Second, expresses the non-causal relationship of its characters through their repetitive behaviors and a circulatory image structure with a fixed camera angle, resisting typical scene transitions, whereas has an advancing structure that develops the antagonistic relationship of its characters through diverse camera angles and scene transitions of unique images. Third, in terms of editing, uses a long-take single-cut technique in which the whole image consists of one shot, although the appearance of various characters makes it seem like many scenes, whereas maximizes visual fun and immersion through image reconstruction with hundreds of varied short cuts. That is, both works share the character of experimental works that expand animated image expression through film manipulation, differing from general animation production. In addition, conveys the routine life of diverse human beings, without a clear narration, through images of conceptualized spaces, while expresses it in a new image space through image reconstruction with collage techniques and speedy progression, set in a binary opposition structure.
Purpose: We characterized the signals obtained from the components of a small gamma camera using a NaI(Tl) crystal and a position sensitive photomultiplier tube (PSPMT) and optimized the parameters employed in the modules of the system. Materials and Methods: The small gamma camera system consists of a NaI(Tl) crystal (60 × 60 × 6 mm³) coupled with a Hamamatsu R3941 PSPMT, a resistor chain circuit, preamplifiers, nuclear instrument modules (NIMs), an analog-to-digital converter, and a personal computer for control and display. The PSPMT was read out using a resistive charge division circuit which multiplexes the 34 crossed-wire anode channels into 4 signals (X+, X-, Y+, Y-). These signals were individually amplified by four preamplifiers and then shaped and amplified by amplifiers. The signals were discriminated, digitized via a triggering signal, and used to localize the position of an event by applying the Anger logic. Camera control and image display were performed by a program implemented using graphics software. Results: The characteristics of the signals and the parameters employed in each module of the system are presented. The intrinsic sensitivity of the system was approximately 8 × 10³ counts/sec/μCi. The intrinsic energy resolution of the system was 18% FWHM at 140 keV. The spatial resolutions obtained using a line-slit mask and a 99mTc point source were 2.2 and 2.3 mm FWHM in the X and Y directions, respectively. A breast phantom containing 2-7 mm diameter spheres was successfully imaged with a parallel-hole collimator. The image displayed accurate size and activity distribution over the imaging field of view. Conclusion: We propose a simple method for the development of a small gamma camera and present the characteristics of the signals from the system and the optimized parameters used in its modules.
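For reference, the Anger logic mentioned above reduces to a charge-weighted ratio of the four multiplexed anode signals, as sketched below; the signal values are illustrative, not measurements from the system.

def anger_position(x_plus: float, x_minus: float, y_plus: float, y_minus: float):
    """Return the normalized (x, y) event position in [-1, 1] from the 4 anode signals."""
    x = (x_plus - x_minus) / (x_plus + x_minus)
    y = (y_plus - y_minus) / (y_plus + y_minus)
    return x, y

# Example: a stronger X+ than X- places the event toward the +X side of the crystal.
print(anger_position(x_plus=620.0, x_minus=380.0, y_plus=505.0, y_minus=495.0))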
Purpose: The conventional gamma camera is not ideal for scintimammography because of its large detector size (~500 mm in width), which causes high cost and low image quality. We are developing a small gamma camera dedicated to breast imaging. Materials and Methods: The small gamma camera system consists of a NaI(Tl) crystal (60 mm × 60 mm × 6 mm) coupled with a Hamamatsu R3941 position sensitive photomultiplier tube (PSPMT), a resistor chain circuit, preamplifiers, nuclear instrument modules, an analog-to-digital converter, and a personal computer for control and display. The PSPMT was read out using a standard resistive charge division circuit which multiplexes the 34 crossed-wire anode channels into 4 signals (X+, X-, Y+, Y-). These signals were individually amplified by four preamplifiers and then shaped and amplified by amplifiers. The signals were discriminated, digitized via a triggering signal, and used to localize the position of an event by applying the Anger logic. Results: The intrinsic sensitivity of the system was approximately 8,000 counts/sec/μCi. High-quality flood and hole-mask images were obtained. A breast phantom containing 2-7 mm diameter spheres was successfully imaged with a parallel-hole collimator. The image displayed accurate size and activity distribution over the imaging field of view. Conclusion: We have successfully developed a small gamma camera using a NaI(Tl) crystal, a PSPMT, and nuclear instrument modules. The small gamma camera developed in this study may improve the diagnostic accuracy of scintimammography by imaging the breast optimally.