• Title/Summary/Keyword: RGB sensor


Recent Progress in Membrane based Colorimetric Sensor for Metal Ion Detection (색 변화를 활용한 중금속 이온 검출에 특화된 멤브레인 기반 센서의 최근 연구 개발 동향)

  • Bhang, Saeyun;Patel, Rajkumar
    • Membrane Journal
    • /
    • v.31 no.2
    • /
    • pp.87-100
    • /
    • 2021
  • With a striking increase in the level of contamination and subsequent degradation of the environment, detection and monitoring of contaminants at various sites has become a crucial mission in current society. In this review, we summarize current research on membrane-based colorimetric sensors for trace detection of various molecules. The studies covered in this summary utilize membranes composed of cellulose fibers as sensing platforms and metal nanoparticles or fluorophores as optical reagents. Displaying decent or excellent sensitivity, most of the developed sensors achieve significant selectivity in the presence of interfering ions. The physical and chemical properties of cellulose membrane platforms can be customized by changing the synthesis method or the type of optical reagent used, making a wide range of applications possible. Membrane-based sensors are also portable and have good mechanical properties, which enable on-site detection of contaminants. With such qualities, the membrane-based sensors examined in these studies were used for versatile purposes, including quantification of heavy metals in drinking water and trace detection of toxic antibiotics and heavy metals in environmental water samples. Some of the sensors exhibited additional features such as antimicrobial ability and recyclability. Lastly, while most of the sensors aimed for naked-eye detection through rapid colour change, many of them investigated further detection methods such as fluorescence, UV-vis spectroscopy, and RGB colour intensity.
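RGB colour intensity readout, mentioned at the end of the abstract, is typically done by averaging one channel over the sensing membrane and inverting a calibration curve. A minimal sketch follows; the analyte, the intensity values, and the linear response are all hypothetical illustration numbers, not data from the reviewed papers.

```python
import numpy as np

def mean_channel(image, channel=0):
    """Average intensity of one colour channel over a membrane image."""
    return float(image[..., channel].mean())

# Hypothetical calibration: mean red intensity of membranes exposed to
# known standard concentrations, fitted with a straight line.
concs = np.array([0.0, 10.0, 20.0, 40.0])      # standard concentrations (assumed units)
reds = np.array([210.0, 180.0, 150.0, 90.0])   # mean red-channel intensities (assumed)
slope, intercept = np.polyfit(concs, reds, 1)  # linear calibration curve

# Unknown sample: a uniform membrane patch whose red channel reads 120.
patch = np.full((8, 8, 3), (120, 140, 160), dtype=np.uint8)
estimate = (mean_channel(patch, channel=0) - intercept) / slope
print(round(estimate, 1))  # 30.0
```

The same pattern applies whichever channel responds most strongly to the colour change; real assays would also average several replicate images.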

A New Demosaicking Algorithm for Honeycomb CFA CCD by Utilizing Color Filter Characteristics (Honeycomb CFA 구조를 갖는 CCD 이미지센서의 필터특성을 고려한 디모자이킹 알고리즘의 개발 및 검증)

  • Seo, Joo-Hyun;Jeong, Yong-Jin
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.3
    • /
    • pp.62-70
    • /
    • 2011
  • Nowadays the image sensor is an essential component in many multimedia devices, and it is covered by a color filter array that passes a specific color component at each pixel. A certain algorithm is needed to combine those color components and reconstruct a full color image from the incomplete color samples output by the image sensor; this is called a demosaicking process. Most existing demosaicking algorithms are developed for ideal image sensors, but they do not work well in practical cases because of the dissimilar characteristics of each sensor. In this paper, we propose a new demosaicking algorithm in which the color filter characteristics are fully utilized to generate a good image. To demonstrate the significance of our algorithm, we used a commercially available sensor, the CBN385B, a Honeycomb-style CFA (Color Filter Array) CCD image sensor. As performance metrics of the algorithm, the PSNR (Peak Signal to Noise Ratio) and the RGB distribution of the output image are used. We first implemented our algorithm in C for simulation on various input images. As a result, we obtained much enhanced images whose PSNR was improved by 4~8 dB compared to the commonly idealized approaches, and we also removed the reddish bias that is a unique characteristic of this image sensor (CBN385B). Then we implemented it in hardware to overcome its computational complexity, which made it slow in software. The hardware was verified on a Spartan-3E FPGA (Field Programmable Gate Array) to give almost the same performance as the software but with much faster execution time. The total logic gate count is 45K, and it handles 25 image frames per second.

Development of tracer concentration analysis method using drone-based spatio-temporal hyperspectral image and RGB image (드론기반 시공간 초분광영상 및 RGB영상을 활용한 추적자 농도분석 기법 개발)

  • Gwon, Yeonghwa;Kim, Dongsu;You, Hojun;Han, Eunjin;Kwon, Siyoon;Kim, Youngdo
    • Journal of Korea Water Resources Association
    • /
    • v.55 no.8
    • /
    • pp.623-634
    • /
    • 2022
  • Due to river maintenance projects such as the creation of hydrophilic areas around rivers and the Four Rivers Project, the flow characteristics of rivers are continuously changing, and the risk of water quality accidents due to the inflow of various pollutants is increasing. In the event of a water quality accident, it is necessary to minimize the effect on the downstream side by predicting the concentration and arrival time of pollutants in consideration of the flow characteristics of the river. In order to track the behavior of these pollutants, the diffusion coefficient and dispersion coefficient must be calculated for each section of the river; among them, the dispersion coefficient is used to analyze the diffusion range of soluble pollutants. Existing experimental studies for tracking the behavior of pollutants require much manpower and cost, and it is difficult to obtain spatially high-resolution data due to limited equipment operation. Recently, research on tracking contaminants using RGB drones has been conducted, but RGB images also have the limitation that only limited spectral information is collected. In this study, to overcome the limitations of existing studies, a hyperspectral sensor was mounted on a drone-based remote sensing platform to collect temporally and spatially higher-resolution data than conventional contact measurement. Using the collected spatio-temporal hyperspectral images, the tracer concentration was calculated and the transverse dispersion coefficient was derived. By overcoming the limitations of the drone platform through future research and upgrading the dispersion coefficient calculation technology, it is expected that various pollutants leaking into the water system, as well as changes in various water quality items and river factors, will become detectable.

Land cover classification using LiDAR intensity data and neural network

  • Minh, Nguyen Quang;Hien, La Phu
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.29 no.4
    • /
    • pp.429-438
    • /
    • 2011
  • LiDAR technology is a combination of laser ranging, satellite positioning, and digital imaging technologies for determining, with high accuracy, the true earth surface features in 3D. Laser scanning data is typically a point cloud on the ground, including the coordinates, altitude, and intensity of the laser return from objects on the ground to the sensor (Wehr & Lohr, 1999). Data from laser scanning can produce products such as the digital elevation model (DEM), the digital surface model (DSM), and intensity data. In Vietnam, LiDAR technology has been applied since 2005; however, its application is mostly topographic mapping and DEM establishment using the 3D coordinates of the point cloud. In this study, another application of LiDAR data is presented. The study uses the intensity image combined with several other data sets (elevation data, a panchromatic image, and an RGB image) of Bacgiang City to perform land cover classification using the neural network method. The results show that it is possible to obtain land cover classes from LiDAR data. However, the most accurate classification is obtained by combining LiDAR data with the other data sets, and neural network classification is a more appropriate approach than conventional methods such as maximum likelihood classification.

A Study on Detection of Lane and Situation of Obstacle for AGV using Vision System (비전 시스템을 이용한 AGV의 차선인식 및 장애물 위치 검출에 관한 연구)

  • 이진우;이영진;이권순
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2000.11a
    • /
    • pp.207-217
    • /
    • 2000
  • In this paper, we describe an image processing algorithm that is able to recognize the road lane. This algorithm recognizes the interrelation between the AGV and other vehicles. We experimented with an AGV driving test using a color CCD camera that is set up on the top of the vehicle and acquires the digital signal. This paper is composed of two parts. One is an image preprocessing part that measures the condition of the lane and the vehicle; it finds the line information using an RGB ratio cutting algorithm, edge detection, and the Hough transform. The other obtains the situation of other vehicles using image processing and a viewport. First, the 2-dimensional image information derived from the vision sensor is interpreted as 3-dimensional information using the angle and position of the CCD camera. Through these processes, once the vehicle knows the driving conditions, namely the angle, the distance error, and the real positions of other vehicles, the reference steering angle can be calculated.
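The abstract does not spell out the "RGB ratio cutting algorithm"; a plausible reading is thresholding on channel ratios to isolate lane-marking pixels before edge detection and the Hough transform. A hedged sketch under that assumption (the ratio threshold and the yellow-lane target are illustrative choices, not the paper's parameters):

```python
import numpy as np

def ratio_cut(rgb, min_ratio=1.2):
    """Mask pixels whose R and G channels each dominate B by `min_ratio`,
    a crude way to isolate yellow lane markings on gray asphalt."""
    rgb = rgb.astype(np.float64)
    r, g, b = np.moveaxis(rgb, -1, 0)
    b = np.maximum(b, 1.0)                    # avoid division by zero
    return (r / b > min_ratio) & (g / b > min_ratio)

# Yellow lane pixel passes the cut; gray road pixel does not.
img = np.array([[[200, 180, 60], [90, 90, 90]]], dtype=np.uint8)
print(ratio_cut(img)[0].tolist())  # [True, False]
```

The resulting binary mask would then feed an edge detector and a Hough line fit, as the preprocessing part above describes.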


Research for development of small format multi -spectral aerial photographing systems (PKNU 3) (소형 다중분광 항공촬영 시스템(PKNU 3호) 개발에 관한 연구)

  • 이은경;최철웅;서영찬;조남춘
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference
    • /
    • 2004.11a
    • /
    • pp.143-152
    • /
    • 2004
  • Researchers seeking geological and environmental information depend on remote sensing and aerial photographic data from various commercial satellites and aircraft. However, adverse weather conditions as well as equipment expense limit the ability to collect data anywhere and anytime. To allow for better flexibility in geological and environmental data collection, we developed a compact, multi-spectral automatic aerial photographic system (PKNU2). This system's multi-spectral camera can record visible (RGB) and near-infrared (NIR) band images (3032*2008 pixels). Visible and infrared band images were obtained from each camera respectively, and color-infrared composite images were produced for analysis for the purpose of environmental monitoring. However, this did not provide quality data. Furthermore, it had the disadvantage that the 60% stereoscopic overlap requirement could not be satisfied due to the 12-second storage time of each image, since the PKNU2 system produced photos of great file size. Thus, with such results, we have been developing the advanced PKNU2 system (PKNU3), which consists of a color-infrared spectral camera that can photograph the visible and near-infrared bands simultaneously using a single sensor, a thermal infrared camera, two 40 GB computers to store images, and an MPEG board that can compress and transfer data to the computer in real time, all mountable onto a helicopter platform.


Recognition of Tobacco Ripeness & Grading based on the Neural Network (신경회로망을 이용한 담배 숙도인식 및 등급판정)

  • LEE, S.S.;LEE, C.H.;LEE, D.W.;HWANG, H.
    • Journal of the Korean Society of Tobacco Science
    • /
    • v.17 no.1
    • /
    • pp.5-14
    • /
    • 1995
  • Efficient algorithms for the automatic classification of flue-cured tobacco ripeness and for grading have been developed. The ripeness of the tobacco was classified into 4 levels based on color. A lab-built simple RGB color measuring system was utilized to detect the light reflectance of the tobacco leaves. The measured data were used for training an artificial neural network, and the performance of the trained network was also tested on untrained samples. A spectrophotometer was used to measure the light reflectance and absorption of the graded tobacco leaves in the visible-light frequency range, and statistical analysis of the measured data was performed to investigate the light characteristics of the graded samples. The measured data were obtained directly from samples of 5 different grades without considering leaf position. Those data were used for training an artificial neural network, whose performance was also tested on untrained samples. The neural-network-based sensor information processing showed successful results for the grading of tobacco leaves.


Forest Fire Damage Assessment Using UAV Images: A Case Study on Goseong-Sokcho Forest Fire in 2019

  • Yeom, Junho;Han, Youkyung;Kim, Taeheon;Kim, Yongmin
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.37 no.5
    • /
    • pp.351-357
    • /
    • 2019
  • UAV (Unmanned Aerial Vehicle) images can be exploited for rapid forest fire damage assessment by virtue of UAV systems' advantages. In 2019, a catastrophic forest fire occurred in Goseong and Sokcho, Korea, burning 1,757 hectares of forest. We visited the town in Goseong that suffered the most severe damage and conducted UAV flights for forest fire damage assessment. In this study, an economical and rapid damage assessment method for forest fires is proposed using a UAV system equipped with only an RGB sensor. First, forest masking was performed using automatic elevation thresholding to extract the forest area. Then the ExG (Excess Green) vegetation index, which can be calculated without a near-infrared band, was adopted to extract damaged forests. In addition, entropy filtering was applied to ExG for better differentiation between damaged and non-damaged forest. We could confirm that the proposed forest masking screens out non-forest land covers such as bare soil, agricultural land, and artificial objects, and that entropy filtering enhances the ExG homogeneity difference between damaged and non-damaged forests. The damaged forests automatically detected by the proposed method showed a high accuracy of 87%.
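ExG is a standard RGB-only vegetation index, commonly defined as 2g − r − b on chromaticity-normalized channels. The paper's exact normalization is not stated in the abstract, so the sketch below uses that common definition:

```python
import numpy as np

def excess_green(rgb):
    """ExG (Excess Green) vegetation index from an RGB image.

    ExG = 2g - r - b, where r, g, b are chromatic coordinates
    (each channel divided by the per-pixel channel sum)."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0          # avoid division by zero on black pixels
    r, g, b = np.moveaxis(rgb / total, -1, 0)
    return 2 * g - r - b

# A green-dominant (healthy vegetation) pixel scores higher than bare soil.
veg = np.array([[[40, 120, 30]]], dtype=np.uint8)
soil = np.array([[[120, 100, 80]]], dtype=np.uint8)
print(excess_green(veg)[0, 0] > excess_green(soil)[0, 0])  # True
```

Burned canopy loses its green dominance, which is why thresholding ExG (here, after entropy filtering) can separate damaged from non-damaged forest without an NIR band.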

Physical Function Monitoring Systems for Community-Dwelling Elderly Living Alone: A Comprehensive Review

  • Jo, Sungbae;Song, Changho
    • Physical Therapy Rehabilitation Science
    • /
    • v.11 no.1
    • /
    • pp.49-57
    • /
    • 2022
  • Objective: This study aims to conduct a comprehensive review of monitoring systems that monitor and manage the physical function of community-dwelling elderly living alone, and to suggest future directions for unobtrusive monitoring. Design: Literature review. Methods: The importance of health-related monitoring has been emphasized due to the aging population and the novel coronavirus (COVID-19) outbreak. As the population ages, and because of changes in culture, the number of single-person households among the elderly is expected to continue to increase. Elders are staying home longer, and their physical function may decline rapidly, which can be a disturbing factor for successful aging. Therefore, systematic elderly management must be considered. Results: Frequently used technologies to monitor elders at home included the red, green, blue (RGB) camera, accelerometer, passive infrared (PIR) sensor, wearable devices, and depth camera. Of them all, considering privacy concerns and ease of use for elders, the depth camera can possibly be adopted at home to unobtrusively monitor the physical function of elderly living alone. The depth camera has been used to evaluate physical function during rehabilitation and has proven its efficiency. Conclusions: Therefore, an unobtrusive physical monitoring system should be studied and developed in the future to monitor the physical function of community-dwelling elderly living alone.

Object Detection and Localization on Map using Multiple Camera and Lidar Point Cloud

  • Pansipansi, Leonardo John;Jang, Minseok;Lee, Yonsik
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.422-424
    • /
    • 2021
  • This paper presents an approach that fuses multiple RGB cameras for visual object recognition, based on deep learning with a convolutional neural network, with 3D Light Detection and Ranging (LiDAR) to observe the environment and match it into a 3D world, estimating distance and position in the form of a point cloud map. The goal of perception with multiple cameras is to extract the crucial static and dynamic objects around the autonomous vehicle, especially in the blind spots, which assists the AV in navigating toward its goal. Running object detection on numerous cameras tends to slow real-time computation, so the convolutional neural network algorithm used to mitigate this problem must also suit the capacity of the hardware. The localization of the classified detected objects is derived from the 3D point cloud environment. First, the LiDAR point cloud data undergo parsing; the algorithm used is based on the 3D Euclidean clustering method, which gives accurate localization of the objects. We evaluated the method using our own dataset, collected with a VLP-16 and multiple cameras, and the results demonstrate the feasibility of the method and the multi-sensor fusion strategy.
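Euclidean clustering, as used above for localization, groups points whose pairwise distance falls under a tolerance. A minimal brute-force sketch (the tolerance and minimum cluster size are illustrative; production pipelines such as PCL's use a k-d tree for the neighbor search):

```python
import numpy as np

def euclidean_cluster(points, tolerance=0.5, min_size=2):
    """Group 3D points into clusters: points closer than `tolerance`
    (Euclidean distance) to any member join that member's cluster.
    Brute-force flood fill over the distance graph."""
    n = len(points)
    visited = np.zeros(n, dtype=bool)
    clusters = []
    for seed in range(n):
        if visited[seed]:
            continue
        stack, members = [seed], []
        visited[seed] = True
        while stack:
            i = stack.pop()
            members.append(i)
            dists = np.linalg.norm(points - points[i], axis=1)
            for j in np.where((dists < tolerance) & ~visited)[0]:
                visited[j] = True
                stack.append(j)
        if len(members) >= min_size:
            clusters.append(sorted(members))
    return clusters

# Two well-separated groups of points yield two clusters.
pts = np.array([[0, 0, 0], [0.1, 0, 0], [5, 5, 0], [5.2, 5, 0]])
print(len(euclidean_cluster(pts)))  # 2
```

Each cluster's centroid then gives the object's position on the point cloud map, to be matched against the camera detections.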
