• Title/Summary/Keyword: Low Vision

Search Results: 701

A Study on the Prototype Setting for Energy Independent Site Planning (에너지 자립형 단지계획 프로토타입 설정에 관한 연구)

  • Ha, Seung-Beom
    • The Journal of the Convergence on Culture Technology / v.7 no.2 / pp.359-366 / 2021
  • More than 30 years have passed since global warming caused by rising CO2 levels became a worldwide concern. The Korean government recently promulgated the Framework Act on Low-Carbon Green Growth and has continuously worked to save energy and reduce carbon dioxide emissions, for example through international climate change conferences aimed at curbing the increase in CO2. However, because most existing cities were not planned with energy saving in mind, new cities should be planned as actively energy-efficient urban structures for 'sustainable urban development' from a long-term perspective. This study aims to design a new prototype for sustainable, energy-independent, and environment-friendly housing estates, the nation's new vision in the era of the Fourth Industrial Revolution. It addresses energy-independent site planning and the quantitative standardization of its factors.

Extraction of Skin Regions through Filtering-based Noise Removal (필터링 기반의 잡음 제거를 통한 피부 영역의 추출)

  • Jang, Seok-Woo
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.12 / pp.672-678 / 2020
  • Ultra-high-speed images that accurately capture the minute movements of objects have become common as low-cost, high-performance cameras capable of high-speed filming have emerged. The proposed method first removes unexpected noise contained in the high-speed input images and then extracts regions of interest that can represent personal information, such as skin regions, from the denoised images. Noise generated by abnormal electrical signals is removed by applying a bilateral filter, and a color model created through pre-learning is then used to extract the regions of interest representing the personal information contained in the image. Experimental results show that the introduced algorithm robustly removes noise from high-speed images and extracts the regions of interest. The approach presented in this paper is expected to be useful in various computer vision applications, such as image preprocessing, noise elimination, and tracking and monitoring of target regions.
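The abstract attributes the denoising step to a bilateral filter. As an illustration only (not the authors' implementation — the function name, window radius, and sigma values below are assumptions), a brute-force grayscale bilateral filter can be sketched in NumPy:

```python
import numpy as np

def bilateral_filter(image, radius=2, sigma_space=2.0, sigma_range=25.0):
    """Denoise a 2-D grayscale image with a brute-force bilateral filter.

    Each output pixel is a weighted average of its neighborhood, where the
    weight combines spatial closeness and intensity similarity, so edges
    (large intensity jumps) are preserved while flat-region noise is smoothed.
    """
    img = image.astype(np.float64)
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    # Precompute the spatial Gaussian kernel over the window once.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_space**2))
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weight: penalize neighbors with dissimilar intensity.
            rng_w = np.exp(-((window - img[i, j]) ** 2) / (2.0 * sigma_range**2))
            weights = spatial * rng_w
            out[i, j] = np.sum(weights * window) / np.sum(weights)
    return out
```

Production code would use an optimized implementation (e.g., OpenCV's `cv2.bilateralFilter`); the loop above only makes the weighting explicit.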

A System for Determining the Growth Stage of Fruit Tree Using a Deep Learning-Based Object Detection Model (딥러닝 기반의 객체 탐지 모델을 활용한 과수 생육 단계 판별 시스템)

  • Bang, Ji-Hyeon;Park, Jun;Park, Sung-Wook;Kim, Jun-Yung;Jung, Se-Hoon;Sim, Chun-Bo
    • Smart Media Journal / v.11 no.4 / pp.9-18 / 2022
  • Recently, research on and systems using AI have been increasing rapidly in various fields. In agriculture, smart farms employing artificial intelligence and information and communication technology are also being studied, and data-based precision agriculture is being commercialized by converging various advanced technologies such as autonomous driving, satellites, and big data. In Korea, commercialization of facility agriculture within smart agriculture is increasing, but research and investment remain biased toward facility agriculture, and the gap between facility agriculture and open-field agriculture continues to widen. The fields of fruit trees and plant factories receive little research and investment, and systems for collecting and utilizing big data are insufficient. In this paper, we propose a system for determining the growth stage of fruit trees using a deep learning-based object detection model. The system is implemented as a hybrid app for use at agricultural sites and includes an object detection function for determining the fruit tree growth stage.

Assessment and Comparison of Three Dimensional Exoscopes for Near-Infrared Fluorescence-Guided Surgery Using Second-Window Indocyanine-Green

  • Cho, Steve S.;Teng, Clare W.;Ravin, Emma De;Singh, Yash B.;Lee, John Y.K.
    • Journal of Korean Neurosurgical Society / v.65 no.4 / pp.572-581 / 2022
  • Objective : Compared to microscopes, exoscopes offer advantages in depth of field, ergonomics, and educational value. Exoscopes are especially well poised for adaptation to fluorescence-guided surgery (FGS) because of their excitation source, light path, and image processing capabilities. We evaluated the feasibility of near-infrared FGS using a three-dimensional (3D), 4K exoscope with near-infrared fluorescence imaging capability and compared it to the most sensitive commercially available near-infrared exoscope system (3D, 960p). In-vitro and intraoperative comparisons were performed. Methods : Serial dilutions of indocyanine green (1-2000 ㎍/mL) were imaged with the 3D 4K Olympus Orbeye (system 1) and the 3D 960p VisionSense Iridium (system 2). Near-infrared sensitivity was calculated using signal-to-background ratios (SBRs). In addition, three patients with brain tumors were administered indocyanine green and imaged with system 1; two were also imaged with system 2 for comparison. Results : Systems 1 and 2 detected near-infrared fluorescence at indocyanine green concentrations of >250 ㎍/L and >31.3 ㎍/L, respectively. Intraoperatively, system 1 visualized strong near-infrared fluorescence from two strongly gadolinium-enhancing meningiomas (SBR = 2.4 and 1.7). The high-resolution, bright images were sufficient for the surgeon to appreciate the underlying anatomy in near-infrared mode. However, system 1 was not able to visualize fluorescence from a weakly enhancing intraparenchymal metastasis. In contrast, system 2 successfully visualized both the meningioma and the metastasis but lacked high-resolution stereopsis. Conclusion : Three-dimensional exoscope systems provide an alternative visualization platform for both standard microsurgery and near-infrared fluorescence-guided surgery. However, when tumor fluorescence is weak (i.e., low fluorophore uptake, deep tumors), highly sensitive near-infrared visualization systems may be required.
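The SBR metric used above is the mean fluorescence intensity of the tumor region divided by the mean intensity of a background region. A minimal sketch (how the regions of interest are delineated is an assumption; the abstract does not describe it):

```python
import numpy as np

def signal_to_background_ratio(image, signal_mask, background_mask):
    """SBR: mean intensity inside the signal ROI divided by the mean
    intensity inside the background ROI (both given as boolean masks)."""
    signal = image[signal_mask].mean()
    background = image[background_mask].mean()
    return signal / background
```

An SBR of 2.4, as reported for the first meningioma, means the tumor ROI fluoresced 2.4 times brighter than the surrounding background on average.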

Divide and Conquer Strategy for CNN Model in Facial Emotion Recognition based on Thermal Images (얼굴 열화상 기반 감정인식을 위한 CNN 학습전략)

  • Lee, Donghwan;Yoo, Jang-Hee
    • Journal of Software Assessment and Valuation / v.17 no.2 / pp.1-10 / 2021
  • The ability of computer vision systems to recognize human emotions is a very important task with many potential applications, so demand for emotion recognition using not only RGB images but also thermal images is increasing. Compared to RGB images, thermal images have the advantage of being less affected by lighting conditions, but their low resolution requires a more sophisticated recognition method. In this paper, we propose a divide-and-conquer-based CNN training strategy to improve the performance of emotion recognition from facial thermal images. The proposed method first uses confusion matrix analysis to merge similar, hard-to-distinguish emotion classes into the same class group and trains a classifier over these groups; it then trains further classifiers that resolve each group back into the actual emotions. In experiments, the proposed method achieved higher accuracy in all tests than recognizing all the presented emotions with a single CNN model.
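The grouping step can be illustrated with a small sketch: classes that a base classifier frequently confuses (per the confusion matrix) are merged into one group. The pairwise-confusion criterion and threshold below are assumptions for illustration, not the authors' exact rule:

```python
import numpy as np

def group_confusable_classes(confusion, threshold=0.15):
    """Group classes that a base classifier frequently confuses.

    `confusion` is a row-normalized confusion matrix; classes i and j land
    in the same group when their symmetric confusion exceeds `threshold`.
    Transitive merging is handled with a simple union-find.
    """
    n = confusion.shape[0]
    parent = list(range(n))

    def find(x):
        # Path-halving find.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if confusion[i, j] + confusion[j, i] > threshold:
                parent[find(i)] = find(j)

    groups = {}
    for c in range(n):
        groups.setdefault(find(c), []).append(c)
    return sorted(groups.values())
```

A second-stage classifier would then be trained for each returned group to separate its member emotions.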

Multi-level Cross-attention Siamese Network For Visual Object Tracking

  • Zhang, Jianwei;Wang, Jingchao;Zhang, Huanlong;Miao, Mengen;Cai, Zengyu;Chen, Fuguo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.12 / pp.3976-3990 / 2022
  • Cross-attention is now widely used in Siamese trackers to replace traditional correlation operations for feature fusion between the template and the search region, since it establishes the relationship between the target and the search region better than correlation and thus yields more robust visual object tracking. However, existing cross-attention trackers focus only on the rich semantic information of high-level features while ignoring the appearance information contained in low-level features, which makes them vulnerable to interference from similar objects. In this paper, we propose a Multi-level Cross-attention Siamese network (MCSiam) that aggregates semantic and appearance information at the same time. Specifically, a multi-level cross-attention module fuses the multi-layer features extracted from the backbone, integrating template and search-region features at different levels so that rich appearance and semantic information can be used for tracking simultaneously. In addition, before cross-attention, a target-aware module enhances the target feature and alleviates interference, making the multi-level cross-attention module more efficient at fusing target and search-region information. We test MCSiam on four tracking benchmarks, and the results show that the proposed tracker achieves performance comparable to state-of-the-art trackers.
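The core cross-attention operation can be sketched in a toy single-head form: search-region tokens act as queries against template tokens as keys and values. This is a bare illustration of the mechanism only — a real tracker such as MCSiam adds learned projection matrices, multiple heads, and multi-level fusion:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(search_feats, template_feats):
    """Single-head cross-attention fusing template features into search features.

    search_feats:   (N, d) tokens from the search region (queries)
    template_feats: (M, d) tokens from the template (keys and values)
    Returns (N, d): each search token becomes a similarity-weighted
    combination of template tokens.
    """
    d = search_feats.shape[1]
    scores = search_feats @ template_feats.T / np.sqrt(d)  # (N, M)
    weights = softmax(scores, axis=1)                      # rows sum to 1
    return weights @ template_feats
```

Unlike plain correlation, which collapses the comparison to a single response map, the attention weights let every search location aggregate evidence from all template locations.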

SHOMY: Detection of Small Hazardous Objects using the You Only Look Once Algorithm

  • Kim, Eunchan;Lee, Jinyoung;Jo, Hyunjik;Na, Kwangtek;Moon, Eunsook;Gweon, Gahgene;Yoo, Byungjoon;Kyung, Yeunwoong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.8 / pp.2688-2703 / 2022
  • Research on advanced detection of hazardous objects in airport cargo, for passenger safety against terrorism, has increased recently. However, because associated studies primarily focus on detecting relatively large objects, research on small objects is lacking, and detection performance for small objects has remained considerably low. Here, we verified the limitations of existing object detection research and developed a new model, the Small Hazardous Object detection enhanced and reconstructed Model (SHOMY), based on the You Only Look Once version 5 (YOLOv5) algorithm, to overcome these limitations. We also examined the performance of the proposed model through different experiments based on YOLOv5, a recently launched object detection model. The detection performance of our model improved on the YOLOv5 baseline by 0.3 in terms of the mean average precision (mAP) index and by 1.1 in terms of mAP (.5:.95). The proposed model is especially useful for detecting small objects of different types in overlapping environments where objects of different sizes are densely packed. The contributions of this study are the reconstructed layers of the SHOMY model based on YOLOv5 and the fact that no data preprocessing is required, allowing immediate industrial application without performance degradation.
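Both metrics quoted above, mAP (.5) and mAP (.5:.95), rest on intersection-over-union (IoU) between predicted and ground-truth boxes: a detection counts as correct when its IoU exceeds the threshold (mAP (.5:.95) averages over thresholds from 0.5 to 0.95). A minimal IoU sketch for axis-aligned boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle (clamped to zero when the boxes are disjoint).
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

Small objects are especially sensitive to this metric: a localization error of a few pixels barely moves the IoU of a large box but can push a tiny box below the 0.5 threshold.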

Korean Text Image Super-Resolution for Improving Text Recognition Accuracy (텍스트 인식률 개선을 위한 한글 텍스트 이미지 초해상화)

  • Junhyeong Kwon;Nam Ik Cho
    • Journal of Broadcast Engineering / v.28 no.2 / pp.178-184 / 2023
  • Finding text in general scene images and recognizing its contents is a very important task that can serve as a basis for robot vision, visual assistance, and so on. However, in low-resolution text images, degradations such as noise and blur are more noticeable, leading to severe drops in text recognition accuracy. In this paper, we propose a new Korean text image super-resolution method based on a Transformer model, which generally shows higher performance than convolutional neural networks. Experiments show that text recognition accuracy for Korean text images improves when the proposed super-resolution method is applied. We also present a new Korean text image dataset for training our model, which contains a massive number of HR-LR Korean text image pairs.
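HR-LR training pairs for super-resolution are commonly constructed by synthetically degrading the high-resolution images; the abstract does not specify this paper's pipeline, so the downsampling-plus-noise recipe below is purely an assumed illustration:

```python
import numpy as np

def make_lr(hr, scale=2, noise_sigma=2.0, seed=0):
    """Create a synthetic low-resolution counterpart of an HR grayscale image
    by average-pooling (x`scale` downsampling) and adding Gaussian noise."""
    h, w = hr.shape
    h, w = h - h % scale, w - w % scale  # crop to a multiple of the scale
    pooled = hr[:h, :w].reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    rng = np.random.default_rng(seed)
    return pooled + rng.normal(0.0, noise_sigma, pooled.shape)
```

The super-resolution network is then trained to invert this mapping, taking the LR image as input and the original HR image as the target.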

A Comprehensive Survey of Lightweight Neural Networks for Face Recognition (얼굴 인식을 위한 경량 인공 신경망 연구 조사)

  • Yongli Zhang;Jaekyung Yang
    • Journal of Korean Society of Industrial and Systems Engineering / v.46 no.1 / pp.55-67 / 2023
  • Lightweight face recognition models, one of the most popular and long-standing topics in computer vision, have developed vigorously and are widely used in many real-world applications thanks to their smaller number of parameters, fewer floating-point operations, and smaller model size. However, few surveys have reviewed lightweight models and reimplemented them using the same computing resources and training dataset. In this survey article, we present a comprehensive review of recent research advances in end-to-end efficient lightweight face recognition models and reimplement several of the most popular ones. We first give an overview of face recognition with lightweight models. Then, based on how the models are constructed, we categorize lightweight models into: (1) manually designed lightweight FR models, (2) pruned models for face recognition, (3) efficient automatic architecture design based on neural architecture search, (4) knowledge distillation, and (5) low-rank decomposition. As examples, we also introduce SqueezeFaceNet and EfficientFaceNet, obtained by pruning SqueezeNet and EfficientNet. Additionally, we reimplement different lightweight models and present a detailed performance comparison on nine test benchmarks. Finally, challenges and future work are discussed. Our survey makes three main contributions: first, the categorization makes lightweight models easy to identify, so new lightweight models for face recognition can be explored; second, the comprehensive performance comparisons help one choose a model when deploying a state-of-the-art end-to-end face recognition system on mobile devices; third, the stated challenges and future trends inspire our future work.
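Category (2), pruning, can be illustrated with the simplest variant: unstructured magnitude pruning, which zeroes the smallest-magnitude weights. This is a generic sketch of the technique, not the specific procedure used to derive SqueezeFaceNet or EfficientFaceNet:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude entries of a weight tensor.

    `sparsity` is the fraction of entries set to zero. Returns the pruned
    tensor and the boolean keep-mask (used to freeze pruned weights during
    any subsequent fine-tuning).
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    # k-th smallest magnitude becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask
```

In practice, pruning alternates with fine-tuning to recover accuracy, and structured variants (removing whole filters or channels) are preferred on mobile hardware because they shrink the dense computation rather than just the parameter count.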

Spatio-Temporal Variations of Paddy and Water Salinity of Gunnae Reclaimed Tidelands in Western Coastal Area of Korea (서해안 군내간척지 담수호 및 농경지 염류의 시공간적 분포 특성 분석)

  • Beom, Jina;Jeung, Minhyuk;Park, Hyun-Jin;Choi, Woo-Jung;Kim, YeongJoo;Yoon, Kwang Sik
    • Journal of The Korean Society of Agricultural Engineers / v.65 no.1 / pp.73-81 / 2023
  • To understand the salinity status of fresh water and paddy soils and the susceptibility of rice to salinity stress in the Gunnae reclaimed tidelands, salinity monitoring was conducted in the rainy and dry seasons. For fresh water, high salinity was observed at the sampling location near the sluice gate and decreased with distance from the gate. This spatial pattern indicates that the spatial distribution of salinity should be considered when assessing the salinity status of fresh water. Interestingly, there was a significant correlation between rainfall amount and salinity, implying that the salinity of fresh water varies with rainfall and thus might be predicted from rainfall. Soil salinity was also higher near the gate, reflecting the influence of highly saline water, and groundwater salinity was also high enough to threaten rice growth. Although the soil salinity status indicated a low possibility of sodium injury, soil salinity changed during the course of rice growth, suggesting that more intensive monitoring may be necessary for soil salinity assessment. Our study suggests the necessity of intensive salinity monitoring to understand the spatio-temporal variations in the salinity of water and soil in reclaimed tideland areas.
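The reported rainfall-salinity relationship is typically quantified with a correlation coefficient; a minimal Pearson-correlation sketch (the abstract does not state which statistic was used, and the data below are hypothetical):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()  # center both samples
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))
```

A strongly negative coefficient between rainfall and measured salinity would support the abstract's suggestion that rainfall dilutes the fresh water and could serve as a predictor of its salinity.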