• Title/Summary/Keyword: camera image

Development of Smart Switchgear for Versatile Ventilation Garments: Optimum Diameter and Voltage Application Unit Time of One-way Shape Memory Alloy Wire for a Bi-directional Actuator (가변 통기성 의복을 위한 스마트 개폐장치 개발: 양방향 작동 액추에이터 제작을 위한 일방향 형상기억합금 와이어의 최적 직경 및 전압인가 단위시간의 도출)

  • Kim, Sanggu;Kim, Minsung;Yoo, Shinjung
    • Science of Emotion and Sensibility / v.21 no.2 / pp.137-144 / 2018
  • This study determined the operating conditions of a two-way actuator made of one-way shape memory alloy (OWSMA) wire for garments with variable ventilation. To develop a low-power actuator that consumes energy only when the garment changes form, such as opening and closing vents, multiple channels of OWSMA wire were used, and the optimum wire diameter was examined. For the switching device, the optimum unit time of voltage application was determined. The optimum OWSMA wire diameter was found by applying 3.7 V to candidate diameters that had demonstrated two-way operation in previous studies. To evaluate the optimum voltage application time, the internal diameter of the actuator was measured while the unit time of voltage application was increased and decreased in 50 ms steps. The delay time between the two directions of actuator operation was measured to minimize thermal interference between channels. A 3.7 V supply was applied to the OWSMA wire, and the whole process from heating to cooling was recorded with a thermal imaging camera to determine the point at which the wire temperature dropped below the phase transformation temperature. The results showed that Φ0.4 was the most suitable wire diameter and that the optimum unit time of applied voltage for opening and closing the actuator was 4,100 ms. The delay time between the two directions of operation should be longer than 1.8 seconds.
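The reported protocol (3.7 V applied for a 4,100 ms unit time per actuation, with at least 1.8 s of delay between the two channels) maps directly onto a timed control loop. Below is a minimal sketch of such a two-channel driver, assuming a hypothetical `set_power` switching callback; it illustrates the timing scheme only, not the study's hardware.

```python
import time

# Operating parameters reported in the abstract (one-way SMA wire, Phi 0.4).
VOLTAGE_ON_TIME_S = 4.1   # 4,100 ms of 3.7 V per open/close actuation
CHANNEL_DELAY_S = 1.8     # minimum cooling delay between opposing channels

def drive_channel(set_power, channel: str) -> None:
    """Energize one SMA channel for the fixed unit time, then cut power.

    `set_power` is a hypothetical callback that switches 3.7 V to the named
    channel on (True) or off (False), e.g. via a GPIO-driven MOSFET.
    """
    set_power(channel, True)
    time.sleep(VOLTAGE_ON_TIME_S)
    set_power(channel, False)

def toggle_vent(set_power) -> None:
    """Open, wait for the wire to cool below the phase transformation
    temperature, then close: the two-way cycle described in the study."""
    drive_channel(set_power, "open")
    time.sleep(CHANNEL_DELAY_S)   # avoid thermal interference between channels
    drive_channel(set_power, "close")

if __name__ == "__main__":
    # Stub driver for demonstration; replace with real switching hardware.
    toggle_vent(lambda ch, on: print(f"channel {ch} -> {'ON' if on else 'OFF'}"))
```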

Image-based Proximity Warning System for Excavator of Construction Sites (건설현장에 적합한 영상 기반 굴삭기 접근 감지 시스템)

  • Jo, Byung-Wan;Lee, Yun-Sung;Kim, Do-Keun;Kim, Jung-Hoon;Choi, Pyung-Ho
    • The Journal of the Korea Contents Association / v.16 no.10 / pp.588-597 / 2016
  • According to the annual industrial accident report of the Ministry of Employment and Labor, the number of accidents in the construction industry rises every year, accounting for 27.56% of all industrial accidents as of 2014, an increase of almost 3 percentage points over the previous four years. Among these accidents, heavy machinery causes most of the fatalities, through collisions or workers being caught in or between equipment; as reported by the government, in most cases the machinery operators and the workers were unaware of each other's positions. Since society now demands highly complex structures in minimal time, it is inevitable that multiple pieces of heavy construction equipment operate simultaneously on a construction site. In this paper, we develop an approach detection system for excavators in order to reduce these accidents. The image-based approach detection system consists of a camera, an approach detection sensor, and an Around View Monitor (AVM). Because the system does not require additional communication infrastructure such as a server, it is also applicable to small-scale construction sites and to machinery other than excavators.
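For illustration only, a proximity check of this kind can be prototyped from a single camera feed. The sketch below uses OpenCV's stock HOG pedestrian detector and a hypothetical box-height threshold as a stand-in for the paper's sensor and AVM integration; it is not the authors' implementation.

```python
import cv2

# Hypothetical threshold: a detection box taller than this fraction of the
# frame height is treated as "too close" to the excavator.
NEAR_HEIGHT_RATIO = 0.45

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def frame_has_nearby_worker(frame) -> bool:
    """Return True if any detected person appears within the warning zone."""
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    frame_h = frame.shape[0]
    return any(h / frame_h > NEAR_HEIGHT_RATIO for (_, _, _, h) in boxes)

cap = cv2.VideoCapture(0)  # assumed rear/side camera feeding the AVM display
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    if frame_has_nearby_worker(frame):
        print("WARNING: worker near excavator")  # e.g. buzzer or AVM overlay
cap.release()
```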

Usability of a smartphone food picture app for assisting 24-hour dietary recall: a pilot study

  • Hongu, Nobuko;Pope, Benjamin T.;Bilgic, Pelin;Orr, Barron J.;Suzuki, Asuka;Kim, Angela Sarah;Merchant, Nirav C.;Roe, Denise J.
    • Nutrition Research and Practice / v.9 no.2 / pp.207-212 / 2015
  • BACKGROUND/OBJECTIVES: The Recaller app was developed to help individuals record their food intake. This pilot study evaluated the usability of this new food picture application (app), which operates on a smartphone with an embedded camera and Internet capability. SUBJECTS/METHODS: Adults aged 19 to 28 years (23 males and 22 females) were assigned to use the Recaller app on six designated, nonconsecutive days in order to capture an image of each meal and snack before and after eating. The images were automatically time-stamped and uploaded by the app to the Recaller website. A trained nutritionist administered a 24-hour dietary recall interview one day after the food images were taken. Participants' opinions of the Recaller app and its usability were determined by a follow-up survey. As an indicator of usability, the number of images taken was analyzed, and multivariate Poisson regression was used to model the factors determining the number of images sent. RESULTS: A total of 3,315 food images were uploaded throughout the study period. The median number of images taken per day was nine for males and 13 for females. The survey showed that the Recaller app was easy to use, and 50% of the participants would consider using it daily. Predictors of a higher number of images were a greater interval (in hours) between the first and last food images sent, weekend days, and female sex. CONCLUSIONS: The results of this pilot study provide valuable information for understanding the usability of the Recaller smartphone food picture app as well as of similarly designed apps. The study provides a model for assisting nutrition educators in collecting food intake information with tools available on smartphones. This innovative approach has the potential to improve recall of foods eaten and the monitoring of dietary intake in nutritional studies.
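The regression step is standard; a minimal sketch with `statsmodels` follows, using hypothetical column names and made-up records for the predictors named in the abstract (interval hours, weekend, sex).

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical per-participant-day records: image count, hours between the
# first and last image, weekend indicator, and sex (female = 1).
df = pd.DataFrame({
    "n_images":       [9, 13, 7, 15, 11, 8],
    "interval_hours": [8.0, 11.5, 6.0, 12.0, 10.0, 7.5],
    "weekend":        [0, 1, 0, 1, 0, 0],
    "female":         [0, 1, 0, 1, 1, 0],
})

# Poisson GLM: log E[n_images] = b0 + b1*interval + b2*weekend + b3*female
model = smf.glm("n_images ~ interval_hours + weekend + female",
                data=df, family=sm.families.Poisson()).fit()
print(model.summary())  # positive coefficients correspond to the predictors
```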

Development of a CNN-based Cross Point Detection Algorithm for an Air Duct Cleaning Robot (CNN 기반 공조 덕트 청소 로봇의 교차점 검출 알고리듬 개발)

  • Yi, Sarang;Noh, Eunsol;Hong, Seokmoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.8 / pp.1-8 / 2020
  • Air ducts installed for ventilation inside buildings accumulate contaminants over their service life. Robots are used to clean air ducts at low cost, but they are still not fully automated and depend on manual operation. In this study, a cross-point detection algorithm for autonomous driving was applied to an air duct cleaning robot. Autonomous driving was achieved by computing, from the image of the camera mounted on the robot, the distance and angle between the detected cross point and the image center. A deep learning-based CNN model was used as the detection algorithm. The training data consisted of CAD images of the duct interior, annotated with the cross-point coordinates and the angles between the two boundary lines. Accuracy was evaluated from the differences between the actual and predicted areas and distances. A cleaning robot prototype was designed, consisting of a frame, a Raspberry Pi computer, a control unit, and a drive unit. The algorithm was validated on video imagery of the robot in operation and can be applied to vehicles operating in similar environments.
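A minimal sketch of the detection idea follows, assuming a small Keras CNN that regresses the normalized cross-point coordinates and the boundary angle from a grayscale duct image; the architecture and input size are illustrative, not the authors' network.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Input: 128x128 grayscale image of the duct interior (rendered from CAD for
# training, camera frames at run time). Output: (x, y, angle) regression.
def build_cross_point_cnn() -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(128, 128, 1)),
        layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(3),  # normalized cross-point x, y and boundary angle
    ])
    model.compile(optimizer="adam", loss="mse")  # distance-style error
    return model

model = build_cross_point_cnn()
model.summary()
# model.fit(cad_images, targets, epochs=...)  # targets from CAD annotations
```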

Forward Vehicle Detection Algorithm Using Column Detection and Bird's-Eye View Mapping Based on Stereo Vision (스테레오 비전기반의 컬럼 검출과 조감도 맵핑을 이용한 전방 차량 검출 알고리즘)

  • Lee, Chung-Hee;Lim, Young-Chul;Kwon, Soon;Kim, Jong-Hwan
    • The KIPS Transactions: Part B / v.18B no.5 / pp.255-264 / 2011
  • In this paper, we propose a forward vehicle detection algorithm using column detection and bird's-eye view mapping based on stereo vision, which detects forward vehicles robustly in real, complex traffic situations. The algorithm consists of three steps: road feature-based column detection, bird's-eye view mapping-based obstacle segmentation, and obstacle area remerging with vehicle verification. First, we extract a road feature using the most frequent values in the v-disparity map and use it as the criterion for column detection. The road feature is a more appropriate criterion than the median value because it is not affected by the traffic situation, such as changes in obstacle size or in the number of obstacles. Because a detected area may still contain multiple obstacles, we then perform bird's-eye view mapping-based obstacle segmentation; the bird's-eye view represents obstacle positions on the ground plane using the depth map and camera information, which makes accurate segmentation straightforward. We additionally remerge obstacle areas, since separately segmented areas may belong to the same obstacle. Finally, we verify whether each obstacle is a vehicle using the depth map and the gray image. Experiments in real, complex traffic situations demonstrate the vehicle detection performance of the algorithm.
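The road-feature step can be sketched directly: accumulate a v-disparity map (a per-image-row histogram of disparity values) and take the most frequent disparity in each row as the road profile. A numpy-only illustration, assuming an integer disparity map with invalid pixels marked negative:

```python
import numpy as np

def v_disparity(disp: np.ndarray, max_d: int) -> np.ndarray:
    """v-disparity map: one disparity histogram per image row."""
    h, _ = disp.shape
    vmap = np.zeros((h, max_d + 1), dtype=np.int32)
    for v in range(h):
        row = disp[v]
        row = row[(row >= 0) & (row <= max_d)]  # drop invalid disparities
        vmap[v] = np.bincount(row, minlength=max_d + 1)
    return vmap

def road_profile(vmap: np.ndarray) -> np.ndarray:
    """Most frequent disparity per row: the road feature used as the
    column-detection criterion instead of the median value."""
    return vmap.argmax(axis=1)

# Example: synthetic 4-row disparity map with disparities in [0, 7]
disp = np.array([[3, 3, 3, 5],
                 [4, 4, 2, 4],
                 [5, 5, 5, 1],
                 [6, 6, 0, 6]])
print(road_profile(v_disparity(disp, max_d=7)))  # -> [3 4 5 6]
```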

Drone-based Vegetation Index Analysis Considering Vegetation Vitality (식생 활력도를 고려한 드론 기반의 식생지수 분석)

  • CHO, Sang-Ho;LEE, Geun-Sang;HWANG, Jee-Wook
    • Journal of the Korean Association of Geographic Information Studies / v.23 no.2 / pp.21-35 / 2020
  • Vegetation information is a very important input in fields such as urban planning, landscaping, water resources, and the environment. Vegetation varies with canopy density and chlorophyll content, but previous studies did not consider vegetation vitality when classifying vegetation areas. In this study, to support such applications, thresholds of vegetation indices were set with vegetation vitality taken into account. First, an eBee fixed-wing drone equipped with a multispectral camera was used to construct optical and near-infrared orthomosaic images, and GIS raster calculations on these orthomosaics produced the NDVI, GNDVI, SAVI, and MSAVI vegetation indices. The vegetation positions at the target site were surveyed by VRS, and the accuracy of each vegetation index was evaluated with respect to vegetation vitality. As a result, the scenario that counted only points with high vegetation vitality as vegetation showed higher classification accuracy for the vegetation indices than the scenario that also included points with slightly insufficient vitality. In addition, the Kappa coefficient of each vegetation index, computed by overlaying the field survey points, was used to select the best threshold for classifying vegetation in each scenario. The vegetation index accuracy evaluation that considers vegetation vitality, as proposed in this study, is expected to provide useful information for decision-making support in fields such as urban planning.
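The four indices are standard band-ratio formulas, and the threshold selection by Kappa coefficient can be outlined as below; the reflectance values, survey labels, and the SAVI soil factor L = 0.5 are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def indices(nir, red, green, L=0.5):
    """NDVI, GNDVI, SAVI, MSAVI from float reflectance bands."""
    eps = 1e-9  # guard against division by zero
    ndvi  = (nir - red) / (nir + red + eps)
    gndvi = (nir - green) / (nir + green + eps)
    savi  = (1 + L) * (nir - red) / (nir + red + L + eps)
    msavi = (2 * nir + 1 - np.sqrt((2 * nir + 1) ** 2 - 8 * (nir - red))) / 2
    return ndvi, gndvi, savi, msavi

# Hypothetical evaluation at VRS-surveyed points: 1 = vegetation, 0 = not.
truth = np.array([1, 1, 0, 1, 0, 0, 1, 0])
ndvi_at_points = np.array([0.62, 0.55, 0.12, 0.48, 0.30, 0.05, 0.70, 0.22])

# Sweep thresholds and keep the one with the best kappa against the survey.
best = max((cohen_kappa_score(truth, ndvi_at_points > t), t)
           for t in np.arange(0.1, 0.6, 0.05))
print(f"best kappa={best[0]:.2f} at NDVI threshold {best[1]:.2f}")
```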

No-Reference Visibility Prediction Model of Foggy Images Using Perceptual Fog-Aware Statistical Features (시지각적 통계 특성을 활용한 안개 영상의 가시성 예측 모델)

  • Choi, Lark Kwon;You, Jaehee;Bovik, Alan C.
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.4 / pp.131-143 / 2014
  • We propose a no-reference model that predicts perceptual fog density and visibility in a single foggy scene based on natural scene statistics (NSS) and perceptual "fog aware" statistical features. Unlike previous studies, the proposed model predicts fog density without multiple foggy images, without salient objects in the scene such as lane markings or traffic signs, without supplementary geographical information from an onboard camera, and without training on human-rated judgments. The predictor uses only measurable deviations from the statistical regularities observed in natural foggy and fog-free images. The perceptual "fog aware" statistical features are derived from a corpus of natural foggy and fog-free images using a spatial NSS model and observed fog characteristics, including low contrast, faint color, and shifted luminance. The proposed model not only predicts perceptual fog density for the entire image but also provides a local fog density for each patch. To evaluate the model against human judgments of fog visibility, we conducted a human subjective study on 100 diverse foggy images. The results show that the fog density predicted by the model correlates well with human judgments. The proposed model is a new approach to fog density assessment based on human visual perception, and we hope it will provide fertile ground for future research, both for enhancing the visibility of foggy scenes and for accurately evaluating the performance of defog algorithms.
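One common spatial-NSS ingredient for such features is the mean-subtracted contrast-normalized (MSCN) coefficient map, whose variance drops as fog reduces local contrast. A sketch of that computation follows; the Gaussian window size and the per-patch cue are assumptions, not the paper's exact feature set.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(image: np.ndarray, sigma: float = 7 / 6) -> np.ndarray:
    """Mean-subtracted contrast-normalized coefficients of a gray image.

    In fog-free natural images the MSCN histogram is roughly Gaussian;
    fog lowers local contrast and squeezes it toward zero, one of the
    perceptual statistical deviations such models measure.
    """
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)
    sigma_map = np.sqrt(np.abs(gaussian_filter(image ** 2, sigma) - mu ** 2))
    return (image - mu) / (sigma_map + 1.0)  # +1 stabilizes flat regions

def patch_fog_cue(gray: np.ndarray, patch: int = 32) -> np.ndarray:
    """Per-patch variance of MSCN coefficients (lower suggests denser fog)."""
    coeffs = mscn(gray)
    h, w = coeffs.shape
    return np.array([[coeffs[i:i + patch, j:j + patch].var()
                      for j in range(0, w - patch + 1, patch)]
                     for i in range(0, h - patch + 1, patch)])
```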

Hybrid (refractive/diffractive) lens design for the ultra-compact camera module (초소형 영상 전송 모듈용 DOE(Diffractive optical element)렌즈의 설계 및 평가)

  • Lee, Hwan-Seon;Rim, Cheon-Seog;Jo, Jae-Heung;Chang, Soo;Lim, Hyun-Kyu
    • Korean Journal of Optics and Photonics / v.12 no.3 / pp.240-249 / 2001
  • A fast, ultra-compact lens with a diffractive optical element (DOE) is designed that can be applied to mobile communication devices such as IMT-2000 handsets, PDAs, and notebook computers. Compared with the specifications of a single lens, the designed hybrid lens achieves high performance: faster than f/2.2, a compact size of 3.3 mm (first surface to image), and a field angle wider than 30 degrees. By proper choice of the aspheric and DOE surfaces, the latter having very large negative dispersion, chromatic and high-order aberrations are corrected through optimization. From Seidel third-order aberration theory and Sweatt modeling, the initial data and surface configurations, that is, the combination of the DOE and the aspheric surface, are obtained. Considering the diffraction efficiency of the DOE, only four cases were chosen as optimization inputs, and the best solution is presented after evaluating and comparing these four cases. We also report a dramatic improvement in optical performance obtained by inserting an additional refractive lens (a so-called field flattener) that keeps the refractive power of the original DOE lens while making the Petzval sum of the system zero.
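The property being exploited is that a diffractive surface has an effective Abbe number of V = lambda_d / (lambda_F - lambda_C), about -3.45 regardless of material, so a small diffractive power can cancel the primary chromatic aberration of a much larger refractive power. A thin-lens sketch of the achromat power split follows; the total power and glass dispersion are illustrative.

```python
# Thin-lens achromat split between a refractive lens and a DOE surface.
# Achromatic condition: phi_r/V_r + phi_d/V_d = 0 with phi_r + phi_d = phi.

LAMBDA_D, LAMBDA_F, LAMBDA_C = 587.6, 486.1, 656.3  # nm (d, F, C lines)
V_DOE = LAMBDA_D / (LAMBDA_F - LAMBDA_C)  # about -3.45 for any DOE

def achromat_split(phi_total: float, v_refractive: float):
    """Return (refractive power, diffractive power) in diopters."""
    phi_r = phi_total * v_refractive / (v_refractive - V_DOE)
    phi_d = -phi_total * V_DOE / (v_refractive - V_DOE)
    return phi_r, phi_d

# Illustrative numbers: a short (3.3 mm class) module implies strong total
# power; V ~ 64 is a BK7-like glass. Nearly all power stays refractive,
# while a small diffractive term cancels the primary chromatic aberration.
phi_r, phi_d = achromat_split(phi_total=300.0, v_refractive=64.2)
print(f"refractive: {phi_r:.1f} D, diffractive: {phi_d:.1f} D")
```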

Information Fusion of Cameras and Laser Radars for Perception Systems of Autonomous Vehicles (영상 및 레이저레이더 정보융합을 통한 자율주행자동차의 주행환경인식 및 추적방법)

  • Lee, Minchae;Han, Jaehyun;Jang, Chulhoon;Sunwoo, Myoungho
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.1 / pp.35-45 / 2013
  • An autonomous vehicle requires a more robust perception system than conventional intelligent vehicles. Single-sensor perception systems based on cameras and laser radar, the most representative perception sensors, have been widely studied: the distance information of the laser radar is used to perceive road structures, vehicles, and pedestrians, while the camera image is used for visual recognition of lanes, crosswalks, and traffic signs. However, single-sensor systems suffer from false positives and false negatives caused by sensor limitations and road environments. Information fusion is therefore essential to ensure robust and stable perception in harsh environments. This paper describes a perception system for autonomous vehicles that fuses vision and laser radar information to recognize road environments, in particular to detect lanes, crosswalks, and obstacles. The proposed system was validated on an autonomous vehicle under various road and environmental conditions.
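The core geometric step in such camera/laser-radar fusion is projecting 3-D range points into the image plane through the extrinsic and intrinsic calibration, then associating them with vision detections. A numpy sketch with illustrative calibration values:

```python
import numpy as np

# Hypothetical calibration: laser-radar frame -> camera frame (R, t) and a
# pinhole intrinsic matrix K. Real values come from extrinsic calibration.
R = np.eye(3)
t = np.array([0.0, -0.5, 0.2])           # meters
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

def project_points(points_lidar: np.ndarray) -> np.ndarray:
    """Project Nx3 laser-radar points to Nx2 pixel coordinates."""
    cam = points_lidar @ R.T + t         # into the camera frame
    cam = cam[cam[:, 2] > 0]             # keep points in front of the camera
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]        # perspective division

pts = np.array([[2.0, 0.0, 10.0], [-1.5, 0.3, 8.0]])
print(project_points(pts))  # pixels to match against lane/obstacle detections
```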

A Framework of Recognition and Tracking for Underwater Objects based on Sonar Images : Part 2. Design and Implementation of Realtime Framework using Probabilistic Candidate Selection (소나 영상 기반의 수중 물체 인식과 추종을 위한 구조 : Part 2. 확률적 후보 선택을 통한 실시간 프레임워크의 설계 및 구현)

  • Lee, Yeongjun;Kim, Tae Gyun;Lee, Jihong;Choi, Hyun-Taek
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.3 / pp.164-173 / 2014
  • In underwater robotics, vision is a key element for recognition, but because of turbidity an optical camera is rarely usable underwater. An underwater imaging sonar is an alternative, but it delivers low-quality sonar images that are not stable or accurate enough for natural objects to be found by image processing. For this reason, artificial landmarks based on the characteristics of ultrasonic waves, together with a recognition method using a shape-matrix transformation, were proposed and validated in Part 1. That method, however, does not work properly on undulating, dynamically noisy sea bottoms. To solve this, we propose a framework comprising four phases applied to image sequences: selection of likely candidates, selection of final candidates, recognition, and tracking. A particle filter-based selection mechanism eliminates false candidates, and a mean shift-based tracking algorithm is also proposed. All four phases run in parallel in real time, and the framework is flexible enough for internal algorithms to be added or modified. A pool test and a sea trial were carried out to prove the performance, and a detailed analysis of the experimental results is given. Information obtained from the tracking phase, such as relative distance and bearing, is expected to be used for the control and navigation of underwater robots.
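The mean-shift tracking phase can be sketched with OpenCV's standard `meanShift` call; the intensity-histogram back-projection below is an assumed stand-in for the paper's sonar-specific likelihood, not its actual implementation.

```python
import cv2
import numpy as np

def track_landmark(frames, init_box):
    """Track a recognized landmark window through sonar image frames.

    frames: list of 8-bit single-channel sonar images.
    init_box: (x, y, w, h) window from the recognition phase.
    """
    x, y, w, h = init_box
    roi = frames[0][y:y + h, x:x + w]
    hist = cv2.calcHist([roi], [0], None, [32], [0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)

    boxes = []
    box = init_box
    for frame in frames[1:]:
        # Likelihood image from the intensity histogram, then mean shift.
        back = cv2.calcBackProject([frame], [0], hist, [0, 256], 1)
        _, box = cv2.meanShift(back, box, crit)
        boxes.append(box)  # relative landmark position per frame
    return boxes
```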