• Title/Summary/Keyword: one camera


Remote control system for management of a stall using PDA (PDA를 이용한 축사관리 원격제어 시스템)

  • Kim, Tae-Soo;Chun, Joong-Chang
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2009.10a
    • /
    • pp.1010-1013
    • /
    • 2009
  • Young people have left the farming villages for the cities, so most of the workforce remaining in agricultural production is elderly. Tasks such as removing waste from the barn, as well as feeding the livestock, are therefore difficult, and automation of this work is necessary. Consequently, we have developed an automation system to reduce large-scale livestock deaths during periods of intense cold and heat. The system can clean the waste out of the barn and respond quickly to sudden temperature changes. We also proposed a system that can watch and monitor the operating environment in real time from a remote site using a CCD camera. In this paper, we propose a remote control system that uses a PDA so that the stall automation system can be controlled while the user is on the move. The proposed system was implemented so that control and monitoring are possible from PDA screens while the user is mobile. We also added a protection system: it sends a warning and an SMS message in the event of fire or intrusion from outside, helping to prevent theft.


Velocity Distribution Measurements in Mach 2.0 Supersonic Nozzle using Two-Color PIV Method (Two Color PIV 기법을 이용한 마하 2.0 초음속 노즐의 속도분포 측정)

  • 안규복;임성규;윤영빈
    • Journal of the Korean Society of Propulsion Engineers
    • /
    • v.4 no.4
    • /
    • pp.18-25
    • /
    • 2000
  • A two-color particle image velocimetry (PIV) technique has been developed for measuring two-dimensional velocity flowfields and applied to a Mach 2.0 supersonic nozzle. This technique is similar to a single-color PIV technique, except that two laser beams of different colors are used to solve the directional ambiguity problem. A green laser sheet (532 nm: second-harmonic beam of a YAG laser) and a red laser sheet (619 nm: output beam from a YAG-pumped dye laser using Rhodamine 640) are employed to illuminate the seeded particles. A high-resolution (3060×2036) digital color CCD camera is used to record the particle positions. This system eliminates the photographic-film processing time and subsequent digitization time, as well as the complexities associated with conventional image-shifting techniques for solving the directional ambiguity problem. The two-color PIV also has the advantage that velocity distributions in high-speed flowfields can be measured simply and accurately by varying the time interval between the two laser pulses, thanks to its high signal-to-noise ratio and the correspondingly smaller number of particle pairs required per velocity vector in each interrogation spot. The velocity distribution in the Mach 2.0 supersonic nozzle has been measured, and the over-expanded shock cell structure can be predicted from the strain rate field. These results are compared and analyzed against schlieren photographs in terms of velocity distribution and shock location. (A minimal sketch of the displacement-to-velocity step is given after this entry.)

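Below is a minimal, illustrative Python sketch of the basic PIV step described in this abstract: the displacement of seeded particles between the green-laser and red-laser exposures is estimated for one interrogation window by FFT cross-correlation and converted to velocity using the pulse separation. The image scale, pulse delay, and synthetic particle pattern are assumptions for illustration only; the paper's actual processing chain is not reproduced here.

```python
# Illustrative two-color PIV processing for one interrogation window.
# The pixel size, pulse delay, and synthetic data below are assumed values.
import numpy as np

def window_displacement(green_win: np.ndarray, red_win: np.ndarray) -> np.ndarray:
    """Estimate particle displacement (pixels) between the green-exposure and
    red-exposure windows via FFT-based cross-correlation.  Because the two
    exposures are separated by color, the flow direction is unambiguous."""
    g = green_win - green_win.mean()
    r = red_win - red_win.mean()
    corr = np.fft.ifft2(np.fft.fft2(g).conj() * np.fft.fft2(r)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices so shifts larger than half the window come out negative.
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return np.array(shift[::-1], dtype=float)   # (dx, dy) in pixels

# Example with a synthetic particle pattern shifted by a known amount.
rng = np.random.default_rng(0)
green = rng.random((64, 64))
red = np.roll(green, shift=(3, 5), axis=(0, 1))   # moved 5 px right, 3 px down

dx_dy = window_displacement(green, red)
pixel_size_m = 10e-6        # assumed image scale: 10 micrometers per pixel
dt_s = 1e-6                 # assumed delay between the two laser pulses
velocity = dx_dy * pixel_size_m / dt_s            # m/s for this window
print("displacement [px]:", dx_dy, "velocity [m/s]:", velocity)
```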

A Study on the Estimation of Multi-Object Social Distancing Using Stereo Vision and AlphaPose (Stereo Vision과 AlphaPose를 이용한 다중 객체 거리 추정 방법에 관한 연구)

  • Lee, Ju-Min;Bae, Hyeon-Jae;Jang, Gyu-Jin;Kim, Jin-Pyeong
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.7
    • /
    • pp.279-286
    • /
    • 2021
  • Recently, policies requiring physical distancing of at least 1 m between people have been implemented in public places to prevent the spread of COVID-19. In this paper, we propose a method for measuring distances between people in real time, together with an automation system that recognizes, from stereo images acquired by drones or CCTVs, objects (people) that are within 1 m of each other according to the estimated distance. A problem with existing methods used to estimate distances between multiple objects is that they cannot obtain three-dimensional information about the objects from a single CCTV. This is because three-dimensional information is necessary to measure the distance between people when they are right next to each other or overlap in a two-dimensional image. Furthermore, existing methods use only bounding-box information, which does not give the exact coordinates at which a person is located. Therefore, in this paper, to obtain the exact two-dimensional coordinates at which a person is located, we extract the person's keypoints to detect their position, convert them to three-dimensional coordinates using stereo vision and camera calibration, and estimate the Euclidean distance between people. In an experiment evaluating the accuracy of the 3D coordinates and the distances between objects (persons), the estimated distances between multiple people within 1 m showed an average error of less than 0.098 m. (A minimal sketch of the stereo triangulation and distance step is given after this entry.)
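
The following is a minimal sketch of the triangulation and distance step described in this abstract, assuming a rectified stereo pair. The focal length, baseline, principal point, and keypoint pixel coordinates are hypothetical values, not the paper's calibration or data.

```python
# Illustrative stereo triangulation for keypoint-based distance estimation.
# Camera parameters and keypoint coordinates below are hypothetical.
import numpy as np

def keypoint_to_3d(u_left: float, v_left: float, u_right: float,
                   focal_px: float, baseline_m: float,
                   cx: float, cy: float) -> np.ndarray:
    """Convert a matched keypoint (e.g. a person's neck joint) observed in a
    rectified stereo pair into camera-frame 3D coordinates (meters)."""
    disparity = u_left - u_right                 # pixels; assumes rectified images
    z = focal_px * baseline_m / disparity        # classic stereo depth relation
    x = (u_left - cx) * z / focal_px
    y = (v_left - cy) * z / focal_px
    return np.array([x, y, z])

# Hypothetical calibration for a 1280x720 stereo rig.
focal_px, baseline_m, cx, cy = 900.0, 0.12, 640.0, 360.0

# Hypothetical keypoints of two people in the left/right images.
person_a = keypoint_to_3d(700, 380, u_right=664, focal_px=focal_px,
                          baseline_m=baseline_m, cx=cx, cy=cy)
person_b = keypoint_to_3d(520, 400, u_right=489, focal_px=focal_px,
                          baseline_m=baseline_m, cx=cx, cy=cy)

distance_m = float(np.linalg.norm(person_a - person_b))
print(f"estimated person-to-person distance: {distance_m:.2f} m")
if distance_m < 1.0:
    print("warning: closer than 1 m")
```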

Fire Detection using Deep Convolutional Neural Networks for Assisting People with Visual Impairments in an Emergency Situation (시각 장애인을 위한 영상 기반 심층 합성곱 신경망을 이용한 화재 감지기)

  • Kong, Borasy;Won, Insu;Kwon, Jangwoo
    • 재활복지
    • /
    • v.21 no.3
    • /
    • pp.129-146
    • /
    • 2017
  • In an emergency such as a fire in a building, visually impaired and blind people are exposed to greater danger than sighted people, because they cannot become aware of the situation quickly. Current fire detection methods such as smoke detectors are slow and unreliable, because they typically rely on chemical sensors that must detect fire particles. By using a vision sensor instead, fire can be detected much faster, as we show in our experiments. Previous studies have applied various image processing and machine learning techniques to detect fire, but they usually do not work well because these techniques require hand-crafted features that do not generalize to varied scenarios. With recent advances in deep learning, this problem can be addressed by a deep learning-based object detector that detects fire in images from a security camera. Deep learning approaches learn features automatically, so they usually generalize well across scenes. To maximize performance, we applied a recent computer vision technique, the YOLO detector, to this task. Considering the trade-off between recall and complexity, we introduce two convolutional neural networks of slightly different complexity that detect fire at different recall rates. Both models detect fire at 99% average precision, but one model has 76% recall at 30 FPS while the other has 61% recall at 50 FPS. We also compare the memory consumption of the two models and demonstrate their robustness by testing on various real-world scenarios. (A hedged sketch of a frame-by-frame detection loop is given after this entry.)
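
A hedged sketch of a frame-by-frame detection loop is shown below, using the off-the-shelf ultralytics YOLO package as a stand-in for the paper's two detectors. The weights file name ("fire_yolo.pt"), the camera source, and the confidence threshold are assumptions, not the paper's implementation.

```python
# Illustrative frame-by-frame fire detection loop (generic stand-in).
import cv2
from ultralytics import YOLO

model = YOLO("fire_yolo.pt")      # assumed: a detector fine-tuned on fire images
cap = cv2.VideoCapture(0)         # security/USB camera stream

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)[0]
    for box in results.boxes:
        conf = float(box.conf[0])
        if conf < 0.5:            # threshold trades recall against false alarms
            continue
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
        cv2.putText(frame, f"fire {conf:.2f}", (x1, y1 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
        # An assistive system could trigger an audio alert for the user here.
    cv2.imshow("fire detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```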

Observation on the Seabed around Dokdo and Simheungtaek Seamount using a Mini-ROV (소형 ROV를 활용한 독도 및 심흥택해산 해저면 탐사)

  • MIN, WON-GI;RHO, HYUN SOO;KIM, CHANG HWAN;PARK, CHAN HONG;KIM, DONGSUNG
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.24 no.1
    • /
    • pp.18-29
    • /
    • 2019
  • ROV surveys were conducted near Dokdo and the Simheungtaek seamount using a 500 m-class mini ROV equipped with an HD video camera, two LED lights, a simple manipulator, and eight thrusters. A total of six dives were conducted with the Swedish-built ROV "V8 SII" and its support ship "KOSAL V" at four stations between 45 and 370 m, with dive times ranging from 30 to 120 minutes. Dense communities of sea anemones (Actinostolidae sp.) and ophiuroids (Ophiuridae sp.) on rocky bottom, and snow crab on soft muddy-sand bottom, were observed in the northwestern part of the Simheungtaek seamount. We obtained the following results: 1) habitat information for snow crab, one of the major fishery resources, and for deep-sea fauna; 2) observations of the specific topography and sediment conditions; and 3) observations of the seabed surface covered with discarded fishing gear. This study represents the first report of in situ visual observation of deep-sea organisms and their habitats on the Dokdo slopes and the flat top of the Simheungtaek seamount in the East Sea. These results indicate that rapid oceanographic surveys using a mini-class ROV are feasible in the East Sea.

White striping degree assessment using computer vision system and consumer acceptance test

  • Kato, Talita;Mastelini, Saulo Martiello;Campos, Gabriel Fillipe Centini;Barbon, Ana Paula Ayub da Costa;Prudencio, Sandra Helena;Shimokomaki, Massami;Soares, Adriana Lourenco;Barbon, Sylvio Jr.
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.32 no.7
    • /
    • pp.1015-1026
    • /
    • 2019
  • Objective: The objective of this study was to evaluate three different degrees of white striping (WS), addressing their automatic assessment and consumer acceptance. The WS classification was performed with a computer vision system (CVS), exploring different machine learning (ML) algorithms and the most important image features. Moreover, it was verified by consumer acceptance and purchase intent. Methods: The samples for image analysis were classified by trained specialists according to severity degree, considering visual and firmness aspects. Sample images were obtained with a digital camera, and 25 features were extracted from these images. ML algorithms were applied with the aim of inducing a model capable of classifying the samples into three severity degrees. In addition, two sensory analyses were performed: 75 properly grilled samples were used for the first sensory test, and 9 photos for the second. All tests were performed using a 10-cm hybrid hedonic scale (acceptance test) and a 5-point scale (purchase intention). Results: The information gain metric ranked 13 attributes; however, a single type of image feature was not enough to describe the phenomenon. The classification models support vector machine, fuzzy-W, and random forest showed the best results, with similar overall accuracy (86.4%). The worst performance was obtained by the multilayer perceptron (70.9%), with a high error rate in normal (NORM) sample predictions. The sensory acceptance analysis verified that WS myopathy negatively affects the texture of broiler breast fillets when grilled and the appearance attribute of the raw samples, which influenced the purchase intention scores for raw samples. Conclusion: The proposed system proved to be adequate (fast and accurate) for the classification of WS samples. The sensory acceptance analysis showed that WS myopathy negatively affects the tenderness of broiler breast fillets when grilled, while the appearance attribute of the raw samples influenced purchase intentions. (An illustrative classifier-training sketch is given after this entry.)
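
An illustrative training sketch for the CVS-plus-ML pipeline described in this abstract is given below. The CSV layout ("ws_features.csv" with 25 feature columns and a "degree" label), the use of scikit-learn, and mutual information as a stand-in for the information gain ranking are assumptions, not the paper's implementation.

```python
# Illustrative classifier training for a white-striping feature table.
# The file name, column layout, and hyperparameters below are assumed.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.feature_selection import mutual_info_classif

df = pd.read_csv("ws_features.csv")        # assumed: one row per fillet image
X = df.drop(columns=["degree"])            # 25 extracted image features
y = df["degree"]                           # NORM / moderate / severe WS label

# Information-gain-style ranking of the features (mutual information here).
scores = mutual_info_classif(X, y, random_state=0)
ranking = sorted(zip(X.columns, scores), key=lambda t: -t[1])
print("top-ranked features:", ranking[:13])

# Random forest was among the best-performing models reported (~86% accuracy).
clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
print(f"10-fold CV accuracy: {acc.mean():.3f} +/- {acc.std():.3f}")
```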

Slim Mobile Lens Design Using a Hybrid Refractive/Diffractive Lens (굴절/회절 하이브리드 렌즈 적용 슬림 모바일 렌즈 설계)

  • Park, Yong Chul;Joo, Ji Yong;Lee, Jun Ho
    • Korean Journal of Optics and Photonics
    • /
    • v.31 no.6
    • /
    • pp.281-289
    • /
    • 2020
  • This paper reports a slim mobile lens design using a hybrid refractive/diffractive optical element. Conventionally, a wide field-of-view (FOV) camera-lens design adopts a retrofocus type with a negative (-) lens at the forefront, which improves imaging performance over the wide FOV but at the cost of a longer total track length (TTL). However, based on a specification analysis of 23 reported optical designs, we chose a telephoto type with a positive (+) lens at the forefront as the baseline design layout in order to achieve slimness. Following preliminary optimization of the baseline design and aberration analysis based on Zernike-polynomial decomposition, we applied a hybrid refractive/diffractive element to effectively reduce the residual chromatic spherical aberration. The optimized optical design consists of 6 optical elements, including one hybrid element. It achieves a very slim telephoto ratio of 1.7, with an f-number of 2.0, FOV of 90°, effective focal length of 2.23 mm, and TTL of 3.7 mm. Compared to a comparable conventional design with no hybrid elements, the hybrid design improved the modulation transfer function (MTF) at a spatial frequency of 180 cycles/mm from 63% to 71-73% at zero field (0 F), and by about 2-3% at the 0.5, 0.7, and 0.9 fields. It was also found that a design with a hybrid lens having only two diffraction zones at the stop achieved the same performance improvement. (The telephoto-ratio arithmetic is checked in the short note after this entry.)
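
As a quick consistency check of the quoted figures, assuming the conventional definition of the telephoto ratio as total track length divided by effective focal length:

```latex
% Consistency check of the quoted telephoto ratio (assumed definition TTL/EFL).
\[
  \text{telephoto ratio} \;=\; \frac{\mathrm{TTL}}{\mathrm{EFL}}
  \;=\; \frac{3.7\,\mathrm{mm}}{2.23\,\mathrm{mm}} \;\approx\; 1.66 \;\approx\; 1.7
\]
```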

A Study on the Benefits and Issues of 360-degree VR Performance Videos (360도 VR공연영상의 효과와 문제점 연구)

Change Attention-based Vehicle Scratch Detection System (변화 주목 기반 차량 흠집 탐지 시스템)

  • Lee, EunSeong;Lee, DongJun;Park, GunHee;Lee, Woo-Ju;Sim, Donggyu;Oh, Seoung-Jun
    • Journal of Broadcast Engineering
    • /
    • v.27 no.2
    • /
    • pp.228-239
    • /
    • 2022
  • In this paper, we propose an unmanned deep learning model that detects vehicle scratches for car sharing services. Conventional scratch detection models consist of two steps: 1) a deep learning module for scratch detection in images taken before and after rental, and 2) a manual matching process for finding newly generated scratches. In order to build a fully automatic scratch detection model, we propose a one-step unmanned scratch detection deep learning model. The proposed model is implemented by applying transfer learning and fine-tuning to a deep learning model that detects changes in satellite images. In the proposed car sharing service, specular reflection greatly affects scratch detection performance, since the brightness of the gloss-treated automobile surface is anisotropic and non-expert users take the pictures with ordinary cameras. In order to reduce detection errors caused by specular reflections, we propose a preprocessing step that removes specular reflection components. For data taken with mobile phone cameras, the proposed system provides high matching performance both subjectively and objectively. The scores for change detection metrics, namely precision, recall, F1, and kappa, are 67.90%, 74.56%, 71.08%, and 70.18%, respectively. (A sketch of computing these metrics is given after this entry.)
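
The change-detection metrics quoted above can be computed as in the following illustrative sketch. The toy ground-truth and prediction masks are made-up data, and the use of scikit-learn is an assumption, not the paper's evaluation code.

```python
# Illustrative computation of precision, recall, F1, and Cohen's kappa
# for a flattened per-pixel "new scratch" mask (toy data only).
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score, cohen_kappa_score

# 1 = "new scratch" pixel, 0 = background, from a before/after comparison.
ground_truth = np.array([0, 0, 1, 1, 1, 0, 1, 0, 0, 1])
prediction   = np.array([0, 1, 1, 1, 0, 0, 1, 0, 0, 1])

print("precision:", precision_score(ground_truth, prediction))
print("recall   :", recall_score(ground_truth, prediction))
print("F1       :", f1_score(ground_truth, prediction))
print("kappa    :", cohen_kappa_score(ground_truth, prediction))
```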

Lunar Exploration Employing a Quadruped Robot on the Fault of the Rupes Recta for Investigating the Geological Formation History of the Mare Nubium (4족 보행 로봇을 활용한 달의 직선절벽(Rupes Recta)의 단층면 탐사를 통한 구름의 바다(Mare Nubium) 지역의 지질학적 형성 연구)

  • Hong, Ik-Seon;Yi, Yu;Ju, Gwanghyeok
    • Journal of Space Technology and Applications
    • /
    • v.1 no.1
    • /
    • pp.64-75
    • /
    • 2021
  • On the Moon, as on the Earth, one of the easiest ways to understand the geological formation history of a region is to observe its stratigraphy, the order in which the strata are built up, where it is exposed. By analyzing stratigraphy, it is possible to infer what geological events have occurred in the past. Mare Nubium has a unique normal fault called Rupes Recta whose face exposes the stratigraphy. However, a wheeled rover is ill-suited to exploring the cliff, since Rupes Recta has an inclination of 10° - 30°. Therefore, a quadruped walking robot must be employed for a stable expedition. To explore a fault with a four-legged walking robot, it is necessary to design an expedition route based on remote sensing data from previous lunar missions, taking into account whether the stratigraphy is well exposed, whether the slope of the terrain is moderate, and whether there are obstacles or rough terrain texture. As payloads for fault-face exploration, we propose an optical camera to capture the actual appearance, a spectrometer to analyze the composition, and a drill to obtain samples that are not exposed at the surface.