• Title/Summary/Keyword: Lighting conditions

Search Results: 507

Development of Safety Clothing for Sports and Entertainment for Adolescent (청소년을 위한 스포츠 및 엔터테인먼트 안전의복의 개발)

  • Park, Soon Ja; Ko, Soo Kyung
    • Human Ecology Research, v.59 no.1, pp.83-97, 2021
  • This study developed safety clothing that adolescents need to protect their bodies from accidents while pursuing activity and individuality. First, the developed safety clothing was based on international standards, with the designs modified to emphasize creativity, activity, and functionality. Two boys' outfits and one girls' outfit were developed as safety clothing for sportswear, along with two girls' outfits and one boys' outfit for entertainment. Testing under different lighting conditions confirmed differences in visibility. Second, a survey of adolescents indicated no significant gender difference for sportswear: a round-neck shirt with shorts was the most preferred design for ball-game sportswear among both boys and girls. However, there was a significant gender difference in the preferred design of safety clothing for entertainment: male students preferred the jump suit and the cape with pants equally, while female students preferred the jump suit over the cape with pants (p<0.05). For the most preferred entertainment safety clothing overall there was no gender difference; all students preferred the jump suit most. By school level, both middle and high school students preferred jump suit designs; for safety clothing, middle school students preferred a high-neck shirt blouse with a tight skirt, while high school students preferred jump suits. Third, 35.5% of respondents said they would wear safety clothing more often if current designs were improved, indicating the need to develop a greater variety of safety clothing for adolescents.

In-camera VFX implementation study using short-throw projector (focused on low-cost solution)

  • Li, Penghui; Kim, Ki-Hong; Lee, David-Junesok
    • International Journal of Internet, Broadcasting and Communication, v.14 no.2, pp.10-16, 2022
  • As an important part of virtual production, In-camera VFX is the process of shooting real objects against a virtual three-dimensional background rendered in real time through computer graphics and display technology, obtaining the final footage in camera. In current In-camera VFX workflows, only two types of medium are used for background imaging: the LED wall and the chroma key screen. In-camera VFX based on an LED wall realizes the background through LED display technology; although image quality is guaranteed, the high cost of the LED wall raises the cost of virtual production. In-camera VFX based on a chroma key screen realizes the background through real-time keying; although it is inexpensive, the limitations of real-time keying and of lighting conditions mean the usability of the final picture is low. Short-throw projection technology can compress the projection distance to within 1 meter while producing a relatively large picture, which removes the need for the large gap between screen and projector required by conventional projection, and it is relatively cheap compared with an LED wall. Short-throw projection is therefore a candidate for projecting backgrounds in the In-camera VFX process. This paper analyzes the principle of short-throw projection and existing In-camera VFX solutions and, through comparison experiments, proposes a low-cost solution that uses short-throw projectors to project virtual backgrounds and realize the In-camera VFX process.
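For orientation only, the geometric relationship behind the "within 1 meter" claim is the projector's throw ratio (projection distance divided by image width). A minimal sketch, with a hypothetical throw ratio and background width rather than figures from the paper:

```python
# Minimal sketch: projection distance = throw ratio x image width.
# The throw ratio (0.25) and background width (4 m) are hypothetical example
# values, not measurements from the paper.

def projection_distance(throw_ratio: float, image_width_m: float) -> float:
    """Distance from the projector lens to the screen for a given image width."""
    return throw_ratio * image_width_m

# An ultra-short-throw projector filling a 4 m wide virtual background
print(projection_distance(0.25, 4.0))  # 1.0 m, i.e. within the ~1 m quoted above
```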

Comparison of Recognition Performance of Color QR Codes for Inserted Pattern Information (칼라 QR코드의 패턴 종류에 따른 인식 성능 비교)

  • Kim, Jin-soo
    • Journal of Korea Society of Industrial Information Systems, v.27 no.3, pp.11-20, 2022
  • Black-and-white QR (Quick Response) codes are currently used widely in consumer advertising, and color QR codes are attracting growing interest because of their much higher data-encoding capacity. Color QR codes can be reproduced through printing and scanning, but these processes introduce color distortion caused by insufficient lighting, low camera resolution, and geometric deformation during capture. To overcome these problems, this paper proposes an efficient decoding algorithm for color QR codes with the inserted patterns studied in previous work. The codes are evaluated in terms of recognition rate under different noise conditions, such as Gaussian noise/blurring and geometric deformation. Experimental results demonstrate that color QR codes with a simple pattern can resist the distortion caused by Gaussian noise and blurring.
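As a rough illustration of this kind of robustness test (and only that), the sketch below degrades an image with Gaussian noise and blur and measures how often a standard black-and-white QR decoder still succeeds; it does not implement the paper's color QR codes with inserted patterns, and the noise parameters are arbitrary.

```python
# Illustrative robustness check for QR decoding under Gaussian noise and blur.
# Uses OpenCV's standard black-and-white QR decoder, not the color QR scheme
# studied in the paper; noise/blur parameters are arbitrary examples.
import cv2
import numpy as np

def degrade(img, sigma_noise=10.0, blur_ksize=5):
    """Add Gaussian noise and Gaussian blur to simulate capture distortion."""
    noisy = img.astype(np.float32) + np.random.normal(0, sigma_noise, img.shape)
    noisy = np.clip(noisy, 0, 255).astype(np.uint8)
    return cv2.GaussianBlur(noisy, (blur_ksize, blur_ksize), 0)

def decode_rate(img, trials=50):
    """Fraction of degraded copies that still decode successfully."""
    detector = cv2.QRCodeDetector()
    ok = 0
    for _ in range(trials):
        text, _, _ = detector.detectAndDecode(degrade(img))
        ok += bool(text)
    return ok / trials

# Usage (file name is a placeholder):
# img = cv2.imread("qr_sample.png")
# print(decode_rate(img))
```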

Comparison of Animal Welfare Standards for Broiler (육계 관련 동물복지 인증기준 비교)

  • Yoo, Geum Zoo; Cheon, Si Nae; Kim, Chan Ho; Jung, Ji Yeon; Kim, Dong-Hoon; Jeon, Jung Hwan
    • Korean Journal of Organic Agriculture, v.28 no.4, pp.643-658, 2020
  • Animal welfare has become a prominent concern around the world, and laws and guidelines on animal welfare are being strengthened in many countries, including the EU. In Korea, animal welfare standards need to be supplemented because social awareness of animal welfare has changed. This study was conducted to compare broiler welfare certification standards and to improve the quality of practice. We found that broiler welfare certification standards differ among countries according to their environmental and management conditions. Standards for stocking density and perches, which are considered especially important for poultry welfare, are similar, but there are small differences in feed, water, litter, and lighting requirements. We therefore concluded that the standards can be revised to take the local environment into account, and suggest that the broiler welfare certification standard will serve as a more useful criterion if breeding conditions in Korea are considered.

Anomaly Detection using Geometric Transformation of Normal Sample Images (정상 샘플 이미지의 기하학적 변환을 사용한 이상 징후 검출)

  • Kwon, Yong-Wan; Kang, Dong-Joong
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.22 no.4, pp.157-163, 2022
  • Recently, with the growth of automation in industry, research on anomaly detection has been actively conducted. One application of anomaly detection in factory automation is camera-based defect inspection. Vision-camera inspection offers high performance and efficiency in factory automation, but it is difficult to overcome instability in lighting and environmental conditions. Camera inspection using deep learning can solve these problems with much higher performance, but it is hard to apply in real industrial settings because it requires huge amounts of normal and abnormal data for training. In this study, we therefore propose a network that avoids the problem of collecting abnormal data by using a deep learning method based on 72 geometric transformations of normal data only, and we add an outlier-exposure method to improve performance. By applying and verifying the approach on the MVTec dataset, a database for automobile-parts data and outlier detection, we show that it can be applied on actual industrial sites.
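The abstract does not spell out the 72 transformations. One common construction in transformation-prediction anomaly detection, assumed here for illustration, is the product of 2 flips x 3 horizontal shifts x 3 vertical shifts x 4 rotations = 72; a classifier trained on normal images to predict which transformation was applied tends to become uncertain on anomalous images, which provides an anomaly score. A minimal sketch under that assumption:

```python
# Sketch of a 72-way geometric transformation set (flip x shift x rotation)
# for self-supervised anomaly detection. The exact composition of the 72
# transformations is an assumption, not taken from the paper.
import itertools
import numpy as np

def transform(img, flip, tx, ty, rot):
    """Apply horizontal flip, pixel shift, and 90-degree rotation to an HxWxC array.
    Assumes a square image so that rotations preserve the shape."""
    out = img[:, ::-1] if flip else img
    out = np.roll(out, shift=(ty, tx), axis=(0, 1))
    return np.rot90(out, k=rot)

# 2 flips x 3 x-shifts x 3 y-shifts x 4 rotations = 72 transformations
TRANSFORMS = list(itertools.product([False, True], [-8, 0, 8], [-8, 0, 8], range(4)))

def make_self_supervised_batch(img):
    """Return the 72 transformed views and their transformation indices.
    A classifier trained on normal data to predict the index yields an
    anomaly score from its uncertainty on new inputs."""
    views = [transform(img, *params) for params in TRANSFORMS]
    labels = np.arange(len(TRANSFORMS))
    return np.stack(views), labels
```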

Design of Image Extraction Hardware for Hand Gesture Vision Recognition

  • Lee, Chang-Yong; Kwon, So-Young; Kim, Young-Hyung; Lee, Yong-Hwan
    • Journal of Advanced Information Technology and Convergence, v.10 no.1, pp.71-83, 2020
  • In this paper, we propose a system that can detect the shape of a hand at high speed using an FPGA. Because real-time processing is important, the hand-shape detection system is designed in Verilog HDL, a hardware description language that can process data in parallel, rather than in sequentially executed C++. Among the several approaches to hand gesture recognition, an image-processing method is used. Since the human eye is sensitive to brightness, the YCbCr color model was selected among the various color representations to obtain results that are less affected by lighting. For the Cb and Cr components, only values corresponding to skin color are filtered out of the input image using constraint conditions. To increase the speed of object recognition, a median filter removes noise from the input image; this filter is designed to compare values and extract the median at the same time in order to reduce the amount of computation. For parallel processing, the design locates the center line of the hand while scanning and sorting the stored data: the line with the highest count is selected as the center line of the hand, the size of the hand is determined from the count, and the hand and arm regions are separated. The designed hardware circuit satisfied the target operating frequency and gate count.
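The paper's implementation is Verilog HDL on an FPGA; purely as a software analogue of the skin-color filtering and median-filtering stages, the steps look roughly like the sketch below. The Cb/Cr bounds are commonly used skin-tone defaults, not the constraint values from the paper.

```python
# Software analogue of the skin-color extraction stage (the paper's version is
# Verilog HDL on an FPGA). Cb/Cr bounds are typical defaults, not the paper's.
import cv2
import numpy as np

def skin_mask(bgr_frame):
    """Convert to YCrCb, keep pixels whose Cr/Cb fall in a skin-tone range,
    then median-filter the binary mask to suppress noise."""
    ycrcb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YCrCb)
    # OpenCV orders channels as Y, Cr, Cb; bounds below are (Y, Cr, Cb)
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    return cv2.medianBlur(mask, 5)

# Usage (file name is a placeholder): the resulting hand blob can then be
# scanned line by line to locate its center line, as described above.
# mask = skin_mask(cv2.imread("hand.png"))
```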

Divide and Conquer Strategy for CNN Model in Facial Emotion Recognition based on Thermal Images (얼굴 열화상 기반 감정인식을 위한 CNN 학습전략)

  • Lee, Donghwan; Yoo, Jang-Hee
    • Journal of Software Assessment and Valuation, v.17 no.2, pp.1-10, 2021
  • Recognizing human emotions with computer vision is an important task with many potential applications, and demand is growing for emotion recognition using not only RGB images but also thermal images. Compared with RGB images, thermal images have the advantage of being less affected by lighting conditions, but their low resolution requires a more sophisticated recognition method. In this paper, we propose a divide-and-conquer CNN training strategy to improve the performance of facial-thermal-image-based emotion recognition. The proposed method first trains a model in which similar, easily confused emotion classes, identified by confusion-matrix analysis, are merged into class groups, and then divides the problem so that each class group is classified again into its actual emotions. In experiments, the proposed method improved accuracy in all tests compared with recognizing all emotions with a single CNN model.
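As a sketch of the "divide" step only, confusable emotion classes can be merged into groups from a confusion matrix; the merging rule and threshold below are illustrative assumptions, not the paper's exact criterion. A second-stage classifier would then separate the emotions inside each group.

```python
# Sketch of the "divide" step: merge emotion classes that a first-stage CNN
# frequently confuses (per the confusion matrix) into class groups.
# The merging rule and threshold are illustrative assumptions.
import numpy as np

def group_confusable_classes(confusion, threshold=0.2):
    """confusion: KxK row-normalized confusion matrix.
    Classes i and j are grouped when either direction of confusion exceeds threshold."""
    k = confusion.shape[0]
    parent = list(range(k))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(k):
        for j in range(i + 1, k):
            if confusion[i, j] > threshold or confusion[j, i] > threshold:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(k):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Hypothetical 3-class example: classes 0 and 1 are often confused -> one group
cm = np.array([[0.80, 0.15, 0.05],
               [0.25, 0.70, 0.05],
               [0.05, 0.05, 0.90]])
print(group_confusable_classes(cm))  # [[0, 1], [2]]
```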

Synthetic data augmentation for pixel-wise steel fatigue crack identification using fully convolutional networks

  • Zhai, Guanghao; Narazaki, Yasutaka; Wang, Shuo; Shajihan, Shaik Althaf V.; Spencer, Billie F. Jr.
    • Smart Structures and Systems, v.29 no.1, pp.237-250, 2022
  • Structural health monitoring (SHM) plays an important role in ensuring the safety and functionality of critical civil infrastructure. In recent years, numerous researchers have developed computer vision and machine learning techniques for SHM, offering the potential to reduce the labor and improve the effectiveness of field inspections. However, high-quality vision data from damaged structures is relatively difficult to obtain because damaged structures occur rarely; the lack of data is particularly acute for fatigue cracks in steel bridge girders. As a result, the shortage of training data is one of the main issues hindering wider application of these powerful techniques for SHM. To address this problem, this article proposes using synthetic data to augment the real-world datasets used to train neural networks that identify fatigue cracks in steel structures. First, random textures representing the surface of steel structures with fatigue cracks are created and mapped onto a 3D graphics model. This model is then used to generate synthetic images for various lighting conditions and camera angles. A fully convolutional network is trained for two cases: (1) using only real-world data, and (2) using both synthetic and real-world data. With synthetic data augmentation in the training process, the crack identification performance of the neural network on the test dataset improves from 35% to 40% intersection over union (IoU) and from 49% to 62% precision, demonstrating the efficacy of the proposed approach.
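For reference, the two reported pixel-wise metrics, intersection over union (IoU) and precision, can be computed from binary prediction and ground-truth masks as in the sketch below (the example arrays are hypothetical).

```python
# Pixel-wise IoU and precision for binary crack masks (prediction vs. ground truth).
import numpy as np

def iou_and_precision(pred, gt):
    """pred, gt: arrays of the same shape; nonzero = crack pixel."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()   # crack pixels correctly predicted
    fp = np.logical_and(pred, ~gt).sum()  # background predicted as crack
    fn = np.logical_and(~pred, gt).sum()  # crack pixels missed
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return iou, precision

# Hypothetical 2x3 example masks
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(iou_and_precision(pred, gt))  # (0.5, 0.666...)
```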

Bactericidal Effect of a 275-nm UV-C LED Sterilizer for Escalator Handrails: Optimization of Optical Structure and Evaluation of Sterilization of Six Bacterial Strains

  • Kim, Jong-Oh; Jeong, Geum-Jae; Son, Eun-Ik; Jo, Du-Min; Kim, Myung-Sub; Chun, Dong-Hae; Kim, Young-Mog; Ryu, Uh-Chan
    • Current Optics and Photonics, v.6 no.2, pp.202-211, 2022
  • In the pasteurization of escalator handrails using ultraviolet (UV) sterilizers, a combination of light distribution and escalator speed has priority over other important factors. Furthermore, since part of the escalator handrail has a curved structure, proper design is needed to improve the sterilization rate on the surfaces touched by users. In this paper, two types of sterilizers satisfying these conditions are manufactured with 275-nm UV-C LEDs, after modeling the three-dimensional (3D) structure of an escalator handrail and simulating optical distributions of UV-C irradiation on the handrail's surface according to light-emitting diode (LED) positions and reflector variations in the sterilizers. Pasteurization experiments with the UV-C LED sterilizers are conducted on six types of gram-positive and gram-negative bacteria, with exposure times of 0.2, 5, and 15 s at an actual installation distance of 20 mm. The sterilization rates for the gram-positive bacteria are 10.63% to 27.94% at 0.2 s, 89.44% to 96.30% at 5 s, and 99.64% to 99.88% at 15 s. Those for the gram-negative bacteria are 57.70% to 77.63% at 0.2 s, 98.90% to 99.49% at 5 s, and 99.88% to 99.99% at 15 s. The power consumption of the UV-C LED sterilizer is about 8 W, which can be supplied by a self-generation module instead of an external power supply.

Multi-camera-based 3D Human Pose Estimation for Close-Proximity Human-robot Collaboration in Construction

  • Sarkar, Sajib; Jang, Youjin; Jeong, Inbae
    • International conference on construction engineering and project management, 2022.06a, pp.328-335, 2022
  • With the advance of robot capabilities and functionalities, construction robots that assist construction workers have been increasingly deployed on construction sites to improve safety, efficiency, and productivity. For close-proximity human-robot collaboration on construction sites, robots need to be aware of the context, especially construction workers' behavior, in real time to avoid collisions with workers. To recognize human behavior, most previous studies obtained 3D human poses using a single camera or an RGB-depth (RGB-D) camera. However, single-camera detection has limitations such as occlusion, detection failure, and sensor malfunction, and an RGB-D camera may suffer from interference from lighting conditions and surface materials. To address these issues, this study proposes a novel method of 3D human pose estimation that extracts the 2D location of each joint from multiple images captured simultaneously from different viewpoints, fuses each joint's 2D locations, and estimates the 3D joint location. For higher accuracy, a probabilistic representation is used to extract the 2D joint locations, treating each joint location extracted from an image as a noisy partial observation. The 3D human pose is then estimated by fusing the probabilistic 2D joint locations to maximize the likelihood. The proposed method was evaluated in both simulation and laboratory settings, and the results demonstrated the accuracy of the estimation and its feasibility in practice. This study contributes to ensuring human safety in close-proximity human-robot collaboration by providing a novel method of 3D human pose estimation.
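The fusion step is described only at a high level; one common concretization, assumed here for illustration rather than taken from the paper, is confidence-weighted linear (DLT) triangulation of each joint from its 2D detections in several calibrated views:

```python
# Confidence-weighted linear (DLT) triangulation of one joint from multiple views.
# A generic multi-view fusion sketch, not the paper's exact probabilistic
# likelihood-maximization formulation.
import numpy as np

def triangulate_joint(proj_mats, points_2d, confidences=None):
    """proj_mats: list of 3x4 camera projection matrices (numpy arrays).
    points_2d: list of (u, v) joint detections, one per view.
    confidences: optional per-view weights (e.g. detector scores)."""
    if confidences is None:
        confidences = np.ones(len(proj_mats))
    rows = []
    for P, (u, v), w in zip(proj_mats, points_2d, confidences):
        # Each view contributes two linear constraints on the homogeneous 3D point X:
        #   u * (P_3 . X) - P_1 . X = 0   and   v * (P_3 . X) - P_2 . X = 0
        rows.append(w * (u * P[2] - P[0]))
        rows.append(w * (v * P[2] - P[1]))
    A = np.stack(rows)
    # Least-squares solution: right singular vector with the smallest singular value
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```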
