• Title/Summary/Keyword: Camera Work

Search Results: 499

A 3DCGI workflow proposal for reducing the rendering time of drama VFX (드라마 VFX의 렌더링 시간 단축을 위한 3DCGI 워크플로우 제안)

  • Baek, Kwang Ho; Ji, Yun; Lee, Byung Chun; Yun, Tae Soo
    • Journal of the Korea Institute of Information and Communication Engineering, v.24 no.8, pp.1006-1014, 2020
  • Consumer expectations for drama VFX have risen as the growth of OTT services has increased the influx of overseas dramas, but the tight schedules of the current drama production system negatively affect the quality and completeness of the work. This paper proposes rendering backgrounds with a game engine and rendering subjects with the background HDRI applied as environment lighting. The proposed workflow is expected to shorten VFX production time and improve VFX quality. The workflow was verified in three steps: comparing a game engine against conventional rendering software under the same lighting environment and camera distance, comparing the existing rendering method against rendering with the generated background HDRI, and validating the proposed workflow as a whole. For verification, quantitative evaluation was performed using the structural similarity index (SSIM) and histogram comparison. This study is expected to be one option for improving VFX quality while reducing production risk.
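
For context on the quantitative evaluation mentioned above, here is a minimal sketch (not the paper's code) of comparing two renders with SSIM and a histogram; the image file names and the use of scikit-image/OpenCV are assumptions for illustration.

```python
# Illustrative sketch: comparing a game-engine render against a conventional
# render with SSIM and a histogram, as the abstract describes.
# Assumes two same-sized images "engine_render.png" and "offline_render.png".
import cv2
from skimage.metrics import structural_similarity as ssim

a = cv2.imread("engine_render.png", cv2.IMREAD_GRAYSCALE)
b = cv2.imread("offline_render.png", cv2.IMREAD_GRAYSCALE)

# SSIM close to 1.0 means the two renders are structurally similar.
score, _ = ssim(a, b, full=True)

# Histogram comparison (correlation metric): 1.0 means identical distributions.
hist_a = cv2.calcHist([a], [0], None, [256], [0, 256])
hist_b = cv2.calcHist([b], [0], None, [256], [0, 256])
hist_score = cv2.compareHist(hist_a, hist_b, cv2.HISTCMP_CORREL)

print(f"SSIM: {score:.3f}, histogram correlation: {hist_score:.3f}")
```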

B-COV: Bio-inspired Virtual Interaction for 3D Articulated Robotic Arm for Post-stroke Rehabilitation during Pandemic of COVID-19

  • Allehaibi, Khalid Hamid Salman; Basori, Ahmad Hoirul; Albaqami, Nasser Nammas
    • International Journal of Computer Science & Network Security, v.21 no.2, pp.110-119, 2021
  • The coronavirus (COVID-19) is a contagious virus that has reached almost every part of the world. The pandemic forced many countries into lockdowns and stay-at-home policies to reduce the spread of the virus and the number of victims. Interaction between humans and robots is a popular research subject worldwide. In medical robotics, the primary challenge is to implement natural interaction between robots and human users. Human communication consists of dynamic processes that involve joint attention and mutual engagement, and coordinated care involves agents sharing behaviours, events, interests, and contexts over time. Because a robotic arm is an expensive and complicated system, robot simulators are widely used instead for rehabilitation purposes in medicine, and natural interaction is necessary for disabled persons to work with such a simulator. This article proposes a low-cost rehabilitation system built around an arm-gesture tracking system based on a depth camera, which captures and interprets human gestures and uses them as interactive commands for a robot simulator to perform specific tasks on a 3D block. The results show that the proposed system can help patients control the rotation and movement of the 3D arm using their hands. Pilot testing with healthy subjects yielded encouraging results: participants could synchronize their actions with the 3D robotic arm to perform several repetitive tasks while expending 19,920 J of energy (kg·m²·s⁻²), which is in the medium range. We therefore relate this energy to rehabilitation performance as an initial stage; it can be improved further with additional repetitive exercise to speed up the recovery process.
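
As a loose illustration of how a tracked hand position can be turned into simulator commands (this is not the authors' implementation; the function, thresholds, and coordinate convention are assumptions for the sketch):

```python
# Illustrative sketch: map a depth-camera hand position to a simple
# rotate/lift command for a simulated arm. Thresholds are placeholders.
import math

def hand_to_command(hand_xyz, rest_xyz, dead_zone=0.05):
    """Map a tracked hand position (metres, camera frame) to a command."""
    dx = hand_xyz[0] - rest_xyz[0]
    dy = hand_xyz[1] - rest_xyz[1]
    if math.hypot(dx, dy) < dead_zone:        # ignore jitter near the rest pose
        return ("idle", 0.0)
    angle = math.degrees(math.atan2(dy, dx))  # direction of the gesture
    if -45 <= angle < 45:
        return ("rotate_right", abs(dx))
    if 135 <= angle or angle < -135:
        return ("rotate_left", abs(dx))
    return ("lift", abs(dy)) if dy > 0 else ("lower", abs(dy))

# Example: hand moved 20 cm to the right of the rest position
print(hand_to_command((0.20, 0.0, 0.8), (0.0, 0.0, 0.8)))  # ('rotate_right', 0.2)
```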

Analysis on Werner Bischof's Korean War Documentary Photos (베르너 비숍의 한국전쟁 다큐멘터리 사진 분석)

  • Jung, Eun Jin; Kim, Jin Soo; Yang, Jong Hoon
    • The Journal of the Korea Contents Association, v.22 no.6, pp.160-174, 2022
  • This research analyzes the activities and key photographs of Werner Bischof, a war photographer during the Korean War. Bischof entered Korea twice, in 1951 and 1952, on assignment as a war correspondent. Drawing on his Korean War photographs, essays, letters, and photobooks, the study sheds light on the background of his assignments and activities, and analyzes the themes and characteristics of key photographs taken during his first and second visits. Bischof created work centered on people, guided in particular by the question of what happens to civilians in a war-torn area. During his first assignment he photographed civilians in Sanyangri to publicize the suffering of civilians and the tragedy brought by war. During his second assignment he critiqued ideological indoctrination and portrayed the lives of prisoners of war from a humanist point of view. The latter work is characterized by a clear contrast of black and white, image composition built around its subjects, and an outstanding photographic perspective. His wartime work was a search for civilian suffering and humanism in the Korean War, and this enduring consciousness sets him apart from other war photographers.

Design of Robot Arm for Service Using Deep Learning and Sensors (딥러닝과 센서를 이용한 서비스용 로봇 팔의 설계)

  • Pak, Myeong Suk; Kim, Kyu Tae; Koo, Mo Se; Ko, Young Jun; Kim, Sang Hoon
    • KIPS Transactions on Software and Data Engineering, v.11 no.5, pp.221-228, 2022
  • With the application of artificial intelligence technology, robots can provide efficient services in everyday life. Unlike industrial manipulators that perform simple repetitive work, this study presents a design for a 6-degree-of-freedom robot arm and methods for intelligent object search and movement, for use alone or in collaboration without placement restrictions in the service robot field, and verifies its performance. Using a depth camera and deep learning in the ROS environment on the embedded board included in the robot arm, the arm detects objects and moves to the object region through inverse kinematics analysis. In addition, on contact with an object, the arm can grasp and move it accurately by analyzing the force sensor values. To verify the performance of the manufactured robot arm, experiments were conducted on accurate positioning of objects through deep learning and image processing, on motor control, and on object separation; finally, the arm was tested on separating various cups commonly used in cafes to confirm that it operates in practice.
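
As a small, self-contained illustration of the inverse kinematics step mentioned above (not the authors' 6-DOF solver; a planar 2-link arm with placeholder link lengths is assumed):

```python
# Illustrative sketch: analytic inverse kinematics for a planar 2-link arm,
# the kind of calculation used to move an end-effector to a detected object.
import math

def ik_2link(x, y, l1=0.30, l2=0.25):
    """Return (shoulder, elbow) angles in radians reaching point (x, y), elbow-down."""
    d2 = x * x + y * y
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(cos_elbow) > 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

# Example: reach a cup detected 40 cm ahead and 10 cm up in the arm's plane.
print([round(math.degrees(a), 1) for a in ik_2link(0.40, 0.10)])
```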

Deep Learning-based Depth Map Estimation: A Review

  • Abdullah, Jan; Safran, Khan; Suyoung, Seo
    • Korean Journal of Remote Sensing, v.39 no.1, pp.1-21, 2023
  • In this technically advanced era, we are surrounded by smartphones, computers, and cameras, which help us store visual information on 2D image planes. However, such images lack the 3D spatial information about the scene that is very useful for scientists, surveyors, engineers, and even robots. To tackle this problem, depth maps are generated for the respective image planes. A depth map, or depth image, is a single-image representation that carries information along the three spatial axes, i.e., xyz coordinates, where z is the object's distance from the camera axis. Depth estimation is a fundamental task for many applications, including augmented reality, object tracking, segmentation, scene reconstruction, distance measurement, autonomous navigation, and autonomous driving. Much work has been done on computing depth maps. We reviewed the status of depth map estimation across different techniques, study areas, and models applied over the last 20 years, surveying depth-mapping techniques based on both traditional approaches and newly developed deep-learning methods. The primary purpose of this study is to present a detailed review of state-of-the-art traditional depth mapping techniques and recent deep learning methodologies. The study covers the critical points of each method from different perspectives, such as datasets, procedures, types of algorithms, loss functions, and well-known evaluation metrics. It also discusses the subdomains within each method, such as supervised, unsupervised, and semi-supervised approaches, and elaborates on the challenges of the different methods. We conclude with new ideas for future research in depth map estimation.
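
For readers unfamiliar with the evaluation metrics the review refers to, here is a minimal sketch of the commonly reported monocular depth metrics (absolute relative error, RMSE, and the δ < 1.25 accuracy); it is a generic illustration, not code from the review.

```python
# Illustrative sketch: standard depth-map evaluation metrics over valid pixels.
import numpy as np

def depth_metrics(pred, gt):
    """pred, gt: depth arrays in metres; pixels with gt <= 0 are treated as invalid."""
    mask = gt > 0
    p, g = pred[mask], gt[mask]
    abs_rel = np.mean(np.abs(p - g) / g)
    rmse = np.sqrt(np.mean((p - g) ** 2))
    ratio = np.maximum(p / g, g / p)
    delta1 = np.mean(ratio < 1.25)          # fraction of pixels within 25% of truth
    return {"abs_rel": abs_rel, "rmse": rmse, "delta<1.25": delta1}

# Example with toy 2x2 depth maps (metres)
print(depth_metrics(np.array([[1.0, 2.1], [3.0, 4.2]]),
                    np.array([[1.0, 2.0], [3.0, 4.0]])))
```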

A Study on Interior Simulation based on Real-Room without using AR Platforms (AR 플랫폼을 사용하지 않는 실제 방 기반 인테리어 시뮬레이션 연구)

  • Choi, Gyoo-Seok; Kim, Joon-Geon; Lim, Chang-Muk
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.22 no.1, pp.111-120, 2022
  • When making a purchase decision, it is essential to make sure that the furniture matches well with the other structures in the room. Moreover, in the untact (contactless) marketing situation caused by the COVID-19 crisis, this is becoming an even more important factor. Accordingly, methods of measuring length using AR (Augmented Reality) have emerged for furniture-arrangement interior simulation with the advent of AR open-source platforms such as ARCore and ARKit. Because this existing AR approach generates a depth map from a flat camera image and involves complex three-dimensional calculations, it shows limitations in work that requires accurate room dimensions from a smartphone. In this paper, we propose a method to accurately measure the size of a room using only the accelerometer and gyroscope sensors built into smartphones, without using ARCore or ARKit. In addition, as an example application of the presented technique, a method for applying a pre-designed interior to each room is presented.
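
A very rough sketch of the sensor-only idea (not the paper's algorithm): distance can be estimated by double-integrating forward acceleration while the gyroscope confirms the heading is held; real implementations need bias calibration and drift handling.

```python
# Illustrative sketch: estimate a walked distance from accelerometer samples
# by double integration, in the spirit of measuring a room with phone sensors.
def integrate_distance(accel_samples, dt):
    """accel_samples: forward acceleration in m/s^2 at fixed interval dt (s)."""
    velocity = 0.0
    distance = 0.0
    for a in accel_samples:
        velocity += a * dt          # integrate acceleration -> velocity
        distance += velocity * dt   # integrate velocity -> distance
    return distance

# Example: accelerate at 0.5 m/s^2 for 1 s, decelerate for 1 s, sampled at 100 Hz.
samples = [0.5] * 100 + [-0.5] * 100
print(round(integrate_distance(samples, 0.01), 2), "m")  # ~0.5 m
```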

Behavioral responses to cow and calf separation: separation at 1 and 100 days after birth

  • Sarah E. Mac; Sabrina Lomax; Cameron E. F. Clark
    • Animal Bioscience, v.36 no.5, pp.810-817, 2023
  • Objective: The aim was to compare the behavioral responses to full separation of cows and calves maintained together for 100 days or for 24 h. Methods: Twelve Holstein-Friesian cow-calf pairs were enrolled into either a treatment or an industry group (n = 6 cow-calf pairs/group). The treatment cows and calves were maintained on pasture together for 106±8.6 d and temporarily separated twice a day for milking. The industry cows and their calves were separated within 24 h postpartum. Triaxial accelerometer neck-mounted sensors were fitted to cows 3 weeks before separation to measure hourly rumination and activity. Before separation, cow and calf behavior was observed by scan sampling for 15 min. During the separation process, the frequency of vocalizations and turn-arounds was recorded. At separation, cows were moved to an observation pen where behavior was recorded for 3 d. A CCTV camera was used to record video footage of cows within the observation pens, and behavior was documented from the videos in 15 min intervals across the 3 d. Results: Before separation, industry calves were more likely to be near their mother than treatment calves. During the separation process, vocalization and turn-around behavior was similar between groups. After full separation, treatment cows vocalized three times more than industry cows. However, the frequency of time spent close to the barrier, standing, lying, walking, and eating was similar between industry and treatment cows. Treatment cows had greater rumination duration and were more active than industry cows. Conclusion: These findings suggest a similar behavioral response to full calf separation, but a greater occurrence of vocalizations, from cows maintained in a long-term, pasture-based, cow-calf rearing system compared to cows separated within 24 h. However, further work is required to assess the impact of full separation on calf behavior.

Surface exposure age of (25143) Itokawa estimated from the number of mottles on the boulder

  • Jin, Sunho; Ishiguro, Masateru
    • The Bulletin of The Korean Astronomical Society, v.45 no.1, pp.45.2-46, 2020
  • Various processes, such as space weathering and granular convection, occur on asteroid surfaces, and estimating the surface exposure timescale is essential for understanding them. The Hayabusa mission target, asteroid (25143) Itokawa (Sq-type), is the only asteroid whose age has been estimated from remote sensing observations as well as from sample analyses in laboratories. There is, however, an unignorable discrepancy between the timescales derived from these different techniques. The ages estimated from the solar flare track density and the weathered rim thickness of regolith samples range between 10² and 10⁴ years [1][2]. On the contrary, the ages estimated from the crater size distributions and the spectra range from 10⁶ to 10⁷ years [3][4]. Note that both age estimation methods share a common drawback: since evidence of regolith migration is found on the surface of Itokawa [5], surficial particles would be rejuvenated by granular convection, and the erasure of craters by regolith migration would likewise affect the crater size distribution. We propose a new technique to estimate surface exposure age, focusing on the bright mottles on large boulders, which is less prone to granular convection. These mottles are expected to be formed by impacts of mm- to cm-sized interplanetary particles. Together with a well-known flux model of interplanetary dust particles (e.g., Grün, 1985 [6]), we investigated the timescale for such mottles to form before they darken again through space weathering. In this work, we used three AMICA (Asteroid Multi-band Imaging Camera) v-band images taken on 2005 November 12 during the close approach to the asteroid. As a result, we found the surface exposure timescales of these boulders to be on the order of 10⁶ years. In this meeting, we will introduce our data analysis technique and evaluate the consistency with previous research for a better understanding of the evolution of this near-Earth asteroid.
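
As a rough illustration of the counting argument described above (not the authors' calibration), the exposure age follows from dividing the number of mottles counted on a boulder by the rate at which mottle-forming impacts accumulate; the symbols below are assumptions for the sketch.

```latex
% t_exp : exposure age of the boulder surface
% N     : number of bright mottles counted on the boulder
% A     : exposed area of the boulder
% F(>m) : flux of interplanetary particles above the minimum mass m that
%         produces a visible mottle (e.g., from the Grün et al. 1985 model)
t_{\mathrm{exp}} \approx \frac{N}{F(>m)\,A},
\qquad \text{valid while } t_{\mathrm{exp}} \lesssim t_{\mathrm{weathering}}
```

The approximation only holds while the exposure age is shorter than the space-weathering timescale that erases mottles, which is why the abstract treats darkening as the limiting clock.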


Study of Deep Learning Based Specific Person Following Mobility Control for Logistics Transportation (물류 이송을 위한 딥러닝 기반 특정 사람 추종 모빌리티 제어 연구)

  • Yeong Jun Yu; SeongHoon Kang; JuHwan Kim; SeongIn No; GiHyeon Lee; Seung Yong Lee; Chul-hee Lee
    • Journal of Drive and Control, v.20 no.4, pp.1-8, 2023
  • In recent years, robots have been utilized in various industries to reduce workload and enhance work efficiency. A following mobility platform offers users convenience by autonomously tracking specified locations and targets without additional equipment such as forklifts or carts. In this paper, deep learning techniques are employed to recognize individuals and assign each a unique identifier, so that a specific person can be recognized even among multiple people. The distance and angle between the robot and the targeted individual are then transmitted to the respective controllers. Furthermore, this study explores control methods for a mobility platform that follows a specific person, using Simultaneous Localization and Mapping (SLAM) and Proportional-Integral-Derivative (PID) control. In the PID approach, a genetic algorithm is employed to extract the optimal gain values, and PID performance is subsequently evaluated through simulation. The SLAM approach generates a map by synchronizing data from a 2D LiDAR and a depth camera using Real-Time Appearance-Based Mapping (RTAB-Map). Experiments compare and analyze the performance of the two control methods, visualizing the paths of both the person and the following mobility platform.
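
As a generic illustration of the PID side of the comparison (not the authors' controller; the gains and set distance below are placeholders, whereas the paper tunes gains with a genetic algorithm):

```python
# Illustrative sketch: a discrete PID loop that turns the measured distance and
# bearing to the followed person into linear and angular velocity commands.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

dt = 0.05                                            # 20 Hz control loop
linear_pid = PID(kp=0.8, ki=0.05, kd=0.1, dt=dt)     # keeps a set following distance
angular_pid = PID(kp=1.5, ki=0.0, kd=0.2, dt=dt)     # keeps the person centred

target_distance = 1.0                                # desired gap to the person (m)
distance, bearing = 1.8, 0.3                         # example measurement (m, rad)
v = linear_pid.step(distance - target_distance)      # forward speed command
w = angular_pid.step(bearing)                        # turn-rate command
print(f"v = {v:.2f} m/s, w = {w:.2f} rad/s")
```

In the actual system these commands would be sent to the drive controller, with the gains supplied by the genetic-algorithm search described in the abstract.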

A Development of Unbalanced Box Stacking System with High Stability using the Center of Gravity Measurement (무게중심 측정을 이용한 불평형 상자의 고안정 적재 시스템 개발)

  • Seong-Woo Bae; Dae-Gyu Han; Jae-Ho Ryu; Hyeon-hui Lee; Chae-Hun An
    • Journal of the Korean Society of Industry Convergence, v.27 no.1, pp.229-237, 2024
  • The logistics industry is converging with digital technology and growing through various logistics automation systems. However, inspection and loading/unloading, the main tasks in logistics work, still depend on human labor, and the workforce is shrinking as the working-age population declines due to low birth rates and aging. Although much research is being conducted on automated logistics systems to solve these problems, there is little research and development on load-stacking stability, which has the potential to cause significant accidents. In this study, loading boxes of various sizes and center-of-gravity positions were set up, and a method for stacking them with high stability is presented. The size of a loading box is measured using a depth camera. The box's weight and center of gravity are measured and estimated by a purpose-built device with four load cells. The measurement error is evaluated through repeated experiments and corrected using the least squares method. The robot arm then performs stacking by determining the target position so that the centers of gravity of boxes with unbalanced masses, arriving in random order, are aligned as they are stacked. All processes were automated, and the results were verified by experimentally confirming the stacking stability.
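
A minimal sketch of the center-of-gravity estimate from four load cells (not the authors' device code; plate dimensions and corner readings are placeholder values):

```python
# Illustrative sketch: locate a box's centre of gravity from four load-cell
# readings at the corners of a weighing plate using a moment balance.
def center_of_gravity(f, width, depth):
    """f: forces (N) at corners [front-left, front-right, rear-left, rear-right]."""
    total = sum(f)
    # Moment balance: the CoG x/y position is the force-weighted corner position.
    x = (f[1] + f[3]) / total * width    # 0 at the left edge, width at the right
    y = (f[2] + f[3]) / total * depth    # 0 at the front edge, depth at the rear
    return total, x, y

# Example: a roughly 12 kg box (about 117.7 N) biased toward the front-right corner
weight, x, y = center_of_gravity([20.0, 50.0, 15.0, 32.7], width=0.40, depth=0.30)
print(f"weight {weight:.1f} N, CoG at x={x*100:.1f} cm, y={y*100:.1f} cm")
```

The least squares correction mentioned in the abstract would then be fitted against known reference weights and positions to remove systematic error in these readings.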