• Title/Abstract/Keywords: 3D vision

Search results: 929 (processing time: 0.028 s)

이중 축열조를 갖는 축열식 지열원 히트펌프시스템의 노인공동주택 적용 분석연구 (Application analysis to a shared apartment house of heat storage type GSHP system with dual storage tank)

  • 박종우;이상훈;조성환
    • SAREK Conference Proceedings
    • /
    • Proceedings of the SAREK 2008 Winter Annual Conference
    • /
    • pp.27-32
    • /
    • 2008
  • The present study conducts an economic analysis of a heat-storage-type ground source heat pump (HSGSHP) system and a conventional ground source heat pump (GSHP) system installed in the same shared apartment building. Cost items such as the initial cost, annual energy cost, and maintenance cost of each system are considered to analyze the life cycle cost (LCC), and the simple payback period (SPP) against the difference in initial cost is compared. Initial costs follow the Government's basic unit cost of production. The LCC, computed with the present-value method, is used to assess the economic profitability of both systems. The variables used in the LCC analysis are the mean price escalation rate and interest rate over the latest 10 years. The LCC result shows that the HSGSHP (1,050,910,000 won) is more profitable than the GSHP despite a 68.9% higher initial cost, and the SPP shows that the difference in initial cost is recovered in 3.0 years through the difference in annual energy cost.

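The LCC and SPP comparison described above can be sketched as follows; the rates, horizons, and costs below are illustrative placeholders, not figures from the paper:

```python
# Illustrative sketch of present-value LCC and simple payback period (SPP).
# All numbers are placeholders, not values from the paper.

def present_value_lcc(initial_cost, annual_cost, years, interest_rate, escalation_rate):
    """Life cycle cost: initial cost plus annual costs escalated at
    `escalation_rate` and discounted back to present value."""
    lcc = initial_cost
    for t in range(1, years + 1):
        escalated = annual_cost * (1 + escalation_rate) ** t
        lcc += escalated / (1 + interest_rate) ** t
    return lcc

def simple_payback_period(extra_initial_cost, annual_saving):
    """Years for the annual energy saving to recover the extra initial cost."""
    return extra_initial_cost / annual_saving

# Hypothetical comparison of two systems over a 20-year horizon
lcc_a = present_value_lcc(1_000_000_000, 50_000_000, 20, 0.05, 0.03)
lcc_b = present_value_lcc(1_500_000_000, 20_000_000, 20, 0.05, 0.03)
spp = simple_payback_period(1_500_000_000 - 1_000_000_000,
                            50_000_000 - 20_000_000)
print(round(spp, 1))  # ~16.7 years in this toy example
```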

Real-Time Earlobe Detection System on the Web

  • Kim, Jaeseung;Choi, Seyun;Lee, Seunghyun;Kwon, Soonchul
    • International journal of advanced smart convergence
    • /
    • Vol. 10, No. 4
    • /
    • pp.110-116
    • /
    • 2021
  • This paper proposed a real-time earlobe detection system using deep learning on the web. Existing deep-learning-based detection methods usually find independent objects such as cars, mugs, cats, and people. We proposed a way to receive an image through the camera of the user's device in a web environment and detect the earlobe on the server. First, we photographed the user's face with the device camera on the web so that the user's ears were visible. We then sent the photographed face to the server to find the earlobe and, based on the detected result, rendered an earring model on the user's earlobe on the web. We trained an existing YOLO v5 model using a dataset of about 200 images annotated with bounding boxes on the earlobe and estimated the position of the earlobe with the trained model. Through this process, we proposed a real-time earlobe detection system on the web. The proposed method detected earlobes and loaded 3D models on the web in real time.
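As an illustration of the final overlay step, here is a minimal sketch of turning a detected earlobe box into the pixel point where an earring model would be attached; the function name and box format are assumptions, following the common YOLO convention of normalized center/size boxes:

```python
def earring_anchor(bbox, img_w, img_h):
    """Convert a YOLO-style normalized box (cx, cy, w, h) into the pixel
    coordinate at the bottom-center of the detected earlobe, where an
    earring model would naturally hang."""
    cx, cy, w, h = bbox
    x = cx * img_w
    y = (cy + h / 2) * img_h  # bottom edge of the box
    return (round(x), round(y))

# A detection centered at (0.8, 0.5) covering 10% x 10% of a 640x480 frame
print(earring_anchor((0.8, 0.5, 0.1, 0.1), 640, 480))  # (512, 264)
```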

Off-axis self-reference digital holography in the visible and far-infrared region

  • Bianco, Vittorio;Paturzo, Melania;Finizio, Andrea;Ferraro, Pietro
    • ETRI Journal
    • /
    • Vol. 41, No. 1
    • /
    • pp.84-92
    • /
    • 2019
  • Recent advances in digital holography in the far-infrared region of the spectrum have demonstrated its potential use in homeland security as a tool to observe hostile environments in which smoke, flames, and dust impair vision. However, to make this application practical, it is necessary to simplify the optical setup. Here, we show an off-axis, self-reference scheme that spills the reference beam out from the object beam itself and avoids the need for a complex interferometric arrangement. We demonstrate that this scheme allows the reconstruction of high-quality holograms of objects captured under visible as well as far-infrared light exposure. This could pave the way to the industrialization of holographic systems that enable users to see through fire. Moreover, the quantitative nature of the holographic signal is preserved; thus, the reported results demonstrate the possibility of using this setup for optical metrology.

차내 경험의 디지털 트랜스포메이션과 오디오 기반 인터페이스의 동향 및 시사점 (Trends and Implications of Digital Transformation in Vehicle Experience and Audio User Interface)

  • 김기현;권성근
    • Journal of Korea Multimedia Society
    • /
    • Vol. 25, No. 2
    • /
    • pp.166-175
    • /
    • 2022
  • Digital transformation is driving many changes in daily life and industry, and the automobile industry is no exception. In some cases, element technologies from areas described as the metaverse are also being adopted, such as the 3D animated digital cockpit, around view, and voice AI. Through the growth of the mobile market, the norm of human-computer interaction (HCI) has evolved from keyboard-mouse interaction to the touch screen. The core area was the graphical user interface (GUI), and recently the audio user interface (AUI) has partially replaced the GUI. Because it is easy to access and intuitive for the user, the AUI is quickly becoming a common part of the in-vehicle experience (IVE) in particular. The benefits of an AUI are that it frees the driver's eyes and hands, uses fewer screens, lowers interaction costs, is more emotional and personal, and is effective for people with low vision. Nevertheless, when and where to apply a GUI or an AUI are genuinely different decisions, because some information is easier to process visually, while in other cases an AUI is more suitable. This study proposes actively applying an AUI in the near future, based on the context of the various scenes that occur, to improve the IVE.

GPGPU 기반의 효율적인 카메라 ISP 구현 (Implementing Efficient Camera ISP Filters on GPGPUs Using OpenCL)

  • 박종태;;홍진건
    • KIPS Conference Proceedings
    • /
    • Proceedings of the KIPS 2010 Fall Conference
    • /
    • pp.1784-1787
    • /
    • 2010
  • General-purpose graphics processing unit (GPGPU) computing is a technique that utilizes the high-performance many-core processors of high-end graphics cards for general-purpose computations such as 3D graphics, video/image processing, computer vision, scientific computing, HPC, and more. GPGPUs offer a vast amount of raw computing power, but programming them is extremely challenging because of hardware idiosyncrasies. The Open Computing Language (OpenCL) has been proposed as a vendor-independent GPGPU programming interface. OpenCL is very close to the hardware and thus does little to increase GPGPU programmability. In this paper we present how a set of digital camera image signal processing (ISP) filters can be realized efficiently on GPGPUs using OpenCL. Although we found ISP filters to be memory-bound computations, our GPGPU implementations achieve speedups of up to a factor of 64.8 over their sequential counterparts. On GPGPUs, our proposed optimizations achieved speedups between 145% and 275% over the baseline GPGPU implementations. Our experiments were conducted on a GeForce GTX 275; because of OpenCL, we expect our optimizations to be applicable to other architectures as well.
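OpenCL kernels are hard to show compactly, so here is a NumPy stand-in for one typical per-pixel ISP stage (gamma correction via a lookup table, a choice of example that is our assumption, not taken from the paper). It illustrates why such filters are memory-bound: the per-pixel arithmetic is trivial, and the cost is dominated by reading and writing the image:

```python
import numpy as np

def gamma_correct(img, gamma=2.2):
    """Per-pixel gamma correction: each output pixel depends only on the
    corresponding input pixel, so the kernel parallelizes trivially and
    its runtime is dominated by memory traffic rather than arithmetic."""
    # Precompute a 256-entry lookup table to avoid a pow() per pixel.
    lut = ((np.arange(256) / 255.0) ** (1.0 / gamma) * 255.0).astype(np.uint8)
    return lut[img]

frame = np.zeros((2, 2), dtype=np.uint8)
frame[0, 0] = 255
out = gamma_correct(frame)
print(out[0, 0], out[1, 1])  # 255 0
```

In an OpenCL version, the lookup table would live in constant memory and each work-item would process one pixel; the paper's optimizations target exactly this kind of memory-access pattern.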

모션 캡처 시스템에 대한 고찰: 임상적 활용 및 운동형상학적 변인 측정 중심으로 (A Review of Motion Capture Systems: Focusing on Clinical Applications and Kinematic Variables)

  • 임우택
    • Physical Therapy Korea
    • /
    • Vol. 29, No. 2
    • /
    • pp.87-93
    • /
    • 2022
  • To solve pathological problems of the musculoskeletal system based on evidence, a sophisticated analysis of human motion is required. Traditional optical motion capture systems with high validity and reliability have long been utilized in clinical practice. However, because expensive equipment and professional technicians are required to construct them, optical motion capture systems are used only to a limited extent in clinical settings despite their advantages. The development of information technology has overcome this limit and paved the way for motion capture systems that can be operated at low cost. Recently, with the development of computer-vision-based technology and optical markerless tracking, webcam-based 3D human motion analysis has become possible, and its intuitive interface increases user-friendliness for non-specialists. In addition, unlike conventional optical motion capture, this approach makes it possible to analyze the motions of multiple people simultaneously. Non-optical motion capture systems typically use an inertial measurement unit, which does not differ significantly from a conventional optical motion capture system in validity and reliability. With the development of markerless technology and the advent of non-optical motion capture systems, it is a great advantage that human motion analysis is no longer limited to laboratories.

Updating BIM: Reflecting Thermographic Sensing in BIM-based Building Energy Analysis

  • Ham, Youngjib;Golparvar-Fard, Mani
    • International Conference Proceedings
    • /
    • The 6th International Conference on Construction Engineering and Project Management
    • /
    • pp.532-536
    • /
    • 2015
  • This paper presents an automated computer-vision-based system to update BIM data by leveraging multi-modal visual data collected from existing buildings under inspection. Currently, visual inspections are conducted on building envelopes or mechanical systems, and auditors analyze energy-related contextual information to examine whether their performance is maintained as expected by the design. By translating 3D surface thermal profiles into energy performance metrics such as actual point-level R-values, and by mapping such properties to the associated BIM elements using the XML Document Object Model (DOM), the proposed method shortens the energy performance modeling gap between the architectural information in the as-designed BIM and the as-is building condition, which improves the reliability of building energy analysis. The experimental results on existing buildings show that (1) the point-level thermography-based thermal resistance measurement can be automatically matched with the associated BIM elements; and (2) their corresponding thermal properties are automatically updated in the gbXML schema. This paper provides practitioners with insight into how multi-modal visual data can be used to improve the accuracy of building energy modeling for retrofit analysis. Open research challenges and lessons learned from real-world case studies are discussed in detail.

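The point-level R-value measurement can be illustrated with a common steady-state approximation (not necessarily the exact formula used in the paper): the convective flux at the interior surface, inferred from the thermal camera, is equated with the conductive flux through the wall.

```python
def r_value(t_in, t_out, t_surface, h_in=7.7):
    """Steady-state sketch: convective flux into the interior surface,
    q = h_in * (t_in - t_surface), equals the conductive flux through
    the wall, (t_in - t_out) / R.  Solving for R gives a point-level
    thermal resistance.  h_in is an assumed interior surface
    heat-transfer coefficient in W/(m^2*K)."""
    q = h_in * (t_in - t_surface)  # W/m^2, from the thermographic reading
    return (t_in - t_out) / q      # m^2*K/W

# Interior air 20 C, exterior 0 C, interior surface reads 18 C
print(round(r_value(20.0, 0.0, 18.0), 2))  # 1.3
```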

사전 정보가 없는 배송지에서 장애물 탐지 및 배송 드론의 안전 착륙 지점 선정 기법 (Obstacle Detection and Safe Landing Site Selection for Delivery Drones at Delivery Destinations without Prior Information)

  • 서민철;한상익
    • Journal of Auto-vehicle Safety Association
    • /
    • Vol. 16, No. 2
    • /
    • pp.20-26
    • /
    • 2024
  • Delivery using drones has been attracting attention because it can dramatically reduce the time from order to completion of delivery compared to the current delivery system, and pilot projects have been conducted for safe drone delivery. However, the current drone delivery system limits the operational efficiency offered by fully autonomous delivery drones in that drones mainly deliver goods to pre-set landing sites or delivery bases, and the final delivery is still made by humans. In this paper, to overcome these limitations, we propose a vision-sensor-based obstacle detection and landing-site selection algorithm that enables safe drone landing at the orderer's delivery location, and we experimentally demonstrate the possibility of station-to-door delivery. The proposed algorithm builds a 3D point-cloud map based on simultaneous localization and mapping (SLAM) technology and applies a grid segmentation technique, allowing drones to reliably find a landing site even in places without prior information. We aim to verify the performance of the proposed algorithm through streaming data received from the drone.
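A minimal sketch of the grid segmentation idea, with all names and parameters hypothetical: the x-y plane of the point cloud is divided into square cells, and the cell whose heights vary least is chosen as the landing candidate:

```python
import numpy as np

def flattest_cell(points, cell=1.0):
    """Split the x-y plane of an (N, 3) point cloud into `cell`-sized
    squares and return the center of the cell whose z-values vary the
    least -- a simple stand-in for grid segmentation over a SLAM map."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    best, best_var = None, np.inf
    for key in {tuple(k) for k in ij}:
        mask = (ij[:, 0] == key[0]) & (ij[:, 1] == key[1])
        z = points[mask, 2]
        if len(z) < 3:  # too few returns to judge flatness
            continue
        if z.var() < best_var:
            best_var = z.var()
            best = ((key[0] + 0.5) * cell, (key[1] + 0.5) * cell)
    return best

# Flat ground near the origin, a tall scattered obstacle in the next cell
rng = np.random.default_rng(0)
flat = np.column_stack([rng.uniform(0, 1, 50), rng.uniform(0, 1, 50), np.zeros(50)])
box = np.column_stack([rng.uniform(1, 2, 50), rng.uniform(0, 1, 50), rng.uniform(0, 2, 50)])
print(flattest_cell(np.vstack([flat, box])))  # (0.5, 0.5)
```

A real implementation would also check cell size against the drone's footprint and reject cells with too few points, but the variance-per-cell test is the core of the selection.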

스테레오 카메라를 이용한 이동객체의 실시간 추적과 거리 측정 시스템 (Real-time moving object tracking and distance measurement system using stereo camera)

  • 이동석;이동욱;김수동;김태준;유지상
    • Journal of Broadcast Engineering
    • /
    • Vol. 14, No. 3
    • /
    • pp.366-377
    • /
    • 2009
  • In this paper, we implement a real-time system that obtains 3D spatial coordinates (x, y, z) from the left and right images acquired by a stereo camera and provides the user with a sense of presence through a virtual space controlled by those coordinates. In general, disparity estimation for a region of interest estimates the disparity of every pixel in the region; the proposed system instead uses only the 2D center coordinates (x, y) of the region of interest, enabling real-time disparity estimation. Depth is computed from the estimated disparity to obtain the 3D spatial coordinates of the region of interest. The system sets the hand as the region of interest, acquires hand-motion information in real time, and applies it to a virtual space so that the user can manipulate the virtual space. Experiments verify that the proposed real-time system has an average depth-measurement error of 0.68 cm within a distance of 150 cm and a hand-gesture recognition rate of over 90%.
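The depth computation from the estimated disparity follows the standard pinhole stereo relation; the focal length and baseline below are hypothetical, not the paper's calibration:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_cm):
    """Pinhole stereo relation Z = f * B / d: depth is inversely
    proportional to the disparity between the left and right views,
    with f in pixels and the baseline B in the desired depth unit."""
    return focal_px * baseline_cm / disparity_px

# Hypothetical rig: 700 px focal length, 6 cm baseline
print(depth_from_disparity(28.0, 700.0, 6.0))  # 150.0 cm
```

Because only the center pixel of the region of interest needs a disparity estimate, this single division replaces a per-pixel disparity map, which is what makes the real-time constraint achievable.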

이미지 인식을 이용한 비마커 기반 모바일 증강현실 기법 연구 (Non-Marker Based Mobile Augmented Reality Technology Using Image Recognition)

  • 조휘준;김대원
    • Journal of the Institute of Convergence Signal Processing
    • /
    • Vol. 12, No. 4
    • /
    • pp.258-266
    • /
    • 2011
  • As augmented reality (AR) technology has become widespread and its usage has diversified, its applications now appear throughout daily life. Conventional camera-vision-based AR techniques have relied more on markers than on real information from the physical world. Marker-based AR is limited in its range of applications and in providing an environment in which users can become immersed in the service application. In this paper, to implement AR on a smart mobile device, we present a markerless AR technique that uses the device's built-in camera and image processing: the user recognizes an object in a real-world image without any marker, and the 3D content and related information linked to that object are added to the real-world image. Object recognition compares the image against pre-registered reference information; in this process, considering the characteristics of smart mobile devices, the amount of computation for similarity measurement is reduced to improve running speed. After the 3D content is displayed on the device screen, the user can interact with it through touch events on the smart mobile device and, depending on the user's selection, obtain information related to the object through a web browser. Using the described system, we compare and analyze performance differences from existing techniques in object recognition and operation speed, accuracy, and recognition-error detection, and present the results, thereby introducing and experimentally validating an AR technique suitable for the smart mobile environment.
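One common way to cut the cost of similarity measurement, in the spirit of the mobile-speed optimization described above (the specific method here is an assumption, not the paper's): compare cheap coarse intensity histograms to reject poor candidates before any full feature match:

```python
import numpy as np

def histogram_similarity(img_a, img_b, bins=16):
    """Histogram intersection over coarse intensity histograms: a cheap
    pre-check that can discard dissimilar reference candidates before a
    full (and much more expensive) feature-matching step."""
    ha, _ = np.histogram(img_a, bins=bins, range=(0, 256), density=True)
    hb, _ = np.histogram(img_b, bins=bins, range=(0, 256), density=True)
    return np.minimum(ha, hb).sum() / ha.sum()

a = np.full((8, 8), 200, dtype=np.uint8)  # bright patch
b = np.full((8, 8), 10, dtype=np.uint8)   # dark patch
print(histogram_similarity(a, a), histogram_similarity(a, b))  # 1.0 0.0
```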