• Title/Summary/Keyword: scene image

Photorealistic Building Modelling and Visualization in 3D GIS (3차원 GIS의 현실감 부여 빌딩 모델링 및 시각화에 관한 연구)

  • Song, Yong Hak;Sohn, Hong Gyoo;Yun, Kong Hyun
    • KSCE Journal of Civil and Environmental Engineering Research / v.26 no.2D / pp.311-316 / 2006
  • Although geospatial information systems are widely used in many different fields as a powerful tool for spatial analysis and decision-making, their capabilities for handling realistic 3-D urban environments are very limited. The objective of this work is to integrate recent developments in 3-D modeling and visualization into GIS to enhance its 3-D capabilities. To achieve a photorealistic view, building models are collected from a pair of aerial stereo images. Roof and wall textures are obtained from ortho-rectified aerial imagery and ground photography, respectively. This study is implemented using ArcGIS as the work platform and ArcObjects and Visual Basic as development tools. Presented in this paper are the 3-D geometric modeling and its data structure, texture creation, and the association of textures with the geometric model. As a result, photorealistic views of the Purdue University campus are created and rendered with ArcScene.
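The face-texture association described above can be sketched as a minimal data structure (an illustrative Python sketch, not the paper's ArcObjects/Visual Basic implementation; the file names are hypothetical):

```python
# Minimal sketch: each building face stores its 3D boundary ring plus a
# texture reference (ortho-rectified aerial image for roofs, ground
# photography for walls).
from dataclasses import dataclass, field

@dataclass
class Face:
    vertices: list   # closed ring of (x, y, z) tuples
    texture: str     # image file associated with this face
    kind: str        # "roof" or "wall"

@dataclass
class Building:
    faces: list = field(default_factory=list)

    def textures(self, kind):
        return [f.texture for f in self.faces if f.kind == kind]

b = Building()
b.faces.append(Face([(0, 0, 10), (10, 0, 10), (10, 10, 10), (0, 10, 10)],
                    "roof_ortho.png", "roof"))
b.faces.append(Face([(0, 0, 0), (10, 0, 0), (10, 0, 10), (0, 0, 10)],
                    "wall_photo.png", "wall"))
print(b.textures("roof"))  # ['roof_ortho.png']
```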

Object Detection Based on Deep Learning Model for Two Stage Tracking with Pest Behavior Patterns in Soybean (Glycine max (L.) Merr.)

  • Yu-Hyeon Park;Junyong Song;Sang-Gyu Kim;Tae-Hwan Jun
    • Proceedings of the Korean Society of Crop Science Conference / 2022.10a / pp.89-89 / 2022
  • Soybean (Glycine max (L.) Merr.) is a representative food resource. To preserve the integrity of soybean, it is necessary to protect soybean yield and seed quality from various pests and diseases. Riptortus pedestris is a well-known insect pest that causes the greatest loss of soybean yield in South Korea. This pest not only directly reduces yields but also causes disorders and diseases in plant growth. Unfortunately, no resistant soybean resources have been reported. Therefore, it is necessary to identify the distribution and movement of Riptortus pedestris at an early stage to reduce the damage it causes. Conventionally, the diagnosis of agronomic traits related to pest outbreaks has been performed by the human eye. However, because human vision is subjective and inconsistent, this approach is time-consuming, labor-intensive, and requires the assistance of specialists. Therefore, the responses and behavior patterns of Riptortus pedestris to the scent of mixture R were visualized with a 3D model from the perspective of artificial intelligence. The movement patterns of Riptortus pedestris were analyzed using time-series image data, and classification was performed through visual analysis based on a deep learning model. In object tracking implemented with a YOLO-series model, the movement paths of the pests show a negative reaction to mixture R in the video scenes. As a result of 3D modeling using the x, y, and z axes of the tracked objects, 80% of the subjects showed behavioral patterns consistent with the treatment of mixture R. These studies are also being conducted in soybean fields, and it will be possible to preserve soybean yield by applying a pest control platform at the early growth stage of soybeans.
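The avoidance test on the tracked 3D paths can be illustrated with a small sketch (hypothetical function names, coordinates, and threshold; the study obtained the paths with YOLO-based tracking, which is not reproduced here):

```python
# Illustrative sketch: given a tracked pest's time-series 3D path and the
# position of the scent source, decide whether the pest moved away from
# (i.e. avoided) the source over the clip.
import math

def shows_avoidance(path, source, margin=0.0):
    """path: list of (x, y, z) positions over time; source: (x, y, z)."""
    start, end = path[0], path[-1]
    # Avoidance = the pest ends farther from the source than it started.
    return math.dist(end, source) > math.dist(start, source) + margin

path = [(0.0, 0.0, 0.0), (1.0, 0.5, 0.0), (2.5, 1.0, 0.2)]
source = (-1.0, 0.0, 0.0)
print(shows_avoidance(path, source))  # True: the pest ends farther from the source
```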

Development of a Prototype System for Aquaculture Facility Auto Detection Using KOMPSAT-3 Satellite Imagery (KOMPSAT-3 위성영상 기반 양식시설물 자동 검출 프로토타입 시스템 개발)

  • KIM, Do-Ryeong;KIM, Hyeong-Hun;KIM, Woo-Hyeon;RYU, Dong-Ha;GANG, Su-Myung;CHOUNG, Yun-Jae
    • Journal of the Korean Association of Geographic Information Studies / v.19 no.4 / pp.63-75 / 2016
  • Because the country is surrounded by the ocean on three sides, aquaculture has historically been an important source of marine products. Surveys on production have recently been conducted to systematically manage aquaculture facilities. Based on the survey results, pricing controls on marine products have been implemented to stabilize local fishery resources and to ensure a minimum income for fishermen. Such surveys of aquaculture facilities depend on the manual digitization of aerial photographs each year. Surveys incorporating manual digitization of high-resolution aerial photographs can evaluate aquaculture accurately with the knowledge of experts who are aware of each aquaculture facility's characteristics and deployment. However, using aerial photographs has monetary and time limitations for monitoring aquaculture resources with different life cycles, and it also requires a number of experts. Therefore, in this study, we investigated a prototype system for automatically detecting boundary information and monitoring aquaculture facilities based on satellite images. KOMPSAT-3 (13 scenes), a domestic high-resolution satellite, provided the imagery, collected between October and April, a period in which many aquaculture facilities were operating. An ANN classification method was used to automatically detect facility types such as cage, longline, and buoy. Furthermore, shapefiles were generated using a digitizing image-processing method that incorporates polygon generation techniques. The newly developed prototype detected aquaculture facilities at a rate of 93%. The suggested method not only overcomes the limits of the existing monitoring method using aerial photographs but also assists experts in detecting aquaculture facilities. Aquaculture facility detection systems should be developed further through the application of image-processing techniques and the classification of aquaculture facilities. Such systems will assist in related decision-making through aquaculture facility monitoring.
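As a rough illustration of the ANN classification step, here is a toy NumPy network trained on made-up two-dimensional features standing in for the three facility types (this is our sketch, not the study's classifier, and the features bear no relation to the real KOMPSAT-3 inputs):

```python
# Toy one-hidden-layer ANN with softmax output, trained by full-batch
# gradient descent to separate three synthetic "facility type" clusters
# (0 = cage, 1 = longline, 2 = buoy; features and labels are fabricated).
import numpy as np

rng = np.random.default_rng(0)
centers = [(0.2, 0.8), (0.8, 0.2), (0.5, 0.5)]
X = np.vstack([rng.normal(c, 0.05, size=(30, 2)) for c in centers])
y = np.repeat([0, 1, 2], 30)

W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 3)); b2 = np.zeros(3)
for _ in range(500):
    H = np.tanh(X @ W1 + b1)                       # hidden activations
    Z = H @ W2 + b2
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)              # softmax probabilities
    G = (P - np.eye(3)[y]) / len(X)                # cross-entropy gradient
    gW2 = H.T @ G; gb2 = G.sum(0)
    GH = (G @ W2.T) * (1 - H ** 2)                 # backprop through tanh
    gW1 = X.T @ GH; gb1 = GH.sum(0)
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= 1.0 * g

pred = np.argmax(np.tanh(X @ W1 + b1) @ W2 + b2, axis=1)
print("training accuracy:", (pred == y).mean())
```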

Auto Frame Extraction Method for Video Cartooning System (동영상 카투닝 시스템을 위한 자동 프레임 추출 기법)

  • Kim, Dae-Jin;Koo, Ddeo-Ol-Ra
    • The Journal of the Korea Contents Association / v.11 no.12 / pp.28-39 / 2011
  • As broadband multimedia technologies have developed, the commercial market for digital content has also spread widely. Above all, the digital cartoon market, such as internet cartoons, has grown rapidly, so video cartooning has been researched continuously to address the shortage and limited variety of cartoons. Until now, video cartooning systems have focused on non-photorealistic rendering and word balloons, but meaningful frame extraction must take priority when a cartooning system is applied as a service. In this paper, we propose a new automatic frame extraction method for a video cartooning system. First, we separate the video and audio tracks of a movie and extract feature parameters such as MFCC and ZCR from the audio data. The audio signal is classified into speech, music, and speech+music by comparison with already-trained audio data using a GMM classifier, so that speech regions can be identified. For the video, we extract frames using a general scene-change detection method, such as the histogram method, and then extract frames that are meaningful for the cartoon by applying face detection to the already-extracted frames. After that, scene-transition frames containing faces within the speech regions are extracted automatically, and frames suitable for movie cartooning are selected from the scene-transition frames extracted over continuous periods in the time domain.
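The histogram-based scene-change detection named above is a standard technique; a minimal sketch, assuming a 16-bin gray histogram and an illustrative threshold:

```python
# Sketch of histogram-based scene-change detection: a scene change is
# flagged when the normalized histogram difference between consecutive
# frames exceeds a threshold. Frames here are 2D lists of gray values.
def gray_histogram(frame, bins=16):
    hist = [0] * bins
    for row in frame:
        for v in row:
            hist[min(v * bins // 256, bins - 1)] += 1
    return hist

def is_scene_change(prev, curr, threshold=0.5):
    h1, h2 = gray_histogram(prev), gray_histogram(curr)
    total = sum(h1)
    # Normalized L1 histogram distance, in [0, 1].
    diff = sum(abs(a - b) for a, b in zip(h1, h2)) / (2 * total)
    return diff > threshold

dark = [[10] * 8 for _ in range(8)]
light = [[240] * 8 for _ in range(8)]
print(is_scene_change(dark, light))  # True
print(is_scene_change(dark, dark))   # False
```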

Scene Text Extraction in Natural Images using Hierarchical Feature Combination and Verification (계층적 특징 결합 및 검증을 이용한 자연이미지에서의 장면 텍스트 추출)

  • 최영우;김길천;송영자;배경숙;조연희;노명철;이성환;변혜란
    • Journal of KIISE:Software and Applications / v.31 no.4 / pp.420-438 / 2004
  • Texts contained artificially or naturally in natural images carry significant and detailed information about the scenes. A method that can extract and recognize those texts in real time could be applied to many important applications. In this paper, we suggest a new method that extracts text areas in natural images using the low-level image features of color continuity, gray-level variation, and color variance, and that verifies the extracted candidate regions using a high-level text feature, the stroke; the two levels of features are combined hierarchically. Color continuity is used since most of the characters in the same text region have the same color, and gray-level variation is used since text strokes are distinctive in their gray values against the background. Color variance is also used since text strokes are distinctive in their color values against the background, and this value is more sensitive than gray-level variation. The text-level stroke features are extracted using a multi-resolution wavelet transform on local image areas, and the feature vectors are input to an SVM (Support Vector Machine) classifier for verification. We have tested the proposed method on various kinds of natural images and confirmed that the extraction rates are very high even for images with complex backgrounds.
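As a small illustration of wavelet-based stroke features, here is a one-level 1D Haar transform (the paper applies a multi-resolution 2D transform to local image areas; this simplification is ours):

```python
# One decomposition level of the 1D Haar wavelet transform. Sharp stroke
# edges produce large detail coefficients, while flat background does not,
# which is what makes such coefficients useful as stroke features.
def haar_1d(signal):
    """signal: sequence of even length; returns (approximation, detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

stroke_row = [0, 0, 255, 255, 255, 0, 0, 0]  # a bright stroke on dark background
a, d = haar_1d(stroke_row)
print(d)  # [0.0, 0.0, 127.5, 0.0] -- the edge shows up as a large detail value
```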

Registration Technique of Partial 3D Point Clouds Acquired from a Multi-view Camera for Indoor Scene Reconstruction (실내환경 복원을 위한 다시점 카메라로 획득된 부분적 3차원 점군의 정합 기법)

  • Kim Sehwan;Woo Woontack
    • Journal of the Institute of Electronics Engineers of Korea CI / v.42 no.3 s.303 / pp.39-52 / 2005
  • In this paper, a registration method is presented for registering partial 3D point clouds, acquired from a multi-view camera, for 3D reconstruction of an indoor environment. In general, conventional registration methods require high computational complexity and much time for registration. Moreover, these methods are not robust for 3D point clouds of comparatively low precision. To overcome these drawbacks, a projection-based registration method is proposed. First, depth images are refined based on a temporal property, by excluding 3D points with a large variation, and a spatial property, by filling holes with reference to neighboring 3D points. Second, the 3D point clouds acquired from two views are projected onto the same image plane, and a two-step integer mapping is applied to enable a modified KLT (Kanade-Lucas-Tomasi) tracker to find correspondences. Fine registration is then carried out by minimizing distance errors based on an adaptive search range. Finally, we calculate final colors by referring to the colors of corresponding points, and reconstruct an indoor environment by applying the above procedure to consecutive scenes. The proposed method not only reduces computational complexity by searching for correspondences on a 2D image plane, but also enables effective registration even for 3D points of low precision. Furthermore, only a few color and depth images are needed to reconstruct an indoor environment.
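The projection step that moves the correspondence search onto a 2D image plane can be sketched with a simple pinhole model (the focal length and image center below are illustrative assumptions, not the paper's calibration):

```python
# Sketch: project camera-space 3D points onto a 2D image plane so that
# correspondences can be searched in 2D instead of 3D.
def project(point, f=500.0, cx=320.0, cy=240.0):
    """Pinhole projection of (x, y, z) with z > 0 to pixel (u, v)."""
    x, y, z = point
    return (f * x / z + cx, f * y / z + cy)

cloud = [(0.1, 0.2, 2.0), (-0.3, 0.0, 1.5)]
pixels = [project(p) for p in cloud]
print(pixels)  # [(345.0, 290.0), (220.0, 240.0)]
```

Both views are projected onto the same plane in this way, which is what lets the modified KLT tracker search for matches in 2D.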

A Study on the Visual Expression of the Characters for the Narrative in Animation - A Focus on Skeleton Character in "Coco(2017)" by Pixar - (장편 애니메이션 내러티브를 위한 캐릭터의 시각적 표현에 관한 연구 -픽사(PIXAR) "코코(2017)"의 해골 캐릭터를 중심으로-)

  • Kim, Soong-Hyun
    • Journal of Digital Convergence / v.17 no.12 / pp.451-459 / 2019
  • This study aims to examine how the skeleton characters in Pixar's animation "Coco" are visualized for the narrative of the film, and suggests a direction for developing attractive characters that correspond to the story. First, case studies were conducted on animation narrative, character design, characters' emotional expression, and animations featuring skeleton characters. Based on this, the visual representation of the skeleton characters featured in "Coco" was derived, and their role and function in the animation were analyzed. As a result, expressions by the skeletons' eyes, eyebrows, mouths, lips, and jaws played the most important role in the emotional expression and dialogue in "Coco", and the major characteristics of human facial expression were reflected in the design of the skeleton characters. In addition, various props were used to provide detailed information about the skeleton characters, and their movement was expressed in a way that emphasizes their essential attributes. Finally, the skeletons' symbolic image was strengthened by composing and arranging their images through mise-en-scène. It is expected that this study will serve as a reference for researchers and practitioners concerned with animation characters, and that it will help in developing attractive characters for narrative animation in the future.

A Study on the Expression of Sense of Space in 3D Architectural Visualization Animation (3D 건축 시각화 애니메이션의 공간감 표현에 관한 연구)

  • Kim, Jong Kouk
    • The Journal of the Convergence on Culture Technology / v.7 no.1 / pp.369-376 / 2021
  • 3D architectural visualization animation has become more important in architectural presentations due to the rapid development of digital technology. Unlike games and movies, architectural visualization animation focuses most on delivering visual information, and aims to express the sense of space that viewers would feel in an architectural space, rather than simply providing images for viewing buildings. The sense of space is affected not only by the physical elements of architecture but also by immaterial elements such as light, time, and human actions, and it is more advantageous to express it in animation, which can contain temporality, than in a fixed image. Therefore, the purpose of this study is to search for elements that effectively convey a sense of space in architectural visualization animation. To this end, publicly available works by renowned architectural visualization artists were selected and observed to identify elements that effectively convey a sense of space to viewers. The elements conveying a sense of space that are common to the investigated architectural animations can be classified into the movement and manipulation of the camera, the movement of surrounding objects, changes in the light environment, changes in the weather, the control of time, and the insertion of surreal scenes. This will be followed by a discussion of immersion in architectural content.

Research on factors influencing consumer trust in livestreaming e-commerce (라이브 스트리밍 전자 상거래에서 소비자 신뢰에 영향을 미치는 요인에 관한 연구)

  • Xiao yong Lyu;Jae-Yeon Sim
    • Industry Promotion Research / v.8 no.3 / pp.181-199 / 2023
  • E-commerce is gradually upgrading from traditional text and image formats to short-video and livestreaming formats. Livestreaming e-commerce enriches the content and forms of information dissemination and product display, enhances the consumer's shopping experience, and is gradually becoming the mainstream new consumption scene. However, there are many negative phenomena in the development of livestreaming e-commerce, such as false advertising, counterfeit goods, and various negative events, which seriously affect the level of consumer trust in livestreaming e-commerce. Trust is the core competitive factor in livestreaming e-commerce. Based on previous research on trust theory, and combining the characteristic elements of the "people, goods, and scenes" of livestreaming e-commerce, this article constructs a trust model for livestreaming e-commerce, proposes hypotheses, and shows through empirical research that factors such as store characteristics, livestream host characteristics, brand image, product information, platform reputation, the livestreaming situation, and trust tendency have a significant positive impact on consumer trust. Based on these conclusions, the article provides insights and management suggestions, such as emphasizing the construction of store characteristics, cultivating desirable livestream host characteristics, focusing on product brand building and selection, maintaining the display of product information, selecting suitable livestreaming platforms, and creating rich content for livestreaming situations.

Comparison Among Sensor Modeling Methods in High-Resolution Satellite Imagery (고해상도 위성영상의 센서모형과 방법 비교)

  • Kim, Eui Myoung;Lee, Suk Kun
    • KSCE Journal of Civil and Environmental Engineering Research / v.26 no.6D / pp.1025-1032 / 2006
  • Sensor modeling of high-resolution satellites is a prerequisite for mapping and GIS applications. Sensor models, which describe the geometric relationship between scene and object, fall into two main categories: rigorous and approximate sensor models. A rigorous model is based on the actual geometry of the image-formation process, involving the internal and external characteristics of the implemented sensor. Approximate models, by contrast, require neither a comprehensive understanding of the imaging geometry nor the internal and external characteristics of the imaging sensor, which has attracted great interest within the photogrammetric community. This paper describes a comparison between rigorous and various approximate sensor models that have been used to determine three-dimensional positions, and proposes appropriate sensor models in terms of satellite-imagery usage. Through a case study using IKONOS satellite scenes, rigorous and approximate sensor models were compared and evaluated for positional accuracy in terms of the number of available ground control points. The bias-compensated RFM (Rational Function Model) turned out to be the best among the compared approximate sensor models, and both the modified parallel projection and the parallel-perspective model could be established with a small number of control points. In addition, the affine transformation, one of the approximate sensor models, can be used to determine the planimetric position of high-resolution satellite imagery and to perform image registration between scenes.
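As a hedged sketch of how an RFM maps ground coordinates to an image coordinate, each image coordinate is the ratio of two polynomials in (X, Y, Z), with the bias compensation mentioned above modeled as a simple per-scene offset (real RPC models use 20-term cubic polynomials; the first-order form and all coefficients below are made up for illustration):

```python
# Toy Rational Function Model: image line = P1(X, Y, Z) / P2(X, Y, Z) + bias.
# Coefficients are fabricated; real RPCs have 20 terms per polynomial.
def poly(c, X, Y, Z):
    # c = (c0, c1, c2, c3): constant plus first-order terms only (a simplification)
    return c[0] + c[1] * X + c[2] * Y + c[3] * Z

def rfm_line(X, Y, Z, num, den, bias=0.0):
    return poly(num, X, Y, Z) / poly(den, X, Y, Z) + bias

num = (0.1, 1.0, 0.01, -0.02)
den = (1.0, 0.001, 0.0, 0.0)
print(rfm_line(0.5, 0.2, 0.1, num, den))            # uncompensated line coordinate
print(rfm_line(0.5, 0.2, 0.1, num, den, bias=0.5))  # with a per-scene bias offset
```

The bias term is estimated from a few ground control points, which is why the bias-compensated RFM needs only a small number of controls.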