• Title/Summary/Keyword: The Photorealistic Rendering

Artificial Neural Network Method Based on Convolution to Efficiently Extract the DoF Embodied in Images

  • Kim, Jong-Hyun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.3
    • /
    • pp.51-57
    • /
    • 2021
  • In this paper, we propose a method to find the DoF (depth of field) region, blurred in an image by camera focusing and out-of-focus effects, through an efficient convolutional neural network. Our approach uses an RGB channel-based cross-correlation filter to classify the DoF region in the image efficiently and to build training data for the convolutional neural network. Each training pair consists of an image and its DoF weight map. The DoF weight maps extracted by the cross-correlation filters are smoothed before training to increase the convergence rate of the network. The DoF weight image obtained at test time locates the DoF region in the input image stably. As a result, the proposed method can treat the DoF area as the user's ROI (region of interest) and be applied in various settings such as NPR (non-photorealistic rendering) and object detection.
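
The filtering step the abstract describes can be sketched roughly as follows; the box blur, the squared-difference correlation proxy, and all parameter values are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def box_blur(channel, k=5):
    """Simple box blur (stand-in for the paper's smoothing step)."""
    pad = k // 2
    padded = np.pad(channel.astype(float), pad, mode="edge")
    out = np.zeros(channel.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + channel.shape[0], dx:dx + channel.shape[1]]
    return out / (k * k)

def dof_weight_map(rgb, k=5):
    """Per-channel comparison between the image and its blurred copy:
    sharp (in-focus) regions differ strongly from the blur, so the
    squared difference serves as a crude DoF weight (an assumption,
    not the paper's cross-correlation filter)."""
    weights = np.zeros(rgb.shape[:2])
    for c in range(3):
        ch = rgb[..., c].astype(float)
        weights += (ch - box_blur(ch, k)) ** 2
    weights /= weights.max() + 1e-8
    # smoothing the weight map, as the abstract notes, aids training convergence
    return box_blur(weights, k)
```

A textured (sharp) patch on a flat background receives visibly higher weights than the flat region.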

A Ray-Tracing Algorithm Based On Processor Farm Model (프로세서 farm 모델을 이용한 광추적 알고리듬)

  • Lee, Hyo Jong
    • Journal of the Korea Computer Graphics Society
    • /
    • v.2 no.1
    • /
    • pp.24-30
    • /
    • 1996
  • Ray tracing, one of many photorealistic rendering techniques, requires heavy computation to synthesize images. Parallel processing can reduce this computation time. A parallel ray-tracing algorithm has been implemented and executed for various images on transputer systems. To make the algorithm scalable, a processor-farming technique is exploited: each image is divided and distributed among the farming processors, so scalability and load balancing arise naturally. The parallel algorithm reaches an efficiency of up to 95% with nine processors. However, the best task size is much larger for simple images, because each pixel requires less computation. For large-granularity tasks, efficiency degrades due to the load imbalance that the large tasks cause. Overall, the transputer system behaves as a scalable parallel processing system with a good cost-performance ratio.
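
The processor-farm pattern the abstract describes can be sketched as below; `render_strip` is a hypothetical stand-in for the per-strip ray-tracing work, and a thread pool stands in for the transputer farm:

```python
from concurrent.futures import ThreadPoolExecutor

def render_strip(y0, y1, width):
    """Stand-in for ray tracing one horizontal strip of the image."""
    return [[(x + y) % 256 for x in range(width)] for y in range(y0, y1)]

def farm_render(height, width, workers=4, granularity=8):
    """Processor-farm pattern: the image is cut into fixed-size strip
    tasks handed to a pool of farming processors. Finer granularity
    improves load balance (as the abstract observes for complex images)
    at the cost of more scheduling overhead."""
    tasks = [(y, min(y + granularity, height))
             for y in range(0, height, granularity)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        strips = pool.map(lambda t: render_strip(t[0], t[1], width), tasks)
    image = []
    for s in strips:           # map() preserves task order, so rows reassemble correctly
        image.extend(s)
    return image
```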

Color2Gray using Conventional Approaches in Black-and-White Photography (전통적 사진 기법에 기반한 컬러 영상의 흑백 변환)

  • Jang, Hyuk-Su;Choi, Min-Gyu
    • Journal of the Korea Computer Graphics Society
    • /
    • v.14 no.3
    • /
    • pp.1-9
    • /
    • 2008
  • This paper presents a novel optimization-based, saliency-preserving method for converting color images to grayscale in a manner consistent with the conventional practices of black-and-white photographers. In black-and-white photography, a colored filter called a contrast filter is commonly mounted on the camera to lighten or darken selected colors. In addition, local exposure controls such as dodging and burning are typically employed in the darkroom to change the exposure of local areas within the print without affecting the overall exposure. Our method seeks a digital version of a conventional contrast filter that preserves visually important image features. Furthermore, conventional burning and dodging are combined with image similarity weights to give edge-aware local exposure control over the image space. Our method can be efficiently optimized on the GPU: the CUDA implementation converts 1-megapixel color images to grayscale at interactive frame rates.
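
A digital contrast filter of the kind the abstract mentions amounts to a weighted channel mix; the red-leaning weights below are illustrative defaults, not the paper's optimized, saliency-driven solution:

```python
import numpy as np

def contrast_filter_gray(rgb, weights=(0.8, 0.15, 0.05)):
    """Digital analogue of a colored contrast filter: a weighted mix of the
    RGB channels. A red-leaning weight vector lightens reds and darkens
    blues, as a red glass filter would in black-and-white photography.
    The weights here are hypothetical examples."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # keep overall exposure unchanged
    return rgb.astype(float) @ w          # per-pixel dot product over channels
```

Under this red filter, a pure red pixel converts to a much lighter gray than a pure blue one, which is exactly the selective-contrast behavior the darkroom technique exploits.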

Fast Light Source Estimation Technique for Effective Synthesis of Mixed Reality Scene (효과적인 혼합현실 장면 생성을 위한 고속의 광원 추정 기법)

  • Shin, Seungmi;Seo, Woong;Ihm, Insung
    • Journal of the Korea Computer Graphics Society
    • /
    • v.22 no.3
    • /
    • pp.89-99
    • /
    • 2016
  • One of the fundamental elements in developing mixed reality applications is to effectively analyze the environmental lighting information and apply it to image synthesis. In particular, interactive applications require dynamically varying light sources to be processed in real time and reflected properly in the rendered results. Previous related works are often inappropriate for this, because they are usually designed to synthesize photorealistic images, generating too many (often exponentially increasing) light sources or incurring too heavy a computational cost. In this paper, we present a fast light source estimation technique that searches on the fly for primary light sources in a sequence of video images taken by a camera equipped with a fisheye lens. In contrast to previous methods, our technique can adjust the number of found light sources to approximately the count that the user specifies. Thus, it can be used effectively in Phong-illumination-model-based direct illumination or in soft shadow generation through light sampling over area lights.
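
A minimal greedy sketch of on-the-fly primary-light search with a user-controlled light count might look like this; the brightest-pixel-plus-suppression scheme is an assumption for illustration, not the paper's algorithm:

```python
import numpy as np

def estimate_lights(luminance, n_lights=3, merge_radius=4):
    """Greedy primary-light search on a fisheye luminance map: repeatedly
    take the brightest remaining pixel as a light source, then zero out its
    neighborhood so nearby bright pixels merge into one light. The loop
    caps the result near the user-specified count."""
    lum = luminance.astype(float).copy()
    h, w = lum.shape
    lights = []
    for _ in range(n_lights):
        y, x = np.unravel_index(np.argmax(lum), lum.shape)
        if lum[y, x] <= 0.0:
            break                      # no bright pixels left
        lights.append((int(y), int(x), float(lum[y, x])))
        y0, y1 = max(0, y - merge_radius), min(h, y + merge_radius + 1)
        x0, x1 = max(0, x - merge_radius), min(w, x + merge_radius + 1)
        lum[y0:y1, x0:x1] = 0.0        # suppress this light's neighborhood
    return lights
```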

Texture-based Hatching for Color Image and Video

  • Yang, Hee-Kyung;Min, Kyung-Ha
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.5 no.4
    • /
    • pp.763-781
    • /
    • 2011
  • We present a texture-based hatching technique for color images and video. Whereas existing approaches, designed for triangular mesh models, produce monochrome hatching effects by applying strokes of uniform size, our scheme produces color hatching effects from photographs and video using strokes with a range of sizes. We use a Delaunay triangulation to create a mesh of triangles whose sizes reflect the structure of the input image. At each vertex of this triangulation, the flow of the image is analyzed and a hatching texture based on real pencil strokes is created with the same alignment. This texture is given a modified version of a color sampled from the image and is then used to fill all the triangles adjoining the vertex. The three hatching textures that accumulate in each triangle are averaged, and the result of this process across all the triangles forms the output image. We can also add a paper-texture effect and enhance feature lines in the image. Our algorithm can be applied to video as well. The results are visually pleasing hatching effects similar to those seen in color pencil drawings and oil paintings.
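
The flow analysis that aligns the hatching textures can be sketched from the image gradient; aligning strokes perpendicular to the gradient is a common simplification, not necessarily the paper's exact formulation:

```python
import numpy as np

def hatch_direction(gray):
    """Per-pixel stroke alignment: strokes follow the direction
    perpendicular to the local image gradient, so they run along image
    features rather than across them. Returns an angle field in radians."""
    gy, gx = np.gradient(gray.astype(float))   # derivatives along rows, cols
    # the perpendicular of gradient (gx, gy) is (-gy, gx)
    return np.arctan2(gx, -gy)
```

For an image whose intensity increases left to right (vertical isolines), the computed stroke direction is vertical, as expected.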

Video-based Stained Glass

  • Kang, Dongwann;Lee, Taemin;Shin, Yong-Hyeon;Seo, Sanghyun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.7
    • /
    • pp.2345-2358
    • /
    • 2022
  • This paper presents a method to generate stained-glass animation from video input. The method initially segments the input video volume into several regions, treated as fragments of glass, by mean-shift segmentation. However, the segmentation predominantly results in over-segmentation, producing many tiny segments in highly textured areas. In practice, assembling very tiny or very large glass fragments is avoided to ensure architectural stability in stained-glass manufacturing. Therefore, we use low-frequency components in the segmentation to prevent over-segmentation, and we subdivide segmented regions that are oversized. The subdivision must be coherent between adjacent frames to prevent temporal artefacts such as flickering and the shower-door effect. To subdivide regions with temporal coherence, we build a panoramic image from the segmented regions of the input frames, subdivide it using a weighted Voronoi diagram, and then project the subdivided regions back onto the input frames. To render a stained-glass fragment for each coherent region, we select the best-matching glass fragment from a dataset of real stained-glass fragment images and transfer its color and texture to the region. Finally, applying lead came at the boundaries of the regions in each frame yields a temporally coherent stained-glass animation.
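
The weighted-Voronoi subdivision step can be illustrated with a power-diagram assignment; the sites and weights below are hypothetical inputs, standing in for those derived from the panoramic image:

```python
import numpy as np

def weighted_voronoi_labels(shape, sites, weights):
    """Weighted (power) Voronoi subdivision of an oversized region:
    each pixel joins the site minimizing ||p - s||^2 - w_s, so a site
    with a larger weight claims a larger glass fragment."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    power = np.stack([(ys - sy) ** 2 + (xs - sx) ** 2 - wt
                      for (sy, sx), wt in zip(sites, weights)])
    return np.argmin(power, axis=0)   # per-pixel index of the winning site
```

Raising one site's weight moves the fragment boundary toward the other sites, which is how the subdivision can balance fragment sizes.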

Printmaking Style Effect using Image Processing Techniques (영상처리 기법을 이용한 판화 스타일 효과)

  • Kim, Seung-Wan;Gwun, Ou-Bong
    • The Journal of the Korea Contents Association
    • /
    • v.10 no.4
    • /
    • pp.76-83
    • /
    • 2010
  • In this paper, we propose a method that converts an input photograph into an image with a printmaking feel. That is, the method converts the input photograph into a hand-made rubber-block print style image using image processing techniques such as spatial filters and image bit-block transfer. The process is as follows. First, after detecting edges in the source image, we obtain the first image by deleting noisy lines and points and then sharpening. Second, we obtain the second image by a similar procedure. Finally, we blend the first and second images with a logical AND operation. This processing lets us represent rubber-panel and knife effects. The proposed method also shows that double edge detection is effective for enhancing line width and removing tiny lines.
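
The double-edge-detection-plus-AND pipeline can be sketched as follows; the gradient edge detector, the 3×3 smoothing before the second pass, and the threshold are illustrative stand-ins for the paper's filters:

```python
import numpy as np

def edges(gray, thresh):
    """Crude gradient-magnitude edge map (stand-in for the paper's detector)."""
    gy, gx = np.gradient(gray.astype(float))
    return (gx ** 2 + gy ** 2) > thresh ** 2

def smooth3(gray):
    """3x3 mean filter used to suppress noise before the second pass."""
    p = np.pad(gray.astype(float), 1, mode="edge")
    return sum(p[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0

def printmaking_edges(gray, thresh=8.0):
    """AND-blend of two edge passes: only edges found both in the raw image
    and in its smoothed version survive, so isolated noisy marks vanish
    while principal lines remain, approximating a rubber-block print look."""
    first = edges(gray, thresh)
    second = edges(smooth3(gray), thresh)
    return np.logical_and(first, second)
```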

A Technique of Applying Embedded Sensors to Intuitive Adjustment of Image Filtering Effect in Smart Phone (스마트폰에서 이미지 필터링 효과의 직관적 조정을 위한 내장센서의 적용 기법)

  • Kim, Jiyeon;Kwon, Sukmin;Jung, Jongjin
    • Journal of Korea Multimedia Society
    • /
    • v.18 no.8
    • /
    • pp.960-967
    • /
    • 2015
  • In this paper, we propose a user interface technique based on the embedded sensors of a smart phone, applied to smart phone apps. In particular, we implement an avatar generation application that applies image filtering to photo images. In the application, the embedded sensors serve as an intuitive user interface for adjusting the image filtering effect in real time, after the system has produced the initial filtered avatar, until the user is satisfied with the result. Rather than a simple typed adjustment of parameter values, this technique offers a new, intuitively emotional adjustment method for image filtering applications. The proposed technique can use sound levels from the embedded microphone to adjust the key values of a sketch filter when the user makes a sound. Similarly, it can use coordinate values from the embedded acceleration sensor to adjust the masking values of an oil-painting filter, and brightness values from the embedded light sensor to adjust the masking values of a sharpening filter. Finally, we implement the image filtering application and evaluate the efficiency and effectiveness of the proposed technique.
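
At its core, each sensor-to-filter binding maps a raw sensor reading onto a filter parameter range; the linear, clamped mapping below is one plausible realization, with all ranges being the app designer's choice rather than values from the paper:

```python
def sensor_to_param(value, sensor_min, sensor_max, param_min, param_max):
    """Map a raw embedded-sensor reading (mic level, accelerometer axis,
    light level) linearly onto a filter parameter range, clamping
    out-of-range readings to the valid interval."""
    t = (value - sensor_min) / float(sensor_max - sensor_min)
    t = min(1.0, max(0.0, t))          # clamp to [0, 1]
    return param_min + t * (param_max - param_min)
```

For example, a microphone level of 50 on a 0–100 scale would set a sketch-filter key value of 5 on a 0–10 scale.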

The Development of PC based Ink-and-wash Drawing System Using Wiimote (위모트를 이용한 PC 기반 수묵화적 드로잉 시스템 개발)

  • Oh, Eun-Byol;Ryoo, Seung-Taek
    • Journal of the Korea Computer Graphics Society
    • /
    • v.17 no.4
    • /
    • pp.1-10
    • /
    • 2011
  • The general technique of ink-and-wash drawing consists of brush, ink, and paper modeling together with simulation of brush movement, ink diffusion, and paper material. In this paper, we suggest a simplified version of Qing's tank model that reduces the computational time of ink diffusion and absorption on Korean paper. The suggested drawing system classifies the characteristics of ink-and-wash drawing into ink shade, diffusion, line, and paper. In addition, the user's movement, captured by the motion and IR sensors of the Wiimote, is translated into brush position and direction.
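
A tank-style diffusion-and-absorption update can be sketched on a grid as below; this toy step only illustrates the idea of ink spreading and soaking into the paper layer, and is not Qing's model or the paper's simplification of it:

```python
import numpy as np

def ink_step(ink, absorbed, diffuse_rate=0.2, absorb_rate=0.05):
    """One update of a toy tank-style model: surface ink diffuses to its
    4-neighbors (discrete Laplacian), then a fraction soaks into the
    paper's absorbed layer. Total ink is conserved across both layers.
    Boundaries wrap for simplicity (np.roll)."""
    up    = np.roll(ink,  1, axis=0)
    down  = np.roll(ink, -1, axis=0)
    left  = np.roll(ink,  1, axis=1)
    right = np.roll(ink, -1, axis=1)
    laplacian = up + down + left + right - 4.0 * ink
    ink = ink + diffuse_rate * laplacian   # diffusion across the surface
    soak = absorb_rate * ink               # absorption into the paper
    return ink - soak, absorbed + soak
```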

Computer Animation of Marine Process - Tsunami Events - (해양과정의 컴퓨터 동주화 -지진진파(쯔나미)의 경우-)

  • Choi, Byung-Ho;Lee, Ho-Jun;Fumihiko Imamura;Nobuo Shuto
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.5 no.1
    • /
    • pp.19-24
    • /
    • 1993
  • With the use of a supercomputer and engineering workstations, high-quality computer graphic representation of marine process modeling is feasible. In this work, major tsunami events of recent years were simulated by numerical models, and the computed water levels were viewed as three-dimensional surfaces in an animated sequence. Photorealistic images are constructed by advanced rendering techniques with light reflection and shadows. The video animation of the numerical results reproduced the propagation behaviour of the real tsunami events remarkably well.
