• Title/Summary/Keyword: Artificial Intelligence Art

A study of Artificial Life Art as Behavior-oriented Art (행동 지향적 예술로서의 인공생명 아트 연구)

  • Park, Nam-Sik; Jung, Moon-Ryul
    • 한국HCI학회:학술대회논문집 / 2009.02a / pp.1081-1086 / 2009
  • Technological progress is bringing about many changes in society, and it is also reshaping the form and content of art. New artistic genres called computer art, interactive art, and new media art have emerged, and artists are producing new works by combining their practice with a variety of technologies. One of the defining characteristics of new media art is interactivity, which has decisively changed the artwork, the artist, and the audience's mode of reception: the new media artwork is given as a process rather than a finished state, the artist is defined as the initiator of the work or the one who frames its context, and the interaction between the work and the viewer is emphasized above all. There is, however, the criticism that the interactivity found in existing new media works can hardly be called genuine interactivity, because it is constrained to a pre-computed range. Some artists have described this as the interactivity of a closed system confined to an engineering worldview and have argued for new works conceived as open systems serving as instruments of aesthetic interactivity. Following in these artists' footsteps, this paper reflects on a more fundamental aesthetic interactivity and examines artificial intelligence and artificial life art as behavior-oriented art, a new form of interactive art that follows from it.

Artificial Intelligence Art : A Case study on the Artwork An Evolving GAIA (대화형 인공지능 아트 작품의 제작 연구 :진화하는 신, 가이아(An Evolving GAIA)사례를 중심으로)

  • Roh, Jinah
    • The Journal of the Korea Contents Association / v.18 no.5 / pp.311-318 / 2018
  • This paper presents the artistic background and implementation structure of a conversational artificial intelligence interactive artwork, "An Evolving GAIA". Recent artworks based on artificial intelligence technology are introduced. The development of biomimetics and artificial life technology has blurred the distinction between machine and human. This paper surveys artworks that present a machine-life metaphor and describes the artwork's conversation system in detail. The artwork recognizes and follows the movement of the audience with its eyes for natural interaction. It listens to questions from the audience and replies with appropriate answers through a text-to-speech voice, using a conversation system implemented with an Android client in the artwork and a web server built around a question-answering dictionary; a minimal sketch of such a dictionary-based reply loop is given below. The interaction invites the audience to reflect on the meaning of life on a large scale and draws sympathy for the artwork itself. The paper presents the mechanical structure, the implementation of the conversational system, and the reactions of the audience, which can be helpful for directing and producing future artificial intelligence interactive artworks.
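
A minimal sketch of the dictionary-based reply loop described above, written as an illustration rather than the artwork's actual server code: the dictionary entries, the string-similarity matching, and the threshold are all assumptions made for the example; the abstract only states that an Android client queries a web server built on a question-answering dictionary and speaks the answer via text-to-speech.

```python
from difflib import SequenceMatcher

# Hypothetical question-answering dictionary; the artwork's real entries and
# matching rule are not specified in the abstract.
QA_DICTIONARY = {
    "what is your name": "I am GAIA, an evolving machine life form.",
    "are you alive": "I live in the conversation we are having right now.",
    "what is the meaning of life": "Perhaps it is to keep asking that question together.",
}
FALLBACK = "I am still learning how to answer that."


def best_reply(question: str, threshold: float = 0.5) -> str:
    """Return the stored answer whose question key is most similar to the input."""
    q = question.lower().strip("?!. ")
    scored = [(SequenceMatcher(None, q, key).ratio(), answer)
              for key, answer in QA_DICTIONARY.items()]
    score, answer = max(scored)
    # Below the similarity threshold, fall back to a stock reply.
    return answer if score >= threshold else FALLBACK


if __name__ == "__main__":
    # The chosen text would then be handed to the client's text-to-speech engine.
    print(best_reply("What is the meaning of life?"))
```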

A Study on Immersive Content Production and Storytelling Methods using Photogrammetry and Artificial Intelligence Technology (포토그래메트리 및 인공지능 기술을 활용한 실감 콘텐츠 제작과 스토리텔링 방법 연구)

  • Kim, Jungho; Park, JinWan; Yoo, Taekyung
    • Journal of Broadcast Engineering / v.27 no.5 / pp.654-664 / 2022
  • Immersive content, combined with extended reality, artificial intelligence, and photogrammetry technology and spurred by interest arising from the COVID-19 pandemic, overcomes spatial limitations and presents a new paradigm in content markets such as entertainment, media, performances, and exhibitions. However, for immersive content to sustain public interest, storytelling methods that deepen immersion in the content need to be studied, rather than relying on technological novelty. Therefore, in this study we propose an immersive-content storytelling method that uses artificial intelligence and photogrammetry technology. In the proposed method, the content story is created through interaction between interactive virtual beings and participants, and this participation increases immersion in the content. This study is expected to give content creators in the accelerating immersive-content market a storytelling methodology based on virtual beings driven by artificial intelligence, supporting efficient content creation. We also expect it to contribute to establishing an immersive-content production pipeline that uses artificial intelligence and photogrammetry technology.

On the End and Core of Chinese Traditional Calligraphy Art

  • Zhang Yifan
    • International Journal of Advanced Culture Technology / v.11 no.2 / pp.178-185 / 2023
  • Chinese calligraphy art, which still adheres to tradition, has fallen ever deeper into formalism. Most studies of calligraphy still focus on formal beauty and neglect the core spirit hidden behind the art. Calligraphy is an art defined by words, and this definition is reflected not only in the form of the characters but also, and more importantly, in their meaning. It is not a form of writing but a writing of lives, wills, and feelings, a writing of the experience of daily life, and an improvised poetic writing. With the advent of the age of artificial intelligence, Chinese traditional calligraphy, which still adheres to the "supremacy of the brush and ink", has taken on a sense of dystopia, and its end is inevitable. Only by truly understanding the core of the calligraphy art, integrating it with contemporary daily life, and focusing on the communication of ideas in calligraphy will it be possible for it to obtain a new life.

Lightweight Attention-Guided Network with Frequency Domain Reconstruction for High Dynamic Range Image Fusion

  • Park, Jae Hyun; Lee, Keuntek; Cho, Nam Ik
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.06a / pp.205-208 / 2022
  • Multi-exposure high dynamic range (HDR) image reconstruction, the task of reconstructing an HDR image from multiple low dynamic range (LDR) images of a dynamic scene, often produces ghosting artifacts caused by camera motion and moving objects and cannot handle regions washed out by over- or under-exposure. While there have been many deep-learning-based methods with motion estimation to alleviate these problems, they still have limitations for scenes with severe motion, and they require large parameter counts, especially the state-of-the-art methods that employ attention modules. To address these issues, we propose a frequency-domain approach based on the idea that transform-domain coefficients inherently carry global information from all image pixels and can therefore cope with large motions. Specifically, we adopt Residual Fast Fourier Transform (RFFT) blocks, which allow global interaction among pixels; a minimal sketch of an RFFT-style block is given below. Moreover, we employ Depthwise Over-parameterized convolution (DO-conv) blocks, in which each input channel is convolved with its own 2D kernel, for faster convergence and performance gains. We call the resulting network LFFNet (Lightweight Frequency Fusion Network). Experiments on the benchmarks show reduced ghosting artifacts and improvements of up to 0.6 dB in tonemapped PSNR over recent state-of-the-art methods. Our architecture also requires fewer parameters and converges faster in training.
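
The abstract attributes the global pixel interaction to RFFT blocks but does not give their internal layout. The PyTorch sketch below shows one plausible residual FFT block under that stated idea; the layer widths, activations, and the handling of real and imaginary parts are illustrative assumptions, not LFFNet's published architecture.

```python
import torch
import torch.nn as nn


class ResidualFFTBlock(nn.Module):
    """Residual block that mixes pixels globally in the frequency domain.

    The feature map is transformed with a real 2D FFT, its real and imaginary
    parts are processed by 1x1 convolutions, transformed back, and added to a
    conventional spatial residual path.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Spatial path: ordinary 3x3 convolutions with a local receptive field.
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Frequency path: 1x1 convolutions over stacked real/imaginary parts.
        self.freq = nn.Sequential(
            nn.Conv2d(channels * 2, channels * 2, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels * 2, channels * 2, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, _, h, w = x.shape
        # Real 2D FFT over the spatial dimensions; the result is complex-valued.
        spec = torch.fft.rfft2(x, norm="ortho")
        spec = torch.cat([spec.real, spec.imag], dim=1)
        spec = self.freq(spec)
        real, imag = torch.chunk(spec, 2, dim=1)
        # Back to the spatial domain; each output pixel now depends on all inputs.
        global_feat = torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")
        return x + self.spatial(x) + global_feat


if __name__ == "__main__":
    block = ResidualFFTBlock(32)
    feats = torch.randn(1, 32, 64, 64)
    print(block(feats).shape)  # torch.Size([1, 32, 64, 64])
```

Because the 1x1 convolutions operate on Fourier coefficients, each output pixel of the frequency path depends on every input pixel, which is the global-context property the abstract relies on for handling large motions.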

Digital immersive experiences with the future of shelf painting - From "Kandinsky, the Abstract Odyssey"

  • Feng Tianshi
    • International Journal of Advanced Culture Technology / v.12 no.1 / pp.123-127 / 2024
  • In the early 20th century, Walter Benjamin analyzed how the value of traditional art forms and the aesthetic attitude of the masses changed under the industrial era. A century later, in the contemporary multi-art world, the traditional medium of shelf painting is once again experiencing a similar situation. Emerging display modes such as digital virtual reality and digital immersive experiences can achieve digital reproduction of shelf paintings at a considerable level of fidelity, once again reshaping the public's aesthetic perception. This paper illustrates the distinctive characteristics of the new art form after digital reconstruction by exploring how digital technology transforms and elevates shelf painting. We predict that art research on future reality and augmented reality in the artificial intelligence era will be conducted in greater depth.

Fast offline transformer-based end-to-end automatic speech recognition for real-world applications

  • Oh, Yoo Rhee; Park, Kiyoung; Park, Jeon Gue
    • ETRI Journal / v.44 no.3 / pp.476-490 / 2022
  • With recent advances in technology, automatic speech recognition (ASR) has become widely used in real-world applications, and converting large amounts of speech into text accurately with limited resources has become more vital than ever. In this study, we propose a method to rapidly recognize a large speech database with a transformer-based end-to-end model. Transformers have improved state-of-the-art performance in many fields, but they are not easy to use with long sequences. We propose and test several techniques to accelerate the recognition of real-world speech, including decoding via multiple-utterance-batched beam search, detecting the end of speech based on connectionist temporal classification (CTC), restricting the CTC-prefix score, and splitting long recordings into short segments; a minimal sketch of a CTC-based end-of-speech rule follows below. Experiments on the Librispeech dataset and real-world Korean ASR tasks verify the proposed methods: the proposed system converts 8 hours of speech recorded at real-world meetings into text in less than 3 minutes with a 10.73% character error rate, a relative reduction of 27.1% compared with conventional systems.
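
Of the acceleration techniques listed, CTC-based end-of-speech detection is the easiest to illustrate. The snippet below is a simplified stand-in, not the paper's exact rule: it stops decoding once the per-frame CTC blank probability stays above a threshold for a fixed run of frames, and both the threshold and the run length are assumptions chosen for the example.

```python
from typing import Optional

import numpy as np


def detect_end_of_speech(blank_probs: np.ndarray,
                         threshold: float = 0.95,
                         min_trailing_frames: int = 50) -> Optional[int]:
    """Return the frame where speech is judged to have ended, or None.

    blank_probs holds the per-frame probability of the CTC blank symbol; a long
    run of near-certain blanks is taken as trailing silence, so decoding for
    that segment can be cut off early.
    """
    run = 0
    for t, p in enumerate(blank_probs):
        run = run + 1 if p >= threshold else 0
        if run >= min_trailing_frames:
            return t - min_trailing_frames + 1  # first frame of the silent run
    return None


if __name__ == "__main__":
    # 200 frames of speech followed by 80 frames the model marks as blank.
    probs = np.concatenate([np.random.uniform(0.0, 0.6, 200),
                            np.random.uniform(0.96, 1.0, 80)])
    print(detect_end_of_speech(probs))  # about 200
```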

A Research of User Experience on Multi-Modal Interactive Digital Art

  • Qianqian Jiang; Jeanhun Chung
    • International Journal of Internet, Broadcasting and Communication / v.16 no.1 / pp.80-85 / 2024
  • The concept of single-modal digital art originated in the 20th century and has evolved through three key stages. Over time, digital art has moved toward multi-modal interaction, marking a new era in art forms. Based on multi-modal theory, this paper explores the characteristics of interactive digital art as an innovative art form and its impact on user experience. Through an analysis of practical applications of multi-modal interactive digital art, the study summarises how creative models of digital art affect the physical and mental aspects of user experience. In creating audio-visual art, multi-modal digital art should seamlessly incorporate sensory elements and leverage computer image-processing technology. Focusing on user perception, emotional expression, and cultural communication, it strives to establish an immersive environment with user experience at its core. Future research, particularly with emerging technologies such as Artificial Intelligence (AI) and Virtual Reality (VR), should not merely prioritize technology but aim for meaningful interaction. Through multi-modal interaction, digital art is poised to keep innovating, offering new possibilities and expanding the realm of interactive digital art.

State-of-the-Art AI Computing Hardware Platform for Autonomous Vehicles (자율주행 인공지능 컴퓨팅 하드웨어 플랫폼 기술 동향)

  • Suk, J.H.; Lyuh, C.G.
    • Electronics and Telecommunications Trends / v.33 no.6 / pp.107-117 / 2018
  • In recent years, with the development of autonomous driving technology, high-performance artificial intelligence computing hardware platforms have been developed that can process multi-sensor data, perform object recognition, and control the vehicle. Most of these platforms have been developed overseas, such as NVIDIA's DRIVE PX, Audi's zFAS, Intel GO, Mobileye's EyeQ, and Baidu's Apollo Pilot; in Korea, ETRI has developed an artificial intelligence computing platform. In this paper, rather than covering autonomous driving technology as a whole, we discuss the specifications, structure, performance, and development status of hardware platforms that support autonomous driving.

Attentive Transfer Learning via Self-supervised Learning for Cervical Dysplasia Diagnosis

  • Chae, Jinyeong; Zimmermann, Roger; Kim, Dongho; Kim, Jihie
    • Journal of Information Processing Systems / v.17 no.3 / pp.453-461 / 2021
  • Many deep learning approaches have been studied for image classification in computer vision, but in medical fields there are often not enough data to train accurate models, and many datasets are not annotated. This study presents a new method that can use both unlabeled and labeled data. The proposed method is applied to classify cervix images as normal versus cancerous, and we demonstrate the results. First, we use patch-based self-supervised learning on an unlabeled image dataset to learn the global context of the image. Second, we build a classifier by transferring the knowledge gained from self-supervised learning, and we apply attention learning to capture the local features of the image; a minimal sketch of the pretrain-then-transfer pattern follows below. The combined method provides better performance than state-of-the-art approaches in accuracy and sensitivity.
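
The abstract outlines a two-stage recipe: self-supervised pretraining on unlabeled patches, then transferring the encoder to a labeled normal-versus-cancerous classifier. The PyTorch sketch below illustrates that pretrain-then-transfer pattern with a toy patch-position pretext task; the encoder size, the choice of pretext task, and the omission of the attention module are all simplifications made for the example, not the paper's actual model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatchEncoder(nn.Module):
    """Small CNN encoder shared by the pretext task and the downstream classifier."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class PatchPositionPretext(nn.Module):
    """Pretext head: predict which of 9 grid positions a patch was cropped from."""

    def __init__(self, encoder: PatchEncoder, feat_dim: int = 128):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(feat_dim, 9)

    def forward(self, patch: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(patch))


class CervixClassifier(nn.Module):
    """Downstream head reusing the pretrained encoder (normal vs. cancerous)."""

    def __init__(self, encoder: PatchEncoder, feat_dim: int = 128):
        super().__init__()
        self.encoder = encoder  # transferred weights; optionally frozen
        self.head = nn.Linear(feat_dim, 2)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(image))


if __name__ == "__main__":
    encoder = PatchEncoder()

    # Stage 1: self-supervised pretraining needs no diagnosis labels, only the
    # (known) grid position each patch was cut from.
    pretext = PatchPositionPretext(encoder)
    patches = torch.randn(8, 3, 64, 64)
    positions = torch.randint(0, 9, (8,))
    F.cross_entropy(pretext(patches), positions).backward()

    # Stage 2: the same encoder weights are transferred to the labeled classifier.
    classifier = CervixClassifier(encoder)
    print(classifier(torch.randn(2, 3, 224, 224)).shape)  # torch.Size([2, 2])
```

In practice the encoder would be a larger backbone and the pretext task whatever patch-level objective the unlabeled data supports; the key point is that the same encoder weights serve both stages.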