• Title/Summary/Keyword: Caption (캡션)


The Evaluation Structure of Auditory Images on the Streetscapes - The Semantic Issues of Soundscape based on the Students' Fieldwork - (거리경관에 대한 청각적 이미지의 평가구조 - 대학생들의 음풍경 체험을 통한 의미론적 고찰 -)

  • Han Myung-Ho
    • The Journal of the Acoustical Society of Korea
    • /
    • v.24 no.8
    • /
    • pp.481-491
    • /
    • 2005
  • The purpose of this study is to interpret the evaluation structure of auditory images of streetscapes in an urban area from the semantic viewpoint of soundscapes. Using the caption evaluation method, a newly introduced method, a total of 45 college students took part in fieldwork between 2001 and 2005 to record the images of sounds while walking the main streets of Namwon city. The fieldwork yielded a wide range of data on the elements, features, impressions, and preferences of the auditory scene. In Namwon city, the elements that form auditory images are classified into natural sounds and artificial sounds, the latter comprising machinery sounds, community sounds, and signal sounds. The features of the auditory scene are classified by kind of sound, behavior, condition, character, relationship to the surroundings, and image. Finally, the impressions of the auditory scene fall into three categories: human emotions, the atmosphere of the streets, and the characteristics of the sound itself. In relating the auditory scene to its evaluation, the elements, features, and impressions each consist of items with positive, neutral, and negative images. The evaluation model of the streetscapes of Namwon city also made it possible to grasp the characteristics of the auditory image of a particular place or space.
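
The taxonomy described above (elements, features, impressions, preferences) can be pictured as a simple record structure. The sketch below is purely illustrative; the class, field names, and coding values are assumptions, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class CaptionRecord:
    """One caption-evaluation entry: what was heard, how it was described,
    and how it was rated. Categories follow the abstract; the class and its
    field names are illustrative assumptions."""
    element: str      # "natural" or "artificial" (machinery / community / signal)
    feature: str      # e.g. kind of sound, behavior, condition, surroundings
    impression: str   # e.g. human emotion, street atmosphere, sound character
    preference: int   # +1 positive, 0 neutral, -1 negative

records = [
    CaptionRecord("natural", "kind of sound", "street atmosphere", +1),
    CaptionRecord("artificial", "behavior", "human emotion", -1),
]
positive_share = sum(r.preference > 0 for r in records) / len(records)
```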

Analysis of the Reading Materials in the Chemistry Domain of Elementary School Science and Middle School Science Textbooks and Chemistry I and II Textbooks Developed Under the 2009 Revised National Science Curriculum (2009 개정 초등학교와 중학교 과학 교과서의 화학 영역 및 화학 I, II 교과서의 읽기자료 분석)

  • An, Jihyun;Jung, Yooni;Lee, Kyuyul;Kang, Sukjin
    • Journal of the Korean Chemical Society
    • /
    • v.63 no.2
    • /
    • pp.111-122
    • /
    • 2019
  • In this study, the characteristics of the reading materials in the chemistry domain of elementary school science and middle school science textbooks and in chemistry I and II textbooks developed under the 2009 Revised National Science Curriculum were investigated. The criteria for classifying the reading materials were the type of theme, the purpose, the type of presentation, and the students' activity. The inscriptions in the reading materials were also analyzed from the viewpoint of type, role, caption and index, and proximity type. The results indicated that more reading materials were included in the elementary science textbooks than in the middle school science, chemistry I, and chemistry II textbooks. The percentage of the application-in-everyday-life theme was high in the reading materials of the elementary science textbooks, whereas the percentage of the scientific-knowledge theme was high in those of the middle school science, chemistry I, and chemistry II textbooks. It was also found that the percentage of the expanding-concepts purpose was high in the reading materials of the elementary science textbooks, whereas the percentage of the supplementing-concepts purpose was high in those of the middle school science, chemistry I, and chemistry II textbooks. Several limitations in the use of inscriptions were found: most inscriptions were photographs and/or illustrations; most merely supplemented or elaborated the text; many were presented without a caption or an index; and the proximity of inscriptions to the related text was often problematic.
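
As a rough illustration of the kind of frequency comparison this abstract reports, the snippet below tallies hypothetical coding-sheet entries by school level and theme; the records and labels are invented for the example, not the study's data.

```python
from collections import Counter

# Hypothetical coding-sheet entries: one dict per reading material.
reading_materials = [
    {"level": "elementary", "theme": "application in everyday life", "purpose": "expanding concepts"},
    {"level": "middle school", "theme": "scientific knowledge", "purpose": "supplementing concepts"},
    {"level": "chemistry I", "theme": "scientific knowledge", "purpose": "supplementing concepts"},
]

theme_by_level = Counter((m["level"], m["theme"]) for m in reading_materials)
for (level, theme), count in theme_by_level.most_common():
    print(f"{level:15s} {theme:30s} {count}")
```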

Anomaly Detection Methodology Based on Multimodal Deep Learning (멀티모달 딥 러닝 기반 이상 상황 탐지 방법론)

  • Lee, DongHoon;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.2
    • /
    • pp.101-125
    • /
    • 2022
  • Recently, with the development of computing technology and the improvement of cloud environments, deep learning technology has advanced, and attempts to apply deep learning to various fields are increasing. A typical example is anomaly detection, a technique for identifying values or patterns that deviate from normal data. Among the representative types of anomaly, a contextual anomaly, which requires an understanding of the overall situation, is particularly difficult to detect. In general, anomaly detection in image data is performed using a model pre-trained on large datasets. However, since such pre-trained models are built with a focus on object classification, their applicability to anomaly detection that must understand complex situations created by various objects is limited. Therefore, in this study, we propose a new two-step pre-trained model for detecting abnormal situations. Our methodology performs additional learning through image captioning to understand not only individual objects but also the complicated situations they create. Specifically, the proposed methodology transfers knowledge from a pre-trained model that has learned object classification on ImageNet data to an image captioning model, and uses the caption that describes the situation represented by each image. Afterwards, the weights obtained by learning situational characteristics from images and captions are extracted, and fine-tuning is performed to produce an anomaly detection model. To evaluate the proposed methodology, an anomaly detection experiment was performed on 400 situational images; the results showed that the proposed methodology was superior to the existing, conventionally pre-trained model in terms of anomaly detection accuracy and F1-score.
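
A minimal PyTorch sketch of the two-step idea described above: reuse an ImageNet-pretrained CNN as the visual encoder of a captioning model, then fine-tune that encoder for anomaly classification. All class names, dimensions, and the binary head are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class CaptionEncoder(nn.Module):
    """Step 1 (sketch): an ImageNet-pretrained CNN reused as the visual encoder
    of an image-captioning model; a caption decoder would be trained on top."""
    def __init__(self, embed_dim=256):
        super().__init__()
        cnn = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.backbone = nn.Sequential(*list(cnn.children())[:-1])  # drop the classifier
        self.fc = nn.Linear(cnn.fc.in_features, embed_dim)

    def forward(self, images):                 # images: (B, 3, H, W)
        feats = self.backbone(images).flatten(1)
        return self.fc(feats)                  # (B, embed_dim)

class AnomalyClassifier(nn.Module):
    """Step 2 (sketch): the caption-trained encoder is fine-tuned with a
    small head that separates normal from abnormal situations."""
    def __init__(self, encoder, embed_dim=256):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(embed_dim, 2)    # normal / abnormal

    def forward(self, images):
        return self.head(self.encoder(images))

encoder = CaptionEncoder()
# ... train `encoder` jointly with a caption decoder on image-caption pairs ...
model = AnomalyClassifier(encoder)
# ... fine-tune `model` on labelled situational images ...
```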

Analysis of Research Trends in Deep Learning-Based Video Captioning (딥러닝 기반 비디오 캡셔닝의 연구동향 분석)

  • Lyu Zhi;Eunju Lee;Youngsoo Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.13 no.1
    • /
    • pp.35-49
    • /
    • 2024
  • Video captioning technology, as a significant outcome of the integration of computer vision and natural language processing, has emerged as a key research direction in artificial intelligence. It aims at automatic understanding and linguistic expression of video content, enabling computers to transform the visual information in videos into textual form. This paper provides an initial analysis of research trends in deep learning-based video captioning and categorizes them into four main groups: CNN-RNN-based, RNN-RNN-based, multimodal-based, and Transformer-based models; the concept of each type of video captioning model is explained, and its features, advantages, and disadvantages are discussed. The paper also lists the datasets and performance evaluation methods commonly used in the video captioning field. The datasets cover diverse domains and scenarios, offering extensive resources for training and validating video captioning models, while the evaluation methods cover the major evaluation indicators and provide practical references for assessing model performance from various angles. Finally, as future research tasks for video captioning, the paper identifies major challenges that require continued improvement, such as maintaining temporal consistency and accurately describing dynamic scenes, which add complexity in real-world applications, as well as new tasks to be studied, such as temporal relationship modeling and multimodal data integration.
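
To make the CNN-RNN category concrete, here is a minimal sketch of such a video captioner: per-frame CNN features are mean-pooled into a video vector that initializes an LSTM caption decoder. Dimensions and names are illustrative assumptions, not any surveyed model.

```python
import torch
import torch.nn as nn

class CNNRNNCaptioner(nn.Module):
    """Minimal CNN-RNN style video captioner (sketch): pre-extracted per-frame
    CNN features are averaged and used to initialize an LSTM that decodes the
    caption token by token."""
    def __init__(self, feat_dim=2048, hidden=512, vocab=10000, embed=300):
        super().__init__()
        self.proj = nn.Linear(feat_dim, hidden)
        self.embed = nn.Embedding(vocab, embed)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, frame_feats, tokens):
        # frame_feats: (B, T, feat_dim) per-frame CNN features; tokens: (B, L)
        video_vec = self.proj(frame_feats.mean(dim=1))   # (B, hidden)
        h0 = video_vec.unsqueeze(0)                      # (1, B, hidden)
        c0 = torch.zeros_like(h0)
        emb = self.embed(tokens)                         # (B, L, embed)
        hidden_states, _ = self.lstm(emb, (h0, c0))
        return self.out(hidden_states)                   # (B, L, vocab) logits
```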

Interaction Between Seasons and Auditory Elements, Features and Impressions of Soundscape in Influencing Auditory Preferences (청각선호도에 미치는 청각적 경관의 요소, 특징, 인상 요인과 계절의 상호작용 효과)

  • Han, Myung-Ho;Oh, Yang-Ki
    • The Journal of the Acoustical Society of Korea
    • /
    • v.26 no.7
    • /
    • pp.306-316
    • /
    • 2007
  • Based on the concept of soundscape, this study investigates Koreans' preferences for auditory elements, features, and impressions depending on the season, examines how these auditory factors of the soundscape interact with the seasons, and attempts to discover their influence on people's auditory preferences. Following an environmental-psychological approach called the caption evaluation method, 45 college students examined the soundscape of Namwon City while walking its streets in all four seasons. To analyze the interactions between the seasons and the auditory factors (elements, features, and impressions), GLM univariate analysis and NPAR tests for independent samples were conducted. The results show that there are interaction effects between the seasons and the auditory factors and that the auditory factors affect auditory preference. Moreover, as for seasonal preference for auditory elements, people prefer natural sounds in spring, summer, and fall, while they prefer social sounds in winter. Concerning seasonal preference for auditory features, people focus on behaviors in spring, summer, and winter but stress the surroundings in autumn; as for seasonal preference for auditory impressions, they value the characteristics of the sound itself in spring and winter but the atmosphere of the streets in summer and fall. The results of this study can be utilized as useful data in deciding which auditory factors (elements, features, or impressions) to take into consideration in a soundscape design.
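
A brief statsmodels/scipy sketch of the type of analysis named above: a GLM univariate analysis of a season-by-factor interaction on preference, plus a nonparametric test across seasons. The synthetic data frame stands in for the fieldwork records; the column names and values are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

# Synthetic stand-in for the caption-evaluation records (season, auditory
# element category, preference rating); real data would be loaded instead.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "season": rng.choice(["spring", "summer", "autumn", "winter"], size=200),
    "element": rng.choice(["natural", "artificial", "social"], size=200),
    "preference": rng.integers(1, 6, size=200),
})

# GLM univariate analysis: does the season x element interaction affect preference?
model = smf.ols("preference ~ C(season) * C(element)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Nonparametric test for independent samples across seasons (Kruskal-Wallis).
groups = [g["preference"].to_numpy() for _, g in df.groupby("season")]
print(stats.kruskal(*groups))
```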

Comparison Between Hidden Layers of Neural Networks and Topics for Hidden Layer Comprehension (인공신경망 은닉층 해석을 위한 토픽과의 비교)

  • Jeong, Young-Seob
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2017.04a
    • /
    • pp.910-913
    • /
    • 2017
  • As the amount of available data grows, data analysis based on artificial neural networks is drawing attention; such networks automatically analyze various kinds of data, including text, images, and video, and are used in research and service development for machine translation, chatbots, automatic image caption generation, and more. Many studies based on artificial neural networks share a common limitation: the hidden layers are difficult to interpret. For example, when a network consisting of an input layer, a hidden layer, and an output layer is trained on some data, the weight matrix between the input layer and the hidden layer comes to encode the pattern information present in that data. If the pattern information in this matrix could be analyzed directly, it would be possible not only to interpret the network's output but also to gain intuition about what adjustments would improve performance. However, because the matrix is in reality a collection of numeric vectors, direct human interpretation is impossible, and most neural network studies to date share this limitation. This study seeks clues for interpreting hidden layers by comparing the output of a neural network with a topic model, which captures the patterns in data while remaining interpretable. Experiments verify the similarity between topics and hidden-layer patterns, and the potential of hidden-layer interpretation in future neural network research is discussed.
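
A small scikit-learn sketch of the comparison described above: fit an LDA topic model and a one-hidden-layer MLP on the same bag-of-words matrix, then compare each hidden unit's input weights with each topic's word distribution by cosine similarity. The toy corpus and all sizes are assumptions, not the paper's setup.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.neural_network import MLPClassifier

# Toy corpus and labels standing in for the experimental data.
docs = [
    "image caption neural network generation",
    "topic model latent words distribution",
    "hidden layer weight pattern analysis",
    "caption image network model training",
] * 5
labels = [0, 1, 1, 0] * 5

X = CountVectorizer().fit_transform(docs)

# Interpretable patterns: topic-word distributions from LDA (n_topics x n_words).
lda = LatentDirichletAllocation(n_components=4, random_state=0).fit(X)
topic_word = lda.components_

# Hidden-layer patterns: input-to-hidden weights of a small MLP (n_hidden x n_words).
mlp = MLPClassifier(hidden_layer_sizes=(4,), max_iter=500, random_state=0).fit(X, labels)
hidden_word = mlp.coefs_[0].T

# How similar is each hidden unit to each topic over the shared vocabulary?
print(cosine_similarity(hidden_word, topic_word))   # shape: (n_hidden, n_topics)
```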

Implementation of closed caption service S/W module on DTV receiver (DTV 수신기의 자막방송 S/W 모듈의 구현)

  • Kim Sun-Gwon;No Seung-Yong
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.1
    • /
    • pp.69-76
    • /
    • 2004
  • Recently, the development of DTV receivers and the need for additional services on them have increased greatly. In this paper, we implement a new closed caption engine for deaf and hard-of-hearing viewers and for language study on a DTV receiver. The domestic closed caption specification largely adopts that of EIA-608A. Fully following the specification, we present how to implement the closed caption functions, including paint-on, pop-on, and roll-up/roll-down, with a new algorithm. Experimental results show that the proposed technique provides satisfactory performance on a DTV receiver.
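
A toy sketch of the roll-up display mode mentioned above: new caption rows push older rows out of a fixed-size window. It only illustrates the display behavior; it is not EIA-608A parsing code, and the class is an assumption for illustration.

```python
from collections import deque

class RollUpCaptionWindow:
    """Illustrative roll-up caption window: a carriage return opens a new
    bottom row, and the oldest row scrolls away once the window is full."""
    def __init__(self, rows=3, cols=32):
        self.cols = cols
        self.window = deque(maxlen=rows)

    def carriage_return(self):
        self.window.append("")          # new bottom row; oldest row drops off

    def write(self, text):
        if not self.window:
            self.window.append("")
        self.window[-1] = (self.window[-1] + text)[: self.cols]

    def render(self):
        return "\n".join(self.window)

win = RollUpCaptionWindow(rows=2)
for line in ("HELLO", "WORLD", "AGAIN"):
    win.carriage_return()
    win.write(line)
print(win.render())   # only the last two rows remain: WORLD / AGAIN
```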

A analysis on visualization of advertisements for domestic real estate through Otto Kleppner′s visualization model (Focused on the creative of advertising in newspaper) (Otto Kleppner의 시각화 모델을 통한 국내 부동산광고의 시각화 분석(신문광고 크리에이티브를 중심으로))

  • 박광래
    • Archives of design research
    • /
    • v.15 no.2
    • /
    • pp.27-36
    • /
    • 2002
  • Because the interpretation of a descriptive illustration without a caption differs from viewer to viewer, the attention it attracts is represented differently, and the level of acceptance of an advertisement will ultimately differ according to how the concept is visualized in the creative. The purpose of this research is to grasp the actual state of visualization in newspaper real estate advertisements through an analysis based on Otto Kleppner's visualization model, and to make the creative work for real estate advertising more reasonable and efficient by identifying problems and their solutions on the basis of the survey. These efforts aim at more efficient production of real estate advertisements, which occupy a relatively important portion of newspaper advertising.


A Study on the MARC Format for Holdings Data (소장데이터용 MARC 포맷에 관한 연구)

  • Oh Dong-Geun
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.33 no.3
    • /
    • pp.63-86
    • /
    • 1999
  • This article investigates the general characteristics and development of the MARC format for holdings data. It also analyzes the record structure, content designation, and content of the format, mainly on the basis of the USMARC and KORMARC formats. Their structure and content designation are almost the same as those of the bibliographic and authority formats. The data fields are divided into functional blocks according to their functions, but only the 0XX, 5XX, and 8XX fields are used in the holdings format. The data recorded in field 008 include additional elements related to holdings and acquisition information. The variable fields can be grouped into several blocks, including those for numbers and codes, for notes, for location, and for holdings data. The holdings data fields include caption and pattern fields, enumeration and chronology fields, textual holdings fields, and item information fields. The article analyzes the content of each data field in detail.
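
For readers unfamiliar with the caption and pattern fields, the snippet below sketches how an 853 caption/pattern field combines with an 863 enumeration/chronology field to render a holdings statement. The dictionary layout and rendering rule are simplified assumptions; only the tag and subfield meanings follow the MARC holdings format.

```python
# 853: captions and publication pattern (subfields $a/$b = enumeration captions,
# $i = chronology caption, $8 = linking number).
caption_pattern = {"8": "1", "a": "v.", "b": "no.", "i": "(year)"}

# 863: the actual enumeration/chronology values for one holdings unit,
# linked back to the 853 pattern through subfield $8.
enumeration = {"8": "1.1", "a": "24", "b": "8", "i": "2005"}

def render_holdings(pattern, values):
    """Combine captions with values, e.g. 'v.24 no.8 (2005)'."""
    parts = [f"{pattern[code]}{values[code]}" for code in ("a", "b") if code in values]
    if "i" in values:
        parts.append(f"({values['i']})")
    return " ".join(parts)

print(render_holdings(caption_pattern, enumeration))   # v.24 no.8 (2005)
```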


Design and Implementation of MPEG-2 Compressed Video Information Management System (MPEG-2 압축 동영상 정보 관리 시스템의 설계 및 구현)

  • Heo, Jin-Yong;Kim, In-Hong;Bae, Jong-Min;Kang, Hyun-Syug
    • The Transactions of the Korea Information Processing Society
    • /
    • v.5 no.6
    • /
    • pp.1431-1440
    • /
    • 1998
  • Video data are retrieved and stored in various compressed forms according to their characteristics. In this paper, we present a generic data model that captures the structure of a video document and provides a means of indexing a video stream. Using this model, we design and implement CVIMS (the MPEG-2 Compressed Video Information Management System) for storing and retrieving video documents. CVIMS extracts I-frames from MPEG-2 files, selects key frames from the I-frames, and stores in a database index information such as thumbnails, captions, and picture descriptors of the key frames. CVIMS then retrieves MPEG-2 video data using the thumbnails of the key frames and the various labels given in queries.
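
As a rough sketch of this kind of indexing pipeline, the snippet below samples frames from a video with OpenCV, writes thumbnails, and records the index rows (video, frame number, thumbnail path, caption) in SQLite. The sampling rule, schema, and file names are assumptions for illustration; CVIMS itself works on MPEG-2 I-frames and picture descriptors.

```python
import sqlite3
import cv2  # OpenCV; a stand-in for true MPEG-2 I-frame extraction

conn = sqlite3.connect("video_index.db")
conn.execute("""CREATE TABLE IF NOT EXISTS keyframe
                (video TEXT, frame_no INTEGER, thumbnail TEXT, caption TEXT)""")

cap = cv2.VideoCapture("sample.mpg")   # hypothetical input file
frame_no = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_no % 300 == 0:            # crude key-frame sampling every 300 frames
        thumb_path = f"thumb_{frame_no}.jpg"
        cv2.imwrite(thumb_path, cv2.resize(frame, (160, 90)))
        conn.execute("INSERT INTO keyframe VALUES (?, ?, ?, ?)",
                     ("sample.mpg", frame_no, thumb_path, ""))
    frame_no += 1

cap.release()
conn.commit()
conn.close()
```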
