• Title/Summary/Keyword: character's network

Search Results: 133 (processing time: 0.021 seconds)

Jungian Character Network in Growing Other Character Archetypes in Films

  • Han, Youngsue
    • International Journal of Contents
    • /
    • v.15 no.2
    • /
    • pp.13-19
    • /
    • 2019
  • This research provides a clear visual outline of character influence relations involved in creating Jungian character archetypes in films, using R-based computational methods. It contributes to the integration of Jungian analytical psychology into film studies by revealing character network relations in film. The paper examines character archetypes and their influence on the development of other character archetypes in films through network analysis, drawing on Lynn Schmidt's analysis of 45 master characters in films. Additionally, it presents a character network analysis and visualization experiment using the open-source R software, producing an easily reproducible tutorial for scholars in the humanities. This pioneering work could encourage academic communities in the humanities to actively adopt data science in their research and education.
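The paper's reproducible tutorial is written in R; purely as an illustrative sketch, the same kind of influence-network tally can be done in Python with standard-library dictionaries. The archetype names and influence edges below are invented placeholders, not Schmidt's actual data:

```python
from collections import defaultdict

# Hypothetical directed influence edges: (archetype A influences archetype B)
edges = [
    ("Hero", "Mentor"), ("Mentor", "Hero"),
    ("Shadow", "Hero"), ("Hero", "Ally"),
    ("Trickster", "Hero"),
]

out_degree = defaultdict(int)  # how many archetypes each one influences
in_degree = defaultdict(int)   # how many archetypes influence each one
for src, dst in edges:
    out_degree[src] += 1
    in_degree[dst] += 1

# Rank archetypes by how many others they influence
ranking = sorted(out_degree, key=out_degree.get, reverse=True)
```

In the paper's R workflow the analogous step would be building an igraph object and plotting it; the point here is only that the underlying tally is a few lines in any language.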

An Empirical Study on the Sub-factors of Middle School Character Education using Social Network Analysis (사회 네트워크 분석을 이용한 중등 인성 교육의 세부요인에 관한 실증 연구)

  • Kim, Hyojung
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.13 no.2
    • /
    • pp.87-98
    • /
    • 2017
  • The advancements in scientific technology and information networks in the 21st century allow us to easily acquire desired knowledge. Amid today's informatization, globalization, and cultural diversification, adolescents experience emotional confusion while accommodating diverse cultures and information. This study examined the three aspects of character suggested by the Ministry of Education, namely ethics, sociality, and emotion, and the actual sub-factors required for character education. To that end, a survey was conducted with adolescents at a character-building age, and social network analysis (SNA) was performed to determine the effect of character education on the sub-factors. The statistics program SPSS was used to investigate the general traits of the subjects and the validity of the research variables. The 2-mode data that were finally selected were converted to 1-mode data using NetMiner 4, a network analysis tool. Furthermore, a data network was established based on a quasi-network representing the relationships between ethics, sociality, and emotion. The results of this study showed that the subjects considered honesty and justice to be the sub-domains of the ethics domain. In addition, they identified sympathy, communication, consideration for others, and cooperation as the sub-domains of the sociality domain. Finally, they believed that self-understanding and self-control were the sub-domains of the emotion domain.
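The study performs its 2-mode-to-1-mode conversion in NetMiner; a minimal sketch of the same bipartite projection, with invented respondents and sub-factors standing in for the survey data, might look like:

```python
from itertools import combinations
from collections import Counter

# Hypothetical 2-mode data: respondent -> character sub-factors endorsed
responses = {
    "s1": ["honesty", "justice", "sympathy"],
    "s2": ["honesty", "communication"],
    "s3": ["justice", "honesty"],
}

# Project onto a 1-mode factor-by-factor network:
# two factors are linked once for each respondent who names both
cooccurrence = Counter()
for factors in responses.values():
    for a, b in combinations(sorted(factors), 2):
        cooccurrence[(a, b)] += 1
```

The resulting edge weights (e.g., how often honesty and justice are endorsed together) are what a tool like NetMiner would then visualize and cluster.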

A Study on the Estimation of Character Value in Media Works: Based on Network Centralities and Web-Search Data (미디어 작품 캐릭터 가치 측정 연구: 네트워크 중심성 척도와 검색 데이터를 활용하여)

  • Cho, Seonghyun;Lee, Minhyung;Choi, HanByeol Stella;Lee, Heeseok
    • Knowledge Management Research
    • /
    • v.22 no.4
    • /
    • pp.1-26
    • /
    • 2021
  • Measuring the intangible asset has been vigorously studied for its importance. Especially, the value of character in media industry is difficult to quantitatively evaluate in spite of the industry's rapid growth. Recently, the Social Network Analysis (i.e., SNA) has been actively applied to understand human usage patterns in a media field. By using SNA methodology, this study attempts to investigate how the character network characteristics of media works are linked to human search behaviors. Our analysis reveals the positive correlation and causality between character network centralities and character search data. This result implies that the character network can be used as a clue for the valuation of character assets.
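The core of the analysis is a correlation between per-character centrality scores and search volumes. As a sketch only, with invented numbers (the paper's data and statistical tests are more involved), the basic computation is a Pearson correlation:

```python
import math

# Hypothetical per-character values: network centrality vs. weekly searches
centrality = [0.9, 0.7, 0.4, 0.2]
searches = [1200, 900, 500, 300]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(centrality, searches)  # close to 1 for these made-up values
```

A high positive r on real data is what motivates the paper's claim that centrality can proxy for character value; establishing causality requires the further analysis the abstract mentions.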

A study on multi-persona fashion images in Instagram - Focusing on the case of "secondary-characters" - (인스타그램에 나타난 멀티 페르소나 패션이미지에 관한 연구 - "부캐" 사례를 중심으로 -)

  • Kim, Jongsun
    • The Research Journal of the Costume Culture
    • /
    • v.29 no.4
    • /
    • pp.603-615
    • /
    • 2021
  • The aim of this study was to analyze the semantic network structure of keywords and the visual composition of images extracted from Instagram in relation to the multi-persona phenomenon within fashion imagery, which has recently been attracting attention. To this end, the concept of a 'secondary character', which forms an identity separate from one's 'main character' on various social media platforms as well as on the airwaves, was considered as the spread of multi-persona, and #SecondaryCharacter on Instagram was investigated. 3,801 keywords were collected by crawling the data using Python, and morphological analysis was undertaken using KoNLP. The semantic network structure was then examined by conducting a CONCOR analysis on the top 50 keywords using UCINET and NetDraw. The results were classified into a total of six clusters. In accordance with the meaning and context of the keywords included in each cluster, group names were assigned: virtual characters, relationship with the main character, hobbies, daily record, N-job person, and media and marketing. Image analysis considered the technical, compositional, and social modalities of the media based on Gillian Rose's visual analysis method. The results determined that Instagram uses fashion images that virtualize one's face to produce multi-persona representations that show various occupations, describe different types of hobbies, and depict situations pertaining to various social roles.
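The first step of the pipeline (crawl, morphological analysis, then keep the top keywords) reduces, after KoNLP tokenization, to a frequency cut. A toy sketch with an invented token stream, keeping the top 3 instead of the paper's top 50:

```python
from collections import Counter

# Hypothetical token stream after morphological analysis of crawled captions
tokens = ["secondary", "character", "fashion", "character",
          "hobby", "fashion", "character", "daily"]

# Keep the most frequent keywords (the study keeps the top 50)
top = Counter(tokens).most_common(3)
```

Only after this cut would the co-occurrence matrix of the surviving keywords be fed to CONCOR in UCINET.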

A Unicode based Deep Handwritten Character Recognition model for Telugu to English Language Translation

  • BV Subba Rao;J. Nageswara Rao;Bandi Vamsi;Venkata Nagaraju Thatha;Katta Subba Rao
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.2
    • /
    • pp.101-112
    • /
    • 2024
  • Telugu is the fourth most widely used language in India, spoken especially in the regions of Andhra Pradesh, Telangana, and Karnataka, and it is a growing spoken language internationally. The language comprises various dependent and independent vowels, consonants, and digits. Despite this, Telugu Handwritten Character Recognition (HCR) has seen little advancement. HCR is a neural network technique for converting a document image into editable text, which can then be used for many other applications; this saves time and effort by avoiding starting over from the beginning every time. In this work, a Unicode-based Handwritten Character Recognition model (U-HCR) is developed for translating handwritten Telugu characters into the English language. Using the Centre of Gravity (CG) in our model, we can easily divide a compound character into individual characters with the help of Unicode values. To train this model, we used both online and offline Telugu character datasets. To extract features from the scanned image, we used a convolutional neural network along with machine learning classifiers such as Random Forest and Support Vector Machine. Stochastic Gradient Descent (SGD), Root Mean Square Propagation (RMS-P), and Adaptive Moment Estimation (ADAM) optimizers are used in this work to enhance the performance of U-HCR and to reduce the loss value. On both online and offline datasets, the proposed model showed promising results, achieving accuracies of 90.28% with SGD, 96.97% with RMS-P, and 93.57% with ADAM.
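The centre-of-gravity step the abstract describes is, at its simplest, the mean row/column index of the ink pixels of a binarized glyph. A minimal sketch on an invented 5x5 binary image (the paper's actual splitting of compound characters is more elaborate):

```python
# Hypothetical 5x5 binary glyph image (1 = ink pixel)
glyph = [
    [0, 1, 1, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
]

# Centre of gravity = mean row index and mean column index of ink pixels
ink = [(r, c) for r, row in enumerate(glyph)
       for c, v in enumerate(row) if v]
cg_row = sum(r for r, _ in ink) / len(ink)
cg_col = sum(c for _, c in ink) / len(ink)
```

In the paper this centre is used, together with Unicode values, to decide where a compound character can be split into its components.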

Story-based Information Retrieval (스토리 기반의 정보 검색 연구)

  • You, Eun-Soon;Park, Seung-Bo
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.81-96
    • /
    • 2013
  • Video information retrieval has become a very important issue because of the explosive increase in video data from Web content development. Content-based video analysis using visual features has been the main approach for video information retrieval and browsing. Content in video can be represented with content-based analysis techniques, which extract various features from audio-visual data such as frames, shots, colors, texture, or shape, and similarity between videos can be measured through such analysis. However, a movie, one of the typical types of video data, is organized by story as well as audio-visual data. When content-based analysis using only low-level audio-visual data is applied to movie information retrieval, this causes a semantic gap between the significant information recognized by people and the information resulting from content-based analysis. The reason for this gap is that a movie's story line is high-level information, with relationships in the content that change as the movie progresses. Information retrieval related to the story line of a movie cannot be executed by content-based analysis techniques alone; a formal model is needed that can determine relationships among movie contents and track changes in meaning in order to accurately retrieve story information. Recently, story-based video analysis techniques using the social network concept have emerged for story information retrieval. These approaches represent a story through the relationships between characters in a movie, but they have three problems. First, they do not express dynamic changes in the relationships between characters as the story develops. Second, they miss profound information, such as the emotions indicating the identities and psychological states of the characters; emotion is essential to understanding a character's motivation, conflict, and resolution. Third, they do not take account of the events and background that contribute to the story.
    This paper therefore reviews the importance and weaknesses of previous video analysis methods, ranging from content-based approaches to story analysis based on social networks, and suggests the necessary elements, namely character, background, and events, based on narrative structures introduced in the literature. First, we extract characters' emotional words from the script of the movie Pretty Woman by using the hierarchical attributes of WordNet, an extensive English thesaurus that offers relationships between words (e.g., synonyms, hypernyms, hyponyms, antonyms), and we present a method to visualize the emotional pattern of a character over time. Second, a character's inner nature must be predetermined in order to model a character arc that can depict the character's growth and development. To this end, we analyze the amount of each character's dialogue in the script and track the character's inner nature using social network concepts such as in-degree (incoming links) and out-degree (outgoing links); we propose a method that traces indices such as the degree, in-degree, and out-degree of the character network as the movie progresses. Third, the spatial background where characters meet and where events take place is an important element of the story. We take advantage of the movie script to extract significant spatial backgrounds and suggest a scene map describing spatial arrangements and distances in the movie; important places where the main characters first meet or stay for long periods can be extracted through this map. In view of these three elements (character, event, background), we extract a variety of story-related information and evaluate the performance of the proposed method. We can track the extracted story information over time and detect changes in a character's emotion or inner nature, spatial movement, and the conflicts and resolutions in the story.
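Tracking a character's out-degree scene by scene, as the abstract describes, amounts to accumulating dialogue edges over time. An illustrative sketch with invented dialogue edges (the character names are borrowed from Pretty Woman for flavour, not taken from the paper's data):

```python
from collections import defaultdict

# Hypothetical dialogue edges per scene: (speaker, addressee)
scenes = [
    [("Vivian", "Edward"), ("Edward", "Vivian")],
    [("Edward", "Vivian"), ("Edward", "Stuckey")],
]

# Track one character's cumulative out-degree (lines spoken) per scene
history = []
out_deg = defaultdict(int)
for scene in scenes:
    for speaker, _ in scene:
        out_deg[speaker] += 1
    history.append(out_deg["Edward"])
```

Plotting such a history over the whole script is what lets the paper read a character's changing prominence (and, with in-degree, their inner nature) off the network.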

The Hangeul image's recognition and restoration based on Neural Network and Memory Theory (신경회로망과 기억이론에 기반한 한글영상 인식과 복원)

  • Jang, Jae-Hyuk;Park, Joong-Yang;Park, Jae-Heung
    • Journal of the Korea Society of Computer and Information
    • /
    • v.10 no.4 s.36
    • /
    • pp.17-27
    • /
    • 2005
  • This study proposes a neural network system for character recognition and restoration. The proposed system consists of a recognition part and a restoration part. In the recognition part, we propose an effective pattern-recognition model that improves the performance of the ART neural network by restricting unnecessary top-down frame generation and transition. A location-feature extraction algorithm that exploits Hangeul's structural features is also applied to recognition. In the restoration part, we build a model that restores the input image using a Hopfield neural network. We conducted separate experiments to check each part's performance. The experimental results show an improved recognition rate and demonstrate the feasibility of restoration.
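The restoration part relies on the classic Hopfield mechanism: store patterns via Hebbian weights, then iterate updates until a noisy input settles into the nearest stored pattern. A self-contained toy sketch with a single invented bipolar pattern (the paper works on full Hangeul images, not six-bit vectors):

```python
# Minimal Hopfield sketch: store one bipolar pattern, restore a noisy copy
pattern = [1, -1, 1, -1, 1, -1]
n = len(pattern)

# Hebbian weight matrix with zero diagonal: W[i][j] = p_i * p_j
W = [[0 if i == j else pattern[i] * pattern[j] for j in range(n)]
     for i in range(n)]

def recall(state, sweeps=5):
    """Iteratively update each unit by the sign of its weighted input."""
    s = list(state)
    for _ in range(sweeps):
        for i in range(n):
            h = sum(W[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

noisy = [1, -1, -1, -1, 1, -1]   # one bit flipped from the stored pattern
restored = recall(noisy)         # settles back to the stored pattern
```

With a single stored pattern the network corrects the flipped bit in one sweep; capacity limits (roughly 0.14n patterns for n units) are what make real image restoration harder.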


Motion generation using Center of Mass (무게중심을 활용한 모션 생성 기술)

  • Park, Geuntae;Sohn, Chae Jun;Lee, Yoonsang
    • Journal of the Korea Computer Graphics Society
    • /
    • v.26 no.2
    • /
    • pp.11-19
    • /
    • 2020
  • When a character's pose changes, its center of mass (COM) also changes. The movement of the COM has distinctive patterns corresponding to various motion types, such as walking, running, or sitting, so the motion type can be predicted from COM movement. We propose a motion generator that uses the character's center-of-mass information. This generator can generate various motions without annotated action-type labels, so datasets for training and inference can be generated fully automatically. Our neural network model takes the character's motion history and its center-of-mass information as inputs and generates a full-body pose for the current frame; it is trained as a simple convolutional neural network (CNN) that performs 1D convolution to handle time-series motion data.
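The COM input the model consumes is a mass-weighted average of body-part positions. A sketch with invented per-part masses and 2D positions (a real character rig would use the full 3D skeleton and standard segment-mass tables):

```python
# Hypothetical per-part masses (kg) and 2D positions for a simple pose
masses = [10.0, 30.0, 10.0]                     # head, torso, legs
positions = [(0.0, 1.7), (0.0, 1.1), (0.0, 0.5)]

# Center of mass = mass-weighted average of part positions
total = sum(masses)
com_x = sum(m * x for m, (x, _) in zip(masses, positions)) / total
com_y = sum(m * y for m, (_, y) in zip(masses, positions)) / total
```

Computed per frame, the (com_x, com_y) trajectory is the kind of time series the paper's 1D CNN takes as a motion cue.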

Optical Character Recognition for Hindi Language Using a Neural-network Approach

  • Yadav, Divakar;Sanchez-Cuadrado, Sonia;Morato, Jorge
    • Journal of Information Processing Systems
    • /
    • v.9 no.1
    • /
    • pp.117-140
    • /
    • 2013
  • Hindi is the most widely spoken language in India, with more than 300 million speakers. Because there is no separation between the characters of texts written in Hindi, as there is in English, Optical Character Recognition (OCR) systems developed for the Hindi language achieve a very poor recognition rate. In this paper we propose an OCR system for printed Hindi text in Devanagari script, using an Artificial Neural Network (ANN), which improves efficiency. One of the major reasons for the poor recognition rate is error in character segmentation; the presence of touching characters in scanned documents further complicates the segmentation process, creating a major problem when designing an effective character segmentation technique. Preprocessing, character segmentation, feature extraction, and, finally, classification and recognition are the major steps followed by a general OCR system. The preprocessing tasks considered in this paper are the conversion of grayscale images to binary images, image rectification, and segmentation of the document's textual contents into paragraphs, lines, words, and then basic symbols. The basic symbols, obtained as the fundamental units of the segmentation process, are recognized by a neural classifier. In this work, three feature extraction techniques (histogram of projection based on mean distance, histogram of projection based on pixel value, and vertical zero crossing) have been used to improve the recognition rate; these techniques are powerful enough to extract features even from distorted characters/symbols. For the neural classifier, a back-propagation neural network with two hidden layers is used. The classifier is trained and tested on printed Hindi texts, achieving a correct recognition rate of approximately 90%.
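The projection histograms used for segmentation are simply per-row (or per-column) ink counts; blank rows then mark the boundaries between text lines. A toy sketch on an invented binary strip (real segmentation must also handle the shirorekha headline and touching characters, which the paper addresses separately):

```python
# Hypothetical binary page strip: 1 = ink; an all-zero row separates lines
page = [
    [1, 1, 0, 1],
    [0, 1, 1, 1],
    [0, 0, 0, 0],   # gap row between two text lines
    [1, 0, 1, 0],
]

# Horizontal projection profile: ink pixels per row
profile = [sum(row) for row in page]

# Rows with zero ink are candidate line boundaries
boundaries = [i for i, v in enumerate(profile) if v == 0]
```

The same idea applied column-wise, together with the zero-crossing features, feeds the segmentation and feature-extraction stages described in the abstract.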

Study on 2D Sprite Generation Using the Impersonator Network

  • Yongjun Choi;Beomjoo Seo;Shinjin Kang;Jongin Choi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.7
    • /
    • pp.1794-1806
    • /
    • 2023
  • This study presents a method for capturing photographs of users as input and converting them into 2D character animation sprites using a generative adversarial network (GAN)-based artificial intelligence network. Traditionally, 2D character animations have been created by manually drawing an entire sequence of sprite images, which incurs high development costs. To address this issue, this study proposes a technique that combines motion videos and sample 2D images. In the proposed 2D sprite generation process, a sequence of images is extracted from real-life footage captured by the user and combined with character images from within the game. Our research leverages cutting-edge deep-learning-based image manipulation techniques, such as the GAN-based motion transfer network (impersonator) and background noise removal (U2-Net), to generate a sequence of animation sprites from a single image. The proposed technique thus enables the creation of diverse animations and motions from just one image, offering significant potential for boosting productivity and creativity in the game and animation industry through improved efficiency and streamlined production processes.