• Title/Summary/Keyword: Vision language model

Search results: 42

Deep-Learning Approach for Text Detection Using Fully Convolutional Networks

  • Tung, Trieu Son; Lee, Gueesang
    • International Journal of Contents / v.14 no.1 / pp.1-6 / 2018
  • Text, as one of the most influential inventions of humanity, has played an important role in human life since ancient times. The rich and precise information embodied in text is useful in a wide range of vision-based applications: text extracted from images can support automatic annotation, indexing, language translation, and assistance systems for impaired persons. Natural-scene text detection is therefore an important and active research topic in computer vision and document analysis. Previous methods perform poorly because they produce numerous false-positive and false-negative regions. In this paper, a fully convolutional network (FCN)-based method with a supervised architecture is used to localize textual regions. The model was trained directly on images, with pixel values as inputs and binary ground-truth masks as labels. The method was evaluated on the ICDAR-2013 dataset and proved comparable to other feature-based methods. It could expedite future research on deep-learning-based text detection.
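
The abstract describes pixel-level supervision: raw images in, binary text/non-text masks as labels, localized text regions out. A minimal sketch of such a fully convolutional segmentation setup in PyTorch (layer sizes, depth, and loss are illustrative assumptions, not the architecture from the paper):

```python
import torch
import torch.nn as nn

class TextFCN(nn.Module):
    """Minimal fully convolutional network: image in, per-pixel text score map out."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                 # downsample and extract features
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(                 # upsample back to input resolution
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 2, stride=2),   # 1-channel text/non-text logit map
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TextFCN()
images = torch.rand(4, 3, 128, 128)                   # batch of input images (pixel values)
masks = (torch.rand(4, 1, 128, 128) > 0.5).float()    # binary ground-truth text masks
loss = nn.BCEWithLogitsLoss()(model(images), masks)   # pixel-wise supervision
loss.backward()
```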

The influence of social capital on knowledge sharing behavior of mobile learners (사회적 자본이 이동학습자의 지식공유행위에 미치는 영향)

  • Qin, Ying; Lee, Kyeong-Rak; Lee, Sang-Joon
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.8 no.9 / pp.647-658 / 2018
  • Modern society is complex and rapidly changing, and knowledge sharing is needed to acquire and create knowledge. Knowledge sharing is the act of providing one's own information, knowledge, and know-how in order to cooperate with or help colleagues. This study presents a research model based on social capital theory to explain the mobile knowledge sharing behavior of virtual community members. Following previous studies, social capital is divided into structural, relational, and cognitive dimensions, operationalized as social interaction ties (structural), trust (relational), and shared language and shared vision (cognitive). After collecting survey data, factor analysis and regression analysis were performed using SPSS 22 to examine how these factors of social capital affect knowledge sharing behavior and how the level of knowledge sharing affects community promotion. The results showed that social interaction ties, shared language, shared vision, and trust affect knowledge sharing, and that knowledge sharing has a positive impact on community promotion.
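
The analysis regresses knowledge sharing on the social-capital factors (the paper uses factor analysis and regression in SPSS 22). A rough Python equivalent of the regression step with statsmodels, using hypothetical factor-score data, might look like this:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical factor scores per respondent (stand-ins for the survey constructs).
df = pd.DataFrame({
    "social_interaction_ties": [3.2, 4.1, 2.8, 4.5, 3.9, 3.3, 4.0, 2.6],
    "trust":                   [3.8, 4.0, 3.1, 4.6, 3.5, 3.2, 4.2, 2.9],
    "shared_language":         [3.5, 4.2, 2.9, 4.4, 3.7, 3.1, 4.1, 2.8],
    "shared_vision":           [3.6, 3.9, 3.0, 4.7, 3.8, 3.4, 4.3, 2.7],
    "knowledge_sharing":       [3.4, 4.3, 2.7, 4.8, 3.6, 3.2, 4.4, 2.5],
})

# Multiple linear regression of knowledge sharing on the four social-capital factors,
# analogous to the SPSS regression analysis described in the abstract.
X = sm.add_constant(df[["social_interaction_ties", "trust",
                        "shared_language", "shared_vision"]])
model = sm.OLS(df["knowledge_sharing"], X).fit()
print(model.summary())   # coefficients indicate each factor's estimated effect
```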

Text-to-Face Generation Using Multi-Scale Gradients Conditional Generative Adversarial Networks (다중 스케일 그라디언트 조건부 적대적 생성 신경망을 활용한 문장 기반 영상 생성 기법)

  • Bui, Nguyen P.; Le, Duc-Tai; Choo, Hyunseung
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.764-767 / 2021
  • While Generative Adversarial Networks (GANs) have seen huge success in image synthesis tasks, synthesizing high-quality images from text descriptions remains a challenging problem in computer vision. This paper proposes a method named Text-to-Face Generation Using Multi-Scale Gradients Conditional Generative Adversarial Networks (T2F-MSGGANs), which combines GANs with a natural language processing model to generate human faces whose features match the input text. The proposed method addresses two problems of GANs, mode collapse and training instability, by investigating how gradients at multiple scales can be used to generate high-resolution images. We show that T2F-MSGGANs converge stably and generate good-quality images.
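
The multi-scale-gradients idea referenced here has the generator emit images at several resolutions so that gradients reach every scale during training, with the text description supplied as a conditioning input. A toy PyTorch sketch of such a text-conditional, multi-scale generator (dimensions and the fusion scheme are assumptions, not the paper's T2F-MSGGANs):

```python
import torch
import torch.nn as nn

class MultiScaleTextGenerator(nn.Module):
    """Toy text-conditional generator that emits RGB outputs at 8x8, 16x16, and 32x32."""
    def __init__(self, z_dim=64, text_dim=32):
        super().__init__()
        self.fc = nn.Linear(z_dim + text_dim, 128 * 8 * 8)   # fuse noise + text embedding
        self.block16 = nn.Sequential(nn.Upsample(scale_factor=2),
                                     nn.Conv2d(128, 64, 3, padding=1), nn.ReLU())
        self.block32 = nn.Sequential(nn.Upsample(scale_factor=2),
                                     nn.Conv2d(64, 32, 3, padding=1), nn.ReLU())
        # One RGB head per scale; the discriminator sees all of them (the MSG-GAN idea).
        self.to_rgb8 = nn.Conv2d(128, 3, 1)
        self.to_rgb16 = nn.Conv2d(64, 3, 1)
        self.to_rgb32 = nn.Conv2d(32, 3, 1)

    def forward(self, z, text_emb):
        h8 = self.fc(torch.cat([z, text_emb], dim=1)).view(-1, 128, 8, 8)
        h16 = self.block16(h8)
        h32 = self.block32(h16)
        return self.to_rgb8(h8), self.to_rgb16(h16), self.to_rgb32(h32)

gen = MultiScaleTextGenerator()
z = torch.randn(2, 64)                 # noise vectors
text_emb = torch.randn(2, 32)          # embedding of the face description (assumed given)
img8, img16, img32 = gen(z, text_emb)  # multi-scale outputs passed jointly to the discriminator
```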

Three-Dimensional Convolutional Vision Transformer for Sign Language Translation (수어 번역을 위한 3차원 컨볼루션 비전 트랜스포머)

  • Horyeor Seong; Hyeonjoong Cho
    • The Transactions of the Korea Information Processing Society / v.13 no.3 / pp.140-147 / 2024
  • In the Republic of Korea, people with hearing impairments are the second-largest demographic within the registered disability community, following those with physical disabilities. Despite this, research on sign language translation technology remains limited for several reasons, including the small market size and the lack of adequately annotated datasets. Nevertheless, a few researchers continue to improve the performance of sign language translation by employing recent advances in deep learning such as the transformer architecture, since transformer-based models have demonstrated noteworthy performance in tasks such as action recognition and video classification. This study focuses on enhancing the recognition performance of sign language translation by combining transformers with a 3D-CNN. Through experimental evaluations on the PHOENIX-Weather-2014T dataset [1], we show that the proposed model exhibits performance comparable to existing models in terms of floating-point operations (FLOPs).
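
The described model couples a 3D-CNN front end, which extracts spatio-temporal features from the sign video, with a transformer that models their sequence. A minimal PyTorch sketch of that combination (sizes, pooling, and the classification head are illustrative assumptions, not the paper's configuration):

```python
import torch
import torch.nn as nn

class Conv3DTransformer(nn.Module):
    """3D-CNN extracts per-frame features from sign video; a transformer models their sequence."""
    def __init__(self, d_model=128, vocab_size=1000):
        super().__init__()
        self.cnn3d = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=(3, 3, 3), padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),        # keep the temporal axis, pool space away
        )
        self.proj = nn.Linear(32, d_model)
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.classifier = nn.Linear(d_model, vocab_size)  # per-step gloss/word scores

    def forward(self, video):                           # video: (batch, 3, frames, H, W)
        feats = self.cnn3d(video)                       # (batch, 32, frames, 1, 1)
        feats = feats.squeeze(-1).squeeze(-1).transpose(1, 2)  # (batch, frames, 32)
        h = self.encoder(self.proj(feats))              # temporal modelling with self-attention
        return self.classifier(h)

model = Conv3DTransformer()
clip = torch.rand(2, 3, 16, 64, 64)                     # 2 videos, 16 frames, 64x64 RGB
logits = model(clip)                                    # (2, 16, 1000) per-frame predictions
```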

Computer Vision Based Measurement, Error Analysis and Calibration (컴퓨터 시각(視覺)에 의거한 측정기술(測定技術) 및 측정오차(測定誤差)의 분석(分析)과 보정(補正))

  • Hwang, H.; Lee, C.H.
    • Journal of Biosystems Engineering / v.17 no.1 / pp.65-78 / 1992
  • When a computer vision system is used for measurement, a geometrically distorted input image usually restricts the location and size of the measuring window. Geometric distortion caused by the image sensing and processing hardware degrades the accuracy of the visual measurement and prevents arbitrary selection of the measuring scope, so image calibration is needed to improve measuring accuracy. Calibration is usually done in four steps: measurement, modeling, parameter estimation, and compensation. In this paper, an efficient error-calibration technique for geometrically distorted input images was developed using a neural network. After calibrating a unit pixel, the distorted image was compensated by training a CMLAN (Cerebellar Model Linear Associator Network) without modeling the behavior of any system element. The input/output training pairs for the network were obtained by processing images of a devised sampling pattern. The generalization property of the network successfully compensates for the distortion errors at arbitrary untrained pixel points in the image space. The error convergence of the trained network with respect to the network control parameters was also presented. The compensated image was then post-processed using a simple DDA (Digital Differential Analyzer) to avoid pixel disconnectivity, and the compensation effect was verified using geometric primitives of known size. A way to extract real-scale geometric quantities of an object directly from 8-directional chain coding was also devised and implemented. Since the developed calibration algorithm requires neither modeling of system elements nor parameter estimation, it can be applied to any image processing system; it efficiently enhances measurement accuracy and allows arbitrary sizing and positioning of the measuring window. The algorithms were implemented as a menu-driven program using MS-C Ver. 6.0, PC VISION PLUS library functions, and VGA graphics functions.
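
The compensation step amounts to learning a mapping from distorted pixel coordinates to their true positions from pairs sampled off a calibration pattern, then generalizing to untrained pixels. The paper uses a CMLAN for this; the sketch below substitutes a generic small regression network in PyTorch to illustrate the same idea (the coordinate pairs and network are invented for illustration):

```python
import torch
import torch.nn as nn

# Hypothetical training pairs from a sampled calibration pattern:
# distorted (observed) pixel coordinates -> true (reference) coordinates.
distorted = torch.tensor([[10., 12.], [100., 95.], [200., 210.], [300., 290.]])
reference = torch.tensor([[10., 10.], [100., 100.], [200., 200.], [300., 300.]])

# Small coordinate-correction network (a stand-in for the CMLAN used in the paper).
net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for _ in range(500):                      # fit the distortion-compensation mapping
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(distorted / 300.0), reference / 300.0)
    loss.backward()
    opt.step()

# Generalization: correct an arbitrary, untrained pixel location.
corrected = net(torch.tensor([[150., 160.]]) / 300.0) * 300.0
print(corrected)
```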

Active Vision from Image-Text Multimodal System Learning (능동 시각을 이용한 이미지-텍스트 다중 모달 체계 학습)

  • Kim, Jin-Hwa; Zhang, Byoung-Tak
    • Journal of KIISE / v.43 no.7 / pp.795-800 / 2016
  • In image classification, recent CNNs compete with human performance. However, there are limitations in more general recognition. Here we deal with indoor images that contain too much information to be processed directly and therefore require information reduction before recognition. To reduce the amount of data to be processed, variational inference or variational Bayesian methods are typically suggested for object detection; however, these methods suffer from the difficulty of marginalizing over the given space. In this study, we propose an image-text integrated recognition system using active vision based on Spatial Transformer Networks. The system attempts to efficiently sample a partial region of a given image conditioned on the given language information. Our experimental results demonstrate a significant improvement over traditional approaches. We also discuss the results of a qualitative analysis of the sampled images and the model's characteristics and limitations.
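
Spatial Transformer Networks make the region sampling differentiable: a localization module predicts affine parameters, and the image is warped through a sampling grid. A minimal PyTorch sketch of that sampling step, with an assumed joint image-language feature vector as the conditioning input:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionSampler(nn.Module):
    """Predicts affine parameters from a joint feature vector and crops/warps the image."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.loc = nn.Linear(feat_dim, 6)                 # localization net -> 2x3 affine theta
        # Initialize to the identity transform so training starts from the full image.
        self.loc.weight.data.zero_()
        self.loc.bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, image, joint_feat):
        theta = self.loc(joint_feat).view(-1, 2, 3)       # affine params per example
        grid = F.affine_grid(theta, image.size(), align_corners=False)
        return F.grid_sample(image, grid, align_corners=False)  # differentiable sampling

sampler = RegionSampler()
image = torch.rand(2, 3, 64, 64)
joint_feat = torch.rand(2, 64)        # assumed fusion of image and language features
patch = sampler(image, joint_feat)    # sampled region, same tensor shape as the input
```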

Speaker Detection and Recognition for a Welfare Robot

  • Sugisaka, Masanori; Fan, Xinjian
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2003.10a / pp.835-838 / 2003
  • Computer vision and natural-language dialogue play an important role in friendly human-machine interfaces for service robots. In this paper we describe an integrated face detection and face recognition system for a welfare robot, which has also been combined with the robot's speech interface. Our approach to face detection combines a neural network (NN) and a genetic algorithm (GA): the NN serves as a face filter while the GA is used to search the image efficiently. Once the face is detected, an embedded Hidden Markov Model (EHMM) is used to determine its identity. A real-time system has been created by combining the face detection and recognition techniques. When activated by the speaker's voice commands, it takes an image from the camera, finds the face inside the image, and recognizes it. Experiments in an indoor environment with complex backgrounds showed that a recognition rate of more than 88% can be achieved.
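
In the detection stage, the GA searches over candidate windows while the NN face filter scores them. A toy Python sketch of that search loop (the scoring function is a deterministic placeholder standing in for the trained face filter):

```python
import random

def face_filter_score(window):
    """Deterministic placeholder for the trained NN face filter: peaks for a window
    centred on (120, 120). The real system would score the window's pixel content."""
    x, y, size = window
    cx, cy = x + size / 2, y + size / 2
    return 1.0 / (1.0 + abs(cx - 120) + abs(cy - 120))

def ga_face_search(generations=20, pop_size=30, img_size=240):
    # Each individual is a candidate window (x, y, size) within the image.
    pop = [(random.randint(0, img_size - 20), random.randint(0, img_size - 20),
            random.randint(20, 120)) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=face_filter_score, reverse=True)
        parents = ranked[:pop_size // 2]                      # selection: keep the best half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = (a[0], b[1], (a[2] + b[2]) // 2)          # crossover of window parameters
            if random.random() < 0.2:                         # small random mutation
                child = (child[0] + random.randint(-10, 10),
                         child[1] + random.randint(-10, 10), child[2])
            children.append(child)
        pop = parents + children
    return max(pop, key=face_filter_score)                    # best face window found

print(ga_face_search())
```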

Image Caption Generation using Recurrent Neural Network (Recurrent Neural Network를 이용한 이미지 캡션 생성)

  • Lee, Changki
    • Journal of KIISE / v.43 no.8 / pp.878-882 / 2016
  • Automatic generation of captions for an image is a very difficult task because it requires both computer vision and natural language processing technologies. However, this task has many important applications, such as early childhood education, image retrieval, and navigation for the blind. In this paper, we describe a Recurrent Neural Network (RNN) model for generating image captions that takes as input image features extracted by a Convolutional Neural Network (CNN). We demonstrate that our models produce state-of-the-art results in image caption generation experiments on the Flickr 8K, Flickr 30K, and MS COCO datasets.
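
The model conditions an RNN language decoder on CNN image features and predicts the caption word by word. A minimal PyTorch sketch of that coupling (vocabulary size, feature dimension, and the GRU decoder are illustrative assumptions, not the paper's exact model):

```python
import torch
import torch.nn as nn

class CaptionRNN(nn.Module):
    """Generates a caption word by word, with CNN image features setting the initial hidden state."""
    def __init__(self, feat_dim=512, embed_dim=128, hidden_dim=128, vocab_size=5000):
        super().__init__()
        self.init_h = nn.Linear(feat_dim, hidden_dim)    # project CNN features to the RNN state
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image_feats, captions):
        h0 = torch.tanh(self.init_h(image_feats)).unsqueeze(0)  # (1, batch, hidden)
        emb = self.embed(captions)                               # (batch, seq, embed)
        hidden, _ = self.rnn(emb, h0)
        return self.out(hidden)                                  # next-word logits per step

model = CaptionRNN()
image_feats = torch.rand(4, 512)                 # features from a pretrained CNN (assumed given)
captions = torch.randint(0, 5000, (4, 12))       # token ids of the ground-truth captions
logits = model(image_feats, captions)            # trained with cross-entropy against shifted tokens
```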

Design of Fuzzy Controller Based on Empirical Knowledge (실험적 지식에 기초한 퍼지제어기 설계)

  • Bae, Hyeon; Kim, Sung-Shin; Kim, Hae-Gyun
    • Proceedings of the KIEE Conference / 2000.07d / pp.2296-2298 / 2000
  • Fuzzy control has been studied for application to industrial processes that have no accurate mathematical model and cannot be controlled by conventional methods because of a lack of quantitative input-output data. Compared with conventional methods, an intelligent control approach based on fuzzy logic can directly reflect human thinking and natural language in the controller. In this paper, a fuzzy controller is implemented to capture the operator's knowledge. The test system is constructed to send a ball to a goal position using airflow from two DC motors along the path. The system exhibits nonlinearity and uncertainty because of the aerodynamics inside the path. The ball position is measured by a vision camera. The system used in this experiment can hardly be modeled mathematically and cannot easily be controlled by linear control methods. The controller in this paper is therefore designed based on input-output data and experimental knowledge obtained through trials.
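
A fuzzy controller of this kind fuzzifies the measured position error into linguistic terms, applies operator-derived rules, and defuzzifies to a motor command. A tiny illustrative Python sketch with triangular membership functions (the terms, ranges, and rules are invented for illustration, not those of the paper):

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_motor_command(error):
    """error = goal position - measured ball position (from the vision camera)."""
    # Fuzzify the error into three linguistic terms.
    neg = tri(error, -40, -20, 0)
    zero = tri(error, -10, 0, 10)
    pos = tri(error, 0, 20, 40)
    # Rule base (invented): negative error -> blow left, zero -> hold, positive -> blow right.
    # Defuzzify with a weighted average of the rule outputs (centroid of singletons).
    weights, outputs = [neg, zero, pos], [-1.0, 0.0, 1.0]
    total = sum(weights)
    return sum(w * o for w, o in zip(weights, outputs)) / total if total else 0.0

print(fuzzy_motor_command(12.0))   # positive error -> positive (rightward) fan command
```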

Meme Analysis using Image Captioning Model and GPT-4

  • Marvin John Ignacio; Thanh Tin Nguyen; Jia Wang; Yong-Guk Kim
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.628-631 / 2023
  • We present a new approach to evaluating texts generated by Large Language Models (LLMs) for meme classification. Analyzing an image with embedded text, i.e., a meme, is challenging even for existing state-of-the-art computer vision models. By leveraging large image-to-text models, we can extract image descriptions that can be used in other tasks, such as classification. In our methodology, we first generate image captions using BLIP-2 models. We then use GPT-4 to evaluate the relationship between the caption and the meme text. The results show that the OPT-6.7B model provides better ratings than the other LLMs, suggesting that the proposed method has potential for meme classification.
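
The described pipeline captions the meme with BLIP-2 and then asks GPT-4 to judge how the caption relates to the meme's embedded text. A condensed sketch using the Hugging Face transformers BLIP-2 checkpoint and the OpenAI chat API (model names, file path, prompt, and rating format are assumptions, not the authors' exact setup):

```python
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
from openai import OpenAI

# 1) Caption the meme image with a BLIP-2 (OPT-2.7B) checkpoint.
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
captioner = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, device_map="auto")

image = Image.open("meme.jpg")                         # meme image (path is illustrative)
inputs = processor(images=image, return_tensors="pt").to(captioner.device, torch.float16)
caption = processor.decode(captioner.generate(**inputs)[0], skip_special_tokens=True)

# 2) Ask GPT-4 to judge how the caption relates to the meme's embedded text.
meme_text = "when the code compiles on the first try"  # transcribed meme text (assumed given)
client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": f"Image caption: {caption}\nMeme text: {meme_text}\n"
                          f"Rate from 1-5 how strongly the caption and the text relate, and explain."}],
)
print(reply.choices[0].message.content)
```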