• Title/Summary/Keyword: AI Voice Recognition Speaker


Cyber Threats Analysis of AI Voice Recognition-based Services with Automatic Speaker Verification (화자식별 기반의 AI 음성인식 서비스에 대한 사이버 위협 분석)

  • Hong, Chunho; Cho, Youngho
    • Journal of Internet Computing and Services / v.22 no.6 / pp.33-40 / 2021
  • Automatic Speech Recognition (ASR) is a technology that analyzes human speech into speech signals and automatically converts them into character strings understandable by humans. Speech recognition technology has evolved from recognizing a single word to recognizing sentences consisting of multiple words. In real-time voice conversation, a high recognition rate improves the convenience of natural information delivery and expands the scope of voice-based applications. On the other hand, as speech recognition technology is actively applied, concerns about related cyber attacks and threats are also increasing. According to existing studies, research on the technology itself, such as the design of Automatic Speaker Verification (ASV) techniques and accuracy improvement, is being actively conducted, but there are few in-depth and varied analyses of attacks and threats. In this study, we propose a cyber attack model that bypasses voice authentication by simply manipulating voice frequency and voice speed for AI voice recognition services equipped with automatic speaker verification, and we analyze cyber threats by conducting extensive experiments on the speaker verification systems of commercial smartphones. Through this, we intend to publicize the seriousness of the related cyber threats and raise interest in research on effective countermeasures.
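The abstract describes an attack that manipulates voice frequency and speed. As a minimal illustration (not the authors' exact method), naive resampling of a waveform changes its speed and shifts its pitch at the same time; a numpy sketch:

```python
import numpy as np

def change_speed(signal: np.ndarray, factor: float) -> np.ndarray:
    """Naively speed up (factor > 1) or slow down (factor < 1) a signal
    by linear-interpolation resampling; this also shifts the pitch."""
    n_out = int(len(signal) / factor)
    old_idx = np.linspace(0, len(signal) - 1, num=n_out)
    return np.interp(old_idx, np.arange(len(signal)), signal)

# A 440 Hz tone sampled at 16 kHz, sped up 1.25x: shorter and higher-pitched
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
fast = change_speed(tone, 1.25)
print(len(tone), len(fast))  # 16000 12800
```

A real attack or defense study would use proper time-stretching and pitch-shifting (e.g. phase-vocoder methods) rather than this coupled resampling, but the sketch shows why such transformations are cheap to apply.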

Artificial intelligence wearable platform that supports the life cycle of the visually impaired (시각장애인의 라이프 사이클을 지원하는 인공지능 웨어러블 플랫폼)

  • Park, Siwoong; Kim, Jeung Eun; Kang, Hyun Seo; Park, Hyoung Jun
    • Journal of Platform Technology / v.8 no.4 / pp.20-28 / 2020
  • In this paper, a voice, object, and optical character recognition platform comprising voice recognition-based smart wearable devices, smart devices, and a web AI server is proposed as an appropriate technology to help the visually impaired live independently by learning their life cycle in advance. The wearable device for the visually impaired was designed and manufactured with a reverse-neckband structure to increase wearing convenience and object recognition efficiency. The high-sensitivity small microphone and speaker attached to the wearable device were configured to support a voice recognition interface consisting of a smart device app linked to the wearable device. The voice, object, and optical character recognition services used open source software and Google APIs on the web AI server, and experimental results confirmed that the recognition accuracy of the service platform averaged 90% or more for voice, objects, and optical characters.


A Study on the Educational Uses of Smart Speaker (스마트 스피커의 교육적 활용에 관한 연구)

  • Chang, Jiyeun
    • Journal of the Korea Convergence Society / v.10 no.11 / pp.33-39 / 2019
  • Edutech, which combines education and information technology, is in the spotlight. Core technologies of the 4th Industrial Revolution have been actively used in education: students use AI-based learning platforms to self-diagnose their needs and get personalized training online through cloud learning platforms. Recently, a new educational medium called the smart speaker, which combines artificial intelligence and voice recognition technology, has emerged and provides various educational services. The purpose of this study is to suggest ways to use smart speakers educationally in order to overcome the limitations of existing education. To this end, the concept and characteristics of smart speakers were analyzed, implications were derived by analyzing the content provided by smart speakers, and the problems of using smart speakers were considered.

Design and Implementation of Context-aware Application on Smartphone Using Speech Recognizer

  • Kim, Kyuseok
    • Journal of Advanced Information Technology and Convergence / v.10 no.2 / pp.49-59 / 2020
  • As technologies develop, our lives are getting easier. Today we are surrounded by new technologies such as AI and IoT, and the word "smart" has become very broad because we are trying to change our daily environment into a smart one using those technologies; for example, traditional workplaces have changed into smart offices. Since the 3rd Industrial Revolution, we have used touch interfaces to operate machines; in the 4th Industrial Revolution, we are adding speech recognition modules so that machines can be operated by voice commands. Many devices now communicate with humans by voice: these so-called AI things carry out the tasks users request, and sometimes more. We use smartphones all day, every day, so privacy while using a phone is not always guaranteed; for example, the caller's voice can be heard through the phone speaker when a call is accepted. Therefore, privacy on the smartphone should be protected, and the protection should work automatically according to the user context. In this respect, this paper proposes a method to adjust the call volume on a smartphone according to the user context in order to protect privacy.
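The core idea of the paper, context-dependent call volume, can be sketched as a simple rule. The context fields and thresholds below are hypothetical, chosen only to illustrate the mechanism, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class UserContext:
    in_public: bool         # e.g. inferred from location or ambient noise
    headset_connected: bool

def call_volume(ctx: UserContext, base_volume: int = 80) -> int:
    """Return a speaker volume (0-100) for an incoming call,
    lowered in public settings so the caller's voice stays private."""
    if ctx.headset_connected:
        return base_volume                 # audio is already private
    if ctx.in_public:
        return max(base_volume - 50, 10)   # reduce leakage to bystanders
    return base_volume

print(call_volume(UserContext(in_public=True, headset_connected=False)))  # 30
```

A real implementation would derive the context from sensors (location, Bluetooth state, microphone levels) rather than hand-set flags, but the decision structure is the same.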

Customer Attitude to Artificial Intelligence Features: Exploratory Study on Customer Reviews of AI Speakers (인공지능 속성에 대한 고객 태도 변화: AI 스피커 고객 리뷰 분석을 통한 탐색적 연구)

  • Lee, Hong Joo
    • Knowledge Management Research / v.20 no.2 / pp.25-42 / 2019
  • AI speakers, which are wireless speakers with smart features, have been released by many manufacturers and adopted by many customers. Although smart features such as voice recognition, control of connected devices, and information provision are embedded in many mobile phones, the AI speaker sits in the home and plays the role of central entertainment and information provider. Many surveys have investigated the factors important to adopting AI speakers and the factors influencing satisfaction. Although most surveys on AI speakers are cross-sectional, customer attitudes toward AI speakers can be tracked longitudinally by analyzing customer reviews; however, there is little research on how customer attitudes toward AI speakers change. Therefore, in this study, we try to grasp how attitudes toward AI speakers change over time by applying text mining-based analysis. We collected customer reviews of the Amazon Echo, which has the highest share among AI speakers in the global market, from Amazon.com. Since the Amazon Echo already spans two generations, we can analyze the characteristics of the reviews and compare attitudes according to adoption time. We identified all subtopics of the customer reviews, specified the topics concerning smart features, analyzed how the share of each topic varied with time, and analyzed diverse metadata for comparison. The proportions of the topics for general satisfaction and satisfaction with music increased over time, while the proportions of the topics for music quality, speakers, and wireless speakers decreased. Although the proportions of the topics for smart features were similar over time, the share of those topics in positive reviews and their importance metrics were reduced for the 2nd-generation Amazon Echo. Even though smart features were mentioned similarly in the reviews, their influence on satisfaction decreased over time, especially for the 2nd-generation Amazon Echo.
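The abstract's core measurement, how each topic's share of reviews varies over time, is a simple proportion per period once each review has been assigned a topic. A stdlib sketch with hypothetical period and topic labels (the paper's actual topic model and labels are not reproduced here):

```python
from collections import Counter

def topic_share_by_period(reviews):
    """reviews: list of (period, topic) pairs, e.g. reviews already
    labeled by a topic model; returns each topic's share per period."""
    by_period = {}
    for period, topic in reviews:
        by_period.setdefault(period, Counter())[topic] += 1
    return {p: {t: c / sum(cnt.values()) for t, c in cnt.items()}
            for p, cnt in by_period.items()}

shares = topic_share_by_period([
    ("gen1", "sound_quality"), ("gen1", "smart_features"),
    ("gen2", "smart_features"), ("gen2", "smart_features"),
])
print(shares["gen2"]["smart_features"])  # 1.0
```

Comparing these per-period shares across the two Echo generations is what lets the study argue that attention shifted away from hardware topics toward general satisfaction.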

Positioning of Smart Speakers by Applying Text Mining to Consumer Reviews: Focusing on Artificial Intelligence Factors (텍스트 마이닝을 활용한 스마트 스피커 제품의 포지셔닝: 인공지능 속성을 중심으로)

  • Lee, Jung Hyeon; Seon, Hyung Joo; Lee, Hong Joo
    • Knowledge Management Research / v.21 no.1 / pp.197-210 / 2020
  • The smart speaker adds an AI assistant function to the existing portable speaker, enabling a person to give various commands by voice and providing various offline services associated with the control of connected devices. The speed of domestic distribution is also increasing, and the functions and linked services available through smart speakers are expanding to shopping and food orders. Text mining-based customer review analysis has produced many proposals for identifying the impact of product functions and attributes on customer attitudes, sentiment, and product evaluation: words corresponding to characteristics or features have been extracted from product reviews and their impact on evaluation analyzed; topics obtained from reviews have been related to evaluations; and the market competition of similar products has been visualized. Studies have also analyzed smart speaker user reviews through text mining to identify the main attributes, perform sentiment analysis, and measure the effects of artificial intelligence attributes on product satisfaction. The purpose of this study is to collect blog posts about users' experiences with smart speakers released in Korea and to analyze customer attitudes according to product attributes. Through this, customer attitudes can be identified and visualized for each smart speaker product, and a positioning map of the products was derived from customer perception of smart speakers by aggregating the information identified for each attribute.

The Effect of AI Agent's Multi Modal Interaction on the Driver Experience in the Semi-autonomous Driving Context : With a Focus on the Existence of Visual Character (반자율주행 맥락에서 AI 에이전트의 멀티모달 인터랙션이 운전자 경험에 미치는 효과 : 시각적 캐릭터 유무를 중심으로)

  • Suh, Min-soo; Hong, Seung-Hye; Lee, Jeong-Myeong
    • The Journal of the Korea Contents Association / v.18 no.8 / pp.92-101 / 2018
  • As interactive AI speakers become popular, voice recognition is regarded as an important vehicle-driver interaction method in autonomous driving situations. The purpose of this study is to confirm whether multimodal interaction, in which feedback is delivered through both the auditory mode and an on-screen visual AI character, is more effective for optimizing user experience than the auditory mode alone. Participants performed interaction tasks for music selection and adjustment through the AI speaker while driving, and we measured information and system quality, presence, perceived usefulness and ease of use, and continuance intention. The analysis showed no multimodal effect of the visual character on most user experience factors or on continuance intention; rather, the auditory-only mode was more effective than the multimodal mode for the information quality factor. In the semi-autonomous driving stage, which requires the driver's cognitive effort, multimodal interaction is not effective in optimizing user experience compared to single-mode interaction.

Perception of Virtual Assistant and Smart Speaker: Semantic Network Analysis and Sentiment Analysis (가상 비서와 스마트 스피커에 대한 인식과 기대: 의미 연결망 분석과 감성분석을 중심으로)

  • Park, Hohyun; Kim, Jang Hyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.10a / pp.213-216 / 2018
  • As the advantages of smart devices based on artificial intelligence and voice recognition become more prominent, the virtual assistant is gaining popularity. Virtual assistants provide a user experience through smart speakers and are valued by consumers as the most user-friendly IoT devices. The purpose of this study is to investigate whether there are differences in people's perceptions of the voice recognition of key virtual assistant brands. We collected tweets that included six keywords from three companies that provide virtual assistant services, conducted semantic network analysis on the collected datasets, and analyzed people's feelings through sentiment analysis. The results show that people's perceptions differ, mainly concerning the functions and services provided by the virtual assistant and the expectations and usability of those services. Also, people responded positively to most keywords.


Spontaneous Speech Emotion Recognition Based On Spectrogram With Convolutional Neural Network (CNN 기반 스펙트로그램을 이용한 자유발화 음성감정인식)

  • Guiyoung Son; Soonil Kwon
    • The Transactions of the Korea Information Processing Society / v.13 no.6 / pp.284-290 / 2024
  • Speech emotion recognition (SER) is a technique used to analyze a speaker's voice patterns, including vibration, intensity, and tone, to determine their emotional state. Interest in artificial intelligence (AI) techniques has increased, and they are now widely used in medicine, education, industry, and the military. Nevertheless, existing researchers have attained impressive results by using acted speech recorded by skilled actors in a controlled environment for various scenarios. There is a mismatch between acted and spontaneous speech, since acted speech includes more explicit emotional expressions than spontaneous speech; for this reason, spontaneous speech emotion recognition remains a challenging task. This paper aims to conduct emotion recognition and improve performance using spontaneous speech data. To this end, we implement deep learning-based speech emotion recognition using VGG (Visual Geometry Group) networks after converting 1-dimensional audio signals into 2-dimensional spectrogram images. The experimental evaluations are performed on the Korean spontaneous emotional speech database from AI-Hub, consisting of 7 emotions: joy, love, anger, fear, sadness, surprise, and neutral. As a result, we achieved average accuracies of 83.5% and 73.0% for adults and young people, respectively, using a time-frequency 2-dimensional spectrogram. In conclusion, our findings demonstrate that the suggested framework outperformed state-of-the-art techniques for spontaneous speech and showed promising performance despite the difficulty of quantifying emotional expression in spontaneous speech.
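The preprocessing step the abstract relies on, turning a 1-D audio signal into a 2-D time-frequency image, is a short-time Fourier transform. A minimal numpy sketch (frame sizes are illustrative, not the paper's settings):

```python
import numpy as np

def spectrogram(signal: np.ndarray, n_fft: int = 256, hop: int = 128) -> np.ndarray:
    """Magnitude spectrogram: frame the 1-D signal, apply a Hann window,
    and take the FFT of each frame, yielding a (time x frequency) image."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    # rfft of a real frame of length n_fft gives n_fft // 2 + 1 bins
    return np.abs(np.fft.rfft(frames, axis=1))

sr = 16000
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 440 * t)     # one second of a 440 Hz tone
S = spectrogram(sig)
print(S.shape)                        # (124, 129)
```

The resulting 2-D array (often log-scaled and resized) is what a CNN such as VGG consumes as an image.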

Analysis of unfairness of artificial intelligence-based speaker identification technology (인공지능 기반 화자 식별 기술의 불공정성 분석)

  • Shin Na Yeon; Lee Jin Min; No Hyeon; Lee Il Gu
    • Convergence Security Journal / v.23 no.1 / pp.27-33 / 2023
  • Digitalization due to COVID-19 has rapidly advanced artificial intelligence-based voice recognition technology. However, if datasets are biased against some groups, this technology causes unfair social problems, such as racial and gender discrimination, and degrades the reliability and security of artificial intelligence services. In this work, we compare and analyze accuracy-based unfairness in biased data environments using VGGNet (Visual Geometry Group Network), ResNet (Residual Neural Network), and MobileNet, which are representative CNN (Convolutional Neural Network) models. Experimental results show that ResNet34 achieved the highest Top-1 accuracy for women and men, at 91% and 89.9% respectively, while ResNet18 showed the smallest accuracy difference between genders, at 1.8%. Differences in accuracy between genders by model cause differences in service quality and unfair outcomes between men and women when using the service.
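The fairness measure compared in the abstract, per-group accuracy and the gap between groups, can be computed directly from predictions. A stdlib sketch with made-up example data (the paper's actual datasets and models are not reproduced):

```python
def group_accuracy_gap(labels, preds, groups):
    """Top-1 accuracy per group and the max pairwise gap between groups,
    a simple accuracy-based unfairness metric."""
    acc = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        acc[g] = sum(labels[i] == preds[i] for i in idx) / len(idx)
    gap = max(acc.values()) - min(acc.values())
    return acc, gap

labels = [1, 0, 1, 1, 0, 1]
preds  = [1, 0, 1, 1, 1, 0]
groups = ["f", "f", "f", "m", "m", "m"]
acc, gap = group_accuracy_gap(labels, preds, groups)
print(acc["f"], acc["m"], round(gap, 3))  # 1.0 0.3333333333333333 0.667
```

In the study's terms, ResNet18's 1.8% gender gap corresponds to a small `gap` value, while a model with high overall accuracy can still show a large gap and thus unequal service quality.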