• Title/Summary/Keyword: AI Tutors


An Inquiry into Prediction of Learner's Academic Performance through Learner Characteristics and Recommended Items with AI Tutors in Adaptive Learning (적응형 온라인 학습환경에서 학습자 특성 및 AI튜터 추천문항 학습활동의 학업성취도 예측력 탐색)

  • Choi, Minseon; Chung, Jaesam
    • Journal of Information Technology Services / v.20 no.4 / pp.129-140 / 2021
  • Recently, interest in AI tutors has been rising as a way to bridge educational gaps in school settings, yet research confirming their effectiveness is lacking. The purpose of this study is to explore how well learner characteristics and recommended-item learning activities predict learners' academic performance in an adaptive online learning environment. The study hypothesized that learner characteristics (prior knowledge, midterm evaluation) and recommended-item learning activities (learning time, correct answer check, incorrect answer correction, satisfaction, correct answer rate) predict academic achievement. To test the hypothesis, data from 362 learners were collected from the learning management system (LMS) from a learning-analytics perspective and analyzed by regression using the regsubsets function from the leaps package in R. The analyses showed that prior knowledge, midterm evaluation, correct answer check, incorrect answer correction, and satisfaction had a positive effect on academic performance, whereas learning time had a negative effect; correct answer rate had no significant effect. These results suggest that recommended-item learning activities, as behavioral indicators of interaction with AI tutors, are important in the learning-process stage for raising academic performance in an adaptive online learning environment.
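The best-subset selection this study performed with the leaps package in R can be sketched in Python for illustration. This is only a mock-up on synthetic data, not the study's analysis: the predictor names mirror those in the abstract, but the data are random stand-ins for the actual LMS records of the 362 learners.

```python
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the predictors named in the abstract;
# three of them are given a true effect on the outcome.
n = 362
names = ["prior_knowledge", "midterm", "learning_time",
         "answer_check", "answer_correction", "satisfaction", "correct_rate"]
X = rng.normal(size=(n, len(names)))
y = 2 * X[:, 0] + 1.5 * X[:, 1] - 0.8 * X[:, 2] + rng.normal(size=n)

def adjusted_r2(X_sub, y):
    """Fit OLS by least squares and return the adjusted R^2."""
    A = np.column_stack([np.ones(len(y)), X_sub])   # add intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    p = X_sub.shape[1]
    return 1 - (ss_res / (n - p - 1)) / (ss_tot / (n - 1))

# Exhaustive search over every non-empty predictor subset,
# keeping the subset with the highest adjusted R^2 --
# the same idea as best-subset selection in leaps.
best = max(
    ((subset, adjusted_r2(X[:, list(subset)], y))
     for k in range(1, len(names) + 1)
     for subset in combinations(range(len(names)), k)),
    key=lambda t: t[1],
)
print([names[i] for i in best[0]], round(best[1], 3))
```

With 7 predictors the search covers 127 candidate models; for the handful of predictors used in the study this brute-force approach is entirely feasible.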

Analysis of e-Learning Style Based on Learner Characteristics

  • In-Suk RYU; Jin-Gon SHON
    • Fourth Industrial Review / v.4 no.2 / pp.1-9 / 2024
  • Purpose: While most studies focus on learning styles in face-to-face education, research on online learning environments, especially by age in lifelong education, is limited. This study aims to propose a direction for online learning by analyzing digital literacy and e-learning learning styles by age in lifelong education. Research design, data and methodology: The study surveyed 100 online learners from an open university in Seoul. Using an e-learning learning-style test, frequency analysis was conducted by gender, age, and digital literacy, and a learning plan was then proposed based on the results. Results: The study found no age-related differences in digital literacy. Men and women showed similar proportions of environment-dependent and self-directed learning styles, reflecting the characteristics of online learners using digital devices. Conclusions: In lifelong education, e-learning design should accommodate diverse learning styles: web/app designs for environment-independent and self-directed learners, short/long formats for passive learners, real-time (Zoom)/non-real-time (LMS) systems for positive and cooperative learners, and AI/human tutors for environment-dependent and self-directed learners.

One-shot multi-speaker text-to-speech using RawNet3 speaker representation (RawNet3를 통해 추출한 화자 특성 기반 원샷 다화자 음성합성 시스템)

  • Sohee Han; Jisub Um; Hoirin Kim
    • Phonetics and Speech Sciences / v.16 no.1 / pp.67-76 / 2024
  • Recent advances in text-to-speech (TTS) technology have significantly improved the quality of synthesized speech, reaching a level where it can closely imitate natural human speech. In particular, TTS models that offer diverse voice characteristics and personalized speech are widely used in fields such as artificial intelligence (AI) tutors, advertising, and video dubbing. Accordingly, this paper proposes a one-shot multi-speaker TTS system that ensures acoustic diversity and synthesizes personalized voices by generating speech from unseen target speakers' utterances. The proposed model integrates a speaker encoder into a TTS model consisting of the FastSpeech2 acoustic model and the HiFi-GAN vocoder. The speaker encoder, based on the pre-trained RawNet3, extracts speaker-specific voice features. The proposed approach covers not only English but also Korean one-shot multi-speaker TTS. Naturalness and speaker similarity of the generated speech are evaluated with objective and subjective metrics. In the subjective evaluation, the proposed Korean one-shot multi-speaker TTS obtained a naturalness mean opinion score (NMOS) of 3.36 and a similarity MOS (SMOS) of 3.16. In the objective evaluation, the proposed English and Korean systems showed prediction MOS (P-MOS) scores of 2.54 and 3.74, respectively. These results indicate that the proposed model improves on the baseline models in both naturalness and speaker similarity.
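The integration described above, a speaker encoder whose utterance-level embedding conditions the acoustic model, can be sketched in outline. The following is an illustrative NumPy mock-up rather than the authors' implementation: `dummy_speaker_encoder` is a hypothetical stand-in for the pre-trained RawNet3, the dimensions are assumed, and the conditioning shown is the common scheme of adding a projected speaker embedding to every phoneme encoder state.

```python
import numpy as np

rng = np.random.default_rng(1)

EMB_DIM = 256    # assumed speaker embedding size
HID_DIM = 384    # assumed hidden size of a FastSpeech2-style phoneme encoder

def dummy_speaker_encoder(waveform: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for pre-trained RawNet3: maps one reference
    utterance to a fixed-size speaker embedding (here a random projection
    of average-pooled 64-sample frames)."""
    W = rng.normal(size=(EMB_DIM, 64))
    frames = waveform[: len(waveform) // 64 * 64].reshape(-1, 64)
    return W @ frames.mean(axis=0)

def condition_on_speaker(encoder_states, spk_emb, proj):
    """Broadcast-add a projected speaker embedding to every phoneme
    encoder state -- a common one-shot conditioning scheme."""
    return encoder_states + proj @ spk_emb   # (T, HID_DIM) + (HID_DIM,)

# One reference utterance from an unseen speaker (1 s at 16 kHz, random here)
ref_wave = rng.normal(size=16000)
spk_emb = dummy_speaker_encoder(ref_wave)

# Phoneme encoder output for a 50-phoneme input sentence
states = rng.normal(size=(50, HID_DIM))
proj = rng.normal(size=(HID_DIM, EMB_DIM)) / np.sqrt(EMB_DIM)
conditioned = condition_on_speaker(states, spk_emb, proj)
print(conditioned.shape)  # (50, 384)
```

In the full system these conditioned states would pass through FastSpeech2's variance adaptor and decoder to produce a mel-spectrogram, which HiFi-GAN then converts to a waveform; "one-shot" means only the single reference utterance above is needed for an unseen speaker.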