• Title/Summary/Keyword: Video-based learning environment


Exploration of Predictive Model for Learning Achievement of Behavior Log Using Machine Learning in Video-based Learning Environment (동영상 기반 학습 환경에서 머신러닝을 활용한 행동로그의 학업성취 예측 모형 탐색)

  • Lee, Jungeun;Kim, Dasom;Jo, Il-Hyun
    • The Journal of Korean Association of Computer Education
    • /
    • v.23 no.2
    • /
    • pp.53-64
    • /
    • 2020
  • As online learning centered on video lectures becomes more common and continues to grow, video-based learning environments that apply various educational methods are also changing and developing to enhance learning effectiveness. Learners' log data has emerged as a means of measuring educational effectiveness in online learning environments, and varied methods of analyzing log data are important for customized learning prescriptions. To this end, this study analyzed learner behavior data and predicted achievement using machine learning in a video-based learning environment. As a result, interactive behaviors such as video navigation and comment writing, as well as learner-led learning behaviors, predicted achievement in common across the models. Based on these results, the study provides implications for the design of video-based learning environments.
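The kind of model the abstract describes can be sketched with a plain logistic regression over behavior-log features. The two features (video navigation events, comments written) follow the interactive behaviors the study found predictive, but the synthetic numbers and the simple model are illustrative assumptions, not the paper's data or algorithm:

```python
import math
import random

# Synthetic behavior logs: each learner has a count of video navigation
# events and comments written; label 1 = high achievement, 0 = low.
random.seed(0)

def make_learner(high):
    nav = random.gauss(20 if high else 8, 3)   # video navigation events
    com = random.gauss(5 if high else 1, 1)    # comments written
    return [nav, com], 1 if high else 0

data = [make_learner(i % 2 == 0) for i in range(200)]

# Logistic regression trained by stochastic gradient descent (stdlib only).
w, b, lr = [0.0, 0.0], 0.0, 0.001
for _ in range(500):
    for x, y in data:
        z = w[0] * x[0] + w[1] * x[1] + b
        p = 1.0 / (1.0 + math.exp(-z))
        g = p - y                               # gradient of log-loss in z
        w[0] -= lr * g * x[0]
        w[1] -= lr * g * x[1]
        b -= lr * g

predict = lambda x: int(w[0] * x[0] + w[1] * x[1] + b > 0)
accuracy = sum(predict(x) == y for x, y in data) / len(data)
```

On well-separated synthetic data like this the fitted model recovers the achievement labels almost perfectly; the point is only the pipeline shape (log features in, achievement prediction out), not any particular accuracy figure.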

Exploring the Relationships Between Emotions and State Motivation in a Video-based Learning Environment

  • YU, Jihyun;SHIN, Yunmi;KIM, Dasom;JO, Il-Hyun
    • Educational Technology International
    • /
    • v.18 no.2
    • /
    • pp.101-129
    • /
    • 2017
  • This study attempted to collect learners' emotions and state motivation, analyze their inner states, and measure state motivation with a non-self-reported instrument. Emotions were measured for each learning segment in detailed learning situations and used to predict total state motivation; they were also used to explain state motivation by learning segment. The purpose of this study was to overcome the limitations of video-based learning environments by verifying whether the emotions measured during individual learning segments can indicate a learner's state motivation. Sixty-eight students participated in a 90-minute session in which their emotions and state motivation were measured, and emotions showed a statistically significant relationship with both total state motivation and motivation by learning segment. Although the result is tentative because this was an exploratory study, it is meaningful in showing that emotions during different learning segments may indicate state motivation.
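The segment-level relationship the study tests can be illustrated with a simple correlation between per-segment emotion scores and per-segment state motivation. The data and the single "valence" score below are hypothetical stand-ins, not the study's instruments or measurements:

```python
import math

# Hypothetical per-segment measurements: an emotion (valence) score for
# each learning segment and the learner's state motivation for the same
# segment (scales are illustrative).
emotion    = [0.2, 0.5, 0.7, 0.4, 0.9, 0.6, 0.3, 0.8]
motivation = [2.1, 3.0, 3.8, 2.6, 4.5, 3.4, 2.3, 4.1]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(emotion, motivation)
# A strong positive r would suggest that segment-level emotion can serve
# as an indicator of state motivation, mirroring the exploratory finding.
```

In practice the study relates emotions to motivation both per segment and in total; this sketch shows only the simplest per-segment version of that analysis.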

Accurate Pig Detection for Video Monitoring Environment (비디오 모니터링 환경에서 정확한 돼지 탐지)

  • Ahn, Hanse;Son, Seungwook;Yu, Seunghyun;Suh, Yooil;Son, Junhyung;Lee, Sejun;Chung, Yongwha;Park, Daihee
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.7
    • /
    • pp.890-902
    • /
    • 2021
  • Although object detection accuracy on still images has improved significantly with advances in deep learning, object detection on video data remains challenging due to real-time requirements and the accuracy drop caused by occlusion. In this research, we propose a pig detection method for video monitoring environments. First, we determine motion in video obtained from a tilted-down-view camera, based on the average size of each pig at each location in the training data, and extract key frames based on this motion information. For each key frame, we then apply YOLO, known for its superior trade-off between accuracy and execution speed among deep learning-based object detectors, to obtain the pigs' bounding boxes. Finally, we merge the bounding boxes between consecutive key frames to reduce false positives and false negatives. Experiments with a video dataset obtained from a pig farm confirmed that the pigs could be detected with an accuracy of 97% at a processing speed of 37 fps.
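The final merging step can be sketched as IoU matching across consecutive key frames: a box in the current key frame is kept only if a sufficiently overlapping box exists in the previous one, which suppresses one-frame false positives. The threshold and the keep-only-if-matched rule below are illustrative assumptions, not the paper's exact merging procedure:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def merge_keyframes(prev_boxes, cur_boxes, thr=0.3):
    # Keep a current detection only if it overlaps some previous detection.
    return [c for c in cur_boxes if any(iou(c, p) >= thr for p in prev_boxes)]

prev = [(10, 10, 50, 50), (100, 100, 140, 140)]
cur  = [(12, 11, 52, 49),        # persists across key frames -> kept
        (300, 300, 320, 320)]    # appears in a single frame -> dropped
kept = merge_keyframes(prev, cur)
```

A symmetric pass in the other direction (re-inserting previous boxes missing from the current frame) would address false negatives the same way.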

Two person Interaction Recognition Based on Effective Hybrid Learning

  • Ahmed, Minhaz Uddin;Kim, Yeong Hyeon;Kim, Jin Woo;Bashar, Md Rezaul;Rhee, Phill Kyu
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.2
    • /
    • pp.751-770
    • /
    • 2019
  • Action recognition is an essential task in computer vision due to its variety of prospective applications, such as security surveillance, machine learning, and human-computer interaction. The availability of more video data than ever before and the strong performance of deep convolutional neural networks also make action recognition in video essential. Unfortunately, limited hand-crafted video features and the scarcity of benchmark datasets make the multi-person action recognition task in video data challenging. In this work, we propose a deep convolutional neural network-based Effective Hybrid Learning (EHL) framework for two-person interaction classification in video data. Our approach exploits a pre-trained network model (VGG16 from the University of Oxford Visual Geometry Group) and extends Faster R-CNN (a state-of-the-art region-based convolutional neural network detector). We combine a semi-supervised learning method with an active learning method to improve overall performance. Numerous types of two-person interactions exist in the real world, which makes this a challenging task. In our experiments, we consider a limited number of actions, such as hugging, fighting, linking arms, talking, and kidnapping, in two environments: simple and complex. We show that our trained model with an active semi-supervised learning architecture gradually improves performance. In a simple environment, using the Intelligent Technology Laboratory (ITLab) dataset from Inha University, performance increased to 95.6% accuracy, and in a complex environment performance reached 81% accuracy. Our method reduces data-labeling time on the ITLab dataset compared to supervised learning methods. We also conduct extensive experiments on human action recognition benchmarks such as the UT-Interaction and HMDB51 datasets, and obtain better performance than state-of-the-art approaches.
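The active semi-supervised combination can be sketched as one loop: confident predictions on unlabeled clips become pseudo-labels (semi-supervised), while uncertain ones are routed to a human annotator (active learning). The one-dimensional threshold "model" and the confidence rule below are toy stand-ins for the paper's VGG16/Faster R-CNN pipeline:

```python
def active_semi_supervised(labeled, unlabeled, oracle, hi=0.3):
    """Grow the labeled set from unlabeled samples.

    labeled:   list of (feature, label) pairs
    unlabeled: list of features
    oracle:    callable simulating a human annotator
    hi:        confidence above which we trust the model's own label
    """
    # Toy "model": decision threshold at the mean of labeled features.
    thr = sum(x for x, _ in labeled) / len(labeled)
    queried = 0
    for x in unlabeled:
        confidence = abs(x - thr) / 10.0        # distance from the boundary
        if confidence >= hi:
            labeled.append((x, int(x > thr)))   # confident: pseudo-label
        else:
            labeled.append((x, oracle(x)))      # uncertain: ask the human
            queried += 1
    return labeled, queried

seed = [(0.0, 0), (10.0, 1)]
grown, queried = active_semi_supervised(seed, [1.0, 9.0, 5.5],
                                        oracle=lambda x: int(x > 5))
```

Only the boundary case (5.5) costs an annotation here; the clearly easy samples label themselves, which is the labeling-time saving the abstract reports.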

e-Learning Education System on Web

  • Choi, Sung;Han, Jung-Lan;Chung, Ji-Moon
    • Korea Digital Policy Association: Conference Proceedings (한국디지털정책학회:학술대회논문집)
    • /
    • 2004.11a
    • /
    • pp.283-294
    • /
    • 2004
  • Within the rapidly changing environment of the global economy, higher education in universities and companies has also been encountering various changes: the popularization of higher education in connection with lifelong learning, an emphasis on the productivity of education services, the acquisition of competitiveness through an open education market, the breakdown of the ivory tower and the multiversitization of universities and companies, the growing importance of obtaining information, and cooperation among domestic and overseas universities, industry, and the educational system. Therefore, in order to cope adequately with these rapid changes in the education environment, e-Learning education that utilizes various information technologies such as the Internet, e-mail, CD-ROMs, interactive video networks (video conferencing, video on demand), and cable TV, which have no time or location limitations, is needed.


Low-Light Invariant Video Enhancement Scheme Using Zero Reference Deep Curve Estimation (Zero Deep Curve 추정방식을 이용한 저조도에 강인한 비디오 개선 방법)

  • Choi, Hyeong-Seok;Yang, Yoon Gi
    • Journal of Korea Multimedia Society
    • /
    • v.25 no.8
    • /
    • pp.991-998
    • /
    • 2022
  • Recently, object recognition using image and video signals has been spreading rapidly in autonomous driving and mobile phones. However, actual input images and videos are easily exposed to poor illumination environments. Recent research on illumination enhancement makes it possible to estimate and compensate for illumination parameters. In this study, we propose VE-DCE (video enhancement zero-reference deep curve estimation) to improve the illumination of low-light video. The proposed VE-DCE uses an unsupervised, zero-reference deep curve, one of the latest learning-based estimation techniques. Experimental results show that the proposed method improves the quality of low-light video as well as images compared to the previous method. In addition, it reduces computational complexity with respect to the existing method.
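Zero-reference deep curve estimation brightens each pixel by iteratively applying a quadratic curve LE(x) = x + a·x·(1 − x) with x in [0, 1], where a network normally predicts a per-pixel coefficient a. The sketch below applies the curve family with a fixed scalar a purely for illustration of how the iteration lifts dark values while keeping the range bounded:

```python
def enhance(x, a=0.6, iterations=8):
    """Iteratively apply the light-enhancement curve LE(x) = x + a*x*(1-x).

    x is a pixel value in [0, 1]; a in (-1, 1] controls curvature.
    In DCE-style methods, a is predicted per pixel by a network; here it
    is a fixed scalar for illustration only.
    """
    for _ in range(iterations):
        x = x + a * x * (1.0 - x)
    return x

dark = [0.05, 0.1, 0.2]
bright = [round(enhance(v), 3) for v in dark]
```

Because LE maps [0, 1] into [0, 1] and is monotonically increasing for a in (0, 1], repeated application brightens shadows without clipping highlights, which is what makes the curve usable without paired reference images.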

Development of An Intelligent G-Learning Virtual Learning Platform Based on Real Video (실 화상 기반의 지능형 G-러닝 가상 학습 플랫폼 개발)

  • Jae-Yeon Park;Sung-Jun Park
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.24 no.2
    • /
    • pp.79-86
    • /
    • 2024
  • In this paper, we propose a virtual learning platform based on the various interactions that occur during real class activities, rather than the existing content-delivery-oriented learning metaverse platforms. This study provides a learning environment that combines AI and a virtual environment, in which learners solve problems by talking to a real-time AI. We also applied G-learning techniques to improve class immersion. The Virtual Edu platform developed through this study provides an effective learning experience combining self-directed learning, interest-stimulating game simulation, and the PBL teaching method, and we propose a new educational method that improves student participation and learning effectiveness. In our experiments, we tested performance on learning activities in a real-time video classroom and found that classes proceeded stably.

Design and Development of m-Learning Service Based on 3G Cellular Phones

  • Chung, Kwang-Sik;Lee, Jeong-Eun
    • Journal of Information Processing Systems
    • /
    • v.8 no.3
    • /
    • pp.521-538
    • /
    • 2012
  • As the knowledge society matures, not only distance-learning universities but also off-line universities are trying to provide learners with on-line educational contents. In particular, the high effectiveness of mobile devices for e-Learning has been demonstrated by the university sector, which uses distance learning based on blended learning. In this paper, we analyzed previous m-Learning scenarios and future technology prospects. Based on the proposed m-Learning scenario, we designed cellular-phone-based educational contents and a service structure, implemented an m-Learning system, and analyzed m-Learning service satisfaction. The design principles of the m-Learning service are: 1) to provide learners with an m-Learning environment on both cellular phones and desktop computers; 2) to serve announcements, discussion boards, Q&A boards, course materials, and exercises on cellular phones and desktop computers; and 3) to support learning activities such as reviewing full lectures, discussions, and writing term papers using desktop computers and cellular phones. The m-Learning service was developed on a cellular phone that supports the H.264 codec over 3G communication technology. Some functions of the m-Learning design principles were implemented on a 3G cellular phone. Lecture contents are provided in the form of video, text, audio, and video with text. One-way educational contents are complemented by exercises (quizzes).

Design and Implementation on-line Real-time Video Communication Learning System(RVCLS) for Web-based Project Learning (웹기반 프로젝트 학습을 지원하는 실시간 화상학습시스템의 설계 및 구현)

  • Choi, Gil-Su;Kim, Dong-Ho
    • Journal of The Korean Association of Information Education
    • /
    • v.7 no.1
    • /
    • pp.80-90
    • /
    • 2003
  • In this paper, we designed an on-line Real-time Video Communication Learning System (RVCLS) for Web-based project learning and developed programs for test groups. We also analyzed how Web-based learning using RVCLS changes students' perceptions of the classroom environment, using the WIHIC classroom environment questionnaire. As a result, the Web-based project learning activities using RVCLS had positive effects on eight areas of the classroom environment: student cohesiveness, teacher support, participation in class, spontaneity, exploration activities, task orientation, cooperative attitude, and equality. Web-based project learning using RVCLS is also expected to help students enhance their self-directed learning capacity and increase their ability to use ICT.


Design and Implementation of Problem-Based Learning System Based on Video Communication Technology (화상통신기술을 활용한 문제중심학습 시스템 설계 및 구현)

  • Kim, Bum-Shik;An, Sung-Hun;Kim, Dong-Ho
    • Journal of The Korean Association of Information Education
    • /
    • v.8 no.2
    • /
    • pp.167-176
    • /
    • 2004
  • Due to the development of information and communication technology, the educational environment has undergone much change, and various types of teaching and learning methods based on information communication technology have been suggested. Recently, remote education using the Internet has also been spreading. In current classrooms, however, students are given teacher-centered assignments in which they are required to collect and report information using the Internet. This method does not let students exploit the advantages of Internet-based learning, which should stimulate student-student and teacher-student interaction. Thus, this study focused on a problem-based learning system based on video communication technology. The researchers designed the problem-based learning system based on video communication technology and applied it to classes at an elementary school. The results were analyzed in terms of student-student and teacher-student interaction on the Internet. This research found that the problem-based learning system stimulates teacher-student communication and has positive effects on students' attitudes toward and interest in learning. It proposes that the traditional teacher-centered teaching method can be supplemented with cyberspace learning, which has the merits of the problem-based learning model.
