• Title/Summary/Keyword: Videos

Search Results: 1,564

The Current State and Legal Issues of Online Crimes Related to Children and Adolescents

  • Hyoung-ryul Kim
    • Journal of the Korean Academy of Child and Adolescent Psychiatry
    • /
    • v.34 no.4
    • /
    • pp.222-228
    • /
    • 2023
  • There are two categories of online crimes related to children and adolescents: those committed by adolescents and those committed against children and adolescents. While recent trends in criminal law show consensus on strengthening punishment for crimes against children and adolescents, there are mixed stances on juvenile delinquency: one perspective emphasizes strict punishment, whereas the other emphasizes dispositions aligned with human rights. While various forms of online crime share the commonality that the main part of the criminal act occurs online, they can be categorized into three types: those seeking financial gain, those driven by sexual motives, and those involving bullying. Among these, crimes driven by sexual motives are the most serious. Second-hand trading fraud and conditional (sexual) meeting fraud fall under the category of seeking financial gain and occur frequently. Crimes driven by sexual motives include obscenity via telecommunication, filming with hidden cameras, child and adolescent sexual exploitation material, fake video distribution, and blackmail/coercion using intimate images or videos ("sextortion"). These crimes raise various legal issues, such as whether the vulgar acronyms or "body cam" videos that teenagers frequently use should be viewed as mere subculture or as crimes, what criteria should be applied to judge whether a recorded material induces sexual desire or shame, and at what stage sexual grooming becomes punishable. For example, sniping posts, KakaoTalk prisons, and chat room explosions are tricky issues, as they may or may not be punished depending on the case. Particular caution should be exercised against the indiscriminate application of a strict punishment-oriented approach to the juvenile justice system, which is being discussed in relation to online sexual offenses. When punishing online crimes, juvenile offenders with a high potential for future improvement and reform must be treated with special consideration.

Multicontents Integrated Image Animation within Synthesis for High Quality Multimodal Video (고화질 멀티 모달 영상 합성을 통한 다중 콘텐츠 통합 애니메이션 방법)

  • Jae Seung Roh;Jinbeom Kang
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.4
    • /
    • pp.257-269
    • /
    • 2023
  • There is currently a burgeoning demand for image synthesis from photos and videos using deep learning models. Existing video synthesis models solely extract motion information from the provided video to generate animation effects on photos. However, these synthesis models encounter challenges in achieving accurate lip synchronization with the audio and maintaining the image quality of the synthesized output. To tackle these issues, this paper introduces a novel framework based on an image animation approach. Given a photo, a video, and an audio input, the framework produces an output that not only retains the unique characteristics of the individuals in the photo but also synchronizes their movements with the provided video and achieves lip synchronization with the audio. Furthermore, a super-resolution model is employed to enhance the quality and resolution of the synthesized output.
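
The framework above combines three stages: motion transfer from a driving video, lip synchronization from audio, and super-resolution of the result. The sketch below only illustrates that pipeline structure; the component classes and their interfaces are hypothetical placeholders, not the authors' models.

```python
import numpy as np

# Hypothetical placeholder components; in the paper these would be learned
# networks (image animation, lip-sync, and super-resolution models).
class MotionTransfer:
    def animate(self, photo: np.ndarray, driving_frames: list) -> list:
        # Warp the source photo to follow the motion in each driving frame (stub).
        return [photo.copy() for _ in driving_frames]

class LipSync:
    def sync(self, frames: list, audio: np.ndarray) -> list:
        # Regenerate the mouth region so it matches the audio (stub).
        return frames

class SuperResolution:
    def upscale(self, frames: list, scale: int = 2) -> list:
        # Naive nearest-neighbor upscaling as a stand-in for a learned SR model.
        return [np.kron(f, np.ones((scale, scale, 1))) for f in frames]

def synthesize(photo, driving_frames, audio):
    """Photo + driving video + audio -> higher-resolution talking video."""
    frames = MotionTransfer().animate(photo, driving_frames)
    frames = LipSync().sync(frames, audio)
    return SuperResolution().upscale(frames)

if __name__ == "__main__":
    photo = np.zeros((64, 64, 3), dtype=np.float32)
    driving = [np.zeros((64, 64, 3), dtype=np.float32) for _ in range(4)]
    audio = np.zeros(16000, dtype=np.float32)
    out = synthesize(photo, driving, audio)
    print(len(out), out[0].shape)  # 4 (128, 128, 3)
```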

An In-depth Analysis of Head-on Collision Accidents for Frontal Crash Tests of Automated Driving Vehicles (자율주행자동차 정면충돌평가방안 마련을 위한 국내 정면충돌사고 심층분석 연구)

  • Yohan Park;Wonpil Park;Seungki Kim
    • Journal of Auto-vehicle Safety Association
    • /
    • v.15 no.4
    • /
    • pp.88-94
    • /
    • 2023
  • The seating postures of passengers in automated driving vehicles may take atypical forms such as rear-facing and lying down. It is necessary to improve devices such as airbags and seat belts to protect occupants from injury in accidents involving automated driving vehicles, and collision safety evaluation tests must be newly developed. The purpose of this study is to define representative types of head-on collision accidents in order to develop collision standards for autonomous vehicles that take into account changes in driving behavior and occupants' postures. A total of 150 frontal collision cases were obtained by filtering (for accident videos, images, AIS 2+ injuries, passenger cars, etc.) and randomly sampling from approximately 320,000 accident claims filed with a major insurance company over the past 5 years. The most frequent accident type is a head-on collision between a vehicle going straight and a vehicle turning left from the opposite side, accounting for 54.7% of all accidents; most of these accidents occur during permissive left turns. The next most common frontal collision is a centerline violation caused by drowsy or careless driving, accounting for 21.3% of the total. For these two types, data such as vehicle speed, contact point/area, and PDOF at the moment of impact were obtained through accident reconstruction using PC-Crash. As a result, two autonomous vehicle crash safety test scenarios are proposed: (1) a frontal oblique collision test based on the accident type between a straight-going vehicle and a left-turning vehicle, and (2) a small overlap collision test based on head-on accidents involving centerline violations.
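
As a rough illustration of the filtering and random-sampling step described above, the pandas sketch below selects frontal-collision claims with video evidence and AIS 2+ injuries from a synthetic claims table; all column names and values are hypothetical, not the insurer's schema.

```python
import pandas as pd

# Synthetic stand-in for the insurer's claim database (columns are hypothetical).
claims = pd.DataFrame({
    "claim_id": range(1, 9),
    "collision_type": ["frontal", "rear", "frontal", "side",
                       "frontal", "frontal", "rear", "frontal"],
    "has_video": [True, True, False, True, True, True, False, True],
    "max_ais": [2, 1, 3, 2, 3, 2, 1, 4],
    "vehicle_class": ["passenger"] * 8,
})

# Filter: frontal collisions of passenger cars with video evidence and AIS 2+ injuries.
candidates = claims[
    (claims["collision_type"] == "frontal")
    & claims["has_video"]
    & (claims["max_ais"] >= 2)
    & (claims["vehicle_class"] == "passenger")
]

# Random sampling down to the analysis set (150 cases in the study; 3 here).
sample = candidates.sample(n=3, random_state=42)
print(sample)
```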

Nursing students' satisfaction and clinical competence by type of pediatric nursing practicum during the COVID-19 pandemic (코로나19 팬데믹 상황에서 간호대학생의 아동간호학 임상실습유형별 만족도, 학습만족도와 임상수행능력)

  • Ju, Hyeon Ok;Lee, Jung Hwa
    • The Journal of Korean Academic Society of Nursing Education
    • /
    • v.30 no.1
    • /
    • pp.29-38
    • /
    • 2024
  • Purpose: This study aimed to investigate student nurses' satisfaction by type of clinical practicum and to determine predictors of clinical competence in pediatric nursing. Methods: A total of 189 junior and senior student nurses across seven colleges in Busan Metropolitan City were enrolled in the study. The participants completed a structured questionnaire containing items about their learning satisfaction with different types of pediatric nursing practicums and their clinical competence. Data were analyzed using the mean, standard deviation, independent t-test, ANOVA, and multiple regression analysis. Results: Regarding satisfaction with each type of clinical practicum, the mean satisfaction score (out of 10) was 8.18±2.26 for on-site clinical rotations and 7.35±2.20 for alternative practicums. Among the alternative practicum approaches, those with a satisfaction score of 7 or higher included fundamental nursing skills, watching videos, and simulation, while those with a satisfaction score of less than 6 were virtual simulation and problem-based learning. The predictors of clinical competence in pediatric nursing were learning satisfaction with practice, school year, and alternative practicum, together accounting for 35.0% of the variance in clinical competence. Conclusion: It would be helpful to combine on-site clinical rotations with alternative practicum approaches and to develop various alternative practice programs using simulation practice, virtual reality, immersive interactive systems, and standardized patients to enhance students' clinical competence.
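
A minimal sketch of the multiple regression step reported above, using statsmodels on synthetic data; the variable names, coefficients, and generated values are illustrative only and do not reproduce the study's results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 189  # same sample size as the study; the data itself is synthetic

df = pd.DataFrame({
    "practice_satisfaction": rng.normal(8, 1.5, n),
    "school_year": rng.choice([3, 4], n),            # junior / senior
    "alternative_practicum": rng.integers(0, 2, n),  # 0 = on-site only, 1 = included
})
# Synthetic outcome loosely tied to the predictors.
df["clinical_competence"] = (
    2.0 + 0.3 * df["practice_satisfaction"] + 0.2 * df["school_year"]
    - 0.1 * df["alternative_practicum"] + rng.normal(0, 0.5, n)
)

X = sm.add_constant(df[["practice_satisfaction", "school_year", "alternative_practicum"]])
model = sm.OLS(df["clinical_competence"], X).fit()
print(model.summary())  # R-squared plays the role of the 35.0% explained variance
```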

Lip and Voice Synchronization Using Visual Attention (시각적 어텐션을 활용한 입술과 목소리의 동기화 연구)

  • Dongryun Yoon;Hyeonjoong Cho
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.4
    • /
    • pp.166-173
    • /
    • 2024
  • This study explores lip-sync detection, focusing on the synchronization between lip movements and voices in videos. Typically, lip-sync detection techniques involve cropping the facial area of a given video, utilizing the lower half of the cropped box as input for the visual encoder to extract visual features. To enhance the emphasis on the articulatory region of lips for more accurate lip-sync detection, we propose utilizing a pre-trained visual attention-based encoder. The Visual Transformer Pooling (VTP) module is employed as the visual encoder, originally designed for the lip-reading task, predicting the script based solely on visual information without audio. Our experimental results demonstrate that, despite having fewer learning parameters, our proposed method outperforms the latest model, VocaList, on the LRS2 dataset, achieving a lip-sync detection accuracy of 94.5% based on five context frames. Moreover, our approach exhibits an approximately 8% superiority over VocaList in lip-sync detection accuracy, even on an untrained dataset, Acappella.
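
A simplified sketch of how a lip-sync score can be computed from a window of context frames: embeddings from a visual encoder over the mouth region and from an audio encoder are compared with cosine similarity. The tiny encoders here are untrained placeholders, not the VTP module or VocaList.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

CONTEXT = 5  # number of context frames, as in the 5-frame setting above

class TinyVisualEncoder(nn.Module):
    """Placeholder for an attention-based visual encoder such as VTP."""
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Conv3d(3, 16, kernel_size=3, padding=1)
        self.fc = nn.Linear(16, dim)
    def forward(self, frames):                  # (B, 3, T, H, W)
        x = F.relu(self.conv(frames)).mean(dim=(2, 3, 4))
        return F.normalize(self.fc(x), dim=-1)

class TinyAudioEncoder(nn.Module):
    """Placeholder audio encoder over a mel-spectrogram window."""
    def __init__(self, dim=128):
        super().__init__()
        self.fc = nn.Linear(80 * 16, dim)
    def forward(self, mel):                     # (B, 80, 16)
        return F.normalize(self.fc(mel.flatten(1)), dim=-1)

def sync_score(frames, mel, venc, aenc):
    """Cosine similarity between visual and audio embeddings; higher = in sync."""
    return F.cosine_similarity(venc(frames), aenc(mel))

if __name__ == "__main__":
    frames = torch.randn(2, 3, CONTEXT, 48, 96)  # cropped mouth regions
    mel = torch.randn(2, 80, 16)
    print(sync_score(frames, mel, TinyVisualEncoder(), TinyAudioEncoder()))
```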

Kernel-Based Video Frame Interpolation Techniques Using Feature Map Differencing (특성맵 차분을 활용한 커널 기반 비디오 프레임 보간 기법)

  • Dong-Hyeok Seo;Min-Seong Ko;Seung-Hak Lee;Jong-Hyuk Park
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.13 no.1
    • /
    • pp.17-27
    • /
    • 2024
  • Video frame interpolation is an important technique in the video and media field, as it increases the continuity of motion and enables smooth playback. In deep learning-based video frame interpolation, kernel-based methods capture local changes well but have limitations in handling global changes. In this paper, we propose a new U-Net structure that applies feature map differencing and two directions to focus on capturing major changes, generating intermediate frames more accurately while reducing the number of parameters. Experimental results show that the proposed structure outperforms the existing model by up to 0.3 dB in PSNR with about 61% fewer parameters on common datasets such as Vimeo, Middlebury, and a new YouTube dataset. Code is available at https://github.com/Go-MinSeong/SF-AdaCoF.
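
The core idea of feature map differencing can be illustrated with a small PyTorch module: features are extracted from the two input frames and their difference is concatenated as an extra cue that highlights regions of change. This is a conceptual sketch under that assumption, not the SF-AdaCoF code linked above.

```python
import torch
import torch.nn as nn

class FeatureDiffBlock(nn.Module):
    """Extract per-frame features and append their difference as a change cue."""
    def __init__(self, in_ch=3, feat_ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Fuse [features of frame 0, features of frame 1, their difference].
        self.fuse = nn.Conv2d(feat_ch * 3, feat_ch, 3, padding=1)

    def forward(self, frame0, frame1):
        f0 = self.encoder(frame0)
        f1 = self.encoder(frame1)
        diff = f1 - f0                       # emphasizes moving / changing regions
        return self.fuse(torch.cat([f0, f1, diff], dim=1))

if __name__ == "__main__":
    x0 = torch.randn(1, 3, 64, 64)
    x1 = torch.randn(1, 3, 64, 64)
    print(FeatureDiffBlock()(x0, x1).shape)  # torch.Size([1, 32, 64, 64])
```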

A Research on Cylindrical Pill Bottle Recognition with YOLOv8 and ORB

  • Dae-Hyun Kim;Hyo Hyun Choi
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.2
    • /
    • pp.13-20
    • /
    • 2024
  • This paper introduces a method for generating model images that can identify specific cylindrical medicine containers in videos and investigates data collection techniques. Previous research separated object detection from specific object recognition, making it challenging to apply automated image stitching. A significant issue was that the coordinate-based object detection method included extraneous information from outside the object area during the image stitching process. To overcome these challenges, this study applies the newly released YOLOv8 (You Only Look Once) segmentation technique to videos of vertically rotating pill bottles and employs the ORB (Oriented FAST and Rotated BRIEF) feature matching algorithm to automate model image generation. The research findings demonstrate that applying segmentation improves recognition accuracy when identifying specific pill bottles, and the model images created with the feature matching algorithm could accurately identify the specific pill bottles.
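
A condensed sketch of the two-stage procedure described above, assuming the ultralytics package and OpenCV: YOLOv8 segmentation isolates the bottle region (excluding background pixels that would pollute matching), and ORB descriptors from that region are matched against a model image. File paths are placeholders, and the code assumes at least one bottle is detected.

```python
import cv2
import numpy as np
from ultralytics import YOLO

seg_model = YOLO("yolov8n-seg.pt")  # pretrained segmentation weights
frame = cv2.imread("frame_from_rotating_bottle_video.jpg")          # placeholder path
model_img = cv2.imread("pill_bottle_model_image.jpg", cv2.IMREAD_GRAYSCALE)

# 1) Segment the bottle so background pixels are excluded (assumes a detection exists).
result = seg_model(frame)[0]
mask = (result.masks.data[0].cpu().numpy() > 0.5).astype(np.uint8)
mask = cv2.resize(mask, (frame.shape[1], frame.shape[0]))
bottle_only = cv2.bitwise_and(frame, frame, mask=mask)
gray = cv2.cvtColor(bottle_only, cv2.COLOR_BGR2GRAY)

# 2) ORB feature matching between the segmented frame and the model image.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(gray, None)
kp2, des2 = orb.detectAndCompute(model_img, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} ORB matches; best distance {matches[0].distance:.1f}")
```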

Computer Vision-Based Car Accident Detection using YOLOv8 (YOLO v8을 활용한 컴퓨터 비전 기반 교통사고 탐지)

  • Marwa Chacha Andrea;Choong Kwon Lee;Yang Sok Kim;Mi Jin Noh;Sang Il Moon;Jae Ho Shin
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.29 no.1
    • /
    • pp.91-105
    • /
    • 2024
  • Car accidents occur as a result of collisions between vehicles, leading to both vehicle damage and personal and material losses. This study developed a vehicle accident detection model based on 2,550 image frames extracted from car accident videos uploaded to YouTube, captured by CCTV. To preprocess the data, bounding boxes were annotated using roboflow.com, and the dataset was augmented by flipping images at various angles. The You Only Look Once version 8 (YOLOv8) model was employed for training, achieving an average accuracy of 0.954 in accident detection. The proposed model holds practical significance by facilitating prompt alarm transmission in emergency situations. Furthermore, it contributes to the research on developing an effective and efficient mechanism for vehicle accident detection, which can be utilized on devices like smartphones. Future research aims to refine the detection capabilities by integrating additional data including sound.
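
For reference, fine-tuning and running a YOLOv8 detector on an annotated accident dataset follows the standard ultralytics workflow sketched below; the dataset YAML path, placeholder image, and hyperparameters are assumptions, not the exact settings used in the study.

```python
from ultralytics import YOLO

# Fine-tune a pretrained YOLOv8 model on the annotated accident frames.
# "accident_data.yaml" is a placeholder for the dataset exported from roboflow.com
# (train/val image folders plus class names such as "accident").
model = YOLO("yolov8n.pt")
model.train(data="accident_data.yaml", epochs=50, imgsz=640,
            flipud=0.5, fliplr=0.5)   # flip augmentation, as in the preprocessing above

# Evaluate, then run inference on a new CCTV frame (placeholder path).
metrics = model.val()
print(metrics.box.map50)              # mAP@0.5, comparable to the 0.954 reported
results = model.predict("cctv_frame.jpg", conf=0.5)
print(len(results[0].boxes), "accident regions detected")
```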

Training Feedback effect of team-based CPR using a mobile video recording device body camera (이동용 영상촬영기기 바디캠을 활용한 팀단위 심폐소생술의 교육피드백 효과)

  • Seong bin Im
    • Smart Media Journal
    • /
    • v.13 no.5
    • /
    • pp.62-71
    • /
    • 2024
  • This study conducted a team-based CPR simulation with 32 fourth-year emergency rescue students to determine the effectiveness of training feedback using body cameras employed at emergency rescue sites. Awareness, training feedback effectiveness, and satisfaction were measured before and after body camera feedback, and preferences and difficulties in using body camera devices were identified. Data were analyzed with the SPSS 27.0 program using descriptive statistics, frequency analysis, the paired t-test, and the Wilcoxon signed-rank test. As a result of the study, the perception of body camera use showed a positive change from 3.73±0.62 points to 4.45±0.54 points, and satisfaction was positive at 3.98±0.51 points (p<.001). Additionally, there was a significant increase in self-check accuracy and performance score after body camera feedback (p<.001). Therefore, during team-based simulation resuscitation training, positive feedback effects on self-check ability and performance can be achieved by watching body camera videos and using self-checklists, without direct feedback from the instructor.
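
A small SciPy sketch of the pre/post comparison reported above (paired t-test and Wilcoxon signed-rank test); the score vectors are synthetic stand-ins for the 32 students' data, generated only to show the analysis calls.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 32  # fourth-year emergency rescue students

# Synthetic pre/post perception scores on a 5-point scale (not the study data).
pre = np.clip(rng.normal(3.73, 0.62, n), 1, 5)
post = np.clip(pre + rng.normal(0.72, 0.30, n), 1, 5)

t_stat, t_p = stats.ttest_rel(pre, post)   # paired t-test
w_stat, w_p = stats.wilcoxon(pre, post)    # Wilcoxon signed-rank test
print(f"paired t-test: t={t_stat:.2f}, p={t_p:.4f}")
print(f"Wilcoxon: W={w_stat:.1f}, p={w_p:.4f}")
```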

Improving Accuracy of Chapter-level Lecture Video Recommendation System using Keyword Cluster-based Graph Neural Networks

  • Purevsuren Chimeddorj;Doohyun Kim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.7
    • /
    • pp.89-98
    • /
    • 2024
  • In this paper, we propose a system for recommending lecture videos at the chapter level, addressing the balance between accuracy and processing speed in chapter-level video recommendations. Specifically, it has been observed that enhancing recommendation accuracy reduces processing speed, while increasing processing speed decreases accuracy. To mitigate this trade-off, a hybrid approach is proposed, utilizing techniques such as TF-IDF, k-means++ clustering, and Graph Neural Networks (GNN). The approach pre-constructs clusters based on chapter similarity to reduce computational load during recommendation, thereby improving processing speed, and applies a GNN to a graph whose nodes are the clusters to enhance recommendation accuracy. Experimental results indicate that the use of the GNN resulted in an approximate 19.7% increase in recommendation accuracy, as measured by the Mean Reciprocal Rank (MRR) metric, and an approximate 27.7% increase in precision defined by similarity. These findings are expected to contribute to the development of a learning system that recommends more suitable video chapters in response to learners' queries.
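
The pre-clustering stage described above can be sketched with scikit-learn: chapter texts are embedded with TF-IDF and grouped with k-means++ so that, at query time, only the chapters in the closest cluster need to be scored. The GNN re-ranking over the cluster graph is omitted, and the chapter texts are toy examples, not the paper's dataset.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

# Toy chapter descriptions standing in for lecture-video chapters.
chapters = [
    "introduction to neural networks and perceptrons",
    "backpropagation and gradient descent",
    "graph neural networks and message passing",
    "tf-idf and keyword extraction for text",
    "k-means clustering and centroid initialization",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(chapters)

# Pre-construct clusters of similar chapters (k-means++ initialization).
kmeans = KMeans(n_clusters=2, init="k-means++", n_init=10, random_state=0).fit(X)

# At query time, score only the chapters in the closest cluster.
query = vectorizer.transform(["how does message passing work in graph neural networks"])
closest_cluster = kmeans.predict(query)[0]
in_cluster = [i for i, c in enumerate(kmeans.labels_) if c == closest_cluster]
scores = cosine_similarity(query, X[in_cluster]).ravel()
best = in_cluster[scores.argmax()]
print("recommended chapter:", chapters[best])
```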