• Title/Summary/Keyword: AI competency measurement (AI 역량 측정)


Development of checklist questions to measure AI capabilities of elementary school students (초등학생의 AI 역량 측정을 위한 체크리스트 문항 개발)

  • Eun Chul Lee;YoungShin Pyun
    • Journal of Internet of Things and Convergence
    • /
    • v.10 no.3
    • /
    • pp.7-12
    • /
    • 2024
  • The development of artificial intelligence technology is changing the social structure and educational environment, and the importance of AI competency continues to grow. This study was conducted to develop checklist questions for measuring the AI competency of elementary school students. To achieve this purpose, literature analysis and a Delphi survey were used to develop the questions. For the literature analysis, two domestic studies, five international studies, and the Ministry of Education's curriculum report were collected through a search. The collected data were analyzed to construct core competency measurement elements: understanding artificial intelligence (6 elements), artificial intelligence thinking (4 elements), artificial intelligence ethics (4 elements), and artificial intelligence social-emotion (3 elements). Considering the knowledge, skills, and attitudes of these measurement elements, 19 questions were developed. The developed questions were reviewed in the first Delphi survey, and 7 questions were revised according to the panel's comments. The validity of the 19 questions was then verified through the second Delphi survey. The checklist items developed in this study are scored by teacher evaluation based on performance and behavioral observation rather than by self-report questionnaire, which raises the reliability of the competency measurement results.
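Delphi-based item validation of the kind described above is often quantified with Lawshe's content validity ratio (CVR). The abstract does not name the statistic used, so the following Python sketch is an illustrative assumption about how such panel agreement is commonly scored, not the authors' actual procedure; the panel sizes are hypothetical.

```python
def content_validity_ratio(n_essential: int, n_panelists: int) -> float:
    """Lawshe's CVR = (n_e - N/2) / (N/2), ranging from -1 to +1.

    n_essential: panelists who rated the item 'essential'
    n_panelists: total panelists in the Delphi round
    A CVR near +1 indicates strong agreement that an item is valid.
    """
    half = n_panelists / 2
    return (n_essential - half) / half

# Hypothetical example: 17 of 20 Delphi panelists rate an item essential.
print(round(content_validity_ratio(17, 20), 2))
```

Items whose CVR falls below a panel-size-dependent cutoff would be revised or dropped between Delphi rounds, matching the revise-and-reverify flow the abstract describes.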

Development of checklist questions to measure AI core competencies of middle school students (중학생의 AI 핵심역량 측정을 위한 체크리스트 문항 개발)

  • Eun Chul Lee;JungSoo Han
    • Journal of Internet of Things and Convergence
    • /
    • v.10 no.3
    • /
    • pp.49-55
    • /
    • 2024
  • This study was conducted to develop checklist questions for measuring middle school students' AI competency. To achieve this goal, literature analysis and a Delphi survey for question development were used. For the literature analysis, two domestic studies, five international studies, and the Ministry of Education's curriculum report were collected through a search. The collected data were analyzed to construct core competency measurement elements: understanding of artificial intelligence (5 elements), artificial intelligence thinking (5 elements), utilization of artificial intelligence (4 elements), artificial intelligence ethics (6 elements), and artificial intelligence social-emotion (6 elements). Considering the knowledge, skills, and attitudes of these measurement elements, 31 questions were developed. The developed questions were reviewed in the first Delphi survey, and 10 questions were revised according to the panel's comments. The validity of the 31 questions was then verified through the second Delphi survey. The checklist items developed in this study are scored by teacher evaluation based on performance and behavioral observation rather than by self-report questionnaire, which raises the reliability of the measurement results.

The Study on Test Standard for Measuring AI Literacy

  • Mi-Young Ryu;Seon-Kwan Han
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.7
    • /
    • pp.39-46
    • /
    • 2023
  • The purpose of this study is to design and develop a test standard for measuring AI literacy. First, we selected the key areas of AI literacy through related studies and an expert FGI and designed the detailed standard. The test standard is divided into three categories: AI concepts, practice, and impact. To confirm the validity of the test standard, we conducted an expert validity test twice and then modified and supplemented the standard accordingly. The final AI literacy test standard consists of a total of 30 questions. The AI literacy test standard developed in this study can serve as an important tool for developing self-checklists or AI competency test questions for measuring AI literacy.

Designing the Framework of Evaluation on Learner's Cognitive Skill for Artificial Intelligence Education through Computational Thinking (Computational Thinking 기반 인공지능교육을 통한 학습자의 인지적역량 평가 프레임워크 설계)

  • Shin, Seungki
    • Journal of The Korean Association of Information Education
    • /
    • v.24 no.1
    • /
    • pp.59-69
    • /
    • 2020
  • The purpose of this study is to design a framework for evaluating learners' cognitive skills in artificial intelligence (AI) education through computational thinking. To design the rubric and framework for evaluating changes in learners' intrinsic thinking, the evaluation process was organized as a sequence of stages: a) agency, which assists cognitive learning during data collection; b) abstraction, which recognizes patterns in the data and performs categorization by decomposing the characteristics of the collected data; and c) modeling, which constructs algorithms based on data refined through abstraction. The framework was designed to cover not only the cognitive domain of learners' perceptions, learning, behaviors, and outcomes but also knowledge, competencies, and attitudes regarding learners' problem-solving processes and results, in order to evaluate changes in inherent cognitive learning in AI education. The results are meaningful in that the evaluation framework supports the development of individualized evaluation tools tailored to teaching and learning contexts and could serve as a standard across various areas of AI education in the future.

A Study on Development and Validation of Digital Literacy Measurement Tool (디지털 리터러시 측정도구의 개발 및 예측타당성 검증 연구)

  • Chung, Mi-hyun;Kim, Jaehyoun;Hwang, Ha-sung
    • Journal of Internet Computing and Services
    • /
    • v.22 no.4
    • /
    • pp.51-63
    • /
    • 2021
  • Recently, virtual communication has become a standard tool due to the outbreak of COVID-19, and online communication is emerging as an essential competency. In this study, we aimed to develop a comprehensive and systematic digital literacy measurement tool reflecting society's changes and needs. Construct variables were derived by characterizing existing digital literacy measurement tools, and thirty-four items corresponding to the concept of each variable were developed. The tool was then administered as a survey to university students belonging to the digital-native generation, and its reliability and validity were assessed through exploratory and confirmatory factor analysis. The final digital literacy measurement tool contained five sub-factors and twenty-five questions. In addition, hierarchical regression analysis was performed to verify the predictive validity of the digital literacy sub-factors. Based on these findings, implications for future research are discussed.

Analysis of Chemistry Teachers' Perceptions of AI Utilization in Education: Focusing on Participants in First-Grade Teacher Qualification Level Training (AI 활용 교육에 대한 화학 교사의 인식 분석 -1급 정교사 자격 연수 참여자를 중심으로-)

  • Sungki Kim
    • Journal of The Korean Association For Science Education
    • /
    • v.44 no.5
    • /
    • pp.511-518
    • /
    • 2024
  • This study investigated chemistry teachers' perceptions of the use of AI in education, focusing on their stages of concern, expected effects, and factors impeding implementation. Data were collected through a survey of 79 chemistry teachers who participated in first-grade teacher qualification training in 2024. The stages of concern were analyzed both overall and individually, and differences in stages of concern by background variables were examined using the Kruskal-Wallis H test. The expected effects were measured across seven aspects, and differences were analyzed using repeated-measures ANOVA with the Bonferroni method. Factors impeding implementation were examined through keyword analysis, distinguishing internal and external factors. The results showed that overall concern was relatively low, with informational concern (Stage 1) and unconcerned (Stage 0) high at 35.4% and 34.2%, respectively. Among active teachers, significant differences in stages of concern were observed depending on whether they had training experience (p<.05). The expected effects of AI in education differed significantly across the seven aspects (p<.05): teachers rated 'providing diverse learning experiences' highest, while 'enhancing understanding of scientific concepts', 'improving scientific inquiry skills', and 'cultivating scientific literacy' were rated relatively low (p<.05). Internal factors were found to impede implementation more than external factors, with key internal factors including 'resistance to change', 'lack of capability', and 'teachers' negative perceptions of AI in education'. Based on these findings, recommendations were made to enhance the implementation of AI in educational settings.
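The Kruskal-Wallis H test named above compares group medians via pooled ranks; in practice one would call `scipy.stats.kruskal`, but the statistic itself is small enough to show in pure Python. The sketch below omits the tie-variance correction and uses made-up groups, so it illustrates the computation rather than reproducing the study's analysis.

```python
def kruskal_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction).

    groups: two or more lists of observations.
    H = 12 / (N(N+1)) * sum(R_i^2 / n_i) - 3(N+1),
    where R_i is the rank sum of group i over the pooled sample.
    """
    pooled = sorted(x for g in groups for x in g)
    n = len(pooled)
    # Assign average ranks to tied values.
    rank = {}
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2  # mean of 1-based ranks i+1..j
        i = j
    return 12 / (n * (n + 1)) * sum(
        sum(rank[x] for x in g) ** 2 / len(g) for g in groups
    ) - 3 * (n + 1)

# Hypothetical concern scores for teachers with vs. without training.
print(round(kruskal_h([1, 2, 3], [4, 5, 6]), 3))
```

The resulting H is compared against a chi-squared distribution with (number of groups - 1) degrees of freedom to obtain the p-values the abstract reports.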

A Study on the Differentiation of Policy Instruments According to the Characteristic Factors of Apparel Sewing Micro Manufacturers Clusters in Seoul (서울시 의류봉제 소공인클러스터의 특성요인에 따른 정책수단 차별화에 관한 연구)

  • Young-Su Jung;Joo-Sung Hwang
    • Journal of the Economic Geographical Society of Korea
    • /
    • v.26 no.3
    • /
    • pp.238-255
    • /
    • 2023
  • In this study, we derived the clusters' characteristic factors as measurable variables and sought to clarify the characteristics of the apparel sewing agglomerations in Changsin-dong, Doksan-dong, and Jangwi-dong. Based on these results, a comparative analysis examined how demand for government support policy differs across agglomeration areas. Data were collected through face-to-face questionnaires with tenant companies in the three regions. The analysis identified Changsin-dong as an "innovative growth type," Doksan-dong as a "networking type," and Jangwi-dong as a "specialized localization type." The three agglomerations' policy demands also differed: Changsin-dong preferred capacity building, Doksan-dong preferred information provision, and Jangwi-dong favored benefit-type policy instruments. It was confirmed that even clusters in the same apparel sewing industry differ in formation process and characteristics, and consequently in their demand for policy instruments. Policy recommendations include understanding each agglomeration area's characteristics and policy demands through periodic fact-finding surveys, and establishing and implementing differentiated support policies matched to each area's characteristics.

Data-Driven Technology Portfolio Analysis for Commercialization of Public R&D Outcomes: Case Study of Big Data and Artificial Intelligence Fields (공공연구성과 실용화를 위한 데이터 기반의 기술 포트폴리오 분석: 빅데이터 및 인공지능 분야를 중심으로)

  • Eunji Jeon;Chae Won Lee;Jea-Tek Ryu
    • The Journal of Bigdata
    • /
    • v.6 no.2
    • /
    • pp.71-84
    • /
    • 2021
  • Because small and medium-sized enterprises have fallen short in securing technological competitiveness in big data and artificial intelligence (AI), core technologies of the Fourth Industrial Revolution, it is important to strengthen the competitiveness of the industry as a whole through technology commercialization. In this study, we aimed to propose priorities for technology transfer and commercialization so that public research results can be put to practical use. We utilized public research performance information, imputing missing 6T classification values with an ensemble of deep learning models. We then conducted topic modeling to derive the converging fields of big data and AI. Based on technology activity and technology efficiency, we classified the technology fields into four segments of a technology portfolio and estimated the commercialization potential of each field. We proposed a commercialization priority for 10 detailed technology fields that require long-term investment. Through this systematic analysis, active utilization of technology and efficient technology transfer and commercialization can be promoted.
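The two-axis portfolio described above places each technology field into one of four segments by its activity and efficiency scores. The quadrant labels and 0.5 cutoffs in this Python sketch are hypothetical placeholders, since the abstract does not state the paper's thresholds or segment names; it only shows the classification mechanic.

```python
def portfolio_segment(activity: float, efficiency: float,
                      act_cut: float = 0.5, eff_cut: float = 0.5) -> str:
    """Assign a technology field to a portfolio quadrant.

    activity, efficiency: normalized scores in [0, 1].
    Cutoffs and labels are illustrative assumptions, not the
    paper's actual taxonomy.
    """
    high_a, high_e = activity >= act_cut, efficiency >= eff_cut
    if high_a and high_e:
        return "promising: prioritize commercialization"
    if high_a:
        return "active but inefficient: improve transfer"
    if high_e:
        return "efficient niche: scale up activity"
    return "dormant: long-term investment"

# Two hypothetical fields from topic-modeling output.
print(portfolio_segment(0.8, 0.7))
print(portfolio_segment(0.2, 0.3))
```

Fields landing in the low-activity, low-efficiency quadrant correspond to the long-term-investment candidates for which the study proposes a priority ordering.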