• Title/Summary/Keyword: Rubric Assessment

Development of Rubric for Assessing Computational Thinking Concepts and Programming Ability (컴퓨팅 사고 개념 학습과 프로그래밍 역량 평가를 위한 루브릭 개발)

  • Kim, Jae-Kyung
    • The Journal of Korean Association of Computer Education, v.20 no.6, pp.27-36, 2017
  • Today, computational thinking courses are being introduced into elementary, secondary, and higher education curricula. It is important to foster creative talent built on the convergence of computational thinking and various major fields. However, analysis and evaluation of computational thinking assessment tools in higher education are currently insufficient. In this study, we developed a rubric to evaluate computational thinking skills in university classes from two perspectives: conceptual learning and practical programming training. Moreover, learning achievement and the relevance between theory and practice were assessed. The proposed rubric is based on the Computational Thinking Practices for assessing the higher education curriculum, and it is defined as a two-level structure consisting of four categories and eight items. The proposed rubric was applied to a liberal arts class at a university, and the results were discussed to guide future improvements.
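
The two-level structure described above (four categories containing eight items in total) can be sketched as a simple nested data structure. This is an illustration only: the category and item names and the 0-4 scale are hypothetical placeholders, not taken from the paper.

```python
# Illustrative sketch of a two-level rubric: top-level categories, each holding
# assessment items scored on a fixed scale. All names are hypothetical placeholders.
rubric = {
    "Concept understanding": ["Abstraction", "Decomposition"],
    "Algorithm design": ["Control structures", "Data representation"],
    "Implementation": ["Correctness", "Code readability"],
    "Evaluation": ["Testing and debugging", "Reflection"],
}  # 4 categories x 2 items = 8 items, matching the structure described above

def total_score(item_scores, max_per_item=4):
    """Sum a student's item scores and report them against the rubric maximum."""
    earned = sum(item_scores.values())
    possible = max_per_item * sum(len(items) for items in rubric.values())
    return earned, possible

# Example: one student's scores on a 0-4 scale (hypothetical values).
scores = {"Abstraction": 3, "Decomposition": 2, "Control structures": 4,
          "Data representation": 3, "Correctness": 4, "Code readability": 2,
          "Testing and debugging": 3, "Reflection": 4}
print(total_score(scores))  # (25, 32)
```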

Student Discussion or Expert Example? How to Enhance Peer Assessment Accuracy (동료평가 정확도 향상 방안의 비교: 평가 기준에 대한 학생들 간 토론 대 전문가 평가 사례 제시)

  • Park, Jung Ae;Park, Jooyong
    • Korean Journal of Cognitive Science, v.30 no.4, pp.175-197, 2019
  • Writing is an activity known to enhance higher-level thinking: it allows the writer to utilize, apply, and actively expand acquired knowledge. One way to increase writing activity in a classroom setting is to use peer assessment. In this study, we sought to increase the accuracy of peer assessment by having students discuss the scoring rubric or refer to an expert's assessment. One hundred and fifty college students participated in the experiment. In the group that referred to the expert's assessment, the accuracy of peer assessment increased when the same piece of writing was evaluated; however, no such increase was observed when another piece of writing was assessed. In the group that discussed the scoring rubric, on the other hand, the accuracy of peer assessment remained the same when the same piece of writing was evaluated but increased when another piece of writing was assessed. Also, in the discussion group, accuracy increased in proportion to the number of comments made during the discussion. The results suggest that active and voluntary participation by students increases the accuracy of peer assessment.
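
The abstract does not state how peer-assessment accuracy was quantified. As a hedged illustration only, one simple way to operationalize it is the mean absolute deviation between peer scores and the expert score for the same piece of writing (smaller deviation means higher accuracy); the metric and the numbers below are assumptions, not the authors' method.

```python
# Minimal sketch (an assumption, not the paper's reported method): peer-assessment
# accuracy as the mean absolute deviation of peer scores from the expert score.
def mean_absolute_deviation(peer_scores, expert_score):
    """Average absolute distance of peer scores from the expert's score."""
    return sum(abs(p - expert_score) for p in peer_scores) / len(peer_scores)

# Example: five peers score one essay on a 10-point rubric; the expert gave it a 7.
print(mean_absolute_deviation([6, 8, 7, 5, 9], expert_score=7))  # 1.2
```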

A Study on the Development of the Model for the Process-focused Assessment Using Manipulatives -Focused on Middle School Mathematics- (교구를 활용한 수학적 과정의 평가모델 개발에 관한 연구 -중학교 수학을 중심으로-)

  • Choi-Koh, Sang Sook;Han, Hye Sook;Lee, Chang Yean
    • Communications of Mathematical Education, v.27 no.4, pp.581-609, 2013
  • Students' learning processes and mathematical levels should be diagnosed correctly through a variety of assessment methods in order to help students learn mathematics. This study developed a model for process-focused assessment using manipulatives in middle school, targeting problem solving, reasoning, and communication, which the 2009 revised curriculum emphasizes as areas of mathematical process. After identifying the principles of assessment, we created an assessment model for each area and carried out a preliminary study. Based on this, we revised the representative items and the observation checklist and then conducted the main study. The assessment results showed that students' thinking processes were well captured by the scoring rubric for their responses to each item, indicating that the purpose of the assessment as a criterion-referenced test was achieved.

Developing Scoring Rubric and the Reliability of Elementary Science Portfolio Assessment (초등 과학과 포트폴리오의 채점기준 개발과 신뢰도 검증)

  • Kim, Chan-Jong;Choi, Mi-Aee
    • Journal of The Korean Association For Science Education, v.22 no.1, pp.176-189, 2002
  • The purpose of this study is to develop the major types of scoring rubrics for a portfolio system and to estimate the reliability of the rubrics developed. The portfolio system was developed by the Science Education Laboratory, Chongju National University of Education, in summer 2000. The portfolio is based on Unit 2, The Layer and Fossil, and Unit 4, Heat and Change of Objects, at the fourth-grade level. Four types of scoring rubrics were developed: holistic-general, holistic-specific, analytical-general, and analytical-specific. Students' portfolios were scored, and inter-rater and intra-rater reliability were calculated. To estimate inter-rater reliability, three elementary teachers per rubric (12 in total) scored 12 students' portfolios; teachers who used the analytical-specific rubric scored only six portfolios because it took much more time than the other rubrics. To estimate intra-rater reliability, a second scoring was administered by two raters per rubric after two and a half months. The results show that the holistic-general rubric has high inter-rater and moderate intra-rater reliability, the holistic-specific rubric shows moderate inter- and intra-rater reliability, the analytical-general rubric has high inter-rater and moderate intra-rater reliability, and the analytical-specific rubric shows high inter- and intra-rater reliability. The raters felt that the general rubrics were practical but not clear, while the specific rubrics provided clearer scoring guidelines but required more time and effort to develop. The analytical-specific rubric requires more than twice as much time to score each portfolio; it proved to be highly reliable but less practical.
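
The abstract reports inter-rater and intra-rater reliability without naming the statistic. As an illustration only, the sketch below uses the Pearson correlation between two raters' scores for the same portfolios, a common but assumed choice; the scores are invented.

```python
# Minimal sketch (assumed statistic and data): inter-rater reliability as the
# Pearson correlation between two raters' scores for the same six portfolios.
from statistics import correlation  # available in Python 3.10+

rater_a = [18, 22, 15, 30, 27, 21]  # hypothetical portfolio scores from rater A
rater_b = [17, 24, 14, 29, 25, 23]  # the same portfolios scored by rater B
print(round(correlation(rater_a, rater_b), 2))  # 0.95, i.e. high agreement
```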

A Study on the Development of Program Outcomes Assessment System using Reflection Journal (성찰저널을 활용한 프로그램 학습성과 평가체계 개발)

  • Lee, Youngtae;Lim, Cheolil
    • Journal of Engineering Education Research, v.16 no.3, pp.42-50, 2013
  • The main purpose of this study was to develop a program outcomes assessment tool using reflection journals. Reflection journals have recently gained attention in schools as an alternative assessment tool. Although numerous studies have reconfirmed the educational importance and value of reflection journals as an assessment tool, research on their use as an assessment tool for engineering accreditation from an educational perspective is scarce. After reviewing the literature on program outcomes assessment case studies and analyzing current assessment tools, this study examined the educational implications of reflection journals as a program outcomes assessment tool. The study proposes a reflection-journal-based assessment tool for PO6 (teamwork) and PO11 (engineering ethics), among the most important assessment items in engineering accreditation. We used performance criteria, assessment criteria, and a rubric, and closed the loop to measure teamwork and engineering ethics. The results of this study are significant as a guide for the future development of evaluation systems for program outcomes.

An implementation of performance assessment system based on academic achievement analysis for promotion of self-directed learning ability (자기주도적 학습능력 촉진을 위한 학업성취도 분석 기반의 수행평가 시스템 구현)

  • Kim, Hyun-Jeong;Choi, Jin-Seek
    • Journal of The Korean Association of Information Education, v.13 no.3, pp.313-323, 2009
  • The objective of this paper is to implement analysis and prediction functions that promote self-directed learning in a student performance assessment system for programming subjects. By adopting a rubric model, the proposed functions inform students of the assessment criteria and levels to be met with respect to two-way specifications such as rational ability, problem-solving ability, and creativity. The proposed system also presents graphical results for each ability rather than a single assessment result, so that students can better understand and analyze themselves based on the performance assessment and its outcomes. Moreover, the system includes a method to predict future achievement using a moving-average technique, so students can gauge their own academic achievement and steer their self-directed learning. Teachers can then provide different levels of educational resources, such as supplementary learning, problem explanations, and private instruction, to maximize the efficiency of education.
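
The abstract mentions predicting future achievement with a moving-average technique. Below is a minimal sketch of a simple, unweighted moving-average forecast over past rubric scores; the window size and score values are assumptions for illustration, not data from the paper.

```python
# Minimal sketch: forecast the next performance-assessment score as the mean of
# the last `window` scores (a simple, unweighted moving average).
def moving_average_forecast(scores, window=3):
    """Return the mean of the most recent `window` scores as the next-score forecast."""
    if not scores:
        raise ValueError("at least one past score is required")
    recent = scores[-window:]          # use at most the last `window` observations
    return sum(recent) / len(recent)

# Example: rubric scores from five past programming assignments (0-100 scale).
past_scores = [72, 75, 80, 78, 84]
print(moving_average_forecast(past_scores, window=3))  # (80 + 78 + 84) / 3 ≈ 80.67
```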

Development of Competence-based Assessment System for Lifelong Vocational Competency Development (CBAS-LVCD) (평생직업능력개발을 위한 역량기반 평가 시스템 개발)

  • Heo, Sun-Young;Im, Tami;Kwon, Oh-Young
    • Journal of Practical Engineering Education, v.10 no.1, pp.57-62, 2018
  • Recognition of the importance of lifelong vocational competency development, the proliferation of MOOCs, and interest in online education have all increased. As a result, efforts are continuously being made to develop education systems for lifelong vocational competency development. However, research on the design and development of competency-based evaluation tools and systems in the field of technology engineering is still insufficient. In this paper, we designed and implemented a Competency-based Assessment System for Lifelong Vocational Competency Development (CBAS-LVCD). CBAS-LVCD uses NCS-based rubric assessment tools to evaluate learners and provides simulation tools for use in technology engineering. This is expected to be of great help in assessing the competencies required for practical work in the field of technology engineering, where hands-on practice and online testing are limited.

Development of Framework and Rubric for Measuring Students' Level of Systems Thinking (학생들의 시스템 사고 수준 측정을 위한 Framework와 Rubric의 개발)

  • Lee, Hyonyong;Jeon, Jaedon;Lee, Hyundong
    • Journal of The Korean Association For Science Education, v.38 no.3, pp.355-367, 2018
  • The purposes of this study are 1) to identify systems thinking levels and their definitions, 2) to develop a framework for assessing systems thinking levels, and 3) to develop a rubric for scoring open-ended written-response tests. To achieve these purposes, a total of 60 articles were analyzed using a literature analysis framework. The systems thinking levels and definitions were identified from the results of this analysis. Based on them, the study derived a framework that includes the core ideas and assessment content of each level. In addition, a rubric for scoring open-ended response test items was revised and supplemented. A content validity test was conducted on the three tools developed in the study (the systems thinking levels and definitions, the framework for item development, and the rubric), with validity verified by seven science education experts. The content validity index (CVI) was above .95 for all three tools. Based on these results, the research will develop items that can measure students' level of systems thinking. The construct validity and criterion validity of the developed items should be verified systematically, and a validation study could be carried out for systems thinking measurement related to the core competencies emphasized in the 2015 revised curriculum.
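
The abstract reports a CVI above .95 from seven experts but does not detail the computation. As a hedged sketch, one common way to compute an item-level CVI (I-CVI) is the proportion of experts rating an item as relevant (3 or 4 on a 4-point relevance scale); the scale and ratings below are assumptions, not the study's data.

```python
# Minimal sketch (assumed procedure): item-level content validity index (I-CVI)
# as the proportion of experts who rate the item as relevant (>= threshold).
def item_cvi(ratings, relevant_threshold=3):
    """Proportion of experts rating the item as relevant on a 4-point scale."""
    relevant = sum(1 for r in ratings if r >= relevant_threshold)
    return relevant / len(ratings)

# Example: seven experts rate one rubric item on a 1-4 relevance scale.
expert_ratings = [4, 4, 3, 4, 4, 3, 4]
print(item_cvi(expert_ratings))  # 7/7 = 1.0
```

Under this computation the possible values with seven raters are multiples of 1/7, so an I-CVI above .95 would imply unanimous agreement on that item.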

A Study on the evaluation technique rubric suitable for the characteristics of digital design subject (디지털 디자인 과목의 특성에 적합한 평가기법 루브릭에 관한 연구)

  • Cho, Hyun Kyung
    • The Journal of the Convergence on Culture Technology, v.9 no.6, pp.525-530, 2023
  • Digital drawing subjects require the subdivision of evaluation elements and graduated evaluation, in line with recent moves toward curriculum innovation. The purpose of this paper is to present criteria for evaluating drawing and to propose them as a rubric evaluation. At the beginner level, the criteria are technical skills such as the accuracy and consistency of lines and the proportion and balance of the picture; at the intermediate level, the ability to use various brushes and tools effectively. The advanced level centers on creativity and originality: a new perspective or a unique interpretation of the given subject. In addition, as an understanding of design principles, completeness is evaluated with a focus on the ability to actively use the various functions of digital drawing software through design principles such as layout, color, and shape. The importance of introducing rubric evaluation is that it allows instructors to make objective and consistent evaluations; the key point of rubric evaluation in such art subjects is that it helps learners clearly grasp their strengths and weaknesses, so that through feedback on each item they can identify what needs to be improved and develop better drawing skills.

Development and Validation of the Strengths Assessment Indicators for Daycare Centers (어린이집 강점평가지표 개발 및 타당화)

  • Hong, Sung hee;Hwang, Hae-Ik
    • Korean Journal of Childcare and Education, v.14 no.6, pp.143-170, 2018
  • Objective: The purpose of this study was to develop strengths assessment indicators for daycare centers and to verify the validity and reliability of the developed indicators. Methods: A Delphi survey, focus group interviews, and expert content verification were conducted to develop the strengths assessment indicators. A main survey of 438 daycare center principals and teachers was then conducted to test item quality, validity, and reliability. Results: The final assessment indicators consisted of three areas, seven assessment criteria, 19 assessment elements, 41 assessment items, and a five-point rubric rating scale. The common strengths indicators comprised three assessment areas, five assessment criteria, 12 assessment elements, and 22 assessment items; the selective strengths indicators comprised three assessment areas, five assessment criteria, 12 assessment elements, and 16 assessment items. Conclusion/Implications: Efforts to confirm the strengths of daycare centers are expected to help each center build its own identity and to help its organizational members contribute to the qualitative improvement of childcare.
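
The abstract reports testing the reliability of the five-point rubric items without naming the statistic. As an illustration only, the sketch below computes Cronbach's alpha, a common internal-consistency estimate for such rating items; the statistic choice and the data are assumptions, not the study's results.

```python
# Minimal sketch (assumed statistic and toy data): Cronbach's alpha for a set of
# five-point rubric items rated by several respondents.
def cronbach_alpha(item_scores):
    """item_scores: one list of respondent ratings per item (equal lengths)."""
    def variance(xs):  # population variance
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / len(xs)

    k = len(item_scores)                                 # number of items
    n = len(item_scores[0])                              # number of respondents
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    item_var_sum = sum(variance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Example: three rubric items rated 1-5 by five respondents (hypothetical values).
items = [[4, 5, 3, 4, 5],
         [4, 4, 3, 5, 5],
         [5, 5, 2, 4, 4]]
print(round(cronbach_alpha(items), 2))  # ~0.81 with this toy data
```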