INTRODUCTION
Assessment in a school setting is a tool with which teachers can measure students’ current abilities and, at the same time, a source of information about students that enables teaching at a higher level. For students, assessment allows them to recognize their own abilities and reconfirm their goals for a class, which helps them grasp the importance of the content taught in class. Consequently, students’ learning attitudes may be shaped by what an assessment measures.1
Previous systems, based on multiple-choice and short-answer items, mainly evaluated the memorization of fragmentary factual knowledge, whereas descriptive assessment, adopted in the second semester of 2005, was intended to promote higher mental functions in students, such as creative problem-solving skills. Such a change in the assessment system has the positive effect of encouraging a higher level of learning rather than the memorization of simple concepts. Hence, the government has increased the extent to which descriptive assessment is required, expecting this to improve students’ learning in school.2−4
The government is politically enforcing the use of descriptive assessment in schools, although previous studies have shown that the purpose of descriptive assessment has not been properly realized in schools despite the surface appearance that the system is working stably.5−10 When a change to the educational system as innovative as descriptive assessment is first introduced, a common but serious mistake that administrative authorities make in understanding teachers’ acceptance and use of such new education programs is to assume that full implementation has occurred once the introduction and early training stages are complete.11 In such cases, the new education system is not used in accord with its initial planning and expectations, and teachers may not fully understand the program’s goals. Implementing a program under these conditions is undesirable from the perspective of fidelity, and no benefit is gained from introducing a new program that does not meet its goal.12−15 Ultimately, what determines the success of a new education program is not only the program itself but also, most importantly, the role of the teachers who are on the front line in delivering it to students.10,16 Therefore, teachers, the final deliverers of education programs, are a significant variable in successfully establishing a new program in schools.
Studies of teachers’ stages of concern (SoC) and levels of use (LoU) when a new educational program is introduced have a long history. Fuller found that teachers’ SoC significantly affect the use of new education programs.17 Ornstein and Hunkins claimed that the key reason for stagnancy in the field of education, despite the introduction of numerous innovative education programs, is the failure to meet the goals of these new programs owing to a lack of change in teachers.18
Hall and Rutherford pointed out that a new education program can be successfully implemented when SoC for the program are analyzed and proper support is provided. They considered teachers the most important factor in implementing an educational system and developed the concerns-based adoption model (CBAM).11 Here, a concern is a complex expression of the feelings, thoughts, and effort that teachers direct toward the new education program; each teacher perceives the program differently depending on their ideas, knowledge, and experience. CBAM presents teachers’ SoC and LoU toward a new education program in stages and focuses on determining how much concern teachers, as users of the program, have and on presenting guidelines accordingly. In other words, rather than simply measuring teachers’ SoC and LoU for a new program, CBAM places emphasis on analyzing them and providing teachers with proper support accordingly.19 A number of domestic and international studies have analyzed teachers’ SoC and LoU of education programs based on CBAM, and such studies are still ongoing.19−24
Previous studies of descriptive assessment have mostly focused on the development of assessment tools or on students’ and teachers’ perceptions of descriptive assessment.25−28,31,32 A previous study on descriptive assessment in science examined the questions used in these assessments and analyzed the factors underlying their problems.10 However, it did little to encourage teachers to understand the original goal of descriptive assessment or to promote their professional growth regarding the program, which would guide them toward higher LoU. Thus, this study aims to analyze science teachers’ SoC and LoU of descriptive assessment using CBAM, to examine the current status of science teachers with respect to descriptive assessment, and to help them advance to a higher level by providing guidelines. CBAM was developed for new education policies; however, the criterion that should define an education system as being new, whether the time since its introduction or its familiarity to users, has been controversial.33 Hall and George argued that a new education system can be radical and new for everyone but can also open up discussion of existing programs.34 They also noted that an existing program can be a subject of study when teachers’ SoC and LoU need to be determined. Hord et al. claimed that the term innovation is chosen to show that a program is implemented to bring about change.35 Thus, although it is still gaining attention as an alternative assessment, descriptive assessment that is not achieving its original purpose can be analyzed with CBAM, as shown by previous studies.
Therefore, we analyzed science teachers’ SoC and LoU of descriptive assessment and performed statistical analysis to determine the variables affecting SoC and LoU. We then obtained the Spearman correlation coefficient to determine the correlation between SoC and LoU. Thus, we aim to search for practical measures that can enhance science teachers’ LoU of descriptive assessment.
Methods
Study subjects
In this study, in order to examine science teachers’ SoC toward and LoU of descriptive assessment, a survey was conducted of teachers who had taken science-related lectures at the K University graduate school in Chungcheongbukdo during the first semester of 2015. Of the 154 questionnaires collected, 16 lacked consistency in their answers or had missing answers and were excluded, leaving a total of 138 questionnaires for analysis. The backgrounds of the responding teachers are shown in Table 1.
Table 1. Teachers’ backgrounds (N = 138)
Study tools
Questionnaire measuring teachers’ SoC: The questionnaire measuring teachers’ SoC about descriptive assessment used in this study was translated and modified from the SoC questionnaire developed by Hall et al. to fit the current status of the education field.29 Typical expressions of teachers’ SoC toward descriptive assessment are shown in Table 2.
Table 2. Typical expressions of SoC
For each question, respondents were asked to mark the score from 0 to 7 that best described their situation. The questionnaire contains a total of 35 questions, with 5 questions allocated to each stage from Stage 0 (unconcerned) to Stage 6 (refocusing). To test the content validity of the questionnaire, two experts, each holding a doctoral degree in science education, examined the validity of each question, and a first round of modifications was made based on the results. A second round of modifications was then made after a preliminary test was conducted with 7 teachers and their opinions were collected. Cronbach’s α ranged from .681 to .863 for the individual stages and was .923 for all questions.
Table 3. Question categories
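For reference, internal consistency of this kind can be computed directly from the item responses. The following is a minimal sketch (not the authors’ analysis script), assuming the 138 × 35 item responses are stored in a pandas DataFrame with hypothetical column names.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of items (columns) answered by respondents (rows)."""
    items = items.dropna()
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical usage: reliability of the five items assumed to belong to Stage 0,
# then of the full 35-item questionnaire.
# responses = pd.read_csv("socq_responses.csv")   # hypothetical file, 138 x 35 items
# print(round(cronbach_alpha(responses[["q01", "q02", "q03", "q04", "q05"]]), 3))
# print(round(cronbach_alpha(responses), 3))
```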
Questionnaire measuring LoU of descriptive assessment: For the questionnaire measuring LoU of descriptive assessment, the 8 LoU proposed by Hall et al. were used.29 CBAM is a diagnostic tool that describes the behaviors of users involved in the implementation of an educational program. It was developed to elucidate what users are actually doing and provides useful information to sponsors of the education system.11 Although the scale comprises 8 LoU, some studies have removed the non-use (Level 0), orientation (Level I), preparation (Level II), and renewal (Level VI, in which an alternative program is implemented through improvements) levels, because the Korean education system is run at the national level and implementation of these education programs is mandatory. However, these levels were retained in this study in order to accurately examine teachers’ LoU in actual school settings. Each level was described in a sentence, and respondents could mark only the one item that best described them. Typical expressions of each LoU are shown in Table 4.
Table 4. Typical expressions of LoU
Data processing
SoC analysis: Using the collected questionnaires, a total score for each SoC stage was obtained by adding the scores of the 5 questions assigned to that stage according to the SoC scoring device. Those raw totals were then converted to the relative intensity (percentile) scores presented on the score sheet and analyzed using the peak stage score interpretation method proposed in CBAM.11 According to this interpretation, a higher SoC score indicates greater teacher concern at the corresponding stage, whereas a lower score indicates less concern. However, “higher” and “lower” in this context are not absolute, but rather relative to the other SoC stages for each individual.
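As an illustration of this scoring step, the sketch below converts a teacher’s raw stage totals into percentile (relative intensity) scores with a lookup chart; the chart values and names are placeholders, since the actual conversion table is the one published with the SoCQ scoring device and is not reproduced here.

```python
# Placeholder conversion chart: {raw total -> percentile} per stage.
# The real chart comes from the SoCQ scoring device and differs by stage.
CONVERSION_CHART = {
    stage: {0: 5, 5: 25, 10: 45, 15: 65, 20: 80, 25: 90, 30: 96, 35: 99}
    for stage in range(7)
}

def relative_intensity(stage: int, raw_total: int) -> int:
    """Map a raw stage total (sum of its 5 items, 0-35) to a percentile score
    using the nearest chart entry at or below the raw total."""
    chart = CONVERSION_CHART[stage]
    key = max(k for k in chart if k <= raw_total)
    return chart[key]

# Hypothetical usage for one teacher whose Stage 0 items sum to 23:
# print(relative_intensity(0, 23))   # -> 80 with the placeholder chart
```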
Based on the peak stage score interpretation, the SoC questionnaires were analyzed in two ways. First, in order to create an SoC profile, the overall SoC, obtained by integrating the individual teacher data and calculating the mean score for each SoC stage, was analyzed. The overall trend of the study group was then compared with the profile of non-users. In this way, the overall SoC allows us to understand the status of the current program in comparison with the typical pattern of non-users.
Second, we analyzed the individual SoC, i.e., the number of individual teachers who scored highest at each SoC stage. This can be used to determine the distribution of individual SoC, to perform statistical analyses on teacher variables, and to examine the correlation between SoC and LoU. According to the peak stage score interpretation, the stage with the highest relative intensity score among the seven stages for each teacher was considered to be that teacher’s SoC toward descriptive assessment. If two or more stages had identical highest scores, the higher stage was taken as the teacher’s SoC, in accordance with the analysis methods used in prior studies. A frequency analysis was then performed for each SoC stage. In addition, a χ2 test was performed to find any differences in SoC based on teacher demographics, such as sex, training experience on descriptive assessment, career in education, academic degree, number of classrooms, and workplace.
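A rough sketch of this peak-stage assignment and the group comparison is shown below; the data layout and column names are hypothetical, ties are broken toward the higher stage as described above, and SciPy’s chi-square test of independence stands in for the χ2 analysis.

```python
import pandas as pd
from scipy.stats import chi2_contingency

STAGE_COLS = ["0", "1", "2", "3", "4", "5", "6"]   # relative-intensity columns

def peak_stage(row: pd.Series) -> int:
    """Return the stage (0-6) with the highest relative intensity;
    if several stages tie, keep the highest-numbered stage."""
    best = row[STAGE_COLS].max()
    return max(int(s) for s in STAGE_COLS if row[s] == best)

# Hypothetical usage: one row per teacher with percentile scores plus demographics.
# df = pd.read_csv("soc_percentiles.csv")
# df["peak"] = df.apply(peak_stage, axis=1)
# table = pd.crosstab(df["training_experience"], df["peak"])  # e.g. Yes/No vs. stage
# chi2, p, dof, _ = chi2_contingency(table)
# print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```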
LoU analysis: To analyze science teachers’ LoU of descriptive assessment, the frequency and percentage (%) were obtained for each level. Then, for the teachers who responded that they use descriptive assessment (excluding Level 0, which indicates no use), a χ2 test was performed to analyze whether LoU differed with teachers’ demographic variables, such as sex, training experience on descriptive assessment, career in education, academic degree, number of classrooms, and workplace.
Dependence of LoU on SoC: To examine the relationship between science teachers’ SoC toward and LoU of descriptive assessment, each LoU was analyzed according to teachers’ SoC, and a χ2 test was performed to determine whether the result was statistically significant. Because the SoC and LoU variables are ordinal, the Spearman correlation coefficient was calculated to analyze the correlation between the two variables.
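The sketch below illustrates this last step with SciPy’s spearmanr; the SoC and LoU codings are invented values for demonstration only.

```python
from scipy.stats import spearmanr

# Invented ordinal codings, one pair per teacher:
# SoC stage 0-6 and LoU level coded 0-7 (0 = non-use ... 7 = renewal).
soc = [1, 0, 3, 4, 1, 2, 6, 4, 1, 0]
lou = [3, 3, 4, 5, 3, 4, 6, 5, 4, 3]

rho, p_value = spearmanr(soc, lou)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")
```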
RESULTS AND DISCUSSION
Science teachers’ overall SoC and individual SoC toward descriptive assessment
Science teachers’ overall SoC toward descriptive assessment is shown in Table 5, and the profile of science teachers’ SoC generated from this result is shown in Fig. 1. The dotted line in Fig. 1 corresponds to the typical non-user SoC profile derived from the progressive wave of SoC during educational innovation proposed by Hall and Hord.29 The solid line indicates the SoC profile of the science teachers in this study.
Table 5. Teachers’ average relative intensity
Figure 1. Profile of concerns.
As shown in Table 5, the relative intensity of science teachers’ SoC toward descriptive assessment was highest at 84% in Stage 0 (unconcerned) and Stage 1 (informational concerns), followed by Stage 2 (personal concerns, 80%), Stage 6 (refocusing, 73%), Stage 3 (management concerns, 60%), and Stage 5 (collaboration concerns, 55%). Stage 4 (consequence concerns) showed the lowest relative intensity at 48%. The most important step in interpreting relative intensity and the SoC profile is to identify the highest and lowest values.29 Based on this, we focused on analyzing the overall trend rather than the numbers from each stage. When the teachers’ SoC profile toward descriptive assessment was compared with the typical profile of non-users, there were some similarities and differences, which we interpret as follows.
Comparing the profiles of non-users and the science teachers shows that both peak at Stages 0 and 1, which correspond to low levels of concern. This suggests that the teachers are either unconcerned with descriptive assessment regardless of its use or interested only in its characteristics and effects and the basic information required for its use. The overall SoC of the science teachers toward descriptive assessment is lacking, similar to that of a typical non-user. In addition, the relative intensity of Stage 2 (personal concerns) and Stage 3 (management concerns) was higher than that of Stages 4 and 5. This is a pattern typical of the early stages of adoption, caused by growing concerns about oneself and about managing the program when it is first introduced.29,30 The fact that the SoC show a pattern similar to that observed when descriptive assessment was initially introduced 10 years ago indicates that descriptive assessment has not been successfully established in schools.
Whereas the non-user profile tails off in relative intensity as the SoC stage increases, the teachers’ SoC toward descriptive assessment rises again at the tail, being higher in Stages 5 and 6 than in Stage 4. This suggests that, unlike non-users, who are not interested in revising or replacing the innovation, the teachers in this study are interested in alternatives or improvements to descriptive assessment. Such a pattern, in which unconcern and informational concerns are high and concerns about alternatives are also relatively high, is typical of those observed during the introduction of a new educational program.29 It also indicates the science teachers’ negative viewpoints toward descriptive assessment.
In conclusion, the science teachers expressed the attitudes of non-users: they lacked overall concern regarding descriptive assessment while being highly concerned about the changes the program might bring to themselves, owing to its features and the tasks accompanying its management. Meanwhile, concern about how descriptive assessment would change students was lacking, as was interest in sharing information and collaborating with colleagues and experts. However, the teachers appear to be more interested in modifying, complementing, and improving descriptive assessment for its application, or in other assessment tools.
The second method for interpreting peak stage scores is to analyze the individual SoC of the science teachers by determining the stage with the highest relative intensity in each teacher’s answers, taking it as that teacher’s SoC toward descriptive assessment, and analyzing the frequency of each SoC. Table 6 shows the distribution of individual science teachers’ SoC regarding descriptive assessment. The majority of teachers (74.8%) showed no concerns (Stage 0; N = 55, 39.9%) or informational concerns (Stage 1; N = 48, 34.8%). This indicates teachers’ low level of concern toward descriptive assessment, similar to the overall SoC shown in Table 5.
Table 6. Teachers’ stages of concern
Considering that the subjects of this study were taking lectures at the K University graduate school, this group is likely to be motivated to advance their skills and to be more receptive to novelty, which could lead to an overestimation of SoC. Despite this, the pattern of the initial stage of program introduction was observed, suggesting that the SoC of average science teachers may be even closer to that of non-users.
Differences in SoC based on teacher variables
Teachers’ SoC toward descriptive assessment showed no statistically significant differences according to sex, career in education, academic degree, number of classrooms, or workplace. However, SoC did differ significantly with training experience on descriptive assessment, as shown in Table 7 (p < 0.05). The majority of teachers’ SoC fell in Stages 0 and 1 regardless of training experience. A post-hoc test showed that teachers without training experience were more numerous in Stage 1 (p < 0.05), whereas teachers with training experience were more numerous in Stage 4 (p < 0.05). This suggests an association between training experience with descriptive assessment and teachers’ SoC.
Table 7. Results of the χ2 test on stages of concern (training experience)
Science teachers’ LoU of descriptive assessment
Analyzing the science teachers’ LoU of descriptive assessment provides information about how descriptive assessment is used in the field of education. Table 8 shows the frequency analysis of the science teachers’ LoU of descriptive assessment. Level III (mechanical use) had the highest frequency (N = 49, 35.5%), followed by Level IVA (routine use; N = 48, 34.8%), Level IVB (refinement; N = 18, 13.0%), and Level V (integration; N = 7, 5.1%). Interestingly, although Korean education programs are implemented at the national level under government enforcement, 16 teachers (11.6%) in this study fell into Levels 0–II and VI, which correspond to non-users and were excluded from the response options in previous studies. However, we cannot presume that these 16 science teachers rejected the government’s policy and did not adopt the program. Rather, when multiple teachers are in charge of one subject, they may divide the job of writing descriptive assessment items for exams, and hence these 16 teachers may not yet have performed any descriptive assessment or may have been preparing to use it. Alternatively, if these teachers did write descriptive assessment items, they may have assumed that the assessment was no different from previous assessment tools and thus wrote conventional exam questions without awareness of the new system of descriptive assessment.
Table 8. Teachers’ levels of use (N = 138)
Because mechanical and routine use accounted for the majority of cases, the descriptive assessment system appears to have been soundly established in schools. However, it has not reached the point of active implementation, in which teachers introduce changes or collaborate with other teachers to improve student learning.
Differences in LoU based on teacher variables
For the 122 of the 138 science teachers who responded that they use descriptive assessment (Levels III–V), differences in LoU were analyzed according to teacher demographics, such as sex, career in education, academic degree, training experience with descriptive assessment, number of classrooms, and workplace. As with SoC, the differences in teachers’ LoU according to sex, career in education, academic degree, number of classrooms, and workplace were statistically insignificant, whereas training experience produced significant differences in LoU (p < 0.001). A post-hoc test revealed that teachers without training experience were more numerous at Level III (p < 0.001), and teachers with training experience were more numerous at Level IVB (p < 0.001). This suggests that training can be an important variable in the implementation of a new education program when it is first introduced and that, in addition to SoC, training is also likely to affect LoU. In conclusion, this result shows the importance of teacher training when a new education program is introduced.
Table 9. Results of the χ2 test on levels of use (training experience)
Analysis of teachers’ LoU according to SoC
The results of analyzing science teachers’ LoU of descriptive assessment according to their SoC are shown in Table 10. A total of 122 teachers who were actually using descriptive assessment were analyzed; the 16 teachers who responded as non-users were excluded from the analysis. Analysis of the teachers’ LoU according to their SoC showed a positive correlation, with a Spearman correlation coefficient of 0.299 (p < 0.01).
Table 10. Spearman correlation results
This is in line with a study by Lee et al. showing a positive correlation between teachers’ SoC toward and LoU of descriptive assessment, and with a study by Jieun Lee and Jaehan Shin showing higher LoU among teachers with high SoC regarding the new educational program introduced in 2007.31,32 In other words, teachers with higher SoC tend to show slightly higher LoU, and vice versa.
Conclusion and proposal
In this study, we analyzed science teachers’ SoC toward and LoU of descriptive assessment, differences in SoC and LoU according to individual teacher characteristics, and LoU according to SoC. From this, we aimed to examine the status of descriptive assessment in schools more than 10 years after it was first introduced as an educational innovation. Additionally, this study provides information for the sponsors of this innovation to aid them in offering support suited to teachers’ SoC and LoU, thereby contributing to the effective use of descriptive assessment. The study results are summarized as follows.
First, analysis of science teachers’ relative intensity for each SoC stage using the SoC questionnaire (SoCQ) showed that most teachers were unconcerned with descriptive assessment. In the overall SoC, personal concerns (Stage 2) and management concerns (Stage 3) had a higher relative intensity than consequence (Stage 4) and collaboration (Stage 5) concerns, a pattern typical of the initial stage of a program. The only teacher characteristic that produced significant differences in SoC toward descriptive assessment was training experience (p < 0.05). These results suggest that the current level of concern regarding descriptive assessment is still equivalent to that when it was first introduced more than 10 years ago. Hence, methods must be found for increasing science teachers’ concern toward descriptive assessment. Since teacher training is a significant variable, alternatives that can increase teachers’ SoC through training should be considered.
Second, the analysis of LoU of descriptive assessment showed that teachers not using descriptive assessment (Levels 0, I, II, and VI) comprised 11.6% (N = 16) of the respondents. These teachers have not adopted descriptive assessment even though it is implemented at the national level. This may mean that they are not in charge of designing descriptive assessment items, or that they felt the descriptive assessment they were using was no different from previous assessment tools and returned to writing traditional exam questions. Among the 122 teachers who responded that they were using descriptive assessment, mechanical (N = 49, 35.5%) and routine (N = 48, 34.8%) use accounted for the majority. This suggests that descriptive assessment appears to have been stably established but has not reached the point of active implementation, in which teachers modify descriptive assessments or collaborate with other teachers to improve student learning.
When LoU was analyzed by teacher demographic variables, only training experience resulted in statistically significant differences in use (p < 0.001), as was the case with SoC (p < 0.05). Although descriptive assessment seems to be managed well on the surface, it has not transitioned to a higher level of use, suggesting that teachers still lack professionalism in descriptive assessment. Since training was a significant variable that can increase LoU, measures to raise LoU by utilizing training and promoting teachers’ professionalism in descriptive assessment are required.
Third, the analysis of the correlation between science teachers’ SoC toward and LoU of descriptive assessment showed a positive correlation, with a Spearman correlation coefficient of 0.299 (p < 0.01).
The conclusion of this study is as follows: in order to increase the LoU of descriptive assessment, we need to increase the SoC of the teachers who use it. To increase SoC, a well-designed teacher training program is required, since training on descriptive assessment was a significant variable associated with higher SoC.
We propose the following based on the results of this study.
First, customized training that accounts for teachers’ SoC and LoU is needed. When a new education program is introduced in schools, uniform training is typically provided. Such training does not consider teachers’ SoC toward or LoU of the program and may be effective only for certain groups. Because CBAM is a model developed for prescriptive diagnosis, training is required that accurately diagnoses the SoC and LoU of the teachers undergoing training and allows those at a given stage or level to advance to a higher one.
Second, a systematic training support system is needed for the growth of teachers. Current training does not take teachers’ levels into account, provide follow-up guidance, or consider teachers’ subsequent growth. True growth is enabled through the feedback teachers receive in school after being trained. Practical follow-up training that addresses the difficulties teachers encounter when applying what they have learned in school settings is needed. Furthermore, this should not be a one-time follow-up but should provide continuous feedback until teachers are equipped with assessment professionalism and can use the system at a truly high LoU. Administrative and financial support systems that enable such training must be developed.
Third, studies are needed to create a training program optimized for teachers’ levels. For instance, the science curriculum for lessons on acids and bases was designed hierarchically, with different levels for elementary, middle, and high schools, and this hierarchy was developed in response to numerous study results. In the same way, a systematic teacher training program needs to be developed that diagnoses teachers’ growth stages and provides suitable training to promote their advancement to the next stage. Therefore, sufficient studies should be conducted to develop a training program optimized for a variety of levels.
References
1. Paik, S. H.; Kim, S. K. J. Learner-Centered Curriculum and Instruction 2014, 14, 41.
2. Choi, S. W. The Effect of Method in an Essay Formative Testing Science Studies. Thesis, Ewha Womans University, 2001.
3. Hong, A. N. Effects of Science Writing Instruction using Visual Materials on High School Student’s Achievement and Awareness toward Science Essay Test. Thesis, Kwangwon University, 2013.
4. Yang, G. S. A Direction of Student Evaluation and Manufacture of Descriptive Assessment; Pusan Education Office: Pusan, 2006.
5. Do, J. W.; Oh, J. Y.; Kong, J. I.; Joo, M. J.; Kim, M. Y.; Lee, D. H.; Bak, M. G. Edu. Primary School Math. 2009, 12, 63.
6. Kim, L. Y.; Lee, M. H. J. Edu. Res. Math. 2013, 23, 533.
7. Kim, S. H. J. Korean Edu. 2013, 40, 479.
8. Sim, A. S. Study on the Recognition of Teachers and Students & Actual Conditions about Middle School Science Essay Test. Thesis, Korea National University of Education, 2008.
9. Choi, S. W. The Effect of Method in an Essay Formative Testing Science Studies. Thesis, Kwangwon University, 2011.
10. Kim, S. K.; Choi, E. J.; Paik, S. H. J. Korea Chem. Soc. 2015, 59, 445. https://doi.org/10.5012/jkcs.2015.59.5.445
11. Hord, S. M.; Rutherford, W. L.; Hall, G. E. Taking Charge of Change; Association for Supervision and Curriculum Development: Alexandria, VA, 1987.
12. Han, M. H. The Background and Reason of Curriculum; Education Science Press: Seoul, 1995.
13. Jun, Y. M. Analysis of Curriculum Level in Teaching-Learning Process. Thesis, Ewha Womans University, 1988.
14. Lee, H. W. A Study on Teacher’s Autonomy in Curriculum Implementation. Thesis, Ewha Womans University, 1987.
15. Kim, D. J. Kor. J. Ed. Res. 1992, 30, 167.
16. Fullan, M. The New Meaning of Educational Change; Columbia University Press: London, 2001.
17. Fuller, F. F. Amer. Edu. Res. J. 1969, 6, 207. https://doi.org/10.3102/00028312006002207
18. Ornstein, A. C.; Hunkins, F. P. Curriculum: Foundations, Principles, and Issues, 4th ed.; Pearson Education: Boston, 2004.
19. Chae, J. H.; Whang, S. K. Kor. Home Eco. Edu. Ass. 2003, 14, 37.
20. Kim, S. W. Kor. Soc. Fisheries and Marine Sci. Edu. 2014, 26, 81. https://doi.org/10.13000/JFMSE.2014.26.1.81
21. Choe, H. J.; Jeong, J. S.; Kim, S. H. J. Sci. Edu. 2015, 39, 28. https://doi.org/10.21796/jse.2015.39.1.28
22. Kim, S. W.; Lee, D. Y.; Kang, Y. Y. J. Edu. Eval. 2011, 24, 31.
23. Jang, J. H.; Kim, S. W.; Lee, S. B. J. Curri. Eval. 2015, 18, 105.
24. Bak, H. S. J. Learner-Centered Curriculum and Instruction 2013, 13, 417.
25. Kim, S. Y. J. Edu. Eval. 2007, 20, 1.
26. Lee, J. K.; Rim, N. R.; Whang, S. S. J. Sci. Edu. 2004, 29, 115.
27. Kim, Y. H.; Kim, Y. S. Bio. Edu. 2012, 40, 167. https://doi.org/10.15717/bioedu.2012.40.1.167
28. Hong, M. J.; Chung, H. S. J. Sci. Edu. 2006, 30, 65.
29. Hall, G. E.; Hord, S. M. Implementing Change: Patterns, Principles, and Potholes, 2nd ed.; Pearson: Boston, 2005.
30. Lee, I. S.; Park, M. J.; Chae, J. H. Kor. Home Eco. Edu. Ass. 2011, 235.
31. Lee, D. Y.; Lee, S. W.; Kim, S. W. J. Elementary Edu. 2011, 24, 235.
32. Lee, J. E.; Shin, J. H. Teacher Edu. Res. 2012, 51, 137. https://doi.org/10.15812/ter.51.1.201204.137
33. Kim, H. N. The Development and Issues of Three Diagnostic Dimensions of CBAM. Thesis, Pusan University, 2011.
34. Hall, G. E.; Dirksen, D. J.; George, A. A. The Use of Innovation Configuration Maps in Assessing Implementation: The Bridge between Development and Student Outcomes. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA, 2000.
35. Hord, S. M.; Stiegelbauer, S. M.; Hall, G. E.; George, A. A. Measuring Implementation in Schools: Innovation Configurations; SEDL: Austin, 2006.