Working Groups again provided opportunities to work together on a particular theme. Readers are encouraged to read the summaries included below; contacts for the group coordinators are included for further follow-up as required. Unfortunately, owing to the mode of working of WG1 – Design with technology, no summary is available at this time.
WG2 – Curriculum Working Group
Co-chairs Leslie Dietiker and Susan McKenney
The curriculum working group identified two key themes to discuss and broke out into sub-groups, each specialising in one theme. One theme concerned designs that support teachers’ use of curriculum materials. The other took Design-Based (Implementation) Research (DB(I)R) as its starting point.
The group that focused on supporting teachers’ curricular use considered the questions, “How do we design so as to support teachers’ use (and knowledge) of curriculum materials?” and, “In what ways can technology support teachers’ use of curriculum?” In so doing, they drew on Davis and Krajcik’s (2005) framework of features of educative materials (helping teachers to anticipate student thinking; supporting teachers’ learning of subject matter; helping teachers consider ways to relate units of content throughout the year; making visible the curriculum developer’s pedagogical judgment; promoting teachers’ pedagogical design capacity). The group discussed these ideas in general and shared how the different curriculum projects represented in the group designed these features (e.g., Kosima (Germany); IM (Israel); Connected Mathematics Project (USA); College Preparatory Mathematics (USA); Cambridge Mathematics Education Project (UK); Nature Life and Technology (Netherlands); Multimedia Educative Curriculum Research Project (USA)). The result of this group time was a 10-page Google Doc that highlights ways in which each of these curriculum projects has designed teacher educative features based on the Davis & Krajcik (2005) framework. This document may serve as the basis for an Educational Designer article. Participants in the supporting teachers theme group: Leslie Dietiker, Phil Daro, Betty Phillips, Alex Friedlander, Sara Walkup, Lynne McClure, Berenice Michels, Raymond Johnson and Larissa Zwetzschler.
The DB(I)R group focused on the kinds of theoretical inputs sought by curriculum designers, such as implementation theories, theories about over- versus under-designing, and theories about infrastructuring (What travels? What are the currents? How do they flow? For more information on infrastructuring, see the keynote by Bill Penuel). They identified a need for a curated set of theories that could inform design, and considered which kinds of curriculum designers want which kinds of theories (especially related to concerns of scale). A matrix was developed that articulated different designer types within a design-marketing-scaleup organization (content designers, media designers, technologists, learning scientists, assessment experts), and relevant theories for each type were discussed. To learn more about the kinds of designers that might use a curated set of theories, it was proposed to start by understanding the user group. This could be done by conducting a survey and possibly some focus groups. The activity could start with ISDDE members and classify designer types, or start with an existing set of data (such as the Design Dimensions portfolio review set of designers). A subsequent step would be to identify the theories (or craft wisdom) already in use, as well as the kinds of theories craved. This could be investigated by asking (e.g., a questionnaire, a lunch meeting at a design house, interviews) or by observing (e.g., a live or lab study of questions raised while tackling design challenges). These ideas may be used to inspire more formal investigation to understand and support curriculum designers. Participants in the DB(I)R strand: Susan McKenney, Max Gerick, Debra Bernstein, Maarten Pieters, Gillian Puttick, Frans van Galen, Bill Penuel and Dan Zales.
WG3 – Summative Assessment Working Group – Why do teachers fail to make productive instructional use of the outcomes of summative assessments?
Working Group members: Jim Minstrell, Max Stephens (who kindly prepared this summary), Chris Schunn, David Webb, Rita Crust, Elizabeth Coyner, Mac Cannady, Phil Daro
There are many reasons why teachers fail to make productive use of the results of summative assessments. For externally created tests, some reasons are: a disconnect between the design intent and what was actually delivered; not knowing how to interpret results; poorly designed items; not knowing how to process the data. Sometimes the detailed information arrives too late to be useful. In cases where teachers themselves have prepared tests, there can be other obstacles, such as: “That content is done, I have to move on to the next topic”; strict adherence to pacing guides; lack of time to process the details; not seeing conceptual connections across content areas; lack of training in assessment design; different teachers having different perspectives on teaching and learning.
Reasons to be hopeful
But are there possible cases where teachers just might be able to do something productive? Opportunities can be created for comparison of outcomes across teachers: to verify similar scoring; to discuss how teaching approaches may have affected results; to compare expectations to actual results; to set goals to consider alternative approaches; to seek help; to analyze evident weaknesses that may influence learning in the next unit; to deepen understanding of former content and to link it with new content (especially with more use of peer and whole-class discussion); to embark on collaborative scoring; and to engage in more detailed interpreted reporting, including that generated for other stakeholders (students, parents, administrators, and other teachers from the same and different disciplines).
Viewing students as a potential audience of teacher reports, there were some practical suggestions for engaging students in responding to and making sense of summative assessments:
- Get students to build from feedback received; for example, a student agrees to write an evidence-based argument for the consensus understanding from the class.
- One or more student-peers then receive that written argument and give feedback on the argument and the evidence used as to whether it makes sense and convinces them.
- The original student then fine tunes his/her argument and shares it with the teacher and the class for some sort of credit.
Viewing parents as a potential audience of teacher reports, it is helpful to move attention from scores to the content of what has been learned, and to provide links to resources that explain content and suggest additional home learning opportunities. This helps shift parents’ attention from thinking first of remediation towards a deeper appreciation of where a particular course is heading.
Viewing other teachers as a potential audience of teacher reports, it is helpful to report results to other teachers of the same students who teach overlapping content (e.g., science->math, science->ELA); to identify common difficulties on shared skills, with examples showing difficult items and errors; to report successes by some students, with sample items and demonstrated solutions; and to argue for a change in, or greater coordination of, teaching strategies to address shared challenges.
In reporting to other teachers, be careful to avoid paper overload; this could be prevented by using a matrix of who sends reports to whom. Take care, too, to gain the support of school leaders and administrators and to confirm their expectations of the process.
Whole school context
None of these strategies can succeed without the active support of school leaders and administrators. There is nothing less effective than a small group of individuals who are trying to work against the prevailing culture of a school with little support from above.
Administrators are crucially responsible for: formulating and maintaining a school-wide policy on teaching and learning; inviting teachers to look beyond the numbers/scores and to see opportunities for action and improvement in teaching; providing time and space for teachers in the first instance to respond to the results of summative assessments and to indicate how teaching may need to change; inviting teachers to think about how students can be helped to take more responsibility, and challenging students to meet those expectations; identifying where new teachers and others may need support to make better interpretations; and integrating the results of these discussions into the school’s ongoing program of teacher professional development, providing specific training where necessary.
Supporting teachers to move ahead
What information do teachers need to make productive instructional decisions after receiving assessments of student performance? In the first instance, they need a brief overview highlighting correct and incorrect responses and identifying minor errors, together with easy access to the results using available software. They need help to impose some clustering of problem types (by concept, within reasoning level). Where scoring rubrics have been applied, teachers need to know what the minimum level is, what falls below this level, what is regarded as proficient, and what is clearly more advanced. To do this, they may need some brief qualitative analysis of students’ performances, including students’ work samples, especially those illustrating “proficient” and “highly proficient” responses; they may also need training in using suitable software to execute more advanced analytics. The instructional benefits of using these analytics should be evident to teachers, individually and collaboratively, and also feasible in terms of teachers’ time. In the longer term, teachers need to be confident that time invested in working through the results of summative assessments will bring about improvement in teaching and a shift in the quality of students’ work.