dylan arena

co-founder and chief learning scientist at kidaptive:

kidaptive site

we’re good at doing things we’re interested in, and we’re good at forgetting things we’re not interested in

much greater reach with cross-team

assessment used to help you decide what to do next

perhaps rather than using that mapping to design a curriculum.. using it (curiosity) as the curriculum.

Dylan’s work is the focus for this connectedlearning hangout: http://connectedlearning.tv/dylan-arena-measuring-what-matters-most-choice-based-assessments-digital-age

http://mitpress.mit.edu/books/measuring-what-matters-most

free download of the book – measuring what matters most – co-authored with Dan Schwartz:

http://mitpress.mit.edu/sites/default/files/titles/free_download/9780262518376_Measuring_What_Matters_Most.pdf

At the end of the day, whether a student has “good knowledge” will be crucial only to the degree that knowledge leads to good choices, so why not measure choices directly in educational assessments? In the meantime, scientific efforts can continue to see if a rational, knowledge-based account can provide a sufficient explanation of choice, which we highly doubt given the centrality of emotion in choice (see Damasio 1994).

There is instrumental value to using knowledge-based assessments. Measuring what a student knows can help a great deal in deciding what to do next during instruction.

unless – the issue is no longer about instruction… right? what if the focus is simply choice and facilitating curiosity?

as much as we love knowledge, and as much as we wish we had more of it, knowledge is a mismatch for the practical aims of assessment. Assessment is about shaping the direction of society and its members.

As we build our leverage to pry assessment from the grasp of knowledge, it may be useful to recognize that knowledge has not always been the focus of assessment. Assessment in the United States has had many purposes, ranging from student tracking to individualized instruction to program evaluation to holding schools accountable (Haertel and Herman 2005). The purposes and methods of assessment can change, and we propose that now is a good time to change again, given the advent of new technologies and methods. Early on, assessment attempted to measure intelligence. Alfred Binet’s original goal in developing the first intelligence test was to help teachers objectively identify children who needed special considerations. This approach failed as a model of assessment across education, in part because it confused purportedly unchangeable individual differences with contextual sources of group variability, including culture and socioeconomic status. Subsequent behaviorist approaches measured performance. These approaches emphasized the decomposition and mastery of observable skills, and were an advance in that they focused on what people learned rather than traits that predetermined learning. The behavioral assessments, however, were training oriented and too narrow to help evaluate whether students were being prepared for life beyond the specific tasks. More recently, cognitive approaches have concentrated on assessing knowledge.

Knowledge assessments are an improvement over training and intelligence tests because they are more flexible. Knowledge assessments assume that adding more knowledge is possible, unlike intelligence tests. And unlike behaviorist assessments, knowledge-based ones can also examine sources of learner confusion and do not require performance on a narrowly described set of trained tasks. Despite the relative value of knowledge-based assessments, the construct of knowledge has limitations that have hampered further advances. For example, knowledge is often conceptualized as a sort of “mental text,” so instructional metaphors frequently suggest (incorrectly) that teaching is something like transmitting the text from the mind of the instructor into the mind of the learner, much like the monks transcribed letters from one volume to the next. With choice as the central construct, it becomes harder to develop simplistic and potentially ineffective metaphors like this one.

The language of knowledge theories and that of social theories do not readily make contact. States provide reports of student performance broken out by school. The reports indicate how the students at the school are scoring on average. If your child’s school is doing poorly, you want to take action. Unfortunately, the knowledge tests are based on what students have in their head. So you have discovered that your child’s school is not doing a great job, but there is nothing in the assessment that suggests what you might do on the social plane to help improve the state of affairs. This is because an assessment of knowledge is not an assessment of processes, and what you care about in classrooms is the process, not whether there is “knowledge in the air.” We return to this point in chapter 9, on new types of process assessments.

In some cases, scholars do use knowledge, or the lack thereof, to help explain the choices that people make (e.g., Tversky and Kahneman 1974), but knowledge is only properly a means to an end. The goal in the social sciences is to account for human behavior, which is made manifest in choices. Treating knowledge as the primary outcome has left assessment as an isolated minority.

If the PISA used choice as its main construct, then context could not be an afterthought in an assessment item, because choice does not exist independently of the decision-making context.

indeed – beyond spinach.. no?

Consider the case of low-achieving students. A knowledge assessment points out that they do not have strong knowledge. But ideally, an assessment would help predict what choices would lead to better learning, and what contexts would help promote those choices.

On a posttest of learning, the teaching condition outperformed the self condition. When separating students based on their prior achievement, low-achieving students in the teaching condition performed as well as the high-achieving students in the self condition, and they did much better than the low-achieving students in the self condition. It is hard to explain these differences by appealing to differences in the low-achieving students’ knowledge in the two conditions, either beforehand or afterward. After all, they were low knowledge to start with. Instead, the key assessment involved examining students’ choices of whether and how to learn. The logs from their use of the software revealed what happened. The low-achieving teaching students did well because they chose to spend more time working on their maps; they read the relevant resources more and edited their maps’ links and nodes in accordance. In contrast, the low-achieving students in the self condition spent more of their time chatting and playing the available game.

what?

What these scenarios actually demonstrate is that SPS assessments do not tell the whole story, though they do capture the public mind for what it means to have learned. Imagine, instead, what would happen if both groups of students were given access to learning resources during their quizzes.

The company decides whom to hire by using a paper-and-pencil test of basic Excel operations that just happen to have been covered in Tom’s course. Tom would probably do better on this SPS test. We suspect, however, that Sig would be more likely to serve the company well in the long run. His deeper understanding of spreadsheet structure and capacity to learn independently will enable him to learn and adapt on the job—for example, when the company switches to a new software package or when the employees are asked to learn advanced features of Excel on their own. The failure of SPS tests is one reason that all employers would prefer to hire people for a trial period to see if the employee adapts to and learns in their local context.

huge:

Edwin Ghiselli (1966) reported that aptitude tests only predict 9 percent of job performance immediately after training, and subsequently drop to 5 percent after time on the job. This is because people learn on the job, and sequestered assessments like aptitude tests are not designed to predict people’s future learning.

Bransford and Schwartz (1999), who were concerned that theories of transfer were only focusing on the application of knowledge rather than learning, proposed a dynamic assessment format they termed preparation for future learning (PFL). In a PFL assessment, there are resources for learning during the test, and the question is whether students learn from them.

How does a mastery emphasis interact with the goal of seeing whether students are prepared for future learning? One assumption appears to be that if we want to assess someone’s preparation for future learning, we should see if they have mastered the past. This seems like the rationale behind the Scholastic Assessment Test (SAT). The test tries to predict college success by seeing if students have mastered the mathematics, reading, and writing from earlier lessons.

The idea of looking at prior mastery to predict future learning is reasonable, but there is a catch. Tests of mastery presuppose knowledge in a mature form, implying that anything short of mastery does not count as knowledge and cannot be assessed.

and – it’s only focused on prescribed content to be mastered.. huge difference if that content is per choice.. per curiosity.. no?

Yet this is not true. First, people have earlier forms of understanding that do not comprise full-blown, declarative or procedural knowledge but that are nevertheless crucial for future learning. Michael Polanyi (1966) referred to this as tacit knowledge. Harry Broudy (1977) described it as knowing with, as distinguished from knowing that and knowing how. Second, it is possible to assess these earlier forms of understanding, if we create assessments that allow learning during the test.

huge:

Assessments seem to be built on the presupposition that people will never need to learn anything new after the test, because current assessments miss so many aspects of what it means to be prepared for future learning. These frozen-moment assessments have influenced what people think counts as useful learning, which then shows up in curricula, standards, instructional technologies, and people’s pursuits.

interesting…

____________

also interesting in this video – Ricky Van Veen notes that we are shifting from experiencing things and then documenting them, to documenting things as proof of an experience/status/et al…
