This is part 3, the last of a series. Back to part 2.
In this post I look at the ways Small applies his work to self- and peer-assessment of learners, and how that might be useful for my research.
The practical application of the theoretical framework and analytical methods described in the article is in techniques for assessing learners that respect the continuum from aesthetic and experiential to efferent, without necessarily privileging one end of the spectrum. Small states that his approach "enables the products of enquiry-based learning and creative activity to be assessed objectively, but by criteria sympathetic to the experience they represent, rather than simply by arbitrary, external standards of accuracy, correctness or comparison." (p. 269)
This feels very much in accordance with where my research is heading: a methodology for self- and peer-assessment of participatory representational practice, based on a set of values that foregrounds aesthetic experience and engagement (especially since these are usually passed over or marginalized), but that doesn't leave out the "practical", "shared" and "constructive" aspects of working with groups in some sort of applied setting.
The main drive in Small's paper is developing criteria for learning assessment. Essentially, he proposes "criteria for the criteria": a set of requirements that any specific assessment criteria should satisfy. He looks for assessment criteria that, first, recognize and uphold the personal drive and responsibility of the learner (p. 261). For "learner" you can certainly substitute "practitioner." Second, the criteria should reflect the dynamic nature of the "journey" a learner experiences, between "self and text", "inner and outer", etc. For a practitioner, each session is in effect such a "journey", moving in varying ways between the aesthetic and efferent poles.
Third, he wants to give "safe passage" to the personal knowledge being created. In the analyses I've done to date, this part is perhaps less applicable, since practitioners in sessions are not there to develop their own personal knowledge, though they do draw on and perhaps build up such knowledge. However, this could be very applicable to the kind of self/peer assessment exercise that I took a first step toward at my IFVP session.
Finally, the criteria should respect the (quoting Fish 1980) "authority of the interpretive community". That part is also probably not as applicable to my research, since the kind of practice I'm looking at isn't embedded in one particular institution or context. That is, there is not yet an existing "interpretive community", though I sure wish there were one. Helping to create such a community might be a possible future outcome.
In my research, I've come up with dimensions and criteria and observed them in practice, showing how they are exemplified in practitioner moves and choices. Is the next step to have these be self- and peer-assessed? As Small argues, engagement and commitment can only really be "accessed" through "subjective" self-evaluation (p. 268). Does this also mean that, for example, CEU can't really be assessed (as distinct from "observed") except by either participants or practitioners themselves?
This conundrum does seem to keep recurring in some discussions of what I've done to date. What good does it do for an external observer (as I've been doing) to make assessments? Doesn't the whole point then hinge on how good the external observer is at making such assessments, rather than the value of the assessments, dimensions, and criteria themselves?
I don't want that to be the whole point. Does this argue that I should stop doing my own analyses and just move right to having people do them for themselves? Does it kind of negate some or much of the value of doing (and finishing!) the video analyses? Or can I legitimately say that I did the foundational work of discovering the dimensions and applying/iterating them, and the next step/future work is having people self-assess using them?
To move beyond this conundrum, I need to have practitioners apply my constructs to their own work for themselves, as the IFVP session was a first step towards. I may try to do at least one more such session before the final submission of my thesis. I do think that there is value for practitioners themselves to engage in such assessment exercises, if well facilitated. As Small puts it, "In involving learners systematically in reflection upon what they have achieved, we can ask them to question their own purpose, consider their stance, trace the movement of their selective attention, address their limitations and review their strategies." (p. 268) This kind of work will surely lead to improved practice, especially in the area of better understanding how one's actions as a practitioner affect the people one works with and for.
Last of a series. Tim Small's original article was published in the Curriculum Journal, Volume 20, Issue 3, September 2009, pages 253-270.