In the mapping linked above, I'd ranked or grouped each of the 8 sessions for each of the dimensions, adding rationale for each ranking and sometimes example artifacts, such as screenshots, photos, or transcript quotes.
The original purpose of the Excel work was to see whether any patterns emerged by comparing across the dimensions. I think some do, and at the very least it yields some compelling visuals.
In the spreadsheet, for each dimension, I arrayed the sessions left to right, assigning a unique color to each (blue for Ames Group 1 (AG1), orange for the Hab Crew session (Hab), etc.). Where the dimensions were simpler groupings (such as dimension A1, which just shows the practitioner choice of method), I show the grouping rather than a high-to-low array. There is much more information and description of each dimension and ranking method in the maps.
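For readers who like to see the structure spelled out, here is a rough sketch, in Python rather than Excel, of how the per-dimension data is organized. The dimension labels, the groupings, and any session codes or colors beyond AG1/blue and Hab/orange are placeholders for illustration, not my actual rankings.

```python
# Rough sketch of the per-dimension structure (placeholder values, not real data).
# Ranked dimensions array the sessions high-to-low; grouping dimensions (like A1,
# practitioner choice of method) simply bucket the sessions by category.
dimension_data = {
    # high-to-low array: leftmost session ranked highest on this dimension
    "A5 method adherence": ["Hab", "RST", "AG1", "RU1"],
    # simple grouping: sessions bucketed by the practitioner's chosen method
    "A1 method choice": {"method X": ["Hab", "RST"], "method Y": ["AG1", "RU1"]},
}

# one unique color per session, mirroring the spreadsheet's color coding
session_colors = {"AG1": "blue", "Hab": "orange", "RST": "green", "RU1": "purple"}
```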
Group A comprises 5 dimensions concerning aspects of a session's pre-existing plan and other pre-session factors, such as choice of method and approach. The results look like this:
The 9 Group B dimensions cover aspects of practitioner interaction with participants:
Group C comprises 5 dimensions showing various aspects of the sessions as meetings and discussions:
And Group D comprises 8 dimensions that focus more specifically on the shaping of the representation:
What was most interesting about moving the analysis into Excel was being better able to compare the dimensions themselves and think about what they might be showing as a whole. It's still early going, but there do seem to be some interesting patterns from a participatory shaping point of view.
For example, it occurred to me that some of the dimensions that emerged from the data could be grouped together and used as a kind of index of 'goodness', at least from a shaping point of view. I looked over all the dimensions and identified these 13 that could contribute to such an index:
- A5: Degree of practitioner adherence to the intended method during the session
- A6: Participant adherence/faithfulness to the intended plan
- B8: Practitioner willingness to intervene – frequency and depth of intervention
- B10: Degree of practitioner-asked clarifying questions to participant input
- B11: Degree to which practitioners requested validation of changes to the representation
- B13: Degree of intervention to get participants to look at the representation
- B14: Degree of collaboration between multiple practitioners (if applicable)
- B15: Degree of collaboration/co-construction between practitioners and participants
- C17: How “good”/successful was the session?
- D22: How much attention to textual refinement of shaping
- D23: How much attention to visual/spatial refinement of shaping
- D24: How much attention to hypertextual refinement of shaping
- D25: Degree of ‘finishedness’ of the artifacts
By doing this, an overall ranking of sessions along this "shaping index" emerges. As you can see in this table, not surprisingly, the Hab Crew and Remote Science Team sessions are at the top (since they had the most experienced practitioners and were in situ sessions drawn from actual projects). But the Rutgers and Ames sessions also array along this meta-dimension. I then derived a "software proficiency rank" and a "facilitation proficiency rank" from the questionnaire data to compare to the shaping index, and (even to my statistically untutored eye) there appear to be some good correlations, especially with facilitation proficiency.
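For concreteness, here is a minimal sketch of the arithmetic involved, in Python with invented ranks and invented session codes for a subset of the sessions, not my actual data. The composite is simply a mean of per-dimension ranks, and a Spearman rank correlation (via scipy) is one plausible way to check how the index relates to a proficiency rank:

```python
from scipy.stats import spearmanr

# Hypothetical per-dimension ranks (1 = highest) for a few sessions, one entry per
# contributing dimension (A5, A6, B8, ..., D25). These values are illustrative only.
dimension_ranks = {
    "Hab": [1, 2, 1, 1, 2, 1, 1, 2, 1, 1, 2, 1, 1],
    "RST": [2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2],
    "RU1": [3, 4, 3, 4, 3, 3, 4, 3, 3, 4, 3, 3, 4],
    "AG1": [4, 3, 4, 3, 4, 4, 3, 4, 4, 3, 4, 4, 3],
}

# Composite "shaping index": mean rank across the 13 dimensions (lower = better).
shaping_index = {s: sum(r) / len(r) for s, r in dimension_ranks.items()}

# Hypothetical facilitation-proficiency ranks derived from the questionnaires.
facilitation_rank = {"Hab": 1, "RST": 2, "RU1": 4, "AG1": 3}

# Spearman rank correlation between the shaping index and facilitation proficiency.
sessions = sorted(shaping_index)
rho, p = spearmanr([shaping_index[s] for s in sessions],
                   [facilitation_rank[s] for s in sessions])
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```

With all 8 sessions and the actual questionnaire-derived ranks, the mechanics would be the same.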
Here are graphical views of the (self-reported) facilitation proficiency and software proficiency comparison scores:
This made me think that more interesting comparisons could be derived from other composites of the questionnaire data. I could probably spend months going through all the possibilities:
However, I don't want to drown in all this data or let it take me too far off course from really getting at the aesthetics, ethics, and experience aspects of these sessions. Next I will look at the sensemaking moment analyses from a comparative perspective and see what that yields.