Sunday, March 28, 2010
And the final 'raw' cross-session analysis artifacts
Cross-session Shaping comparisons (table version) (Compendium version)
Cross-session Framing comparisons (table version) (Compendium version)
CEU comparisons
Some 'raw' comparisons of the "CEU" (Coherence, Engagement, and Usefulness) analyses across sessions are here.
Next and final step for the comparative analysis -- create a summary of the cross-session Framing analysis.
Saturday, March 27, 2010
Comparative framing analysis
Compendium version here.
One more piece of cross-session analysis to go -- CEU comparisons. I hope to finish those tomorrow, then it's on to start writing up the first draft of the analysis chapter.
Saturday, March 06, 2010
Preliminary results from cross-session 'shaping' analysis
I've finished the first round of comparative analysis (earlier steps described here). This morning I translated the mapping of each session along the 29 shaping dimensions into Excel. It's not a perfect or exhaustive study (that would take another set of years) but it did yield some interesting comparisons.
In the mapping linked above, I'd ranked or grouped each of the 8 sessions for each of the dimensions, adding rationale for each ranking and sometimes example artifacts, such as screenshots, photos, or transcript quotes.
The Excel work was originally to see whether any patterns emerged when comparing across the dimensions. I think some have; at the least, it gives some compelling visuals.
In the spreadsheet, for each dimension, I arrayed the sessions left to right, assigning a unique color to each (blue for Ames Group 1 (AG1), orange for the Hab Crew session (Hab), etc.). Where a dimension was a simpler grouping (such as dimension A1, which just shows the practitioner's choice of method), I indicated that rather than a high-to-low array. There is much more information about each dimension and ranking method in the maps.
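Conceptually, the spreadsheet is just a dimensions-by-sessions rank matrix, with each row arrayed left to right by rank. Here's a minimal sketch of that arraying step in Python/pandas; the ranks below are invented placeholders covering only three dimensions and three sessions, not the study data:

```python
import pandas as pd

# Invented ranks (1 = highest) for three dimensions and three sessions;
# the real spreadsheet has 29 dimensions and 8 sessions.
ranks = pd.DataFrame(
    {
        "AG1": [2, 1, 3],
        "Hab": [1, 2, 1],
        "RST": [3, 3, 2],
    },
    index=["A5 method adherence", "B8 willingness to intervene", "D25 finishedness"],
)

# For each dimension, array the sessions from highest to lowest rank,
# mirroring the left-to-right layout used in the spreadsheet.
for dim, row in ranks.iterrows():
    print(f"{dim}: {' > '.join(row.sort_values().index)}")
```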
Group A comprises 5 dimensions concerning a session's pre-existing plan and other pre-session factors, such as choice of method and approach. The results look like this:
The 9 Group B dimensions cover aspects of practitioner interaction with participants:
Group C comprises 5 dimensions showing various aspects of the sessions as meetings and discussions:
And Group D comprises 8 dimensions that focus more specifically on the shaping of the representation:
What was most interesting about moving the analysis into Excel was being better able to compare the dimensions themselves and to think about what they might show as a whole. It's still early going, but there do seem to be some interesting patterns from a participatory shaping point of view.
For example, it occurred to me that some of the dimensions that came out of the data could be grouped into a kind of index of 'goodness', at least from a shaping point of view. I looked over all the dimensions and identified these 13 as candidates for such an index:
- A5: Degree of practitioner adherence to the intended method during the session
- A6: Participant adherence/faithfulness to the intended plan
- B8: Practitioner willingness to intervene – frequency and depth of intervention
- B10: Degree of practitioner-asked clarifying questions to participant input
- B11: Degree to which practitioners requested validation of changes to representation
- B13: Degree of intervention to get participants to look at the representation
- B14: Degree of collaboration between multiple practitioners (if applicable)
- B15: Degree of collaboration/co-construction between practitioners and participants
- C17: How “good”/successful was the session?
- D22: How much attention to textual refinement of shaping
- D23: How much attention to visual/spatial refinement of shaping
- D24: How much attention to hypertextual refinement of shaping
- D25: Degree of ‘finishedness’ of the artifacts
Doing this yields an overall ranking of the sessions along this "shaping index". As you can see in this table, not surprisingly the Hab Crew and Remote Science team sessions are at the top (since they had the most experienced practitioners and were in situ sessions drawn from actual projects). But the Rutgers and Ames sessions also array along this meta-dimension. I then derived a "software proficiency rank" and a "facilitation proficiency rank" from the questionnaire data to compare to the shaping index, and (even to my statistically untutored eye) there appear to be some good correlations, especially with facilitation proficiency.
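Mechanically, the index boils down to averaging each session's per-dimension ranks, re-ranking the result, and then checking rank correlation against the proficiency ranks. A minimal sketch of that derivation in Python/pandas; every number (and the "RST" abbreviation) below is an invented placeholder, not the actual questionnaire or shaping data:

```python
import pandas as pd
from scipy.stats import spearmanr

# Invented per-dimension ranks (1 = highest) for four of the sessions;
# the actual index uses the 13 dimensions listed above across all 8 sessions.
ranks = pd.DataFrame(
    {
        "Hab": [1, 1, 2, 1],
        "RST": [2, 3, 1, 2],
        "AG1": [3, 2, 3, 3],
        "AG2": [4, 4, 4, 4],
    },
    index=["A5", "B8", "C17", "D25"],
)

# Composite shaping index: mean rank per session, re-ranked so 1 = best.
shaping_index = ranks.mean(axis=0).rank()

# Hypothetical facilitation-proficiency ranks from the questionnaire data.
facilitation = pd.Series({"Hab": 1, "RST": 2, "AG1": 4, "AG2": 3})

# Spearman rank correlation between the two rankings.
rho, p = spearmanr(shaping_index, facilitation[shaping_index.index])
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```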
Here are graphical views of the (self-reported) facilitation proficiency and software proficiency comparison scores:
This made me think there are more interesting comparisons to be derived from the other composite measures in the questionnaire data. I could probably spend months going through all the possibilities:
However, I don't want to drown in all this data or let it take me too far off course of really getting at the aesthetics, ethics, and experience aspects of these sessions. Next I will be looking at the sensemaking moment analyses from a comparative perspective, and see what that yields.
Thursday, March 04, 2010
Comparing the questionnaire data across sessions
I put up some charts that show how the practitioners in each session group compare along skill/experience lines.
This is not the main cross-session analysis I'm doing, which is more qualitative in nature, but it will help in the overall comparisons.
For example, it's easy to see that Ames Group 2, which was in many ways the least successful of the 8 groups at getting participants to engage with the representation, also had the lowest self-assigned scores on the facilitation measures (slides 2, 3, 4, 5, 6, 7, and 11), although they had a relatively high level of skill/experience with software and hypermedia (slides 1, 9, 10, 11, 12).
One question -- do the (manually added) vertical lines separating the sessions on slides 6-12 help? Or do they clutter up the display?
There are probably better Excel ways to show this, but I'll worry about that later.
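For what it's worth, if this ever moves out of Excel, the session separators are easy to draw programmatically rather than by hand. A minimal matplotlib sketch; the session groups and scores here are invented placeholders, not the questionnaire data:

```python
import matplotlib.pyplot as plt

# Invented per-practitioner scores grouped by session; the real values
# would come from the questionnaire data.
sessions = {"AG1": [3, 4], "AG2": [2, 2], "Hab": [5, 4], "RST": [4, 5]}

scores = [s for vals in sessions.values() for s in vals]
labels = [name for name, vals in sessions.items() for _ in vals]

fig, ax = plt.subplots()
ax.bar(range(len(scores)), scores)
ax.set_xticks(range(len(scores)))
ax.set_xticklabels(labels, rotation=45)

# Draw a separator at each session boundary instead of adding lines by hand.
boundary = 0
for vals in list(sessions.values())[:-1]:
    boundary += len(vals)
    ax.axvline(boundary - 0.5, color="grey", linestyle="--", linewidth=0.8)

ax.set_ylabel("Self-reported facilitation score")
plt.tight_layout()
plt.show()
```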