Sunday, May 30, 2010

Unity of purpose, communication, and representation

Further on the idea of integral representations and the role of practitioners in making representations matter...


As I was working through writing up the "how good/successful was the session" question, it struck me that one way to characterize this is by considering how closely three dimensions were unified in each session: the purpose, the intended (as well as emergent) reasons or goals for the session; the communication, both verbal and non-verbal, the way the participants (as well as practitioners) interacted; and the representation, the visual artifact (Compendium maps in the cases I'm studying, but it could be any kind of visual or written representation, or even a purely verbal one, if there is some sense of a central focus for the session):



The best sessions see a fusion of the three dimensions. Purpose, communication, and representation become indistinguishable in practice; why we’re doing what we’re doing, how we interact with each other, and the visual artifact appear to be unified:



By contrast, in less successful sessions, the dimensions are disconnected from each other. For example, people could be talking about something more or less removed from the ostensible purpose of the session, while the representation itself is ignored or irrelevant:



Saturday, May 29, 2010

Integral representations

As I've been working through writing up the comparative analyses, one thing that's struck me is that a key differentiator of practice styles and expertise is how much (and in what ways) practitioners make the representations themselves matter to the participants and to the proceedings, as opposed to being (in varying ways) a sideshow, background, or decoration.

For example, one of the Shaping aspects in Category B (practitioner interaction with participants) is the "degree of intervention to get participants to look at the representation". It occurred to me that, while all of the studied practitioners used various physical and verbal means to do this, the style, purpose, and strength with which they did so varied greatly. Both of the 'expert' sessions (Hab Crew and Remote Science Team) did this frequently and in depth throughout the session, but also did it with a degree of naturalness. They did not have to use much special force or emphasis because, due in large part to their expertise, the representation was integral to the proceedings, embedded in how and why the group was working.

This doesn't come for free; it takes many factors to achieve. But the phrase integral representations (a term widely used in mathematics) seems to me to sum up much of what this research is aiming toward: how to make such representations integral to their participants and audience, how to make them matter, and in what ways.

Sunday, March 28, 2010

And the final 'raw' cross-session analysis artifacts


Cross-session Shaping comparisons (table version) (Compendium version)

Cross-session Framing comparisons (table version) (Compendium version)

CEU comparisons

Some 'raw' comparisons of the "CEU" (Coherence, Engagement, and Usefulness) analyses across sessions are here.

Next and final step for the comparative analysis -- create a summary of the cross-session Framing analysis.

Saturday, March 27, 2010

Comparative framing analysis

Compendium version here.

One more piece of cross-session analysis to go -- CEU comparisons. I hope to finish those tomorrow, then it's on to start writing up the first draft of the analysis chapter.

Monday, March 15, 2010

Comparative analysis of sensemaking moments

A fuller writeup later, but in case you would like to look:

In Compendium form

In summary tables

Saturday, March 06, 2010

Preliminary results from cross-session 'shaping' analysis

I've finished the first round of comparative analysis (earlier steps described here). This morning I translated the mapping of each session along the 29 shaping dimensions into Excel. It's not a perfect or exhaustive study (that would take another set of years), but it did yield some interesting comparisons.

In the mapping linked above, I'd ranked or grouped each of the 8 sessions for each of the dimensions, adding rationale for each ranking and sometimes example artifacts, such as screenshots, photos, or transcript quotes.

The Excel work was originally to see if any patterns emerged by comparing across the dimensions. I think some did. At the least, it yields some compelling visuals.

In the spreadsheet, for each dimension, I arrayed the sessions left to right, assigning a unique color to each (blue for Ames Group 1 (AG1), orange for the Hab Crew session (Hab), etc.). Where the dimensions were simpler groupings (such as dimension A1, which just shows the practitioner choice of method), I show the grouping rather than a high-to-low array. There is a lot more info and description of each dimension and ranking method in the maps.

Group A comprises 5 dimensions concerning aspects of a session's pre-existing plan and other pre-session factors, such as choice of method and approach. The results look like this:

The 9 Group B dimensions cover aspects of practitioner interaction with participants:

Group C comprises 5 dimensions showing various aspects of the sessions as meetings and discussions:



And Group D comprises 8 dimensions that focus more specifically on the shaping of the representation:

What was most interesting about moving the analysis into Excel was being better able to compare the dimensions themselves and think about what they might be showing as a whole. It's still early going, but it does seem like there are some interesting patterns from a participatory shaping point of view.

For example, it occurred to me that some of the dimensions that emerged from the data cluster into categories that could be used as a kind of index of 'goodness', at least from a shaping point of view. I looked over all the dimensions and identified these 13 that could contribute to such an index:
  • A5: Degree of practitioner adherence to the intended method during the session
  • A6: Participant adherence/faithfulness to the intended plan
  • B8: Practitioner willingness to intervene – frequency and depth of intervention
  • B10: Degree of practitioner-asked clarifying questions to participant input
  • B11: Degree which practitioners requested validation of changes to representation
  • B13: Degree of intervention to get participants to look at the representation
  • B14: Degree of collaboration between multiple practitioners (if applicable)
  • B15: Degree of collaboration/co-construction between practitioners and participants
  • C17: How “good”/successful was the session?
  • D22: How much attention to textual refinement of shaping
  • D23: How much attention to visual/spatial refinement of shaping
  • D24: How much attention to hypertextual refinement of shaping
  • D25: Degree of ‘finishedness’ of the artifacts
I then derived "scores" for each session along these dimensions by assigning 8 points to the highest-ranked session in a dimension, 7 points for 2nd place, and so on. Of course these ratings are largely subjective, but in aggregate and for comparison purposes I think they have some validity (and at the least, I've captured the rationale for each rating I gave, so someone else could look at the data and understand, if not agree with, why I ranked them this way).
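The points-for-rank scheme described above is simple enough to sketch in a few lines of Python. This is just an illustration of the arithmetic, and the per-dimension rankings in the example are made-up placeholders, not the actual study data:

```python
# Scoring scheme: for each dimension, the top-ranked of the 8 sessions
# gets 8 points, 2nd place gets 7, and so on; a session's shaping index
# is the sum of its points across the chosen dimensions.
from collections import defaultdict

N_SESSIONS = 8

def shaping_index(rankings_by_dimension):
    """rankings_by_dimension maps a dimension id (e.g. 'A5') to a list of
    session names ordered from highest- to lowest-ranked."""
    scores = defaultdict(int)
    for dimension, ranked_sessions in rankings_by_dimension.items():
        for place, session in enumerate(ranked_sessions):
            scores[session] += N_SESSIONS - place  # 8 for 1st, 7 for 2nd, ...
    return dict(scores)

# Illustrative example with two dimensions and three sessions shown:
example = {
    "A5": ["Hab", "RST", "AG1"],
    "B13": ["RST", "Hab", "AG1"],
}
print(shaping_index(example))  # {'Hab': 15, 'RST': 15, 'AG1': 12}
```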

By doing this, an overall ranking of sessions along this "shaping index" emerges. As you can see in this table, not surprisingly the Hab Crew and Remote Science Team sessions are at the top (since they had the most experienced practitioners and were in situ sessions drawn from actual projects). But the Rutgers and Ames sessions also array along this meta-dimension. I then derived a "software proficiency rank" and a "facilitation proficiency rank" from the questionnaire data to compare to the shaping index, and (even to my statistically untutored eye) there appear to be some good correlations, especially with facilitation proficiency.
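One quick way to put a number on the kind of correlation mentioned above: since the shaping index and the proficiency measures both amount to rankings of the same 8 sessions, Spearman's rank correlation applies directly. A minimal sketch, with made-up ranks standing in for the study's actual data:

```python
# Spearman's rho for two rankings of the same n items:
# rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)), where d is the per-item
# difference in ranks. rho = 1 means identical orderings.
def spearman_rho(ranks_a, ranks_b):
    """Spearman rank correlation for two equal-length lists of ranks."""
    n = len(ranks_a)
    d_squared = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - (6 * d_squared) / (n * (n ** 2 - 1))

# Hypothetical ranks for the 8 sessions (not the real data):
shaping_rank      = [1, 2, 3, 4, 5, 6, 7, 8]
facilitation_rank = [2, 1, 3, 5, 4, 6, 8, 7]
print(round(spearman_rho(shaping_rank, facilitation_rank), 3))  # 0.929
```

A value near 1 would support the impression that facilitation proficiency tracks the shaping index; for only 8 sessions, though, any such correlation should be treated as suggestive rather than significant.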


Here are graphical views of the (self-reported) facilitation proficiency and software proficiency comparison scores:


This made me think there are more interesting insights to be derived from the other composite comparisons from the questionnaire data. I could probably spend months going through all the possibilities:


However, I don't want to drown in all this data or let it take me too far off course from really getting at the aesthetics, ethics, and experience aspects of these sessions. Next I will look at the sensemaking moment analyses from a comparative perspective and see what that yields.

Thursday, March 04, 2010

Comparing the questionnaire data across sessions

I put up some charts that show how the practitioners in each session group compare along skill/experience lines.

This is not the main cross-session analysis I'm doing, which is more qualitative in nature, but it will help in the overall comparisons.

For example, it's easy to see that Ames Group 2, which was in many ways the least successful of all 8 groups in terms of getting participants to engage with the representation, also had the lowest self-assigned scores in the facilitation measures (slides 2, 3, 4, 5, 6, 7, and 11), although they had a relatively high level of skill/experience with software and hypermedia (slides 1, 9, 10, 11, 12).

One question -- do the (manually added) vertical lines separating the sessions on slides 6-12 help? Or do they clutter up the display?

There are probably better Excel ways to show this, but I'll worry about that later.

Sunday, February 28, 2010

Comparing across sessions (part 2)

Following up on this post.

I was able to spend some time this weekend working on this, and got about a fifth of the way through the first round (shaping analysis). The enjoyable part was working with the material in Compendium. I want to be able to do all sorts of comparisons between the sessions, as well as look across and through the data in unforeseen ways. Doing this part of the analysis in Compendium will help with that, and could also become an interesting, web-accessible way of giving others access to the materials.

First I went through all of the 'shaping forms' (one for each session, as you can see here), and looked for aspects that appeared to recur throughout each of the sessions (i.e., dimensions, in grounded theory terms). I put each dimension on an index card, then sorted the cards into groups.


I then typed up all of the dimensions in a Word doc, refining the descriptions as I went, then put them in a rough order within each of the five groups (the doc is temporarily here).

Next, I imported all of that into Compendium. I then worked through the six dimensions in the first of the five groups (Group A: "Aspects having to do with initial plan and other pre-session aspects, such as choice of method and approach").

Doing this in Compendium helped me further refine, order, and sub-group the different dimensions. I was refining the approach as I went, so not all of the six dimensions I got through are fully consistent yet, but I settled on an approach where I would do the following for each dimension:
  • characterize where each session lay along the high-to-low (or other) scale for that dimension
  • give a justification/rationale for why I gave a session that value, captured as a Pro
  • array all 8 of the sessions along the 'axis' for that dimension, generally with the 'high' at the top of the map and 'low' at the bottom
You can see the interim results at these links: With left nav menu or Full page view. This map is the most interesting one, at least visually, since it has clickable thumbnails of the artifacts it refers to.

This is the kind of analysis I envisioned doing when I started this work six years ago. More to come.

Two new items

I've posted anonymized versions of all the individual session analyses done to date here. As I start completing the comparison analysis documents I'll post them there as well.

You can also access this pre-print version of a forthcoming journal article that I co-wrote with Simon Buckingham Shum and Mark Aakhus. It's the fullest description to date of the PhD research.