As the light at the end of the PhD tunnel starts to turn from a pinprick to a dime-shaped glow, several people who have recently listened to me talk about my research have mentioned that they see similarities with the work I do at my day job.
I work in software usability and user interface design for systems used by call center reps inside a large company. As with the practitioners I've been studying for my research, making decisions about the UI in enterprise software involves the same kind of connection between choices about the form to give a screen and the way those choices affect the interests of the people who come into contact with it. And, as with participatory representations, it's not a unitary set of considerations. There are multiple kinds of people involved -- clients, end users, other IT teams, auditors -- each driven by diverse imperatives. Even "users" are not a monolithic block with one set of interests. What works best for an experienced, expert user is not the same as what would be best for a new user encountering a task or screen for the first time (among many other sorts of differences between users).
We are constantly balancing considerations like speed of development, ease of maintenance, testability, business rules, time constraints, future changes and plans, and usability. Each of these dimensions has ethical implications and trade-offs. As a usability person, it would of course be easiest to treat ease of use as the principal value (it's our job to advocate for it, after all) and give it the highest ethical importance. But if we know that the best UI design will have costs and cause problems for others with equally legitimate interests, we have to weigh usability against those other factors in our choice-making.
For example, take a simple change to an existing dialog box in an ordering system. In our work we get requests for these all the time, perhaps half a dozen a day (along with much larger projects). From a usability point of view, we always want dialog boxes to have clear window titles that give an overview of the situation; concise, straightforward text in the box that lets the user know why the box appeared and what they can or should do; and clearly labeled buttons that spell out the choices they can make (e.g. "Proceed with Order" and "Return to Address Entry" instead of "OK" and "Cancel" or "Yes" and "No", which are so often misused or unclear). No-brainer, right?
In the abstract, yes. But what if the issue comes up (as these issues so often do) two days before code freeze for a complex release in which this change is just one of hundreds affecting many interdependent systems? And what if (as is often the case) the dialog box is in an older part of a system where all such boxes are coded as simple alerts built from language or library constants that don't allow for descriptive button labels or window titles?
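To make the contrast concrete, here's a rough sketch in Java Swing. This isn't our actual system, code, or wording (the technology and messages here are my own stand-ins purely for illustration); it just shows the difference between an alert built from stock constants and a dialog with a descriptive title, explanatory text, and buttons that name the real choices:

    import javax.swing.JOptionPane;

    public class AddressDialogSketch {

        public static void main(String[] args) {
            // Legacy-style alert: built from predefined constants, so the
            // buttons can only say "Yes" and "No" and the title stays generic.
            int legacyChoice = JOptionPane.showConfirmDialog(
                    null,
                    "Address could not be validated. Continue?",
                    "Warning",
                    JOptionPane.YES_NO_OPTION);

            // The version usability would prefer: a descriptive title, a message
            // that explains what happened and what the user can do, and buttons
            // that spell out the actual choices.
            Object[] buttons = {"Proceed with Order", "Return to Address Entry"};
            int preferredChoice = JOptionPane.showOptionDialog(
                    null,
                    "The shipping address could not be validated.\n"
                            + "You can proceed with the order as entered, or go back and correct the address.",
                    "Address Validation",
                    JOptionPane.YES_NO_OPTION,
                    JOptionPane.QUESTION_MESSAGE,
                    null,        // use the default icon for this message type
                    buttons,     // custom button labels
                    buttons[1]); // button selected by default
        }
    }

The second version is obviously better for the person staring at the screen, but notice that it can't be produced just by swapping a constant; someone has to write, review, and test that custom dialog.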
Insisting on "proper" usability design in this case would require custom programming, which takes time (that the development team doesn't have) and additional testing, which adds costs that comes out of someone's budget (which then is less available for other things). It may mean a change to documented requirements, which then requires formal review by people who already have not enough time for all they have to do, and creates risk to delivery which may jeopardize delivery and testing.
A simple answer in this case might be, "just make simple text changes using the existing code for this release, and do a proper job in the next release when there's more time". And often we do take that approach -- when we can. But sometimes there won't be another opportunity: the dialog box may be in a part of the system that won't be touched again for a long time, and it would be too expensive to open the code just to make that one change (since that, again, would require development time, testing, requirements documentation, and so on). So in many cases the changes are "now or never".
These kinds of dilemmas are familiar to anyone working in software development, and I'm not saying anything new about them. From a research point of view, though, I am particularly interested in highlighting how the choices that have to be made about something's visual form relate to the ethical dimensions of those choices -- the conflicting imperatives, each of which is valid and reflects legitimate interests.
My focus is on the ethical choices involved in making decisions about form in the context of shaping interaction and experience for others with mediating tools and representations. In my research I've been applying this to the special case of people playing a group facilitative role with hypermedia representations, but really it's more broadly applicable. What I feel emerging is a set of ways to enable other people to think and talk about these choices as they relate to their own work. I'm not so much interested in being the expert assessor myself, though I have had to do dozens of assessments of practice in the course of the research (I just finished the last of the 47 individual analyses yesterday!) and I do similar kinds of assessments every day on the job.
Rather, I want to enable and enhance people's ability to think about these issues for themselves, and give them some tools to do so. I want to make the question of "how does the way I shape the things I make affect the people who interact with them?" something that's accessible for people to talk about and get insight on.