


F6. Instrumental Results Representations

For knowledge work processes where the desired user experience is highly automated, "push button" simplicity, product teams can envision distilled representations of resulting information outputs that could facilitate rapid judgments within targeted work practices.

Examples from three knowledge work domains:

A scientist uses her analysis application to test whether any of the subjects in her clinical study, based on a subset of their uploaded genetic information, have a predisposition for certain well-characterized conditions. She is surprised by how easy this test is to run and how concisely the results are displayed (see illustration).

An architect runs a test in her building modeling application to simulate how light will pass through windows into a building's interior over the course of a day. Almost immediately, the tool highlights areas of the model's floor plan that do not receive a threshold value of natural light.

A financial trader sees a glitch in his trading application and chooses to "test the connection" between his tool and an information vendor. The test automatically progresses through a series of checks, then displays a conclusive "passing" result.

As certain processes become standardized and increasingly automated (E3, E4) in knowledge work, individuals may begin to expect the rationality of what Davis Baird has called "instrumental objectivity." In these user experiences, which are common in mature consumer product genres, certain tasks or even entire activities (A5) that were previously effortful and required specialized skills become streamlined (A4) to a few simple input (B1, B3) and output steps (L1). As a side effect of automation in an "instrumental objectivity" style, workers' conceptual models of underlying processes may become uncritical, limited, or even distorted (C1, D4, K7).
These losses in understanding may be viewed as a positive impact, as an acceptable trend, or as a clear problem by certain individuals, communities of practice, organizations, and professions at large.

With these potential effects in mind, product teams can envision how automated scenarios in their sketched functionality concepts could result in information representations that provide users the "answers" that they are seeking, embedded within relevant context. These rationalized outputs can also clarify potential next steps (B5, B6) by presenting pathway options within the larger narrative of workers' activities (C4, G1).

When product teams do not actively consider the potential role of instrumental results representations in their application concepts, opportunities to create meaningful innovations in summarized information display can be lost. When workers expect these highly concise and directive outputs, anything else may seem unnecessarily complicated (D2, D3) and difficult to learn (K2, K6). Conversely, in some cases, distilled representations of automation results can inappropriately oversimplify work outcomes in misleading ways, especially when functionality to view more detailed, underlying information is not provided (E5, F4, G3, K5).

See also: A, C9, D6, E, F, I, J, K4, K12


Which of the knowledge work tasks or larger activities that your team is striving to mediate could be valuably supported by automations that result in easy to interpret, “instrumental” outputs? How might these results be distilled into meaningful representations of clearly actionable information?

I have a large set of clinical data, and I want to run some basic tests on it to see if there are any known, major genetic abnormalities in the subjects...

More specific questions for product teams to consider while envisioning applications for knowledge work: What do targeted individuals and organizations think about the simplification of certain work practices into instrumental inputs and outputs?

Clinical Scientist

What types of instrumental results representations do knowledge workers currently use? Which existing tasks have conventionally become so automated that even experienced workers have nearly forgotten how they could be accomplished without their current technological support?

So I’ve selected the data from the new subjects in my analysis application, and I’m choosing the range of testable abnormalities that I want the tool to look for...

What are targeted workers’ expectations about “push button simplicity” in the activity contexts that your team is targeting? Where in your team’s application concepts might you valuably cultivate this sort of highly trusted offloading in new scenarios? What standard and tedious work practices could be automated to an extent where people may not need to monitor or comprehend their inner workings? Where might this kind of simplicity become an unwanted barrier to workers being able to use their own analytical, sense making, and procedural skills in fine grained ways? What larger design and technology trends could influence your team’s ideas about how output content in your application concepts could be instrumentally represented?

And a few seconds later, when the results have come back, it gives me a quick summary of how many abnormalities were found...


What analogous representational conventions might you reference, or apply directly as patterns, to your envisioned output displays? How might these analogies enhance users’ intuitive understanding of certain readouts? What standard output states might your sketched automation concepts result in? How could these states drive appropriate variations in representational responses, as well as the clear presentation of relevant pathways for subsequent actions?

I can then scroll down through the results to see the genetic conditions for each subject, organized by statistical confidence and the severity of potential health impacts...

How might instrumental results displays surface ambiguities and errors in the execution of rule based processing? How could these representations reference your larger standards for error prevention and handling?

Do you have enough information to usefully answer these and other envisioning questions? What additional research, problem space models, and design concepting could valuably inform your team’s application envisioning efforts?


Working through Screens (Tabloid Size)  
