Empathy, augmented - public services as digital assistants
Google Now is probably the best-known example of the so-called ‘intelligent digital assistants’*. It suggests relevant information based on your location, your calendar and your emails. So, for example, it might automatically track a parcel based on a confirmation email from Amazon, or nudge you with the quickest route home based on your location.
Google Now is (for now) confined to day-to-day admin, and using it feels very obviously like having a machine help you out (and I’d guess a machine that runs off simple ‘given these events have happened, then do this thing’ rules rather than any artificial-intelligence cleverness).
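To make that guess concrete, here is a minimal sketch of what such ‘if this event, then that card’ rules might look like. Everything in it - the event shapes, the rule names - is my assumption, not how Google Now actually works:

```python
# A minimal, illustrative sketch of 'given these events have happened, then do
# this thing' rules. Nothing here is Google's actual implementation.

def parcel_rule(event):
    """If a confirmation email carries a tracking number, surface a tracking card."""
    if event["type"] == "email" and "tracking_number" in event["data"]:
        return {"card": "track_parcel", "number": event["data"]["tracking_number"]}

def route_home_rule(event):
    """If the user is out and about in the evening, suggest the quickest route home."""
    if event["type"] == "location" and event["data"]["time_of_day"] == "evening":
        return {"card": "route_home", "from": event["data"]["place"]}

RULES = [parcel_rule, route_home_rule]

def cards_for(event):
    """Run every rule over an incoming event and collect whichever cards fire."""
    return [card for rule in RULES if (card := rule(event)) is not None]
```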
In addition to Google Now, there are examples of personal assistants that combine contextual notifications with a conversational, instant messenger style interface. So you get pushed some relevant information or asked to complete a task, but you can also ask a question or add a comment.
Native and Vida are apps that help you book complex travel arrangements and diagnose food allergies respectively. There is a good write-up here of how they work.
Compared to Google Now, these seem much less obviously like a machine talking**. Instead, you are having a conversation with a person (and it almost certainly is a real person most of the time), but there are these automatic nudges and nuggets of information that make the conversation richer.
What is really nice about these examples is that the difference between dealing with a real person and dealing with the purely digital parts of the service is abstracted away. There is a single interface onto a complex domain, with computers doing the things computers are good at (joining together disparate data sets / reacting to changes in context in real time), and humans doing the things humans are good at (empathy, and understanding complex edge cases).
So, what is the relevance for public services? Well, for most public services, probably very little. You don’t need an intelligent assistant to buy a fishing licence or book a driving test.
Where it is potentially revolutionary is in the delivery of complex services that require interaction over a long period of time and have many edge cases - services where everybody is an edge case and everything is always changing. Things like benefits, caring, health, special educational needs or mediation. Things that are complex and demand empathy.
What could a public-service-as-digital-assistant look like?
- Smart to-do lists that make clear exactly what steps a user needs to take next to navigate the system. Much like cards in Google Now, items/cards get added to the list based on a user’s context, and completing one task may trigger others (there is a rough sketch of this after the list). For example, if a user is asked to confirm how many children are in their household, and the number has changed since they were first asked, new cards might appear asking them to enter the children’s details. New cards can be added automatically by the system, at a face-to-face meeting with a government advisor, or while a user is on the phone to a call centre; there is one interface regardless of the channel.
- A dynamic overview of a user’s situation right now. What this looks like will depend on the service, but it should also change based on a user’s exact context. For example, the overview shown while an advisor is beginning to understand the caring needs of a family member may look very different once help has been put in place. Broadly, though, these should communicate where a user is right now, how they are progressing through the system and what to expect next. The same view that is visible to a user should be visible to the government advisors who are helping them.
- Augmented conversations that, rather than removing human interaction from a service, augment it (also sketched below). So if a nurse mentions a medicine a user is going to be asked to try, the contraindications are automatically presented. Or if a special educational needs advisor mentions a school, the travel time and school performance are linked too. Or if a user notes down five jobs they have applied for, the pay ranges and locations are automatically summarised for the user and government advisor to comment on.
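To make the to-do list pattern concrete, here is a rough sketch of the household example above. The classes and field names are mine, not any real service’s:

```python
from dataclasses import dataclass, field

@dataclass
class Card:
    """One item on a user's to-do list, whichever channel added it."""
    id: str
    prompt: str
    done: bool = False
    answer: object = None

@dataclass
class ToDoList:
    cards: list = field(default_factory=list)
    known_children: int = 0  # the number the service last recorded

    def complete(self, card_id: str, answer) -> None:
        """Mark a card done, then let the answer trigger any follow-up cards."""
        card = next(c for c in self.cards if c.id == card_id)
        card.done, card.answer = True, answer
        self._follow_ups(card)

    def _follow_ups(self, card: Card) -> None:
        # The household example: if the number of children has changed since
        # it was last recorded, ask for each new child's details.
        if card.id == "confirm_children" and card.answer > self.known_children:
            for n in range(self.known_children + 1, card.answer + 1):
                self.cards.append(Card(f"child_details_{n}",
                                       f"Enter the details of child {n}"))
            self.known_children = card.answer

# e.g. starting from a single card:
# todo = ToDoList([Card("confirm_children", "How many children are in your household?")])
# todo.complete("confirm_children", 2)  # appends child_details_1 and child_details_2
```

The point of keeping the trigger logic with the list itself, rather than with any one channel, is that a card added by the system, by an advisor in a meeting or by someone on the phone all behave the same way.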
(The closest to this currently happening in the public sector is the work the Universal Credit Digital Service team are doing with to-do lists.)
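And a similarly rough sketch of an augmented conversation, with made-up reference data standing in for real lookups; the point is just that the matching happens automatically as people type, and that what it finds is visible to both sides:

```python
# Hypothetical reference data; a real service would look these up properly.
MEDICINES = {"ibuprofen": "contraindicated with asthma and stomach ulcers"}
SCHOOLS = {"hillside primary": "35 minutes by bus; rated Good at last inspection"}

def augment(message: str) -> list[str]:
    """Return snippets to show beside a message, to both user and advisor."""
    text = message.lower()
    snippets = [f"{name}: {facts}" for name, facts in MEDICINES.items() if name in text]
    snippets += [f"{name}: {facts}" for name, facts in SCHOOLS.items() if name in text]
    return snippets

# augment("Shall we try ibuprofen for two weeks?")
# -> ["ibuprofen: contraindicated with asthma and stomach ulcers"]
```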
Personally, I think these patterns provide an opportunity to design services that genuinely understand and react to a citizen’s needs, that seamlessly blend the online and the offline, the human and the automated, into a single empathetic service.
I guess we’ll only find out by building some.
* I’m not counting Siri here, which is really more of a voice interface with a personality
** I’ve not used either of these directly; I’m just going on descriptions and screen grabs