Gherkin - a universal language for accountable bots?

14 April 2016

You can’t view source in Google Now.

Software agents of one sort or another (bots, digital assistants, news feed algorithms) seem set to make more and more decisions for us.

How will we know how they reach their decisions? How will we know whether a decision is based on something that aligns with our best interests or with the interests of the company that provides the software?

Will some critical bots need to become subject to auditing and regulation? (OFBOT, anyone?) If the public sector starts building digital assistants into its services, how would a parliamentary committee ever understand what those assistants do?

I watched the live feed of Bot Summit 2016 the other day. Martin O’Leary talked about various ways to make bots more understandable to users: expose the artifice; be explicit, not implicit. As an example of exposing the artifice, he pointed to the blog post that accompanies the Sorting Hat bot and explains in plain English how it works.

Having a human-readable explanation alongside a piece of software, in an agreed format, could help users understand the software they use and help regulators audit it.

I’ve written before about the possibility of regulatory bodies doing something similar when they publish their data, using the Gherkin language (the language used by Cucumber). I’ve also built a proof of concept of a digital assistant that runs on Gherkin-syntax input from a user.

What if all the makers of digital assistants and bots - regardless of how they are written, or whether they are open or closed source - started publishing a description of how the software works in Gherkin? For example:

GIVEN a user has an account WHEN a story is liked by 5 or more of their friends THEN it is recommended to them

or

WHEN a user is outside AND more than 1 km from home THEN display nearby bus stops

or

WHEN a user asks “what should I have for dinner” THEN reply with a random recipe
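
As a rough sketch of what an agreed format might look like, rules like these could be collected into an ordinary Gherkin feature file. The feature and scenario names below are my own illustrations, not a proposed standard; the steps are the examples above:

    Feature: How the assistant decides what to show
      A plain-English, published description of the assistant's behaviour

      Scenario: Recommend a popular story
        Given a user has an account
        When a story is liked by 5 or more of their friends
        Then it is recommended to them

      Scenario: Show nearby bus stops
        When a user is outside
        And more than 1 km from home
        Then display nearby bus stops

      Scenario: Suggest dinner
        When a user asks "what should I have for dinner"
        Then reply with a random recipe

Because tools like Cucumber already run feature files like this as acceptance tests, a published description could, in principle, be the very same file the maker tests the bot against.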