You can’t view source in Google Now.
Software agents of one sort or another (bots, digital assistants, news feed algorithms) seem set to make more and more decisions for us.
How will we know how they are reaching their decisions? How will we know when a decision is based on something that aligns with our best interests rather than the interests of the company providing the software?
Will some critical bots need to become subject to auditing and regulation? (OFBOT anyone?). If the public sector starts building digital assistants into its services, how would a parliamentary committee ever understand what they do?
I watched the live feed of Bot Summit 2016 the other day - Martin O’Leary talked about various ways to make bots more understandable to users: expose the artifice; be explicit, not implicit. As an example of exposing the artifice, he pointed to the blog post that accompanies the Sorting Hat bot, which explains in plain English how the bot works.
Having a human-readable explanation alongside a bit of software, in an agreed format, could help users understand the software they use and help regulators audit it.
I’ve written before about the possibility of regulatory bodies doing something similar when they publish their data, using the Gherkin language (the plain-English syntax used by Cucumber). I’ve also built a proof-of-concept digital assistant that runs on Gherkin-syntax input written by a user.
What if all the makers of digital assistants and bots - regardless of how they are written, or whether they are open or closed source - started publishing a description of how the software works in Gherkin? For example:
```gherkin
GIVEN a user has an account
WHEN a story is liked by 5 or more of their friends
THEN it is recommended to them

WHEN a user is outside
AND more than 1 km from home
THEN display nearby bus stops

WHEN a user asks "what should I have for dinner"
THEN reply with a random recipe
```
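To make the idea concrete, here is a minimal sketch of how an assistant might parse and act on one of those plain-English rules. It is not the proof of concept mentioned above: the rule, the recipe list and the function names are all invented for illustration.

```python
import random
import re

# A hypothetical rule, as a user might write it.
RULE = 'WHEN a user asks "what should I have for dinner" THEN reply with a random recipe'
RECIPES = ["lentil soup", "mushroom risotto", "veggie chilli"]  # placeholder data

def parse_rule(rule):
    """Split a one-line WHEN ... THEN ... rule into a trigger and an action."""
    match = re.match(r"WHEN (.+) THEN (.+)", rule)
    if not match:
        raise ValueError("rule must be of the form 'WHEN ... THEN ...'")
    return match.group(1), match.group(2)

def handle(user_input):
    """Check the user's input against the rule's trigger and run its action."""
    trigger, action = parse_rule(RULE)
    phrase = re.search(r'"([^"]+)"', trigger).group(1)  # the quoted trigger phrase
    if phrase.lower() in user_input.lower() and action == "reply with a random recipe":
        return random.choice(RECIPES)
    return None

print(handle("What should I have for dinner?"))  # e.g. 'mushroom risotto'
```

The point is less the implementation than that the rule itself doubles as documentation a user (or a regulator) can read.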
“Digital/transformation/business is not about technology, it’s about design / strategy / culture” is a recurring meme. It can be a comforting thing to cling to, and it’s probably true a lot of the time, but it is also not true in some important respects.
Technology does matter. Good digital / design / business / transformation / culture / strategy requires an understanding of the materials.
OpenStreetMap came into existence 10 years ago in part because of affordable consumer GPS units and open-source GIS software; bespoke on-demand printing services like moo.com exist because of high-quality digital printers; many of the ‘web 2.0’ services became possible because of Ajax.
Who knows what WebRTC and other decentralised technologies are about to do to how we use the web, and the sorts of things that could be designed to meet user needs? Or the quiet revolution in the capabilities of mobile web browsers? Or YubiKey and other new password technologies?
If you don’t understand the materials you are working with, you can’t build the right thing, even if you go about it in the right way. You can’t build what you can’t think of in the first place.
Sometimes the right question to ask is ‘could we meet our user needs better using this new technology?’.
The same thing applies to system-level design.
Novel uses of human-readable software tests, ‘reproducible builds’ and audited software supply chains could fundamentally change how regulatory bodies operate.
Socitm is trying to improve and standardise the design of local authority websites and services without solving the underlying problem of how code and data get shared between hundreds of organisations (and without an understanding of the technology available to do that). Users of local authority digital services are stuck with bad services, in part, because that underlying problem has not been solved. Design standards without an understanding of the current state of technology are less potent than they could be.
So what’s the solution?
For one, digital leaders need to spend more time understanding the current state of technology, and make sure they have technologists and developers in their organisations making decisions, not just building things. (How many of the people at board or senior management level in your organisation would count themselves as technologists?)
The move from wireframes and mockups to ‘designing in-browser’ has changed the way things get designed for the web, but I think it is time to go further. The dominance of mobiles and tablets means teams should be designing and developing directly, with multiple devices on their desks and in their demos. Commoditised services like Heroku, GoCardless and Twilio, and mature web frameworks, mean it is possible to get real products, or multiple variations of a service, into the hands of real users in the time it used to take to build a prototype or mockup.
Finally, design should be a genuinely multidisciplinary task, something that anyone can do, not something that is ever more specialised. I’d go further and say that dedicated design teams (and probably dev teams in many circumstances) should not exist. But that is probably a future blog post.
Some things from an Evernote notebook called ‘ideas’:
- Open Need Map: A structured wiki for mapping things that are needed by people and organisations at particular places, e.g. items at a food bank, volunteer tasks for organisations, items required at a refugee camp.
Data would be available via an API so, for example, it would be possible for a third-party e-commerce service to invite you to add something a local charity needs to your basket, or for an employer that gives its staff time off to volunteer to add local volunteering opportunities directly to an employee’s pay slip.
- DIY citizen juries: A service where people can pledge to spend 2 days a year responding to a random government consultation. Each consultation is sent to a group of people along with a temporary email group to allow them to discuss the issues and draft their response(s).
- URL dials: A physical dial, like the bicycle barometer, that does one thing: display the number published at a URL. It would be laser-cut from laminated wood so users can write the scale directly on each dial. A generic tool for displaying data in the physical world (see the sketch after this list).
- Assets of community value: A service that makes it easy to both identify and build support for local amenities to be added to the Assets of Community Value register. OpenStreetMap data would be used to display the percentage of parks/pubs/public spaces etc currently listed. Users could check in to register their support.
- Distributed local information platform: Experiment with using GNU social or pump.io to build a distributed messaging service for posting local information (planning applications, lost cats, parking suspensions). Test if it would be possible for local councils or local interest groups to run their own interoperable instances.
- Product datastore: A datastore of the products that a household owns. Data is entered by scanning receipts, forwarding emails etc. Product recall notices and personal environmental impact reports are generated automatically. (We experimented with this idea at Consumer Focus Labs, but it feels doable now).
- Total hyperlocal alerts: Scrape and buy all the data that the internet has on a 3-square-mile area of the UK - everything from planning applications, house sales, edits to the Wikipedia page for a local monument, crime etc - and build an alerts service on top of it. Show what would be possible if communities had total, high-quality structured information about their area. (aka Brixton Radar done properly).
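Here is a rough sketch of the software half of the URL dials idea in Python. The URL, scale and polling interval are invented placeholders, and a real build would drive a servo rather than print.

```python
import time
import urllib.request

# Hypothetical endpoint publishing a single number as plain text, e.g. "42".
DIAL_URL = "https://example.com/bikes-available.txt"
DIAL_MIN, DIAL_MAX = 0, 100  # the scale the owner has written on the dial

def fetch_value(url):
    """Fetch the number currently published at the URL."""
    with urllib.request.urlopen(url) as response:
        return float(response.read().decode().strip())

def value_to_angle(value, lo=DIAL_MIN, hi=DIAL_MAX):
    """Map the value onto a 0-180 degree pointer sweep, clamped to the scale."""
    value = max(lo, min(hi, value))
    return (value - lo) / (hi - lo) * 180

while True:
    angle = value_to_angle(fetch_value(DIAL_URL))
    print(f"move pointer to {angle:.0f} degrees")  # a real dial would move a servo here
    time.sleep(60)  # poll once a minute
```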
Richard Pope, 02 March 2016
* [Argüman](http://en.arguman.org) is an argument mapping tool. It uses a visual presentation and a limited language set ('but', 'because', 'however') to map out the pros and cons of a subject - the aim is to aid critical thinking and quick learning. [Here's the argüman for cats vs dogs](http://en.arguman.org/cats-are-better-than-dogs-276225bea7cb41a1a580d5c30af995eb).
There was lots of talk about making it easier for IoT devices to talk to each other. (Everyone seems to be trying to make things talk, rather than thinking about the usefulness or context of the information):
‘IoTivity’ is Samsung’s literally-named attempt at this. (I’ve no idea if it’s any good, but I made a note to read up on Constrained Application Protocol and Web Services Interoperability).
Matthew Hodgson from matrix.org gave a convincing talk about using the distributed chatroom/database as a tool for shuffling data between various connected devices. He also mentioned https://tessel.io, which is a nice-looking JavaScript + hardware rapid-prototyping kit.
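As a sketch of that data-shuffling idea, a device could publish a reading into a Matrix room using the standard client-server API. The homeserver, room ID and access token below are entirely made up, and error handling is minimal.

```python
import time
import requests

# Placeholder values -- a real deployment would use its own homeserver,
# room and access token.
HOMESERVER = "https://matrix.example.com"
ROOM_ID = "!sensors:example.com"
ACCESS_TOKEN = "replace-me"

def publish_reading(sensor, value):
    """Send a sensor reading into a Matrix room as a plain-text message."""
    txn_id = str(int(time.time() * 1000))  # transaction IDs must be unique
    url = (f"{HOMESERVER}/_matrix/client/r0/rooms/{ROOM_ID}"
           f"/send/m.room.message/{txn_id}")
    body = {"msgtype": "m.text", "body": f"{sensor}: {value}"}
    requests.put(url, json=body, params={"access_token": ACCESS_TOKEN}).raise_for_status()

publish_reading("living-room-temperature", 19.5)
```

Every device (and person) in the room then sees the same replicated history, which is the appeal of using a chatroom as the transport.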
I spotted this little box on the ownCloud stand:
I remain pretty certain we’ll soon all be buying little plastic boxes like this and just plugging them in at home - email, blogs, hyperlocal communities and backup services become mainstream consumer physical goods.
The most random talk I saw was on ‘Necrocomputing’ - trying to install the latest version of PostgreSQL on a 1980s-era VAX, and how doing so can be a useful (if painful) way of finding bugs in current but complex software projects like PostgreSQL.
Dawn Foster is doing a PhD on the community that surrounds the Linux kernel (19M lines of code and over 11K developers and counting). She talked about the tools for analysing years of mailing lists, repos and wikis. (Random thought: if governments took mailing lists and wikis seriously as tools for developing policy, the same approach might one day give some great insights about policy formation).
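For a flavour of what that analysis looks like, here is a minimal sketch that counts posts per author in a mailing list archive. The filename is a placeholder (kernel list archives are downloadable in mbox format), and real research uses far richer tooling.

```python
from collections import Counter
import mailbox

# Open a local mbox archive of a mailing list.
archive = mailbox.mbox("archive.mbox")  # placeholder filename

# Tally messages by their From: header.
authors = Counter(msg["from"] for msg in archive if msg["from"])

for author, count in authors.most_common(10):
    print(f"{count:5d}  {author}")
```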
Software supply chains, reproducible builds and how to establish trust in (again) very large, aging software projects came up in quite a few talks:
There was a quote (which I can’t subsequently verify) that 98% of cheap Chinese tablets are in breach of the GPL in some way (how long before consumer rights organisations and review sites start including data from open source certification programmes?)
The tools currently used for checking compliance with licences could also be used for checking the supply chain of what went into a particular product. Stefano Zacchiroli talked about how to start opening up the partially closed compliance-checking toolchain [video].
SIL2LinuxMP is an attempt to get a minimum version of Linux verified to ‘SIL Level 2’. SIL levels are set by various standards bodies and are basically a measure of how likely something is to break. Different uses require different SIL levels (think airbags vs car radio). SIL2LinuxMP is based on Debian, and Debian is introducing reproducible builds so you can be certain what version of software you are running. (Being able to be certain that the software running your car or smoke alarm isn’t going to kill you feels like a pretty clear user need).
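The check that reproducible builds enable is simple to sketch: two parties build the same source independently, then compare artifact hashes. The file names here are placeholders.

```python
import hashlib
import sys

def sha256(path):
    """Hash a build artifact in chunks so large files stay memory-friendly."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

mine, theirs = sha256("my-build.deb"), sha256("their-build.deb")
if mine == theirs:
    print("builds match: the binary corresponds to the published source")
else:
    sys.exit("builds differ: something in the build or supply chain changed")
```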
- The video of the whole ‘Safety Critical FOSS’ session is available here. Two points that are particularly well made: 1) named maintainers, clear licensing and public bug trackers should give open source software an advantage, and 2) regulators are in a hard place (it’s hard to check software, and lots of car-industry jobs are at risk if they get it wrong), but the answer may lie in understanding how the FOSS community can get involved and moving regulation to a more peer-review-based system.
Finally, the best talk of FOSDEM, IMO, was ‘Comparing codes of conduct to copyleft licenses’ by Sumana Harihareswara. The transcript is here. Great arguments on a subject FOSDEM needed to hear about.
The UK government is asking for ideas from the public towards its digital strategy. I’ve submitted the following two:
## Labour market data and job vacancies
As recent research by Citizens Advice has demonstrated, the quality of job adverts in the UK is poor, and this has a negative effect on job-seekers (applying for unsuitable jobs, unable to find suitable jobs) and employers (processing unsuitable applications).
In addition to vacancy information, the supporting labour market datasets - such as lists of job categories, skills, qualifications and employer demand - are published infrequently and at low resolution. For example, the ONS Standard Occupational Classification is updated every 10 years and was last published in 2010, so any new job-types created in the last 5 years will be missing.
All this makes it harder for both government and the private sector to build better tools to help people find work.
The digital strategy should include commitments to:
- Encourage the adoption of the schema.org JobPosting standard for publishing job adverts. The government should do this by mandating that all public sector jobs are advertised in accordance with the standard, and by working with large employers who maintain their own job-listing websites (supermarkets, hotels etc) to implement the standard in the private sector (see the sketch after this list).
- Seek to make labour market information higher-resolution and higher-frequency. Government should review the current labour market datasets and see how they can be improved. It should work with job vacancy aggregators and Jobcentre Plus to create new real-time public datasets about labour market demand, skills and job-types.
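To illustrate the first commitment, here is a minimal schema.org JobPosting serialised as JSON-LD from Python. The values are invented, and the standard defines many more properties.

```python
import json

# A sketch of a job advert marked up with the schema.org JobPosting vocabulary.
job = {
    "@context": "https://schema.org",
    "@type": "JobPosting",
    "title": "Store Assistant",
    "datePosted": "2016-03-02",
    "employmentType": "PART_TIME",
    "hiringOrganization": {"@type": "Organization", "name": "Example Supermarket"},
    "jobLocation": {
        "@type": "Place",
        "address": {
            "@type": "PostalAddress",
            "addressLocality": "Brixton",
            "addressCountry": "GB",
        },
    },
    "baseSalary": {"@type": "MonetaryAmount", "currency": "GBP", "value": 9.50},
}

# Publishers embed this in a page inside <script type="application/ld+json">.
print(json.dumps(job, indent=2))
```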
## Personal data sharing
The midata initiative has begun to create a healthy UK data economy where citizens can understand the data that is held about them by government, business and charities, and download it for reuse.
The next government digital strategy should continue and extend the adoption of midata.
In addition, there is a component missing from our national data infrastructure: tools and standards for managing and sharing personal data that are understood and trusted by citizens.
For example - as well as being able to download their energy usage data, citizens should be able to delegate access to third parties in a way that enables new business models while protecting their privacy.
The increasing quantities of data generated by internet-of-things devices and growing public concern about personal data and privacy will only make this more of a priority.
The digital strategy should include commitments to:
- Commission and publish research on user-interface design patterns for sharing personal data (so it can advise business on best practice and adopt them for government services).
- Create an independent institution for certifying best-practice design/security for sharing personal data.
- Create a fund for startups focused on personal data stores and trusted personal data markets.
3 just-about-related links on the subject of trust and clicktivism:
- Back in March, I saw Ethan Zuckerman talk at The Impacts of Civic Technology Conference. The talk was about mistrust as an untapped force. He mentioned 4 ways to change society: markets; laws; making things socially desirable; and making something easier or harder to do (e.g. using code and good design to make something easier).
He went on to suggest that, if there is no trust - if people don’t believe that the organisation trying to change stuff is efficacious - just making things easier is not enough.
He also talked about mistrust as an untapped resource that can be a good trigger to get people to act (and, in turn, make them trust more).
Video: ‘It starts with a click’ by Ethan Zuckerman
- Nick Pearce, from the Institute for Policy Research at the University of Bath, blogged last week about political parties. He suggests that mainstream parties are the wrong shape for a networked world (and that ‘anti-system’ parties have understood this first). It includes this quote:
> In order to address the legitimacy crisis, parties will have to integrate these alternatives with established institutional mechanisms … they will have to find a way to close the gap between an increasingly horizontalized public domain, on the one hand, in which critical citizens, societal organisations and political actors operate on a relatively equitable footing in real and virtual networks, and the vertical structures of electoral democracy and party government, on the other, which continue to be organised hierarchically and top-down. It is not easy to see how this can be accomplished.
Blog: ‘Yes, we can? Renewal vs remorseless decline of political parties’ by Nick Pearce
- I fell down a wiki hole and ended up reading about the Clarion Cycling Club and Robert Blatchford. Blatchford was a journalist and campaigner in the 1890s who aimed to get people engaged in politics by socialising and doing stuff in their communities.
The cycling club was one of a number of institutions inspired by Blatchford’s newspaper ‘The Clarion’.
As well as the cycling club, there was the ‘Cinderella Movement’ where readers organised groups to provide food and entertainment for the children of the slums.