Photos
Sketching interfaces, riffing off Berg’s Here & There

Designing the seams, not seamless design

On YouTube, there’s a compilation of Steve Jobs speeches where he says: ‘It just works. Seamlessly.’ There are forty-four examples in total. ‘It just works’ sums up Jobs’ approach to design: remove and simplify. He thought design should ‘get out of the way’. Products that just worked were not there to be meddled with either (when Apple discovered repair shops opening the iPhone 4, they added tamper-proof screws).
It has become an article of faith that a good design is one that just works. One school of thought is that it is enough to apply these principles to the public sphere — to create public services that ‘just work’. But do we really want to design our public services like an iPad? Functional yes, even magical, but good luck if you want to understand how it works.
A functional, transactional view of the relationship between citizen and state — characterised by the argument: I pay my taxes, so all the council has to do is collect the bins and fix potholes — is hardly a new one. But those arguments often come from the privileged position of people who don’t have to interact with public services all that much. It also fundamentally misunderstands the nature of what makes public services public.
Public services are different because people have to use them. They must work, and they must work for everyone. Their quality is measured not by the number of units sold, but by feedback from the public.
Regardless of how well they are perceived to have been designed by designers or policymakers, public services require an element of ‘co-production’ with the public.
The concept of co-production has its origins in the early 1970s and the work of Elinor Ostrom. As part of her work on co-production, Ostrom described the ‘service paradox’ where the quality of services as defined by professionals results in suboptimal outcomes as defined by users.
For example, better-designed textbooks might make education worse if the content is so clear that students no longer feel the need to discuss issues with their class or teachers. In a school, pupils are co-producers of learning with their teachers and with each other.
At the UK Government Digital Service, our version of ‘it just works, seamlessly’ was ‘do the hard work to make it simple’. That principle summed up what we’d tried to do with GOV.UK: people should not need to understand government to interact with it. But as modern design practice spread across government, the simplicity principle took on a life of its own. The idea that people should not have to understand the rules and the structure of government seemed to morph into an assertion that the workings of government should be obfuscated.
When I was working on the UK’s digital welfare system, Universal Credit, in 2014, it became abundantly clear that the next-generation digital public services that automate, abstract and have complex data flows demanded a different approach to design.
Despite the aspiration of public sector design to make services simpler, clearer and faster, we also need to acknowledge that a focus on simplicity, especially when combined with the inherent opacity of technology and data, makes it harder for people to understand government.
The ability for users to ‘meddle’ is a key feature of public services. Democracy is about more than voting every four or five years; it’s about the opportunities people have to shape the services and the rules that, in turn, shape their world. Understanding the way things are is a precondition for being able to change them. Democracy, it has been said, is ‘government by explanation’.
To avoid the service paradox in the next generation of public services, there need to be clear opportunities to understand the workings of those services.
But can we reconcile ‘it just works’ with ‘government by explanation’?
For inspiration, we can look to the work of the late Mark Weiser, who was the chief technologist of Xerox PARC in the 1980s and 1990s. He argued that we should aspire to design ‘calm technology’. ‘Beautiful seams’ would, he proposed, be the way to interact with digital tools that would otherwise exist in the background. Rather than being totally hidden, complexity is there to be revealed. Users can configure, understand or take control of automated processes, as needed.
Rather than designing seamless public services, we should aspire to design better seams. Services should orientate users so they can adopt the correct stance for the organisation they are dealing with. They should help people understand how data about them is used and when that data is used in a new context. If a decision just doesn’t look right, services need to help users switch tasks: from seeking an outcome to trying to understand what has happened. Finally, there should be a route from the service to the underlying rules that say why the service works the way it does and who is accountable.
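To make that concrete, here is one way a ‘seam’ could be represented in a service’s data model. This is a hypothetical sketch, not a description of any existing government system; every field name is an assumption about what a decision would need to carry for a user to understand and challenge it:

```python
from dataclasses import dataclass


@dataclass
class Seam:
    """Explanatory metadata attached to an automated decision (hypothetical)."""
    data_used: list[str]        # which data about the user informed the decision
    data_sources: list[str]     # where that data came from, including new contexts
    rule_reference: str         # pointer to the legislation or policy rule applied
    accountable_body: str       # who is answerable for the decision
    challenge_route: str        # how to question or appeal the outcome


@dataclass
class Decision:
    """An automated outcome plus the seams needed to understand it."""
    outcome: str
    seam: Seam


decision = Decision(
    outcome="Housing support awarded",
    seam=Seam(
        data_used=["declared income", "household size"],
        data_sources=["earnings data shared by the tax authority", "application form"],
        rule_reference="link to the underlying regulations",  # placeholder, not a real URL
        accountable_body="local authority benefits team",
        challenge_route="request a reconsideration of the decision",
    ),
)
```

The point is not these particular fields, but that the explanation travels with the outcome, so transparency is available at the point of use rather than buried elsewhere.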
Digital services should work, yes, but they should also actively educate people about how democracy works and where power and accountability lie by putting transparency at the point of use. After all, democracy is a user need too.
———
Richard Pope was part of the founding team of the UK Government Digital Service and the first product manager for GOV.UK. He is the author of the book ‘Platformland’.
A measure of value for digital public service delivery
User need: outcomes for people, their representatives or communities
Policy intent: meeting explicit outcomes sought by politicians or ones implicit in legislation
Capability to operate: building a collegiate team, unpicking legacy software or answering a knotty technical question
Product leadership in the public sector is, more often than not, about balancing these three things.

Talk: designing means-tested welfare procedures in government
A short talk on digital means testing, from the tail end of last year, that I gave to the Administrative Fairness Lab’s webinar on the Energy Crisis, Fuel Poverty, and Administrative Fairness.*

![](https://cdn-images-1.medium.com/max/800/1*qnDCmAccU2a_5zzlWVfLww.png)

* Not a judgement on whether means-testing is good or bad, or on its social implications
A Guide To The New Field Of Software Politics
2016 was the year it became impossible to ignore the power software exerts on society. Today, in 2018, we can start to identify the companies and organizations that are putting power back in the hands of consumers.

Software is politics. I wrote that back in 2016, arguing that the digital services we all rely on should not just be designed for ease of use–they also need to be understandable, accountable, and trusted.
Viewing software as politics is about more than tech, and it’s about more than ethics. It’s about the idea that, if politics is about the distribution of power in society, then software is inherently political. How that power is managed and the choices about who it is put to work for are the political questions of our age.
If 2016 was the year it became impossible to ignore the power software exerts on society, then today, in 2018, we can start to identify some signals about what the levers of control might be. Are there reasons to be optimistic? Which companies are using trust as a competitive advantage? What organizations are showing how the power of tech can be held to account? Here are six themes that are emerging:

Trust, competitive advantage, and the power of markets
Research by Kelly Martin, Abhishek Borah, and Robert Palmatier published in Harvard Business Review has found that data breaches have a ripple effect–if one company in an industry suffers a data breach, then others in that industry will also feel its effects on their finances. The researchers also found that companies can mitigate that risk when they are transparent about how they use data and give users control of their data.
This prompts an important question: How would investors–those who hold the ultimate power over which businesses rise and which ones fall–judge whether a company is a risk or not?
There may be some parallels here to the Carbon Disclosure Project (CDP). CDP collects and standardizes data about the environmental impact of companies. Investors use that data to make ethical investment decisions or manage risk; regulators use it to make better laws. Maybe investors will start to evaluate risk by consulting transparency initiatives like Ranking Digital Rights and Terms of Service Didn’t Read. Taken to its logical end, only transparent companies would receive funding, and opaque companies would falter, elevating the services available to consumers overall.

Auditing and transparency
Inspired by ProPublica’s investigations into biased algorithms, New York’s city government passed an algorithmic accountability bill into law and established a task force to bring transparency and accountability to automated decision-making by the city’s agencies.
What’s encouraging about this is that the initiative came, not from a campaign group, but from a serving politician, James Vacca, chair of the city’s Committee on Technology. Transparency is now a matter of mainstream importance.
Transparency is not just being adopted by the public sector though, as Canadian VPN provider TunnelBear showed when it published the results of an independent security audit.
The idea behind TunnelBear’s audit was to reveal to its users that the company could be trusted over competitors in a sector that has significant trust issues.
There are some intriguing technical approaches to transparent design, too. To pick just two:
Code Keeper is a new service for creating escrow agreements for code, specifying the legal circumstances under which source code can be accessed. The main focus of the project is to allow access to source code when a company goes bust. But I wonder whether it could also be used to enable access to source code for audits.
Google is working on a General Transparency database called Trillian. Based on the same ideas as Certificate Transparency, the system of public, verifiable logs for SSL certificates, the idea is to make it easy to create a dataset–say, the list of changes to a company’s terms of service–whose integrity can be independently verified.
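To give a flavor of the underlying idea, here is a toy sketch (not Trillian’s actual API or algorithm) of how a verifiable log works: every entry is hashed into a Merkle tree, the operator publishes only the small root hash, and anyone holding the full log can recompute that root to check nothing has been silently altered.

```python
import hashlib


def h(data: bytes) -> bytes:
    """SHA-256, the building block of the tree."""
    return hashlib.sha256(data).digest()


def merkle_root(entries: list[bytes]) -> bytes:
    """Compute the root hash of a Merkle tree over the log entries."""
    # Hash each entry; the prefix byte separates leaves from interior nodes,
    # as in Certificate Transparency.
    level = [h(b"\x00" + entry) for entry in entries]
    while len(level) > 1:
        if len(level) % 2 == 1:  # odd number of nodes: pad by duplicating the last one
            level.append(level[-1])
        level = [h(b"\x01" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]


# A log of terms-of-service changes; the operator publishes only the root hash.
log = [b"ToS v1: 2017-01-01", b"ToS v2: 2017-06-01", b"ToS v3: 2018-01-15"]
published_root = merkle_root(log)

# Anyone holding the full log can recompute the root and compare it with the
# published value: if a past entry had been silently edited, the hashes differ.
assert merkle_root(log) == published_root
```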

Certification and standards
The Internet of Things has been under scrutiny recently, as botnets, data breaches, and poor safety make headlines. But two things came out of the Mozilla Foundation at the end of 2017 that show how the connected device market could shift to prioritize consumers’ safety.
The first was a privacy buying guide, a pre-Christmas review of the most popular connected devices that compared the relative safety and protections of each platform. Hopefully more mainstream consumer review sites like Wirecutter and marketplaces like Amazon will take the idea and run with it.
The second was a report Mozilla commissioned from Thingscon, exploring options for a trustmark for IoT. The report recommended building on the work the U.K.-based #iotmark project has done to develop an open certification mark for a minimum set of principles that connected devices should meet.
At the same time, Doteveryone, which campaigns for a fairer internet, has been looking at the concept of a trustmark for digital services.
Separately, we’ve seen other standards-based initiatives begin to emerge around digital rights. Consumer Reports published the Digital Standard in 2017, signaling a new era for testing and advocacy organizations. Part testing-framework, part certification scheme, it’s a great resource for anyone developing digital products to ask: “Is what I’m doing right?”
Time will tell if kitemarks and certification are an effective way of ensuring the safety of connected devices, but it’s an encouraging development.

Decentralizing machine learning
Machine learning algorithms, rather than being explicitly programmed, are trained using data. Crudely speaking, the more data they have, the smarter they get. When it comes to data about people, this poses an obvious privacy challenge: the trade-off for better software is more sensitive, centralized datasets.
Google’s Clips camera is an always-on wireless camera that uses machine learning to decide what to take pictures of. Rather than uploading photos to a central server for classification, all of the processing happens locally on the device. The hypothesis, presumably, is that people are more likely to trust an always-on camera if it keeps what it is seeing to itself.
Both Google and Apple have recently introduced products that make use of “differential privacy,” a technique that allows services to learn from the behavior of groups of users without revealing anything about any individuals.
Apple has been using the technique to add new words to iOS’s keyboard autocorrect function. Google has been using the technique in combination with federated machine learning to understand how to make better suggestions in its Gboard keyboard on Android. None of this represents a magic bullet, and there are questions about the exact implementations of differential privacy. There is also a risk that, although the learning is decentralized, the control and the learnings remain centralized–only Google and Apple can run experiments like this. Further, there is the question of who will verify the promises they are making.
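For a rough intuition of how noise can protect individuals while still letting a service learn from the crowd, here is a toy sketch of “randomized response,” one of the oldest differential-privacy mechanisms. It is an illustration only, not how Apple or Google actually implement their systems:

```python
import random


def randomized_response(truth: bool) -> bool:
    """Each device flips a coin: half the time it reports the truth, half the
    time it reports a random answer. No single report can be taken at face value."""
    if random.random() < 0.5:
        return truth
    return random.random() < 0.5


def estimate_true_rate(reports: list[bool]) -> float:
    """Invert the noise: observed_rate = 0.5 * true_rate + 0.25."""
    observed = sum(reports) / len(reports)
    return (observed - 0.25) / 0.5


# Simulate 100,000 users, 30% of whom actually use a hypothetical new slang word.
users = [random.random() < 0.30 for _ in range(100_000)]
reports = [randomized_response(u) for u in users]
print(round(estimate_true_rate(reports), 3))  # close to 0.30, without trusting any one report
```

Each report is deniable on its own, yet with enough users the aggregate estimate converges on the true rate.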
These concerns aside, decentralized data and localized learning represent a very clear change in approach from the cloud services of today. It will be exciting to see what happens next.

Changing how products get made
How software gets made has an impact on what software gets made.
Recently, we’ve seen various initiatives that aim to make it easier for developers and designers to do the right thing when it comes to making products that respect users’ rights and safety.
GitHub has announced that it will start telling developers when their projects have insecure dependencies.
The Simply Secure project has been providing professional education for user experience designers, researchers, and developers working on privacy, security, transparency, and ethics.
At IF, where I’m chief operating officer, we’ve updated our open-source Data Permissions Catalogue, which we hope will make it a more useful resource for designers building services that need permission from users to access data.
There are also increasing calls for ethics modules to be added to computer science degrees, and Harvard and MIT have started offering a course to their students specifically on the ethics and regulation of artificial intelligence.

New data regulation
Those of you living outside Europe might not have heard of GDPR. GDPR stands for “General Data Protection Regulation,” and it is the EU’s new set of rights and regulations for how personal data gets handled. It enshrines a slew of digital rights and threatens huge fines for companies that don’t comply. These new rights should make it easier for people to understand and control how data about them is used, see who’s using it, and do something if they’re not happy with what’s going on.
Companies, wherever they are based, face a choice: meet the regulations or risk being locked out of the European market. As such, GDPR could become a de facto global standard for data protection.
Rather than a regulatory burden, this is a huge opportunity for companies to show how they can be trusted with users’ data. (For example, the Open Data Institute has written about what that might look like for the retail sector.)
In addition to GDPR, in January, after several years of lobbying and activism, the Open Banking Standard was introduced in the U.K. It is designed to facilitate a new range of banking services and applications. There are some potential risks, but with good design, it has the potential to empower customers by allowing them to reuse the data held by their banks for other purposes–for example, sharing data with an accountant or proving income.
Beyond the opportunity to transform markets, GDPR and initiatives like the Open Banking Standard represent an opportunity to educate people about data–to provide totally new accountability and transparency mechanisms–and produce a healthier public debate about what data should never be collected in the first place.

What do we call this thing?
I’m optimistic about where we are heading. Companies are developing reputations–good and bad–for how they handle data. Regulators are starting to hold people to account for decisions that affect people’s lives. New technologies and new sources of open data are going to make it easier for companies to be transparent and accountable. There’s a growing interest from people in the tech sector about ethics and responsibility.
And once people get used to having new digital rights, we’re going to expect more. This is a huge opportunity for organizations whose digital strategies and policies empower users.
One thing I’m left wondering, though, is this: The examples I’ve listed here include new regulations, technologies, design patterns, professional development, tools, ethical frameworks, standards, and market realities. The thing that ties them together is that they can all play a part in ensuring that more of the products and services we rely on respect more of the rights we have.
This prompts the question: What is the name of this emerging field of software politics? It feels like it should have one. Names are useful.
While it includes some elements of security, it definitely feels like a different field. “Responsible Tech” or “Digital Ethics” state the intent, but don’t really leave room for the business reality of “trust as a competitive advantage.”
“Decentralized” is fast becoming devalued as it is used unquestioningly in association with technologies like blockchain. Answers on a postcard. Or maybe it doesn’t need a new name. Maybe it’s just politics.