Articles
- Open Need Map: a structured wiki for mapping things that are needed by people and organisations at particular places, eg items at a food bank, volunteer tasks for organisations, items required at a refugee camp.
- DIY citizen juries: a service where people can pledge to spend 2 days a year responding to a random government consultation. Each consultation is sent to a group of people, along with a temporary email group to allow them to discuss the issues and draft their response(s).
- URL dials: a physical dial, like the bicycle barometer, that does one thing: display the number published at a URL. It would be laser cut from laminated wood so users can write the scale directly on each dial. A generic tool for displaying data in the physical world.
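If the number lives at a URL, the software half of such a dial is tiny. A sketch in Python (the endpoint format, scale and 270-degree sweep are all assumptions for illustration):

```python
import json
import urllib.request

def read_number(url):
    """Fetch a URL that publishes a single number, as plain text or JSON."""
    with urllib.request.urlopen(url) as response:
        body = response.read().decode("utf-8").strip()
    try:
        return float(body)
    except ValueError:
        # Assume a JSON payload like {"value": 42} if it isn't bare text.
        return float(json.loads(body)["value"])

def dial_angle(value, low, high, sweep_degrees=270):
    """Map a value onto a dial with a given sweep, clamping to the scale."""
    clamped = max(low, min(high, value))
    return (clamped - low) / (high - low) * sweep_degrees
```

The angle would then be pushed to whatever moves the needle (a servo, say); the hand-written scale on the wood does the rest.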
- Assets of community value: a service that makes it easy to both identify and build support for local amenities to be added to the Assets of Community Value register. OpenStreetMap data would be used to display the percentage of parks/pubs/public spaces etc currently listed. Users could check in to register their support.
- Distributed local information platform: experiment with using GNU social or pump.io to build a distributed messaging service for posting local information (planning applications, lost cats, parking suspensions). Test whether local councils or local interest groups could run their own interoperable instances.
- Product datastore: a datastore of the products that a household owns. Data is entered by scanning receipts, forwarding emails etc. Product recall notices and personal environmental impact reports are generated automatically. (We experimented with this idea at Consumer Focus Labs, but it feels doable now.)
- Total hyperlocal alerts: scrape and buy all the data that the internet has on a 3 square mile area of the UK - everything from planning applications, house sales and edits to the Wikipedia page for a local monument, to crime - and build an alerts service on top of it. Show what would be possible if communities had total, high-quality structured information about their area. (aka Brixton Radar done properly.)
- There was lots of talk about making it easier for IoT devices to talk to each other. (Everyone seems to be trying to make things talk, rather than thinking about the usefulness or context of the information.)
- ‘IoTivity’ is Samsung’s literally named attempt at this. (I’ve no idea if it’s any good, but I made a note to read up about Constrained Application Protocol and Web Services Interoperability.)
- Matthew Hodgson from matrix.org gave a convincing talk about using the distributed chatroom/database as a tool for shuffling data between various connected devices. He also mentioned https://tessel.io, which is a nice-looking JavaScript + hardware rapid prototyping kit.
- I spotted this little box on the ownCloud stand:
- The most random talk I saw was on ‘Necrocomputing’: trying to install the latest version of PostgreSQL on a 1980s-era VAX, and how this can be a useful (if painful) way of finding bugs in current but complex software projects like PostgreSQL.
- Dawn Foster is doing a PhD on the community that surrounds the Linux kernel (19M lines of code and over 11K developers and counting). She talked about the tools for analysing years of mailing lists, repos and wikis. (Random thought: if governments took mailing lists and wikis seriously as tools for developing policy, the same approach might one day give some great insights about policy formation.)
- Software supply chains, reproducible builds and how to establish trust in (again) very large, ageing software projects came up in quite a few talks:
- There was a quote (which I can’t subsequently verify) that 98% of cheap Chinese tablets are in breach of the GPL in some way. (How long before consumer rights organisations and review sites start including data from open source certification programmes?)
- The tools currently used for checking compliance with licences could also be used for checking the supply chain of what went into a particular product. Stefano Zacchiroli talked about how to start opening up the partially closed compliance-checking tool-chain [video].
- SIL2LinuxMP is an attempt to get a minimum version of Linux verified to ‘SIL Level 2’. SIL levels are set by various standards bodies and are basically a measure of how likely something is to break. Different uses require different SIL levels (think airbags vs car radio). SIL2LinuxMP is based on Debian, which is introducing reproducible builds so you can be certain what version of software you are running. (Being able to be certain that the software running your car or smoke alarm isn’t going to kill you feels like a pretty clear user need.)
- The video of the whole ‘Safety Critical FOSS’ track is available here. Two points that are particularly well made: 1) named maintainers, clear licensing and public bug trackers should give open source software an advantage; and 2) regulators are in a hard place (it’s hard to check software, and lots of car jobs are at risk if they get it wrong), but the answer may lie in understanding how the FOSS community can get involved and moving regulation to a more peer-review-based system.
- Finally, the best talk of FOSDEM, IMO, was Comparing codes of conduct to copyleft licenses from Sumana Harihareswara. The transcript is here. Great arguments on a subject FOSDEM needed to hear about.
- Split data from services. Hold it in organisations with appropriate accountability (central government, local government, professional bodies) and make the quality of the data independently verifiable.
- Services can be provided by any layer of government, and by commercial or third-sector orgs. It’s OK when they overlap, complement and duplicate.
- It is possible to interact with multiple layers of government at once while respecting their organisational and democratic sovereignty.
- Build small services that can be loosely joined together however citizens like. Do not try and model the whole world in a single user experience; you will either fail or build a digital Vogon.
- Put users in control of their data. Millions of engaged curators are the best protection government has against fraud, and that citizens have against misuse.
- A user not having to understand government does not mean obfuscating the workings of the system.
- The system should actively educate people about how their democracy works and where power and accountability lie. Put transparency at the point of use.
- Be as vigilant against creating concentrations of power as you are in creating efficiency or avoiding bad user experiences.
- Understand that collecting data to personalise or means-test a service comes at a cost to a user’s time and privacy.
- Sometimes the user need is ‘because democracy’.
- No, they have not been tested, but then you can’t build what you can’t think of in the first place.
- Smart to-do lists that make it clear exactly which steps a user needs to take next to navigate the system. Very much like cards in Google Now, items/cards get added to the list based on a user’s context. Completing one task may trigger other tasks. For example, if a user is asked to confirm how many children are in their household, and the number has changed since they were first asked, new cards might appear asking them to enter the details of the children. New cards can be added automatically by the system, at a face-to-face meeting with a government advisor, or when a user is on the phone to a call centre; there is one interface regardless of the channel.
- A dynamic overview of a user’s situation right now. What this looks like will depend on the service, but it should also change based on a user’s exact context. For example, the overview when an advisor is beginning to understand the caring needs of a family member may be very different once help has been put in place. Broadly though, these should communicate where a user is right now, how they are progressing through the system and what to expect next. The same view that is visible to a user should be visible to the government advisors who are helping them.
- Augmented conversations that, rather than remove human interaction from a service, instead augment it. So if a nurse mentions details of a medicine a user is going to be asked to try, then the contraindications are automatically presented. Or if a special education advisor mentions a school, then the travel time and school performance are linked too. Or if a user notes down 5 jobs they have applied for, the pay ranges and locations are automatically summarised for the user and government advisor to comment on.
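The mechanics behind an ‘augmented conversation’ could start as something very simple: watch the conversation for known terms and attach a context card when one appears. A sketch, with invented terms and cards:

```python
# Hypothetical lookup data: terms a service knows how to enrich, and the
# context card each one triggers (both invented for illustration).
CONTEXT_CARDS = {
    "ibuprofen": "Contraindications: check against current prescriptions.",
    "st mary's school": "Travel time and Ofsted report for St Mary's School.",
}

def augment(message):
    """Return the context cards triggered by terms mentioned in a message."""
    text = message.lower()
    return [card for term, card in CONTEXT_CARDS.items() if term in text]
```

A real service would need entity recognition rather than exact matching, but the shape is the same: the humans keep talking, and the machine quietly attaches what it knows.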
- active vs passive (do you actively engage, or do things come to you)
- money vs other factors
- know what I want vs need suggestions
- specific geographical area vs general area
- understanding the past, right now or some time in the future
- centralised vs distributed
- complete vs partial (knows about every job in the country, or just a subset)
- specialist vs generalist
- low fidelity data vs high fidelity
- general audience vs specific audience
- solitary vs social

Most current job-search services can be characterised as active, partial, generalist, solitary, relatively centralised and aimed at general audiences.
- An agreed or enforced open standard for publishing jobs, such as schema.org/JobPosting, that makes it easier for other people to build services - distributed, complete, high fidelity data
- A flashcard tool for quickly sorting all the current jobs within 60 minutes travel time into a yes/no pile - semi-passive, generalist, need suggestions
- A profiler, Netflix style, that asks you a bunch of questions, then uses a recommendation engine to suggest open positions and resources that have been successful for people like you (e.g. maybe people with a history of cleaning jobs in a certain postcode should try security jobs on a certain job board) - need suggestions, semi-social
- A tool for helping someone understand the totality of the local job market - enter a postcode and answer a couple of questions to generate a report showing active jobs by travel-to-work time, apprenticeships most likely to lead to a full-time job in the area, the most active job boards in a particular area (it varies a lot), a breakdown of the types of local openings etc - need suggestions, solitary, high fidelity data
- A journal for collecting links to jobs applied for, plus analytics about applications and interviews over time, maybe for sharing with other people - semi-social, understanding the past, distributed
- An online job group where members suggest work for each other - social, need suggestions
- do anything that expands your understanding of the potential space
- design is much more about technology than it is popular to admit
- don’t just assume you are building one thing

There’s one aspect of product space I’ve not really gone into yet, which is the subject of power in product design. That will be part 3.
- Web pages in Firefox are getting a Bluetooth API for pairing and sharing data directly between devices and web apps.
- GeoTrellis is a tool for doing fast queries against geospatial raster data. The demo I saw was processing several GB in a few seconds on a standard laptop.
- GeoGig is distributed versioning of geospatial data.
- Matrix is a distributed comms system, but it looks like it could also be used as a distributed immutable datastore.
- DIY book scanning [is a thing](http://www.diybookscanner.org/).
- Pump.io is a library for building distributed social-network-type things. I bet there’s a couple of hyperlocal things in that.
- MediaGoblin is a decentralised Flickr/YouTube-type thing. When we all have tiny home servers, and it has had some UI love, maybe this will be a goer.
- Scrapy is a simple, serious-looking scraping library for Python.
- Open Food Facts is building an open database of what is in our food, indexed by barcodes, collected via phone apps.
- it’s hard to find a definitive quote
- The things browsers can do: the web browser on your phone has access to sensors, outputs and offline storage to make proper contextual design a reality. It can:
- capture a screen
- check if a tab has been backgrounded
- check the battery
- check orientation of the device in 3 dimensions
- check and lock the orientation of the screen
- detect the pitch of a sound
- listen to you
- record video and audio
- respond to ambient light
- share all or part of your screen
- show notifications
- talk to you
- talk, type or video conference someone
- use your camera
- vibrate
- work offline

Most of those won’t work if you try them on a laptop browser, but they will on your phone or tablet if you use Chrome or Firefox. This is partly the point: the technology is here, not in the tools we use to design things for the web (laptop browsers), but in the place where users are spending more of their time.
- Stable mobile design patterns: the 7 years of the Apple App Store and the Android equivalents have, in effect, been mass, micro-funded experiments in UI design for small, touch-sensitive devices with lots of sensors and outputs. They have generated winning patterns like:
- Checkboxes replaced by switches
- Check-ins
- Edit without save button
- Everything can be contextual, any bit of UI can disappear between pages
- Everything has its own settings page
- Floating buttons
- Keeping primary navigation off canvas (hidden behind the page)
- Minimal or zero page header (the context an old-school page header/nav gives seems less important when you are holding the app in your hand)
- Multiple, focused apps for the same service
- Offline by default
- Overscroll to refresh
- Reserving dropdown menus for actions on the current context
- Search scoped to their current context (the app)

These are patterns that people use day in, day out on Facebook, Gmail and WhatsApp. These are the new normal: what people expect.
- Printed a paper ticket and had it scanned using an abnormally large laptop in a rainy queue at The Fall gig at Electric Brixton.
- Had a PDF scanned to get on the Amtrak Coast Starlight.
- Had a printed ticket scanned using an adapted phone scanner in the queue for some club in Portland, and at Bob Mould / Liars at Village Underground.
- Waved a code embedded in an app against an entry gate to the Eurostar.
- Printed out a label and stuck it to a package sent to Amazon via Collect+.
- Had an email scanned on my phone while queuing to see a film at the Ritzy.
- Had a ticket in Apple’s Passbook scanned on the way in to see a baseball game.
- Scanned my laptop screen using my phone to load a TOTP security token onto a YubiKey.
- Some tasks are just better suited to mobile/touch or desktop/keyboard - maybe the desktop and mobile versions of the service are fundamentally different propositions? It’s worth taking a look at what Evernote have done as a result of understanding the contexts in which the mobile (app), desktop web and desktop app versions of their product are used. The GOV.UK performance platform team are also exploring this, by making the big-screen view of data fundamentally different. Knowing when to build one product or multiple products is going to become increasingly important (I think).
- A web page + JavaScript on a smartphone can now, among many other things, vibrate and respond to changes in ambient light and proximity - things that are just more useful on a phone than on other devices and that open new possibilities. Every page is a potential app.
- The design language for mobile software is diverging from that of bigger-screen software (remember in the late 90s / early 2000s when lots of websites looked like desktop software, with loads of dropdowns and side menus?). So long as mobile web apps look like mini-me versions of the desktop browser ones (it is hard to find many that do not), they are at risk of being beaten by native apps. Users are just going to start expecting better.
- Mobile phones and tablets, with touch as the interface, are set to become the default way people consume the web, so we had better make sure the mobile web is as good as native apps, or there’s a good chance the web will lose.
7 project ideas
Some things from an Evernote notebook called ‘ideas’:
Data would be available via an API so, for example, it would be possible for a third-party e-commerce service to invite you to add something a local charity needs to your basket, or for an employer that gives its staff time off to volunteer to add local volunteering opportunities directly to an employee’s pay slip.
Fosdem 2016 links and notes
Richard Pope, 02 March 2016
* [Argüman](http://en.arguman.org) is an argument mapping tool. It uses visual presentation of a subject and a limited language set ('but', 'because', 'however') to map out the pros and cons of a subject - the aim is to aid critical thinking and quick learning. [Here's the argüman for cats vs dogs](http://en.arguman.org/cats-are-better-than-dogs-276225bea7cb41a1a580d5c30af995eb).
I remain pretty certain we’ll soon all be buying little plastic boxes like this and just plugging them in at home - email, blogs, hyperlocal communities and backup services become mainstream consumer physical goods.
10 rules for distributed / networked / platformed government
Earlier this year, when I was working with Jamie, Tom, Anna, Paul, Stephen and Adam on a vision for Government as a Platform, I got stuck on the Central Line on the way back from work and ended up trying to distill all the things the team were talking about. The list below was the result.
I’m posting it here because Jamie keeps on telling me I should (he’s normally right), and in case it’s useful to anyone who happens to find themselves redesigning a government:
Notes:
To repeat the intro, definitely not all my own work, but they are my words, so where this is wrong it is my fault.
Changing changes of circumstance: 7 alternative design patterns
Lots of government services require their users to report when things in their life or an organisation change.
This places a lot of responsibility on the user - they need a good mental model of the service to know what to report, when they should do it and how. It also generates a need for lots of secondary transactions and services: update this, report that, change this, re-apply for that.
The ‘digital assistant’ approach to designing public services could start to make things simpler and reduce the number of ‘Report an X to Y’ style government transactions.
After all, if services understand your past circumstances, why can’t they use those circumstances to ask you the right questions?
Here are 7 possible* patterns (there are probably many more).
1) Recurring change
Some circumstances need updating on a regular basis (things like monthly childcare costs). The ‘recurring change’ pattern notifies users (via push alerts or sms) that they need to provide some information. The service should be smart enough to know the optimal number of days to ask this before any deadlines.
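A sketch of the scheduling behind such a prompt, assuming a monthly reporting cycle and an arbitrary five-day lead time (both assumptions for illustration):

```python
from datetime import date, timedelta

def next_deadline(today, day_of_month):
    """Next monthly deadline falling on a given day of the month."""
    if today.day <= day_of_month:
        return today.replace(day=day_of_month)
    # Deadline this month has passed; roll over to next month (and year).
    year = today.year + (today.month // 12)
    month = today.month % 12 + 1
    return date(year, month, day_of_month)

def prompt_date(today, day_of_month, lead_days=5):
    """When to notify the user: a fixed lead time before the deadline."""
    return next_deadline(today, day_of_month) - timedelta(days=lead_days)
```

A real service would tune `lead_days` per user and per channel (sms vs push), which is where the ‘smart enough’ part comes in.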
2) Future confirm
If a user reports a temporary change of state, for example that they are going on holiday or taking their car off the road, the service should be able to ask the user whether that state has passed.
3) Date determined confirm
Similar to ‘future confirm’ there are some circumstances that the service should be able to determine from information it already holds, for example if it knows the user has a child of a certain age.
4) Recurring confirm
A ‘dead man’s handle’ style confirmation, so the user has to actively confirm: “does your cafe still have 12 tables on the pavement outside your business?”.
5) Recurring ignore-to-confirm
As above, but inaction is taken as confirmation.
6) Random change
Ask a user to submit new information on a subject at random intervals to help keep their data up-to-date.
7) Cascading updates
Sometimes the service will be able to determine whether a change in one circumstance is likely to have caused a change in a related circumstance. For example, registering for a particular licence or moving premises may prompt the user to confirm information relevant to a related tax.
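The cascade could be driven by a simple dependency map from a changed circumstance to the confirmations it should trigger. A sketch, with entirely invented circumstance names:

```python
# Hypothetical dependency map: a change in one circumstance triggers
# confirmation requests for related circumstances.
CASCADES = {
    "business_address": ["pavement_licence", "business_rates"],
    "number_of_children": ["childcare_costs"],
    "childcare_costs": ["benefit_entitlement"],
}

def follow_ups(changed, seen=None):
    """Collect every confirmation a change cascades to, transitively."""
    seen = seen or []
    for related in CASCADES.get(changed, []):
        if related not in seen:
            seen.append(related)
            follow_ups(related, seen)
    return seen
```

Walking the map transitively means one reported change can surface every follow-up question in a single conversation, rather than as a string of separate ‘Report an X to Y’ transactions.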
This Place Is Ours: check-in to add this pub to the Assets of Community Value Register
This has been sitting in my Google Docs since May, so I figured I’d just publish it here.
“This Place Is Ours” is the working name for an app that helps people club together to protect their local pub, skate-park, village hall or park by adding it to the Register of Assets of Community Value.
The Localism Act 2011 and associated regulations require local authorities to maintain a list of assets of community value.
An asset of community value is a building or bit of land that “furthers the social wellbeing or social interests of the local community”
Assets get added to the register when they are either nominated by a constituted local group (like a parish council) or when 21 people (who must be registered voters in the same borough) club together.
Once added to the register, the asset is subject to stricter planning regulations (eg a supermarket wanting to convert a pub will need full planning permission) and the local group may apply for a 6 month period to attempt to buy the asset if it is proposed to be sold.
Anecdotally, this power seems to get used reactively, when there is a known threat to a building, rather than proactively.
“This Place Is Ours” will make it easy for 21 people (who do not need to know each other) to prepare a valid application and submit it to a local authority.
To test the concept, an alpha project will aim to get the majority of pubs (and maybe another category eg adventure playgrounds) in the London Borough of Lambeth added to the register, and understand what works for users.
Currently there are only 10 assets listed on the register in Lambeth. Lambeth has more community assets than this.
So far, I’ve got as far as doing a trial with 1 pub, using Google Forms and promoting on Facebook.
More interesting would be using the Foursquare ‘check-in’ pattern for this sort of thing - building groups of people and civic campaigns around real-world things.
Empathy, augmented - public services as digital assistants
Empathy, augmented - public services as digital assistants
Google Now is probably the best known example of the so called ‘intelligent digital assistants’*. It suggests relevant information based on your location, your calendar and your emails. So, for example, it might automatically track a parcel based on a confirmation email from Amazon, or nudge you with the quickest route home based on your location.
Google Now is (for now) confined to day-to-day admin, and using it feels very obviously like having a machine help you out (and I’d guess a machine that runs off simple ‘given these events have happened, then do this thing’ rules rather than any artificial-intelligence cleverness).
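That ‘given these events have happened, then do this thing’ model can be sketched as a list of condition/action rules evaluated against a user’s current context (the rules and context fields here are invented):

```python
# A guess at the shape: each rule pairs a predicate over the user's
# context with the card to show when it fires (all invented).
RULES = [
    (lambda ctx: "parcel_dispatched" in ctx["events"],
     "Your parcel is on its way."),
    (lambda ctx: ctx["hour"] >= 17 and ctx["location"] == "work",
     "Traffic is building on your usual route home."),
]

def cards_for(ctx):
    """Evaluate every rule against the context; return the cards that fire."""
    return [card for predicate, card in RULES if predicate(ctx)]
```

No machine learning required: a pile of hand-written rules over location, calendar and email events gets you surprisingly far.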
In addition to Google Now, there are examples of personal assistants that combine contextual notifications with a conversational, instant messenger style interface. So you get pushed some relevant information or asked to complete a task, but you can also ask a question or add a comment.
Native and Vida are apps that help you book complex travel arrangements and diagnose food allergies respectively. There is a good write-up here of how they work.
Compared to Google Now, these seem much less obviously like a machine talking**. Instead, you are having a conversation with a person (and it almost certainly is a real person most of the time), but there are these automatic nudges and nuggets of information that make the conversation richer.
What is really nice with these examples is that the differences of dealing with a real person and dealing with the purely digital parts of the service are abstracted away. There is a single interface onto a complex domain with computers doing the things computers are good at (joining together disparate data sets / reacting to changes in context in real-time), and humans doing the things humans are good at (empathy, and understanding complex edge cases).
So, what is the relevance for public services? Well, for most public services, probably very little. You don’t need an intelligent assistant to buy a fishing licence or book a driving test.
Where it is potentially revolutionary is in the delivery of complex services that require interaction over a long period of time and with many edge-cases - services where everybody is an edge-case and everything is always changing. Things like benefits, caring, health, special educational needs or mediation. Things that are complex and demand empathy.
What could a public-service-as-digital-assistant look like?
(The closest to this currently happening in the public sector is the work the Universal Credit Digital Service team are doing with to-do lists.)
Personally, I think these patterns provide an opportunity to design services that genuinely understand and react to a citizen’s needs, that seamlessly blend the online and the offline, the human and the automated into a single empathetic service.
I guess we’ll only find out by building some.
* I’m not counting Siri here, which is really more of a voice interface with a personality
** I’ve not used either of these directly, I’m just going on descriptions and screen grabs
Product Land (Part 3)
This is the 3rd and final part of an essay about design and possibilities.
The first part - You can’t build what you can’t think of in the first place - was about the process of design being too linear, taking inspiration from evolution and the concept of hyper-volumes of ‘potential products’; the second part - Tools for exploring the margins - listed some approaches for thinking harder about the things that are possible in product design.
This final part is about power and about the obligations you now have if you make digital services in the 21st century.
The image above is from The New Anatomy of Britain by the writer and journalist Anthony Sampson. He wrote a series of books on the subject of political power in Britain, published approximately every 10 years from 1962. Each included a diagram of what he considered the current state of play. The one above is from 1971; the one below is from the final book in the series, Who Runs This Place?, published just before his death in 2004:
I’ve always loved these diagrams (back at OpenTech in 2009, Rob McKinnon and I used them to map civic tech projects).
Just like the biomorphs or ‘History of the World’ from part 1 of this essay, Sampson’s drawings are attempts to help us think about a subject that is inherently multi-dimensional. They are a tool for thinking about a problem - in this case how power is distributed and, taken together over the years, how it can change.
What might Sampson have drawn today, in 2015?
Well, politics is about the distribution of power in society, and in the early 21st century digital products are exerting influence on how power is distributed among us.
Redrawn 11 years later, it seems clear to me that a Sampson diagram would have large bubbles for the big digital services.
Politics in the 21st century will, in part, be about control over the digital services we now rely on, and which hold an ever-growing concentration of our personal and household data: how often we move (Fitbit, Jawbone), where to (Google Play Services), what we tell people (WhatsApp, Facebook) and how often we burn our toast (Nest).
The same tight orbit that digital product design seems to be stuck in at the functional level (again, see part 1 of this essay) also exists at the organisational level: in the design of the organisations that run these products.
The meme ‘the only way to solve a given problem is to create a private company, provide a free service to users and mine their data’ is strong, but it is also the equivalent in genetics of ‘the only animal that could possibly exist is a hyena’.
And frankly, that’s getting a bit scary. As everything from household appliances to the most basic transport infrastructure gain an IP address and become fonts of data, at the same time as the democratic organisations of the last century seem unable to keep up, it is only going to get more so.
Software is politics now.
This was a subject that Vitalik Buterin, founder of Ethereum (a distributed, auditable computer), talked about at Nesta’s FutureFest event back in March.
There is a PDF of his slides here, but to try and summarise: the core utilities of the 19th and 20th centuries (roads, water, transport, the electricity system) were eventually run or regulated by governments, but the core utilities of the digital age (identity, communications, payment, sharing) are currently run by the first private company that happens to make its way to a near-monopoly. Ethereum is an alternative to unaccountable monopolies.
Just as with water in 19th-century London, where ad-hoc organisations, with little accountability when things went wrong, were replaced first by the Metropolitan Board of Works, and then by a wider municipal democracy in the form of the London County Council.
The story of the industrial revolution too often reads like that of entrepreneurs taking personal risks to weave the future against the odds. Now, granted, that is a history, but not the interesting one in my opinion.
The interesting history is that of the building of institutions that had the concept of accountability to the public baked into them - not an evolution of one thing into another, but the active choice of a more accountable method of providing a service the public rely on.
Whether something like Ethereum, which binds services to behave in a certain way via immutable code, is the right answer, or whether we need organisations that account for themselves in more traditional ways - membership and voting, but built for and of the digital age - is not the important thing.
The first thing is recognising that the accountability mechanisms for a digital service are just another set of axes in product space - another thing that should be thought about and chosen.
Finding viable alternative models to run something like Uber, Google Now or HomeKit is going to be hard (much as municipal democracy had a spluttering start, and there were many failed attempts at finding a viable model for co-ops before the Rochdale Pioneers ended up with one that worked), but that’s no reason not to try.
The second, I think, is recognising the risk of designing services that are superficially the height of simplicity, but can never be understood. To steal a phrase from Matt Jones' brilliant talk on design fiction and the understanding of systems: “magic is a power relationship”.
If a user can never understand how something works, where is the opportunity for recourse? To pick an obvious example: what does it mean when you can’t view source on an ever more powerful Google Now?
To address this problem, I think the accountability model for a service needs to be an intrinsic part of the design of that service. Accountability needs to be embraced as part of the service design rather than abstracted away.
This creates some interesting design constraints: there is a delicate balance between designing something that people can use without having to understand how it works, and totally obfuscating the underlying workings of the service.
The third is that the private sector does not have a monopoly on good digital product design, but equally more accountable digital products should not just be clones with a democratic overhead.
New technologies bring new possibilities for accountability. So design patterns like accountability at the point of use become relevant in a way they never would in a commercial context.
The final thing though, is recognising that if you build or design digital products in 2015 you have a new responsibility.
You are not just building the best, simplest, user experience, or the most elegant code. You need to be as vigilant against creating concentrations of power as you are in creating efficiency.
The image of power flowing from one part of a Sankey diagram should be ever-present in your head.
The reason? If you accept the argument that software is politics, you are by definition also designing a power structure, and that is an important responsibility.
Or to put it another way, sometimes the user need is ‘because democracy’.
Written between March and September 2015 - Brixton, Broadstairs and West Norwood.
Open standards for job vacancies
Open standards can be a force-multiplier: a standard voltage for electricity abstracts away how the electricity was generated, this allows companies to confidently and cheaply build everything from household lighting to MRI scanners on top of it. An open standard can enable a wider public good.
Vacancy publishing today is skeuomorphic - not in the visual sense, but functionally: newspaper small ads put on the web (with the addition of a basic search). Vacancies are published as text, not as data.
Citizens Advice have written an excellent report about the effects of poor-quality vacancy data on the public [PDF].
An open standard for publishing vacancies online which included location, pay, conditions, working hours, company number etc as structured data could help people trust vacancies more and enable a new generation of smarter products to help people find the right job.
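To make this concrete, here is a rough sketch of a single vacancy published as structured data, using property names from the schema.org/JobPosting vocabulary. The employer, pay and location are all invented for illustration:

```python
import json

# A hypothetical vacancy as structured data. Property names follow the
# schema.org/JobPosting vocabulary; the values are invented.
vacancy = {
    "@context": "https://schema.org",
    "@type": "JobPosting",
    "title": "Customer assistant",
    "datePosted": "2015-06-01",
    "employmentType": "PART_TIME",
    "hiringOrganization": {"@type": "Organization", "name": "Example Shop Ltd"},
    "jobLocation": {
        "@type": "Place",
        "address": {"@type": "PostalAddress", "addressLocality": "Brixton"},
    },
    "baseSalary": {
        "@type": "MonetaryAmount",
        "currency": "GBP",
        "value": {"@type": "QuantitativeValue", "value": 8.50, "unitText": "HOUR"},
    },
}

# Because the vacancy is data rather than text, basic quality checks become
# trivial - e.g. refuse to list a vacancy that doesn't state pay or location.
required = {"title", "datePosted", "jobLocation", "baseSalary"}
missing = required - vacancy.keys()
print(json.dumps(sorted(missing)))  # → []
```

Checks like the one at the end - is the pay stated? is there a location? - are exactly the sort of thing that could help people trust vacancies more.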
There are some examples of the use of standards in the wild, but there are not yet any big employers using them. As ever, it’s a bootstrap problem.
Last year, I submitted a challenge to the government’s Standards Hub to find a standard for how government publishes vacancies online:
standards.data.gov.uk/challenge/publishing-vacancies-online
It has just progressed to the ‘response stage’ where people can suggest a standard. If you know of an existing open standard for publishing vacancies online, please add it as a response.
Brand archaeology
![Reverse of a TfL bike key with the plastic peeled back to reveal a Barclays URL](https://farm8.staticflickr.com/7704/16995499017_535832226d_k_d.jpg)
Telegraph laws
This was part of the telegraph zone of the new Information Age gallery at the Science Museum:
New technology, new data integrity and privacy laws?
There’s a couple of mentions of section II in Hansard here and here. It was part of a bill introduced by reformer Henry Fawcett when he was Postmaster General. Seemingly it only gets those mentions in Hansard due to interventions from Charles Warton, who was a bit of a pedant for process (although objecting to a bill being read at 5 AM does seem reasonable).
Permissions. Understood.
This is part follow up to The challenge for web developers in 2015, part inspired by Francis Irving’s The advert wars.
Better patterns for helping users understand software permissions feels like an imperative; somewhere a lot more technical, design and research thinking should be directed.
By permissions I mean ‘what bits of software are allowed to do with data and input/outputs’, and the mechanism by which a user is informed of those things.
It’s a hard problem that is getting harder and more important:
We have more private data on the web. We have more devices and organisations to move data between. We have devices and web technologies that can do a lot more than they could a couple of years ago. As the number of digital organisations and the number of personal/home things with IP addresses increases it will only get harder, and there is not yet a convincing set of permission patterns for any of it.
We need interaction patterns that enable knowing permission (to steal a phrase from Francis' post on online advertising and privacy)
This is a hard UX problem because it probably requires making overall interactions harder, jerkier or less consistent so that someone understands what is happening to their data or what sensors are being activated on their devices.
The most-used patterns we have for explaining what a bit of software is allowed to do, that of Facebook and Google Play store apps, are full of the sort of anti-patterns you get when the custodian of the UI for permissions can gain from you interpreting them in a particular way.
On the Google Play website, for example, the permissions of apps are not shown on the page by default, and are not even in the HTML - they are loaded into a popup via a link that does a javascript POST to retrieve the information, so they are not obvious and not indexed by search engines. They are then presented in a scrolling box, so you can’t see all the permissions at once. Why would you implement it like that if you wanted to achieve understanding, if you wanted to know that someone had understood?
My guess is that most people currently don’t understand and don’t care that they don’t understand; but I think this is an area that needs some future-proofing - the consequences of not knowingly understanding might sneak up on us one day.
One answer might be new organisations that monitor the permissions of software, data and digital organisations on our behalf. Forward thinking consumer rights organisations could start scraping public permissions data and tracking the changes with publicly verifiable cucumber tests for data (this is also seemingly non-trivial for the Google Play example due to the javascript + POST requirement).
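A minimal sketch of what that monitoring could look like, assuming the permission lists have already been scraped (which, as noted above, is the non-trivial part). The app permissions here are invented for illustration:

```python
# A sketch of how a consumer rights organisation might track changes to an
# app's declared permissions between two snapshots. The permission names are
# invented; a real monitor would scrape them from the app store.
def diff_permissions(previous, current):
    """Return permissions added and removed since the last snapshot."""
    added = sorted(set(current) - set(previous))
    removed = sorted(set(previous) - set(current))
    return added, removed

snapshot_2014 = ["READ_CONTACTS", "INTERNET"]
snapshot_2015 = ["READ_CONTACTS", "INTERNET", "ACCESS_FINE_LOCATION"]

added, removed = diff_permissions(snapshot_2014, snapshot_2015)
# An alert could then be published whenever `added` is non-empty.
print("added:", added)  # → added: ['ACCESS_FINE_LOCATION']
```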
Another is more research into what interfaces help people understand what is happening to their stuff, regardless of whether they currently know how important their stuff is, while still remaining usable. Sounds like a good definition of a wicked problem.
Product Land (Part 2)
Tools for exploring the margins.
In part 1 I set out the proposition that the way we think about building digital products is too linear and that, as a result, our thinking about what is possible is constrained, with some sectors - I used the example of the ‘job-search’ sector - stuck shuffling around a local minimum.
Instead, I suggested we should take a lead from evolutionary biology and start thinking about a hypervolume of potential products - after all, you can’t build what you can’t think of in the first place.
The purpose of this post is to try and start listing some approaches to moving away from the linear design-test-develop-iterate approach. I’ve managed to come up with 6, but the list feels very incomplete, so maybe I’ll add to it if it generates any discussion.
1) Writing a list of axes
Making time to think hard about some radically different potential products, and the space they might occupy, can get us a long way.
What might some of the axes for finding work be?
Here are some alternatives, from different points along those axes:
Those are just some random guesses but the important thing is that there are a range of guesses - using a set of axes like this forces you to stretch your guesses.
2) Asking questions
Libby Miller and Richard Sewell created Catwigs to help product teams think about the direction of a project. Catwigs is a pack of 22 cards that asks questions with extreme answers - e.g. “Does it solve a problem?” is answered on a scale between “Antibiotics” through “Garage door remote” to “Cat wig”.
The aim of Catwigs is to engage in a discussion about the direction of a project in a new way, but I wonder if the Catwigs cards, or something like them, might have a secondary function - of better understanding some alternatives up front?
‘Is it enjoyable?’, ‘What is the size of the market?’ and ‘Does it rely on network effects?’ all feel like useful questions to ask of our hypothetical job-search product space - you could build both a product that is enjoyable, targets a small number of users and does not need a network effect, and a product that has lots of users, relies on network effects and is dull-but-useful.
[You can buy a set of Catwigs and Libby has written up the background to the project.]
3) Building 2 of everything
A couple of months ago, Ars Technica published an article entitled Google’s product strategy: Make two of everything.
In it they suggested that, rather than seeing Google’s constant retiring of products as failures, we should view them as research at scale - using two or more products to explore the potential product-space, the limits of newly arrived technology and platform strategy with real users. Think Android/ChromeOS, Glass/Android Wear, Wave/Google plus.
This makes sense to me - sometimes the only way to know how people will react to something over time, and how a team’s thinking will evolve, is to put it in front of real users, for real, over a number of months.
As it becomes simpler to build almost any digital service - and design becomes more about intelligent assembly of technology components (due to mature frameworks and code libraries, services like Twilio and Mailpile, easy hosting on Heroku, quick prototyping with Arduino, app style APIs for web pages, css and javascript frameworks, etc.) - this approach stops being the reserve of Google and is within reach of small teams. Things that were previously too niche or too expensive to build are suddenly practical.
So, why not build a standard job search and a flashcard tool and a job-market profiler, and …, run them for 6 months to understand more broadly what works and what doesn’t, then wrap that learning into the next generation of products? This is not to say that lab testing does not also help a team understand a problem space, but some things need testing in the wild and doing that testing is getting cheaper and easier.
4) Hackdays
The great lie about hackdays is that they lead to finished products, and that it is a failure of the organisers when ideas are allowed to wither.
The real benefit in hackdays is swarming lots of people around a theme and coming up with lots of potential products.
They also allow anyone - not just someone who self-identifies as a designer - to create something user-facing. This is especially important with emerging specialist domains - the example of a Netflix-style job profiler is more likely to come from someone with good knowledge of the current state of recommendation engine technology.
[Incidentally, Ben Fields' talk ‘Customers Who Bought Music Also Bought Beer’ on recommendation engines is brilliant].
5) Understand what technology has done to the product-space
Ongoing technical/digital change is seemingly a constant at the beginning of the 21st century.
The result is that people’s expectations, and what is technically possible, are a moving target, and the potential product space may be different from when you built something 6 months or 3 years ago. The potential product-space is forever shifting and changing shape.
This is pretty well understood in biology - that the fitness (likelihood of reproduction) of a given set of characteristics can change with time:
The technology landscape which ‘finding a job’ sits in has certainly changed. There is the emerging schema.org/JobPosting standard I’ve mentioned; there are mature, open-source libraries for the sort of things you need to do clever automated matching, like tf–idf and collaborative filtering; there’s also growing mobile usage, push alerts, easy-to-implement geolocation and open travel time data. All of those change what it is possible to build today.
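As a rough illustration of the sort of matching that is now cheap to build, here is a toy tf–idf ranking of some invented vacancies against the text of a CV. A real product would use a mature library (e.g. scikit-learn) rather than this hand-rolled version:

```python
import math
from collections import Counter

# A minimal, hand-rolled tf-idf sketch of the kind of matching a smarter
# job-search product could do: score vacancies against the text of a CV.
# The vacancies and CV below are invented for illustration.
vacancies = {
    "barista": "coffee barista customer service espresso",
    "developer": "python developer web flask software",
    "data analyst": "python data analysis statistics software",
}
cv = "python software developer"

def tfidf_score(query, doc, docs):
    """Sum of tf-idf weights for each query term found in `doc`."""
    words = doc.split()
    counts = Counter(words)
    n = len(docs)
    score = 0.0
    for term in query.split():
        tf = counts[term] / len(words)
        df = sum(1 for d in docs if term in d.split())
        idf = math.log((1 + n) / (1 + df)) + 1  # smoothed idf
        score += tf * idf
    return score

docs = list(vacancies.values())
ranked = sorted(vacancies, key=lambda name: tfidf_score(cv, vacancies[name], docs),
                reverse=True)
print(ranked[0])  # → developer
```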
I don’t think there is a missing capital-m-methodology here, it’s more about making sure there is time upfront to ask: ‘what has changed’ since someone last tried to solve something in this product space? What is possible that was not possible a couple of years ago?
Charlie Stross' Programming Perl in 2034, Tom Coates' Is the pace of change really such a shock? and Eric Schmidt’s How Google Works explain change, technology and time better than I ever can.
6) Automated design?
[I hesitated about putting this section in, but left it in because there is something interesting happening here, even if it’s not yet clear what it is]
The pages that I see on Amazon and Facebook are different from the ones you see, and that is as much down to a computer and a big stack of data about me as it is the intervention of a human designer. Automated/semi-automated design is already here. On-going, automatic, data driven experiments are forever probing the edges of product space.
Where it goes next is interesting. Do we end up with tools for generating a range of designs with a human just guiding the process, much as Biomorphs are selected for a given trait?
I’m totally out of my depth here, but I think there is something interesting happening, and it doesn’t feel like too much of a crazy statement to say that how things get designed, and the skills required to do it, are beginning to change.
The future of design: stay one step ahead of the algorithm by Dan Saffer and The automation of design by Jon Bruner are both worth a read on this subject.
What else?
As I said above, this list feels very incomplete. Ultimately though, I think it comes down to:
Habitat - Fosdem 2015 talk
This is the talk I gave at Fosdem 2015 about a proof-of-concept personal datastore called Habitat.
My name is Richard Pope, and I am going to talk to you about a proof of concept service I’m building called Habitat. Habitat is a self hosted, programmable geospatial datastore, or a kind of digital assistant, an external brain like Google Now or IFTTT. Or rather, it could be; for now Habitat is just a proof of concept to try and scratch some particular itches:
What we consider to be personal data is going to change to include things like our mental model of our neighbourhood (hint: it’s probably about 15 minutes’ travel from where you live, but it will be different for everybody), or the hash of your journey to work each day. This stuff is too important to lose control of for the sake of pure convenience.
As software becomes more woven into our lives, context and push will come to define user interfaces. Mice, keyboards and actively visiting websites are on the wane. That certainly seems to be the play Google Now, Facebook, EasilyDo, Amazon are all making anyway. Geospatial/location feels like the most important context in that world.
A web page on your mobile phone knows where it is on the planet, which way it is orientated, where it is in 3d space, how much battery it has, how close it is to your face and what the ambient light is. Who will be the broker for all the data you are generating?
Cucumber tests can be used for much, much more than testing software. They will be used to verify public datasets and as a user interface onto changing datasets. This photo is from a hack day organised by the UK Environment Agency, exploring the idea of using cucumber tests to check for environmental breaches in open environmental data.
OpenStreetMap’s value is not in the map, it’s in the data. The polygons represent an open, queryable land use map of the whole planet, and you can do some interesting things with that. This was an experiment I did to build an API for testing if I was indoors or outside by loading the outlines of London buildings into a MongoDB instance.
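The indoors/outdoors check boils down to a point-in-polygon test. The original experiment used MongoDB’s geospatial queries over real building outlines; this sketch substitutes a hand-rolled ray-casting test and an invented building footprint:

```python
# A sketch of the indoors/outdoors test. The real experiment queried MongoDB
# for building polygons; here the same check is shown as a toy ray-casting
# point-in-polygon test over an invented footprint.
def point_in_polygon(lon, lat, polygon):
    """Ray-casting test: does the point fall inside the polygon?"""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        if (yi > lat) != (yj > lat) and lon < (xj - xi) * (lat - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

# Invented footprint of a building, as (lon, lat) pairs.
building = [(-0.1150, 51.4620), (-0.1140, 51.4620),
            (-0.1140, 51.4627), (-0.1150, 51.4627)]

print(point_in_polygon(-0.1145, 51.4623, building))  # inside → True
print(point_in_polygon(-0.1200, 51.4623, building))  # outside → False
```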
Habitat is a self hosted, personal, programmable geospatial data store.
The tests look like this… WHEN I am within 100m of [XXX,XXX] THEN ping the URL example.com/tad-ah
Now, that is about the limit of what I’ve actually implemented, it is a proof of concept, which I’ll show you in a minute, but the idea is to be able to do other things.
So taking public data, for example about the weather, and building alerts based on location - so if I’m in a park (OpenStreetMap polygons define what areas are parks), and the weather (from the Met Office API) looks like rain, then send me an SMS.
Eventually it could be used for much more complex use-cases like powering a thing I made called the bicycle barometer (which currently runs on bespoke code).
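A step like the ‘within 100m’ test above could, for example, be implemented as a haversine distance check between the reported location and a target point. This is illustrative only - the coordinates and radius are invented, and the actual Habitat implementation may differ:

```python
import math

# A sketch of the check behind a test like 'WHEN I am within 100m of [lat,lon]':
# haversine distance between the reported location and the target point.
# All coordinates here are invented for illustration.
def within(lat1, lon1, lat2, lon2, metres):
    """True if the two points are within `metres` of each other."""
    r = 6371000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    a = (math.sin(math.radians(lat2 - lat1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(math.radians(lon2 - lon1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a)) <= metres

target = (51.4613, -0.1156)    # invented target point
reported = (51.4618, -0.1150)  # a reported location, roughly 70m away

# When this check passes, the cucumber test's action (pinging a URL) fires.
print(within(*reported, *target, 100))  # → True
```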
A quick word on what it is built with, then a demo. Habitat is built using Flask, a Python framework, and exposes an oAuth secured API using Flask Restful and Flask oAuthLib. It stores personal and public data in a MongoDB database, and runs cucumber tests against the data using Behave and Celery.
There are currently client apps for reporting your location and editing cucumber tests.
The server itself doesn’t have much of an interface; you can just log in and see what apps have been granted access.
This is an example client for writing cucumber tests. In this instance, I’m going to amend the latitude and longitude where this test is triggered.
This is an example (very basic) client for reporting a single location. Really this should be a native mobile app running in the background continually reporting my location.
Reporting the location causes the cucumber test to be run, and you can see it running successfully.
Fosdem 2015 - interesting links
Signing in & composite services
Usernames and passwords are on borrowed time as a design pattern. Examples of the damage it does are everywhere. The only thing keeping it credible is two factor authentication via SMS or a mobile app, and that can’t reasonably survive the switch to mobile as the dominant way of accessing the web (because it’s not really two factor if it’s on the same device, right?).
The future probably looks something like this from the FIDO Alliance, which sets out specifications for the use of hardware dongles for strong 2-factor authentication in association with a pin or password. (It also sets out specs for the slightly more problematic / scary use of fingerprint scanning, speech etc for authentication).
Hint: if you are a designer or developer, buy a Yubikey hardware dongle right now and start experimenting. Even if that particular bit of hardware doesn’t win, it will give you a feel for the sort of interaction patterns you will be dealing with in the near future.
Anyway, what I’m interested in, is what changes when signing-in to a service becomes an order-of-magnitude quicker and more secure?
My hunch is we quickly get used to signing in more often, to a greater number of services at once, and the oAuth style permissions pattern that is currently the preserve of large platforms like Google or iOS starts getting implemented by smaller, more discrete services.
The result: it becomes much easier to build composite-services made up of lots of loosely joined parts.
Democracy at the point of use?
I went to hear Vernon Bogdanor talk about the (first) 1974 General Election the other day. It’s part of a series about post-war elections that is well worth a watch.
In passing he used a phrase that stuck in my head:
Democracy is government by explanation
Apparently it comes from Prime Minister A.J. Balfour and/or Geoffrey Howe.
What I think it is saying is this: it is a characteristic of a democratic system that people have clear opportunities to be able to understand the workings of that system, and that one of the ways you build trust in government and health in the system is by actively exposing how it works and why things are how they are, be it planning permission, taxation or hospitals.
It reminded me of what Aneurin Bevan was supposed* to have said when setting up the NHS:
The sound of a bedpan dropped on the floor of Tredegar hospital should reverberate in the Palace of Westminster
That normally gets cited in terms of ministerial accountability, but there’s another way to look at it - that there should be a direct link between service delivery and its accountability mechanisms, and that link should aspire to be as understandable and effective for someone using a hospital in Blaenau Gwent or Westminster.
As public services start becoming digital, both those things - exposing the workings, and providing understandable feedback mechanisms that are useful to politicians and the public - become a lot easier. You can reveal the workings of a policy by clicking a button directly in the service and send feedback from within the service (that sort of integration is just harder with paper forms and disparate organisations).
Maybe rather than some attempt at online direct democracy, or a dozen new ways to do consultations or petitions, this will be the real democratic revolution of the digital age: transparency, accountability and democracy at the point of use.
The challenge for web designers in 2015 (or how to cheat at the future)
This is a second attempt at articulating this issue, and was inspired by a conversation with @psd who also pointed me at a TEDx talk entitled A time traveller’s primer. The first attempt is here.
It took 4 or 5 years of ajax / XMLHTTP being a thing for it to change almost everything about the sort of things that were built on the web. That was 4 or 5 years when lots of amazing things didn’t get built, not because it wasn’t possible, but because people just didn’t build them. There were probably lots of transient reasons - browser support, users might not understand it yet, the difficulty of justifying it to co-workers, lack of examples - but fundamentally, there was lots of potential that was not realised until later in time. When that potential was finally realised, everything that went before suddenly felt dated.
This is faster than the 700 year example of pasteurisation that Ryan North gives in his TEDx talk A Time Traveller’s Primer (and yes, ajax is significantly less significant than pasteurisation) but the principle is the same:
If you want to design for the future, look for unrealised but present potential, look for what people could be making right now but are not. Design what is lacking.
The web is going through an Ajax moment right now, and it is happening (or rather has the potential to) for 2 reasons:
Designing in a laptop web browser and testing with a mouse rather than fingers may come to look very out of date soon.
But with a few notable exceptions - eg the mobile versions of Wikipedia and Forecast - these are not patterns that are making their way on to the web.
So, here is the challenge for anyone designing and building for the web in 2015. There are a set of technologies and design patterns that are here, right now, not experiments in an innovation lab, but things you can use to design better tools for people. Today.
At some point in the next few years, some of these will become as widespread on the web as ajax and responsive design, so why wait 5 years? Design what is lacking.
Abundantly useful
It’s nice when things just become quietly, abundantly useful.
QR codes have gone from something people plastered over business cards and adverts in failed attempts to appear cutting-edge, to the default pattern for moving information along the following interaction:
a web transaction → time passes → queuing up for something → human interaction → scan → some sort of change of state.
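A toy sketch of that interaction as a state machine: a token is issued during the web transaction, carried in the QR code, and redeemed exactly once at the point of scanning. Everything here is invented for illustration:

```python
import secrets

# A toy version of the QR pattern above: a token is created during the web
# transaction, encoded into a QR code, and redeemed exactly once when scanned.
# The in-memory dict stands in for a real datastore.
tickets = {}

def issue_ticket():
    """The web transaction: create a ticket and return its token."""
    token = secrets.token_hex(8)
    tickets[token] = "issued"
    return token

def scan(token):
    """The scan step: change the ticket's state, but only once."""
    if tickets.get(token) == "issued":
        tickets[token] = "redeemed"
        return True
    return False  # unknown token, or already redeemed

t = issue_ticket()
print(scan(t))  # first scan changes state → True
print(scan(t))  # second scan is rejected → False
```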
Quickly thinking back over the past year I’ve:
Time to start designing and demoing mobile first?
I’ve always had a bit of a problem with responsive design. It’s too easy to assume the most important context is the size of the screen, too easy to fall into the habit of building the mobile version of a service by just changing the presentation layer - shuffling the same content about the page in a different order and hiding a couple of things *.
To make things worse, the term ‘mobile-first’ is often understood by non-developers as “we designed this product for a mobile first!” rather than “we use the mobile CSS as the base and build up from there”.
Websites that work on a mobile are not the same as websites designed for a mobile context. Resizing a browser to make sure it looks OK is probably not good enough any more:
I think it’s time for teams to start coding directly on mobile (or at least an emulator) and for product owners to start demanding demos on mobile in the first instance.**
* I’m not suggesting everyone does this, just that it is an easy trap to fall in to. ** I’d be interested to know if anyone has seen any good development setups that replace coding in a browser on the development machine with coding on mobile.
co-op v2?
There’s a quote in this O’Reilly Radar trailer for a talk about the bitcoin blockchain that has slightly melted my brain:
I think 10 years from now we’re going to see that these types of semi-decentralized companies are going to be replaced by fully decentralized companies, where the company itself just runs in an automated way on some kind of cryptocurrency.
Imagine a co-operative or mutual, setup in a few lines of code, able to programmatically distribute shares in itself at point of sale, the purchase being the proof of work.
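Purely as a toy illustration of the idea - not how a blockchain implementation would actually work - here is an in-memory sketch of a co-op that issues shares at the point of sale:

```python
# A toy sketch of the idea above: every purchase is recorded and
# simultaneously issues the buyer shares in the co-operative, so ownership
# is distributed programmatically at the point of sale. A real version would
# run on a blockchain; this in-memory ledger is purely illustrative.
class AutomatedCoop:
    def __init__(self):
        self.shares = {}

    def purchase(self, buyer, amount):
        """Record a sale and issue one share per unit spent."""
        self.shares[buyer] = self.shares.get(buyer, 0) + amount

    def ownership(self, member):
        """Fraction of the co-op owned by `member`."""
        total = sum(self.shares.values())
        return self.shares.get(member, 0) / total if total else 0.0

coop = AutomatedCoop()
coop.purchase("alice", 3)
coop.purchase("bob", 1)
print(coop.ownership("alice"))  # alice holds 3 of 4 shares → 0.75
```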