This is a write up of an idea that came out of the Environment Agency hackday.
How do we know software is working?
We can run the software and look out for bugs. For an open-source project, we can inspect the code.
We can also write and run automated tests against the software, so if it’s broken we know. Something like this:
GIVEN a user is logged in
WHEN they click on the 'my account' link
THEN they can view their billing history
That example is written in a format called Gherkin, the syntax used by the Cucumber tool. It is designed to be readable by, and written by, a non-technical person, but it can also be run automatically by a machine.
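To make the "runnable by a machine" part concrete, here is a toy sketch of how tools like Cucumber work under the hood: plain-English steps are matched against registered step definitions and executed. None of the names below are the real Cucumber or behave API; this is a minimal illustration only.

```python
import re

STEPS = {}   # regex pattern -> step function
world = {}   # shared state passed between steps

def step(pattern):
    """Register a function as the handler for a step pattern."""
    def register(fn):
        STEPS[pattern] = fn
        return fn
    return register

@step(r"a user is logged in")
def logged_in():
    world["logged_in"] = True

@step(r"they click on the '(.+)' link")
def click(link):
    world["page"] = link if world.get("logged_in") else None

@step(r"they can view their billing history")
def check_billing():
    assert world["page"] == "my account"

def run(scenario):
    """Match each scenario line to a step definition and run it."""
    for line in scenario.strip().splitlines():
        text = line.strip().split(" ", 1)[1]  # drop GIVEN/WHEN/THEN keyword
        for pattern, fn in STEPS.items():
            match = re.fullmatch(pattern, text)
            if match:
                fn(*match.groups())
                break

run("""
GIVEN a user is logged in
WHEN they click on the 'my account' link
THEN they can view their billing history
""")
print("scenario passed")
```

The point is that the same text a non-technical person reads is, via pattern matching, the input to automated checks.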
How about a scientific experiment? How is that verified?
Since the 17th century, it’s been via the publication of the results, along with a description of how to replicate the experiment, in a peer reviewed journal.
Increasingly that needs to include any source code used to run the experiment.
Finally, what about regulatory authorities? How does society verify that regulations are being met? How do we check for breaches?
The relevant authority probably publishes reports listing any breaches of regulations. It may also publish the raw open data to show that in a transparent way.
The Environment Agency, for example, publishes data on pollution incidents, groundwater abstraction and water quality.
So we could look through that open data, and manually check it against the various regulations, legislation and policies.
But what if we applied software testing principles to regulatory data and wrote automated tests to show up any breaches of regulations? Tests that are understandable by humans and can be run by machines.
They might look like this:
GIVEN there is a release of Sulphur Dioxide from a power station
WHEN the concentration of Sulphur Dioxide is greater than 40 ppm
THEN there has been a breach of regulation E209
Or this:
GIVEN a company with license R409
WHEN it abstracts > 100L in a 30 day period
THEN there has been a breach of license R409
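A test like the licence example above could be sketched as a short script run against published abstraction data. Everything here beyond the 100L-in-30-days rule is an assumption: the sample records and field layout are invented for illustration, not a real Environment Agency schema.

```python
from datetime import date, timedelta

# Hypothetical abstraction records for licence R409: (date, litres).
records = [
    (date(2014, 6, 1), 40),
    (date(2014, 6, 10), 35),
    (date(2014, 6, 25), 30),  # pushes one 30-day window's total over 100L
    (date(2014, 8, 1), 20),
]

def breaches(records, limit=100, window_days=30):
    """Return True if any 30-day window's total abstraction exceeds the limit."""
    for start, _ in records:
        end = start + timedelta(days=window_days)
        total = sum(litres for d, litres in records if start <= d < end)
        if total > limit:
            return True
    return False

print("breach of licence R409" if breaches(records) else "no breach")
```

Run against the sample data above, the three June records total 105L inside one 30-day window, so this reports a breach. A regulator could publish the rule and the data; anyone could re-run the check.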
Regulatory bodies could start publishing tests like these alongside the open data they release to demonstrate how they are regulating by showing their workings.
Other interested parties - campaign groups, charities, parliament - could review tests against legislation and run them against the data to independently verify that the regulatory body is doing its job.
Programming Perl in 2034 by Charlie Stross is just brilliant - he covers what causes things to change, what causes them to stay the same, and the reality-distorting change that awaits us in the coming decades (and the place of programming languages in 100 years' time). The whole thing is quotable, but this about sums it up:
“we can reasonably assume that any object more durable than a bar of soap and with a retail value of over $5 probably has as much computing power as your laptop today”
On Wednesday I went mapping Brixton Market with OpenStreetMap London. I was aware of Field Papers from Stamen, but had never actually used them. They are brilliantly simple to use. They must have applications beyond OSM. Maybe to help farmers fill in their Common Agricultural Policy applications?
Also on a geo-tip, I’ve been trying to remember all the things I’ve forgotten about coordinate and projection systems from university.
The whole thing is simplish in concept, complex in implementation. For example, OSGB (the British National Grid) covers Great Britain only, but may or may not also include a small rock 270 miles off Ireland.
I also came across the Great Retriangulation of Great Britain, 1935–1962. By the time the survey was complete, improvements in measuring technology meant it was probably out by approximately 20 metres.
Sarah Prag has written a great shopping list of things a ‘GDS for local government’ might need, and points out that some would be controversial.
Part of the reason some might be controversial is because, I think, there are some seemingly contradictory problems that need addressing.
Rather than offering my own view on what a ‘local GDS’ should be, I thought I’d have a go at stating what the hard things are, in the hope it makes evaluating the shopping list a bit easier.
The hard things:
Boundaries don’t reflect the real world
If you want to know where the nearest open library or swimming pool is right now, you probably need information from more than one authority.
If another authority starts on the other side of the road - you still care about any planning applications.
Geography is core
The information and services that local government provides are often inherently geographical in a way that central government is not.
Publishing systems and services need that baked into them in a way I don’t think is widely understood, and for which there are few parallels.
The understanding of geography is often hyper-local, beyond the understanding of what a centrally managed gazetteer can convincingly do. It is in the heads of residents and local officials.
Democracy and power matter
Local governments are independently elected to provide services, in a way that separate government departments are not.
As citizens we have the power to remove local government if it is not providing adequate services. The boundaries matter in this context.
Separately, from time to time local government disagrees with central government, very occasionally to the point of serious dispute. In rare events like these, who controls the publishing button might matter more than we think.
Information is distributed
The person who knows the times that the park shuts is probably the person who locks it.
Understanding of the structure of government is patchy and inconsistent
People don’t understand the full detail of the structure of government, and should not have to.
What’s local, national, devolved, district, county? And that understanding differs between people too.
The same problem is being solved many times over
Or at least a set of very similar problems is being solved by each local authority. And that is just an obvious frustration and inefficiency.
Answers?
I don’t know exactly what the answer is, but it is probably useful to start thinking of some parallels beyond just GDS to try and get there.
Here’s a few that seem to jump out at me:
- Wordpress.com and wordpress.org - host it yourself or pay to host centrally (on your own domain or theirs). Code is shared and there is a healthy market in plugins.
- OpenStreetMap and Wikipedia - an open shared commons of structured information, editable by many.
- FixMyStreet.com and Open 311 allow a distributed model of reporting civic infrastructure issues via multiple websites and apps.
We went to see Goldie’s Timeless end-to-end at the Festival Hall as part of the Meltdown Festival on Saturday. Live drums, live vocals. Amazing.
They could have got away with so little, but instead it was obviously brilliantly planned, executed and performed. Including an obviously health-and-safety-approved smashing of glass into a miked-up steel rubbish bin (there were goggles and extra safety glass; someone put the time in filling in forms). Details.
Someone has put a video up here:
Diagram showing how to layout a team product space
Download larger version to print
Checklist
- Team
- Team room *
- Sprint planning room (conjoined)*
- Fast Internet connection
- User needs
- Principles
- Drawing of service
- Sprint wall x 2 (A & B)
- Story wall
- Screens x 2+ (for demos / information radiators)
- Email group
- Git repository
- IRC/chat room
- Wiki/note-sharing
* Both with whiteboard walls
Manuals (XKCD) and manuals (VW Beetle).
Time to start understanding more than just the superficial about cryptography: http://en.wikipedia.org/wiki/Homomorphic_encryption