Springwise: “Waiting is an activity that most people receiving healthcare have acutely experienced, whether it’s being put on a list for treatment or simply passing time before a GP visit. Cutting waiting time is something that can drastically improve patients’ experience, and in the past we’ve seen ideas such as HealthSpot use telemedicine to deliver healthcare advice remotely. For accidents and emergencies that require more hands-on attention, however, a new platform called ER Wait Watcher enables users to determine which nearby hospital is likely to see them first.
Developed by journalism nonprofit ProPublica, the tool simply asks those who have had an accident or emergency to enter their zip code or street name. Using a number of factors — including average reported wait times for each hospital, how far away each hospital is, and live Google traffic reports — it then lists which institution is likely to see them first. Users can see how long they can expect to wait on average before they will be seen, sent home, receive pain medication for a broken bone or be transferred to a room for more serious injuries. The site works on the premise that the nearest hospital isn’t necessarily the one that will get patients the treatment they need most speedily.
The site does advise patients to call up their hospital for a more accurate estimate, and users currently need to navigate to the website to use it. Are there ways to make this kind of platform more accurate or user-friendly? Website: www.propublica.org”
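The ranking the excerpt describes boils down to a simple calculation: estimated travel time plus average reported wait, sorted ascending. Below is a minimal sketch of that idea; the hospital names, wait times and travel times are made up for illustration (the real site uses reported ER waits and live Google traffic data).

```python
# Minimal sketch of the ranking idea behind ER Wait Watcher: order hospitals by
# estimated total time to treatment = travel time + average reported ER wait.
# Hospital names, wait times and travel times are illustrative only.

hospitals = [
    {"name": "General Hospital",    "avg_wait_min": 45, "travel_min": 25},
    {"name": "St. Example Medical", "avg_wait_min": 20, "travel_min": 40},
    {"name": "Riverside ER",        "avg_wait_min": 60, "travel_min": 10},
]

def total_time(hospital):
    """Estimated minutes until the patient is first seen."""
    return hospital["travel_min"] + hospital["avg_wait_min"]

for h in sorted(hospitals, key=total_time):
    print(f'{h["name"]}: ~{total_time(h)} min '
          f'(travel {h["travel_min"]}, wait {h["avg_wait_min"]})')
```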
Big Data for Law
legislation.gov.uk: “The National Archives has received ‘big data’ funding from the Arts and Humanities Research Council (AHRC) to deliver the ‘Big Data for Law’ project. Just over £550,000 will enable the project to transform how we understand and use current legislation, delivering a new service – legislation.gov.uk Research – by March 2015. There are an estimated 50 million words in the statute book, with 100,000 words added or changed every month. Search engines and services like legislation.gov.uk have transformed access to legislation. Law is accessed by a much wider group of people, the majority of whom are typically not legally trained or qualified. All users of legislation are confronted by the volume of legislation, its piecemeal structure, frequent amendments, and the interaction of the statute book with common law and European law. Not surprisingly, many find the law difficult to understand and comply with. There has never been a more relevant time for research into the architecture and content of law, the language used in legislation and how, through interpretation by the courts, it is given effect – research that will underpin the drive to deliver good, clear and effective law.

Researchers typically lack the raw data, the tools, and the methods to undertake research across the whole statute book. Meanwhile, the combination of low-cost cloud computing, open source software and new methods of data analysis – the enablers of the big data revolution – is transforming research in other fields. Big data research is perfectly possible with legislation if only the basic ingredients – the data, the tools and some tried and trusted methods – were as readily available as the computing power and the storage. The vision for this project is to address that gap by providing a new Legislation Data Research Infrastructure at research.legislation.gov.uk. Specifically tailored to researchers’ needs, it will consist of downloadable data, online tools for end-users, and open source tools for researchers to download, adapt and use….
There are three main areas for research:
- Understanding researchers’ needs: to ensure the service is based on evidenced need, capabilities and limitations, putting big data technologies in the hands of non-technical researchers for the first time.
- Deriving new open data from closed data: no one has all the data that researchers might find useful. For example, the potentially personally identifiable data about users and usage of legislation.gov.uk cannot be made available as open data, but it is well suited to processing with existing big data tools, e.g. to identify clusters in legislation or to build “recommendations” datasets of the form “people who read Act A or B also looked at Act Y or Z”. The project will look at whether it is possible to create new open data sets from this type of closed data. An N-Grams dataset and appropriate user interface for legislation or related case law, for example, would contain sequences of words and phrases together with statistics about their frequency of occurrence per document. N-Grams are useful for research in linguistics or history, and could be used to provide a predictive-text feature in a drafting tool for legislation (a minimal sketch of deriving such a dataset appears after this list).
- Pattern language for legislation: We need new ways of codifying and modelling the architecture of the statute book to make it easier to research its entirety using big data technologies. The project will seek to learn from other disciplines, applying the concept of a ‘pattern language’ to legislation. Pattern languages have revolutionised software engineering over the last twenty years and have the potential to do the same for our understanding of the statute book. A pattern language is simply a structured method of describing good design practices, providing a common vocabulary between users and specialists, structured around problems or issues, with a solution. Patterns are not created or invented – they are identified as ‘good design’ based on evidence about how useful and effective they are. Applied to legislation, this might lead to a common vocabulary between the users of legislation and legislative drafters, to identifying useful and effective drafting practices and solutions that deliver good law. This could enable a radically different approach to structuring teaching materials or guidance for legislators.”
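As a rough illustration of the N-grams idea in the second bullet, the sketch below derives trigram frequencies from a single sample sentence; a real pipeline would run over the whole statute book and record frequencies per document. The sample text and window size are illustrative only.

```python
# Rough sketch of deriving an N-grams dataset from legislation text.
# The sample sentence is illustrative; a real pipeline would process the
# whole statute book and record frequencies per document.
from collections import Counter
import re

def ngrams(text, n=3):
    """Yield n-token windows over lower-cased word tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return zip(*(tokens[i:] for i in range(n)))

sample = ("The Secretary of State may by regulations make provision "
          "for the purposes of this Act.")

counts = Counter(" ".join(gram) for gram in ngrams(sample, n=3))
for phrase, freq in counts.most_common(5):
    print(freq, phrase)
```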
Open Data is an Essential Ingredient for Better Development Research
Aiddata blogpost: “UNICEF is making data a priority by re-launching the “UNICEF Child Info” department as “UNICEF Data” and actively promoting the use and collection of data to guide development. While their data is not subnational, it is comprehensive and expansive in its indicators. UNICEF’s mission calls for the use of the power of statistics and data to tell a story about the quality of life for children around the world. The connection between improving data and improving lives is a critical one that, while sometimes overshadowed by technical discussions on providing better data, is at the core of open data and the data transparency initiatives. By using evidence to anchor their decision-making, the UNICEF Data initiative hopes to craft and inspire better ways of caring for and empowering children across the globe.”
Habermas and the Garants: Narrowing the gap between policy and practice in French organisation–citizen engagement
New Programming Language Removes Human Error from Privacy Equation
MIT Technology Review: “Anytime you hear about Facebook inadvertently making your location public, or revealing who is stalking your profile, it’s likely because a programmer added code that introduced a bug.
But what if there was a system in place that could substantially reduce such privacy breaches and effectively remove human error from the equation?
One MIT PhD thinks she has the answer, and its name is Jeeves.
This past month, Jean Yang released an open-source Python version of “Jeeves,” a programming language with built-in privacy features that frees programmers from having to provide ad-hoc, on-the-fly maintenance of privacy settings.
Given that somewhere between 10 and 20 percent of all code is related to privacy policy, Yang thinks that Jeeves will be an attractive option for social app developers who are looking to be more efficient in their use of programmer resources – as well as those who are hoping to assuage users’ privacy concerns about whether and how their data is used.
For more information about Jeeves visit the project site.
For more information on Yang visit her CSAIL page.”
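The article’s point is that the privacy policy travels with the data rather than being re-checked at every output site. The snippet below is only a conceptual sketch of that idea in plain Python, not the Jeeves API; the user names and policy are hypothetical.

```python
# Conceptual sketch only (not the Jeeves API): attach a policy to a sensitive
# value once, and let every place that renders it resolve the policy
# automatically instead of re-checking privacy settings by hand.

class Sensitive:
    def __init__(self, secret, public, policy):
        self.secret, self.public, self.policy = secret, public, policy

    def reveal_to(self, viewer):
        """Return the secret value only if the policy admits this viewer."""
        return self.secret if self.policy(viewer) else self.public

# Hypothetical example: a user's location is visible only to her friends.
friends_of_alice = {"bob"}
alice_location = Sensitive(
    secret="Cambridge, MA",
    public="location hidden",
    policy=lambda viewer: viewer in friends_of_alice,
)

for viewer in ("bob", "mallory"):
    print(viewer, "sees:", alice_location.reveal_to(viewer))
```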
The FDA is making adverse event and recall data available to app developers
Nick Paul Taylor in FierceBioTechIT: “When Beth Noveck arrived at the White House, she had a clear, albeit unusual, mission: to apply the transparency and collaboration of the open-source movement to government. Noveck has now left the White House, but the ideas she brought are still percolating through the governmental machine. In 2014, the thinking is set to lead to a new, more open FDA.
Regulatory Focus reports the agency has quietly created a website and initiative called openFDA. At this stage the project is still in the prelaunch phase, but the FDA has already given a teaser of its plans. When the program opens for beta access later this year, users will gain access to structured data sets as application programming interfaces (APIs) and raw downloads. The ultimate scope of the project is unclear, but for now the FDA is working on making three data sets available.
The three data sets will give users unprecedented access to FDA archives of adverse events, product recalls and label information. Together the three data sets represent a substantial slice of what many people want to know about the FDA. The adverse event database contains details of millions of side effects and medication errors, while the recall information the FDA is preparing to share gathers all the public notices of products withdrawn from the market.
Making the data available as an API – a way for machines to talk to each other – means third parties can use the information as the basis for apps. The experience of the National Aeronautics and Space Administration (NASA) gives some indication of what might happen once the FDA opens up its data. One year after making its data available as an API in 2011, NASA began holding an annual Space Apps Challenge. At the event, people create apps and APIs.
Some challenges have no obvious use for NASA, such as a project to make a 3D printed model of the dark side of the moon from NASA data. Others could clearly be the starting point for technology used by the space agency. In one challenge, teams were tasked with creating a miniaturized modular research satellite for use on Mars. NASA is working to the same White House digital playbook as the FDA. How the FDA interprets the broad goals in the drug regulation arena remains to be seen.
– read Regulatory Focus’ article
– here’s the openFDA page
– check out NASA’s challenges”
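Once the data sets are exposed as APIs, third-party code for querying them can be very small. The sketch below shows what a consumer of a hypothetical adverse-event endpoint might look like; the URL, query parameters and response fields are placeholders, since openFDA was still in its prelaunch phase when this was written.

```python
# Hypothetical sketch of a third-party app querying an adverse-event API.
# The endpoint URL, parameters and response fields are placeholders; the real
# openFDA interface had not been published when this article appeared.
import json
import urllib.parse
import urllib.request

BASE_URL = "https://api.example.gov/drug/adverse-events"  # placeholder endpoint

def recent_events(drug_name, limit=5):
    """Fetch up to `limit` adverse-event reports mentioning `drug_name`."""
    query = urllib.parse.urlencode({"search": drug_name, "limit": limit})
    with urllib.request.urlopen(f"{BASE_URL}?{query}") as resp:
        return json.load(resp)

# Usage (only works against a live endpoint):
# for event in recent_events("aspirin")["results"]:
#     print(event["received_date"], event["reaction"])
```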
What makes a good API?
Joshua Tauberer’s Blog: “There comes a time in every dataset’s life when it wants to become an API. That might be because of consumer demand or an executive order. How are you going to make a good one?…
Let’s take the common case where you have a relatively static, large dataset that you want to provide read-only access to. Here are 19 common attributes of good APIs for this situation. …
Granular Access. If the user wanted the whole thing they’d download it in bulk, so an API must be good at providing access to the most granular level practical for data users (h/t Ben Balter for the wording on that). When the data comes from a table, this usually means the ability to read a small slice of it using filters, sorting, and paging (limit/offset), the ability to get a single row by identifying it with a persistent, unique identifier (usually a numeric ID), and the ability to select just which fields should be included in the result output (good for optimizing bandwidth in mobile apps, h/t Eric Mill). (But see “intents” below.)
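As a sketch of the granular-access pattern (filters, limit/offset paging, field selection, and lookup by persistent ID), here is a minimal read-only endpoint using Flask and an in-memory table; the dataset, field names and routes are illustrative only.

```python
# Minimal sketch of granular access with Flask: filtering, limit/offset paging,
# field selection, and lookup by a persistent unique ID. Dataset, field names
# and routes are illustrative only.
from flask import Flask, jsonify, request

app = Flask(__name__)

RECORDS = [
    {"id": 1, "state": "NY", "year": 2013, "value": 42},
    {"id": 2, "state": "CA", "year": 2013, "value": 17},
    {"id": 3, "state": "NY", "year": 2014, "value": 58},
]

@app.route("/records")
def list_records():
    rows = RECORDS
    state = request.args.get("state")            # filter:    ?state=NY
    if state:
        rows = [r for r in rows if r["state"] == state]
    offset = int(request.args.get("offset", 0))  # paging:    ?limit=2&offset=2
    limit = int(request.args.get("limit", 10))
    rows = rows[offset:offset + limit]
    fields = request.args.get("fields")          # selection: ?fields=id,value
    if fields:
        keep = fields.split(",")
        rows = [{k: r[k] for k in keep if k in r} for r in rows]
    return jsonify({"results": rows})

@app.route("/records/<int:record_id>")
def get_record(record_id):
    # a single row, addressed by its persistent, unique identifier
    row = next((r for r in RECORDS if r["id"] == record_id), None)
    return jsonify(row) if row else (jsonify({"error": "not found"}), 404)
```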
Deep Filtering. An API should be good at needle-in-haystack problems. Full text search is hard to do, so an API that can do it relieves a big burden for developers — if your API has any big text fields. Filters that can span relations or cross tables (i.e. joins) can be very helpful as well. But don’t go overboard. (Again, see “intents” below.)
Typed Values. Response data should be typed. That means that whether a field’s value is an integer, text, list, floating-point number, dictionary, null, or date should be encoded as a part of the value itself. JSON and XML with XSD are good at this. CSV and plain XML, on the other hand, are totally untyped. Types must be strictly enforced. Columns must choose a data type and stick with it, no exceptions. When encoding other sorts of data as text, the values must all absolutely be valid according to the most narrow regular expression that you can make. Provide that regular expression to the API users in documentation.
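A small illustration of typed values: the record below encodes integers, booleans, floats and nulls as JSON types rather than strings, and validates a text field against a narrow, documented regular expression. The field names and pattern are invented for the example.

```python
# Illustration of typed response values: numbers, booleans and nulls encoded as
# JSON types rather than strings, plus a text field validated against a narrow,
# documented regular expression. Field names and pattern are invented.
import json
import re

BILL_NUMBER_RE = re.compile(r"^[A-Z]{1,2}-\d{1,5}$")  # publish this pattern in the docs

record = {
    "id": 1234,            # integer, not "1234"
    "enacted": True,       # boolean, not "yes"
    "score": 0.87,         # floating-point number
    "repealed_on": None,   # null, not ""
    "bill_number": "HR-2013",
}

assert BILL_NUMBER_RE.fullmatch(record["bill_number"]), "value violates documented pattern"
print(json.dumps(record, indent=2))
```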
Normalize Tables, Then Denormalize. Normalization is the process of removing redundancy from tables by making multiple tables. You should do that. Have lots of primary keys that link related tables together. But… then… denormalize. The bottleneck of most APIs isn’t disk space but speed. Queries over denormalized tables are much faster than writing queries with JOINs over multiple tables. It’s faster to get data if it’s all in one response than if the user has to issue multiple API calls (across multiple tables) to get it. You still have to normalize first, though. Denormalized data is hard to understand and hard to maintain.
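A compact sketch of the normalize-then-denormalize step using Python’s standard sqlite3 module; the table and column names are illustrative. The normalized tables remain the source of truth, and the flat table is rebuilt from them for fast, JOIN-free API reads.

```python
# Sketch of "normalize, then denormalize" with the standard-library sqlite3
# module. Table and column names are illustrative. The normalized tables stay
# the source of truth; the flat table is rebuilt from them for JOIN-free reads.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    -- normalized source of truth
    CREATE TABLE agencies (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE recalls  (id INTEGER PRIMARY KEY,
                           agency_id INTEGER REFERENCES agencies(id),
                           product TEXT);
    INSERT INTO agencies VALUES (1, 'FDA');
    INSERT INTO recalls  VALUES (10, 1, 'Widget syrup');

    -- denormalized copy served by the API: one row per recall, no JOIN at read time
    CREATE TABLE recalls_flat AS
        SELECT r.id AS recall_id, r.product, a.name AS agency_name
        FROM recalls r JOIN agencies a ON a.id = r.agency_id;
""")
print(db.execute("SELECT * FROM recalls_flat").fetchall())
```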
Be RESTful, And More. “REST” is a set of practices. There are whole books on this. Here it is in short. Every object named in the data (often that’s the rows of the table) gets its own URL. Hierarchical relationships in the data are turned into nice URL paths with slashes. Put the URLs of related resources in output too (HATEOAS, h/t Ed Summers). Use HTTP GET and normal query string processing (a=x&b=y) for filtering, sorting, and paging. The idea of REST is that these are patterns already familiar to developers, and reusing existing patterns — rather than making up entirely new ones — makes the API more understandable and reusable. Also, use HTTPS for everything (h/t Eric Mill), and provide the API’s status as an API itself possibly at the root URL of the API’s URL space (h/t Eric Mill again).
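The sketch below illustrates those REST conventions with Flask: each resource gets its own URL, hierarchy becomes a URL path, and responses link to related resources (HATEOAS). Resource names are illustrative.

```python
# Sketch of the REST conventions above with Flask: every resource has its own
# URL, hierarchy becomes a URL path, and responses link to related resources
# (HATEOAS). Resource names are illustrative.
from flask import Flask, jsonify, url_for

app = Flask(__name__)

@app.route("/agencies/<int:agency_id>")
def agency(agency_id):
    return jsonify({
        "id": agency_id,
        # link to a related resource so clients can follow it without
        # constructing URLs themselves
        "recalls_url": url_for("agency_recalls", agency_id=agency_id, _external=True),
    })

@app.route("/agencies/<int:agency_id>/recalls")
def agency_recalls(agency_id):
    # ordinary query-string filtering, sorting and paging would apply here,
    # e.g. ?limit=10&offset=0
    return jsonify({"agency_id": agency_id, "recalls": []})
```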
….
Never Require Registration. Don’t have authentication on your API to keep people out! In fact, having a requirement of registration may contradict other guidelines (such as the 8 Principles of Open Government Data). If you do use an API key, make it optional. A non-authenticated tier lets developers quickly test the waters, and that is really important for getting developers in the door, and, again, it may be important for policy reasons as well. You can have a carrot to incentivize voluntary authentication: raise the rate limit for authenticated queries, for instance. (h/t Ben Balter)
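The rate-limit “carrot” can be as simple as the sketch below: anonymous callers get a modest allowance, callers who volunteer an API key get a larger one, and nothing is locked behind registration. The key store and limits are invented for the example.

```python
# Sketch of an optional API key: anonymous callers get a modest allowance,
# registered callers a larger one, and nothing requires registration.
# The key store and limits are invented for the example.
REGISTERED_KEYS = {"demo-key-123"}

ANON_LIMIT_PER_HOUR = 100
AUTH_LIMIT_PER_HOUR = 10_000

def rate_limit_for(api_key=None):
    """Return the hourly request allowance for a caller."""
    if api_key and api_key in REGISTERED_KEYS:
        return AUTH_LIMIT_PER_HOUR
    return ANON_LIMIT_PER_HOUR

print(rate_limit_for())                # anonymous caller -> 100
print(rate_limit_for("demo-key-123"))  # registered caller -> 10000
```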
Interactive Documentation. An API explorer is a web page that users can visit to learn how to build API queries and see results for test queries in real time. It’s an interactive browser tool, like interactive documentation. Relatedly, an “explain mode” in queries, which instead of returning results says what the query was and how it would be processed, can help developers understand how to use the API (h/t Eric Mill).
Developer Community. Life is hard. Coding is hard. The subject matter your data is about is probably very complex. Don’t make your API users wade into your API alone. Bring the users together, bring them to you, and sometimes go to them. Let them ask questions and report issues in a public place (such as github). You may find that users will answer other users’ questions. Wouldn’t that be great? Have a mailing list for longer questions and discussion about the future of the API. Gather case studies of how people are using the API and show them off to the other users. It’s not a requirement that the API owner participates heavily in the developer community — just having a hub is very helpful — but of course the more participation the better.
Create Virtuous Cycles. Create an environment around the API that makes the data and API stronger. For instance, other individuals within your organization who need the data should go through the public API to the greatest extent possible. Those users are experts and will help you make a better API, once they realize they benefit from it too. Create a feedback loop around the data, meaning find a way for API users to submit reports of data errors and have a process to carry out data updates, if applicable and possible. Do this in the public as much as possible so that others see they can also join the virtuous cycle.”
We need a new Bismarck to tame the machines
If, in the words of Google chairman Eric Schmidt, there is a “race between people and computers” that even he suspects people may not win, democrats everywhere should be worried. In the same vein, Lawrence Summers, former Treasury secretary, recently noted that new technology could be liberating but that the government needed to soften its negative effects and make sure the benefits were distributed fairly. The problem, he went on, was that “we don’t yet have the Gladstone, the Teddy Roosevelt or the Bismarck of the technology era”.
These Victorian giants have much to teach us. They were at the helm when their societies were transformed by the telegraph, the electric light, the telephone and the combustion engine. Each tried to soften the blow of change, and to equalise the benefits of prosperity for working people. With William Gladstone it was universal primary education and the vote for Britain’s working men. With Otto von Bismarck it was legislation that insured German workers against ill-health and old age. For Roosevelt it was the entire progressive agenda, from antitrust legislation and regulation of freight rates to the conservation of America’s public lands….
The Victorians created the modern state to tame the market in the name of democracy but they wanted a nightwatchman state, not a Leviathan. Thanks to the new digital technologies, the state they helped create now has powers of surveillance that threaten our privacy and freedom. What new technology makes possible, states will do. Keeping technology in the service of democracy will not be easy. Asking judges to guard the guards only bloats the state apparatus still further. Allowing dissident insiders to get away with leaking the state’s secrets will only result in more secretive, paranoid and controlling government.
The Victorians would have said there is a solution – representative government itself – but it requires citizens to trust their representatives to hold the government in check. The Victorians created modern, mass representative democracy so that collective public choice could control change for everyone’s benefit. They believed that representatives, if given the authority and the necessary information, could control the power that technology confers on the modern state.
This is still a viable ideal but we have plenty of rebuilding to do before our democratic institutions are ready for the task. Congress and parliament need to regain trust and capability; and, if they do, we can start recovering the faith of the Victorians we so sorely need: the belief that democracy can master the technologies that are transforming our lives.”
Tim Berners-Lee: we need to re-decentralise the web
Wired: “Twenty-five years on from the web’s inception, its creator has urged the public to re-engage with its original design: a decentralised internet that, at its very core, remains open to all.
Speaking with Wired editor David Rowan at an event launching the magazine’s March issue, Tim Berners-Lee said that although part of this is about keeping an eye on for-profit internet monopolies such as search engines and social networks, the greatest danger is the emergence of a balkanised web.
“I want a web that’s open, works internationally, works as well as possible and is not nation-based,” Berners-Lee told the audience… “What I don’t want is a web where the Brazilian government has every social network’s data stored on servers on Brazilian soil. That would make it so difficult to set one up.”
It’s the role of governments, startups and journalists to keep that conversation at the fore, he added, because the pace of change is not slowing — it’s going faster than ever before. For his part Berners-Lee drives the issue through his work at the Open Data Institute, World Wide Web Consortium and World Wide Web Foundation, but also as an MIT professor whose students are “building new architectures for the web where it’s decentralised”. On the issue of monopolies, Berners-Lee did say it’s concerning to be “reliant on big companies, and one big server”, something that stalls innovation, but that competition has historically resolved these issues and will continue to do so.
The kind of balkanised web he spoke about, as typified by Brazil’s home-soil servers argument or Iran’s emerging intranet, is partially being driven by revelations of NSA and GCHQ mass surveillance. The distrust that it has brewed, from a political level right down to the threat of self-censorship among ordinary citizens, threatens an open web and is, said Berners-Lee, a greater threat than censorship. Knowing the NSA may be breaking commercial encryption services could result in the emergence of more networks like China’s Great Firewall, to “protect” citizens. This is why we need a bit of anti-establishment push back, alluded to by Berners-Lee.”
Unbundling the nation state
The Economist on Government-to-government trade: “NIGERIAN pineapple for breakfast, Peruvian quinoa for lunch and Japanese sushi for dinner. Two centuries ago, when David Ricardo advocated specialisation and free trade, the notion that international exchange in goods and services could make such a cosmopolitan diet commonplace would have seemed fanciful.
Today another scenario may appear equally unlikely: a Norwegian government agency managing Algeria’s sovereign-wealth fund; German police overseeing security in the streets of Mumbai; and Dubai playing the role of the courthouse of the Middle East. Yet such outlandish possibilities are more than likely if a new development fulfils its promise. Ever more governments are trading with each other, from advising lawmakers to managing entire services. They are following businesses, which have long outsourced much of what they do. Is this the dawn of the government-to-government era?
Such “G2G” trade is not new, though the name may be. After the Ottoman empire defaulted on its debt in 1875 foreign lenders set up an “Ottoman Public Debt Administration”, its governing council packed with European government officials. At its peak it had 9,000 employees, more than the empire’s finance ministry. And the legacy of enforced G2G trade—colonialism, as it was known—is still visible even today. Britain’s Privy Council is the highest court of appeal for many Commonwealth countries. France provides a monetary-policy service to several west African nations by managing their currency, the CFA franc.
One reason G2G trade is growing is that it is a natural extension of the trend for governments to pinch policies from each other. “Policymaking now routinely occurs in comparative terms,” says Jamie Peck of the University of British Columbia, who refers to G2G advice as “fast policy”. Since the late 1990s Mexico’s pioneering policy to make cash benefits for poor families conditional on things like getting children vaccinated and sending them to school has been copied by almost 50 other countries….Budget cuts can provide another impetus for G2G trade. The Dutch army recently sold its Leopard II tanks and now sends tank crews to train with German forces. That way it will be able to reform its tank squadrons quickly if they are needed. Britain, with a ten-year gap between scrapping old aircraft-carriers and buying new ones, has sent pilots to train with the American marines on the F-35B, which will fly from both American and British carriers.
…
No one knows the size of the G2G market. Governments rarely publicise deals, not least because they fear looking weak. And there are formidable barriers to trade. The biggest is the “Westphalian” view of sovereignty, says Stephen Krasner of Stanford University: that states should run their own affairs without foreign interference. In 2004 Papua New Guinea’s parliament passed a RAMSI-like delegation agreement, but local elites opposed it and courts eventually declared it unconstitutional. Honduras attempted to create independent “charter cities”, a concept developed by Paul Romer of New York University (NYU), whose citizens would have had the right of appeal to the supreme court of Mauritius. But in 2012 this scheme, too, was deemed unconstitutional.
Critics fret about accountability and democratic legitimacy. The 2005 Paris Declaration on Aid Effectiveness, endorsed by governments and aid agencies, made much of the need for developing countries to design their own development strategies. And providers open themselves to reputational risk. British police, for instance, have trained Bahraini ones. A heavy-handed crackdown by local forces during the Arab spring reflected badly on their foreign teachers…
When San Francisco decided to install wireless control systems for its streetlights, it posted a “call for solutions” on Citymart, an online marketplace for municipal projects. In 2012 it found a Swiss firm, Paradox Engineering, which had built such systems for local cities. But though members often share ideas, says Sascha Haselmayer, Citymart’s founder, most still decide to implement their chosen policies themselves.
Weak government services are the main reason poor countries fail to catch up with rich ones, says Mr Romer. One response is for people in poorly run places to move to well governed ones. Better would be to bring efficient government services to them. In a recent paper with Brandon Fuller, also of NYU, Mr Romer argues that either response would bring more benefits than further lowering the barriers to trade in privately provided goods and services. Firms have long outsourced activities, even core ones, to others that do them better. It is time governments followed suit.”