How Estonia became E-stonia


Tim Mansel from BBC News: “In some countries, computer programming might be seen as the realm of the nerd.

But not in Estonia, where it is seen as fun, simple and cool.
This northernmost of the three Baltic states, a small corner of the Soviet Union until 1991, is now one of the most internet-dependent countries in the world.
And Estonian schools are teaching children as young as seven how to program computers….
better known is Skype, an Estonian start-up long since gone global.
Skype was bought by Microsoft in 2011 for a cool $8.5bn, but still employs 450 people at its local headquarters on the outskirts of Tallinn, roughly a quarter of its total workforce. Tiit Paananen, from Skype, says the company is passionate about education and works closely with Estonian universities and secondary schools….
Estonians today vote online and pay tax online. Their health records are online and, using what President Ilves likes to call a “personal access key” – others refer to it as an ID card – they can pick up prescriptions at the pharmacy. The card offers access to a wide range of other services.
All this will be second nature to the youngest generation of E-stonians. They encounter electronic communication as soon as they enter school through the eKool (e-school) system. Exam marks, homework assignments and attendance in class are all available to parents at the click of a mouse.”

Data Edge


Steven Weber, professor in the School of Information and the Political Science department at UC Berkeley, writing in Policy by the Numbers: “It’s commonly said that most people overestimate the impact of technology in the short term, and underestimate its impact over the longer term.
Where is Big Data in 2013? Starting to get very real, in our view, and right on the cusp of underestimation in the long term. The short term hype cycle is (thankfully) burning itself out, and the profound changes that data science can and will bring to human life are just now coming into focus. It may be that Data Science is right now about where the Internet itself was in 1993 or so. That’s roughly when it became clear that the World Wide Web was a wind that would blow across just about every sector of the modern economy while transforming foundational things we thought were locked in about human relationships, politics, and social change. It’s becoming a reasonable bet that Data Science is set to do the same—again, and perhaps even more profoundly—over the next decade. Just possibly, more quickly than that….
Can data, no matter how big, change the world for the better? It may be the case that in some fields of human endeavor and behavior, the scientific analysis of big data by itself will create such powerful insights that change will simply have to happen, that businesses will deftly re-organize, that health care will remake itself for efficiency and better outcomes, that people will adopt new behaviors that make them happier, healthier, more prosperous and peaceful. Maybe. But almost everything we know about technology and society across human history argues that it won’t be so straightforward.
…join senior industry and academic leaders at DataEDGE at UC Berkeley on May 30-31 to engage in what will be a lively and important conversation aimed at answering today’s questions about the data science revolution—and formulating tomorrow’s.

New Open Data Executive Order and Policy


The White House: “The Obama Administration today took groundbreaking new steps to make information generated and stored by the Federal Government more open and accessible to innovators and the public, to fuel entrepreneurship and economic growth while increasing government transparency and efficiency.
Today’s actions—including an Executive Order signed by the President and an Open Data Policy released by the Office of Management and Budget and the Office of Science and Technology Policy—declare that information is a valuable national asset whose value is multiplied when it is made easily accessible to the public.  The Executive Order requires that, going forward, data generated by the government be made available in open, machine-readable formats, while appropriately safeguarding privacy, confidentiality, and security.
The move will make troves of previously inaccessible or unmanageable data easily available to entrepreneurs, researchers, and others who can use those files to generate new products and services, build businesses, and create jobs….
Along with the Executive Order and Open Data Policy, the Administration announced a series of complementary actions:
• A new Data.Gov.  In the months ahead, Data.gov, the powerful central hub for open government data, will launch new services that include improved visualization, mapping tools, better context to help locate and understand these data, and robust Application Programming Interface (API) access for developers.
• New open source tools to make data more open and accessible.  The US Chief Information Officer and the US Chief Technology Officer are releasing free, open source tools on GitHub, a site that allows communities of developers to collaboratively develop solutions.  This effort, known as Project Open Data, can accelerate the adoption of open data practices by providing plug-and-play tools and best practices to help agencies improve the management and release of open data.  For example, one tool released today automatically converts simple spreadsheets and databases into APIs for easier consumption by developers [a rough sketch of this idea follows the excerpt below].  Anyone, from government agencies to private citizens to local governments and for-profit companies, can freely use and adapt these tools starting immediately.
• Building a 21st century digital government.  As part of the Administration’s Digital Government Strategy and Open Data Initiatives in health, energy, education, public safety, finance, and global development, agencies have been working to unlock data from the vaults of government, while continuing to protect privacy and national security.  Newly available or improved data sets from these initiatives will be released today and over the coming weeks as part of the one year anniversary of the Digital Government Strategy.
• Continued engagement with entrepreneurs and innovators to leverage government data.  The Administration has convened and will continue to bring together companies, organizations, and civil society for a variety of summits to highlight how these innovators use open data to positively impact the public and address important national challenges.  In June, Federal agencies will participate in the fourth annual Health Datapalooza, hosted by the nonprofit Health Data Consortium, which will bring together more than 1,800 entrepreneurs, innovators, clinicians, patient advocates, and policymakers for information sessions, presentations, and “code-a-thons” focused on how the power of data can be harnessed to help save lives and improve healthcare for all Americans.
For more information on open data highlights across government visit: http://www.whitehouse.gov/administration/eop/ostp/library/docsreports”
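The spreadsheet-to-API converter mentioned above isn’t described in detail in the excerpt, so the following is only a rough sketch of the general idea rather than the actual Project Open Data tool: it serves the rows of a simple CSV file as a read-only JSON endpoint using Python’s standard library. The file name data.csv and the /records route are assumptions made purely for illustration.

```python
# Minimal illustration (not the actual Project Open Data tool): serve the rows
# of a simple CSV file as a read-only JSON API. The file name "data.csv" and
# the /records endpoint are assumptions made for this sketch.
import csv
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CSV_PATH = "data.csv"  # hypothetical spreadsheet exported as CSV


def load_records(path):
    """Read the CSV into a list of dicts, one per row, keyed by the header."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))


class RecordsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expose the whole dataset at /records; anything else is a 404.
        if self.path.rstrip("/") == "/records":
            body = json.dumps(load_records(CSV_PATH)).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404, "Not found")


if __name__ == "__main__":
    HTTPServer(("localhost", 8000), RecordsHandler).serve_forever()
```

A developer could then fetch http://localhost:8000/records and get the spreadsheet’s contents as JSON, which is the “easier consumption” the release refers to.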

Open government data shines a light on hospital billing and health care costs


Alex Howard: “If transparency is the best disinfectant, casting sunlight upon the cost of care in hospitals across the United States will make the health care system itself healthier.
The Department of Health and Human Services has released open data that compares the billing for the 100 most common treatments and procedures performed at more than 3,000 hospitals in the U.S. The Medicare provider charge data shows significant variation within communities and across the country for the same procedures.
One hospital charged $8,000, another $38,000 — for the same condition. This data is enabling newspapers like the Washington Post to show people the actual costs of health care and to create interactive features that let people search for individual hospitals and see how they compare. The New York Times explored the potential reasons behind wild disparities in billing at length today, from sicker patients to longer hospitalizations to higher labor costs.”
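As an aside, the kind of comparison the excerpt describes is straightforward to reproduce once the charge data is downloaded. The sketch below, using pandas, groups a hypothetical local copy of the file by procedure and measures how widely hospital charges vary; the file name and column names are assumptions for illustration, not the exact CMS schema.

```python
# Hedged sketch: measure billing variation per procedure from a local copy of
# the Medicare provider charge data. File and column names are assumptions.
import pandas as pd

charges = pd.read_csv("medicare_provider_charges.csv")  # hypothetical local copy

# For each procedure, compare the cheapest and most expensive hospital charge.
spread = (
    charges.groupby("procedure")["average_covered_charges"]
    .agg(["min", "max", "median"])
    .assign(ratio=lambda df: df["max"] / df["min"])
    .sort_values("ratio", ascending=False)
)

print(spread.head(10))  # procedures with the widest billing disparities
```

A max/min ratio of nearly five, as in the $8,000 versus $38,000 example above, would surface immediately at the top of such a table.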

The Commodification of Patient Opinion: the Digital Patient Experience Economy in the Age of Big Data


Paper by Deborah Lupton, from the University of Sydney’s Department of Sociology and Social Policy. Abstract: “As part of the digital health phenomenon, a plethora of interactive digital platforms have been established in recent years to elicit lay people’s experiences of illness and healthcare. The function of these platforms, as expressed on the main pages of their websites, is to provide the tools and forums whereby patients and caregivers, and in some cases medical practitioners, can share their experiences with others, benefit from the support and knowledge of other contributors and contribute to large aggregated data archives as part of developing better medical treatments and services and conducting medical research.
However, what may not always be readily apparent to the users of these platforms are the growing commercial uses by many of the platforms’ owners of the archives of the data they contribute. This article examines this phenomenon of what I term ‘the digital patient experience economy’. In so doing I discuss such aspects as prosumption, the phenomena of big data and metric assemblages, the discourse and ethic of sharing and the commercialisation of affective labour via such platforms. I argue that via these online platforms patients’ opinions and experiences may be expressed in more diverse and accessible forums than ever before, but simultaneously they have become exploited in novel ways.”

Life in the City Is Essentially One Giant Math Problem


Smithsonian Magazine: “A new science—so new it doesn’t have its own journal, or even an agreed-upon name—is exploring these laws. We will call it “quantitative urbanism.” It’s an effort to reduce to mathematical formulas the chaotic, exuberant, extravagant nature of one of humanity’s oldest and most important inventions, the city.
The systematic study of cities dates back at least to the Greek historian Herodotus. In the early 20th century, scientific disciplines emerged around specific aspects of urban development: zoning theory, public health and sanitation, transit and traffic engineering. By the 1960s, the urban-planning writers Jane Jacobs and William H. Whyte used New York as their laboratory to study the street life of neighborhoods, the walking patterns of Midtown pedestrians, the way people gathered and sat in open spaces. But their judgments were generally aesthetic and intuitive…
Only in the past decade has the ability to collect and analyze information about the movement of people begun to catch up to the size and complexity of the modern metropolis itself…
Deep mathematical principles underlie even such seemingly random and historically contingent facts as the distribution of the sizes of cities within a country. There is, typically, one largest city, whose population is twice that of the second-largest, and three times the third-largest, and increasing numbers of smaller cities whose sizes also fall into a predictable pattern. This principle is known as Zipf’s law, which applies across a wide range of phenomena…”
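As a brief aside (not part of the Smithsonian excerpt), the rank-size pattern described above can be written in its simplest form as

```latex
P_n \approx \frac{P_1}{n}, \qquad n = 1, 2, 3, \ldots
```

where P_1 is the population of the largest city and P_n that of the n-th largest. A hypothetical largest city of 9 million would therefore imply a second city of roughly 4.5 million and a third of about 3 million, matching the “twice” and “three times” ratios quoted above.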

Guide to Social Innovation


Foreword of the European Commission Guide on Social Innovation: “Social innovation is in the mouths of many today, at policy level and on the ground. It is not new as such: people have always tried to find new solutions for pressing social needs. But a number of factors have spurred its development recently.
There is, of course, a link with the current crisis and the severe employment and social consequences it has for many of Europe’s citizens. On top of that, the ageing of Europe’s population, fierce global competition and climate change have become burning societal challenges. The sustainability and adequacy of Europe’s health and social security systems, as well as social policies in general, are at stake. This means we need to have a fresh look at social, health and employment policies, but also at education, training and skills development, business support, industrial policy, urban development, etc., to ensure socially and environmentally sustainable growth, jobs and quality of life in Europe.”

Linking open data to augmented intelligence and the economy


Professor Nigel Shadbolt (@Nigel_Shadbolt) of the Open Data Institute, interviewed by Alex Howard (@digiphile): “…there are some clear learnings. One that I’ve been banging on about recently has been that yes, it really does matter to turn the dial so that governments have a presumption to publish non-personal public data. If you would publish it anyway, under a Freedom of Information request or whatever your local legislative equivalent is, why aren’t you publishing it anyway as open data? That, as a behavioral change, is a big one for many administrations where either the existing workflow or culture is, “Okay, we collect it. We sit on it. We do some analysis on it, and we might give it away piecemeal if people ask for it.” We should construct the publication process from the outset to presume to publish openly. That’s still something that we are two or three years away from, working hard with the public sector to work out how to do it and how to do it properly.
We’ve also learned that in many jurisdictions, the amount of [open data] expertise within administrations and within departments is slight. There just isn’t really the skillset, in many cases, for people to know what it is to publish using technology platforms. So there’s a capability-building piece, too.
One of the most important things is it’s not enough to just put lots and lots of datasets out there. It would be great if the “presumption to publish” meant they were all out there anyway — but when you haven’t got any datasets out there and you’re thinking about where to start, the tough question is to say, “How can I publish data that matters to people?”
The data that matters is revealed when we look at the download stats on these various UK, US and other [open data] sites. There’s a very, very distinctive parallel curve. Some datasets are very, very heavily utilized. You suspect they have high utility to many, many people. Many of the others, if they can be found at all, aren’t being used particularly much. That’s not to say that, under that long tail, there aren’t large amounts of use. A particularly arcane open dataset may have exquisite use to a small number of people.
The real truth is that it’s easy to republish your national statistics. It’s much harder to do a serious job on publishing your spending data in detail, publishing police and crime data, publishing educational data, publishing actual overall health performance indicators. These are tough datasets to release. As people are fond of saying, it holds politicians’ feet to the fire. It’s easy to build a site that’s full of stuff — but does the stuff actually matter? And does it have any economic utility?”

Innovations in American Government Award


Press Release: “Today the Ash Center for Democratic Governance and Innovation at the John F. Kennedy School of Government, Harvard University, announced the Top 25 programs in this year’s Innovations in American Government Award competition. These government initiatives represent the dedicated efforts of city, state, federal, and tribal governments and address a host of policy issues including crime prevention, economic development, environmental and community revitalization, employment, education, and health care. Selected by a cohort of policy experts, researchers, and practitioners, four finalists and one winner of the Innovations in American Government Award will be announced in the fall. A full list of the Top 25 programs is available here.
A Culture of Innovation
A number of this year’s Top 25 programs foster a new culture of innovation through online collaboration and crowdsourcing. Signaling a new trend in government, these programs encourage the generation of smart solutions to existing and seemingly intractable public policy problems. LAUNCH—a partnership among NASA, USAID, the State Department, and NIKE—identifies and scales up promising global sustainability innovations created by individual citizens and organizations. The General Services Administration’s Challenge.gov uses crowdsourcing contests to solve government issues: government agencies post challenges, and members of the broader American public are rewarded for submitting winning ideas. The Department of Transportation’s IdeaHub also uses an online platform to encourage its employees to communicate new ideas for making the department more adaptable and enterprising.”

Cities and Data


The Economist: “Many cities around the country find themselves in a similar position: they are accumulating data faster than they know what to do with. One approach is to give them to the public. For example, San Francisco, New York, Philadelphia, Boston and Chicago are or soon will be sharing the grades that health inspectors give to restaurants with an online restaurant directory.
Another way of doing it is simply to publish the raw data and hope that others will figure out how to use them. This has been particularly successful in Chicago, where computer nerds have used open data to create many entirely new services. Applications are now available that show which streets have been cleared after a snowfall, what time a bus or train will arrive and how requests to fix potholes are progressing.
New York and Chicago are bringing together data from departments across their respective cities in order to improve decision-making. When a city holds a parade it can combine data on street closures, bus routes, weather patterns, rubbish trucks and emergency calls in real time.”