How Estonia became E-stonia


Tim Mansel from BBC News: “In some countries, computer programming might be seen as the realm of the nerd.

But not in Estonia, where it is seen as fun, simple and cool.
This northernmost of the three Baltic states, a small corner of the Soviet Union until 1991, is now one of the most internet-dependent countries in the world.
And Estonian schools are teaching children as young as seven how to program computers….
Better known is Skype, an Estonian start-up long since gone global.
Skype was bought by Microsoft in 2011 for a cool $8.5bn, but still employs 450 people at its local headquarters on the outskirts of Tallinn, roughly a quarter of its total workforce. Tiit Paananen, from Skype, says the company is passionate about education and works closely with Estonian universities and secondary schools….
Estonians today vote online and pay tax online. Their health records are online and, using what President Ilves likes to call a “personal access key” – others refer to it as an ID card – they can pick up prescriptions at the pharmacy. The card offers access to a wide range of other services.
All this will be second nature to the youngest generation of E-stonians. They encounter electronic communication as soon as they enter school through the eKool (e-school) system. Exam marks, homework assignments and attendance in class are all available to parents at the click of a mouse.”

When the Crowd Fights Corruption


New Harvard Business School Research Paper by Paul Healy and Karthik Ramanna  (Harvard Business Review): “Corruption is the greatest impediment to conducting business in Russia, according to leaders recently surveyed by the World Economic Forum. Indeed, it’s a problem in many emerging markets, and businesses have a role to play in combating it, according to Healy and Ramanna. The authors focus on RosPil — an anticorruption entity in Russia set up by Alexey Navalny, a crusader against public and private malfeasance in that country. As of December 2011, RosPil claimed to have prevented the granting of dubious contracts worth US$1.3 billion. The organization holds corrupt politicians’ and bureaucrats’ feet to the fire largely through internet-based crowdsourcing, whereby often-anonymous people identify requests for government-issued tenders that are designed to generate kickbacks. Should entities like RosPil be supported, and should companies fashion their own responses to corruption? On the one hand, there are obvious public-relations and political risks; on the other hand, corruption can erode a firm’s competitiveness, the trust of customers and employees, and even the very legitimacy of capitalism. The authors argue that heads of many multinational companies are well positioned to combat corruption in emerging markets. Those leaders have the power to enforce policies in their organizations and networks, and they enjoy the ability to organize others in the industry against this pernicious threat.”

Technology and Economic Prosperity


EDUARDO PORTER in The New York Times: “The impact of a technological innovation depends on how deeply it embeds itself in everything we do.
Earlier this month, a couple of economists at the Harvard Business School and the Toulouse School of Economics in France produced a paper asking “If Technology Has Arrived Everywhere, Why Has Income Diverged?” Economic prosperity, they noted, is ultimately driven by technological innovation. So if technologies today spread much more quickly than they used to from rich to poor countries, how come the income divide between rich and poor nations remains so large?
It took 119 years, on average, for the spindle to spread outside of Europe to the poorer reaches of the late-18th-century world, according to the authors. The Internet encircled the globe in seven. One might expect that this would have helped developing countries catch up with the richest nations at the frontier of technology.
The reason that this did not happen, the authors propose, is that despite spreading faster, new technologies have not embedded themselves as deeply, measured by their prevalence, relative to the size of the economy. “The divergence in the degree of assimilation of technologies started about 100 years ago,” observed Diego Comin of Harvard Business School, one of the authors.”

Wikipedia Recent Changes Map


The Verge: “By watching a new visualization, known plainly as the Wikipedia Recent Changes Map, viewers can see the location of every unregistered Wikipedia user who makes a change to the open encyclopedia. It provides a voyeuristic look at the rate that knowledge is contributed to the website, giving you the faintest impression of the Spaniard interested in the television show Jackass or the Brazilian who defaced the page on the Jersey Devil to feature a photograph of the new pope. Though the visualization moves quickly, it’s only displaying about one-fifth of the edits being made: Wikipedia doesn’t reveal location data for registered users, and unregistered users make up just 15 to 20 percent of all contributions, according to studies of the website.”

Global Internet Policy Observatory (GIPO)


European Commission Press Release: “The Commission today unveiled plans for the Global Internet Policy Observatory (GIPO), an online platform to improve knowledge of, and participation by, all stakeholders across the world in debates and decisions on Internet policies. GIPO will be developed by the Commission and a core alliance of countries and non-governmental organisations involved in Internet governance. Brazil, the African Union, Switzerland, the Association for Progressive Communication, Diplo Foundation and the Internet Society have agreed to cooperate or have expressed their interest to be involved in the project.
The Global Internet Policy Observatory will act as a clearinghouse for monitoring Internet policy, regulatory and technological developments across the world.
It will:

  • automatically monitor Internet-related policy developments at the global level, making full use of “big data” technologies;
  • identify links between different fora and discussions, with the objective to overcome “policy silos”;
  • help contextualise information, for example by collecting existing academic information on a specific topic, highlighting the historical and current position of the main actors on a particular issue, identifying the interests of different actors in various policy fields;
  • identify policy trends, via quantitative and qualitative methods such as semantic and sentiment analysis;
  • provide easy-to-use briefings and reports by incorporating modern visualisation techniques;”
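The press release lists semantic and sentiment analysis among the methods for identifying policy trends. GIPO's actual tooling is not specified, but as a purely illustrative sketch, a minimal lexicon-based sentiment scorer over policy headlines could look like this (the lexicon and headlines are invented for the example):

```python
# Tiny illustrative sentiment lexicon; a real system would use a curated
# resource and proper NLP tooling rather than a hand-picked word set.
POSITIVE = {"support", "agreement", "progress", "cooperation", "open"}
NEGATIVE = {"dispute", "censorship", "fragmentation", "restriction", "shutdown"}

def sentiment_score(text):
    """Return (positive word hits - negative word hits) for a piece of text."""
    words = (w.strip(".,;:!?") for w in text.lower().split())
    score = 0
    for w in words:
        if w in POSITIVE:
            score += 1
        elif w in NEGATIVE:
            score -= 1
    return score

headlines = [
    "Broad support and cooperation on open data standards",
    "Censorship dispute risks fragmentation of the network",
]
scores = [sentiment_score(h) for h in headlines]
print(scores)  # → [3, -3]
```

Aggregating such scores over time and per forum is one simple way a trend signal could be derived, though production systems rely on trained models rather than fixed word lists.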

UK: The nudge unit – has it worked so far?


The Guardian: “Since 2010 David Cameron’s pet project has been tasked with finding ways to improve society’s behaviour – and now the ‘nudge unit’ is going into business by itself. But have its initiatives really worked?….
The idea behind the unit is simpler than you might believe. People don’t always act in their own interests – by filing their taxes late, for instance, overeating, or not paying fines until the bailiffs call. As a result, they don’t just harm themselves, they cost the state a lot of money. By looking closely at how they make their choices and then testing small changes in the way the choices are presented, the unit tries to nudge people into leading better lives, and save the rest of us a fortune. It is politics done like science, effectively – with Ben Goldacre’s approval – and, in many cases, it appears to work….”

See also: Jobseekers’ psychometric test ‘is a failure’ (the US institute that devised the questionnaire has told the ‘nudge’ unit to stop using it, as it has not been scientifically validated)

Is Privacy Algorithmically Impossible?


MIT Technology Review: “In 1995, the European Union introduced privacy legislation that defined “personal data” as any information that could identify a person, directly or indirectly. The legislators were apparently thinking of things like documents with an identification number, and they wanted them protected just as if they carried your name.
Today, that definition encompasses far more information than those European legislators could ever have imagined—easily more than all the bits and bytes in the entire world when they wrote their law 18 years ago.
Here’s what happened. First, the amount of data created each year has grown exponentially (see figure)…
Much of this data is invisible to people and seems impersonal. But it’s not. What modern data science is finding is that nearly any type of data can be used, much like a fingerprint, to identify the person who created it: your choice of movies on Netflix, the location signals emitted by your cell phone, even your pattern of walking as recorded by a surveillance camera. In effect, the more data there is, the less any of it can be said to be private. We are coming to the point that if the commercial incentives to mine the data are in place, anonymity of any kind may be “algorithmically impossible,” says Princeton University computer scientist Arvind Narayanan.”
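Narayanan's "fingerprint" point can be made concrete with a toy uniqueness check (the dataset and attributes below are invented for illustration, not drawn from the article): the more seemingly impersonal attributes you combine, the larger the fraction of records that become unique, and hence re-identifiable.

```python
from collections import Counter

# Toy dataset: each row is (ZIP prefix, birth year, top film genre) —
# each attribute looks harmless on its own.
records = [
    ("100", 1985, "drama"),
    ("100", 1985, "comedy"),
    ("100", 1985, "drama"),
    ("946", 1985, "drama"),
    ("946", 1972, "horror"),
]

def unique_fraction(fields):
    """Fraction of records pinned down uniquely by the given attribute indices."""
    keys = [tuple(r[i] for i in fields) for r in records]
    counts = Counter(keys)
    return sum(1 for k in keys if counts[k] == 1) / len(keys)

print(unique_fraction([0]))        # ZIP alone            → 0.0
print(unique_fraction([0, 1]))     # ZIP + birth year     → 0.4
print(unique_fraction([0, 1, 2]))  # ZIP + year + genre   → 0.6
```

This is the intuition behind k-anonymity research: each extra column shrinks the crowd you can hide in, which is why richer datasets make anonymity harder to guarantee.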

Guide to Social Innovation


Foreword of the European Commission Guide on Social Innovation: “Social innovation is in the mouths of many today, at policy level and on the ground. It is not new as such: people have always tried to find new solutions for pressing social needs. But a number of factors have spurred its development recently.
There is, of course, a link with the current crisis and the severe employment and social consequences it has for many of Europe’s citizens. On top of that, the ageing of Europe’s population, fierce global competition and climate change became burning societal challenges. The sustainability and adequacy of Europe’s health and social security systems as well as social policies in general is at stake. This means we need to have a fresh look at social, health and employment policies, but also at education, training and skills development, business support, industrial policy, urban development, etc., to ensure socially and environmentally sustainable growth, jobs and quality of life in Europe.”

Linking open data to augmented intelligence and the economy


Open Data Institute and Professor Nigel Shadbolt (@Nigel_Shadbolt) interviewed by @digiphile: “…there are some clear learnings. One that I’ve been banging on about recently has been that yes, it really does matter to turn the dial so that governments have a presumption to publish non-personal public data. If you would publish it anyway, under a Freedom of Information request or whatever your local legislative equivalent is, why aren’t you publishing it anyway as open data? That, as a behavioral change, is a big one for many administrations where either the existing workflow or culture is, “Okay, we collect it. We sit on it. We do some analysis on it, and we might give it away piecemeal if people ask for it.” We should construct the publication process from the outset to presume to publish openly. That’s still something that we are two or three years away from, working hard with the public sector to work out how to do and how to do properly.
We’ve also learned that in many jurisdictions, the amount of [open data] expertise within administrations and within departments is slight. There just isn’t really the skillset, in many cases, for people to know what it is to publish using technology platforms. So there’s a capability-building piece, too.
One of the most important things is it’s not enough to just put lots and lots of datasets out there. It would be great if the “presumption to publish” meant they were all out there anyway — but when you haven’t got any datasets out there and you’re thinking about where to start, the tough question is to say, “How can I publish data that matters to people?”
The data that matters is revealed if we look at the download stats on these various UK, US and other [open data] sites. There’s a very, very distinctive parallel curve. Some datasets are very, very heavily utilized. You suspect they have high utility to many, many people. Many of the others, if they can be found at all, aren’t being used particularly much. That’s not to say that, under that long tail, there aren’t large amounts of use. A particularly arcane open dataset may have exquisite use to a small number of people.
The real truth is that it’s easy to republish your national statistics. It’s much harder to do a serious job on publishing your spending data in detail, publishing police and crime data, publishing educational data, publishing actual overall health performance indicators. These are tough datasets to release. As people are fond of saying, it holds politicians’ feet to the fire. It’s easy to build a site that’s full of stuff — but does the stuff actually matter? And does it have any economic utility?”

Measuring Impact of Open and Transparent Governance


Mark Robinson @ OGP blog: “Eighteen months on from the launch of the Open Government Partnership in New York in September 2011, there is growing attention to what has been achieved to date.  In the recent OGP Steering Committee meeting in London, government and civil society members were unanimous in the view that the OGP must demonstrate results and impact to retain its momentum and wider credibility.  This will be a major focus of the annual OGP conference in London on 31 October and 1 November, with an emphasis on showcasing innovations, highlighting results and sharing lessons.
Much has been achieved in eighteen months.  Membership has grown from 8 founding governments to 58.  Many action plan commitments have been realised for the majority of OGP member countries. The Independent Reporting Mechanism has been approved and launched. Lesson learning and sharing experience is moving ahead….
The third type of results are the trickiest to measure: What has been the impact of openness and transparency on the lives of ordinary citizens?  In the two years since the OGP was launched it may be difficult to find many convincing examples of such impact, but it is important to make a start in collecting such evidence.
Impact on the lives of citizens would be evident in improvements in the quality of service delivery, by making information on quality, access and complaint redressal public. A related example would be efficiency savings realised from publishing government contracts.  Misallocation of public funds exposed through enhanced budget transparency is another. Action on corruption arising from bribes for services, misuse of public funds, or illegal procurement practices would all be significant results from these transparency reforms.  A final example relates to jobs and prosperity, where government data in the public domain is utilised by the private sector to inform business investment decisions and create employment.
Generating convincing evidence on the impact of transparency reforms is critical to the longer-term success of the OGP. It is the ultimate test of whether lofty public ambitions announced in country action plans achieve real impacts to the benefit of citizens.”