Open Access


Report by the UK House of Commons Business, Innovation and Skills Committee: “Open access refers to the immediate, online availability of peer-reviewed research articles, free at the point of access (i.e. without subscription charges or paywalls). Open access relates to scholarly articles and related outputs. Open data (which is a separate area of Government policy and outside the scope of this inquiry) refers to the availability of the underlying research data itself. At the heart of the open access movement is the principle that publicly funded research should be publicly accessible. Open access expanded rapidly in the late twentieth century with the growth of the internet and digitisation (the transcription of data into a digital form), as it became possible to disseminate research findings more widely, quickly and cheaply.
Whilst there is widespread agreement that the transition to open access is essential in order to improve access to knowledge, there is a lack of consensus about the best route to achieve it. To achieve open access at scale in the UK, there will need to be a shift away from the dominant subscription-based business model. Inevitably, this will involve a transitional period and considerable change within the scholarly publishing market.
For the UK to transition to open access, an effective, functioning and competitive market in scholarly communications will be vital. The evidence we saw over the course of this inquiry shows that this is currently far from the case, with journal subscription prices rising at rates that are unsustainable for UK universities and other subscribers. There is a significant risk that the Government’s current open access policy will inadvertently encourage and prolong the dysfunctional elements of the scholarly publishing market, which are a major barrier to access.”
See Volume I and Volume II

Visualizing the legislative process with Sankey diagrams


Kamil Gregor at OpeningParliament.org: “The process of shaping the law often resembles an Indiana Jones maze. Bills and amendments run through an elaborate system of committees, sessions and hearings filled with booby traps before finally reaching the golden idol of a final approval.
Parliamentary monitoring organizations and researchers are often interested in how various pieces of legislation survive in this environment and what strategies can be used to either kill or aid them. This specifically means answering two questions: What is the probability of a bill being approved, and what factors determine this probability?
The legislative process is usually hierarchical: successful completion of a step in the process is conditional on the completion of all previous steps. Therefore, we may also want to know the probabilities of completion at each consecutive step and their determinants.
A simple way to give a satisfying answer to these questions without wandering into the land of nonlinear logistic regressions is the Sankey diagram. It is a well-known type of flow chart in which a process is visualized using arrows, and the relative quantities of outcomes in the process are represented by the arrows’ widths.
A famous example is a Sankey diagram of Napoleon’s invasion of Russia. We can clearly see how the Grand Army was gradually shrinking as French soldiers were dying or defecting. Another well-known example is the Google Analytics flow chart. It shows how many visitors enter a webpage and then either leave or continue to a different page on the same website. As the number of consecutive steps increases, the number of visitors remaining on the website decreases.
The legislative process can be visualized in the same way. The progress of bills is represented by a stream between the various steps in the process, and the width of the stream corresponds to the number of bills. A bill can either complete all the steps of the process, or it can “drop out” at some point if it gets rejected.
Let’s take a look…”
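To make the idea concrete, here is a minimal sketch of such a diagram in Python using plotly; the stage names and bill counts below are invented for illustration and are not taken from the post.

```python
# Minimal sketch of a legislative-process Sankey diagram (hypothetical data).
# Each link carries the number of bills flowing from one stage to the next;
# the "died"/"rejected" branches show where bills drop out of the process.
import plotly.graph_objects as go

stages = ["Introduced", "Committee stage", "Floor vote",
          "Approved", "Died in committee", "Rejected on floor"]

fig = go.Figure(go.Sankey(
    node=dict(label=stages, pad=20, thickness=15),
    link=dict(
        # Indices into `stages`: source stage -> target stage.
        source=[0, 1, 1, 2, 2],
        target=[1, 2, 4, 3, 5],
        # Hypothetical bill counts: 100 introduced, 60 reach a floor vote,
        # 40 die in committee, 35 are approved, 25 are rejected on the floor.
        value=[100, 60, 40, 35, 25],
    ),
))
fig.update_layout(title_text="Bills moving through a hypothetical legislative process")
fig.show()
```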

Open data for accountable governance: Is data literacy the key to citizen engagement?


From UNDP’s Voices of Eurasia blog: “How can technology connect citizens with governments, and how can we foster, harness, and sustain the citizen engagement that is so essential to anti-corruption efforts?
UNDP has worked on a number of projects that use technology to make it easier for citizens to report corruption to authorities:

These projects are showing some promising results, and provide insights into how a more participatory, interactive government could develop.
At the heart of the projects is the ability to use citizen generated data to identify and report problems for governments to address….

Wanted: Citizen experts

As Kenneth Cukier, The Economist’s Data Editor, has discussed, data literacy will become the new computer literacy. Big data is still nascent and it is impossible to predict exactly how it will affect society as a whole. What we do know is that it is here to stay and data literacy will be integral to our lives.
It is essential that we understand how to interact with big data and the possibilities it holds.
Data literacy needs to be integrated into the education system. Educating non-experts to analyze data is critical to enabling broad participation in this new data age.
As technology advances, key government functions become automated, and government data sharing increases, newer ways for citizens to engage will multiply.
Technology changes rapidly, but the human mind and societal habits change far more slowly. After years of closed government and bureaucratic inefficiency, adopting a new approach to governance will take time and education.
We need to bring up a generation that sees being involved in government decisions as normal, and that views participatory government as a right, not an ‘innovative’ service extended by governments.

What now?

In the meantime, while data literacy lies in the hands of a few, we must continue to connect those who have the technological skills with citizen experts seeking to change their communities for the better – as has been done at many Social Innovation Camps recently (in Montenegro, Ukraine and Armenia at Mardamej and Mardamej Reloaded, and across the region at Hurilab).
The social innovation camp and hackathon models are an increasingly debated topic (covered by Susannah Vila, David Eaves, Alex Howard and Clay Johnson).
On the whole, evaluations are leading to newer models that focus on greater integration of mentorship to increase sustainability – which I readily support. However, I do have one comment:
Social innovation camps are often criticized for a lack of sustainability – a claim based on the limited number of apps that go beyond the prototype phase. I find a certain sense of irony in this, for isn’t this what innovation is about: Opening oneself up to the risk of failure in the hope of striking something great?
In the words of Vinod Khosla:

“No failure means no risk, which means nothing new.”

As more data is released, the opportunities for new apps and new forms of citizen interaction will multiply and, who knows, someone might come along and transform government just as TripAdvisor transformed the travel industry.”

Smaller, Better, Faster, Stronger: Remaking government for the digital age


New report by Policy Exchange (UK): “The government could save as much as £70 billion by 2020 if it adopted plans to eliminate paper and digitise its activities, work smarter with fewer staff in Whitehall, shop around for the best procurement deals, and accelerate the use of data and analytics.
Smaller, Better, Faster, Stronger shows how the government is wasting billions of pounds by relying on paper-based public services. The Crown Prosecution Service prints one million sheets of paper every day, while two articulated trucks loaded with letters and paperwork pull into the Driver and Vehicle Licensing Agency (DVLA) every day. In order to complete a passport application form online, the Passport Office will print the form out and post it back for the individual to sign and send back.
In the near future, everything the government does should be online, unless a face-to-face interaction is essential. The UK is already a nation of internet users, with nearly 6 in 10 people accessing the internet via a smartphone. People expect even simple government services like tax returns or driving licences to be online. Fully transforming government with digital technologies could help close the gap between productivity in the public and private sectors.
The report also calls for stronger digital and data skills in Whitehall, making the point that senior officials will make or break this agenda by the interest they take in digital and their willingness to keep up with the times.”

Twitter’s activist roots: How Twitter’s past shapes its use as a protest tool


Radio Netherlands Worldwide: “Surprised when demonstrators from all over the world took to Twitter as a protest tool? Evan “Rabble” Henshaw-Plath, a member of Twitter’s founding team, was not. Rather, he sees it as a return to its roots: Inspired by protest coordination tools like TXTMob, and shaped by the values and backgrounds of Twitter’s founders, he believes activist potential was built into the service from the start.

It took a few revolutions before Twitter was taken seriously. Critics claimed that its 140-character limit only provided space for the most trivial thoughts: neat for keeping track of Ashton Kutcher’s lunch choices, but not much else. It made the transition from Silicon Valley toy into Middle East protest tool seem all the more astonishing.
Unless, Twitter co-founder Evan Henshaw-Plath argues, you know the story of how Twitter came to be. Henshaw-Plath was the lead developer at Odeo, the company where Twitter started and which eventually became Twitter. TXTMob, an activist tool deployed during the 2004 Republican National Convention in the US to coordinate protest efforts via SMS, was, says Henshaw-Plath, a direct inspiration for Twitter.
Protest 1.0
In 2004, while Henshaw-Plath was working at Odeo, he and a few other colleagues found a fun side-project in working on TXTMob, an initiative by what he describes as a “group of academic artist/prankster/hacker/makers” that operated under the ostensibly serious moniker of Institute for Applied Autonomy (IAA). Earlier IAA projects included small graffiti robots on wheels that spray painted slogans on pavements during demonstrations, and a pudgy talking robot with big puppy eyes made to distribute subversive literature to people who ignored less-cute human pamphleteers.
TXTMob was a more serious endeavor than these earlier projects: a tactical protest coordination tool. With TXTMob, users could quickly exchange text messages with large groups of other users about protest locations and police crackdowns….”

Index: The Data Universe


The Living Library Index – inspired by the Harper’s Index – provides important statistics and highlights global trends in governance innovation. This installment focuses on the data universe and was originally published in 2013.

  • How much data exists in the digital universe as of 2012: 2.7 zettabytes*
  • Increase in the quantity of Internet data from 2005 to 2012: +1,696%
  • Percent of the world’s data created in the last two years: 90
  • Number of exabytes (=1 billion gigabytes) created every day in 2012: 2.5; that number doubles every month
  • Percent of the digital universe in 2005 created by the U.S. and western Europe vs. emerging markets: 48 vs. 20
  • Percent of the digital universe in 2012 created by emerging markets: 36
  • Percent of the digital universe in 2020 predicted to be created by China alone: 21
  • How much information in the digital universe is created and consumed by consumers (video, social media, photos, etc.) in 2012: 68%
  • Percent of that data for which enterprises have some liability or responsibility (copyright, privacy, compliance with regulations, etc.): 80
  • Amount included in the Obama Administration’s 2012 Big Data initiative: over $200 million
  • Amount the Department of Defense is investing annually on Big Data projects as of 2012: over $250 million
  • Data created per day in 2012: 2.5 quintillion bytes
  • How many terabytes* of data collected by the U.S. Library of Congress as of April 2011: 235
  • How many terabytes of data collected by Walmart per hour as of 2012: 2,560, or 2.5 petabytes*
  • Projected growth in global data generated per year, as of 2011: 40%
  • Number of IT jobs created globally by 2015 to support big data: 4.4 million (1.9 million in the U.S.)
  • Potential shortage of data scientists in the U.S. alone predicted for 2018: 140,000-190,000, in addition to 1.5 million managers and analysts with the know-how to use the analysis of big data to make effective decisions
  • Time needed to sequence the complete human genome (analyzing 3 billion base pairs) in 2003: ten years
  • Time needed in 2013: one week
  • The world’s annual effective capacity to exchange information through telecommunication networks in 1986, 2007, and (predicted) 2013: 281 petabytes, 65 exabytes, 667 exabytes
  • Projected amount of digital information created annually that will either live in or pass through the cloud: 1/3
  • Increase in data collection volume year-over-year in 2012: 400%
  • Increase in number of individual data collectors from 2011 to 2012: nearly double (over 300 data collection parties in 2012)

*1 zettabyte = 1 billion terabytes | 1 petabyte = 1,000 terabytes | 1 terabyte = 1,000 gigabytes | 1 gigabyte = 1 billion bytes


The Logic of Connective Action: Digital Media and the Personalization of Contentious Politics


New book by W. Lance Bennett and Alexandra Segerberg: “The Logic of Connective Action explains the rise of a personalized digitally networked politics in which diverse individuals address the common problems of our times such as economic fairness and climate change. Rich case studies from the United States, United Kingdom, and Germany illustrate a theoretical framework for understanding how large-scale connective action is coordinated using inclusive discourses such as “We Are the 99%” that travel easily through social media. In many of these mobilizations, communication operates as an organizational process that may replace or supplement familiar forms of collective action based on organizational resource mobilization, leadership, and collective action framing. In some cases, connective action emerges from crowds that shun leaders, as when Occupy protesters created media networks to channel resources and create loose ties among dispersed physical groups. In other cases, conventional political organizations deploy personalized communication logics to enable large-scale engagement with a variety of political causes. The Logic of Connective Action shows how power is organized in communication-based networks, and what political outcomes may result.”

Is Online Transparency Just a Feel-Good Sham?


Billy House in the National Journal: “It drew more than a few laughs in Washington. Not long after the White House launched its We the People website in 2011, where citizens could write online petitions and get a response if they garnered enough signatures, someone called for construction of a Star Wars-style Death Star.
With laudable humor, the White House dispatched Paul Shawcross, chief of the Science and Space Branch of the Office of Management and Budget, to explain that the administration “does not support blowing up planets.”
The incident caused a few chuckles, but it also made a more serious point: Years after politicians and government officials began using Internet surveys and online outreach as tools to engage people, the results overall have been questionable….
But skepticism over the value of these programs—and their genuineness—remains strong. Peter Levine, a professor at Tufts University’s Jonathan M. Tisch College of Citizenship and Public Service, said programs like online petitioning and citizen cosponsoring do not necessarily produce a real, representative voice for the people.
It can be “pretty easy to overwhelm these efforts with deliberate strategic action,” he said, noting that similar petitioning efforts in the European Union often find marijuana legalization as the most popular measure.”

Civic Innovation Fellowships Go Global


Some thoughts from Panthea Lee of Reboot: “In recent years, civic innovation fellowships have shown great promise to improve the relationships between citizens and government. In the United States, Code for America and the Presidential Innovation Fellows have demonstrated the positive impact a small group of technologists can have working hand-in-hand with government. With the launch of Code for All, Code for Europe, Code4Kenya, and Code4Africa, among others, the model is going global.
But despite the increasing popularity of civic innovation fellowships, there are few templates for how a “Code for” program can be adapted to a different context. In the US, the success of Code for America has drawn from a wealth of tech talent eager to volunteer skills, public and private support, and the active participation of municipal governments. Elsewhere, new “Code for” programs are surely going to have to operate within a different set of capacities and constraints.”

Smartphones As Weather Surveillance Systems


Tom Simonite in MIT Technology Review: “You probably never think about the temperature of your smartphone’s battery, but it turns out to provide an interesting method for tracking outdoor air temperature. It’s a discovery that adds to other evidence that mobile apps could provide a new way to measure what’s happening in the atmosphere and improve weather forecasting.
Startup OpenSignal, whose app crowdsources data on cellphone reception, first noticed in 2012 that changes in battery temperature correlated with changes in outdoor temperature. On Tuesday, the company published a scientific paper on that technique in a geophysics journal and announced that the technique will be used to interpret data from a weather crowdsourcing app. OpenSignal originally started collecting data on battery temperatures to try to understand the connections between signal strength and how quickly a device chews through its battery.
OpenSignal’s crowdsourced weather-tracking effort joins another accidentally enabled by smartphones: a project called PressureNET that collects air pressure data by taking advantage of the fact that many Android phones have a barometer inside to aid their GPS function (see “App Feeds Scientists Atmospheric Data From Thousands of Smartphones”). Cliff Mass, an atmospheric scientist at the University of Washington, is working to incorporate PressureNET data into weather models that usually rely on data from weather stations. He believes that smartphones could provide valuable data from places where there are no weather stations, if enough people start sharing data using apps like PressureNET.
Other research suggests that logging changes in cell network signal strength perceived by smartphones could provide yet more weather data. In February researchers in the Netherlands produced detailed maps of rainfall compiled by monitoring fluctuations in the signal strength measured by cellular network masts, caused by water droplets in the atmosphere.”
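As a rough illustration of the correlation the article describes, here is a toy calibration in Python: fit a simple linear mapping from city-wide average battery temperature to station-measured air temperature, then apply it to new crowdsourced readings. The numbers are invented, and the published OpenSignal approach is considerably more careful than this straight-line fit.

```python
# Toy illustration of the battery-temperature idea (hypothetical numbers).
# The published method is more elaborate; this only shows the intuition:
# calibrate averaged battery readings against a weather station, then
# estimate outdoor temperature from new crowdsourced averages.
import numpy as np

# Hypothetical daily averages for one city: mean battery temperature (°C)
# reported by many phones, and air temperature (°C) from a weather station.
battery_avg = np.array([28.1, 29.4, 31.0, 27.5, 33.2, 30.3])
station_air = np.array([12.0, 14.5, 17.8, 10.9, 21.4, 16.0])

# Least-squares fit of: air_temp ≈ slope * battery_temp + intercept
slope, intercept = np.polyfit(battery_avg, station_air, deg=1)

def estimate_air_temp(battery_temp_avg: float) -> float:
    """Estimate outdoor air temperature from an averaged battery reading."""
    return slope * battery_temp_avg + intercept

# Estimate for a new day's crowdsourced average battery temperature.
print(round(estimate_air_temp(32.0), 1))
```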