Using open government for climate action


Elizabeth Moses at Eco-Business: “Countries made many national climate commitments as part of the Paris Agreement on climate change, which entered into force earlier this month. Now comes the hard part of implementing those commitments. The public can serve an invaluable watchdog role, holding governments accountable for following through on their targets and making sure climate action happens in a way that’s fair and inclusive.

But first, the climate and open government communities will need to join forces….

Here are four areas where these communities can lean in together to ensure governments follow through on effective climate action:

1) Expand access to climate data and information.

Open government and climate NGOs and local communities can expand the use of traditional transparency tools and processes such as Freedom of Information (FOI) laws, transparent budgeting, open data policies and public procurement to enhance open information on climate mitigation, adaptation and finance.

For example, Transparencia Mexicana used Mexico’s Freedom of Information Law to collect data to map climate finance actors and the flow of finance in the country. This allows them to make specific recommendations on how to safeguard climate funds against corruption and ensure the money translates into real action on the ground….

2) Promote inclusive and participatory climate policy development.

Civil society and community groups already play a crucial role in advocating for climate action and improving climate governance at the national and local levels, especially when it comes to safeguarding poor and vulnerable people, who often lack political voice….

3) Take legal action for stronger accountability.

Accountability at a national level can only be achieved if grievance mechanisms are in place to address a lack of transparency or public participation, or address the impact of projects and policies on individuals and communities.

Civil society groups and individuals can use legal actions like climate litigation, petitions, administrative policy challenges and court cases at the national, regional or international levels to hold governments and businesses accountable for failing to effectively act on climate change….

4) Create new spaces for advocacy.

Bringing the climate and open government movements together allows civil society to tap new forums for securing momentum around climate policy implementation. For example, many civil society NGOs are highlighting the important connections between a strong Governance Goal 16 under the 2030 Agenda for Sustainable Development, and strong water quality and climate change policies….(More)”

Big data promise exponential change in healthcare


Gonzalo Viña in the Financial Times (Special Report): “When a top Formula One team is using pit stop data-gathering technology to help a drugmaker improve the way it makes ventilators for asthma sufferers, there can be few doubts that big data are transforming pharmaceutical and healthcare systems.

GlaxoSmithKline employs online technology and a data algorithm developed by F1’s elite McLaren Applied Technologies team to minimise the risk of leakage from its best-selling Ventolin (salbutamol) bronchodilator drug.

Using multiple sensors and hundreds of thousands of readings, the potential for leakage is coming down to “close to zero”, says Brian Neill, diagnostics director in GSK’s programme and risk management division.
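
The article does not describe the GSK/McLaren algorithm itself, but the broad pattern such sensor programmes rely on, flagging units whose readings deviate sharply from the batch norm, can be sketched as follows. This is a minimal illustration only; the readings, threshold and function name are all hypothetical.

```python
# A generic sketch of sensor-based quality screening. The real
# GSK/McLaren system is not described in the article; every number
# and threshold here is hypothetical.
from statistics import mean, stdev

def flag_outliers(readings: list[float], z_threshold: float = 2.0) -> list[int]:
    """Return indices of readings more than z_threshold std devs from the mean."""
    mu, sigma = mean(readings), stdev(readings)
    return [i for i, r in enumerate(readings)
            if sigma > 0 and abs(r - mu) / sigma > z_threshold]

# Pressure readings from a batch of inhaler valves; one leaky unit reads low.
pressures = [101.2, 100.8, 101.0, 100.9, 101.1, 88.4, 101.0, 100.7]
print(flag_outliers(pressures))  # -> [5], the suspect unit
```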

This apparently unlikely venture for McLaren, known more as the team of such star drivers as Fernando Alonso and Jenson Button, extends beyond the work it does with GSK. It has partnered with Birmingham Children’s Hospital in a £1.8m project utilising McLaren’s expertise in analysing data during a motor race to collect such information from patients as their heart and breathing rates and oxygen levels. Imperial College London, meanwhile, is making use of F1 sensor technology to detect neurological dysfunction….

Big data analysis is already helping to reshape sales and marketing within the pharmaceuticals business. Great potential, however, lies in its ability to fine tune research and clinical trials, as well as providing new measurement capabilities for doctors, insurers and regulators and even patients themselves. Its applications seem infinite….

The OECD last year said governments needed better data governance rules given the “high variability” among OECD countries about protecting patient privacy. Recently, DeepMind, the artificial intelligence company owned by Google, signed a deal with a UK NHS trust to process, via a mobile app, medical data relating to 1.6m patients. Privacy advocates see this as “worrying”. Julia Powles, a University of Cambridge technology law expert, asks if the company is being given “a free pass” on the back of “unproven promises of efficiency and innovation”.

Brian Hengesbaugh, partner at law firm Baker & McKenzie in Chicago, says the process of solving such problems remains “under-developed”… (More)

New Data Portal to analyze governance in Africa


Shareveillance: Subjectivity between open and closed data


Clare Birchall in Big Data and Society: “This article attempts to question modes of sharing and watching to rethink political subjectivity beyond that which is enabled and enforced by the current data regime. It identifies and examines a ‘shareveillant’ subjectivity: a form configured by the sharing and watching that subjects have to withstand and enact in the contemporary data assemblage. Looking at government open and closed data as case studies, this article demonstrates how ‘shareveillance’ produces an anti-political role for the public. In describing shareveillance as, after Jacques Rancière, a distribution of the (digital) sensible, this article posits a politico-ethical injunction to cut into the share and flow of data in order to arrange a more enabling assemblage of data and its affects. In order to interrupt shareveillance, this article borrows a concept from Édouard Glissant and his concern with raced otherness to imagine what a ‘right to opacity’ might mean in the digital context. To assert this right is not to endorse the individual subject in her sovereignty and solitude, but rather to imagine a collective political subjectivity and relationality according to the important question of what it means to ‘share well’ beyond the veillant expectations of the state.

Two questions dominate current debates at the intersection of privacy, governance, security, and transparency: How much, and what kind of data should citizens have to share with surveillant states? And: How much data from government departments should states share with citizens? Yet, these issues are rarely expressed in terms of ‘sharing’ in the way that I will be doing in this article. More often, when thought in tandem with the digital, ‘sharing’ is used in reference to either free trials of software (‘shareware’); the practice of peer-to-peer file sharing; platforms that facilitate the pooling, borrowing, swapping, renting, or selling of resources, skills, and assets that have come to be known as the ‘sharing economy’; or the business of linking and liking on social media, which invites us to share our feelings, preferences, thoughts, interests, photographs, articles, and web links. Sharing in the digital context has been framed as a form of exchange, then, but also communication and distribution (see John, 2013; Wittel, 2011).

In order to understand the politics of open and opaque government data practices, which either share with citizens or ask citizens to share, I will extend existing commentaries on the distributive qualities of sharing by drawing on Jacques Rancière’s notion of the ‘distribution of the sensible’ (2004a) – a settlement that determines what is visible, audible, sayable, knowable and what share or role we each have within it. In the process, I articulate ‘sharing’ with ‘veillance’ (veiller ‘to watch’ is from the Latin vigilare, from vigil, ‘watchful’) to turn the focus from prevalent ways of understanding digital sharing towards a form of contemporary subjectivity. What I call ‘shareveillance’ – a state in which we are always already sharing; indeed, in which any relationship with data is only made possible through a conditional idea of sharing – produces an anti-politicised public caught between different data practices.

I will argue that both open and opaque government data initiatives involve, albeit differently pitched, forms of sharing and veillance. Government practices that share data with citizens involve veillance because they call on citizens to monitor and act upon that data – we are envisioned (‘veiled’ and hailed) as auditing and entrepreneurial subjects. Citizens have to monitor the state’s data, that is, or they are expected to innovate with it and make it profitable. Data sharing therefore apportions responsibility without power. It watches citizens watching the state, delimiting the ways in which citizens can engage with that data and, therefore, the scope of the political per se….(More)”.

New UN resolution on the right to privacy in the digital age: crucial and timely


Deborah Brown at the Internet Policy Review: “The rapid pace of technological development enables individuals all over the world to use new information and communications technologies (ICTs) to improve their lives. At the same time, technology is enhancing the capacity of governments, companies and individuals to undertake surveillance, interception and data collection, which may violate or abuse human rights, in particular the right to privacy. In this context, the adoption on 21 November by the UN General Assembly’s Third Committee of a new resolution on the right to privacy in the digital age is both timely and crucial for protecting the right to privacy in light of new challenges.

As with previous UN resolutions on this topic, the resolution adopted on 21 November 2016 recognises the importance of respecting international commitments in relation to the right to privacy. It underscores that any legitimate concerns states may have with regard to their security can and should be addressed in a manner consistent with obligations under international human rights law.

Recognising that more and more personal data is being collected, processed, and shared, this year’s resolution expresses concern about the sale or multiple re-sales of personal data, which often happens without the individual’s free, explicit and informed consent. It calls for the strengthening of prevention of and protection against such violations, and calls on states to develop preventative measures, sanctions, and remedies.

This year, the resolution more explicitly acknowledges the role of the private sector. It calls on states to put in place (or maintain) effective sanctions and remedies to prevent the private sector from committing violations and abuses of the right to privacy. This is in line with states’ obligations under the UN Guiding Principles on Business and Human Rights, which require states to protect against abuses by businesses within their territories or jurisdictions. The resolution specifically calls on states to refrain from requiring companies to take steps that interfere with the right to privacy in an arbitrary or unlawful way. With respect to companies, it recalls the responsibility of the private sector to respect human rights, and specifically calls on them to inform users about company policies that may impact their right to privacy….(More)”

Government for a Digital Economy


Chapter by Zoe Baird in America’s National Security Architecture: Rebuilding the Foundation: “The private sector is transforming at record speed for the digital economy. As recently as 2008, when America elected President Obama, most large companies had separate IT departments, which were seen as just that—departments—separate from the heart of the business. Now, as wireless networks connect the planet, and entire companies exist in the cloud, digital technology is no longer viewed as another arrow in the corporate quiver, but rather the very foundation upon which all functions are built. This, then, is the mark of the digital era: in order to remain successful, modern enterprises must both leverage digital technology and develop a culture that values its significance within the organization.

For the federal government to help all Americans thrive in this new economy, and for the government to be an engine of growth, it too must enter the digital era. On a basic level, we need to improve the government’s digital infrastructure and use technology to deliver government services better. But a government for the digital economy needs to take bold steps to embed these actions as part of a large and comprehensive transformation in how it goes about the business of governing. We should not only call on the “IT department” to provide tools, we must completely change the way we think about how a digital age government learns about the world, makes policy, and operates against its objectives.

Government today does not reflect the fundamental attributes of the digital age. It moves slowly at a time when information travels around the globe at literally the speed of light. It takes many years to develop and implement comprehensive policy in a world characterized increasingly by experimentation and iterative midcourse adjustments. It remains departmentally balkanized and hierarchical in an era of networks and collaborative problem solving. It assumes that it possesses the expertise necessary to make decisions while most of the knowledge resides at the edges. It is bogged down in legacy structures and policy regimes that do not take advantage of digital tools, and worse, create unnecessary barriers that hold progress back. Moreover, it is viewed by its citizens as opaque and complex in an era when openness and access are attributes of legitimacy….(More)”

What’s wrong with big data?


James Bridle in the New Humanist: “In a 2008 article in Wired magazine entitled “The End of Theory”, Chris Anderson argued that the vast amounts of data now available to researchers made the traditional scientific process obsolete. No longer would they need to build models of the world and test them against sampled data. Instead, the complexities of huge and totalising datasets would be processed by immense computing clusters to produce truth itself: “With enough data, the numbers speak for themselves.” As an example, Anderson cited Google’s translation algorithms which, with no knowledge of the underlying structures of languages, were capable of inferring the relationship between them using extensive corpora of translated texts. He extended this approach to genomics, neurology and physics, where scientists are increasingly turning to massive computation to make sense of the volumes of information they have gathered about complex systems. In the age of big data, he argued, “Correlation is enough. We can stop looking for models.”
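
To make Anderson’s translation example concrete, here is a minimal sketch of how word correspondences can fall out of co-occurrence statistics alone, with no grammar or dictionary. The three-sentence parallel corpus is invented; real statistical translation systems refine the same intuition with alignment models trained on millions of sentence pairs.

```python
# Guess word translations purely from co-occurrence counts in a tiny,
# made-up parallel corpus; no linguistic knowledge is involved.
from collections import Counter

aligned_pairs = [
    ("the cat sleeps", "le chat dort"),
    ("the dog sleeps", "le chien dort"),
    ("the cat eats", "le chat mange"),
]

cooc = Counter()       # (english word, french word) co-occurrence counts
fr_totals = Counter()  # sentence counts for each french word
for en_sent, fr_sent in aligned_pairs:
    fr_words = fr_sent.split()
    for f in fr_words:
        fr_totals[f] += 1
    for e in en_sent.split():
        for f in fr_words:
            cooc[(e, f)] += 1

# Normalising by the candidate's overall frequency stops common words
# like "le" from winning every pairing.
for e in sorted({e for e, _ in cooc}):
    best = max((f for (e2, f) in cooc if e2 == e),
               key=lambda f: cooc[(e, f)] / fr_totals[f])
    print(e, "->", best)  # e.g. cat -> chat, sleeps -> dort
```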

This belief in the power of data, of technology untrammelled by petty human worldviews, is the practical cousin of more metaphysical assertions. A belief in the unquestionability of data leads directly to a belief in the truth of data-derived assertions. And if data contains truth, then it will, without moral intervention, produce better outcomes. Speaking at Google’s private London Zeitgeist conference in 2013, Eric Schmidt, Google Chairman, asserted that “if they had had cellphones in Rwanda in 1994, the genocide would not have happened.” Schmidt’s claim was that technological visibility – the rendering of events and actions legible to everyone – would change the character of those actions. Not only is this statement historically inaccurate (there was plenty of evidence available of what was occurring during the genocide from UN officials, US satellite photographs and other sources), it’s also demonstrably untrue. Analysis of unrest in Kenya in 2007, when over 1,000 people were killed in ethnic conflicts, showed that mobile phones not only spread but accelerated the violence. But you don’t need to look to such extreme examples to see how a belief in technological determinism underlies much of our thinking and reasoning about the world.

“Big data” is not merely a business buzzword, but a way of seeing the world. Driven by technology, markets and politics, it has come to determine much of our thinking, but it is flawed and dangerous. It runs counter to our actual findings when we employ such technologies honestly and with the full understanding of their workings and capabilities. This over-reliance on data, which I call “quantified thinking”, has come to undermine our ability to reason meaningfully about the world, and its effects can be seen across multiple domains.

The assertion is hardly new. Writing in the Dialectic of Enlightenment in 1947, Theodor Adorno and Max Horkheimer decried “the present triumph of the factual mentality” – the predecessor to quantified thinking – and succinctly analysed the big data fallacy, set out by Anderson above. “It does not work by images or concepts, by the fortunate insights, but refers to method, the exploitation of others’ work, and capital … What men want to learn from nature is how to use it in order wholly to dominate it and other men. That is the only aim.” What is different in our own time is that we have built a world-spanning network of communication and computation to test this assertion. While it occasionally engenders entirely new forms of behaviour and interaction, the network most often shows to us with startling clarity the relationships and tendencies which have been latent or occluded until now. In the face of the increased standardisation of knowledge, it becomes harder and harder to argue against quantified thinking, because the advances of technology have been conjoined with the scientific method and social progress. But as I hope to show, technology ultimately reveals its limitations….

“Eroom’s law” – Moore’s law backwards – was recently formulated to describe a problem in pharmacology. Drug discovery has been getting more expensive. Since the 1950s the number of drugs approved for use in human patients per billion US dollars spent on research and development has halved every nine years. This problem has long perplexed researchers. According to the principles of technological growth, the trend should be in the opposite direction. In a 2012 paper in Nature Reviews Drug Discovery entitled “Diagnosing the decline in pharmaceutical R&D efficiency” the authors propose and investigate several possible causes for this. They begin with social and physical influences, such as increased regulation, increased expectations and the exhaustion of easy targets (the “low hanging fruit” problem). Each of these is – with qualifications – disposed of, leaving open the question of the discovery process itself….(More)
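
The halving claim is easy to state as a formula: if efficiency halves every nine years, the fraction remaining after t years is 0.5^(t/9). A quick back-of-the-envelope computation, using illustrative figures rather than the paper’s data:

```python
# Eroom's law as stated above: drugs approved per billion R&D dollars
# halving every nine years. Figures are illustrative only.
def eroom_factor(years_elapsed: float, halving_period: float = 9.0) -> float:
    """Fraction of starting R&D efficiency left after years_elapsed."""
    return 0.5 ** (years_elapsed / halving_period)

# Sixty years of halving every nine years leaves roughly 1% of the
# original efficiency, i.e. about a hundredfold decline.
print(f"after 60 years: {eroom_factor(60):.3%} of 1950s efficiency")
```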

Teaching an Algorithm to Understand Right and Wrong


Greg Satell at Harvard Business Review: “In his Nicomachean Ethics, Aristotle states that it is a fact that “all knowledge and every pursuit aims at some good,” but then continues, “What then do we mean by the good?” That, in essence, encapsulates the ethical dilemma. We all agree that we should be good and just, but it’s much harder to decide what that entails.

Since Aristotle’s time, the questions he raised have been continually discussed and debated. From the works of great philosophers like Kant, Bentham, and Rawls to modern-day cocktail parties and late-night dorm room bull sessions, the issues are endlessly mulled over and argued about but never brought to a satisfying conclusion.

Today, as we enter a “cognitive era” of thinking machines, the problem of what should guide our actions is gaining newfound importance. If we find it so difficult to denote the principles by which a person should act justly and wisely, then how are we to encode them within the artificial intelligences we are creating? It is a question that we need to come up with answers for soon.

Designing a Learning Environment

Every parent worries about what influences their children are exposed to. What TV shows are they watching? What video games are they playing? Are they hanging out with the wrong crowd at school? We try not to overly shelter our kids because we want them to learn about the world, but we don’t want to expose them to too much before they have the maturity to process it.

In artificial intelligence, these influences are called a “machine learning corpus.” For example, if you want to teach an algorithm to recognize cats, you expose it to thousands of pictures of cats and things that are not cats. Eventually, it figures out how to tell the difference between, say, a cat and a dog. Much as with human beings, it is through learning from these experiences that algorithms become useful.
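
A minimal sketch of that corpus idea follows, using invented four-number vectors as stand-ins for image features; a real system would extract its features from actual photographs.

```python
# Train a classifier on a tiny labelled "corpus" and apply it to an
# unseen example. The feature vectors are invented stand-ins for what
# a real pipeline would extract from images.
from sklearn.linear_model import LogisticRegression

corpus = [
    ([0.9, 0.8, 0.2, 0.1], 1),  # 1 = cat
    ([0.8, 0.9, 0.1, 0.2], 1),
    ([0.7, 0.9, 0.3, 0.1], 1),
    ([0.1, 0.2, 0.9, 0.8], 0),  # 0 = not a cat
    ([0.2, 0.1, 0.8, 0.9], 0),
    ([0.3, 0.2, 0.7, 0.9], 0),
]
features = [x for x, _ in corpus]
labels = [y for _, y in corpus]

model = LogisticRegression().fit(features, labels)

# Everything the model "knows" came from the corpus it was exposed to;
# a new example near the cat cluster is labelled a cat.
print(model.predict([[0.85, 0.75, 0.25, 0.15]]))  # -> [1]
```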

However, the process can go horribly awry, as in the case of Microsoft’s Tay, a Twitter bot that the company unleashed on the microblogging platform. In under a day, Tay went from being friendly and casual (“Humans are super cool”) to downright scary (“Hitler was right and I hate Jews”). It was profoundly disturbing.

Francesca Rossi, an AI researcher at IBM, points out that we often encode principles regarding influences into societal norms, such as what age a child needs to be to watch an R-rated movie or whether they should learn evolution in school. “We need to decide to what extent the legal principles that we use to regulate humans can be used for machines,” she told me.

However, in some cases algorithms can alert us to bias in our society that we might not have been aware of, such as when we Google “grandma” and see only white faces. “There is a great potential for machines to alert us to bias,” Rossi notes. “We need to not only train our algorithms but also be open to the possibility that they can teach us about ourselves.”…

Another issue we will have to contend with is deciding not only what ethical principles to encode in artificial intelligences but also how they are coded. As noted above, “Thou shalt not kill” is, for the most part, a strict principle; only in a few rare cases, such as for the Secret Service or a soldier, does it become more like a preference that is greatly affected by context….
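
One hypothetical way that distinction could be coded is to treat strict principles as hard constraints that veto an action outright, and preferences as weighted scores adjusted by context. The rules, names and weights below are invented purely for illustration.

```python
# A toy illustration of strict principles (vetoes) versus context-
# sensitive preferences (score adjustments). Nothing here reflects
# any real system's rules.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    causes_harm: bool
    is_deceptive: bool
    context: str  # e.g. "civilian", "soldier_at_war"

HARD_EXEMPT_CONTEXTS = {"soldier_at_war", "secret_service"}

def permitted(action: Action) -> bool:
    """Strict principle: harm is vetoed outside narrow exempt contexts."""
    return not (action.causes_harm and action.context not in HARD_EXEMPT_CONTEXTS)

def score(action: Action) -> float:
    """Soft preference: deception lowers the score but never vetoes."""
    penalty = 0.5 if action.is_deceptive else 0.0  # context-dependent weight
    return 1.0 - penalty

a = Action("white_lie", causes_harm=False, is_deceptive=True, context="civilian")
print(permitted(a), score(a))  # True 0.5: allowed, but dispreferred
```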

As pervasive as artificial intelligence is set to become in the near future, the responsibility rests with society as a whole. Put simply, we need to take the standards by which artificial intelligences will operate just as seriously as those that govern how our political systems operate and how our children are educated.

It is a responsibility that we cannot shirk….(More)

Make Democracy Great Again: Let’s Try Some ‘Design Thinking’


Ken Carbone in the Huffington Post: “Allow me to begin with the truth. I’ve never studied political science, run for public office nor held a position in government. For the last forty years I’ve led a design agency working with enduring brands across the globe. As with any experienced person in my profession, I have used research, deductive reasoning, logic and “design thinking“ to solve complex problems and create opportunities. Great brands that are showing their age turn to our agency to get back on course. In this light, I believe American democracy is a prime target for some retooling….

The present campaign cycle has left many voters wondering how such divisiveness and national embarrassment could be happening in the land of the free and home of the brave. This could be viewed as symptomatic of deeper structural problems in our tradition-bound, 240-year-old democracy. Great brands operate on an “innovate or die” model to ensure success. The continual improvement of how a business operates and adapts to market conditions is a sound and critical practice.

Although the current election frenzy will soon be over, I want to examine three challenges to our election process and propose possible solutions for consideration. I’ll use the same diagnostic thinking I use with major corporations:

Term Limits…

Voting and Voter registration…

Political Campaigns…

In June of this year I attended the annual leadership conference of AIGA, the professional association for design, in Raleigh, NC. A provocative question posed to a select group of designers was “What would you do if you were Secretary of Design?” The responses addressed issues concerning positive social change, education and Veterans Affairs. The audience was full of several hundred trained professionals whose everyday problem-solving methods encourage divergent thinking to explore many solutions (possible or impossible) and then use convergent thinking to select and realize the best resolution. This is the very definition of “design thinking.” That leads to progress….(More)”.

Innovation Labs: 10 Defining Features


Essay by Lidia Gryszkiewicz, Tuukka Toivonen, & Ioanna Lykourentzou: “Innovation labs, with their aspirations to foster systemic change, have become a mainstay of the social innovation scene. Used by city administrations, NGOs, think tanks, and multinational corporations, labs are becoming an almost default framework for collaborative innovation. They have resonance in and across myriad fields: London’s pioneering Finance Innovation Lab, for example, aims to create a system of finance that benefits “people and planet”; the American eLab is searching for the future of the electricity sector; and the Danish MindLab helps the government co-create better social services and solutions. Hundreds of other, similar initiatives around the world are taking on a range of grand challenges (or, if you prefer, wicked problems) and driving collective social impact.

Yet for all their seeming popularity, labs face a basic problem that closely parallels the predicament of hub organizations: There is little clarity on their core features, and few shared definitions exist that would make sense amid their diverse themes and settings. …

Building on observations previously made in the SSIR and elsewhere, we contribute to the task of clarifying the logic of modern innovation labs by distilling 10 defining features. …

1. Imposed but open-ended innovation themes…

2. Preoccupation with large innovation challenges…

3. Expectation of breakthrough solutions…

4. Heterogeneous participants…

5. Targeted collaboration…

6. Long-term perspectives…

7. Rich innovation toolbox…

8. Applied orientation…

9. Focus on experimentation…

10. Systemic thinking…

In a recent academic working paper, we condense the above into this definition: An innovation lab is a semi-autonomous organization that engages diverse participants—on a long-term basis—in open collaboration for the purpose of creating, elaborating, and prototyping radical solutions to pre-identified systemic challenges…(More)”