Prosperity: Better Business Makes the Greater Good


Book by Colin Mayer: “What is a business for? On day one of an economics course a new student is taught the answer: to maximize shareholder profit. But this single idea, which pervades all our thinking about the role of the corporation, is fundamentally wrong, argues Colin Mayer. Constraining the firm to a single narrow objective has had wide-ranging and damaging consequences: economic, environmental, political, and social.

Prosperity challenges the fundamentals of business thinking. It also sets out a positive new agenda, demonstrating that the corporation is in a unique and powerful position to promote economic and social wellbeing in its fullest sense, for customers and future generations as well as for shareholders.

Professor and former Dean of the Saïd Business School in Oxford, Mayer is a leading figure in the global discussion about the purpose and role of the corporation. In Prosperity, he presents a radical and carefully considered agenda for corporations themselves, and for the regulatory frameworks that will enable them to fulfil this purpose. Drawing together insights from business, law, economics, science, philosophy, and history, he shows how the corporation can realize its full potential to contribute to the economic and social wellbeing of the many, not just the few.

Prosperity is as much a discussion of how to create and run successful businesses as it is a guide to policymaking to fix the broken system….(More)”.

The Theory and Practice of Social Machines


Book by Nigel Shadbolt, David De Roure, Kieron O’Hara and Wendy Hall: “Social machines are a type of network connected by interactive digital devices made possible by the ubiquitous adoption of technologies such as the Internet, the smartphone, social media and the read/write World Wide Web, connecting people at scale to document situations, cooperate on tasks, exchange information, or even simply to play. Existing social processes may be scaled up, and new social processes enabled, to solve problems, augment reality, create new sources of value, and disrupt existing practice.

This book considers what talents one would need to understand or build a social machine, describes the state of the art, and speculates on the future, from the perspective of the EPSRC project SOCIAM – The Theory and Practice of Social Machines. The aim is to develop a set of tools and techniques for investigating, constructing and facilitating social machines, to enable us to narrow down pragmatically what is becoming a wide space, by asking ‘when will it be valuable to use these methods on a sociotechnical system?’ The systems for which the use of these methods adds value are social machines in which there is rich person-to-person communication, and where a large proportion of the machine’s behaviour is constituted by human interaction….(More)”.

Making NHS data work for everyone


Reform: This report looks at how private sector companies access and use NHS data for research or for product and service development purposes….

The private sector is an important partner to the NHS and plays a crucial role in the development of healthcare technologies that use data collected by hospitals or GP practices. It provides the skills and know-how to develop data-driven tools that can be used to improve patient care. However, this is not a one-sided exchange: the NHS makes the data available to build these tools and offers medical expertise to make sense of the data. This is known as the “value exchange”. Our research found a lack of clarity over what a fair value exchange looks like. This lack of clarity, combined with the absence of national guidance on the types of partnerships that could be developed, has led to a patchwork of approaches on the ground….

Knowing what the “value exchange” is between patients, the NHS, and industry allows for a more informed conversation about what constitutes a fair partnership when data is accessed to create a product or service.

WHAT NEEDS TO CHANGE?

  1. Engage with the public
  2. A national strategy
  3. Access to good quality data
  4. Commercial and legal skills…(More)

Cybersecurity of the Person


Paper by Jeff Kosseff: “U.S. cybersecurity law is largely an outgrowth of the early-aughts concerns over identity theft and financial fraud. Cybersecurity laws focus on protecting identifiers such as driver’s licenses and social security numbers, and financial data such as credit card numbers. Federal and state laws require companies to protect this data and notify individuals when it is breached, and impose civil and criminal liability on hackers who steal or damage this data. In this paper, I argue that our current cybersecurity laws are too narrowly focused on financial harms. While such concerns remain valid, they are only one part of the cybersecurity challenge that our nation faces.

Too often overlooked by the cybersecurity profession are the harms to individuals, such as revenge pornography and online harassment. Our legal system typically addresses these harms through retrospective criminal prosecution and civil litigation, both of which face significant limits. Accounting for such harms in our conception of cybersecurity will help to better align our laws with these threats and reduce the likelihood of the harms occurring….(More)”.

We Need an FDA For Algorithms


Interview with Hannah Fry on the promise and danger of an AI world by Michael Segal: “…Why do we need an FDA for algorithms?

It used to be the case that you could just put any old colored liquid in a glass bottle and sell it as medicine and make an absolute fortune. And then not worry about whether or not it’s poisonous. We stopped that from happening because, well, for starters it’s kind of morally repugnant. But also, it harms people. We’re in that position right now with data and algorithms. You can harvest any data that you want, on anybody. You can infer any data that you like, and you can use it to manipulate them in any way that you choose. And you can roll out an algorithm that genuinely makes massive differences to people’s lives, both good and bad, without any checks and balances. To me that seems completely bonkers. So I think we need something like the FDA for algorithms. A regulatory body that can protect the intellectual property of algorithms, but at the same time ensure that the benefits to society outweigh the harms.

Why is the regulation of medicine an appropriate comparison?

If you swallow a bottle of colored liquid and then you keel over the next day, then you know for sure it was poisonous. But there are much more subtle things in pharmaceuticals that require expert analysis to be able to weigh up the benefits and the harms. To study the chemical profile of these drugs that are being sold and make sure that they actually are doing what they say they’re doing. With algorithms it’s the same thing. You can’t expect the average person in the street to study Bayesian inference or be totally well read in random forests, and have the kind of computing prowess to look up a code and analyze whether it’s doing something fairly. That’s not realistic. Simultaneously, you can’t have some code of conduct that every data science person signs up to, and agrees that they won’t tread over some lines. It has to be a government, really, that does this. It has to be government that analyzes this stuff on our behalf and makes sure that it is doing what it says it does, and in a way that doesn’t end up harming people.
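To make the kind of analysis Fry describes slightly more concrete, here is a minimal sketch of one check an expert auditor might run on an algorithm’s decisions: a demographic parity comparison of favourable-outcome rates across groups. The metric, the sample data, and the 0.8 review threshold are illustrative assumptions for this digest, not part of Fry’s interview or any regulator’s actual procedure.

```python
# A minimal, hypothetical fairness check: demographic parity, i.e. comparing
# the rate of favourable decisions across groups. Data and threshold are
# illustrative assumptions, not drawn from the interview.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, favourable) pairs -> favourable rate per group."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        favourable[group] += int(ok)
    return {g: favourable[g] / totals[g] for g in totals}

def demographic_parity_ratio(decisions):
    """Ratio of the lowest to the highest group rate (1.0 means perfect parity)."""
    rates = approval_rates(decisions)
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical audit sample: (group, was the decision favourable?)
sample = [("A", True), ("A", True), ("A", False), ("A", True),
          ("B", True), ("B", False), ("B", False), ("B", False)]

ratio, rates = demographic_parity_ratio(sample)
print(rates)                          # {'A': 0.75, 'B': 0.25}
print(f"parity ratio = {ratio:.2f}")  # 0.33
# A common (and contested) rule of thumb flags ratios below 0.8 for expert review.
if ratio < 0.8:
    print("Flag for review: favourable-decision rates differ markedly across groups.")
```

Even this toy check illustrates her point: choosing the metric, the groups to compare, and the threshold are expert judgment calls that an individual consumer cannot reasonably be expected to make, which is the gap a regulatory body would fill.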

How did you come to write a book about algorithms?

Back in 2011, we had these really bad riots in London. I’d been working on a project with the Metropolitan Police, trying mathematically to look at how these riots had spread and to use algorithms to ask how the police could have done better. I went to go and give a talk in Berlin about this paper we’d published about our work, and they completely tore me apart. They were asking questions like, “Hang on a second, you’re creating this algorithm that has the potential to be used to suppress peaceful demonstrations in the future. How can you morally justify the work that you’re doing?” I’m kind of ashamed to say that it just hadn’t occurred to me at that point in time. Ever since, I have really thought a lot about the point that they made. And started to notice around me that other researchers in the area weren’t necessarily treating the data that they were working with, and the algorithms that they were creating, with the ethical concern they really warranted. We have this imbalance where the people who are making algorithms aren’t talking to the people who are using them. And the people who are using them aren’t talking to the people who are having decisions made about their lives by them. I wanted to write something that united those three groups….(More)”.

Using insights from behavioral economics to nudge individuals towards healthier choices when eating out


Paper by Stéphane Bergeron, Maurice Doyon, Laure Saulais and JoAnne Labrecque: “Using a controlled experiment in a restaurant with naturally occurring clients, this study investigates how nudging can be used to design menus that guide consumers to make healthier choices. It examines the use of default options, focusing specifically on two types of defaults that can be found when ordering food in a restaurant: automatic and standard defaults. Both types of defaults significantly affected choices, but did not adversely impact the satisfaction of individual choices. The results suggest that menu design could effectively use non-informational strategies such as nudging to promote healthier individual choices without restricting the offer or reducing satisfaction….(More)”.

G20/OECD Compendium of good practices on the use of open data for Anti-corruption


OECD: “This compendium of good practices was prepared by the OECD at the request of the G20 Anti-corruption Working Group (ACWG), to raise awareness of the benefits of open data policies and initiatives in: 

  • fighting corruption,
  • increasing public sector transparency and integrity,
  • fostering economic development and social innovation.

This compendium provides an overview of initiatives for the publication and re-use of open data to fight corruption across OECD and G20 countries and underscores the impact that a digital transformation of the public sector can deliver in terms of better governance across policy areas.  The practices illustrate the use of open data as a way of fighting corruption and show how open data principles can be translated into concrete initiatives.

The publication is divided into three sections:

Section 1 discusses the benefits of open data for greater public sector transparency and performance, national competitiveness and social engagement, and how these initiatives contribute to greater public trust in government.

Section 2 highlights the preconditions necessary across different policy areas related to anti-corruption (e.g. open government, public procurement) to sustain the implementation of an “Open by default” approach. Such an approach could help governments move from a perspective focused on increasing access to public sector information to one that promotes the publication of open government data for re-use and value co-creation.

Section 3 presents the results of the OECD survey administered across OECD and G20 countries, good practices on publishing and re-using open data for anti-corruption in G20 countries, and lessons learned from the definition and implementation of these initiatives. This section also discusses the implications for broader national matters such as freedom of the press, and the involvement of key actors of the open data ecosystem (e.g. journalists and civil society organisations) as key partners in open data re-use for anti-corruption…(More)”.

Data Flow in the Smart City: Open Data Versus the Commons


Chapter by Richard Beckwith, John Sherry and David Prendergast in The Hackable City: “Much of the recent excitement around data, especially ‘Big Data,’ focuses on the potential commercial or economic value of data. How that data will affect people isn’t much discussed. People know that smart cities will deploy Internet-based monitoring and that flows of the collected data promise to produce new values. Less considered is that smart cities will be sites of new forms of citizen action—enabled by an ‘economy’ of data that will lead to new methods of collectivization, accountability, and control which, themselves, can provide both positive and negative values to the citizenry. Therefore, smart city design needs to consider not just measurement and publication of data but also the implications of city-wide deployment, data openness, and the possibility of unintended consequences if data leave the city….(More)”.

The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence


Blog by Julia Powles and Helen Nissenbaum: “Serious thinkers in academia and business have swarmed to the A.I. bias problem, eager to tweak and improve the data and algorithms that drive artificial intelligence. They’ve latched onto fairness as the objective, obsessing over competing constructs of the term that can be rendered in measurable, mathematical form. If the hunt for a science of computational fairness were restricted to engineers, it would be one thing. But given our contemporary exaltation of, and deference to, technologists, it has limited the entire imagination of ethics, law, and the media as well.
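For readers who have not seen what “measurable, mathematical form” looks like in this literature, two of the competing constructs the authors allude to can be written down in a few lines. The notation below is a standard textbook formulation (binary predictor, protected attribute, true outcome), offered here as context rather than taken from the blog itself.

```latex
% Two common formalizations of "fairness" for a binary predictor \hat{Y},
% protected attribute A, and true outcome Y (standard definitions, not from the blog).

% Demographic parity: equal rates of positive decisions across groups.
P(\hat{Y} = 1 \mid A = a) = P(\hat{Y} = 1 \mid A = b) \quad \text{for all groups } a, b

% Equalized odds: equal true- and false-positive rates across groups.
P(\hat{Y} = 1 \mid A = a, Y = y) = P(\hat{Y} = 1 \mid A = b, Y = y) \quad \text{for } y \in \{0, 1\}
```

When the underlying base rates differ across groups, a non-trivial classifier generally cannot satisfy both criteria at once, which is one concrete sense in which these constructs compete and why “solving” fairness by equation is harder than it sounds.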

There are three problems with this focus on A.I. bias. The first is that addressing bias as a computational problem obscures its root causes. Bias is a social problem, and seeking to solve it within the logic of automation is always going to be inadequate.

Second, even apparent success in tackling bias can have perverse consequences. Take the example of a facial recognition system that works poorly on women of color because of the group’s underrepresentation both in the training data and among system designers. Alleviating this problem by seeking to “equalize” representation merely co-opts designers in perfecting vast instruments of surveillance and classification.

When underlying systemic issues remain fundamentally untouched, the bias fighters simply render humans more machine readable, exposing minorities in particular to additional harms.

Third — and most dangerous and urgent of all — is the way in which the seductive controversy of A.I. bias, and the false allure of “solving” it, detracts from bigger, more pressing questions. Bias is real, but it’s also a captivating diversion.

What has been remarkably underappreciated is the key interdependence of the twin stories of A.I. inevitability and A.I. bias. Against the corporate projection of an otherwise sunny horizon of unstoppable A.I. integration, recognizing and acknowledging bias can be seen as a strategic concession — one that subdues the scale of the challenge. Bias, like job losses and safety hazards, becomes part of the grand bargain of innovation.

The reality that bias is primarily a social problem and cannot be fully solved technically becomes a strength, rather than a weakness, for the inevitability narrative. It flips the script. It absorbs and regularizes the classification practices and underlying systems of inequality perpetuated by automation, allowing relative increases in “fairness” to be claimed as victories — even if all that is being done is to slice, dice, and redistribute the makeup of those negatively affected by actuarial decision-making.

In short, the preoccupation with narrow computational puzzles distracts us from the far more important issue of the colossal asymmetry between societal cost and private gain in the rollout of automated systems. It also denies us the possibility of asking: Should we be building these systems at all?…(More)”.

Harnessing Digital Tools to Revitalize European Democracy


Article by Elisa Lironi: “…Information and communication technology (ICT) can be used to implement more participatory mechanisms and foster democratic processes. Often referred to as e-democracy, this use of ICT opens up a large range of very different possibilities for online engagement, including e-initiatives, e-consultations, crowdsourcing, participatory budgeting, and e-voting. Many European countries have started exploring ICT’s potential to reach more citizens at a lower cost and to tap into the so-called wisdom of the crowd, as governments attempt to earn citizens’ trust and revitalize European democracy by developing more responsive, transparent, and participatory decisionmaking processes.

For instance, when Anne Hidalgo was elected mayor of Paris in May 2014, one of her priorities was to make the city more collaborative by allowing Parisians to propose policy and develop projects together. In order to build a stronger relationship with the citizens, she immediately started to implement a citywide participatory budgeting project covering all types of policy issues. It started as a small pilot, with the city of Paris putting forward fifteen projects that could be funded with up to about 20 million euros and letting citizens vote on which projects to invest in, via ballot box or online. Parisians and local authorities deemed this experiment successful, so Hidalgo decided it was worth taking further, with more ideas and a bigger pot of money. Within two years, the level of participation grew significantly—from 40,000 voters in 2014 to 92,809 in 2016, representing 5 percent of the total urban population. Today, Paris Budget Participatif is an official platform that lets Parisians decide how to spend 5 percent of the investment budget from 2014 to 2020, amounting to around 500 million euros. In addition, the mayor introduced two e-democracy platforms—Paris Petitions, for e-petitions, and Idée Paris, for e-consultations. Citizens in the French capital now have multiple channels to express their opinions and contribute to the development of their city.

In Latvia, civil society has played a significant role in changing how legislative procedures are organized. ManaBalss (My Voice) is a grassroots NGO that creates tools for better civic participation in decisionmaking processes. Its online platform, ManaBalss.lv, is a public e-participation website that lets Latvian citizens propose, submit, and sign legislative initiatives to improve policies at both the national and municipal level. …

In Finland, the government itself introduced an element of direct democracy into the Finnish political system, through the 2012 Citizens’ Initiative Act (CI-Act) that allows citizens to submit initiatives to the parliament. …

Other civic tech NGOs across Europe have been developing and experimenting with a variety of digital tools to reinvigorate democracy. These include initiatives like Science For You (SCiFY) in Greece, Netwerk Democratie in the Netherlands, and the Citizens Foundation in Iceland, which got its start when citizens were asked to crowdsource their constitution in 2010.

Outside of civil society, several private tech companies are developing digital platforms for democratic participation, mainly at the local government level. One example is the Belgian start-up CitizenLab, an online participation platform that has been used by more than seventy-five municipalities around the world. The young founders of CitizenLab have used technology to innovate the democratic process by listening to what politicians need and including a variety of functions, such as crowdsourcing mechanisms, consultation processes, and participatory budgeting. Numerous other European civic tech companies have been working on similar concepts—Cap Collectif in France, Delib in the UK, and Discuto in Austria, to name just a few. Many of these digital tools have proven useful to elected local or national representatives….

While these initiatives are making a real impact on the quality of European democracy, most of the EU’s formal policy focus is on constraining the power of the tech giants rather than positively aiding digital participation….(More)”