Does protest really work in cosy democracies?


Steve Crawshaw at LSE Impact Blog: “…If it is possible for peaceful crowds to force the collapse of the Berlin Wall or to unseat a Mubarak, how easy should it be for protesters to persuade a democratically elected leader to retreat from “mere” bad policy? In truth, not easy at all. Two million marched in the UK against the Iraq War in 2003 – and it made not a blind bit of difference to Tony Blair’s determination to proceed with a war that the UN Secretary-General described as illegal. Blair was re-elected two years later.

After the inauguration of Donald Trump in January 2017, millions took part in the series of Women’s Marches in the United States and around the world. It seemed – it was – a powerful defining moment. And yet, at least in the short term, those remarkable protests were water off the presidential duck’s back. His response was mockery. In some respects, Trump could afford to mock. A man who has received 63 million votes is in a stronger position than the unelected leader who has to threaten or use violence to stay in power.

And yet.

One thing that protest in authoritarian and democratic contexts has in common is that its impact – including delayed impact – remains uncertain, both for those who protest and those who are protested against.

Vaclav Havel argued that it was worth “living in truth” – speaking truth to power – even without any certainty of outcome. “Those that say individuals are not capable of changing anything are only looking for excuses.” In that context, what is perhaps most unacceptable is to mock those who take risks and seek change. Lord Charles Powell, former adviser to Margaret Thatcher, for example, explained to the umbrella protesters in Hong Kong in 2014 that they were foolish and naive. They should, he told them, learn to live with the “small black cloud” of anti-democratic pressures from Beijing. The protesters failed to heed Powell’s complacent message. In the words of Joshua Wong, on his way back to jail earlier in 2017: “You can lock up our bodies, but not our minds.”

Scepticism and failure are linked, as the Egyptian activist Asmaa Mahfouz made clear in a powerful video which helped trigger the uprising in 2011. The 26-year-old declared: “Whoever says it is not worth it because there will only be a handful of people, I want to tell him, ‘You are the reason for this.’ Sitting at home and just watching us on the news or Facebook leads to our humiliation.” The video went viral. Millions went out. The rest was history.

Even in a democracy, that same it-can’t-be-done logic sucks us in more often, perhaps, than we realize….(More)”.

Growing the artificial intelligence industry in the UK


Summary from an independent review, carried out by Professor Dame Wendy Hall and Jérôme Pesenti: “Increased use of Artificial Intelligence (AI) can bring major social and economic benefits to the UK. With AI, computers can analyse and learn from information with higher accuracy and speed than humans can. AI offers massive gains in efficiency and performance to most or all industry sectors, from drug discovery to logistics. AI is software that can be integrated into existing processes, improving them, scaling them, and reducing their costs, by making or suggesting more accurate decisions through better use of information.

It has been estimated that AI could add an additional US$814 billion (£630 billion) to the UK economy by 2035, increasing the annual growth rate of GVA from 2.5% to 3.9%.

Our vision is for the UK to become the best place in the world for businesses developing and deploying AI to start, grow and thrive, to realise all the benefits the technology offers….

Key factors have combined to increase the capability of AI in recent years, in particular:

  • New and larger volumes of data
  • Supply of experts with the specific high-level skills
  • Availability of increasingly powerful computing capacity

The barriers to achieving performance have fallen significantly, and continue to fall.

To continue developing and applying AI, the UK will need to increase ease of access to data in a wider range of sectors. This Review recommends:

  • Development of data trusts, to improve trust and ease around sharing data
  • Making more research data machine readable
  • Supporting text and data mining as a standard and essential tool for research.
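
On the last of these, a minimal, hypothetical sketch of what routine text and data mining involves – here, simple term counting over research abstracts in Python; the sample documents are invented for illustration:

```python
# A minimal, hypothetical text-and-data-mining sketch: term frequencies
# across a tiny corpus of research abstracts (sample text is invented).
from collections import Counter
import re

abstracts = [
    "Machine learning improves drug discovery pipelines.",
    "Logistics firms use machine learning for route planning.",
]

counts = Counter()
for doc in abstracts:
    # tokenize on lowercase alphabetic runs and tally occurrences
    counts.update(re.findall(r"[a-z]+", doc.lower()))

print(counts.most_common(5))  # the most frequent terms across the corpus
```

Machine-readable research data is what makes even this trivial analysis possible at scale, which is why the review pairs the two recommendations.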

Skilled experts are needed to develop AI, and they are in short supply. To develop more AI, the UK will need a larger workforce with deep AI expertise, and more development of lower-level skills to work with AI. …

Increasing uptake of AI means increasing demand as well as supply through a better understanding of what AI can do and where it could be applied. This review recommends:

  • An AI Council to promote growth and coordination in the sector
  • Guidance on how to explain decisions and processes enabled by AI
  • Support for export and inward investment
  • Guidance on successfully applying AI to drive improvements in industry
  • A programme to support public sector use of AI
  • Funded challenges around data held by public organisations.

Our work has indicated that action in these areas could deliver a step-change improvement in the growth of UK AI. This report makes the 18 recommendations listed in full below, which describe how Government, industry and academia should work together to keep the UK among the world leaders in AI…(More)”

Colony


About Colony: “We believe that the most successful organizations of the future will be open.

Openness is not about open plan offices or ‘20% time’. It’s about how decisions get made, how labor is divided, and who controls the purse strings.

In an open organization, you’re empowered to do the work you care about, not just what you’re told to do. Decisions are made openly and transparently. Influence is earned by consistently demonstrating just how damn good you are. It means everyone’s incentives are aligned, because ownership is open to all.

And that means opportunity is open to all, and a new world of possibility opens up.

Colony is infrastructure for the future of work: self-organizing companies that run via software, not paperwork….

Infrastructure should be impartial and reliable.

One organization should not be beholden to, or have to trust, another in order to confidently conduct its business.

That’s why the Colony protocol is built as open source smart contracts on the Ethereum network….(More)”
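
As a rough illustration of that impartiality – a hedged sketch using web3.py with placeholder values, not Colony’s actual contracts or addresses – any party can read the same public state from an Ethereum contract and get the same answer, without trusting the other:

```python
# A hedged sketch (not Colony's actual contracts): reading public state
# from an Ethereum smart contract with web3.py. The RPC endpoint and
# contract address below are placeholders, not real deployments.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://example-node.invalid"))  # placeholder endpoint

ERC20_ABI = [{  # minimal ABI fragment for a standard token's totalSupply()
    "name": "totalSupply", "type": "function", "inputs": [],
    "outputs": [{"name": "", "type": "uint256"}],
    "stateMutability": "view",
}]

token = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",  # placeholder
    abi=ERC20_ABI,
)

# Every observer querying the chain sees the identical value: the
# contract's state, not any one organization's ledger, is authoritative.
print(token.functions.totalSupply().call())
```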

Civil, the blockchain-based journalism marketplace, is building its first batch of publications


“Civil is an idea that has started to turn heads, both among investors and within journalism circles…. It’s also attracted its first publication, Popula, an alternative news and politics site run by journalist… Maria Bustillos, and joined by Sasha Frere-Jones, Ryan Bradley, Aaron Bady, and other big-name reporters. The site goes live in January.

Built on top of blockchain (the same technology that underpins bitcoin), Civil promises to use the technology to build decentralized marketplaces for readers and journalists to work together to fund coverage of topics that interest them, or of topics in the public interest. Readers will support reporters using “CVL” tokens, Civil’s cryptocurrency, giving them a speculative stake in the currency that will — hopefully — increase in value as more people buy in over time. This, Civil hopes, will encourage more people to invest in the marketplaces, creating a self-sustaining system that will help fund more reporting…

Civil is also pitching its technology as a way for journalists around the world to fight censorship. Because all changes made to the blockchain are public, it’s impossible for third parties — governmental or otherwise — to alter the information on it without notice. “No one will be able to remove the record or prevent her organization from publishing what it wants to publish,” said Matthew Iles, Civil’s founder, referring to Popula. He also pointed to the other key parts of Civil’s value proposition, which include direct connections with readers, ease of access to financing, and a reduced reliance on third-party tech companies….(More)”.
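
The tamper-evidence claim rests on a simple property of hash chains. The toy Python sketch below illustrates the general mechanism (not Civil’s implementation): each block commits to the hash of its predecessor, so altering any past entry breaks every later link.

```python
# A toy hash chain (an illustration, not Civil's implementation) showing
# why on-chain records are tamper-evident: each block commits to the
# hash of its predecessor, so altering any entry is detectable.
import hashlib

def block_hash(prev_hash: str, content: str) -> str:
    return hashlib.sha256((prev_hash + content).encode()).hexdigest()

articles = ["story one", "story two", "story three"]
chain = []
prev = "0" * 64  # genesis value
for text in articles:
    prev = block_hash(prev, text)
    chain.append(prev)

# Tampering with the first article changes its hash, which no longer
# matches what the second block committed to.
tampered = block_hash("0" * 64, "story one (censored)")
print(tampered == chain[0])  # False: the alteration cannot pass unnoticed
```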

The Unexpected Power of Google-Doc Activism


At The Cut: “Earlier this month, after major news outlets published detailed allegations of abuse and harassment by Harvey Weinstein, an anonymous woman created a public Google spreadsheet titled Shitty Media Men. It was a space for other anonymous women to name men in media who had exhibited bad behavior ranging from sleazy DMs to rape. There were just a few rules: “Please never name an accuser,” it said at the top. “Please never share this spreadsheet with a man.” The document was live for less than 48 hours, in which time it was shared with dozens of women — and almost certainly a few men.

The goal of the document was not public accountability, according to its creator. It was to privately warn other women, especially those who are not well connected in the industry, about which men in their profession to avoid. This is information women have always shared among themselves, at after-work drinks and in surreptitious chat messages, but the Google Doc sought to collect it in a way that transcended any one woman’s immediate social network. And because the goal was to occupy a kind of middle ground — not public, not quite private either — with little oversight, it made perfect sense that the information appeared as a Google Doc.

Especially in the year since the election, the shared Google Doc has become a familiar way station on the road to collective political action. Shared documents are ideal for collecting resources in one place quickly and easily, without gatekeepers, because they’re free and easy to use. They have been used to crowdsource tweets and hashtags for pushing back against Trump’s first State of the Union address, to spread information on local town-hall events with members of Congress, and to collect vital donation and evacuee info for people affected by the fires in Northern California. A shared Google Doc allows collaborators to work together across time zones and is easy to update as circumstances change.

But perhaps most notably, a Google Doc can be technically public while functionally quite private, allowing members of a like-minded community to reach beyond their immediate friends and collaborators while avoiding the abuse and trolling that comes with publishing on other platforms….(More)”.

Are you doing what’s needed to get the state to respond to its citizens? Or are you part of the problem?


Vanessa Herringshaw at Making All Voices Count: “…I’ve been reading over the incredibly rich and practically orientated research and practice papers already on the Making All Voices Count website, and some of those coming out soon. There’s a huge amount of useful and challenging learning, and I’ll be putting out a couple of papers summarising some important threads later this year.

But as different civic tech and accountability actors prepare to come together in Brighton for Making All Voices Count’s final learning event later this week, I’m going to focus here on three things that really stood out and would benefit from the kind of ‘group grappling’ that such a gathering can best support. And I aim to be provocative!

  1. Improving state responsiveness to citizens is a complex business – even more than we realised – and a lot more complex than most interventions are designed to address. If we don’t address this complexity, our interventions won’t work. And that risks making things worse, not better.
  2. It’s clear we need to make more of a shift from narrow, ‘tactical’ approaches to more complex, systems-based ‘strategic’ approaches. Thinking is developing on how to do this in theory. But it’s not clear that our current institutional approaches will support, or even allow, a major shift in practice.
  3. So when we each look at our individual roles and those of our organisations, are we part of the solution, or part of the problem?

Let’s look at each of these in turn….(More)”

Reboot for the AI revolution


Yuval Noah Harari in Nature: “The ongoing artificial-intelligence revolution will change almost every line of work, creating enormous social and economic opportunities — and challenges. Some believe that intelligent computers will push humans out of the job market and create a new ‘useless class’; others maintain that automation will generate a wide range of new human jobs and greater prosperity for all. Almost everybody agrees that we should take action to prevent the worst-case scenarios….

Governments might decide to deliberately slow down the pace of automation, to lessen the resulting shocks and allow time for readjustments. But it will probably be both impossible and undesirable to prevent automation and job loss completely. That would mean giving up the immense positive potential of AI and robotics. If self-driving vehicles drive more safely and cheaply than humans, it would be counterproductive to ban them just to protect the jobs of taxi and lorry drivers.

A more sensible strategy is to create new jobs. In particular, as routine jobs are automated, opportunities for new non-routine jobs will mushroom. For example, general physicians who focus on diagnosing known diseases and administering familiar treatments will probably be replaced by AI doctors. Precisely because of that, there will be more money to pay human experts to do groundbreaking medical research, develop new medications and pioneer innovative surgical techniques.

This calls for economic entrepreneurship and legal dexterity. Above all, it necessitates a revolution in education…Creating new jobs might prove easier than retraining people to fill them. A huge useless class might appear, owing to both an absolute lack of jobs and a lack of relevant education and mental flexibility….

With insights gleaned from early warning signs and test cases, scholars should strive to develop new socio-economic models. The old ones no longer hold. For example, twentieth-century socialism assumed that the working class was crucial to the economy, and socialist thinkers tried to teach the proletariat how to translate its immense economic power into political clout. In the twenty-first century, if the masses lose their economic value they might have to struggle against irrelevance rather than exploitation….The challenges posed in the twenty-first century by the merger of infotech and biotech are arguably bigger than those thrown up by steam engines, railways, electricity and fossil fuels. Given the immense destructive power of our modern civilization, we cannot afford more failed models, world wars and bloody revolutions. We have to do better this time….(More)”

The Unexamined Algorithm Is Not Worth Using


Ruben Mancha & Haslina Ali at Stanford Social Innovation Review: “In 1983, at the height of the Cold War, just one man stood between an algorithm and the outbreak of nuclear war. Stanislav Petrov, a colonel of the Soviet Air Defence Forces, was on duty in a secret command center when early-warning alarms went off indicating the launch of intercontinental ballistic missiles from an American base. The systems reported that the alarm was of the highest possible reliability. Petrov’s role was to advise his superiors on the veracity of the alarm that, in turn, would affect their decision to launch a retaliatory nuclear attack. Instead of trusting the algorithm, Petrov went with his gut and reported that the alarm was a malfunction. He turned out to be right.

This historical nugget represents an extreme example of the effect that algorithms have on our lives. The detection algorithm, it turns out, mistook the sun’s reflection for a missile launch. It is a sobering thought that a poorly designed or malfunctioning algorithm could have changed the course of history and resulted in millions of deaths….

We offer five recommendations to guide the ethical development and evaluation of algorithms used in your organization:

  1. Consider ethical outcomes first, speed and efficiency second. Organizations seeking speed and efficiency through algorithmic automation should remember that customer value comes through higher strategic speed, not higher operational speed. When implementing algorithms, organizations should never forget that their ultimate goal is creating customer value, and fast yet potentially unethical algorithms undermine that objective.
  2. Make ethical guiding principles salient to your organization. Your organization should reflect on the ethical principles guiding it and convey them clearly to employees, business partners, and customers. A corporate social responsibility framework is a good starting point for any organization ready to articulate its ethical principles.
  3. Employ programmers well versed in ethics. The computer engineers responsible for designing and programming algorithms should understand the ethical implications of the products of their work. While some ethical decisions may seem intuitive (such as do not use an algorithm to steal data from a user’s computer), most are not. The study of ethics and the practice of ethical inquiry should be part of every coding project.
  4. Interrogate your algorithms against your organization’s ethical standards. Through careful evaluation of your algorithms’ behavior and outcomes, your organization can identify those circumstances, real or simulated, in which they do not meet its ethical standards (a minimal sketch of such an audit follows this list).
  5. Engage your stakeholders. Transparently share with your customers, employees, and business partners details about the processes and outcomes of your algorithms. Stakeholders can help you identify and address ethical gaps….(More).
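
What might “interrogating” an algorithm look like in practice? One common starting point – a hypothetical sketch, not a prescription from the article – is to compare a model’s outcomes across groups and flag gaps that exceed a tolerance the organization itself has set. All data and the threshold below are invented.

```python
# A minimal, hypothetical audit: compare favourable-outcome rates across
# two groups against an organization's own tolerance. Data are invented.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]   # 1 = favourable outcome
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

def rate(group: str) -> float:
    # favourable-outcome rate for one group
    picks = [d for d, g in zip(decisions, groups) if g == group]
    return sum(picks) / len(picks)

gap = abs(rate("a") - rate("b"))     # 0.60 vs 0.40 -> gap of 0.20
MAX_GAP = 0.1  # hypothetical standard set by the organization
print(f"gap={gap:.2f}:", "PASS" if gap <= MAX_GAP else "FLAG FOR REVIEW")
```

Here the invented model would be flagged for human review – the kind of circumstance, “real or simulated,” that recommendation 4 asks organizations to surface.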

A Systematic Scoping Review of the Choice Architecture Movement: Towards Understanding When and Why Nudges Work


Barnabas Imre Szaszi et al. in the Journal of Behavioral Decision Making: “In this paper, we provide a domain-general scoping review of the nudge movement by reviewing 422 choice architecture interventions in 156 empirical studies. We report the distribution of the studies across countries, years, domains, subdomains of applicability, intervention types, and the moderators associated with each intervention category to review the current state of the nudge movement. Furthermore, we highlight certain characteristics of the studies and of experimental and reporting practices which can hinder the accumulation of evidence in the field. Specifically, we found that 74% of the studies were mainly motivated to assess the effectiveness of the interventions in one specific setting, while only 24% of the studies focused on the exploration of moderators or underlying processes. We also observed that only 7% of the studies applied power analysis, 2% used guidelines aiming to improve the quality of reporting, no study in our database was preregistered, and the intervention nomenclatures used were non-exhaustive and often had overlapping categories. Building on our current observations and proposed solutions from other fields, we provide directly applicable recommendations for future research to support the evidence accumulation on why and when nudges work….(More)”.
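
For readers unfamiliar with the power analysis the authors found missing in 93% of studies, here is a minimal sketch of what it involves for a two-arm nudge experiment, using statsmodels; the assumed effect size is illustrative, not a figure from the paper.

```python
# A minimal power analysis for a two-arm nudge experiment, of the kind
# the review found in only 7% of studies. The effect size is an
# illustrative assumption, not a value taken from the paper.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_arm = analysis.solve_power(
    effect_size=0.2,  # assumed small standardized effect (Cohen's d)
    alpha=0.05,       # significance threshold
    power=0.8,        # desired probability of detecting the effect
)
print(f"participants needed per arm: {n_per_arm:.0f}")  # roughly 394
```

Running this before data collection tells a nudge researcher whether the planned sample can plausibly detect the effect at all – exactly the check the review argues is routinely skipped.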

Crowdsourced Morality Could Determine the Ethics of Artificial Intelligence


Dom Galeon in Futurism: “As artificial intelligence (AI) development progresses, experts have begun considering how best to give an AI system an ethical or moral backbone. A popular idea is to teach AI to behave ethically by learning from decisions made by the average person.

To test this assumption, researchers from MIT created the Moral Machine. Visitors to the website were asked to make choices regarding what an autonomous vehicle should do when faced with rather gruesome scenarios. For example, if a driverless car was being forced toward pedestrians, should it run over three adults to spare two children? Save a pregnant woman at the expense of an elderly man?

The Moral Machine was able to collect a huge swath of this data from random people, so Ariel Procaccia from Carnegie Mellon University’s computer science department decided to put that data to work.

In a new study published online, he and Iyad Rahwan — one of the researchers behind the Moral Machine — taught an AI using the Moral Machine’s dataset. Then, they asked the system to predict how humans would want a self-driving car to react in similar but previously untested scenarios….
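
As a toy illustration of the underlying idea – predicting crowd moral preferences from scenario features – here is a logistic-regression sketch (a simplification, not the study’s actual method; all data and features are invented):

```python
# A toy sketch (not the study's actual method) of learning moral
# preferences from crowd choices. Each row encodes feature differences
# between two options (people spared, children spared); the label records
# which option respondents chose. All data are invented for illustration.
from sklearn.linear_model import LogisticRegression

X = [
    [ 1,  0],   # option A spares one more person, no child difference
    [-1,  1],   # option A spares one fewer person, but one more child
    [ 2, -1],
    [ 0,  1],
    [-2,  0],
    [ 1,  1],
]
y = [1, 1, 1, 1, 0, 1]  # 1 = respondents chose option A

model = LogisticRegression().fit(X, y)

# Predict the crowd's likely choice in an unseen dilemma:
print(model.predict([[1, -1]]))  # spare more people vs. spare a child
```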

This idea of having to choose between two morally problematic outcomes isn’t new. Ethicists even have a name for it: the doctrine of double effect. However, having to apply the concept to an artificially intelligent system is something humankind has never had to do before, and numerous experts have shared their opinions on how best to go about it.

OpenAI co-chairman Elon Musk believes that creating an ethical AI is a matter of coming up with clear guidelines or policies to govern development, and governments and institutions are slowly heeding Musk’s call. Germany, for example, crafted the world’s first ethical guidelines for self-driving cars. Meanwhile, DeepMind, the AI company owned by Google parent Alphabet, now has an ethics and society unit.

Other experts, including a team of researchers from Duke University, think that the best way to move forward is to create a “general framework” that describes how AI will make ethical decisions….(More)”.