Social Media Use in Crisis and Risk Communication: Emergencies, Concerns and Awareness


Open Access Book edited by Harald Hornmoen and Klas Backholm: “This book is about how different communicators – whether professionals, such as crisis managers, first responders and journalists, or private citizens and disaster victims – have used social media to communicate about risks and crises. It is also about how these very different actors can play a crucial role in mitigating or preventing crises. How can they use social media to strengthen their own and the public’s awareness and understanding of crises when they unfold? How can they use social media to promote resilience during crises and the ability to deal with the after-effects? Moreover, what can they do to avoid using social media in a manner that weakens the situation awareness of crisis workers and citizens, or obstructs effective emergency management?

The RESCUE (Researching Social Media and Collaborative Software Use in Emergency Situations) project, on which this book is based, has sought to enable a more efficient and appropriate use of social media among key communicators, such as journalists and government actors involved in crisis management. Through empirical studies, and by drawing on relevant theory, the collection aims to improve our understanding of how social media have been used in different types of risks and crises. Building on our empirical work, we provide research-based input into how social media can be used efficiently by different communicators in a way appropriate to the specific crisis and to the concerns of the public.

We address our questions by presenting new research-based knowledge on social media use during different crises: the terrorist attacks in Norway on 22 July 2011; the central European floods in Austria in 2013; and the West African Ebola outbreak in 2014. The social media platforms analysed include the most popular ones in the affected areas at the time of the crises: Twitter and Facebook. By addressing such different cases, the book will move the field of crisis communication in social media beyond individual studies towards providing knowledge which is valid across situations….(More)”.

Is Mass Surveillance the Future of Conservation?


Mallory Picket at Slate: “The high seas are probably the most lawless place left on Earth. They’re a portal back in time to the way the world looked for most of our history: fierce and open competition for resources and contested territories. Pirating continues to be a way to make a living.

It’s not a complete free-for-all—most countries require registration of fishing vessels and enforce environmental protocols. Cooperative agreements between countries oversee fisheries in international waters. But the best data available suggests that around 20 percent of the global seafood catch is illegal. This is an environmental hazard because unregistered boats evade regulations meant to protect marine life. And it’s an economic problem for fishermen who can’t compete with boats that don’t pay for licenses or follow the (often expensive) regulations. In many developing countries, local fishermen are outfished by foreign vessels coming into their territory and stealing their stock….

But Henri Weimerskirch, a French ecologist, has a cheap, low-impact way to monitor thousands of square miles a day in real time: He’s getting birds to do it (a project first reported by Hakai). Specifically, albatrosses, which have a 10-foot wingspan and can fly around the world in 46 days. The birds naturally congregate around fishing boats, hoping for an easy meal, so Weimerskirch is equipping them with GPS loggers that also have radar detection to pick up a ship’s radar (and confirm it is a ship, not an island) and a transmitter to send that data to authorities in real time. If it works, this should help in two ways: it will provide some information on the extent of unofficial fishing operations in the area, and because the loggers transmit their information in real time, the data can be used to notify French navy ships in the area to check out suspicious boats.
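The detection-and-alert logic described here can be pictured with a minimal sketch (all names, fields and thresholds below are hypothetical illustrations, not the project's actual firmware or data format): each logger reports a GPS fix plus a flag for whether a marine radar signal was detected, and radar hits that are not explained by a nearby island are forwarded as alerts.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class LoggerPing:
    bird_id: str
    timestamp: str        # ISO 8601, e.g. "2018-11-03T14:02:00Z"
    lat: float
    lon: float
    radar_detected: bool  # True if the logger picked up a marine radar signal

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def suspicious_pings(pings, known_islands, exclusion_km=20.0):
    """Keep radar hits that are not explained by a nearby island."""
    alerts = []
    for p in pings:
        if not p.radar_detected:
            continue
        near_island = any(
            haversine_km(p.lat, p.lon, ilat, ilon) < exclusion_km
            for ilat, ilon in known_islands
        )
        if not near_island:
            alerts.append(p)  # candidate vessel: forward to authorities for checking
    return alerts
```

In the real deployment the data would be relayed by satellite and assessed by the authorities; the sketch only shows the shape of the data and the filtering step.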

His team is getting ready to deploy about 80 birds in the south Indian Ocean this November.
The loggers attached around the birds’ legs are about the shape and size of a Snickers. The south Indian Ocean is a shared fishing zone, and nine countries, including France (courtesy of several small islands it claims ownership of, a vestige of colonialism), manage it together. But there are big problems with illegal fishing in the area, especially of the Patagonian toothfish (better known to consumers as Chilean seabass)….(More)”

The Use of Regulatory Sandboxes in Europe and Asia


Claus Christensen at Regulation Asia: “Global attention to money-laundering, terrorism financing and financial criminal practices has grown exponentially in recent years. As criminals constantly come up with new tactics, global regulations in the financial world are evolving all the time to try to keep up. At the same time, end users’ expectations are putting companies at commercial risk if they are not prepared to deliver outstanding and digital-first customer experiences through innovative solutions.

Among the many initiatives introduced by global regulators to address these two seemingly contradictory needs, regulatory sandboxes – closed environments that allow live testing of innovations by tech companies under the regulator’s supervision – are by far one of the most popular. As the CEO of a fast-growing regtech company working across both Asia and Europe, I have identified a few differences in how the regulators across different jurisdictions are engaging with the industry in general, and regulatory sandboxes in particular.

Since the launch of ‘Project Innovate’ in 2014, the UK’s FCA (Financial Conduct Authority) has won recognition for the success of its sandbox, where fintech companies can test innovative products, services and business models in a live market environment, while ensuring that appropriate safeguards are in place through temporary authorisation. The FCA advises companies, whether fintech startups or established banks, on which existing regulations might apply to their cutting-edge products.

So far, the sandbox has helped more than 500 companies, with 40+ firms receiving regulatory authorisation. Project Innovate has strengthened the FCA’s reputation for supporting initiatives that boost competition within financial services, which was part of the regulator’s post-financial-crisis agenda. The success of the initiative in fostering a fertile fintech environment is reflected in the growing number of UK-based challenger banks that are expanding their client bases across Europe. Following its success, the sandbox approach has gone global, with regulators around the world adopting a similar strategy for fintech innovation.

Across Europe, regulators are directly working with financial services providers and taking proactive measures to not only encourage the use of innovative technology in improving their systems, but also to boost adoption by others within the ecosystem…(More)”.

Don’t Believe the Algorithm


Hannah Fry at the Wall Street Journal: “The Notting Hill Carnival is Europe’s largest street party. A celebration of black British culture, it attracts up to two million revelers, and thousands of police. At last year’s event, the Metropolitan Police Service of London deployed a new type of detective: a facial-recognition algorithm that searched the crowd for more than 500 people wanted for arrest or barred from attending. Driving around in a van rigged with closed-circuit TVs, the police hoped to catch potentially dangerous criminals and prevent future crimes.

It didn’t go well. Of the 96 people flagged by the algorithm, only one was a correct match. Some errors were obvious, such as the young woman identified as a bald male suspect. In those cases, the police dismissed the match and the carnival-goers never knew they had been flagged. But many were stopped and questioned before being released. And the one “correct” match? At the time of the carnival, the person had already been arrested and questioned, and was no longer wanted.

Given the paltry success rate, you might expect the Metropolitan Police Service to be sheepish about its experiment. On the contrary, Cressida Dick, the highest-ranking police officer in Britain, said she was “completely comfortable” with deploying such technology, arguing that the public expects law enforcement to use cutting-edge systems. For Dick, the appeal of the algorithm overshadowed its lack of efficacy.

She’s not alone. A similar system tested in Wales was correct only 7% of the time: Of 2,470 soccer fans flagged by the algorithm, only 173 were actual matches. The Welsh police defended the technology in a blog post, saying, “Of course no facial recognition system is 100% accurate under all conditions.” Britain’s police force is expanding the use of the technology in the coming months, and other police departments are following suit. The NYPD is said to be seeking access to the full database of drivers’ licenses to assist with its facial-recognition program….(More)”.
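The “success rate” at issue in both trials is simply precision: the share of people flagged by the system who turned out to be genuine matches. A minimal back-of-the-envelope check using only the figures quoted above (not the forces' own evaluation code):

```python
def precision(true_matches: int, flagged: int) -> float:
    """Share of people flagged by the system who were genuine matches."""
    return true_matches / flagged

# Figures quoted in the article
notting_hill = precision(true_matches=1, flagged=96)      # about 1%
wales_trial  = precision(true_matches=173, flagged=2470)  # about 7%

print(f"Notting Hill carnival: {notting_hill:.1%} of flags were correct")
print(f"Wales trial:           {wales_trial:.1%} of flags were correct")
```

Precision alone says nothing about how many wanted people walked past undetected; that would require knowing the number of missed matches, a figure the article does not give.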

European science funders ban grantees from publishing in paywalled journals


Martin Enserink at Science: “Frustrated with the slow transition toward open access (OA) in scientific publishing, 11 national funding organizations in Europe turned up the pressure today. As of 2020, the group, which jointly spends about €7.6 billion on research annually, will require every paper it funds to be freely available from the moment of publication. In a statement, the group said it will no longer allow the 6- or 12-month delays that many subscription journals now require before a paper is made OA, and it won’t allow publication in so-called hybrid journals, which charge subscriptions but also make individual papers OA for an extra fee.

The move means grantees from these 11 funders—which include the national funding agencies in the United Kingdom, the Netherlands, and France as well as Italy’s National Institute for Nuclear Physics—will have to forgo publishing in thousands of journals, including high-profile ones such as Nature, Science, Cell, and The Lancet, unless those journals change their business model. “We think this could create a tipping point,” says Marc Schiltz, president of Science Europe, the Brussels-based association of science organizations that helped coordinate the plan. “Really the idea was to make a big, decisive step—not to come up with another statement or an expression of intent.”

The announcement delighted many OA advocates. “This will put increased pressure on publishers and on the consciousness of individual researchers that an ecosystem change is possible,” says Ralf Schimmer, head of Scientific Information Provision at the Max Planck Digital Library in Munich, Germany. Peter Suber, director of the Harvard Library Office for Scholarly Communication, calls the plan “admirably strong.” Many other funders support OA, but only the Bill & Melinda Gates Foundation applies similarly stringent requirements for “immediate OA,” Suber says. The European Commission and the European Research Council support the plan; although they haven’t adopted similar requirements for the research they fund, a statement by EU Commissioner for Research, Science and Innovation Carlos Moedas suggests they may do so in the future and urges the European Parliament and the European Council to endorse the approach….(More)”.

The UK’s Gender Pay Gap Open Data Law Has Flaws, But Is A Positive Step Forward


Article by Michael McLaughlin: “Last year, the United Kingdom enacted a new regulation requiring companies to report information about their gender pay gap—a measure of the difference in average pay between men and women. The new rules are a good example of how open data can drive social change. However, the regulations have produced some misleading statistics, highlighting the importance of carefully crafting reporting requirements to ensure that they produce useful data.

In the UK, nearly 11,000 companies have filed gender pay gap reports, which include the differences in both mean and median hourly pay between men and women, as well as the difference in bonuses. And the initial data reveals several interesting findings. Median pay for men is 11.8 percent higher than for women, on average, and nearly 87 percent of companies pay men more than women on average. In addition, over 1,000 firms had a median pay gap greater than 30 percent. The sectors with the highest pay gaps—construction, finance, and insurance—each pay men at least 20 percent more than women. A major reason for the gap is a lack of women in senior positions—UK women actually make more than men between the ages of 22 and 29. The total pay gap is also a result of more women holding part-time jobs.

However, as detractors note, the UK’s data can be misleading. For example, the data overstates the pay gap on bonuses because it does not adjust these figures for hours worked. More women work part-time than men, so it makes sense that women would receive less in bonus pay when they work less. The data also understates the pay gap because it excludes the high compensation of partners in organizations such as law firms, a group that includes few women. And it is important to note that—by definition—the pay gap data does not compare the wages of men and women working the same jobs, so the data says nothing about whether women receive equal pay for equal work.
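To make the measurement point concrete, here is a minimal sketch with invented toy figures (not the reported UK data) of how the headline median gap is computed and why a bonus gap that ignores hours worked can overstate the difference:

```python
from statistics import median

# Toy workforce: (hourly_pay, weekly_hours, annual_bonus). Numbers invented for illustration.
men   = [(22.0, 40, 2000), (30.0, 40, 4000), (18.0, 40, 1000)]
women = [(20.0, 40, 1800), (28.0, 25, 2500), (17.0, 20, 800)]

def median_hourly_gap(men, women):
    """Headline measure: gap between median hourly pay, as a share of men's median."""
    m = median(p for p, _, _ in men)
    w = median(p for p, _, _ in women)
    return (m - w) / m

def bonus_gap(men, women, per_hour=False):
    """Mean bonus gap; per_hour=True divides bonuses by hours worked first."""
    def mean_bonus(group):
        if per_hour:
            return sum(b / h for _, h, b in group) / len(group)
        return sum(b for _, _, b in group) / len(group)
    m, w = mean_bonus(men), mean_bonus(women)
    return (m - w) / m

print(f"Median hourly pay gap: {median_hourly_gap(men, women):.1%}")   # ~9.1%
print(f"Raw bonus gap:         {bonus_gap(men, women):.1%}")           # ~27.1%
print(f"Hours-adjusted bonus:  {bonus_gap(men, women, per_hour=True):.1%}")  # reverses sign
# With these toy numbers the raw bonus gap looks large, but once bonuses are
# scaled by hours worked the gap disappears -- the kind of distortion the
# critics describe. The real UK figures differ; only the mechanism is shown here.
```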

Still, publication of the data has sparked an important national conversation. Google searches in the UK for the phrase “gender pay gap” reached a 12-month high in the week enforcement of the regulations began, and major news sites like the Financial Times have provided significant coverage of the issue by analyzing the reported data. While it is too soon to tell whether the law will change employer behavior, such as businesses hiring more female executives, or employee behavior, such as women leaving companies or fields that pay less, countries with similar reporting requirements, such as Belgium, have seen the pay gap narrow following implementation of their rules.

Requiring companies to report this data to the government may be the only way to obtain gender pay gap data, because evidence suggests that the private sector will not produce this data on its own. Only 300 UK organizations joined a voluntary government program to report their gender pay gap in 2011, and as few as 11 actually published the data. Crowdsourced efforts, where women voluntarily report their pay, have also suffered from incomplete data. And even complete data does not illuminate variables such as why women may work in a field that pays less….(More)”.

Sharing the benefits: How to use data effectively in the public sector


Report by Sarah Timmis, Luke Heselwood and Eleonora Harwich (for Reform UK): “This report demonstrates the potential of data sharing to transform the delivery of public services and improve outcomes for citizens. It explores how government can overcome various challenges to ‘get data right’ and enable better use of personal data within and between public-sector organisations.

Ambition meets reality

Government is set on using data more effectively to help deliver better public services. Better use of data can improve the design, efficiency and outcomes of services. For example, sharing data digitally between GPs and hospitals can enable early identification of patients most at risk of hospital admission, which has reduced admissions by up to 30 per cent in Somerset. Bristol’s Homeless Health Service allows access to medical, psychiatric, social and prison data, helping to provide a clearer picture of the complex issues facing the city’s homeless population. However, government has not yet created a clear data infrastructure, which would allow data to be shared across multiple public services, meaning efforts on the ground have not always delivered results.

The data: sticking points

Several technical challenges must be overcome to create the right data infrastructure. Individual pieces of data must be presented in standard formats to enable sharing within and across services. Data quality can be improved at the point of collection, through better monitoring of quality and standards within public-sector organisations, and through data-curation processes. Personal data also needs to be presented in a consistent format so that, in certain instances, records can be linked to identify individuals. Interoperability issues and legacy systems act as significant barriers to data linking. The London Metropolitan Police alone use 750 different systems, many of which are incompatible. Technical solutions, such as Application Programming Interfaces (APIs), can be overlaid on top of legacy systems to improve interoperability and enable data sharing. However, this is only possible with the right standards and a solid new data model. To encourage competition and improve interoperability in the longer term, procurement rules should make interoperability a prerequisite for competing companies, allowing customers to combine the most appropriate products from different vendors.
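As a rough illustration of what such an overlay can look like (a minimal sketch with hypothetical field names and systems, not the schema of any organisation named in the report), an API layer translates each legacy system's records into one agreed standard format before anything is shared:

```python
from datetime import date

# Agreed standard record that every system maps into
def standard_record(person_id: str, surname: str, dob: date, source: str) -> dict:
    return {"person_id": person_id, "surname": surname, "dob": dob.isoformat(), "source": source}

# Adapters for two hypothetical legacy systems with incompatible layouts
def from_legacy_a(row: dict) -> dict:
    # Legacy A stores "SURNAME" in upper case and dates as DD/MM/YYYY strings
    d, m, y = (int(x) for x in row["DOB"].split("/"))
    return standard_record(row["ID"], row["SURNAME"].title(), date(y, m, d), "legacy_a")

def from_legacy_b(row: dict) -> dict:
    # Legacy B uses nested fields and ISO dates already
    return standard_record(row["ref"], row["name"]["last"], date.fromisoformat(row["birth"]), "legacy_b")

# The "API" exposed to other services: callers never see the legacy layouts
def fetch_person(person_id: str, legacy_a_db: dict, legacy_b_db: dict) -> list:
    records = []
    if person_id in legacy_a_db:
        records.append(from_legacy_a(legacy_a_db[person_id]))
    if person_id in legacy_b_db:
        records.append(from_legacy_b(legacy_b_db[person_id]))
    return records
```

The design point is that consumers of the data only ever see the standard record, so a legacy system can later be replaced or retired without breaking the services built on top of the API.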

Building trustworthiness

The ability to share data at scale through the internet has brought new threats to the security and privacy of personal information that amplify the need for trust between government and citizens and across government departments. Currently, just 9 per cent of people feel that the Government has their best interests at heart when sharing data, and only 15 per cent are confident that government organisations would deal well with a cyber-attack. Because attitudes towards data sharing are time- and context-dependent, better engagement with citizens and clearer explanations of when and why data is used can help build confidence. Auditability is also key, helping people and organisations track how data is used and ensuring that every interaction with personal data is auditable, transparent and secure. …(More)”.
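One common way to make every interaction with personal data auditable is an append-only log in which each entry is chained to the previous one by a hash, so later tampering or deletion is detectable. The sketch below is purely illustrative of that idea and is not drawn from any system described in the report:

```python
import hashlib, json, time

def append_entry(log: list, actor: str, action: str, record_id: str) -> dict:
    """Append a tamper-evident entry describing who did what to which record."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "ts": time.time(),
        "actor": actor,        # e.g. the service or caseworker accessing the data
        "action": action,      # e.g. "read", "update", "share"
        "record_id": record_id,
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry

def verify(log: list) -> bool:
    """Recompute the chain; any edited or deleted entry breaks it."""
    prev = "0" * 64
    for e in log:
        body = {k: v for k, v in e.items() if k != "hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev_hash"] != prev or recomputed != e["hash"]:
            return False
        prev = e["hash"]
    return True
```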

How to Prevent Winner-Takes-All Democracy


Kaushik Basu at Project Syndicate: “Democracy is in crisis. Fake news – and fake allegations of fake news – now plagues civil discourse, and political parties have proved increasingly willing to use xenophobia and other malign strategies to win elections. At the same time, revisionist powers like Vladimir Putin’s Russia have been stepping up their efforts to interfere in elections across the West. Rarely has the United States witnessed such brazen attacks on its political system; and rarely has the world seen such lows during peacetime….

How can all of this be happening in democracies, and what can be done about it?

On the first question, one hypothesis is that new digital technologies are changing the structural incentives for corporations, political parties, and other major institutions. Consider the case of corporations. The wealth of proprietary data on consumer preferences and behavior is producing such massive returns to scale that a few giants are monopolizing markets. In other words, markets are increasingly geared toward a winner-take-all game: multiple corporations can compete, but to the victor go the spoils.

Electoral democracy is drifting in the same direction. The benefits of winning an election have become so large that political parties will stoop to new lows to clinch a victory. And, as with corporations, they can do so with the help of data on electoral preferences and behavior, and with new strategies to target key constituencies.

This poses a dilemma for well-meaning democratic parties and politicians. If a “bad” party is willing to foment hate and racism to bolster its chances of winning, what is a “good” party to do? If it sticks to its principles, it could end up ceding victory to the “bad” party, which will do even more harm once it is in office. A “good” party may thus try to forestall that outcome by taking a step down the moral ladder, precipitating a race to the bottom. This is the problem with any winner-takes-all game. When second place confers no benefits, the cost of showing unilateral restraint can grow intolerably high.
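The race-to-the-bottom logic can be made concrete with a toy two-party game (the payoff numbers are invented for illustration and are not taken from Basu's formalisation in The Republic of Beliefs). When winning is worth far more than the cost of campaigning dirtily, going low is each party's best response whatever the other does, even though both parties, and the public, would prefer the outcome where neither does:

```python
# Each party chooses "principled" or "go_low". Winning is worth 10; going low
# costs 2 (reputational and social damage borne by the party). If one party goes
# low and the other does not, the low campaign wins outright; if both play the
# same strategy, the election is a coin flip. All numbers are illustrative only.
WIN, COST = 10, 2

def payoff(mine: str, theirs: str) -> float:
    if mine == theirs:
        expected_win = WIN / 2                       # coin flip
    else:
        expected_win = WIN if mine == "go_low" else 0
    return expected_win - (COST if mine == "go_low" else 0)

strategies = ["principled", "go_low"]
for theirs in strategies:
    best = max(strategies, key=lambda mine: payoff(mine, theirs))
    print(f"If the other party plays {theirs!r}, the best response is {best!r}")
# "go_low" is the best response either way, so both parties going low is the only
# equilibrium (payoff 3 each), even though both staying principled would leave
# each better off (payoff 5 each) -- the structure of a prisoner's dilemma.
```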

But this problem is not as hopeless as it appears. In light of today’s crisis of democracy, we would do well to revisit Václav Havel’s seminal 1978 essay “The Power of the Powerless.” First published as samizdat that was smuggled out of Czechoslovakia, the essay makes a simple but compelling argument. Dictatorships and other seemingly omnipotent forms of authoritarianism may look like large, top-down structures, but in the final analysis, they are merely the outcome of ordinary individuals’ beliefs and choices. Havel did not have the tools of modern economic theory to demonstrate his argument formally. In my new book The Republic of Beliefs, I show that the essence of his argument can be given formal structure using elementary game theory. This, in turn, shows that ordinary individuals have moral options that may be unavailable to the big institutional players….(More)”.

An Overview of National AI Strategies


Medium Article by Tim Dutton: “The race to become the global leader in artificial intelligence (AI) has officially begun. In the past fifteen months, Canada, China, Denmark, the EU Commission, Finland, France, India, Italy, Japan, Mexico, the Nordic-Baltic region, Singapore, South Korea, Sweden, Taiwan, the UAE, and the UK have all released strategies to promote the use and development of AI. No two strategies are alike, with each focusing on different aspects of AI policy: scientific research, talent development, skills and education, public and private sector adoption, ethics and inclusion, standards and regulations, and data and digital infrastructure.

This article summarizes the key policies and goals of each strategy, as well as related policies and initiatives that have been announced since the release of the initial strategies. It also includes countries that have announced their intention to develop a strategy or have related AI policies in place….(More)”.

The Risks of Dangerous Dashboards in Basic Education


Lant Pritchett at the Center for Global Development: “On June 1, 2009, Air France flight 447 from Rio de Janeiro to Paris crashed into the Atlantic Ocean, killing all 228 people on board. While the Airbus 330 was flying on auto-pilot, the speed readings received by the on-board navigation computers began to conflict, almost certainly because the pitot tubes responsible for measuring air speed had iced over. Since the auto-pilot could not resolve the conflicting signals and hence did not know how fast the plane was actually going, it turned control of the plane over to the two first officers (the captain was out of the cockpit). Subsequent flight simulator trials replicating the conditions of the flight concluded that, had the pilots done nothing at all, everyone would have lived—nothing was actually wrong; only the indicators were faulty, not the actual speed. But, tragically, the pilots didn’t do nothing….

What is the connection to education?

Many countries’ systems of basic education are in “stall” condition.

A recent paper by Beatty et al. (2018) uses information from the Indonesia Family Life Survey, a representative household survey that has been carried out in several waves with the same individuals since 2000 and contains information on whether individuals can answer simple arithmetic questions. Figure 1, showing the relationship between the level of schooling and the probability of answering a typical question correctly, reveals two shocking results.

First, the likelihood that a person can answer a simple mathematics question correctly differs by only about 20 percentage points between individuals who have completed less than primary school (<PS)—who answer correctly (adjusted for guessing) about 20 percent of the time—and those who have completed senior secondary school or more (>=SSS), who answer correctly only about 40 percent of the time. These are simple multiple-choice questions, like whether 56/84 is the same fraction as (can be reduced to) 2/3, and whether 1/3-1/6 equals 1/6. This means that in an entire year of schooling, fewer than 2 additional children per 100 gain the ability to answer simple arithmetic questions.
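To see where the “fewer than 2 additional children per 100” figure comes from, here is a minimal worked sketch. The guessing correction is the standard one for multiple-choice items; the assumption that roughly 12 years of schooling separate the <PS and >=SSS groups is mine, not a number taken from the paper:

```python
def adjust_for_guessing(raw_share_correct: float, options: int = 4) -> float:
    """Standard correction: strip out the share expected from random guessing
    on a multiple-choice item with `options` answer choices."""
    guess = 1 / options
    return max(0.0, (raw_share_correct - guess) / (1 - guess))

# Example of the correction: a raw 55% on four-option items is ~40% after adjustment
print(f"Adjusted share: {adjust_for_guessing(0.55):.0%}")

# Figures quoted in the article (already adjusted for guessing)
less_than_primary = 0.20   # <PS group answers ~20% correctly
senior_secondary  = 0.40   # >=SSS group answers ~40% correctly

years_between_groups = 12  # assumed span of schooling between <PS and >=SSS
gain_per_year = (senior_secondary - less_than_primary) / years_between_groups
print(f"{100 * gain_per_year:.1f} additional children per 100 per year of schooling")
# ~1.7 per 100 per year, i.e. fewer than 2 additional children per 100
```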

Second, this incredibly poor performance in 2000 got worse by 2014. …

What has this got to do with education dashboards? The way large bureaucracies prefer to work is to specify process compliance and inputs and then measure those as a means of driving performance. This logistical mode of managing an organization works best when both process compliance and inputs are easily “observable” in the economist’s sense of being verifiable, contractible and adjudicable. This leads to attention to processes and inputs that are “thin” in the Clifford Geertz sense (adopted by James Scott as his primary definition of how a “high modern” bureaucracy and hence the state “sees” the world). So in education one would specify easily observable inputs like textbook availability, class size, and school infrastructure. Even if one were talking about “quality” of schooling, a large bureaucracy would want this, too, reduced to “thin” indicators, like the fraction of teachers with a given type of formal degree, or process compliance measures, like whether teachers were hired based on some formal assessment.

Those involved in schooling can then become obsessed with their dashboards and the “thin” progress that is being tracked and easily ignore the loud warning signals saying: Stall!…(More)”.