Digital Deceit II: A Policy Agenda to Fight Disinformation on the Internet


We have developed here a broad policy framework to address the digital threat to democracy, building upon basic principles to recommend a set of specific proposals.

Transparency: As citizens, we have the right to know who is trying to influence our political views and how they are doing it. We must have explicit disclosure about the operation of dominant digital media platforms — including:

  • Real-time and archived information about targeted political advertising;
  • Clear accountability for the social impact of automated decision-making;
  • Explicit indicators for the presence of non-human accounts in digital media.

Privacy: As individuals with the right to personal autonomy, we must be given more control over how our data is collected, used, and monetized — especially when it comes to sensitive information that shapes political decision-making. A baseline data privacy law must include:

  • Consumer control over data through stronger rights to access and removal;
  • Transparency for users about the full extent of data usage, along with meaningful consent;
  • Stronger enforcement with resources and authority for agency rule-making.

Competition: As consumers, we must have meaningful options to find, send and receive information over digital media. The rise of dominant digital platforms demonstrates how market structure influences social and political outcomes. A new competition policy agenda should include:

  • Stronger oversight of mergers and acquisitions;
  • Antitrust reform including new enforcement regimes, levies, and essential services regulation;
  • Robust data portability and interoperability between services.

No single-solution approach to the problem of digital disinformation is likely to change outcomes. … Awareness and education are the first steps toward the organizing and action needed to build a new social contract for digital democracy….(More)”

How AI Addresses Unconscious Bias in the Talent Economy


Announcement by Bob Schultz at IBM: “The talent economy is one of the great outcomes of the digital era — and the ability to attract and develop the right talent has become a competitive advantage in most industries. According to a recent IBM study, which surveyed over 2,100 Chief Human Resource Officers, 33 percent of CHROs believe AI will revolutionize the way they do business over the next few years. In that same study, 65 percent of CEOs expect that people skills will have a strong impact on their businesses over the next several years. At IBM, we see AI as a tremendous untapped opportunity to transform the way companies attract, develop, and build the workforce for the decades ahead.

Consider this: The average hiring manager has hundreds of applicants a day for key positions and spends approximately six seconds on each resume. The ability to make the right decision without analytics and AI’s predictive abilities is limited, and it creates the potential for unconscious bias in hiring.

That is why today, I am pleased to announce the rollout of IBM Watson Recruitment’s Adverse Impact Analysis capability, which identifies instances of bias related to age, gender, race, education, or previous employer by assessing an organization’s historical hiring data and highlighting potential unconscious biases. This capability empowers HR professionals to take action against potentially biased hiring trends — and in the future, choose the most promising candidate based on the merit of their skills and experience alone. This announcement is part of IBM’s largest-ever AI toolset release, tailor-made for nine industries and professions where AI will play a transformational role….(More)”.
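The announcement does not spell out the statistics behind Adverse Impact Analysis, but the standard test in US hiring practice is the EEOC's "four-fifths rule": a group shows adverse impact if its selection rate falls below 80 percent of the most-favored group's rate. A minimal sketch of that rule follows (illustrative only, not IBM's implementation):

```python
# Illustrative sketch of an adverse-impact check (the "four-fifths rule"
# used by the US EEOC). This is NOT IBM's implementation, just the
# standard statistic such a tool would plausibly compute.

from collections import defaultdict

def adverse_impact(records, attribute):
    """records: dicts with a group attribute and a 0/1 'hired' flag.
    Returns selection rate per group, flagging groups whose rate falls
    below 80% of the most-favored group's rate."""
    applied = defaultdict(int)
    hired = defaultdict(int)
    for r in records:
        group = r[attribute]
        applied[group] += 1
        hired[group] += r["hired"]

    rates = {g: hired[g] / applied[g] for g in applied}
    best = max(rates.values())
    return {g: {"rate": round(rate, 3), "flagged": rate < 0.8 * best}
            for g, rate in rates.items()}

history = [
    {"gender": "F", "hired": 1}, {"gender": "F", "hired": 0},
    {"gender": "F", "hired": 0}, {"gender": "F", "hired": 0},
    {"gender": "M", "hired": 1}, {"gender": "M", "hired": 1},
    {"gender": "M", "hired": 0}, {"gender": "M", "hired": 1},
]
print(adverse_impact(history, "gender"))
# {'F': {'rate': 0.25, 'flagged': True}, 'M': {'rate': 0.75, 'flagged': False}}
```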

Computers Can Solve Your Problem. You May Not Like The Answer


David Scharfenberg at the Boston Globe: “Years of research have shown that teenagers need their sleep. Yet high schools often start very early in the morning. Starting them later in Boston would require tinkering with elementary and middle school schedules, too — a Gordian knot of logistics, pulled tight by the weight of inertia, that proved impossible to untangle.

Until the computers came along.

Last year, the Boston Public Schools asked MIT graduate students Sébastien Martin and Arthur Delarue to build an algorithm that could do the enormously complicated work of changing start times at dozens of schools — and rerouting the hundreds of buses that serve them….
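The Globe piece does not detail the model, but the heart of the problem is that bell times and bus counts are coupled: staggering start times lets a single bus run several routes back to back. The sketch below, a drastic simplification of the MIT team's mixed-integer optimization with invented route times, uses the classic interval-partitioning greedy to count how many buses a given schedule needs:

```python
# Toy sketch of why start times and bus fleets are coupled: a bus can
# run another route that starts after its previous route ends. This is
# a drastic simplification of the MIT optimization; route times below
# are invented for illustration.

import heapq

# (school, route_start_hour, route_end_hour), timed to each bell time
routes = [
    ("Elem A", 6.5, 7.25), ("Elem B", 6.75, 7.5),
    ("Middle C", 7.5, 8.25), ("High D", 7.75, 8.5),
    ("High E", 8.5, 9.25),
]

def buses_needed(routes):
    """Minimum buses via the interval-partitioning greedy."""
    ends = []  # min-heap of end times of routes in progress
    for _, start, end in sorted(routes, key=lambda r: r[1]):
        if ends and ends[0] <= start:
            heapq.heappop(ends)  # reuse the bus that freed up earliest
        heapq.heappush(ends, end)
    return len(ends)

print(buses_needed(routes))  # 2 buses cover all five staggered routes
```

With identical bell times all five routes would overlap and need five buses; staggering them cuts the fleet to two, which is the cost lever the district's algorithm was built to pull.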

The algorithm was poised to put Boston on the leading edge of a digital transformation of government. In New York, officials were using a regression analysis tool to focus fire inspections on the most vulnerable buildings. And in Allegheny County, Pa., computers were churning through thousands of health, welfare, and criminal justice records to help identify children at risk of abuse….

While elected officials tend to legislate by anecdote and oversimplify the choices that voters face, algorithms can chew through huge amounts of complicated information. The hope is that they’ll offer solutions we’ve never imagined — much as Google Maps, when you’re stuck in traffic, puts you on an alternate route, down streets you’ve never traveled.

Dataphiles say algorithms may even allow us to filter out the human biases that run through our criminal justice, social service, and education systems. And the MIT algorithm offered a small window into that possibility. The data showed that schools in whiter, better-off sections of Boston were more likely to have the school start times that parents prize most — between 8 and 9 a.m. The mere act of redistributing start times, if aimed at solving the sleep deprivation problem and saving money, could bring some racial equity to the system, too.

Or, the whole thing could turn into a political disaster.

District officials expected some pushback when they released the new school schedule on a Thursday night in December, with plans to implement it in the fall of 2018. After all, they’d be messing with the schedules of families all over the city.

But no one anticipated the crush of opposition that followed. Angry parents signed an online petition and filled the school committee chamber, turning the plan into one of the biggest crises of Mayor Marty Walsh’s tenure. The city summarily dropped it. The failure would eventually play a role in the superintendent’s resignation.

It was a sobering moment for a public sector increasingly turning to computer scientists for help in solving nagging policy problems. What had gone wrong? Was it a problem with the machine? Or was it a problem with the people — both the bureaucrats charged with introducing the algorithm to the public, and the public itself?…(More)”

Causal mechanisms and institutionalisation of open government data in Kenya


Paper by Paul W. Mungai: “Open data—including open government data (OGD)—has become a topic of prominence during the last decade. However, most governments have not realised the desired value streams or outcomes from OGD. The Kenya Open Data Initiative (KODI), a Government of Kenya initiative, is no exception, with some moments of success but also sustainability struggles. Therefore, the focus for this paper is to understand the causal mechanisms that either enable or constrain the institutionalisation of OGD initiatives. Critical realism is ideally suited as a paradigm to identify such mechanisms, but guides to its operationalisation are few. This study uses the operational approach of Bygstad, Munkvold & Volkoff’s six-step framework, a hybrid approach that melds concepts from existing critical realism models with the idea of affordances. The findings suggest that data demand and supply mechanisms are critical in institutionalising KODI and that, underpinning basic data-related affordances, are mechanisms engaging with institutional capacity, formal policy, and political support. It is the absence of such elements in the Kenya case which explains why it has experienced significant delays…(More)”.

The Three Goals and Five Functions of Data Stewards


Medium Article by Stefaan G. Verhulst: “…Yet even as we see more data steward-type roles defined within companies, there exists considerable confusion about just what they should be doing. In particular, we have noticed a tendency to conflate the roles of data stewards with those of individuals or groups who might be better described as chief privacy, chief data or security officers. This slippage is perhaps understandable, but our notion of the role is somewhat broader. While privacy and security are of course key components of trusted and effective data collaboratives, the real goal is to leverage private data for broader social goals — while preventing harm.

So what are the necessary attributes of data stewards? What are their roles, responsibilities, and goals? And how can they be most effective, both as champions of sharing within organizations and as facilitators for leveraging data with external entities? These are some of the questions we seek to address in our current research, and below we outline some key preliminary findings.

The following “Three Goals” and “Five Functions” can help define the aspirations of data stewards and what is needed to achieve them. While clearly only a start, these attributes can help guide companies currently considering setting up sharing initiatives or establishing data steward-like roles.

The Three Goals of Data Stewards

  • Collaborate: Data stewards are committed to working and collaborating with others, with the goal of unlocking the inherent value of data when a clear case exists that it serves the public good and that it can be used in a responsible manner.
  • Protect: Data stewards are committed to managing private data ethically, which means sharing information responsibly, and preventing harm to potential customers, users, corporate interests, the wider public and of course those individuals whose data may be shared.
  • Act: Data stewards are committed to acting proactively to identify partners who may be in a better position to unlock value and insights contained within privately held data.

…(More)”.

Google, T-Mobile Tackle 911 Call Problem


Sarah Krouse at the Wall Street Journal: “Emergency call operators will soon have an easier time pinpointing the whereabouts of Android phone users.

Google has struck a deal with T-Mobile US to pipe location data from cellphones with Android operating systems in the U.S. to emergency call centers, said Fiona Lee, who works on global partnerships for Android emergency location services.

The move is a sign that smartphone operating system providers and carriers are taking steps to improve the quality of location data they send when customers call 911. Locating callers has become a growing problem for 911 operators as cellphone usage has proliferated. Wireless devices now make 80% or more of the 911 calls placed in some parts of the U.S., according to the trade group National Emergency Number Association. There are roughly 240 million calls made to 911 annually.

While landlines deliver an exact address, cellphones typically register only an estimated location provided by wireless carriers that can be as wide as a few hundred yards and imprecise indoors.

That has meant that while many popular applications like Uber can pinpoint users, 911 call takers can’t always do so. Technology giants such as Google and Apple Inc. that run phone operating systems need a direct link to the technology used within emergency call centers to transmit precise location data….

Google currently offers emergency location services in 14 countries around the world by partnering with carriers and companies that are part of local emergency communications infrastructure. Its location data is based on a combination of inputs, from Wi-Fi and sensors to GPS and mobile-network information.
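Google has not published the internals of its emergency location service, but a common way to combine fixes from several such sources is inverse-variance weighting, in which each estimate counts in proportion to its stated accuracy. A minimal sketch of that generic technique, with invented coordinates, not Google's actual algorithm:

```python
# Minimal sketch of fusing several position estimates (cell, Wi-Fi,
# GPS) into one fix by inverse-variance weighting -- a standard
# technique, not Google's published ELS method. Each fix is
# (latitude, longitude, accuracy_radius_m); tighter fixes dominate.

def fuse_fixes(fixes):
    weights = [1.0 / (acc ** 2) for _, _, acc in fixes]
    total = sum(weights)
    lat = sum(w * la for w, (la, _, _) in zip(weights, fixes)) / total
    lon = sum(w * lo for w, (_, lo, _) in zip(weights, fixes)) / total
    accuracy = (1.0 / total) ** 0.5  # combined 1-sigma radius
    return lat, lon, accuracy

fixes = [
    (42.3601, -71.0589, 50.0),   # cell-tower estimate, ~50 m
    (42.3604, -71.0585, 15.0),   # Wi-Fi estimate, ~15 m
    (42.3603, -71.0586, 8.0),    # GPS estimate, ~8 m
]
lat, lon, acc = fuse_fixes(fixes)
print(f"{lat:.5f}, {lon:.5f} (±{acc:.1f} m)")
```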

Jim Lake, director at the Charleston County Consolidated 9-1-1 Center, participated in a pilot of Google’s emergency location services and said it made it easier to find people who didn’t know their location, particularly because the area draws tourists.

“On a day-to-day basis, most people know where they are, but when they don’t, usually those are the most horrifying calls and we need to know right away,” Mr. Lake said.

In June, Apple said it had partnered with RapidSOS to send iPhone users’ location information to 911 call centers….(More)”

We hold people with power to account. Why not algorithms?


Hannah Fry at the Guardian: “…But already in our hospitals, our schools, our shops, our courtrooms and our police stations, artificial intelligence is silently working behind the scenes, feeding on our data and making decisions on our behalf. Sure, this technology has the capacity for enormous social good – it can help us diagnose breast cancer, catch serial killers, avoid plane crashes and, as the health secretary, Matt Hancock, has proposed, potentially save lives using NHS data and genomics. Unless we know when to trust our own instincts over the output of a piece of software, however, it also brings the potential for disruption, injustice and unfairness.

If we permit flawed machines to make life-changing decisions on our behalf – by allowing them to pinpoint a murder suspect, to diagnose a condition or take over the wheel of a car – we have to think carefully about what happens when things go wrong…

I think it’s time we started treating machines as we would any other source of power. I would like to propose a system of regulation for algorithms, and perhaps a good place to start would be with Tony Benn’s five simple questions, designed for powerful people, but equally applicable to modern AI:

“What power have you got?
“Where did you get it from?
“In whose interests do you use it?
“To whom are you accountable?
“How do we get rid of you?”
Because, ultimately, we can’t just think of algorithms in isolation. We have to think of the failings of the people who design them – and the danger to those they are supposedly designed to serve.

Illuminating GDP


Money and Banking: “GDP figures are ‘man-made’ and therefore unreliable,” reported remarks of Li Keqiang (then Communist Party secretary of the northeastern Chinese province of Liaoning), March 12, 2007.

Satellites are great. It is hard to imagine living without them. GPS navigation is just the tip of the iceberg. Taking advantage of the immense amounts of information collected over decades, scientists have been using satellite imagery to study a broad array of questions, ranging from agricultural land use to the impact of climate change to the geographic constraints on cities (see here for a recent survey).

One of the best-known economic applications of satellite imagery is to use night-time illumination to enhance the accuracy of various reported measures of economic activity. For example, national statisticians in countries with poor information collection systems can employ information from satellites to improve the quality of their nationwide economic data (see here). Even where governments have relatively high-quality statistics at a national level, it remains difficult and costly to determine local or regional levels of activity. For example, while production may occur in one jurisdiction, the income generated may be reported in another. At a sufficiently high resolution, satellite tracking of night-time light emissions can help address this question (see here).

But satellite imagery is not just an additional source of information on economic activity; it is also a neutral one that is less prone to manipulation than standard accounting data. This makes it possible to use information on night-time light to monitor the accuracy of official statistics. And, as we suggest later, the willingness of observers to apply a “satellite correction” could nudge countries to improve their own data reporting systems in line with recognized international standards.

As Luis Martínez inquires in his recent paper, should we trust autocrats’ estimates of GDP? Even in relatively democratic countries, there are prominent examples of statistical manipulation (recall the cases of Greek sovereign debt in 2009 and Argentine inflation in 2014). In the absence of democratic checks on the authorities, Martínez finds even greater tendencies to distort the numbers….(More)”.
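Martínez's approach, roughly, is to estimate how reported GDP growth co-moves with night-light growth in countries with strong democratic checks, then to use that fitted relationship to benchmark growth figures reported elsewhere. A toy version of that "satellite correction," with made-up numbers:

```python
# Rough sketch of the "satellite correction" idea: fit the relationship
# between reported GDP growth and night-light growth in a reference
# group of countries with democratic checks, then compare an autocracy's
# reported figure with what its lights imply. All numbers are invented.

import numpy as np

# (light_growth, reported_gdp_growth) pairs for the reference group
lights = np.array([0.01, 0.03, 0.05, 0.02, 0.04, 0.06])
gdp    = np.array([0.012, 0.028, 0.047, 0.021, 0.039, 0.055])

# OLS fit: gdp_growth ≈ a + b * light_growth
b, a = np.polyfit(lights, gdp, 1)

def lights_implied_growth(light_growth):
    return a + b * light_growth

reported = 0.07          # an autocracy's reported GDP growth
observed_lights = 0.03   # its observed night-light growth
implied = lights_implied_growth(observed_lights)
print(f"implied {implied:.3f} vs reported {reported:.3f}; "
      f"gap {reported - implied:+.3f}")
```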

Is Mass Surveillance the Future of Conservation?


Mallory Picket at Slate: “The high seas are probably the most lawless place left on Earth. They’re a portal back in time to the way the world looked for most of our history: fierce and open competition for resources and contested territories. Piracy continues to be a way to make a living.

It’s not a complete free-for-all—most countries require registration of fishing vessels and enforce environmental protocols. Cooperative agreements between countries oversee fisheries in international waters. But the best data available suggests that around 20 percent of the global seafood catch is illegal. This is an environmental hazard because unregistered boats evade regulations meant to protect marine life. And it’s an economic problem for fishermen who can’t compete with boats that don’t pay for licenses or follow the (often expensive) regulations. In many developing countries, local fishermen are outfished by foreign vessels coming into their territory and stealing their stock….

But Henri Weimerskirch, a French ecologist, has a cheap, low-impact way to monitor thousands of square miles a day in real time: He’s getting birds to do it (a project first reported by Hakai). Specifically, albatrosses, which have 10-foot wingspans and can fly around the world in 46 days. The birds naturally congregate around fishing boats, hoping for an easy meal, so Weimerskirch is equipping them with GPS loggers that also have radar detection to pick up a ship’s radar (and make sure it is a ship, not an island) and a transmitter to send that data to authorities in real time. If it works, this should help in two ways: It will provide some information on the extent of unofficial fishing operations in the area, and because the loggers transmit their information in real time, the data can be used to notify French navy ships in the area to check out suspicious boats.
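The excerpt implies a simple cross-check behind "suspicious boats": if a bird's logger reports a radar contact at a position where no vessel is legally broadcasting its location (for instance via AIS), that contact is worth a closer look. A hypothetical sketch of that matching step, with invented names, coordinates, and thresholds:

```python
# Hypothetical sketch of the cross-check the excerpt implies: flag
# bird-logged radar contacts with no legally broadcasting (e.g., AIS)
# vessel nearby. Names, coordinates, and the 30 km radius are invented.

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def flag_dark_vessels(radar_pings, ais_positions, radius_km=30.0):
    """Return radar contacts with no AIS-broadcasting vessel nearby."""
    return [p for p in radar_pings
            if not any(haversine_km(p[0], p[1], a[0], a[1]) <= radius_km
                       for a in ais_positions)]

radar_pings = [(-46.43, 51.87), (-47.10, 52.30)]   # from bird loggers
ais_positions = [(-46.45, 51.90)]                  # declared vessels
print(flag_dark_vessels(radar_pings, ais_positions))
# [(-47.1, 52.3)] -- a radar contact with no declared vessel nearby
```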

His team is getting ready to deploy about 80 birds in the south Indian Ocean this November. The loggers attached around the birds’ legs are about the shape and size of a Snickers bar.

The south Indian Ocean is a shared fishing zone, and nine countries, including France (courtesy of several small islands it claims ownership of, a vestige of colonialism), manage it together. But there are big problems with illegal fishing in the area, especially of the Patagonian toothfish (better known to consumers as Chilean seabass)….(More)”

Data-Driven Government: The Role of Chief Data Officers


Jane Wiseman for IBM Center for The Business of Government: “Governments at all levels have seen dramatic increases in availability and use of data over the past decade.

Data-driven government is currently a topic of intense interest at the federal level, as the government develops an integrated federal data strategy in pursuit of its goal to “leverage data as a strategic asset.” There is also pending legislation that would require agencies to designate chief data officers (CDOs).

This report focuses on the expanding use of data at the federal level and how to best manage it. Ms. Wiseman says: “The purpose of this report is to advance the use of data in government by describing the work of pioneering federal CDOs and providing a framework for thinking about how a new analytics leader might establish his or her office and use data to advance the mission of the agency.”

Ms. Wiseman’s report provides rich profiles of five pioneering CDOs in the federal government and how they have defined their new roles. Based on her research and interviews, she offers insights into how the role of agency CDO is evolving in different agencies and the reasons agency leaders are establishing these roles. She also offers advice on how new CDOs can be successful at the federal level, based on the experiences of these federal pioneers as well as those of state and local CDOs….(More)”.