Finding a more human government


Report by the Centre for Public Impact: “…embarked upon a worldwide project to find out how governments can strengthen their legitimacy. Amidst the turbulence and unpredictability of recent years, there are many contemporary accounts of people feeling angry, cynical or ambivalent about government.

While much has been said about the personalities of leaders and the rise of populist parties, what’s less clear is what governments could really do to strengthen legitimacy, a concept most agree remains integral to worldwide stability and peace. To find out what legitimacy means to people today and how it could be strengthened, we decided to break out of the usual circles of influence and ensure our project heard directly from citizens from around the world. People were open and honest about the struggle for someone in government to understand and to listen. Some shed tears while others felt angry about how their voices and identities seemed undervalued. Everyone, however, wanted to show how it was still very possible to build a stronger relationship and understanding between governments and people, even if the day-to-day actions of government were not always popular.

The aim of this paper is not to provide the definitive model for legitimacy. Instead, we have sought to be open about what we heard, stay true to people’s views and shine a light on the common themes that could help governments have better conversations about building legitimacy into all their systems and with the support of their citizens.

We gathered case studies to show how this was already happening and found positive examples in places we didn’t expect. The importance of governments showing their human side – even in our age of AI and robotics – emerged as such a key priority, and is why we called this paper Finding a more human government.

This is a conversation that has only just begun. …. To see what others are saying, do take a look at our website www.findinglegitimacy.centreforpublicimpact.org”

How the government will operate in 2030


Darrell West at the Hill: “Imagine it is 2030 and you are a U.S. government employee working from home. With the assistance of the latest technology, you participate in video calls with clients and colleagues, augment your job activities through artificial intelligence and a personal digital assistant, work through collaboration software, and regularly get rated on a one-to-five scale by clients regarding your helpfulness, follow-through, and task completion.

How did you — and the government — get here? The sharing economy that unfolded in 2018 has revolutionized the public-sector workforce. The days when federal employees were subject to a centrally directed Office of Personnel Management that oversaw permanent, full-time workers sitting in downtown office buildings are long gone. In their place is a remote workforce staffed by a mix of short- and long-term employees. This has dramatically improved worker productivity and satisfaction.

In the new digital world that has emerged, the goal is to use technology to make employees accountable. Gone are 20- or 30-year careers in the federal bureaucracy. Political leaders have always preached the virtue of running government like a business, and the success of Uber, Airbnb, and WeWork has persuaded them to focus on accountability and performance.

Companies such as Facebook demonstrated they could run large and complex organizations with fewer than 20,000 employees, and the federal government followed suit in the late 2020s. Now, workers deploy the latest tools of artificial intelligence, virtual reality, data analytics, robots, driverless cars, and digital assistants to improve the government. Unlike the widespread mistrust and cynicism that had poisoned attitudes in the decades before, the general public now sees government as a force for achieving positive results.

Many parts of the federal government are decentralized and mid-level employees are given greater authority to make decisions — but are subject to digital ratings that keep them accountable for their performance. The U.S. government borrowed this technique from China, where airport authorities in 2018 installed digital devices that allowed visitors to rate the performance of individual passport officers after every encounter. The reams of data have enabled Chinese authorities to fire poor performers and make sure foreign visitors see a friendly and competent face at the Beijing International Airport.

Alexa-like devices are given to all federal employees. The devices are used to keep track of leave time, file reimbursement requests, request time off, and complete a range of routine tasks that used to take employees hours. Through voice-activated commands, they navigate these mundane tasks quickly and efficiently. No one can believe the mountains of paperwork required just a decade ago….(More)”.

How Refugees Are Helping Create Blockchain’s Brand New World


Jessi Hempel at Wired: “Though best known for underpinning volatile cryptocurrencies, like Bitcoin and Ethereum, blockchain technology has a number of qualities which make it appealing for record-keeping. A distributed ledger doesn’t depend on a central authority to verify its existence, or to facilitate transactions within it, which makes it less vulnerable to tampering. By using applications that are built on the ‘chain, individuals may be able to build up records over time, use those records across borders as a form of identity—essentially creating the trust they need to interact with the world, without depending on a centralized authority, like a government or a bank, to vouch for them.
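The tamper resistance that makes a distributed ledger appealing for record-keeping comes from hash-chaining: each entry commits to the hash of its predecessor, so editing any past record invalidates every entry after it. A minimal sketch of that property in Python (illustrative only; the record fields are invented, and real blockchains add consensus, signatures, and distribution across many nodes on top of this):

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 of a record's canonical JSON form."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_record(chain: list, payload: dict) -> list:
    """Append a payload, linking it to the hash of the previous entry."""
    prev = record_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "payload": payload})
    return chain

def verify(chain: list) -> bool:
    """Re-derive every link; any edit to a past record breaks the chain."""
    return all(chain[i]["prev"] == record_hash(chain[i - 1])
               for i in range(1, len(chain)))

# A tiny identity/transaction ledger for a hypothetical holder "A".
chain = []
append_record(chain, {"holder": "A", "event": "voucher issued"})
append_record(chain, {"holder": "A", "event": "voucher redeemed"})
assert verify(chain)

chain[0]["payload"]["event"] = "voucher doubled"  # tampering attempt
assert not verify(chain)                          # detected on re-verification
```

This is why a record built up over time can serve as identity: anyone holding the chain can check its integrity without asking a bank or government to vouch for it.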

For now, these efforts are small experiments. In Finland, the Finnish Immigration Service offers refugees a prepaid Mastercard developed by the Helsinki-based startup MONI that also links to a digital identity, composed of the record of one’s financial transactions, which is stored on the blockchain. In Moldova, the government is working with digital identification experts from the United Nations Office for Project Services (UNOPS) to brainstorm ways to use blockchain to provide children living in rural areas with a digital identity, so it’s more difficult for traffickers to smuggle them across borders.

Among the more robust programs is a pilot the United Nations World Food Program (WFP) launched in Jordan last May. Syrian refugees stationed at the Azraq Refugee Camp receive vouchers to shop at the local grocery store. The WFP integrated blockchain into its biometric authentication technology, so Syrian refugees can cash in their vouchers at the supermarket by staring into a retina scanner. These transactions are recorded on a private Ethereum-based blockchain, called Building Blocks. Because the blockchain eliminates the need for WFP to pay banks to facilitate transactions, Building Blocks could save the WFP as much as $150,000 each month in bank fees in Jordan alone. The program has been so successful that by the end of the year, the WFP plans to expand the technology throughout Jordan. Blockchain enthusiasts imagine a future in which refugees can access more than just food vouchers, accumulating a transaction history that could stand in as a credit history when they attempt to resettle….

But in the rush to apply blockchain technology to every problem, many point out that relying on the ledger may have unintended consequences. As the Blockchain for Social Impact chief technology officer at ConsenSys, Robert Greenfeld IV writes, blockchain-based identity “isn’t a silver bullet, and if we don’t think about it/build it carefully, malicious actors could still capitalize on it as an element of control.” If companies rely on private blockchains, he warns, there’s a danger that the individual permissions will prevent these identity records from being used in multiple places. (Many of these projects, like the UNWFP project, are built on private blockchains so that organizations can exert more control over their development.) “If we don’t start to collaborate together with populations, we risk ending up with a bunch of siloed solutions,” says Greenfeld.

For his part, Greenfeld suggests governments could easily use state-sponsored machine learning algorithms to monitor public blockchain activity. But as bitcoin enthusiasts branch out of their get-rich-quick schemes to wrestle with how to make the web more equitable for everyone, they have the power to craft a world of their own devising. The early web should be a lesson to the bitcoin enthusiasts as they promote the blockchain’s potential. Right now we have the power to determine its direction; the dangers exist, but the potential is enormous….(More)”

Artificial Intelligence and the Need for Data Fairness in the Global South


Medium blog by Yasodara Cordova: “…The data collected by industry represents AI opportunities for governments, to improve their services through innovation. Data-based intelligence promises to increase the efficiency of resource management by improving transparency, logistics, social welfare distribution — and virtually every government service. E-government enthusiasm took off with the realization of the possible applications, such as using AI to fight corruption by automating the fraud-tracking capabilities of cost-control tools. Controversially, the AI enthusiasm has spread to the distribution of social benefits, optimization of tax oversight and control, credit scoring systems, crime prediction systems, and other applications based on personal and sensitive data collection, especially in countries that do not have comprehensive privacy protections.

With so many potential applications, society may operate very differently in ten years, once “datafication” has advanced beyond citizen data and into other areas such as energy and natural resource management. However, many countries in the Global South are not being given necessary access to their countries’ own data.

Useful data are everywhere, but only some can take advantage. Beyond smartphones, data can be collected from IoT components in common spaces. Not restricted to urban spaces, data collection includes rural technology like sensors installed in tractors. However, even when the information is related to issues of public importance in developing countries — like data taken from road mesh or vital resources like water and land — it stays hidden under contract rules, and public citizens cannot access it, and therefore cannot benefit from it. This arrangement keeps the public uninformed about their country’s operations. The data collection and distribution frameworks are not built towards healthy partnerships between industry and government, preventing countries from realizing the potential outlined in the previous paragraph.

The data necessary to the development of better cities, public policies, and common interest cannot be leveraged if kept in closed silos, yet access often costs more than is justifiable. Data are an essential resource at every stage of new technology, especially tech adoption and integration, so the necessary long-term investment in innovation needs a common ground to start from. The mismatch between the pace of the data collection among big established companies and small, new, and local businesses will likely increase with time, assuming no regulation is introduced for equal access to collected data….

Currently, data independence remains restricted to discussions on the technological infrastructure that supports data extraction. Privacy discussions focus on personal data rather than the digital accumulation of strategic data in closed silos — a necessary discussion not yet addressed. The national interest in data is not being addressed in a framework of economic and social fairness. Access to data, from a policy-making standpoint, needs to find a balance between the extremes of public, open access and limited, commercial use.

A final, but important note: the vast majority of social media act like silos. APIs play an important role in corporate business models, where industry controls the data it collects without reward, let alone user transparency. Negotiation of the specification of APIs to make data a common resource should be considered, for such an effort may align with the citizens’ interest….(More)”.

How to Make A.I. That’s Good for People


Fei-Fei Li in the New York Times: “For a field that was not well known outside of academia a decade ago, artificial intelligence has grown dizzyingly fast. Tech companies from Silicon Valley to Beijing are betting everything on it, venture capitalists are pouring billions into research and development, and start-ups are being created on what seems like a daily basis. If our era is the next Industrial Revolution, as many claim, A.I. is surely one of its driving forces.

It is an especially exciting time for a researcher like me. When I was a graduate student in computer science in the early 2000s, computers were barely able to detect sharp edges in photographs, let alone recognize something as loosely defined as a human face. But thanks to the growth of big data, advances in algorithms like neural networks and an abundance of powerful computer hardware, something momentous has occurred: A.I. has gone from an academic niche to the leading differentiator in a wide range of industries, including manufacturing, health care, transportation and retail.

I worry, however, that enthusiasm for A.I. is preventing us from reckoning with its looming effects on society. Despite its name, there is nothing “artificial” about this technology — it is made by humans, intended to behave like humans and affects humans. So if we want it to play a positive role in tomorrow’s world, it must be guided by human concerns.

I call this approach “human-centered A.I.” It consists of three goals that can help responsibly guide the development of intelligent machines.

First, A.I. needs to reflect more of the depth that characterizes our own intelligence….

No technology is more reflective of its creators than A.I. It has been said that there are no “machine” values at all, in fact; machine values are human values. A human-centered approach to A.I. means these machines don’t have to be our competitors, but partners in securing our well-being. However autonomous our technology becomes, its impact on the world — for better or worse — will always be our responsibility….(More).

Artificial intelligence could identify gang crimes—and ignite an ethical firestorm


Matthew Hutson at Science: “When someone roughs up a pedestrian, robs a store, or kills in cold blood, police want to know whether the perpetrator was a gang member: Do they need to send in a special enforcement team? Should they expect a crime in retaliation? Now, a new algorithm is trying to automate the process of identifying gang crimes. But some scientists warn that far from reducing gang violence, the program could do the opposite by eroding trust in communities, or it could brand innocent people as gang members.

That has created some tensions. At a presentation of the new program this month, one audience member grew so upset he stormed out of the talk, and some of the creators of the program have been tight-lipped about how it could be used….

For years, scientists have been using computer algorithms to map criminal networks, or to guess where and when future crimes might take place, a practice known as predictive policing. But little work has been done on labeling past crimes as gang-related.

In the new work, researchers developed a system that can identify a crime as gang-related based on only four pieces of information: the primary weapon, the number of suspects, and the neighborhood and location (such as an alley or street corner) where the crime took place. Such analytics, which can help characterize crimes before they’re fully investigated, could change how police respond, says Doug Haubert, city prosecutor for Long Beach, California, who has authored strategies on gang prevention.

To classify crimes, the researchers invented something called a partially generative neural network. A neural network is made of layers of small computing elements that process data in a way reminiscent of the brain’s neurons. A form of machine learning, it improves based on feedback—whether its judgments were right. In this case, researchers trained their algorithm using data from the Los Angeles Police Department (LAPD) in California from 2014 to 2016 on more than 50,000 gang-related and non–gang-related homicides, aggravated assaults, and robberies.

The researchers then tested their algorithm on another set of LAPD data. The network was “partially generative,” because even when it did not receive an officer’s narrative summary of a crime, it could use the four factors noted above to fill in that missing information and then use all the pieces to infer whether a crime was gang-related. Compared with a stripped-down version of the network that didn’t use this novel approach, the partially generative algorithm reduced errors by close to 30%, the team reported at the Artificial Intelligence, Ethics, and Society (AIES) conference this month in New Orleans, Louisiana. The researchers have not yet tested their algorithm’s accuracy against trained officers.
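The classification task itself is easy to picture with a far simpler stand-in model. The sketch below one-hot encodes four categorical inputs and fits a plain logistic regression on synthetic data; it is not the paper's partially generative network, and the feature vocabularies, labeling rule, and data are all invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vocabularies for the four features named in the article.
FEATURES = {
    "weapon": ["handgun", "knife", "none"],
    "suspects": ["1", "2", "3+"],
    "neighborhood": ["A", "B", "C"],
    "location": ["alley", "street corner", "store"],
}

def one_hot(crime: dict) -> np.ndarray:
    """Concatenate a one-hot encoding of each of the four features."""
    parts = []
    for name, vocab in FEATURES.items():
        vec = np.zeros(len(vocab))
        vec[vocab.index(crime[name])] = 1.0
        parts.append(vec)
    return np.concatenate(parts)

# Synthetic training set with a made-up labeling rule standing in for
# "gang-related": handgun plus three or more suspects.
crimes = [{k: str(rng.choice(v)) for k, v in FEATURES.items()}
          for _ in range(500)]
X = np.stack([one_hot(c) for c in crimes])
y = np.array([1.0 if c["weapon"] == "handgun" and c["suspects"] == "3+" else 0.0
              for c in crimes])

# Plain logistic regression fitted by batch gradient descent.
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))    # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / len(y)   # average log-loss gradient

preds = (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(float)
accuracy = float((preds == y).mean())
```

Because the synthetic labels depend on only two of the twelve one-hot columns, the data are linearly separable and this toy model should recover the rule. What the real system adds, and what "partially generative" refers to, is the ability to fill in the missing officer narrative from the four structured factors before classifying.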

It’s an “interesting paper,” says Pete Burnap, a computer scientist at Cardiff University who has studied crime data. But although the predictions could be useful, it’s possible they would be no better than officers’ intuitions, he says. Haubert agrees, but he says that having the assistance of data modeling could sometimes produce “better and faster results.” Such analytics, he says, “would be especially useful in large urban areas where a lot of data is available.”…(More).

Your Data Is Crucial to a Robotic Age. Shouldn’t You Be Paid for It?


The New York Times: “The idea has been around for a bit. Jaron Lanier, the tech philosopher and virtual-reality pioneer who now works for Microsoft Research, proposed it in his 2013 book, “Who Owns the Future?,” as a needed corrective to an online economy mostly financed by advertisers’ covert manipulation of users’ consumer choices.

It is being picked up in “Radical Markets,” a book due out shortly from Eric A. Posner of the University of Chicago Law School and E. Glen Weyl, principal researcher at Microsoft. And it is playing into European efforts to collect tax revenue from American internet giants.

In a report obtained last month by Politico, the European Commission proposes to impose a tax on the revenue of digital companies based on their users’ location, on the grounds that “a significant part of the value of a business is created where the users are based and data is collected and processed.”
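One plausible reading of that apportionment principle can be made concrete with a toy calculation, in which revenue is taxed at a flat rate and the proceeds are split across countries in proportion to where the users are (all figures below are invented):

```python
# Hypothetical figures: global revenue, a 3% digital tax, users per country.
revenue = 100_000_000  # EUR
users = {"France": 40_000_000, "Germany": 50_000_000, "Poland": 10_000_000}

total_tax = revenue * 3 / 100   # 3,000,000 EUR to apportion
total_users = sum(users.values())

# Each country's share of the tax mirrors its share of the user base.
allocation = {country: total_tax * n / total_users
              for country, n in users.items()}
# France (40% of users) receives 1,200,000 EUR; Germany 1,500,000; Poland 300,000
```

Real proposals are more involved (revenue attribution, turnover thresholds, treaty interactions), but proportional allocation by user location is the core of the idea.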

Users’ data is a valuable commodity. Facebook offers advertisers precisely targeted audiences based on user profiles. YouTube, too, uses users’ preferences to tailor its feed. Still, this pales in comparison with how valuable data is about to become, as the footprint of artificial intelligence extends across the economy.

Data is the crucial ingredient of the A.I. revolution. Training systems to perform even relatively straightforward tasks like voice translation, voice transcription or image recognition requires vast amounts of data — like tagged photos, to identify their content, or recordings with transcriptions.

“Among leading A.I. teams, many can likely replicate others’ software in, at most, one to two years,” notes the technologist Andrew Ng. “But it is exceedingly difficult to get access to someone else’s data. Thus data, rather than software, is the defensible barrier for many businesses.”

We may think we get a fair deal, offering our data as the price of sharing puppy pictures. By other metrics, we are being victimized: In the largest technology companies, the share of income going to labor is only about 5 to 15 percent, Mr. Posner and Mr. Weyl write. That’s way below Walmart’s 80 percent. Consumer data amounts to work they get free….

The big question, of course, is how we get there from here. My guess is that it would be naïve to expect Google and Facebook to start paying for user data of their own accord, even if that improved the quality of the information. Could policymakers step in, somewhat the way the European Commission did, demanding that technology companies compute the value of consumer data?…(More)”.

Journalism and artificial intelligence


Notes by Charlie Beckett (at LSE’s Media Policy Project Blog): “…AI and machine learning is a big deal for journalism and news information. Possibly as important as the other developments we have seen in the last 20 years such as online platforms, digital tools and social media. My 2008 book on how journalism was being revolutionised by technology was called SuperMedia because these technologies offered extraordinary opportunities to make journalism much more efficient and effective – but also to transform what we mean by news and how we relate to it as individuals and communities. Of course, that can be super good or super bad.

Artificial intelligence and machine learning can help the news media with its three core problems:

  1. The overabundance of information and sources that leave the public confused
  2. The credibility of journalism in a world of disinformation and falling trust and literacy
  3. The business model crisis – how journalism can become more efficient (avoiding duplication), be more engaged, add value and be relevant to the individual’s and communities’ need for quality, accurate information and informed, useful debate.

But like any technology they can also be used by bad people or for bad purposes: in journalism that can mean clickbait, misinformation, propaganda, and trolling.

Some caveats about using AI in journalism:

  1. Narratives are difficult to program. Trusted journalists are needed to understand and write meaningful stories.
  2. Artificial Intelligence needs human inputs. Skilled journalists are required to double check results and interpret them.
  3. Artificial Intelligence increases quantity, not quality. It’s still up to the editorial team and developers to decide what kind of journalism the AI will help create….(More)”.

Global Fishing Watch And The Power Of Data To Understand Our Natural World


A year and a half ago I wrote about the public debut of the Global Fishing Watch project as a showcase of what becomes possible when massive datasets are made accessible to the general public through easy-to-use interfaces that allow them to explore the planet they inhabit. At the time I noted how the project drove home the divide between the “glittering technological innovation of Silicon Valley and the technological dark ages of the development community” and what becomes possible when technologists and development organizations come together to apply incredible technology not for commercial gain, but rather to save the world itself. Continuing those efforts, last week Global Fishing Watch launched what it describes as “the first ever dataset of global industrial fishing activities (all countries, all gears),” making the entire dataset freely accessible to seed new scientific, activist, governmental, journalistic and citizen understanding of the state of global fishing.

The Global Fishing Watch project stands as a powerful model for data-driven development work done right and hopefully, the rise of notable efforts like it will eventually catalyze the broader development community to emerge from the stone age of technology and more openly embrace the technological revolution. While it has a very long way to go, there are signs of hope for the development community as pockets of innovation begin to infuse the power of data-driven decision making and situational awareness into everything from disaster response to proactive planning to shaping legislative action.

Bringing technologists and development organizations together is not always that easy and the most creative solutions aren’t always to be found among the “usual suspects.” Open data and open challenges built upon them offer the potential for organizations to reach beyond the usual communities they interact with and identify innovative new approaches to the grand challenges of their fields. Just last month a collaboration of the World Bank, WeRobotics and OpenAerialMap launched a data challenge to apply deep learning to assess aerial imagery in the immediate aftermath of disasters to determine the impact to food producing trees and to road networks. By launching the effort as an open AI challenge, the goal is to reach the broader AI and open development communities at the forefront of creative and novel algorithmic approaches….(More)”.

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation


Report by Miles Brundage et al: “Artificial intelligence and machine learning capabilities are growing at an unprecedented rate. These technologies have many widely beneficial applications, ranging from machine translation to medical image analysis. Countless more such applications are being developed and can be expected over the long term. Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously. This report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats. We analyze, but do not conclusively resolve, the question of what the long-term equilibrium between attackers and defenders will be. We focus instead on what sorts of attacks we are likely to see soon if adequate defenses are not developed.

In response to the changing threat landscape we make four high-level recommendations:

1. Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.

2. Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.

3. Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.

4. Actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges….(More)”.