UK can lead the way on ethical AI, says Lords Committee


Lords Select Committee: “The UK is in a strong position to be a world leader in the development of artificial intelligence (AI). This position, coupled with the wider adoption of AI, could deliver a major boost to the economy for years to come. The best way to do this is to put ethics at the centre of AI’s development and use,” concludes a report by the House of Lords Select Committee on Artificial Intelligence, AI in the UK: ready, willing and able?, published today….

One of the recommendations of the report is for a cross-sector AI Code to be established, which can be adopted nationally, and internationally. The Committee’s suggested five principles for such a code are:

  1. Artificial intelligence should be developed for the common good and benefit of humanity.
  2. Artificial intelligence should operate on principles of intelligibility and fairness.
  3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
  4. All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

Other conclusions from the report include:

  • Many jobs will be enhanced by AI, many will disappear, and many new, as-yet-unknown jobs will be created. Significant Government investment in skills and training will be necessary to mitigate the negative effects of AI. Retraining will become a lifelong necessity.
  • Individuals need to be able to have greater personal control over their data, and the way in which it is used. The ways in which data is gathered and accessed needs to change, so that everyone can have fair and reasonable access to data, while citizens and consumers can protect their privacy and personal agency. This means using established concepts, such as open data, ethics advisory boards and data protection legislation, and developing new frameworks and mechanisms, such as data portability and data trusts.
  • The monopolisation of data by big technology companies must be avoided, and greater competition is required. The Government, with the Competition and Markets Authority, must review the use of data by large technology companies operating in the UK.
  • The prejudices of the past must not be unwittingly built into automated systems. The Government should incentivise the development of new approaches to the auditing of datasets used in AI, and also to encourage greater diversity in the training and recruitment of AI specialists.
  • Transparency in AI is needed. The industry, through the AI Council, should establish a voluntary mechanism to inform consumers when AI is being used to make significant or sensitive decisions.
  • At earlier stages of education, children need to be adequately prepared for working with, and using, AI. The ethical design and use of AI should become an integral part of the curriculum.
  • The Government should be bold and use targeted procurement to provide a boost to AI development and deployment. It could encourage the development of solutions to public policy challenges through speculative investment. There have been impressive advances in AI for healthcare, which the NHS should capitalise on.
  • It is not currently clear whether existing liability law will be sufficient when AI systems malfunction or cause harm to users, and clarity in this area is needed. The Committee recommend that the Law Commission investigate this issue.
  • The Government needs to draw up a national policy framework, in lockstep with the Industrial Strategy, to ensure the coordination and successful delivery of AI policy in the UK….(More)”.

Blockchain Slashes US Govt. Contract Award Time From 100 To 10 Days


Article by Cameron Bishop: “…The US General Services Administration built the first federal procurement blockchain proof of concept about six months ago. The procurement blockchain was built to demonstrate how distributed ledger technology can modernize federal procurement. The pilot project made them realize that blockchain, when combined with artificial intelligence and robotics, provides the foundational architecture for widespread automation.

The proof of concept, which was built in seven weeks, automated the procurement process. More importantly, it reduced the average contract award time from 100 days to less than 10 days. Complex tasks such as financial review were automated through the use of blockchain. It also eliminated human error, bias and subjectivity from the process. A smart contract deployed on the blockchain automatically calculated a financial health score from the offerors’ balance sheets and income statements. The entire process was standardized using commercial and government practices.
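The article does not describe how the smart contract actually computes the financial health score; as a hedged sketch, a contract-style scoring function over a few common balance-sheet and income-statement ratios might look like the following (the ratio choices, benchmark values, and weights are illustrative assumptions, not GSA's actual logic):

```python
# Hypothetical financial-health scoring, loosely modeled on what a
# procurement smart contract might compute. The ratios, benchmarks,
# weights, and 0-100 scale are illustrative assumptions only.

def financial_health_score(balance_sheet, income_statement):
    """Score an offeror's financial health from 0 (weak) to 100 (strong)."""
    current_ratio = balance_sheet["current_assets"] / balance_sheet["current_liabilities"]
    debt_to_equity = balance_sheet["total_debt"] / balance_sheet["equity"]
    profit_margin = income_statement["net_income"] / income_statement["revenue"]

    # Normalize each ratio to [0, 1] against rough benchmark values.
    liquidity = min(current_ratio / 2.0, 1.0)                  # 2.0 treated as healthy
    leverage = max(1.0 - debt_to_equity / 2.0, 0.0)            # lower debt scores higher
    profitability = min(max(profit_margin / 0.10, 0.0), 1.0)   # 10% margin caps the score

    score = 100 * (0.4 * liquidity + 0.3 * leverage + 0.3 * profitability)
    return round(score, 1)

offeror = {
    "current_assets": 500_000, "current_liabilities": 250_000,
    "total_debt": 300_000, "equity": 600_000,
}
results = {"net_income": 80_000, "revenue": 1_000_000}
print(financial_health_score(offeror, results))  # → 86.5
```

A deterministic function like this is what makes on-chain evaluation attractive: every party can re-run the same computation over the submitted statements and verify the recorded score.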

Furthermore, the use of a blockchain ledger ensured that vendors were kept abreast of developments. Vendors received real-time alerts as offers progressed through the workflow. This made the process transparent, while preserving the privacy of each transaction. The success of this pilot project is expected to bring a drastic change to the federal procurement process.

While a blockchain can be public, permissioned, or private, federal agencies may opt for a private blockchain to facilitate procurement transactions among pre-screened vendors with digital identity certificates.

The Federal Acquisition Regulation (FAR) provides guidelines to ensure integrity, openness and fairness in federal procurement. The blockchain technology will enforce those policies through a system of procedural trust embedded into the platform.

By using blockchain technology, the federal procurement process can be made more transparent, efficient, faster, and less vulnerable to fraud and abuse. More importantly, by design, a blockchain preserves the integrity of the assets and transactions between multiple parties within the value chain. Additionally, blockchain can help avoid unnecessary litigation while promoting healthy competition. It can also provide an organization with previously unavailable insights into the procurement value chain….(More)”.

Algorithmic Impact Assessment (AIA) framework


Report by AI Now Institute: “Automated decision systems are currently being used by public agencies, reshaping how criminal justice systems work via risk assessment algorithms and predictive policing, optimizing energy use in critical infrastructure through AI-driven resource allocation, and changing our employment and educational systems through automated evaluation tools and matching algorithms. Researchers, advocates, and policymakers are debating when and where automated decision systems are appropriate, including whether they are appropriate at all in particularly sensitive domains.

Questions are being raised about how to fully assess the short- and long-term impacts of these systems, whose interests they serve, and whether they are sufficiently sophisticated to contend with complex social and historical contexts. These questions are essential, and developing strong answers has been hampered in part by a lack of information and access to the systems under deliberation. Many such systems operate as “black boxes” – opaque software tools working outside the scope of meaningful scrutiny and accountability. This is concerning, since an informed policy debate is impossible without the ability to understand which existing systems are being used, how they are employed, and whether these systems cause unintended consequences. The Algorithmic Impact Assessment (AIA) framework proposed in this report is designed to support affected communities and stakeholders as they seek to assess the claims made about these systems, and to determine where – or if – their use is acceptable….

KEY ELEMENTS OF A PUBLIC AGENCY ALGORITHMIC IMPACT ASSESSMENT

1. Agencies should conduct a self-assessment of existing and proposed automated decision systems, evaluating potential impacts on fairness, justice, bias, or other concerns across affected communities;

2. Agencies should develop meaningful external researcher review processes to discover, measure, or track impacts over time;

3. Agencies should provide notice to the public disclosing their definition of “automated decision system,” existing and proposed systems, and any related self-assessments and researcher review processes before the system has been acquired;

4. Agencies should solicit public comments to clarify concerns and answer outstanding questions; and

5. Governments should provide enhanced due process mechanisms for affected individuals or communities to challenge inadequate assessments or unfair, biased, or otherwise harmful system uses that agencies have failed to mitigate or correct….(More)”.

AI And Open Data Show Just How Often Cars Block Bus And Bike Lanes


Eillie Anzilotti in Fast Company: “…While anyone who bikes or rides a bus in New York City knows intuitively that the lanes are often blocked, there’s been little data to back up that feeling, apart from the fact that last year the NYPD issued 24,000 tickets for vehicles blocking bus lanes and around 79,000 to cars in the bike lane. By building the algorithm, Bell exemplifies what engaged citizenship and the productive use of open data look like. The New York City Department of Transportation maintains several hundred video cameras throughout the city; those cameras feed images in real time to the DOT’s open-data portal. Bell downloaded a week’s worth of footage from that portal to analyze.

To build his computer algorithm to do the analysis, he fed around 2,000 images of buses, cars, pedestrians, and vehicles like UPS trucks into TensorFlow, Google’s open-source framework that the tech giant is using to train autonomous vehicles to recognize other road users. “Because of the push into AVs, machine learning in general and neural networks have made lots of progress, because they have to answer the same questions of: What is this vehicle, and what is it going to do?” Bell says. After several rounds of processing, Bell arrived at an algorithm that could reliably determine whether a vehicle at the bus stop was, in fact, a bus, or something else that wasn’t supposed to be there.
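Bell's code isn't reproduced in the article; the sketch below illustrates only the post-processing step one might run on a trained detector's per-frame output — deciding whether a frame shows a non-bus vehicle occupying the bus stop. The detection format, class names, and confidence threshold are all assumptions:

```python
# Illustrative post-processing of object-detection output for bus-stop
# camera frames. The detection format (label + confidence) is assumed;
# a real pipeline would get these from a trained TensorFlow model.

ALLOWED = {"bus"}            # vehicles permitted in the bus stop
IGNORED = {"pedestrian"}     # detections that don't count as blocking
CONFIDENCE_THRESHOLD = 0.5   # discard low-confidence detections

def blocking_detections(frame_detections):
    """Return detections of vehicles improperly occupying the bus stop."""
    return [
        d for d in frame_detections
        if d["confidence"] >= CONFIDENCE_THRESHOLD
        and d["label"] not in ALLOWED
        and d["label"] not in IGNORED
    ]

frames = [
    [{"label": "bus", "confidence": 0.97}],                                   # legal
    [{"label": "car", "confidence": 0.91},
     {"label": "pedestrian", "confidence": 0.88}],                            # blocked
    [{"label": "delivery_truck", "confidence": 0.45}],                        # too uncertain
]
blocked = [i for i, f in enumerate(frames) if blocking_detections(f)]
print(blocked)  # → [1]
```

Aggregating flags like these across a week of footage is what turns raw camera feeds into the kind of blockage statistics the article describes.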

As cities and governments, spurred by organizations like OpenGov, have moved to embrace transparency and open data, the question remains: So, what do you do with it?

For Bell, the answer is that citizens can use it to empower themselves. “I’m a little uncomfortable with cameras and surveillance in cities,” Bell says. “But agencies like the NYPD and DOT have already made the decision to put the cameras up. We don’t know the positive and negative outcomes if more and more data from cameras is opened to the public, but if the cameras are going in, we should know what data they’re collecting and be able to access it,” he says. He’s made his algorithm publicly available in the hopes that more people will use data to investigate the issue on their own streets, and perhaps in other cities….Bell is optimistic that open data can empower more citizens to identify issues in their own cities and bring a case for why they need to be addressed….(More)”.

Finding a more human government


Report by the Centre for Public Impact: “…embarked upon a worldwide project to find out how governments can strengthen their legitimacy. Amidst the turbulence and unpredictability of recent years, there are many contemporary accounts of people feeling angry, cynical or ambivalent about government.

While much has been said about the personalities of leaders and the rise of populist parties, what’s less clear is what governments could really do to strengthen legitimacy, a concept most agree remains integral to worldwide stability and peace. To find out what legitimacy means to people today and how it could be strengthened, we decided to break out of the usual circles of influence and ensure our project heard directly from citizens from around the world. People were open and honest about the struggle for someone in government to understand and to listen. Some shed tears while others felt angry about how their voices and identities seemed undervalued. Everyone, however, wanted to show how it was still very possible to build a stronger relationship and understanding between governments and people, even if the day-to-day actions of government were not always popular.

The aim of this paper is not to provide the definitive model for legitimacy. Instead, we have sought to be open about what we heard, stay true to people’s views and shine a light on the common themes that could help governments have better conversations about building legitimacy into all their systems and with the support of their citizens.

We gathered case studies to show how this was already happening and found positive examples in places we didn’t expect. The importance of governments showing their human side – even in our age of AI and robotics – emerged as such a key priority, and is why we called this paper Finding a more human government.

This is a conversation that has only just begun. …. To see what others are saying, do take a look at our website www.findinglegitimacy.centreforpublicimpact.org”

How the government will operate in 2030


Darrell West at the Hill: “Imagine it is 2030 and you are a U.S. government employee working from home. With the assistance of the latest technology, you participate in video calls with clients and colleagues, augment your job activities through artificial intelligence and a personal digital assistant, work through collaboration software, and regularly get rated on a one-to-five scale by clients regarding your helpfulness, follow-through, and task completion.

How did you — and the government — get here? The sharing economy that unfolded in 2018 has revolutionized the public-sector workforce. The days when federal employees were subject to a centrally directed Office of Personnel Management that oversaw permanent, full-time workers sitting in downtown office buildings are long gone. In their place is a remote workforce staffed by a mix of short- and long-term employees. This has dramatically improved worker productivity and satisfaction.

In the new digital world that has emerged, the goal is to use technology to make employees accountable. Gone are 20- or 30-year careers in the federal bureaucracy. Political leaders have always preached the virtue of running government like a business, and the success of Uber, Airbnb, and WeWork has persuaded them to focus on accountability and performance.

Companies such as Facebook demonstrated they could run large and complex organizations with fewer than 20,000 employees, and the federal government followed suit in the late 2020s. Now, workers deploy the latest tools of artificial intelligence, virtual reality, data analytics, robots, driverless cars, and digital assistants to improve the government. Unlike the widespread mistrust and cynicism that had poisoned attitudes in the decades before, the general public now sees government as a force for achieving positive results.

Many parts of the federal government are decentralized and mid-level employees are given greater authority to make decisions — but are subject to digital ratings that keep them accountable for their performance. The U.S. government borrowed this technique from China, where airport authorities in 2018 installed digital devices that allowed visitors to rate the performance of individual passport officers after every encounter. The reams of data have enabled Chinese authorities to fire poor performers and make sure foreign visitors see a friendly and competent face at the Beijing International Airport.

Alexa-like devices are given to all federal employees. The devices are used to keep track of leave time, file reimbursement requests, request time off, and complete a range of routine tasks that used to take employees hours. Through voice-activated commands, they navigate these mundane tasks quickly and efficiently. No one can believe the mountains of paperwork required just a decade ago….(More)”.

How Refugees Are Helping Create Blockchain’s Brand New World


Jessi Hempel at Wired: “Though best known for underpinning volatile cryptocurrencies, like Bitcoin and Ethereum, blockchain technology has a number of qualities which make it appealing for record-keeping. A distributed ledger doesn’t depend on a central authority to verify its existence, or to facilitate transactions within it, which makes it less vulnerable to tampering. By using applications that are built on the ‘chain, individuals may be able to build up records over time, use those records across borders as a form of identity—essentially creating the trust they need to interact with the world, without depending on a centralized authority, like a government or a bank, to vouch for them.
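To make the record-keeping property concrete, here is a minimal, hypothetical hash-chained ledger in Python: each entry embeds the hash of the previous entry, so altering any past record invalidates every later link. This illustrates tamper-evidence only; a real blockchain adds signatures, peer distribution, and consensus:

```python
import hashlib
import json

# Minimal hash-chained ledger sketch. Each appended record stores the
# SHA-256 hash of the previous record, so any tampering with history
# breaks the chain and is detectable on verification.

def record_hash(record):
    """Deterministic hash of a record (sorted keys for stable JSON)."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append(ledger, data):
    """Append a new record linked to the hash of the previous one."""
    prev = record_hash(ledger[-1]) if ledger else "0" * 64
    ledger.append({"data": data, "prev_hash": prev})

def verify(ledger):
    """Check every link in the chain; False if any record was altered."""
    return all(
        ledger[i]["prev_hash"] == record_hash(ledger[i - 1])
        for i in range(1, len(ledger))
    )

ledger = []
append(ledger, {"id": "refugee-001", "event": "voucher issued"})
append(ledger, {"id": "refugee-001", "event": "voucher redeemed"})
print(verify(ledger))                      # → True
ledger[0]["data"]["event"] = "tampered"    # rewrite history...
print(verify(ledger))                      # → False
```

It is this property — that a record accumulated over time cannot be quietly rewritten — that makes the chain plausible as a portable identity or transaction history.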

For now, these efforts are small experiments. In Finland, the Finnish Immigration Service offers refugees a prepaid Mastercard developed by the Helsinki-based startup MONI that also links to a digital identity, composed of the record of one’s financial transactions, which is stored on the blockchain. In Moldova, the government is working with digital identification experts from the United Nations Office for Project Services (UNOPS) to brainstorm ways to use blockchain to provide children living in rural areas with a digital identity, so it’s more difficult for traffickers to smuggle them across borders.

Among the more robust programs is a pilot the United Nations World Food Program (WFP) launched in Jordan last May. Syrian refugees stationed at the Azraq Refugee Camp receive vouchers to shop at the local grocery store. The WFP integrated blockchain into its biometric authentication technology, so Syrian refugees can cash in their vouchers at the supermarket by staring into a retina scanner. These transactions are recorded on a private Ethereum-based blockchain, called Building Blocks. Because the blockchain eliminates the need for WFP to pay banks to facilitate transactions, Building Blocks could save the WFP as much as $150,000 each month in bank fees in Jordan alone. The program has been so successful that by the end of the year, the WFP plans to expand the technology throughout Jordan. Blockchain enthusiasts imagine a future in which refugees can access more than just food vouchers, accumulating a transaction history that could stand in as a credit history when they attempt to resettle….

But in the rush to apply blockchain technology to every problem, many point out that relying on the ledger may have unintended consequences. As the Blockchain for Social Impact chief technology officer at ConsenSys, Robert Greenfeld IV writes, blockchain-based identity “isn’t a silver bullet, and if we don’t think about it/build it carefully, malicious actors could still capitalize on it as an element of control.” If companies rely on private blockchains, he warns, there’s a danger that the individual permissions will prevent these identity records from being used in multiple places. (Many of these projects, like the UNWFP project, are built on private blockchains so that organizations can exert more control over their development.) “If we don’t start to collaborate together with populations, we risk ending up with a bunch of siloed solutions,” says Greenfeld.

For his part, Greenfeld suggests governments could easily use state-sponsored machine learning algorithms to monitor public blockchain activity. But as bitcoin enthusiasts branch out of their get-rich-quick schemes to wrestle with how to make the web more equitable for everyone, they have the power to craft a world of their own devising. The early web should be a lesson to the bitcoin enthusiasts as they promote the blockchain’s potential. Right now we have the power to determine its direction; the dangers exist, but the potential is enormous….(More)”

Artificial Intelligence and the Need for Data Fairness in the Global South


Medium blog by Yasodara Cordova: “…The data collected by industry represents AI opportunities for governments to improve their services through innovation. Data-based intelligence promises to increase the efficiency of resource management by improving transparency, logistics, social welfare distribution — and virtually every government service. E-government enthusiasm took off with the realization of the possible applications, such as using AI to fight corruption by automating the fraud-tracking capabilities of cost-control tools. Controversially, the AI enthusiasm has spread to the distribution of social benefits, optimization of tax oversight and control, credit scoring systems, crime prediction systems, and other applications based on personal and sensitive data collection, especially in countries that do not have comprehensive privacy protections.

There are so many potential applications, society may operate very differently in ten years when the “datafixation” has advanced beyond citizen data and into other applications such as energy and natural resource management. However, many countries in the Global South are not being given necessary access to their countries’ own data.

Useful data are everywhere, but only some can take advantage of them. Beyond smartphones, data can be collected from IoT components in common spaces. Not restricted to urban spaces, data collection includes rural technology like sensors installed in tractors. However, even when the information relates to issues of public importance in developing countries (like data taken from the road mesh, or from vital resources like water and land), it stays hidden under contract rules, and public citizens cannot access, and therefore benefit from, it. This arrangement keeps the public uninformed about their country’s operations. The data collection and distribution frameworks are not built towards healthy partnerships between industry and government, preventing countries from realizing the potential outlined in the previous paragraph.

The data necessary to the development of better cities, public policies, and common interest cannot be leveraged if kept in closed silos, yet access often costs more than is justifiable. Data are a primordial resource to all stages of new technology, especially tech adoption and integration, so the necessary long term investment in innovation needs a common ground to start with. The mismatch between the pace of the data collection among big established companies and small, new, and local businesses will likely increase with time, assuming no regulation is introduced for equal access to collected data….

Currently, data independence remains restricted to discussions on the technological infrastructure that supports data extraction. Privacy discussions focus on personal data rather than the digital accumulation of strategic data in closed silos — a necessary discussion not yet addressed. The national interest of data is not being addressed in a framework of economic and social fairness. Access to data, from a policy-making standpoint, needs to find a balance between the extremes of public, open access and limited, commercial use.

A final, but important note: the vast majority of social media act like silos. APIs play an important role in corporate business models, where industry controls the data it collects without reward, let alone user transparency. Negotiation of the specification of APIs to make data a common resource should be considered, for such an effort may align with the citizens’ interest….(More)”.

How to Make A.I. That’s Good for People


Fei-Fei Li in the New York Times: “For a field that was not well known outside of academia a decade ago, artificial intelligence has grown dizzyingly fast. Tech companies from Silicon Valley to Beijing are betting everything on it, venture capitalists are pouring billions into research and development, and start-ups are being created on what seems like a daily basis. If our era is the next Industrial Revolution, as many claim, A.I. is surely one of its driving forces.

It is an especially exciting time for a researcher like me. When I was a graduate student in computer science in the early 2000s, computers were barely able to detect sharp edges in photographs, let alone recognize something as loosely defined as a human face. But thanks to the growth of big data, advances in algorithms like neural networks and an abundance of powerful computer hardware, something momentous has occurred: A.I. has gone from an academic niche to the leading differentiator in a wide range of industries, including manufacturing, health care, transportation and retail.

I worry, however, that enthusiasm for A.I. is preventing us from reckoning with its looming effects on society. Despite its name, there is nothing “artificial” about this technology — it is made by humans, intended to behave like humans and affects humans. So if we want it to play a positive role in tomorrow’s world, it must be guided by human concerns.

I call this approach “human-centered A.I.” It consists of three goals that can help responsibly guide the development of intelligent machines.

First, A.I. needs to reflect more of the depth that characterizes our own intelligence….

No technology is more reflective of its creators than A.I. It has been said that there are no “machine” values at all, in fact; machine values are human values. A human-centered approach to A.I. means these machines don’t have to be our competitors, but partners in securing our well-being. However autonomous our technology becomes, its impact on the world — for better or worse — will always be our responsibility….(More).

Artificial intelligence could identify gang crimes—and ignite an ethical firestorm


Matthew Hutson at Science: “When someone roughs up a pedestrian, robs a store, or kills in cold blood, police want to know whether the perpetrator was a gang member: Do they need to send in a special enforcement team? Should they expect a crime in retaliation? Now, a new algorithm is trying to automate the process of identifying gang crimes. But some scientists warn that far from reducing gang violence, the program could do the opposite by eroding trust in communities, or it could brand innocent people as gang members.

That has created some tensions. At a presentation of the new program this month, one audience member grew so upset he stormed out of the talk, and some of the creators of the program have been tight-lipped about how it could be used….

For years, scientists have been using computer algorithms to map criminal networks, or to guess where and when future crimes might take place, a practice known as predictive policing. But little work has been done on labeling past crimes as gang-related.

In the new work, researchers developed a system that can identify a crime as gang-related based on only four pieces of information: the primary weapon, the number of suspects, and the neighborhood and location (such as an alley or street corner) where the crime took place. Such analytics, which can help characterize crimes before they’re fully investigated, could change how police respond, says Doug Haubert, city prosecutor for Long Beach, California, who has authored strategies on gang prevention.

To classify crimes, the researchers invented something called a partially generative neural network. A neural network is made of layers of small computing elements that process data in a way reminiscent of the brain’s neurons. A form of machine learning, it improves based on feedback—whether its judgments were right. In this case, researchers trained their algorithm using data from the Los Angeles Police Department (LAPD) in California from 2014 to 2016 on more than 50,000 gang-related and non–gang-related homicides, aggravated assaults, and robberies.

The researchers then tested their algorithm on another set of LAPD data. The network was “partially generative,” because even when it did not receive an officer’s narrative summary of a crime, it could use the four factors noted above to fill in that missing information and then use all the pieces to infer whether a crime was gang-related. Compared with a stripped-down version of the network that didn’t use this novel approach, the partially generative algorithm reduced errors by close to 30%, the team reported at the Artificial Intelligence, Ethics, and Society (AIES) conference this month in New Orleans, Louisiana. The researchers have not yet tested their algorithm’s accuracy against trained officers.
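The article does not detail the partially generative architecture itself; as an illustrative sketch of the input side only, the four features described above could be encoded into a numeric vector for a neural network like this (the category values are invented examples, not the actual LAPD schema):

```python
# Illustrative encoding of the four crime features into a feature
# vector suitable for a neural-network classifier. The category lists
# are invented examples, not the actual LAPD data schema.

WEAPONS = ["handgun", "knife", "blunt_object", "none"]
NEIGHBORHOODS = ["northside", "southside", "downtown"]
LOCATIONS = ["street_corner", "alley", "parking_lot"]
MAX_SUSPECTS = 10  # suspect counts above this are capped

def encode_crime(weapon, n_suspects, neighborhood, location):
    """One-hot encode the categoricals and scale the suspect count to [0, 1]."""
    vec = [1.0 if w == weapon else 0.0 for w in WEAPONS]
    vec.append(min(n_suspects, MAX_SUSPECTS) / MAX_SUSPECTS)
    vec += [1.0 if n == neighborhood else 0.0 for n in NEIGHBORHOODS]
    vec += [1.0 if loc == location else 0.0 for loc in LOCATIONS]
    return vec

x = encode_crime("handgun", 3, "downtown", "alley")
print(len(x))  # → 11  (4 weapon slots + 1 count + 3 neighborhoods + 3 locations)
```

Vectors like these would be the network's input; the "partially generative" step described above additionally infers the missing narrative information before classification.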

It’s an “interesting paper,” says Pete Burnap, a computer scientist at Cardiff University who has studied crime data. But although the predictions could be useful, it’s possible they would be no better than officers’ intuitions, he says. Haubert agrees, but he says that having the assistance of data modeling could sometimes produce “better and faster results.” Such analytics, he says, “would be especially useful in large urban areas where a lot of data is available.”…(More).