Driven to safety — it’s time to pool our data


Kevin Guo at TechCrunch: “…Anyone with experience in the artificial intelligence space will tell you that the quality and quantity of training data are among the most important inputs in building real-world-functional AI. This is why today’s large technology companies continue to collect and keep detailed consumer data, despite recent public backlash. From search engines, to social media, to self-driving cars, data — in some cases even more than the underlying technology itself — is what drives value in today’s technology companies.

It should be no surprise then that autonomous vehicle companies do not publicly share data, even in instances of deadly crashes. When it comes to autonomous vehicles, the public interest (making safe self-driving cars available as soon as possible) is clearly at odds with corporate interests (making as much money as possible on the technology).

We need to create industry and regulatory environments in which autonomous vehicle companies compete based upon the quality of their technology — not just upon their ability to spend hundreds of millions of dollars to collect and silo as much data as possible (yes, this is how much gathering this data costs). In today’s environment the inverse is true: autonomous car manufacturers are focused on gathering as many miles of data as possible, with the intention of feeding more information into their models than their competitors, all the while avoiding working together….

This data is diverse and complex, yet public — I am not suggesting that companies hand over private, privileged data, but that they actively pool and combine what their cars are seeing. There’s a reason that many of the autonomous car companies are driving millions of virtual miles — they’re attempting to get as much active driving data as they can. Beyond the fact that they drove those miles, what truly makes that data something that they have to hoard? By sharing these miles, by seeing as much of the world in as much detail as possible, these companies can focus on making smarter, better autonomous vehicles and bring them to market faster.

If you’re reading this and thinking it’s deeply unfair, I encourage you to once again consider that 40,000 people are preventably dying every year in America alone. If you are not compelled by the massive life-saving potential of the technology, consider that publicly licensable self-driving data sets would accelerate innovation by removing a substantial portion of the capital barrier to entry in the space and increasing competition….(More)”

Blockchain systems are tracking food safety and origins


Nir Kshetri at The Conversation: “When a Chinese consumer buys a package labeled “Australian beef,” there’s only a 50-50 chance the meat inside is, in fact, Australian beef. It could just as easily contain rat, dog, horse or camel meat – or a mixture of them all. It’s gross and dangerous, but also costly.

Fraud in the global food industry is a multi-billion-dollar problem that has lingered for years, duping consumers and even making them ill. Food manufacturers around the world are concerned – as many as 39 percent of them are worried that their products could be easily counterfeited, and 40 percent say food fraud is hard to detect.

In researching blockchain for more than three years, I have become convinced that this technology’s potential to prevent fraud and strengthen security could fight agricultural fraud and improve food safety. Many companies agree, and are already running various tests, including tracking wine from grape to bottle and even following individual coffee beans through international trade.

Tracing food items

An early trial of a blockchain system to track food from farm to consumer was in 2016, when Walmart collected information about pork being raised in China, where consumers are rightly skeptical about sellers’ claims of what their food is and where it’s from. Employees at a pork farm scanned images of farm inspection reports and livestock health certificates, storing them in a secure online database where the records could not be deleted or modified – only added to.

As the animals moved from farm to slaughter to processing, packaging and then to stores, the drivers of the freight trucks played a key role. At each step, they would collect documents detailing the shipment, storage temperature and other inspections and safety reports, and official stamps as authorities reviewed them – just as they did normally. In Walmart’s test, however, the drivers would photograph those documents and upload them to the blockchain-based database. The company controlled the computers running the database, but government agencies’ systems could also be involved, to further ensure data integrity.
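
To make the mechanics concrete, here is a minimal sketch in Python of the kind of append-only, hash-chained record keeping described above, where each new entry locks in everything before it. This is a hypothetical illustration of the general technique, not Walmart's production system (which was reportedly built on an enterprise blockchain platform); the ProvenanceLedger class and all field names are invented for this example.

```python
# A minimal, hypothetical sketch of an append-only provenance ledger.
# Each entry embeds the SHA-256 hash of the previous entry, so earlier
# records cannot be deleted or modified without breaking the chain.
import hashlib
import json
import time


class ProvenanceLedger:
    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> dict:
        """Add a new custody record (e.g. a scanned inspection report)."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "record": record,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute every hash; any tampering breaks the chain."""
        prev_hash = "0" * 64
        for entry in self.entries:
            unhashed = {k: v for k, v in entry.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(unhashed, sort_keys=True).encode()
            ).hexdigest()
            if entry["hash"] != expected or entry["prev_hash"] != prev_hash:
                return False
            prev_hash = entry["hash"]
        return True


# Each actor in the supply chain appends records; nobody can edit or delete.
ledger = ProvenanceLedger()
ledger.append({"step": "farm", "doc": "livestock_health_certificate.jpg"})
ledger.append({"step": "transport", "storage_temp_c": 2.5, "doc": "shipment.jpg"})
ledger.append({"step": "store", "doc": "delivery_receipt.jpg"})
print(ledger.verify())  # True until any past entry is altered
```

Because each entry embeds the hash of its predecessor, changing any earlier document would invalidate every hash that follows, which is what makes such a database effectively add-only and auditable by anyone holding the chain.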

As the pork was packaged for sale, a sticker was put on each container, displaying a smartphone-readable code that would link to that meat’s record on the blockchain. Consumers could scan the code right in the store and assure themselves that they were buying exactly what they thought they were. More recent advances in the technology of the stickers themselves have made them more secure and counterfeit-resistant.

Walmart did similar tests on mangoes imported to the U.S. from Latin America. The company found that it took only 2.2 seconds for consumers to find out an individual fruit’s weight, variety, growing location, time it was harvested, date it passed through U.S. customs, when and where it was sliced, which cold-storage facility the sliced mango was held in and for how long it waited before being delivered to a store….(More)”.

Public Attitudes Toward Computer Algorithms


Aaron Smith at the Pew Research Center: “Algorithms are all around us, utilizing massive stores of data and complex analytics to make decisions with often significant impacts on humans. They recommend books and movies for us to read and watch, surface news stories they think we might find relevant, estimate the likelihood that a tumor is cancerous and predict whether someone might be a criminal or a worthwhile credit risk. But despite the growing presence of algorithms in many aspects of daily life, a Pew Research Center survey of U.S. adults finds that the public is frequently skeptical of these tools when used in various real-life situations.

This skepticism spans several dimensions. At a broad level, 58% of Americans feel that computer programs will always reflect some level of human bias – although 40% think these programs can be designed in a way that is bias-free. And in various contexts, the public worries that these tools might violate privacy, fail to capture the nuance of complex situations, or simply put the people they are evaluating in an unfair situation. Public perceptions of algorithmic decision-making are also often highly contextual. The survey shows that otherwise similar technologies can be viewed with support or suspicion depending on the circumstances or on the tasks they are assigned to do….

The following are among the major findings.

The public expresses broad concerns about the fairness and acceptability of using computers for decision-making in situations with important real-world consequences

By and large, the public views these examples of algorithmic decision-making as unfair to the people the computer-based systems are evaluating. Most notably, only around one-third of Americans think that the video job interview and personal finance score algorithms would be fair to job applicants and consumers. When asked directly whether they think the use of these algorithms is acceptable, a majority of the public says that they are not acceptable. Two-thirds of Americans (68%) find the personal finance score algorithm unacceptable, and 67% say the computer-aided video job analysis algorithm is unacceptable….

Attitudes toward algorithmic decision-making can depend heavily on context

Despite the consistencies in some of these responses, the survey also highlights the ways in which Americans’ attitudes toward algorithmic decision-making can depend heavily on the context of those decisions and the characteristics of the people who might be affected….

When it comes to the algorithms that underpin the social media environment, users’ comfort level with sharing their personal information also depends heavily on how and why their data are being used. A 75% majority of social media users say they would be comfortable sharing their data with those sites if it were used to recommend events they might like to attend. But that share falls to just 37% if their data are being used to deliver messages from political campaigns.

In other instances, different types of users offer divergent views about the collection and use of their personal data. For instance, about two-thirds of social media users younger than 50 find it acceptable for social media platforms to use their personal data to recommend connecting with people they might want to know. But that view is shared by fewer than half of users ages 65 and older….(More)”.

Behavioural Insights Toolkit and Ethical Guidelines for Policy Makers


Consultation Document by the OECD: “BASIC (Behaviour, Analysis, Strategies, Intervention, and Change) is an overarching framework for applying behavioural insights to public policy from the beginning to the end of the policy cycle. It is built on five stages that guide the application of behavioural insights, and it serves as a repository of best practices, proofs of concept and methodological standards for behavioural insights practitioners and policymakers interested in applying behavioural insights to public policy. Crucially, BASIC offers an approach to problem scoping that can be of relevance for any policymaker and practitioner when addressing a policy problem, be it behavioural or systemic.

The document provides an overview of the rationale, applicability and key tenets of BASIC. It walks practitioners through the five BASIC sequential stages with examples, and presents detailed ethical guidelines to be considered at each stage.

It has been developed by the OECD in partnership with Dr Pelle Guldborg Hansen of Roskilde University, Denmark. This version benefitted from feedback provided by the participants in the Western Cape Government – OECD Behavioural Insights Conference held in Cape Town on 27-28 September 2018….(More)”

Artificial Intelligence: Risks to Privacy and Democracy


Karl Manheim and Lyric Kaplan at Yale Journal of Law and Technology: “A “Democracy Index” is published annually by the Economist. For 2017, it reported that half of the world’s countries scored lower than the previous year. This included the United States, which was demoted from “full democracy” to “flawed democracy.” The principal factor was “erosion of confidence in government and public institutions.” Interference by Russia and voter manipulation by Cambridge Analytica in the 2016 presidential election played a large part in that public disaffection.

Threats of these kinds will continue, fueled by growing deployment of artificial intelligence (AI) tools to manipulate the preconditions and levers of democracy. Equally destructive is AI’s threat to decisional and informational privacy. AI is the engine behind Big Data Analytics and the Internet of Things. While conferring some consumer benefit, their principal function at present is to capture personal information, create detailed behavioral profiles and sell us goods and agendas. Privacy, anonymity and autonomy are the main casualties of AI’s ability to manipulate choices in economic and political decisions.

The way forward requires greater attention to these risks at the national level, and attendant regulation. In its absence, technology giants, all of whom are heavily investing in and profiting from AI, will dominate not only the public discourse, but also the future of our core values and democratic institutions….(More)”.

Constitutional democracy and technology in the age of artificial intelligence


Paper by Paul Nemitz: “Given the foreseeable pervasiveness of artificial intelligence (AI) in modern societies, it is legitimate and necessary to ask the question how this new technology must be shaped to support the maintenance and strengthening of constitutional democracy.

This paper first describes the four core elements of today’s digital power concentration, which need to be seen in cumulation and which, seen together, are a threat both to democracy and to functioning markets. It then recalls the experience with the lawless Internet, the relationship between technology and the law as it has developed in the Internet economy, and the experience with the GDPR, before moving on to the key question for AI in democracy: which of the challenges of AI can safely and with good conscience be left to ethics, and which need to be addressed by rules which are enforceable and encompass the legitimacy of democratic process, in other words, laws.

The paper closes with a call for a new culture of incorporating the principles of democracy, rule of law and human rights by design in AI and a three-level technological impact assessment for new technologies like AI as a practical way forward for this purpose….(More)”.

Information and Technology in Open Justice


Introduction by Mila Gasco-Hernandez and Carlos Jimenez-Gomez to the Special Issue of Social Science Computer Review: “The topic of open justice has been little explored, perhaps because it has traditionally been considered a “closed” field. There is still a need to know what open justice really means, to explore the use of information and technology in enabling open justice, and to understand what openness in the judiciary can do to improve government, society, and democracy. This special issue aims to shed light on the concept of openness in the judiciary by identifying and analyzing initiatives across the world….(More)”.

Global Indicators of Regulatory Governance


The World Bank: “The Global Indicators of Regulatory Governance project is an initiative of the World Bank’s Global Indicators Group, which produces a range of datasets and benchmarking products on regulations and business activity around the world. These datasets include Doing Business, Enterprise Surveys, Enabling the Business of Agriculture, and Women, Business and the Law.

The Global Indicators of Regulatory Governance project explores how governments interact with the public when shaping regulations that affect their business community. Concerned stakeholders could be professional associations, civic groups or foreign investors. The project charts how interested groups learn about new regulations being considered, and the extent to which they are able to engage with officials on the content. It also measures whether or not governments assess the possible impact of new regulations in their countries (including economic, social and environmental considerations) and whether those calculations form part of the public consultation. Finally, the Global Indicators of Regulatory Governance capture two additional components of a predictable regulatory environment: the ability of stakeholders to challenge regulations, and the ability of people to access all the laws and regulations currently in force in one consolidated place.

The project (http://rulemaking.worldbank.org/en/about-us) grew out of an increasing recognition of the importance of transparency and accountability in government actions. Citizen access to the government rulemaking process is central to the creation of a business environment in which investors make long-range plans and investments. Greater levels of consultation are also associated with a higher quality of regulation….(More)”

Regulating the Regulators: Tracing the Emergence of the Political Transparency Laws in Chile


Conference Paper by Bettina Schorr: “Due to high social inequalities and weak public institutions, political corruption and the influence of business elites on policy-makers are widespread in the Andean region. The consequences for sustainable development are serious: regulation limiting harmful business activities or (re-)distributive reforms are difficult to achieve, and public resources often end up as private gains instead of serving development purposes.

Given international and domestic pressures, political corruption has reached the top of the political agendas in many countries. However, frequently transparency goals do not materialize into new binding policies or, when reforms are enacted, they suffer from severe implementation gaps.

The paper analyses transparency politics in Chile, where a series of political transparency reforms has been implemented since 2014. Hence, Chile counts among the few successful cases in the region. By tracing the process that led to the emergence of new transparency policies in Chile, the paper elaborates an analytical framework for explaining institutional innovation in the case of political transparency. In particular, the study emphasizes the importance of civil society actors’ involvement in the whole policy cycle, particularly in the stages of formulation, implementation and evaluation….(More)”.

NHS Pulls Out Of Data-Sharing Deal With Home Office Immigration Enforcers


Jasmin Gray at Huffington Post: “The NHS has pulled out of a controversial data-sharing arrangement with the Home Office which saw patients’ confidential details passed on to immigration enforcers.

In May, the government suspended the ‘memorandum of understanding’ agreement between the health service and the Home Office after MPs, doctors and health charities warned it was leaving seriously ill migrants too afraid to seek medical treatment. 

But on Tuesday, NHS Digital announced that it was cutting itself out of the agreement altogether. 

“NHS Digital has received a revised narrowed request from the Home Office and is discussing this request with them,” a spokesperson for the health service’s data branch said, adding that they have “formally closed-out our participation” in the previous memorandum of understanding.

The decision, they added, took into account the anxieties of “multiple stakeholder communities” to ensure that the agreement made by the government was respected.

Meanwhile, the Home Office confirmed it was working to agree a new deal with NHS Digital which would only allow it to make requests for data about migrants “facing deportation action because they have committed serious crimes, or where information [is] necessary to protect someone’s welfare”.

The move has been welcomed by campaigners, with Migrants’ Rights Network director Rita Chadra saying that many migrants had missed out on “the right to privacy and access to healthcare” because of the data-sharing mechanism….(More)”.