Optimal Scope for Free Flow of Non-Personal Data in Europe


Paper by Simon Forge for the European Parliament Think Tank: “Data is not static in a personal/non-personal classification – with modern analytic methods, certain non-personal data can help to generate personal data – so the distinction may become blurred. Thus, de-anonymisation techniques, combined with advances in artificial intelligence (AI) and the manipulation of large datasets, will become a major issue. In some new applications, such as smart cities and connected cars, the enormous volumes of data gathered may be used for personal information as well as for non-personal functions, so such data may cross over from the technical and non-personal into the personal domain. A debate is taking place on whether current EU restrictions on confidentiality of personal private information should be relaxed so as to include personal information in free and open data flows. However, it is unlikely that a loosening of such rules will be positive for the growth of open data. Public distrust of open data flows may be exacerbated because of fears of potential commercial misuse of such data, as well as leakages, cyberattacks, and so on. The proposed recommendations are: to promote the use of open data licences to build trust and openness, promote sharing of private enterprises’ data within vertical sectors and across sectors to increase the volume of open data through incentive programmes, support testing for contamination of open data mixed with personal data to ensure open data is scrubbed clean – and so reinforce public confidence, ensure anti-competitive behaviour does not compromise the open data initiative….(More)”.
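The crossover the paper describes – "non-personal" records becoming personal once they are linked to other data – can be illustrated with a minimal sketch. Everything below is hypothetical (the datasets, field names, and values are invented for illustration, not drawn from the paper): joining an anonymised sensor dataset to a public directory on shared quasi-identifiers can re-attach names.

```python
# Minimal sketch of linkage-based re-identification: an "anonymised"
# mobility dataset is joined to a public directory on two shared
# quasi-identifiers (home district and commute window), re-attaching names.
# All records and field names are hypothetical.

anonymised_trips = [
    {"district": "N4", "commute_window": "07:30-08:00", "route": "tram 12"},
    {"district": "S2", "commute_window": "09:00-09:30", "route": "bus 40"},
]

public_directory = [
    {"name": "A. Jansen", "district": "N4", "commute_window": "07:30-08:00"},
    {"name": "B. Rossi", "district": "S2", "commute_window": "09:00-09:30"},
]

def reidentify(trips, directory):
    """Link each trip to directory entries sharing both quasi-identifiers."""
    matches = []
    for trip in trips:
        candidates = [p["name"] for p in directory
                      if p["district"] == trip["district"]
                      and p["commute_window"] == trip["commute_window"]]
        # A unique candidate means the "non-personal" trip record has
        # effectively become personal data.
        if len(candidates) == 1:
            matches.append((candidates[0], trip["route"]))
    return matches

print(reidentify(anonymised_trips, public_directory))
```

This is why the paper recommends testing open datasets for "contamination" with personal data before release: the risk lies not in any single field but in combinations of fields that are unique to one person.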

Digital Skills Toolkit


Report by the International Telecommunication Union: “This toolkit provides stakeholders with guidance on developing a digital skills strategy. It is intended for policymakers, along with partners in the private sector, non-governmental organizations, and academia. Its overarching aim is to facilitate the development of a comprehensive digital skills strategy at country level. It is also possible to use this guide to focus on selected priorities that require a fresh approach.

Why do countries need a digital skills strategy?

Digital skills underpin nearly every aspect of work and life. From filling in a government form to communicating for work, it is difficult to find a job or life-task that does not require a basic level of digital functioning. And with new technologies emerging every day, we need lifelong opportunities to learn new skills that will allow us to succeed in an era of ongoing digital transformation. Digital skills are essential in opening the door to a wide range of opportunities in the 21st century. Countries that implement comprehensive digital skills strategies ensure their populations have the skills they need to be more employable, productive, creative, and successful while ensuring they remain safe, secure and healthy online. Critically, digital skills strategies need to be updated regularly to respond to the emergence of new technologies and their impact on the digital economy and digital society. The digital economy has created a huge shortage of people with the necessary digital skills. ITU research shows that there will be tens of millions of jobs for people with advanced digital skills in the coming years. In Europe, for example, estimates suggest there will be 500,000 unfilled positions for ICT professionals by 2020. Every region faces similar challenges. In addition to existing skills gaps, experts forecast that advances in areas like artificial intelligence, nanotechnology, 3D printing, and other technologies will usher in a new era that will radically alter patterns of consumption, production, and employment. Many countries view digital skills as one of the core foundations of the digital transformation….(More)”

200,000 Volunteers Have Become the Fact Checkers of the Internet


Hanna Kozlowska and Heather Timmons, at Quartz/NextGov: “Founded in 2001, Wikipedia is on the verge of adulthood. It’s the world’s fifth-most popular website, with 46 million articles in 300 languages, yet it has fewer than 300 full-time employees. What makes it successful is the 200,000 volunteers who create it, said Katherine Maher, the executive director of the Wikimedia Foundation, the parent organization of Wikipedia and its sister sites.

Unlike other tech companies, Wikipedia has avoided accusations of major meddling by malicious actors seeking to subvert elections around the world. This is partly because of the site’s model, where the creation process is largely transparent, but it’s also thanks to its community of diligent editors who monitor the content…

Somewhat unwittingly, Wikipedia has become the internet’s fact-checker. Recently, both YouTube and Facebook started using the platform to show more context about videos or posts in order to curb the spread of disinformation—even though Wikipedia is crowd-sourced, and can be manipulated as well….

While no evidence of organized, widespread election-related manipulation on the platform has emerged so far, Wikipedia is not free of malicious actors, or people trying to grab control of the narrative. In Croatia, for instance, the local-language Wikipedia was completely taken over by right-wing ideologues several years ago.

The platform has also been battling the problem of “black-hat editing”— done surreptitiously by people who are trying to push a certain view—on the platform for years….

About 200,000 editors contribute to Wikimedia projects every month, and together with AI-powered bots they made a total of 39 million edits in February of 2018. In the chart below, group-bots are bots approved by the community, which perform routine maintenance on the site, such as looking for instances of vandalism. Name-bots are users who have “bot” in their name.
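The "name-bot" category described above amounts to a simple string heuristic. A minimal sketch of that heuristic follows; the usernames are hypothetical, and real Wikimedia bot accounting also relies on community-approved bot flags rather than names alone:

```python
# Rough sketch of the "name-bot" heuristic: classify an editor as a likely
# bot if "bot" appears anywhere in its username. Usernames are hypothetical.
# The example deliberately includes "Abbott" to show why name matching
# alone over-counts.

def is_name_bot(username: str) -> bool:
    return "bot" in username.lower()

editors = ["ClueBot NG", "JaneDoe", "SineBot", "Abbott"]
flagged = [u for u in editors if is_name_bot(u)]
print(flagged)
```

The false positive ("Abbott") illustrates why such counts are treated as rough estimates rather than an exact census of automated activity.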

Like every other tech platform, Wikimedia is looking into how AI could help improve the site. “We are very interested in how AI can help us do things like evaluate the quality of articles, how deep and effective the citations are for a particular article, the relative neutrality of an article, the relative quality of an article,” said Maher. The organization would also like to use it to catch gaps in its content….(More)”.

Privacy and Freedom of Expression In the Age of Artificial Intelligence


Joint Paper by Privacy International and ARTICLE 19: “Artificial Intelligence (AI) is part of our daily lives. This technology shapes how people access information, interact with devices, share personal information, and even understand foreign languages. It also transforms how individuals and groups can be tracked and identified, and dramatically alters what kinds of information can be gleaned about people from their data. AI has the potential to revolutionise societies in positive ways. However, as with any scientific or technological advancement, there is a real risk that the use of new tools by states or corporations will have a negative impact on human rights. While AI impacts a plethora of rights, ARTICLE 19 and Privacy International are particularly concerned about the impact it will have on the right to privacy and the right to freedom of expression and information. This scoping paper focuses on applications of ‘artificial narrow intelligence’: in particular, machine learning and its implications for human rights.

The aim of the paper is fourfold:

1. Present key technical definitions to clarify the debate;

2. Examine key ways in which AI impacts the right to freedom of expression and the right to privacy and outline key challenges;

3. Review the current landscape of AI governance, including various existing legal, technical, and corporate frameworks and industry-led AI initiatives that are relevant to freedom of expression and privacy; and

4. Provide initial suggestions for rights-based solutions which can be pursued by civil society organisations and other stakeholders in AI advocacy activities….(More)”.

What To Do With The Urban Spaces Technology Makes Obsolete


Peter Madden at the Huffington Post: “Digital tech will make many city spaces redundant: artificial intelligence doesn’t care where it works; autonomous vehicles don’t care where they park. These spaces must be repurposed for cities to thrive in the future….

This is an opportunity to ask what people want from their cities and how redundant spaces can meet these needs.

There have been multiple academic studies and marketing surveys on this, and they boil down to two main things. Citizens first want the basics: employment opportunities, affordable housing, good transport, and safe streets. Further up the hierarchy of needs, they also care about the physical appearance of the city, including the availability of parks and green spaces, the feel of the city in terms of openness, diversity and social interaction, and the experience in the city whether that’s tasting new foods, buying an unexpected gift, or discovering a new band.

Re-Greening

The places that were once reserved for cars can be spaces for pedestrians and bike lanes, with walkable and cycle-friendly cities offering cheaper transit, healthier citizens, and stronger communities. Greenery could flourish, with new parks, trees and allotments providing access to nature, sponges to absorb flood-water and urban cooling in a warming world.

Flexible Working

Who really wants a lengthy commute to a regimented workplace? Future office spaces will harness new technology to help people work flexibly, collaboratively and from multiple locations. When they do travel into the city centre office, this will be oriented around the experience of the individual employee, beautifully designed, technologically responsive, with different spaces for how they work best at different times of the day and on different tasks.

Making in Cities

The fourth industrial revolution allows manufacturing to return to urban centres for just-in-time, on-demand and hyper-personalised production. Some ‘on-shoring’ is already happening, with McLaren car chassis, Clarks boots and Frog bikes once again being made in British towns. Data analytics, virtual reality, new materials, robotics and 3D printing will make it possible to produce or customise things on the high street, right where the consumer wants them.

Affordable Housing

Unused buildings and empty land will be filled by new types of housing. In my home city, Bristol, a redundant building in a parade of shops is being turned into living space for the homeless, AEOB will ‘buy and convert empty offices into homes for people’, and ‘We Can Make’ is offering affordable prefabricated houses for empty urban plots. Housing innovations like this are springing up in cities across the world….(More)”.

The Efficiency Paradox: What Big Data Can’t Do


Book by Edward Tenner: “A bold challenge to our obsession with efficiency, and a new understanding of how to benefit from the powerful potential of serendipity

Algorithms, multitasking, the sharing economy, life hacks: our culture can’t get enough of efficiency. One of the great promises of the Internet and big data revolutions is the idea that we can improve the processes and routines of our work and personal lives to get more done in less time than we ever have before. There is no doubt that we’re performing at higher levels and moving at unprecedented speed, but what if we’re headed in the wrong direction?

Melding the long-term history of technology with the latest headlines and findings of computer science and social science, The Efficiency Paradox questions our ingrained assumptions about efficiency, persuasively showing how relying on the algorithms of digital platforms can in fact lead to wasted efforts, missed opportunities, and above all an inability to break out of established patterns. Edward Tenner offers a smarter way of thinking about efficiency, revealing what we and our institutions, when equipped with an astute combination of artificial intelligence and trained intuition, can learn from the random and unexpected….(More)”

How Artificial Intelligence Could Increase the Risk of Nuclear War


Rand Corporation: “The fear that computers, by mistake or malice, might lead humanity to the brink of nuclear annihilation has haunted imaginations since the earliest days of the Cold War.

The danger might soon be more science than fiction. Stunning advances in AI have created machines that can learn and think, provoking a new arms race among the world’s major nuclear powers. It’s not the killer robots of Hollywood blockbusters that we need to worry about; it’s how computers might challenge the basic rules of nuclear deterrence and lead humans into making devastating decisions.

That’s the premise behind a new paper from RAND Corporation, How Might Artificial Intelligence Affect the Risk of Nuclear War? It’s part of a special project within RAND, known as Security 2040, to look over the horizon and anticipate coming threats.

“This isn’t just a movie scenario,” said Andrew Lohn, an engineer at RAND who coauthored the paper and whose experience with AI includes using it to route drones, identify whale calls, and predict the outcomes of NBA games. “Things that are relatively simple can raise tensions and lead us to some dangerous places if we are not careful.”…(More)”.

How artificial intelligence is transforming the world


Report by Darrell West and John Allen at Brookings: “Most people are not very familiar with the concept of artificial intelligence (AI). As an illustration, when 1,500 senior business leaders in the United States in 2017 were asked about AI, only 17 percent said they were familiar with it. A number of them were not sure what it was or how it would affect their particular companies. They understood there was considerable potential for altering business processes, but were not clear how AI could be deployed within their own organizations.

Despite its widespread lack of familiarity, AI is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decisionmaking. Our hope through this comprehensive overview is to explain AI to an audience of policymakers, opinion leaders, and interested observers, and demonstrate how AI already is altering the world and raising important questions for society, the economy, and governance.

In this paper, we discuss novel applications in finance, national security, health care, criminal justice, transportation, and smart cities, and address issues such as data access problems, algorithmic bias, AI ethics and transparency, and legal liability for AI decisions. We contrast the regulatory approaches of the U.S. and European Union, and close by making a number of recommendations for getting the most out of AI while still protecting important human values.

In order to maximize AI benefits, we recommend nine steps for going forward:

  • Encourage greater data access for researchers without compromising users’ personal privacy,
  • invest more government funding in unclassified AI research,
  • promote new models of digital education and AI workforce development so employees have the skills needed in the 21st-century economy,
  • create a federal AI advisory committee to make policy recommendations,
  • engage with state and local officials so they enact effective policies,
  • regulate broad AI principles rather than specific algorithms,
  • take bias complaints seriously so AI does not replicate historic injustice, unfairness, or discrimination in data or algorithms,
  • maintain mechanisms for human oversight and control, and
  • penalize malicious AI behavior and promote cybersecurity….(More)

Table of Contents
I. Qualities of artificial intelligence
II. Applications in diverse sectors
III. Policy, regulatory, and ethical issues
IV. Recommendations
V. Conclusion

Artificial Unintelligence


Book by Meredith Broussard: “A guide to understanding the inner workings and outer limits of technology and why we should never assume that computers always get it right.

In Artificial Unintelligence, Meredith Broussard argues that our collective enthusiasm for applying computer technology to every aspect of life has resulted in a tremendous amount of poorly designed systems. We are so eager to do everything digitally—hiring, driving, paying bills, even choosing romantic partners—that we have stopped demanding that our technology actually work. Broussard, a software developer and journalist, reminds us that there are fundamental limits to what we can (and should) do with technology. With this book, she offers a guide to understanding the inner workings and outer limits of technology—and issues a warning that we should never assume that computers always get things right.

Making a case against technochauvinism—the belief that technology is always the solution—Broussard argues that it’s just not true that social problems would inevitably retreat before a digitally enabled Utopia. To prove her point, she undertakes a series of adventures in computer programming. She goes for an alarming ride in a driverless car, concluding “the cyborg future is not coming any time soon”; uses artificial intelligence to investigate why students can’t pass standardized tests; deploys machine learning to predict which passengers survived the Titanic disaster; and attempts to repair the U.S. campaign finance system by building AI software. If we understand the limits of what we can do with technology, Broussard tells us, we can make better choices about what we should do with it to make the world better for everyone…(More)”.

Leveraging the Power of Bots for Civil Society


Allison Fine & Beth Kanter at the Stanford Social Innovation Review: “Our work in technology has always centered around making sure that people are empowered, healthy, and feel heard in the networks within which they live and work. The arrival of the bots changes this equation. It’s not enough to make sure that people are heard; we now have to make sure that technology adds value to human interactions, rather than replacing them or steering social good in the wrong direction. If technology creates value in a human-centered way, then we will have more time to be people-centric.

So before the bots become involved with almost every facet of our lives, it is incumbent upon those of us in the nonprofit and social-change sectors to start a discussion on how we both hold on to and lead with our humanity, as opposed to allowing the bots to lead. We are unprepared for this moment, and it does not feel like an understatement to say that the future of humanity relies on our ability to make sure we’re in charge of the bots, not the other way around.

To Bot or Not to Bot?

History shows us that bots can be used in positive ways. Early adopter nonprofits have used bots to automate civic engagement, such as helping citizens register to vote, contact their elected officials, and elevate marginalized voices and issues. And nonprofits are beginning to use online conversational interfaces like Alexa for social good engagement. For example, the Audubon Society has released an Alexa skill to teach bird calls.

And for over a decade, Invisible People founder Mark Horvath has been providing “virtual case management” to homeless people who reach out to him through social media. Horvath says homeless agencies can use chat bots programmed to deliver basic information to people in need, and thus help them connect with services. This reduces the workload for case managers while making data entry more efficient. He explains that it works like an airline reservation: The homeless person completes the “paperwork” for services by interacting with a bot and then later shows their ID at the agency. Bots can greatly reduce the need for a homeless person to wait long hours to get needed services. Certainly this is a much more compassionate use of bots than robot security guards who harass homeless people sleeping in front of a business.
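The kind of intake bot Horvath describes can be sketched as a simple keyword router: it answers basic service questions itself and hands anything else to a case manager. The keywords and responses below are hypothetical, not taken from any real agency's system:

```python
# Minimal sketch of a keyword-routed intake bot: it answers basic service
# questions and falls back to a human case manager otherwise. Keywords and
# responses are hypothetical. Note the limitation of substring matching:
# short keywords can misfire inside unrelated words, so a real system
# would use word-level or intent-based matching.

RESPONSES = {
    "shelter": "The nearest intake centre opens at 7 pm; bring photo ID if you have it.",
    "food": "Free meals are served daily at noon at the community hall.",
    "paperwork": "You can complete the intake form here; a case manager reviews it daily.",
}
FALLBACK = "I couldn't match that. A case manager will follow up with you."

def reply(message: str) -> str:
    """Return a canned answer for the first matching keyword, else escalate."""
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return FALLBACK

print(reply("Where can I find a shelter tonight?"))
```

The design choice mirrors the airline-reservation analogy in the text: the bot handles the routine, structured part of the exchange, and the human steps in only where judgment or empathy is required.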

But there are also examples where a bot’s usefulness seems limited. A UK-based social service charity, Mencap, which provides support and services to children with learning disabilities and their parents, has a chatbot on its website as part of a public education effort called #HereIAm. The campaign is intended to help people understand more about what it’s like having a learning disability, through the experience of a “learning disabled” chatbot named Aeren. However, this bot can only answer questions, not ask them, and it doesn’t become smarter through human interaction. Is this the best way for people to understand the nature of being learning disabled? Is it making the difficulties feel more or less real for the inquirers? It is clear Mencap thinks the interaction is valuable, as they reported a 3 percent increase in awareness of their charity….

The following discussion questions are the start of conversations we need to have within our organizations and as a sector on the ethical use of bots for social good:

  • What parts of our work will benefit from greater efficiency without reducing the humanness of our efforts? (“Humanness” meaning the power and opportunity for people to learn from and help one another.)
  • Do we have a privacy policy for the use and sharing of data collected through automation? Does the policy emphasize protecting the data of end users? Is the policy easily accessible by the public?
  • Do we make it clear to the people using the bot when they are interacting with a bot?
  • Do we regularly include clients, customers, and end users as advisors when developing programs and services that use bots for delivery?
  • Should bots designed for service delivery also have fundraising capabilities? If so, can we ensure that our donors are not emotionally coerced into giving more than they want to?
  • In order to truly understand our clients’ needs, motivations, and desires, have we designed our bots’ conversational interactions with empathy and compassion, or involved social workers in the design process?
  • Have we planned for weekly checks of the data generated by the bots to ensure that we are staying true to our values and original intentions, as AI helps them learn?….(More)”.