AI Nationalism


Blog by Ian Hogarth: “The central prediction I want to make and defend in this post is that continued rapid progress in machine learning will drive the emergence of a new kind of geopolitics; I have been calling it AI Nationalism. Machine learning is an omni-use technology that will come to touch all sectors and parts of society.

The transformation of both the economy and the military by machine learning will create instability at the national and international level forcing governments to act. AI policy will become the single most important area of government policy. An accelerated arms race will emerge between key countries and we will see increased protectionist state action to support national champions, block takeovers by foreign firms and attract talent. I use the example of Google, DeepMind and the UK as a specific example of this issue.

This arms race will potentially speed up the pace of AI development and shorten the timescale for getting to AGI. Although there will be many common aspects to this techno-nationalist agenda, there will also be important state-specific policies. There is a difference between predicting that something will happen and believing it is a good thing. Nationalism is a dangerous path, particularly when the international order and international norms will be in flux as a result. In the concluding section I discuss how a period of AI Nationalism might transition to one of global cooperation where AI is treated as a global public good….(More)”.

Big Data and AI – A transformational shift for government: So, what next for research?


Irina Pencheva, Marc Esteve and Slava Jankin Mikhaylov in Public Policy and Administration: “Big Data and artificial intelligence will have a profound transformational impact on governments around the world. Thus, it is important for scholars to provide a useful analysis of the topic to public managers and policymakers. This study offers an in-depth review of the Policy and Administration literature on the role of Big Data and advanced analytics in the public sector. It provides an overview of the key themes in the research field, namely the application and benefits of Big Data throughout the policy process, and challenges to its adoption and the resulting implications for the public sector. It is argued that research on the subject is still nascent and more should be done to ensure that the theory adds real value to practitioners. A critical assessment of the strengths and limitations of the existing literature is developed, and a future research agenda to address these gaps and enrich our understanding of the topic is proposed…(More)”.

Our Infant Information Revolution


Joseph Nye at Project Syndicate: “…When people are overwhelmed by the volume of information confronting them, it is hard to know what to focus on. Attention, not information, becomes the scarce resource. The soft power of attraction becomes an even more vital power resource than in the past, but so does the hard, sharp power of information warfare. And as reputation becomes more vital, political struggles over the creation and destruction of credibility multiply. Information that appears to be propaganda may not only be scorned, but may also prove counterproductive if it undermines a country’s reputation for credibility.

During the Iraq War, for example, the treatment of prisoners at Abu Ghraib and Guantanamo Bay in a manner inconsistent with America’s declared values led to perceptions of hypocrisy that could not be reversed by broadcasting images of Muslims living well in America. Similarly, President Donald Trump’s tweets that prove to be demonstrably false undercut American credibility and reduce its soft power.

The effectiveness of public diplomacy is judged by the number of minds changed (as measured by interviews or polls), not dollars spent. It is interesting to note that polls and the Portland index of the Soft Power 30 show a decline in American soft power since the beginning of the Trump administration. Tweets can help to set the global agenda, but they do not produce soft power if they are not credible.

Now the rapidly advancing technology of artificial intelligence or machine learning is accelerating all of these processes. Robotic messages are often difficult to detect. But it remains to be seen whether credibility and a compelling narrative can be fully automated….(More)”.

Data Protection and e-Privacy: From Spam and Cookies to Big Data, Machine Learning and Profiling


Chapter by Lilian Edwards in L Edwards (ed), Law, Policy and the Internet (Hart, 2018): “In this chapter, I examine in detail how data subjects are tracked, profiled and targeted by their activities online and, increasingly, in the “offline” world as well. Tracking is part of both commercial and state surveillance, but in this chapter I concentrate on the former. The European law relating to spam, cookies, online behavioural advertising (OBA), machine learning (ML) and the Internet of Things (IoT) is examined in detail, using both the GDPR and the forthcoming draft ePrivacy Regulation. The chapter concludes by examining both code and law solutions which might find a way forward to protect user privacy and still enable innovation, by looking to paradigms not based around consent, and less likely to rely on a “transparency fallacy”. Particular attention is drawn to the new work around Personal Data Containers (PDCs) and distributed ML analytics….(More)”.

The Open Revolution: Rewriting the rules of the information age


Book by Rufus Pollock: “Forget everything you think you know about the digital age. It’s not about privacy, surveillance, AI or blockchain—it’s about ownership. Because, in a digital age, who owns information controls the future.

In this urgent and provocative book, Rufus Pollock shows how today’s “Closed” digital economy is the source of problems ranging from growing inequality, to unaffordable medicines, to the power of a handful of tech monopolies to control how we think and vote. He proposes a solution that charts a path to a more equitable, innovative and profitable future for all….(More)”.

The distributed power of smartphones for medical research


Adi Gaskell: “One of the more significant areas of promise in health technology is the ability for data to be generated by us as individuals, and for AI to provide insights based upon this live stream of lifestyle data.  An example of what’s possible comes via a project researchers at Imperial College London have undertaken with the Vodafone Foundation.

The project aims to tap into the power of users’ smartphones to crunch cancer-related data whilst they sleep.  Such distributed computing projects have been popular for some time, but this is one of the first to utilize the power in our smartphones.

The rationale for the project is identical to that of the early distributed computing ventures, such as SETI@Home, which utilized spare computing resources to process data from space.  The average smartphone contains a huge amount of computing power that generally lies dormant overnight.

Dream Lab

Users participate by downloading the DreamLab app onto their phone and running it for six hours overnight as the phone charges.  The app downloads a small packet of data, with the processors in the phone then running millions of calculations, uploading the results to a central server, and clearing the data from the phone.
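
The cycle described above can be sketched as a simple fetch–compute–upload loop. This is an illustrative reconstruction only, not the actual DreamLab client; all function names and the payload are hypothetical placeholders:

```python
# Illustrative fetch-compute-upload cycle in the spirit of DreamLab.
# Every name and value here is a hypothetical stand-in, not the real app.

def fetch_work_unit():
    """Stand-in for downloading a small packet of research data."""
    return [12.0, 7.5, 9.1, 14.2]

def compute(payload):
    """Stand-in for the millions of calculations run on the phone."""
    return sum(x * x for x in payload)  # trivial placeholder computation

def upload_result(result):
    """Stand-in for posting the result to the central server."""
    return {"status": "ok", "result": result}

def run_overnight_cycle():
    payload = fetch_work_unit()      # 1. download a small packet of data
    result = compute(payload)        # 2. run the calculations overnight
    ack = upload_result(result)      # 3. upload results to the server
    payload.clear()                  # 4. clear the data from the phone
    return ack
```

The key design property is step 4: no research data persists on the device between work units.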

The app has already been used in Australia, with researchers using it to crunch data for pancreatic cancer, and is now ready to be used for the first time in Europe.  If they can secure 100,000 users running the app each night, the team can process as much data as a single desktop computer could process in 100 years.
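
Taking the stated figures at face value (100,000 phones per night matching 100 years of a desktop, and assuming the comparison desktop runs around the clock, which the article does not specify), a quick back-of-envelope check shows what each six-hour phone session would contribute:

```python
# Back-of-envelope check of the stated figures; the 24-hours-a-day
# desktop is an assumption, not something the article states.
phones = 100_000
desktop_years = 100
desktop_hours = desktop_years * 365.25 * 24   # total desktop compute time
per_phone_night = desktop_hours / phones      # desktop-hours per phone-night
print(round(per_phone_night, 2))              # ≈ 8.77 desktop-hours per six-hour session
```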

“Through harnessing distributed computing power, DreamLab is helping to make personalised medicine a reality,” the researchers say.  “This project demonstrates how Imperial’s innovative research partnerships with corporate partners and members of the public are working together to tackle some of the biggest problems we face today, generating real societal impact.”…(More)”.

Artificial intelligence in non-profit organizations


Darrell M. West and Theron Kelso at Brookings: “Artificial intelligence provides a way to use automated software to perform a number of different tasks. Private industry, government, and universities have deployed it to manage routine requests and common administrative processes. Fields from finance and healthcare to retail and defense are witnessing a dramatic expansion in the use of these tools.

Yet non-profits often lack the financial resources or organizational capabilities to innovate through technology. Most non-profits struggle with small budgets and inadequate staffing, and they fall behind the cutting edge of new technologies. This limits their efficiency and effectiveness, and makes it difficult to have the kind of impact they would like.

However, there is growing interest in artificial intelligence (AI), machine learning (ML), and data analytics in non-profit organizations. Below are some of the many examples of non-profits using emerging technologies to handle finance, human resources, communications, internal operations, and sustainability.

FINANCE

Fraud and corruption are major challenges for any kind of organization as it is hard to monitor every financial transaction and business contract. AI tools can help managers automatically detect actions that warrant additional investigation. Businesses long have used AI and ML to create early warning systems, spot abnormalities, and thereby minimize financial misconduct. These tools offer ways to combat fraud and detect unusual transactions.
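
As a concrete illustration of the early-warning flagging described above, one common approach scores each transaction against a robust baseline and surfaces outliers for human review. This is a minimal sketch with made-up numbers and an arbitrary threshold, not a production fraud system:

```python
# Minimal outlier flagging using the median absolute deviation (MAD),
# which is robust to the very outliers being hunted. The data and the
# threshold are illustrative only.
from statistics import median

def flag_unusual(transactions, threshold=5.0):
    med = median(transactions)
    mad = median(abs(t - med) for t in transactions) or 1e-9  # avoid divide-by-zero
    return [t for t in transactions if abs(t - med) / mad > threshold]

history = [120, 95, 130, 110, 105, 98, 125, 5000]  # one suspicious payment
print(flag_unusual(history))  # → [5000]
```

MAD is chosen over the plain mean and standard deviation because a single large fraud can inflate the standard deviation enough to mask itself.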

HUMAN RESOURCES

Advanced software helps organizations advertise, screen, and hire promising staff members. Once managers have decided what qualities they are seeking, AI can match applicants with employers. Automated systems can pre-screen resumes, check for relevant experience and skills, and identify applicants who are best suited for particular organizations. They also can weed out those who lack the required skills or do not pass basic screening criteria.
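
The screening logic described above can be sketched as a two-stage filter: hard requirements weed candidates out, preferred skills rank the rest. The field names, skills, and scoring rule here are illustrative assumptions, not any vendor's actual system:

```python
# Sketch of automated resume pre-screening: reject candidates missing
# required skills, then rank survivors by preferred-skill overlap.
# All names, fields, and scoring are hypothetical.

def screen(candidate, required, preferred):
    skills = set(candidate["skills"])
    if not required <= skills:          # weed out missing required skills
        return None
    score = len(skills & preferred)     # rank by preferred-skill overlap
    return (candidate["name"], score)

applicants = [
    {"name": "A. Jones", "skills": ["grant writing", "excel", "fundraising"]},
    {"name": "B. Smith", "skills": ["excel"]},
]
required = {"excel"}
preferred = {"fundraising", "grant writing"}

shortlist = [s for s in (screen(a, required, preferred) for a in applicants) if s]
shortlist.sort(key=lambda s: -s[1])
print(shortlist)  # → [('A. Jones', 2), ('B. Smith', 0)]
```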

COMMUNICATIONS

Every non-profit faces challenges in terms of communications. In a rapidly-changing world, it is hard to keep in touch with outside donors, internal staff, and interested individuals. Chatbots automate conversations for commonly asked questions through text messaging. These tools can help with customer service and routine requests such as how to contribute money, address a budget question, or learn about upcoming programs. They represent an efficient and effective way to communicate with internal and external audiences….(More)”.
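
At their simplest, the chatbots described above route commonly asked questions by keyword and hand anything unrecognized to a person. A minimal sketch, with placeholder questions and answers of my own invention:

```python
# Minimal FAQ-style chatbot: keyword routing with a human fallback.
# The keywords and canned answers are illustrative placeholders.

FAQ = {
    "donate": "You can contribute through the Donate page on our website.",
    "budget": "Budget questions are handled by our finance office.",
    "programs": "Upcoming programs are listed in our monthly newsletter.",
}

def reply(message, fallback="Let me connect you with a staff member."):
    text = message.lower()
    for keyword, answer in FAQ.items():
        if keyword in text:
            return answer
    return fallback

print(reply("How do I donate money?"))
```

Real deployments replace the keyword match with intent classification, but the routing-plus-fallback shape is the same.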

NZ to perform urgent algorithm ‘stocktake’ fearing data misuse within government


Asha McLean at ZDNet: “The New Zealand government has announced it will be assessing how government agencies are using algorithms to analyse data, hoping to ensure transparency and fairness in decisions that affect citizens.

A joint statement from Minister for Government Digital Services Clare Curran and Minister of Statistics James Shaw said the algorithm “stocktake” will be conducted with urgency, but cites only the growing interest in data analytics as the reason for the probe.

“The government is acutely aware of the need to ensure transparency and accountability as interest grows regarding the challenges and opportunities associated with emerging technology such as artificial intelligence,” Curran said.

It was revealed in April that Immigration New Zealand may have been using citizen data for less-than-desirable purposes, with claims that data collected through the country’s visa application process, ostensibly to identify those in breach of their visa conditions, was in fact filtering people based on their age, gender, and ethnicity.

Rejecting the idea that the data-collection project was racial profiling, Immigration Minister Iain Lees-Galloway told Radio New Zealand that Immigration looks at a range of issues, including those who have made — and have had rejected — multiple visa applications.

“It looks at people who place the greatest burden on the health system, people who place the greatest burden on the criminal justice system, and uses that data to prioritise those people,” he said.

“It is important that we protect the integrity of our immigration system and that we use the resources that immigration has as effectively as we can — I do support them using good data to make good decisions about where best to deploy their resources.”

In the statement on Wednesday, Shaw pointed to two further data-modelling projects the government had embarked on, with one from the Ministry of Health looking into the probability of five-year post-transplant survival in New Zealand.

“Using existing data to help model possible outcomes is an important part of modern government decision-making,” Shaw said….(More)”.

Technology and satellite companies open up a world of data


Gabriel Popkin at Nature: “In the past few years, technology and satellite companies’ offerings to scientists have increased dramatically. Thousands of researchers now use high-resolution data from commercial satellites for their work. Thousands more use cloud-computing resources provided by big Internet companies to crunch data sets that would overwhelm most university computing clusters. Researchers use the new capabilities to track and visualize forest and coral-reef loss; monitor farm crops to boost yields; and predict glacier melt and disease outbreaks. Often, they are analysing much larger areas than has ever been possible — sometimes even encompassing the entire globe. Such studies are landing in leading journals and grabbing media attention.

Commercial data and cloud computing are not panaceas for all research questions. NASA and the European Space Agency carefully calibrate the spectral quality of their imagers and test them with particular types of scientific analysis in mind, whereas the aim of many commercial satellites is to take good-quality, high-resolution pictures for governments and private customers. And no company can compete with Landsat’s free, publicly available, 46-year archive of images of Earth’s surface. For commercial data, scientists must often request images of specific regions taken at specific times, and agree not to publish raw data. Some companies reserve cloud-computing assets for researchers with aligned interests such as artificial intelligence or geospatial-data analysis. And although companies publicly make some funding and other resources available for scientists, getting access to commercial data and resources often requires personal connections. Still, by choosing the right data sources and partners, scientists can explore new approaches to research problems.

Mapping poverty

Joshua Blumenstock, an information scientist at the University of California, Berkeley (UCB), is always on the hunt for data he can use to map wealth and poverty, especially in countries that do not conduct regular censuses. “If you’re trying to design policy or do anything to improve living conditions, you generally need data to figure out where to go, to figure out who to help, even to figure out if the things you’re doing are making a difference.”

In a 2015 study, he used records from mobile-phone companies to map Rwanda’s wealth distribution (J. Blumenstock et al. Science 350, 1073–1076; 2015). But to track wealth distribution worldwide, patching together data-sharing agreements with hundreds of these companies would have been impractical. Another potential information source — high-resolution commercial satellite imagery — could have cost him upwards of US$10,000 for data from just one country….

Use of commercial images can also be restricted. Scientists are free to share or publish most government data or data they have collected themselves. But they are typically limited to publishing only the results of studies of commercial data, and at most a limited number of illustrative images.

Many researchers are moving towards a hybrid approach, combining public and commercial data, and running analyses locally or in the cloud, depending on need. Weiss still uses his tried-and-tested ArcGIS software from Esri for studies of small regions, and jumps to Earth Engine for global analyses.

The new offerings herald a shift from an era when scientists had to spend much of their time gathering and preparing data to one in which they’re thinking about how to use them. “Data isn’t an issue any more,” says Roy. “The next generation is going to be about what kinds of questions are we going to be able to ask?”…(More)”.

Bonding with Your Algorithm


Conversation with Nicolas Berggruen at the Edge: “The relationship between parents and children is the most important relationship. It gets more complicated in this case because, beyond the children being our natural children, we can influence them even beyond. We can influence them biologically, and we can use artificial intelligence as a new tool. I’m not a scientist or a technologist whatsoever, but the tools of artificial intelligence, in theory, are algorithm- or computer-based. In reality, I would argue that even an algorithm is biological because it comes from somewhere. It doesn’t come from itself. If it’s related to us as creators or as the ones who are, let’s say, enabling the algorithms, well, we’re the parents.

Who are those children that we are creating? What do we want them to be like as part of the earth, compared to us as a species and, frankly, compared to us as parents? They are our children. We are the parents. How will they treat us as parents? How do we treat our own parents? How do we treat our children? We have to think of these in the exact same way. Separating technology and humans the way we often think about these issues is almost wrong. If it comes from us, it’s the same thing. We have a responsibility. We have the power and the imagination to shape this future generation. It’s exciting, but let’s just make sure that they view us as their parents. If they view us as their parents, we will have a connection….(More)”