Artificial intelligence in non-profit organizations


Darrell M. West and Theron Kelso at Brookings: “Artificial intelligence provides a way to use automated software to perform a number of different tasks. Private industry, government, and universities have deployed it to manage routine requests and common administrative processes. Fields from finance and healthcare to retail and defense are witnessing a dramatic expansion in the use of these tools.

Yet non-profits often lack the financial resources or organizational capabilities to innovate through technology. Most non-profits struggle with small budgets and inadequate staffing, and they fall behind the cutting edge of new technologies. This limits their efficiency and effectiveness, and makes it difficult to have the kind of impact they would like.

However, there is growing interest in artificial intelligence (AI), machine learning (ML), and data analytics in non-profit organizations. Below are some of the many examples of non-profits using emerging technologies to handle finance, human resources, communications, internal operations, and sustainability.

FINANCE

Fraud and corruption are major challenges for any kind of organization as it is hard to monitor every financial transaction and business contract. AI tools can help managers automatically detect actions that warrant additional investigation. Businesses long have used AI and ML to create early warning systems, spot abnormalities, and thereby minimize financial misconduct. These tools offer ways to combat fraud and detect unusual transactions.
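
As a rough illustration of the kind of early-warning system described above, the sketch below flags unusual transactions with scikit-learn’s IsolationForest; the features, values, and contamination rate are entirely hypothetical, and flagged items would still need human review.

```python
# Minimal sketch of automated transaction screening (hypothetical field names and values).
# Flagged transactions are candidates for manual review, not proof of fraud.
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy transaction features: [amount, hour_of_day, days_since_vendor_added]
transactions = np.array([
    [120.0, 10, 400],
    [95.5, 14, 380],
    [110.0, 11, 350],
    [105.0, 9, 500],
    [9800.0, 3, 2],    # large amount, odd hour, brand-new vendor
])

model = IsolationForest(contamination=0.2, random_state=0)
labels = model.fit_predict(transactions)  # -1 = anomaly, 1 = normal

for row, label in zip(transactions, labels):
    if label == -1:
        print(f"Flag for review: amount={row[0]:.2f}, hour={int(row[1])}, vendor_age_days={int(row[2])}")
```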

HUMAN RESOURCES

Advanced software helps organizations advertise, screen, and hire promising staff members. Once managers have decided what qualities they are seeking, AI can match applicants with employers. Automated systems can pre-screen resumes, check for relevant experience and skills, and identify applicants who are best suited for particular organizations. They also can weed out those who lack the required skills or do not pass basic screening criteria.
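
To make the pre-screening step concrete, here is a minimal sketch of skills-based matching; the skill lists and applicants are invented, and any real system of this kind would need careful auditing for bias.

```python
# A toy sketch of skills-based applicant pre-screening (all data hypothetical).
required = {"fundraising", "grant writing"}
preferred = {"crm", "data analysis", "volunteer management"}

applicants = {
    "Applicant A": {"fundraising", "grant writing", "crm"},
    "Applicant B": {"data analysis", "volunteer management"},
    "Applicant C": {"fundraising", "grant writing", "data analysis", "crm"},
}

def score(skills):
    """Return None if a required skill is missing, else the share of preferred skills matched."""
    if not required <= skills:
        return None  # fails basic screening criteria
    return len(skills & preferred) / len(preferred)

for name, skills in applicants.items():
    s = score(skills)
    print(name, "screened out" if s is None else f"match score {s:.2f}")
```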

COMMUNICATIONS

Every non-profit faces challenges in terms of communications. In a rapidly changing world, it is hard to keep in touch with outside donors, internal staff, and interested individuals. Chatbots automate conversations for commonly asked questions through text messaging. These tools can help with customer service and routine requests such as how to contribute money, address a budget question, or learn about upcoming programs. They represent an efficient and effective way to communicate with internal and external audiences….(More)”.
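
The sketch below illustrates the sort of FAQ chatbot described above, using simple keyword matching; the questions, answers, and URLs are invented, and production chatbots typically rely on trained language models or dialogue platforms.

```python
# A toy FAQ bot for a hypothetical non-profit: matches incoming messages against
# keyword sets and returns canned answers, falling back to a human staff member.
FAQ = [
    ({"donate", "contribute", "give"},
     "You can contribute at example.org/donate or by mailing a check to our office."),
    ({"budget", "finances", "financials"},
     "Our latest annual report and budget are posted at example.org/financials."),
    ({"events", "programs", "upcoming"},
     "Upcoming programs are listed at example.org/events."),
]

def reply(message):
    words = set(message.lower().split())
    best_answer, best_hits = None, 0
    for keywords, answer in FAQ:
        hits = len(words & keywords)
        if hits > best_hits:
            best_answer, best_hits = answer, hits
    return best_answer or "Thanks for reaching out! A staff member will follow up shortly."

print(reply("How do I donate money?"))
print(reply("Tell me about upcoming programs"))
```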

NZ to perform urgent algorithm ‘stocktake’ fearing data misuse within government


Asha McLean at ZDNet: “The New Zealand government has announced it will be assessing how government agencies are using algorithms to analyse data, hoping to ensure transparency and fairness in decisions that affect citizens.

A joint statement from Minister for Government Digital Services Clare Curran and Minister of Statistics James Shaw said the algorithm “stocktake” would be conducted with urgency, but cited only the growing interest in data analytics as the reason for the probe.

“The government is acutely aware of the need to ensure transparency and accountability as interest grows regarding the challenges and opportunities associated with emerging technology such as artificial intelligence,” Curran said.

It was revealed in April that Immigration New Zealand may have been using citizen data for less than desirable purposes, with claims that data collected through the country’s visa application process, ostensibly to identify people in breach of their visa conditions, was in fact being used to filter people based on their age, gender, and ethnicity.

Rejecting the idea that the data-collection project was racial profiling, Immigration Minister Iain Lees-Galloway told Radio New Zealand that Immigration looks at a range of issues, including those who have made — and have had rejected — multiple visa applications.

“It looks at people who place the greatest burden on the health system, people who place the greatest burden on the criminal justice system, and uses that data to prioritise those people,” he said.

“It is important that we protect the integrity of our immigration system and that we use the resources that immigration has as effectively as we can — I do support them using good data to make good decisions about where best to deploy their resources.”

In the statement on Wednesday, Shaw pointed to two further data-modelling projects the government had embarked on, with one from the Ministry of Health looking into the probability of five-year post-transplant survival in New Zealand.

“Using existing data to help model possible outcomes is an important part of modern government decision-making,” Shaw said….(More)”.

Technology and satellite companies open up a world of data


Gabriel Popkin at Nature: “In the past few years, technology and satellite companies’ offerings to scientists have increased dramatically. Thousands of researchers now use high-resolution data from commercial satellites for their work. Thousands more use cloud-computing resources provided by big Internet companies to crunch data sets that would overwhelm most university computing clusters. Researchers use the new capabilities to track and visualize forest and coral-reef loss; monitor farm crops to boost yields; and predict glacier melt and disease outbreaks. Often, they are analysing much larger areas than has ever been possible — sometimes even encompassing the entire globe. Such studies are landing in leading journals and grabbing media attention.

Commercial data and cloud computing are not panaceas for all research questions. NASA and the European Space Agency carefully calibrate the spectral quality of their imagers and test them with particular types of scientific analysis in mind, whereas the aim of many commercial satellites is to take good-quality, high-resolution pictures for governments and private customers. And no company can compete with Landsat’s free, publicly available, 46-year archive of images of Earth’s surface. For commercial data, scientists must often request images of specific regions taken at specific times, and agree not to publish raw data. Some companies reserve cloud-computing assets for researchers with aligned interests such as artificial intelligence or geospatial-data analysis. And although companies publicly make some funding and other resources available for scientists, getting access to commercial data and resources often requires personal connections. Still, by choosing the right data sources and partners, scientists can explore new approaches to research problems.

Mapping poverty

Joshua Blumenstock, an information scientist at the University of California, Berkeley (UCB), is always on the hunt for data he can use to map wealth and poverty, especially in countries that do not conduct regular censuses. “If you’re trying to design policy or do anything to improve living conditions, you generally need data to figure out where to go, to figure out who to help, even to figure out if the things you’re doing are making a difference.”

In a 2015 study, he used records from mobile-phone companies to map Rwanda’s wealth distribution (J. Blumenstock et al. Science 350, 1073–1076; 2015). But to track wealth distribution worldwide, patching together data-sharing agreements with hundreds of these companies would have been impractical. Another potential information source — high-resolution commercial satellite imagery — could have cost him upwards of US$10,000 for data from just one country….

Use of commercial images can also be restricted. Scientists are free to share or publish most government data or data they have collected themselves. But they are typically limited to publishing only the results of studies of commercial data, and at most a limited number of illustrative images.

Many researchers are moving towards a hybrid approach, combining public and commercial data, and running analyses locally or in the cloud, depending on need. Weiss still uses his tried-and-tested ArcGIS software from Esri for studies of small regions, and jumps to Earth Engine for global analyses.
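
For readers unfamiliar with the Earth Engine workflow mentioned above, here is a minimal sketch of a cloud-side query; the region and dataset choices are illustrative only, and running it requires the earthengine-api package and a registered account.

```python
# Minimal sketch of a cloud-side Earth Engine query (illustrative region and dataset).
import ee

ee.Initialize()

region = ee.Geometry.Rectangle([-62.0, -4.0, -60.0, -2.0])  # a patch of the Amazon

# Median Landsat 8 surface-reflectance composite for 2017 over the region.
composite = (
    ee.ImageCollection("LANDSAT/LC08/C01/T1_SR")
    .filterDate("2017-01-01", "2017-12-31")
    .filterBounds(region)
    .median()
)

# Mean NDVI over the region, computed on Google's servers rather than locally.
ndvi = composite.normalizedDifference(["B5", "B4"]).rename("NDVI")
stats = ndvi.reduceRegion(reducer=ee.Reducer.mean(), geometry=region, scale=30)
print(stats.getInfo())
```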

The new offerings herald a shift from an era when scientists had to spend much of their time gathering and preparing data to one in which they’re thinking about how to use them. “Data isn’t an issue any more,” says Roy. “The next generation is going to be about what kinds of questions are we going to be able to ask?”…(More)”.

Bonding with Your Algorithm


Conversation with Nicolas Berggruen at the Edge: “The relationship between parents and children is the most important relationship. It gets more complicated in this case because, beyond the children being our natural children, we can influence them even beyond. We can influence them biologically, and we can use artificial intelligence as a new tool. I’m not a scientist or a technologist whatsoever, but the tools of artificial intelligence, in theory, are algorithm- or computer-based. In reality, I would argue that even an algorithm is biological because it comes from somewhere. It doesn’t come from itself. If it’s related to us as creators or as the ones who are, let’s say, enabling the algorithms, well, we’re the parents.

Who are those children that we are creating? What do we want them to be like as part of the earth, compared to us as a species and, frankly, compared to us as parents? They are our children. We are the parents. How will they treat us as parents? How do we treat our own parents? How do we treat our children? We have to think of these in the exact same way. Separating technology and humans the way we often think about these issues is almost wrong. If it comes from us, it’s the same thing. We have a responsibility. We have the power and the imagination to shape this future generation. It’s exciting, but let’s just make sure that they view us as their parents. If they view us as their parents, we will have a connection….(More)”

New Technologies Won’t Reduce Scarcity, but Here’s Something That Might


Vasilis Kostakis and Andreas Roos at the Harvard Business Review: “In a book titled Why Can’t We All Just Get Along?, MIT scientists Henry Lieberman and Christopher Fry discuss why we have wars, mass poverty, and other social ills. They argue that we cannot cooperate with each other to solve our major problems because our institutions and businesses are saturated with a competitive spirit. But Lieberman and Fry have some good news: modern technology can address the root of the problem. They believe that we compete when there is scarcity, and that recent technological advances, such as 3D printing and artificial intelligence, will end widespread scarcity. Thus, a post-scarcity world, premised on cooperation, would emerge.

But can we really end scarcity?

We believe that the post-scarcity vision of the future is problematic because it reflects an understanding of technology and the economy that could worsen the problems it seeks to address. This is the bad news. Here’s why:

New technologies come to consumers as finished products that can be exchanged for money. What consumers often don’t understand is that the monetary exchange hides the fact that many of these technologies exist at the expense of other humans and local environments elsewhere in the global economy….

The good news is that there are alternatives. The wide availability of networked computers has allowed new community-driven and open-source business models to emerge. For example, consider Wikipedia, a free and open encyclopedia that has displaced the Encyclopedia Britannica and Microsoft Encarta. Wikipedia is produced and maintained by a community of dispersed enthusiasts driven primarily by motives other than profit maximization. Furthermore, in the realm of software, see the case of GNU/Linux, on which the top 500 supercomputers and the majority of websites run, or the example of the Apache Web Server, the leading software in the web-server market. Wikipedia, Apache, and GNU/Linux demonstrate how non-coercive cooperation around globally shared resources (i.e., a commons) can produce artifacts as innovative as, if not more innovative than, those produced by industrial capitalism.

In the same way, the emergence of networked micro-factories is giving rise to new open-source business models in the realm of design and manufacturing. Such spaces can be makerspaces, fab labs, or other co-working spaces, equipped with local manufacturing technologies such as 3D printing and CNC machines or with traditional low-tech tools and crafts. Moreover, such spaces often offer collaborative environments where people can meet in person, socialize, and co-create.

This is the context in which a new mode of production is emerging. This mode builds on the confluence of the digital commons of knowledge, software, and design with local manufacturing technologies. It can be codified as “design global, manufacture local,” following the logic that what is light (knowledge, design) becomes global, while what is heavy (machinery) is local, and ideally shared. Design global, manufacture local (DGML) demonstrates how a technology project can leverage the digital commons to engage the global community in its development, celebrating new forms of cooperation. Unlike large-scale industrial manufacturing, the DGML model emphasizes applications that are small-scale, decentralized, resilient, and locally controlled. DGML could recognize the scarcities posed by finite resources and organize material activities accordingly. First, it minimizes the need to ship materials over long distances, because a considerable part of the manufacturing takes place locally. Local manufacturing also makes maintenance easier and encourages manufacturers to design products to last as long as possible. Last, DGML optimizes the sharing of knowledge and design, as there are no patent costs to pay….(More)”

The Slippery Math of Causation


Pradeep Mutalik for Quanta Magazine: “You often hear the admonition “correlation does not imply causation.” But what exactly is causation? Unlike correlation, which has a specific mathematical meaning, causation is a slippery concept that has been debated by philosophers for millennia. It seems to get conflated with our intuitions or preconceived notions about what it means to cause something to happen. One common-sense definition might be to say that causation is what connects one prior process or agent — the cause — with another process or state — the effect. This seems reasonable, except that it is useful only when the cause is a single factor, and the connection is clear. But reality is rarely so simple.
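
For reference, the specific mathematical meaning of correlation mentioned above is the Pearson correlation coefficient between two variables $X$ and $Y$:

```latex
\rho_{X,Y} \;=\; \frac{\operatorname{Cov}(X,Y)}{\sigma_X\,\sigma_Y}
          \;=\; \frac{\mathbb{E}\!\left[(X-\mu_X)(Y-\mu_Y)\right]}{\sigma_X\,\sigma_Y},
\qquad -1 \le \rho_{X,Y} \le 1 .
```

It quantifies the strength of a linear association between $X$ and $Y$, while saying nothing about which variable, if either, drives the other.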

Although we tend to credit or blame things on a single major cause, in nature and in science there are almost always multiple factors that have to be exactly right for an event to take place. For example, we might attribute a forest fire to the carelessly thrown cigarette butt, but what about the grassy tract leading to the forest, the dryness of the vegetation, the direction of the wind and so on? All of these factors had to be exactly right for the fire to start. Even though many tossed cigarette butts don’t start fires, we zero in on human actions as causes, ignoring other possibilities, such as sparks from branches rubbing together or lightning strikes, or acts of omission, such as failing to trim the grassy path short of the forest. And we tend to focus on things that can be manipulated: We overlook the direction of the wind because it is not something we can control. Our scientifically incomplete intuitive model of causality is nevertheless very useful in practice, and helps us execute remedial actions when causes are clearly defined. In fact, artificial intelligence pioneer Judea Pearl has published a new book about why it is necessary to teach cause and effect to intelligent machines.

However, clearly defined causes may not always exist. Complex, interdependent multifactorial causes arise often in nature and therefore in science. Most scientific disciplines focus on different aspects of causality in a simplified manner. Physicists may talk about causal influences being unable to propagate faster than the speed of light, while evolutionary biologists may discuss proximate and ultimate causes as mentioned in our previous puzzle on triangulation and motion sickness. But such simple situations are rare, especially in biology and the so-called “softer” sciences. In the world of genetics, the complex multifactorial nature of causality was highlighted in a recent Quanta article by Veronique Greenwood that described the intertwined effects of genes.

One well-known approach to understanding causality is to separate it into two types: necessary and sufficient….(More)”
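
In compact logical notation, the two types of cause just mentioned can be summarized as follows: a cause $C$ is necessary for an effect $E$ when the effect cannot occur without it, and sufficient when the cause alone guarantees the effect.

```latex
\text{necessary: } E \Rightarrow C \quad (\text{equivalently } \neg C \Rightarrow \neg E),
\qquad
\text{sufficient: } C \Rightarrow E .
```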

How Citizens Can Hack EU Democracy


Stephen Boucher at Carnegie Europe: “…To connect citizens with the EU’s decisionmaking center, European politicians will need to provide ways to effectively hack this complex system. These democratic hacks need to be visible and accessible, easily and immediately implementable, viable without requiring changes to existing European treaties, and capable of having a traceable impact on policy. Many such devices could be imagined around these principles. Here are three ideas to spur debate.

Hack 1: A Citizens’ Committee for the Future in the European Parliament

The European Parliament has proposed that twenty-seven of the seventy-three seats left vacant by Brexit should be redistributed among the remaining member states. According to one concept, the other forty-six unassigned seats could be used to recruit a contingent of ordinary citizens from around the EU to examine legislation from the long-term perspective of future generations. Such a “Committee for the Future” could be given the power to draft a response to a yearly report on the future produced by the president of the European Parliament, initiate debates on important political themes of their own choosing, make submissions on future-related issues to other committees, and be consulted by members of the European Parliament (MEPs) on longer-term matters.

MEPs could decide to use these forty-six vacant seats to invite this Committee for the Future to sit, at least on a trial basis, with yearly evaluations. This arrangement would have real benefits for EU politics, acting as an antidote to the union’s existential angst and helping the EU think systemically and for the longer term on matters such as artificial intelligence, biodiversity, climate concerns, demography, mobility, and energy.

Hack 2: An EU Participatory Budget

In 1989, the city of Porto Alegre, Brazil, decided to cede control over a share of its annual budget, letting citizens decide how it was spent. This practice, known as participatory budgeting, has since spread globally. As of 2015, over 1,500 instances of participatory budgeting had been implemented across five continents. These processes generally have had a positive impact, with people proving that they take public spending matters seriously.

To replicate these experiences at the European level, the complex realities of EU budgeting would require specific features. First, participatory spending probably would need to be both local and related to wider EU priorities in order to ensure that citizens see its relevance and its wider European implications. Second, significant resources would need to be allocated to help citizens come up with and promote projects. For instance, the city of Paris has ensured that each suggested project that meets the eligibility requirements has a desk officer within its administration to liaise with the idea’s promoters. It dedicates significant resources to reach out to citizens, in particular in the poorer neighborhoods of Paris, both online and face-to-face. Similar efforts would need to be deployed across Europe. And third, in order to overcome institutional complexities, the European Parliament would need to work with citizens as part of its role in negotiating the budget with the European Council.

Hack 3: An EU Collective Intelligence Forum

Many ideas have been put forward to address popular dissatisfaction with representative democracy by developing new forums such as policy labs, consensus conferences, and stakeholder facilitation groups. Yet many citizens still feel disenchanted with representative democracy, including at the EU level, where they also strongly distrust lobby groups. They need to be involved more purposefully in policy discussions.

A yearly Deliberative Poll could be run on a matter of significance, ahead of key EU summits and possibly around the president of the commission’s State of the Union address. On the model of the first EU-wide Deliberative Poll, Tomorrow’s Europe, this event would bring together in Brussels a random sample of citizens from all twenty-seven EU member states, and enable them to discuss various social, economic, and foreign policy issues affecting the EU and its member states. This concept would have a number of advantages in terms of promoting democratic participation in EU affairs. By inviting a truly representative sample of citizens to deliberate on complex EU matters over a weekend, within the premises of the European Parliament, the European Parliament would be the focus of a high-profile event that would draw media attention. This would be especially beneficial if—unlike Tomorrow’s Europe—the poll was not held at arm’s length by EU policymakers, but with high-level national officials attending to witness good-quality deliberation remolding citizens’ views….(More)”.

Data Governance in the Digital Age


Centre for International Governance Innovation: “Data is being hailed as “the new oil.” The analogy seems appropriate given the growing amount of data being collected, and the advances made in its gathering, storage, manipulation and use for commercial, social and political purposes.

Big data and its application in artificial intelligence, for example, promise to transform the way we live and work — and will generate considerable wealth in the process. But data’s transformative nature also raises important questions about how the benefits are shared, as well as about privacy, public security, openness and democracy, and the institutions that will govern the data revolution.

The delicate interplay between these considerations means that they have to be treated jointly, and at every level of the governance process, from local communities to the international arena. This series of essays by leading scholars and practitioners, which is also published as a special report, will explore topics including the rationale for a data strategy, the role of a data strategy for Canadian industries, and policy considerations for domestic and international data governance…

RATIONALE OF A DATA STRATEGY

THE ROLE OF A DATA STRATEGY FOR CANADIAN INDUSTRIES

BALANCING PRIVACY AND COMMERCIAL VALUES

DOMESTIC POLICY FOR DATA GOVERNANCE

INTERNATIONAL POLICY CONSIDERATIONS

EPILOGUE

Public Policy in an AI Economy


NBER Working Paper by Austan Goolsbee: “This paper considers the role of policy in an AI-intensive economy (interpreting AI broadly). It emphasizes the speed of adoption of the technology for the impact on the job market and the implications for inequality across people and across places. It also discusses the challenges of enacting a Universal Basic Income as a response to widespread AI adoption, discusses pricing, privacy, and competition policy, and considers the question of whether AI could improve policy making itself….(More)”.

How Policymakers Can Foster Algorithmic Accountability


Report by Joshua New and Daniel Castro: “Increased automation with algorithms, particularly through the use of artificial intelligence (AI), offers opportunities for the public and private sectors to complete increasingly complex tasks with a level of productivity and effectiveness far beyond that of humans, generating substantial social and economic benefits in the process. However, many believe an increased use of algorithms will lead to a host of harms, including exacerbating existing biases and inequalities, and have therefore called for new public policies, such as establishing an independent commission to regulate algorithms or requiring companies to explain publicly how their algorithms make decisions. Unfortunately, all of these proposals would lead to less AI use, thereby hindering social and economic progress.

Policymakers should reject these proposals and instead support algorithmic decision-making by promoting policies that ensure its robust development and widespread adoption. Like any new technology, there are strong incentives among both developers and adopters to improve algorithmic decision-making and ensure its applications do not contain flaws, such as bias, that reduce their effectiveness. Thus, rather than establish a master regulatory framework for all algorithms, policymakers should do what they have always done with regard to technology regulation: enact regulation only where it is required, targeting specific harms in particular application areas through dedicated regulatory bodies that are already charged with oversight of that particular sector. To accomplish this, regulators should pursue algorithmic accountability—the principle that an algorithmic system should employ a variety of controls to ensure the operator (i.e., the party responsible for deploying the algorithm) can verify it acts in accordance with its intentions, as well as identify and rectify harmful outcomes. Adopting this framework would both promote the vast benefits of algorithmic decision-making and minimize harmful outcomes, while also ensuring laws that apply to human decisions can be effectively applied to algorithmic decisions….(More)”.