Paper by Charles Kenny and Ben Crisman: “Governments buy about $9 trillion worth of goods and services a year, and their procurement policies are increasingly subject to international standards and institutional regulation, including the WTO Plurilateral Agreement on Government Procurement, Open Government Partnership commitments and International Financial Institution procurement rules. These standards focus on transparency and open competition as key tools to improve outcomes. While there is some evidence on the impact of competition on prices in government procurement, there is less on the impact of specific procurement rules, including transparency requirements, on competition or procurement outcomes. Using a database of World Bank-financed contracts, we explore the impact on competition of a relatively minor procurement rule governing advertising, using regression discontinuity design and matching methods….(More)”
Digital Government: Leveraging Innovation to Improve Public Sector Performance and Outcomes for Citizens
Book edited by Svenja Falk, Andrea Römmele, and Michael Silverman: “This book focuses on the implementation of digital strategies in the public sectors in the US, Mexico, Brazil, India and Germany. The case studies presented examine different digital projects by looking at their impact as well as their alignment with their national governments’ digital strategies. The contributors assess the current state of digital government, analyze the contribution of digital technologies to achieving outcomes for citizens, discuss ways to measure digitalization and address the question of how governments oversee the legal and regulatory obligations of information technology. The book argues that most countries formulate good strategies for digital government, but do not effectively prescribe and implement corresponding policies and programs. Showing specific programs that deliver results can help policy makers, knowledge specialists and public-sector researchers to develop best practices for future national strategies….(More)”
Crowd-sourcing pollution control in India
Springwise: “Following orders from the national government to reduce air pollution and improve air quality in the New Delhi region, the Environment Pollution (Prevention and Control) Authority created the Hawa Badlo app. Designed for citizens to report cases of air pollution, the app sends each complaint to the appropriate official for resolution.
Free to use, the app is available for both iOS and Android. Complaints are geo-tagged, and there are two different versions available – one for citizens and one for government officials. Officials must provide photographic evidence to close a case. The app itself produces weekly reports listing the number and status of complaints, along with any actions taken to resolve them. Currently focusing on pollution from construction, unpaved roads and the burning of garbage, the team behind the app plans to expand its use to cover other types of pollution as well.
From providing free wi-fi when the air is clean enough to mapping air-quality in real-time, air pollution solutions are increasingly involving citizens….(More)”
Microsoft Shows Searches Can Boost Early Detection of Lung Cancer
Lung cancer can be detected a year before current methods of diagnosis in more than one-third of cases by analyzing patients’ internet searches for symptoms and demographic data that put them at higher risk, according to research from Microsoft published Thursday in the journal JAMA Oncology. The study shows it’s possible to use search data to give patients or doctors enough reason to seek cancer screenings earlier, improving treatment prospects for lung cancer, the leading cause of cancer deaths worldwide.
To train their algorithms, researchers Ryen White and Eric Horvitz scanned anonymous queries in Bing, the company’s search engine. They took searchers who had asked Bing something that indicated a recent lung cancer diagnosis, such as questions about specific treatments or the phrase “I was just diagnosed with lung cancer.”
Then they went back over the user’s previous searches to see if there were other queries that might have indicated the possibility of cancer prior to diagnosis. They looked for searches such as those related to symptoms, including bronchitis, chest pain and blood in sputum. The researchers reviewed other risk factors such as gender, age, race and whether searchers lived in areas with high levels of asbestos and radon, both of which increase the risk of lung cancer. And they looked for indications the user was a smoker, such as people searching for smoking cessation products like Nicorette gum.
How effective this method can be depends on how many false positives — people who don’t end up having cancer but are told they may — you are willing to tolerate, the researchers said. More false positives also mean catching more cases early. With one false positive in 1,000, 39 percent of cases can be caught a year earlier, according to the study. Dropping to one false positive per 100,000 still could allow researchers to catch 3 percent of cases a year earlier, Horvitz said. The company published similar research on pancreatic cancer in June….(More)”
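The trade-off the researchers describe, tolerating more false positives in exchange for catching more cases early, is essentially a question of where to set a classifier’s decision threshold. A rough sketch of that idea in Python, using synthetic data and invented features in place of real query histories (an illustration of the general technique, not Microsoft’s published pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-searcher features: a count of symptom
# queries (chest pain, blood in sputum), a smoking-related-search flag,
# and a high-radon/asbestos-area flag. All values are invented.
n = 200_000
X = np.column_stack([
    rng.poisson(0.05, n),      # symptom-query count
    rng.binomial(1, 0.2, n),   # searched for smoking-cessation products
    rng.binomial(1, 0.1, n),   # lives in a high-risk area
])
# A synthetic rare outcome, correlated with the features above.
logit = -7 + 1.5 * X[:, 0] + 1.0 * X[:, 1] + 0.8 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]

# Set the cutoff so false positives stay near 1 in 1,000 among
# non-cases, then measure what share of true cases gets flagged.
cutoff = np.quantile(scores[y_te == 0], 1 - 1e-3)
flagged_share = (scores[y_te == 1] >= cutoff).mean()
print(f"true cases flagged at ~1/1,000 false-positive rate: {flagged_share:.1%}")
```

Raising the tolerated false-positive rate lowers the cutoff and flags more true cases earlier; tightening it does the reverse, which is exactly the 1-in-1,000 versus 1-in-100,000 trade-off the study reports.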
Open data aims to boost food security prospects
Mark Kinver at BBC News: “Rothamsted Research, a leading agricultural research institution, is attempting to make data from long-term experiments available to all.
In partnership with a data consultancy, it is developing a method to make complex results accessible and usable.
The institution is a member of the Godan Initiative that aims to make data available to the scientific community.
In September, Godan called on the public to sign its global petition to open agricultural research data.
“The continuing challenge we face is that the raw data alone is not sufficient for people to make sense of it,” said Chris Rawlings, head of computational and systems biology at Rothamsted Research.
“This is because the long-term experiments are very complex, and they are looking at agriculture and agricultural ecosystems, so you need to know a lot about what the intentions of the studies are, how they are being used, and the changes that have taken place over time.”
However, he added: “Even with this level of complexity, we do see a significant number of users contacting us or developing links with us.”
One size fits all
The ability to provide open data to all is one of the research organisation’s national capabilities, and forms a defining principle of its web portal to the experiments carried out at its North Wyke Farm Platform in North Devon.
Rothamsted worked in partnership with Tessella, a data consultancy, on the data collected from the experiments, which focused on livestock pastures.
The information being collected, as often as every 15 minutes, includes water run-off levels, soil moisture, meteorological data, and soil nutrients, and collection is expected to continue for decades.
“The data is quite varied and quite diverse, and [Rothamsted] wants to make this data available to the wider research community,” explained Tessella’s Andrew Bowen.
“What Rothamsted needed was a way to store it and a way to present it in a portal in which people could see what they had to offer.”
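One conventional way to handle such varied, high-frequency readings is a single “long” table of timestamped observations, which a portal can then filter and serve. A minimal sketch in Python with pandas, using invented column names and values rather than Rothamsted’s actual schema:

```python
import pandas as pd

# Each row is one reading: what was measured, where, when, and in
# what unit. The rows below are illustrative, not real measurements.
observations = pd.DataFrame([
    ("2016-10-01 00:00", "north_wyke_1", "soil_moisture", 31.2, "%vol"),
    ("2016-10-01 00:00", "north_wyke_1", "water_runoff",   0.4, "l/s"),
    ("2016-10-01 00:15", "north_wyke_1", "soil_moisture", 31.1, "%vol"),
], columns=["timestamp", "site", "variable", "value", "unit"])
observations["timestamp"] = pd.to_datetime(observations["timestamp"])

# A long table absorbs new variables without schema changes, which
# suits experiments expected to keep evolving over decades.
print(observations[observations["variable"] == "soil_moisture"])
```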
He told BBC News that there were a number of challenges that needed to be tackled.
One was the management of the data, and the team from Tessella adopted an “agile scrum” approach.
“Basically, what you do is draw up a list of the requirements, of what you need, and we break the project down into short iterations, starting with the highest priority,” he said.
“This means that you are able to take a more exploratory approach to the process of developing software. This is very well suited to the research environment.”…(More)”
Transforming government through digitization
Bjarne Corydon, Vidhya Ganesan, and Martin Lundqvist at McKinsey: “By digitizing processes and making organizational changes, governments can enhance services, save money, and improve citizens’ quality of life.
As companies have transformed themselves with digital technologies, people are calling on governments to follow suit. By digitizing, governments can provide services that meet the evolving expectations of citizens and businesses, even in a period of tight budgets and increasingly complex challenges. Our estimates suggest that government digitization, using current technology, could generate over $1 trillion annually worldwide.
Digitizing a government requires attention to two major considerations: the core capabilities for engaging citizens and businesses, and the organizational enablers that support those capabilities (exhibit). These make up a framework for setting digital priorities. In this article, we look at the capabilities and enablers in this framework, along with guidelines and real-world examples to help governments seize the opportunities that digitization offers.
Governments typically center their digitization efforts on four capabilities: services, processes, decisions, and data sharing. For each, we believe there is a natural progression from quick wins to transformative efforts….(More)”
See also: Digital by default: A guide to transforming government (PDF–474KB) and “Never underestimate the importance of good government,” a New at McKinsey blog post with coauthor Bjarne Corydon, director of the McKinsey Center for Government.
Understanding the four types of AI, from reactive robots to self-aware beings
The Conversation: “…We need to overcome the boundaries that define the four different types of artificial intelligence, the barriers that separate machines from us – and us from them.
Type I AI: Reactive machines
The most basic types of AI systems are purely reactive, and have the ability neither to form memories nor to use past experiences to inform current decisions. Deep Blue, IBM’s chess-playing supercomputer, which beat international grandmaster Garry Kasparov in the late 1990s, is the perfect example of this type of machine.
Deep Blue can identify the pieces on a chess board and know how each moves. It can make predictions about what moves might be next for it and its opponent. And it can choose the optimal moves from among the possibilities.
But it doesn’t have any concept of the past, nor any memory of what has happened before. Apart from a rarely used chess-specific rule against repeating the same position three times, Deep Blue ignores everything before the present moment. All it does is look at the pieces on the chess board as it stands right now, and choose from possible next moves.
This type of intelligence involves the computer perceiving the world directly and acting on what it sees. It doesn’t rely on an internal concept of the world. In a seminal paper, AI researcher Rodney Brooks argued that we should only build machines like this. His main reason was that people are not very good at programming accurate simulated worlds for computers to use, what is called in AI scholarship a “representation” of the world….
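In code, a purely reactive agent reduces to a function of the current state alone. A toy sketch in Python (the game, move names and scoring are invented; Deep Blue’s actual search was far more elaborate):

```python
def legal_moves(state):
    """All moves available in the present state (hypothetical game API)."""
    return state["moves"]

def evaluate(state, move):
    """Score a move using only the present state, never any history."""
    return state["move_scores"].get(move, 0)

def reactive_choice(state):
    # Nothing is remembered before this call and nothing is stored
    # after it: the decision is a pure function of the board as given.
    return max(legal_moves(state), key=lambda m: evaluate(state, m))

state = {"moves": ["Nf3", "e4", "d4"],
         "move_scores": {"e4": 3, "d4": 2, "Nf3": 1}}
print(reactive_choice(state))  # -> "e4"
```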
Type II AI: Limited memory
This Type II class contains machines that can look into the past. Self-driving cars do some of this already. For example, they observe other cars’ speed and direction. That can’t be done in just one moment; it requires identifying specific objects and monitoring them over time.
These observations are added to the self-driving cars’ preprogrammed representations of the world, which also include lane markings, traffic lights and other important elements, like curves in the road. They’re included when the car decides when to change lanes, to avoid cutting off another driver or being hit by a nearby car.
But these simple pieces of information about the past are only transient. They aren’t saved as part of the car’s library of experience it can learn from, the way human drivers compile experience over years behind the wheel….
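That distinction, transient observations versus accumulated experience, can be made concrete in a few lines. A toy sketch with invented names and a deliberately simple rule: the agent keeps a short, fixed-size window of readings that simply fall off the end, and nothing is ever learned from them:

```python
from collections import deque

class LimitedMemoryAgent:
    def __init__(self, window=5):
        # A transient buffer: once a reading leaves the window it is
        # gone for good, so no long-term experience ever accumulates.
        self.recent = deque(maxlen=window)

    def observe(self, lead_car_speed):
        self.recent.append(lead_car_speed)

    def decide(self):
        if len(self.recent) < 2:
            return "hold_lane"
        # Use only the window to judge whether the car ahead is braking.
        trend = self.recent[-1] - self.recent[0]
        return "slow_down" if trend < 0 else "hold_lane"

agent = LimitedMemoryAgent()
for speed in (30, 29, 27, 26):   # the tracked car is decelerating
    agent.observe(speed)
print(agent.decide())            # -> "slow_down"
```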
Type III AI: Theory of mind
We might stop here, and call this point the important divide between the machines we have and the machines we will build in the future. However, it is better to be more specific and discuss the types of representations machines need to form, and what those representations need to be about.
Machines in the next, more advanced, class not only form representations about the world, but also about other agents or entities in the world. In psychology, this is called “theory of mind” – the understanding that people, creatures and objects in the world can have thoughts and emotions that affect their own behavior.
This was crucial to how we humans formed societies, because it allowed us to have social interactions. Without understanding each other’s motives and intentions, and without taking into account what somebody else knows either about me or the environment, working together is at best difficult, at worst impossible.
If AI systems are indeed ever to walk among us, they’ll have to be able to understand that each of us has thoughts and feelings and expectations for how we’ll be treated. And they’ll have to adjust their behavior accordingly.
Type IV AI: Self-awareness
The final step of AI development is to build systems that can form representations about themselves. Ultimately, we AI researchers will have to not only understand consciousness, but build machines that have it….
While we are probably far from creating machines that are self-aware, we should focus our efforts toward understanding memory, learning and the ability to base decisions on past experiences….(More)”
The Cost of Cooperating
If you think about the puzzle of cooperation as being “why should I incur a personal cost of time or money or effort in order to do something that’s going to benefit other people and not me?”, the general answer is that if you can create future consequences for present behavior, that can create an incentive to cooperate. Cooperation requires me to incur some costs now, but if I’m cooperating with someone I’ll interact with again, it’s worth paying the cost of cooperating now in order to get the benefit of them cooperating with me in the future, as long as there’s a large enough likelihood that we’ll interact again.
Even if it’s with someone that I’m not going to interact with again, if other people are observing that interaction, then it affects my reputation. It can be worth paying the cost of cooperating in order to earn a good reputation, and to attract new interaction partners.
There’s a lot of evidence to show that this works. There are game theory models and computer simulations showing that if you build in these kinds of future consequences, you can get evolution to lead to cooperative agents dominating populations, and learning and strategic reasoning to lead people to cooperate. There are also lots of behavioral experiments supporting this. These are lab experiments where you bring people into the lab, give them money, and have them engage in economic cooperation games where they choose whether to keep the money for themselves or to contribute it to a group project that benefits other people. If you make it so that future consequences exist in any of these various ways, it makes people more inclined to cooperate. Typically, it leads to cooperation paying off, and being the best-performing strategy.
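A stripped-down version of those simulations fits in a few lines of Python. In this sketch (all parameter values are illustrative), cooperating costs c and delivers b to the partner, each match continues with probability delta, and “reciprocators” copy their partner’s previous move; when delta is large enough, roughly delta > c/b, reciprocal cooperation out-earns defection:

```python
import random

def play_match(strat_a, strat_b, rng, c=1.0, b=3.0, delta=0.9):
    """One repeated-game match; returns player A's total payoff."""
    pay_a = 0.0
    last_a = last_b = True               # reciprocators open by cooperating
    while True:
        act_a = last_b if strat_a == "reciprocate" else False
        act_b = last_a if strat_b == "reciprocate" else False
        # Cooperating costs c and hands the partner a benefit of b.
        pay_a += (b if act_b else 0.0) - (c if act_a else 0.0)
        last_a, last_b = act_a, act_b
        if rng.random() > delta:         # the interaction ends here
            return pay_a

def average_payoff(strat_a, strat_b, trials=20_000):
    rng = random.Random(0)
    return sum(play_match(strat_a, strat_b, rng) for _ in range(trials)) / trials

# With delta = 0.9 (about ten rounds on average), mutual reciprocation
# averages roughly (b - c) / (1 - delta) = 20 per match, while defecting
# against a reciprocator nets only the one-shot benefit b = 3.
print("reciprocate vs reciprocate:", round(average_payoff("reciprocate", "reciprocate"), 1))
print("always defect vs reciprocate:", round(average_payoff("defect", "reciprocate"), 1))
```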
In these situations, it’s not altruistic to be cooperative because the interactions are designed in a way that makes cooperating pay off. For example, we have a paper that shows that in the context of repeated interactions, there’s not any relationship between how altruistic people are and how much they cooperate. Basically, everybody cooperates, even the selfish people. Under certain situations, selfish people can even wind up cooperating more because they’re better at identifying that that’s what is going to pay off.
This general class of solutions to the cooperation problem boils down to creating future consequences, and therefore creating a self-interested motivation in the long run to be cooperative. Strategic cooperation is extremely important; it explains a lot of real-world cooperation. From an institution design perspective, it’s important for people to be thinking about how you set up the rules of interaction—interaction structures and incentive structures—in a way that makes working for the greater good a good strategy.
At the same time that this strategic cooperation is important, it’s also clearly the case that people often cooperate even when there’s not a self-interested motive to do so. That willingness to help strangers (or to not exploit them) is a core piece of well-functioning societies. It makes society much more efficient when you don’t constantly have to be watching your back, afraid that people are going to take advantage of you. If you can generally trust that other people are going to do the right thing and you’re going to do the right thing, it makes life much more socially efficient.
Strategic incentives can motivate people to cooperate, but people also keep cooperating even when there are not incentives to do so, at least to some extent. What motivates people to do that? The way behavioral economists and psychologists talk about that is at a proximate psychological level—saying things like, “Well, it feels good to cooperate with other people. You care about others and that’s why you’re willing to pay costs to help them. You have social preferences.” …
Most people, both in the scientific world and among laypeople, are of the former opinion, which is that we are by default selfish—we have to use rational deliberation to make ourselves do the right thing. I try to think about this question from a theoretical, first-principles position and ask, what should it be? From the perspective of either evolution or strategic reasoning, which of these two stories makes more sense, and which should we expect to observe?
If you think about it that way, the key question is “where do our intuitive defaults come from?” There’s all this work in behavioral economics and psychology on heuristics and biases which suggests that these intuitions are usually rules of thumb for the behavior that typically works well. It makes sense: If you’re going to have something as your default, what should you choose as your default? You should choose the thing that works well in general. In any particular situation, you might stop and ask, “Does my rule of thumb fit this specific situation?” If not, then you can override it….(More)”
Innovation Labs: 10 Defining Features
Essay by Lidia Gryszkiewicz, Tuukka Toivonen, & Ioanna Lykourentzou: “Innovation labs, with their aspirations to foster systemic change, have become a mainstay of the social innovation scene. Used by city administrations, NGOs, think tanks, and multinational corporations, labs are becoming an almost default framework for collaborative innovation. They have resonance in and across myriad fields: London’s pioneering Finance Innovation Lab, for example, aims to create a system of finance that benefits “people and planet”; the American eLab is searching for the future of the electricity sector; and the Danish MindLab helps the government co-create better social services and solutions. Hundreds of other, similar initiatives around the world are taking on a range of grand challenges (or, if you prefer, wicked problems) and driving collective social impact.
Yet for all their seeming popularity, labs face a basic problem that closely parallels the predicament of hub organizations: There is little clarity on their core features, and few shared definitions exist that would make sense amid their diverse themes and settings. …
Building on observations previously made in the SSIR and elsewhere, we contribute to the task of clarifying the logic of modern innovation labs by distilling 10 defining features. …
1. Imposed but open-ended innovation themes…
2. Preoccupation with large innovation challenges…
3. Expectation of breakthrough solutions…
4. Heterogeneous participants…
5. Targeted collaboration…
6. Long-term perspectives…
7. Rich innovation toolbox…
8. Applied orientation…
9. Focus on experimentation…
10. Systemic thinking…
In a recent academic working paper, we condense the above into this definition: An innovation lab is a semi-autonomous organization that engages diverse participants—on a long-term basis—in open collaboration for the purpose of creating, elaborating, and prototyping radical solutions to pre-identified systemic challenges…(More)”
The crowdsourcing movement to improve African maps
Chris Stein for Quartz: “In map after map after map, many African countries appear as a void, marked with a color that signifies not a percentage or a policy but merely an explanation: “no data available.”
Where numbers or cartography has left African countries behind, developers are stepping in with open-source tools that allow anyone from academics to your everyday smartphone user to improve maps of the continent.
One such project is Missing Maps, which invites people to use satellite imagery on the mapping platform OpenStreetMap to fill out roads, buildings and other features in various parts of Africa that lack these markers. Active projects on Missing Maps include everything from mapping houses in Malawi to marking roads in the Democratic Republic of Congo. Missing Maps co-founder Ivan Gayton said humanitarian organizations could use the refined maps for development projects or to respond to future disasters or disease outbreaks….
In July, Missing Maps launched MapSwipe, a smartphone app that helps whittle down the areas that need mapping on OpenStreetMap by giving anyone with an iPhone or Android phone the ability to swipe through satellite images and indicate whether they contain features like houses, roads or paths. These are then forwarded on to Missing Maps for precise marking of these features….

Missing Maps’s approach is similar to that of Mapping Africa, a project developed at Princeton University that pays users to look at satellite images and identify croplands…. People who sign up on Amazon’s Mechanical Turk service are given satellite images of random patches of land across Africa and asked to determine if the land is being used for farming.
…One outlet for Mapping Africa’s data could be AfricaMap, a Harvard University project where users can compile data on everything from ethnic groups to mother tongues to slave trade routes and layer it over a map of the continent….(More)”
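Both MapSwipe and Mapping Africa hinge on the same step: turning many noisy volunteer judgments about an image tile into a single label. A plausible sketch of that aggregation in Python, using an assumed majority rule and thresholds (neither project’s documented pipeline):

```python
from collections import Counter

def aggregate(votes_by_tile, min_votes=3, min_agreement=0.6):
    """Keep tiles where enough volunteers agree something is there."""
    flagged = {}
    for tile, votes in votes_by_tile.items():
        if len(votes) < min_votes:
            continue                   # too few eyes on this tile so far
        label, count = Counter(votes).most_common(1)[0]
        if label != "empty" and count / len(votes) >= min_agreement:
            flagged[tile] = label      # forward for detailed tracing
    return flagged

swipes = {
    "tile_001": ["building", "building", "empty", "building"],
    "tile_002": ["empty", "empty", "empty"],
    "tile_003": ["road", "building"],  # still below the vote minimum
}
print(aggregate(swipes))  # -> {'tile_001': 'building'}
```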