Stefaan Verhulst
The Conversation: “…We need to overcome the boundaries that define the four different types of artificial intelligence, the barriers that separate machines from us – and us from them.
Type I AI: Reactive machines
The most basic types of AI systems are purely reactive, and have the ability neither to form memories nor to use past experiences to inform current decisions. Deep Blue, IBM’s chess-playing supercomputer, which beat international grandmaster Garry Kasparov in the late 1990s, is the perfect example of this type of machine.
Deep Blue can identify the pieces on a chess board and know how each moves. It can make predictions about what moves might be next for it and its opponent. And it can choose the optimal move from among the possibilities.
But it doesn’t have any concept of the past, nor any memory of what has happened before. Apart from a rarely used chess-specific rule against repeating the same move three times, Deep Blue ignores everything before the present moment. All it does is look at the pieces on the chess board as it stands right now, and choose from possible next moves.
This type of intelligence involves the computer perceiving the world directly and acting on what it sees. It doesn’t rely on an internal concept of the world. In a seminal paper, AI researcher Rodney Brooks argued that we should only build machines like this. His main reason was that people are not very good at programming accurate simulated worlds for computers to use, what is called in AI scholarship a “representation” of the world….
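The reactive pattern described above is simple enough to capture in a few lines. Below is a minimal sketch of a purely reactive agent: it scores only the possible successors of the present state and retains nothing between calls. The toy game (add 1, 2 or 3 to reach a target number) and all names are illustrative assumptions, not Deep Blue’s actual design.

```python
# Sketch of a purely reactive agent: it ranks only the successors of the
# current state and keeps no memory between calls. The number game here is
# an illustrative assumption, not Deep Blue's architecture.

def reactive_move(state, moves, evaluate):
    """Choose the move whose immediate successor state scores best."""
    # No history, no learning: only the present state's successors matter.
    return max(moves, key=lambda m: evaluate(state + m))

if __name__ == "__main__":
    target = 21
    evaluate = lambda s: -abs(target - s)   # closer to the target is better
    state = 0
    while state != target:
        state += reactive_move(state, [1, 2, 3], evaluate)
    print("reached", state)                 # nothing about the past was stored
```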
Type II AI: Limited memory
This Type II class contains machines that can look into the past. Self-driving cars do some of this already. For example, they observe other cars’ speed and direction. That can’t be done in just one moment; it requires identifying specific objects and monitoring them over time.
These observations are added to the self-driving cars’ preprogrammed representations of the world, which also include lane markings, traffic lights and other important elements, like curves in the road. They’re included when the car decides when to change lanes, to avoid cutting off another driver or being hit by a nearby car.
But these simple pieces of information about the past are only transient. They aren’t saved as part of the car’s library of experience it can learn from, the way human drivers compile experience over years behind the wheel…
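As a rough sketch of the difference, the snippet below keeps a short, transient buffer of (time, position) observations: enough to estimate another car’s speed, which no single snapshot allows, but old readings simply expire instead of accumulating into lasting experience. The class and its fields are hypothetical, not any real self-driving stack.

```python
# Hedged sketch of "limited memory": a short observation window that expires,
# rather than a library of experience. All names are illustrative assumptions.
from collections import deque

class TransientTracker:
    def __init__(self, horizon=5):
        # Only the last few (time, position) observations are retained.
        self.window = deque(maxlen=horizon)

    def observe(self, t, position):
        self.window.append((t, position))

    def estimated_speed(self):
        """Speed over the short window; impossible from a single moment."""
        if len(self.window) < 2:
            return None
        (t0, x0), (t1, x1) = self.window[0], self.window[-1]
        return (x1 - x0) / (t1 - t0)

tracker = TransientTracker()
for t, x in enumerate([0.0, 1.4, 2.9, 4.3, 5.8]):
    tracker.observe(t, x)
print(tracker.estimated_speed())  # ~1.45 units/s; older readings expire silently
```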
Type III AI: Theory of mind
We might stop here, and call this point the important divide between the machines we have and the machines we will build in the future. However, it is better to be more specific and discuss the types of representations machines need to form, and what they need to be about.
Machines in the next, more advanced, class not only form representations about the world, but also about other agents or entities in the world. In psychology, this is called “theory of mind” – the understanding that people, creatures and objects in the world can have thoughts and emotions that affect their own behavior.
This capacity was crucial to how we humans formed societies, because it allowed us to have social interactions. Without understanding each other’s motives and intentions, and without taking into account what somebody else knows either about me or the environment, working together is at best difficult, at worst impossible.
If AI systems are indeed ever to walk among us, they’ll have to be able to understand that each of us has thoughts and feelings and expectations for how we’ll be treated. And they’ll have to adjust their behavior accordingly.
Type IV AI: Self-awareness
The final step of AI development is to build systems that can form representations about themselves. Ultimately, we AI researchers will have to not only understand consciousness, but build machines that have it….
While we are probably far from creating machines that are self-aware, we should focus our efforts toward understanding memory, learning and the ability to base decisions on past experiences….(More)”
If you think about the puzzle of cooperation being “why should I incur a personal cost of time or money or effort in order to do something that’s going to benefit other people and not me?” the general answer is that if you can create future consequences for present behavior, that can create an incentive to cooperate. Cooperation requires me to incur some costs now, but if I’m cooperating with someone who I’ll interact with again, it’s worth it for me to pay the cost of cooperating now in order to get the benefit of them cooperating with me in the future, as long as there’s a large enough likelihood that we’ll interact again.
Even if it’s with someone that I’m not going to interact with again, if other people are observing that interaction, then it affects my reputation. It can be worth paying the cost of cooperating in order to earn a good reputation, and to attract new interaction partners.
There’s a lot of evidence to show that this works. There are game theory models and computer simulations showing that if you build in these kinds of future consequences, evolution can lead to cooperative agents dominating populations, and learning and strategic reasoning can lead people to cooperate. There are also lots of behavioral experiments supporting this. These are lab experiments where you bring people into the lab, give them money, and have them engage in economic cooperation games where they choose whether to keep the money for themselves or to contribute it to a group project that benefits other people. If you make it so that future consequences exist in any of these various ways, it makes people more inclined to cooperate. Typically, it leads to cooperation paying off, and being the best-performing strategy.
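In that spirit, here is a toy simulation, with assumed payoffs and strategies, of a repeated donation game: cooperating costs the giver c and delivers benefit b to the partner, and each round continues with probability w. Roughly when w exceeds c/b, a reciprocator out-earns a defector, which is the incentive logic described above. The numbers are illustrative, not drawn from any specific study.

```python
# Toy repeated-game simulation (illustrative payoffs, not any specific study).
import random

def play(strategy_a, strategy_b, b=3.0, c=1.0, w=0.9, rng=random.Random(0)):
    """Total payoffs for two strategies in a randomly terminated repeated game."""
    pay_a = pay_b = 0.0
    last_a = last_b = "C"                    # both start by cooperating
    while True:
        a_move, b_move = strategy_a(last_b), strategy_b(last_a)
        if a_move == "C":
            pay_a -= c; pay_b += b           # A pays a cost, B gets the benefit
        if b_move == "C":
            pay_b -= c; pay_a += b
        last_a, last_b = a_move, b_move
        if rng.random() > w:                 # interaction continues with prob. w
            return pay_a, pay_b

tit_for_tat = lambda partner_last: partner_last   # copy the partner's last move
always_defect = lambda partner_last: "D"

print(play(tit_for_tat, tit_for_tat))    # sustained mutual cooperation pays off
print(play(tit_for_tat, always_defect))  # the defector gains once, then nothing
```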
In these situations, it’s not altruistic to be cooperative because the interactions are designed in a way that makes cooperating pay off. For example, we have a paper that shows that in the context of repeated interactions, there’s not any relationship between how altruistic people are and how much they cooperate. Basically, everybody cooperates, even the selfish people. Under certain situations, selfish people can even wind up cooperating more because they’re better at identifying that that’s what is going to pay off.
This general class of solutions to the cooperation problem boils down to creating future consequences, and therefore creating a self-interested motivation in the long run to be cooperative. Strategic cooperation is extremely important; it explains a lot of real-world cooperation. From an institution design perspective, it’s important for people to be thinking about how you set up the rules of interaction—interaction structures and incentive structures—in a way that makes working for the greater good a good strategy.
At the same time that this strategic cooperation is important, it’s also clearly the case that people often cooperate even when there’s not a self-interested motive to do so. That willingness to help strangers (or to not exploit them) is a core piece of well-functioning societies. It makes society much more efficient when you don’t constantly have to be watching your back, afraid that people are going to take advantage of you. If you can generally trust that other people are going to do the right thing and you’re going to do the right thing, it makes life much more socially efficient.
Strategic incentives can motivate people to cooperate, but people also keep cooperating even when there are not incentives to do so, at least to some extent. What motivates people to do that? The way behavioral economists and psychologists talk about that is at a proximate psychological level—saying things like, “Well, it feels good to cooperate with other people. You care about others and that’s why you’re willing to pay costs to help them. You have social preferences.” …
Most people, both in the scientific world and among laypeople, are of the former opinion, which is that we are by default selfish—we have to use rational deliberation to make ourselves do the right thing. I try to think about this question from a theoretical, first-principles position and ask: what should it be? From the perspective of either evolution or strategic reasoning, which of these two stories makes more sense, and which should we expect to observe?
If you think about it that way, the key question is “where do our intuitive defaults come from?” There’s all this work in behavioral economics and psychology on heuristics and biases which suggests that these intuitions are usually rules of thumb for the behavior that typically works well. It makes sense: If you’re going to have something as your default, what should you choose as your default? You should choose the thing that works well in general. In any particular situation, you might stop and ask, “Does my rule of thumb fit this specific situation?” If not, then you can override it….(More)”
Essay by Lidia Gryszkiewicz, Tuukka Toivonen, & Ioanna Lykourentzou: “Innovation labs, with their aspirations to foster systemic change, have become a mainstay of the social innovation scene. Used by city administrations, NGOs, think tanks, and multinational corporations, labs are becoming an almost default framework for collaborative innovation. They have resonance in and across myriad fields: London’s pioneering Finance Innovation Lab, for example, aims to create a system of finance that benefits “people and planet”; the American eLab is searching for the future of the electricity sector; and the Danish MindLab helps the government co-create better social services and solutions. Hundreds of other, similar initiatives around the world are taking on a range of grand challenges (or, if you prefer, wicked problems) and driving collective social impact.
Yet for all their seeming popularity, labs face a basic problem that closely parallels the predicament of hub organizations: There is little clarity on their core features, and few shared definitions exist that would make sense amid their diverse themes and settings. …
Building on observations previously made in the SSIR and elsewhere, we contribute to the task of clarifying the logic of modern innovation labs by distilling 10 defining features. …
1. Imposed but open-ended innovation themes…
2. Preoccupation with large innovation challenges…
3. Expectation of breakthrough solutions…
4. Heterogeneous participants…
5. Targeted collaboration…
6. Long-term perspectives…
7. Rich innovation toolbox…
8. Applied orientation…
9. Focus on experimentation…
10. Systemic thinking…
In a recent academic working paper, we condense the above into this definition: An innovation lab is a semi-autonomous organization that engages diverse participants—on a long-term basis—in open collaboration for the purpose of creating, elaborating, and prototyping radical solutions to pre-identified systemic challenges…(More)”
Chris Stein for Quartz: “In map after map after map, many African countries appear as a void, marked with a color that signifies not a percentage or a policy but merely offers an explanation: “no data available.”
Where numbers or cartography has left African countries behind, developers are stepping in with open-source tools that allow anyone from academics to your everyday smartphone user to improve maps of the continent.
One such project is Missing Maps, which invites people to use satellite imagery on mapping platform OpenStreetMap to fill out roads, buildings and other features in various parts of Africa that lack these markers. Active projects on Missing Maps include everything from mapping houses in Malawi to marking roads in the Democratic Republic of Congo. Missing Maps co-founder Ivan Gayton said humanitarian organizations could use the refined maps for development projects or to respond to future disasters or disease outbreaks….
In July, Missing Maps launched MapSwipe, a smartphone app that helps whittle down the areas needed for mapping on OpenStreetMap by giving anyone with an iPhone or Android phone the ability to swipe through satellite images and indicate if they contain features like houses, roads or paths. These are then forwarded on to Missing Maps for precise marking of these features….Missing Maps’s approach is similar to that of Mapping Africa, a project developed at Princeton University that pays users to look at satellite images and identify croplands….People who sign up on Amazon’s Mechanical Turk service are given satellite images of random patches of land across Africa and asked to determine if the land is being used for farming.
…One outlet for Mapping Africa’s data could be AfricaMap, a Harvard University project where users can compile data on everything from ethnic groups to mother tongues to slave trade routes and layer it over a map of the continent….(More)”
Report by sparks & honey: “Through our interaction with machines, we develop emotional, human expectations of them. Alexa, for example, comes alive when we speak with it. AI is and will be a representation of its cultural context, the values and ethics we apply to one another as humans.
This machinery is eerily familiar as it mirrors us, and eventually becomes even smarter than us mere mortals. We’re programming its advantages based on how we see ourselves and the world around us, and we’re doing this at an incredible pace. This shift is pervading culture from our perceptions of beauty and aesthetics to how we interact with one another – and our AI.
Infused with technology, we’re asking: what does it mean to be human?
Our report examines:
• The evolution of our empathy from humans to animals and robots
• How we treat AI in its infancy like we do a child, allowing it space to grow
• The spectrum of our emotional comfort in a world embracing AI
• The cultural contexts fueling AI biases, such as gender stereotypes, that drive the direction of AI
• How we place an innate trust in machines, more than we do one another (Download for free)”
Katherine Curchin at LSE Blog: “…behavioural scientists are calling for a second generation of behaviourally-informed policy. In some policy areas, nudges simply aren’t enough. Behavioural research shows stronger action is required to attack the underlying cause of problems. For example, many scholars have argued that behavioural insights provide a rationale for regulation to protect consumers from manipulation by private sector companies. But what might a second generation of behaviourally-informed social policy look like?
Behavioural insights could provide a justification to change the trajectory of income support policy. Since the 1990s policy attention has focused on the moral character of benefits recipients. Inspired by Lawrence Mead’s paternalist philosophy, governments have tried to increase the resolve of the unemployed to work their way out of poverty. More and more behavioural requirements have been attached to benefits to motivate people to fulfil their obligations to society.
But behavioural research now suggests that these harsh policies are misguided. Behavioural science supports the idea that people often make poor decisions and do things which are not in their long term interests. But the weakness of individuals’ moral constitution isn’t so much the problem as the unequal distribution of opportunities in society. There are circumstances in which humans are unlikely to flourish no matter how motivated they are.
Normal human psychological limitations – our limited cognitive capacity, limited attention and limited self-control – interact with the environment to produce the behaviour that advocates of harsh welfare regimes attribute to permissive welfare. In their book Scarcity, Sendhil Mullainathan and Eldar Shafir argue that the experience of deprivation creates a mindset that makes it harder to process information, pay attention, make good decisions, plan for the future, and resist temptations.
Importantly, behavioural scientists have demonstrated that this mindset can be temporarily created in the laboratory by placing subjects in artificial situations which induce the feeling of not having enough. As a consequence, experimental subjects from middle-class backgrounds suddenly display the short-term thinking and irrational decision making often attributed to a culture of poverty.
Tying inadequate income support to a list of behavioural conditions will most punish those who are suffering most. Empirical studies of welfare conditionality have found that benefit claimants often do not comprehend the complicated rules that apply to them. Some are being punished for lack of understanding rather than deliberate non-compliance.
Behavioural insights can be used to mount a case for a more generous, less punitive approach to income support. The starting point is to acknowledge that some of Mead’s psychological assumptions have turned out to be wrong. The nature of the cognitive machinery humans share imposes limits on how self-disciplined and conscientious we can reasonably expect people living in adverse circumstances to be. We have placed too much emphasis on personal responsibility in recent decades. Why should people internalize the consequences of their behaviour when this behaviour is to a large extent the product of their environment?…(More)”
Michele Catanzaro at EuroScientist: “Imagine a world without peer review committees, project proposals or activity reports. Imagine a world where research funds seamlessly flow where they are best employed, like nutrients in a food-web or materials in a river network. Many scientists would immediately sign up to live in such a world.
The Netherlands is set to become the place where this academic paradise will be tested, in the next few years. In July 2016, the Dutch parliament approved a motion related to implementing alternative funding procedures to alleviate the research bureaucracy, which is increasingly burdening scientists. Here EuroScientist investigates whether the self-organisation power of the scientific community could help resolve one of researchers’ worst burdens.
Self-organisation
The Dutch national funding agency is planning to adopt a radically new system to allocate part of its funding, promoted by ecologist Marten Scheffer, who is professor of aquatic ecology and water quality management at Wageningen University and Research Centre. Under the proposed approach, funds would initially be evenly divided among all scientists in the country. Each scientist would then have to allocate half of what they have received to the person who, in their opinion, is the most deserving scientist in their network. The process would then be iterated.
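A minimal sketch of how such self-organised allocation might behave, assuming each scientist’s “most deserving” choice is a fixed endorsement, is given below. This is a toy model for illustration, not the funding agency’s actual procedure.

```python
# Toy model of the proposed scheme: equal shares, then each scientist
# repeatedly passes half of their current funds to one endorsed peer.
# The endorsement map is an illustrative assumption.

def iterate_funding(endorses, rounds=10, budget=100.0):
    n = len(endorses)
    funds = [budget / n] * n                    # start with an even split
    for _ in range(rounds):
        given = [f / 2 for f in funds]          # each gives half away...
        funds = [f / 2 for f in funds]          # ...and keeps the rest
        for donor, recipient in enumerate(endorses):
            funds[recipient] += given[donor]
    return funds

# Four scientists: 1, 2 and 3 endorse scientist 0, who endorses scientist 1.
print([round(f, 2) for f in iterate_funding([1, 0, 0, 0])])
# Funds concentrate on the widely endorsed scientists, with no paperwork.
```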
The promoters of the system believe that the “wisdom of the crowd” of the scientific community would assign more funds to the most deserving scientists among them, with a minimal amount of paperwork. The Dutch initiative is part of a broader effort to use a scientific approach to improve science.
In other words, it is part of a trend aiming to employ scientific evidence to tweak the social mechanisms of academia. Specifically, findings from what is known as complexity research are increasingly brought forward as a way of reducing bureaucracy, removing red tape, and maximising the time scientists spend thinking….
Abandoning the current bureaucratic, top-down system to evaluate and fund research, based on labour-intensive peer-review, may not be too much of a loss. “Peer-review is an imperfect, fragile mechanism. Our simulations show that assigning funds at random would not distort too much the results of the traditional mechanism,” says Flaminio Squazzoni, an economist at the University of Brescia, Italy, and the coordinator of the PEERE-New Frontiers of Peer Review COST action.
In reality peer-review is never quite neutral. “If scientists behave perfectly, then peer review works,” Squazzoni explains, “but if strategic motivations are taken into account, like saving time or competition, then the results are worse than random.” Squazzoni believes that automation, economic incentives, or the creation of professional reviewers may improve the situation….(More)”
Essay by William Boyd: “How will big data impact environmental law in the near future? This Essay imagines one possible future for environmental law in 2030 that focuses on the implications of big data for the protection of public health from risks associated with pollution and industrial chemicals. It assumes the perspective of an historian looking back from the end of the twenty-first century at the evolution of environmental law during the late twentieth and early twenty-first centuries.
The premise of the Essay is that big data will drive a major shift in the underlying knowledge practices of environmental law (along with other areas of law focused on health and safety). This change in the epistemic foundations of environmental law, it is argued, will in turn have important, far-reaching implications for environmental law’s normative commitments and for its ability to discharge its statutory responsibilities. In particular, by significantly enhancing the ability of environmental regulators to make harm more visible and more traceable, big data will put considerable pressure on previous understandings of acceptable risk across populations, pushing toward a more singular and more individualized understanding of harm. This will raise new and difficult questions regarding environmental law’s capacity to confront and take responsibility for the actual lives caught up in the tragic choices it is called upon to make. In imagining this near future, the Essay takes a somewhat exaggerated and, some might argue, overly pessimistic view of the implications of big data for environmental law’s efforts to protect public health. This is done not out of a conviction that such a future is likely, but rather to highlight some of the potential problems that may arise as big data becomes a more prominent part of environmental protection. In an age of data triumphalism, such a perspective, it is hoped, may provide grounds for a more critical engagement with the tools and knowledge practices that inform environmental law and the implications of those tools for environmental law’s ability to meet its obligations. Of course, there are other possible futures, and big data surely has the potential to make many positive contributions to environmental protection in the coming decades. Whether it will do so will depend in no small part on the collective choices we make to manage these new capabilities in the years ahead….(More)”
Graça Fonseca at apolitical: “Portugal has announced the world’s first participatory budget on a national scale. The project will let people submit ideas for what the government should spend its money on, and then vote on which ideas are adopted.
Although participatory budgeting has become increasingly popular around the world in the past few years, it has so far been confined to cities and regions, and no country that we know of has attempted it nationwide. To reach as many people as possible, Portugal is also examining another innovation: letting people cast their votes via ATMs.
‘It’s about quality of life, it’s about the quality of public space, it’s about the quality of life for your children, it’s about your life, OK?’ Graça Fonseca, the minister responsible, told Apolitical. ‘And you have a huge deficit of trust between people and the institutions of democracy. That’s the point we’re starting from and, if you look around, Portugal is not an exception in that among Western societies. We need to build that trust and, in my opinion, it’s urgent. If you don’t do anything, in ten, twenty years you’ll have serious problems.’
Although the official window for proposals begins in January, some have already been submitted to the project’s website. One suggests equipping kindergartens with technology to teach children about robotics. Using the open-source platform Arduino, the plan is to let children play with the tech and so foster scientific understanding from the earliest age.
Proposals can be made in the areas of science, culture, agriculture and lifelong learning, and there will be more than forty events in the new year for people to present and discuss their ideas.
The organisers hope that it will go some way to restoring closer contact between government and its citizens. Previous projects have shown that people who don’t vote in general elections often do cast their ballot on the specific proposals that participatory budgeting entails. Moreover, those who make the proposals often become passionate about them, campaigning for votes, flyering, making YouTube videos, going door-to-door and so fuelling a public discussion that involves ever more people in the process.
On the other side, it can bring public servants nearer to their fellow citizens by sharpening their understanding of what people want and what their priorities are. It can also raise the quality of public services by directing them more precisely to where they’re needed as well as by tapping the collective intelligence and imagination of thousands of participants….
Although it will not be used this year, because the project is still very much in the trial phase, the use of ATMs is potentially revolutionary. As Fonseca puts it, ‘In every remote part of the country, you might have nothing else, but you have an ATM.’ Moreover, an ATM could display proposals and allow people to vote directly, not least because it already contains a secure way of verifying their identity. At the moment, for comparison, people can vote by text or online, sending in the number from their ID card, which is checked against a database….(More)”.