Report by the OECD: “…provides an in-depth, evidence-based analysis of open government initiatives and the challenges countries face in implementing and co-ordinating them. It also explores new trends in OECD member countries as well as a selection of countries from the Latin America, MENA and South East Asia regions. Based on the 2015 Survey on Open Government and Citizen Participation in the Policy Cycle, the report identifies future areas of work, including the effort to mobilise and engage all branches and all levels of government in order to move from open governments to open states; how open government principles and practices can help achieve the UN Sustainable Development Goals; the role of the media in creating an enabling environment for open government initiatives to thrive; and the growing importance of subnational institutions in implementing successful open government reforms….(More)”
Towards Scalable Governance: Sensemaking and Cooperation in the Age of Social Media
Radical thinking reveals the secrets of making change happen
Extract from Duncan Green’s new book in The Guardian, where he “explores how change actually occurs – and what that means: Political and economic earthquakes are often sudden and unforeseeable, despite the false pundits who pop up later to claim they predicted them all along – take the fall of the Berlin Wall, the 2008 global financial crisis, or the Arab Spring (and ensuing winter). Even at a personal level, change is largely unpredictable: how many of us can say our lives have gone according to the plans we had as 16-year-olds?
The essential mystery of the future poses a huge challenge to activists. If change is only explicable in the rear-view mirror, how can we accurately envision the future changes we seek, let alone achieve them? How can we be sure our proposals will make things better, and not fall victim to unintended consequences? People employ many concepts to grapple with such questions. I find “systems” and “complexity” two of the most helpful.
A “system” is an interconnected set of elements coherently organised in a way that achieves something. It is more than the sum of its parts: a body is more than an aggregate of individual cells; a university is not merely an agglomeration of individual students, professors, and buildings; an ecosystem is not just a set of individual plants and animals.
A defining property of human systems is complexity: because of the sheer number of relationships and feedback loops among their many elements, they cannot be reduced to simple chains of cause and effect. Think of a crowd on a city street, or a flock of starlings wheeling in the sky at dusk. Even with supercomputers, it is impossible to predict the movement of any given person or starling, but there is order; amazingly few collisions occur even on the most crowded streets.
In complex systems, change results from the interplay of many diverse and apparently unrelated factors. Those of us engaged in seeking change need to identify which elements are important and understand how they interact.
My interest in systems thinking began when collecting stories for my book From Poverty to Power. The light-bulb moment came on a visit to India’s Bundelkhand region, where the poor fishing communities of Tikamgarh had won rights to more than 150 large ponds. In that struggle numerous factors interacted to create change. First, a technological shift triggered changes in behaviour: the introduction of new varieties of fish, which made the ponds more profitable, induced landlords to seize ponds that had been communal. Conflict then built pressure for government action: a group of 12 brave young fishers in one village fought back, prompting a series of violent clashes that radicalized and inspired other communities; women’s groups were organized for the first time, taking control of nine ponds. Enlightened politicians and non-governmental organizations (NGOs) helped pass new laws and the police amazed everyone by enforcing them.
The fishing communities were the real heroes of the story. They tenaciously faced down a violent campaign of intimidation, moved from direct action to advocacy, and ended up winning not only access to the ponds but a series of legal and policy changes that benefited all fishing families.
The neat narrative sequence of cause and effect I’ve just written, of course, is only possible in hindsight. In the thick of the action, no-one could have said why the various actors acted as they did, or what transformed the relative power of each. Tikamgarh’s experience highlights how unpredictable the interaction is between structures (such as state institutions), agency (by communities and individuals), and the broader context (characterized by shifts in technology, environment, demography, or norms).
Unfortunately, the way we commonly think about change projects onto the future the neat narratives we draw from the past. Many of the mental models we use are linear plans – “if A, then B” – with profound consequences in terms of failure, frustration, and missed opportunities. As Mike Tyson memorably said, “Everyone has a plan ’til they get punched in the mouth”….(More)
See also http://how-change-happens.com/
Teaching an Algorithm to Understand Right and Wrong
Greg Satell at Harvard Business Review: “In his Nicomachean Ethics, Aristotle states that it is a fact that “all knowledge and every pursuit aims at some good,” but then continues, “What then do we mean by the good?” That, in essence, encapsulates the ethical dilemma. We all agree that we should be good and just, but it’s much harder to decide what that entails.
Since Aristotle’s time, the questions he raised have been continually discussed and debated. From the works of great philosophers like Kant, Bentham, and Rawls to modern-day cocktail parties and late-night dorm room bull sessions, the issues are endlessly mulled over and argued about, but the debates never reach a satisfying conclusion.
Today, as we enter a “cognitive era” of thinking machines, the problem of what should guide our actions is gaining newfound importance. If we find it so difficult to denote the principles by which a person should act justly and wisely, then how are we to encode them within the artificial intelligences we are creating? It is a question that we need to come up with answers for soon.
Designing a Learning Environment
Every parent worries about what influences their children are exposed to. What TV shows are they watching? What video games are they playing? Are they hanging out with the wrong crowd at school? We try not to overly shelter our kids because we want them to learn about the world, but we don’t want to expose them to too much before they have the maturity to process it.
In artificial intelligence, these influences are called a “machine learning corpus.” For example, if you want to teach an algorithm to recognize cats, you expose it to thousands of pictures of cats and things that are not cats. Eventually, it figures out how to tell the difference between, say, a cat and a dog. Much as with human beings, it is through learning from these experiences that algorithms become useful.
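To make the idea of a learning corpus concrete, here is a minimal sketch of supervised classification: a model is shown labeled examples and learns to separate the two classes. The synthetic feature vectors below stand in for real image features, and the model choice is an illustrative assumption, not any particular system’s pipeline.

```python
# Minimal sketch of learning from a labeled corpus: a classifier sees
# examples of "cat" (1) and "not-cat" (0) and learns to tell them apart.
# Random feature vectors stand in for real image features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each image is summarized by 64 numeric features.
X_cats = rng.normal(loc=0.5, scale=1.0, size=(500, 64))    # label 1
X_other = rng.normal(loc=-0.5, scale=1.0, size=(500, 64))  # label 0
X = np.vstack([X_cats, X_other])
y = np.array([1] * 500 + [0] * 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The corpus shapes everything the model "knows" about cats.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The same mechanics underlie the Tay episode below: the model faithfully reflects whatever corpus it is fed.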
However, the process can go horribly awry, as in the case of Microsoft’s Tay, a Twitter bot that the company unleashed on the microblogging platform. In under a day, Tay went from being friendly and casual (“Humans are super cool”) to downright scary (“Hitler was right and I hate Jews”). It was profoundly disturbing.
Francesca Rossi, an AI researcher at IBM, points out that we often encode principles regarding influences into societal norms, such as what age a child needs to be to watch an R-rated movie or whether they should learn evolution in school. “We need to decide to what extent the legal principles that we use to regulate humans can be used for machines,” she told me.
However, in some cases algorithms can alert us to bias in our society that we might not have been aware of, such as when we Google “grandma” and see only white faces. “There is a great potential for machines to alert us to bias,” Rossi notes. “We need to not only train our algorithms but also be open to the possibility that they can teach us about ourselves.”…
Another issue that we will have to contend with is that we will have to decide not only what ethical principles to encode in artificial intelligences but also how they are coded. As noted above, “Thou shalt not kill” is a strict principle for most of us; for a few, such as a soldier or a Secret Service agent, it is more like a preference that is greatly affected by context….
As pervasive as artificial intelligence is set to become in the near future, the responsibility rests with society as a whole. Put simply, we need to take the standards by which artificial intelligences will operate just as seriously as those that govern how our political systems operate and how our children are educated.
It is a responsibility that we cannot shirk….(More)
Crowd-sourcing pollution control in India
Springwise: “Following orders from the national government to improve the air quality of the New Delhi region by reducing air pollution, the Environment Pollution (Prevention and Control) Authority created the Hawa Badlo app. Designed for citizens to report cases of air pollution, the app sends each complaint to the appropriate official for resolution.
Free to use, the app is available for both iOS and Android. Complaints are geo-tagged, and there are two different versions available – one for citizens and one for government officials. Officials must provide photographic evidence to close a case. The app itself produces weekly reports listing the number and status of complaints, along with any actions taken to resolve the problem. Currently focusing on pollution from construction, unpaved roads and the burning of garbage, the team behind the app plans to expand its use to cover other types of pollution as well.
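The article does not describe the app’s internals; purely as an illustration, a geo-tagged complaint record with the evidence-to-close rule might be modeled as below (all field names are hypothetical, not the Hawa Badlo schema).

```python
# Hypothetical sketch of a geo-tagged complaint and its closure rule,
# mirroring the workflow described above. Field names are assumptions,
# not the Hawa Badlo app's actual schema.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Complaint:
    complaint_id: str
    latitude: float                           # geo-tag captured on filing
    longitude: float
    category: str                             # "construction", "unpaved road", ...
    filed_at: datetime
    assigned_official: Optional[str] = None
    resolution_photo: Optional[str] = None    # URL of the evidence photo
    closed_at: Optional[datetime] = None

    def close(self, official: str, photo_url: str) -> None:
        # Officials must attach photographic evidence before a case closes.
        if not photo_url:
            raise ValueError("photographic evidence is required to close a case")
        self.assigned_official = official
        self.resolution_photo = photo_url
        self.closed_at = datetime.now()
```

The weekly reports mentioned above would then be simple aggregations over such records (counts by status and category).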
From providing free wi-fi when the air is clean enough to mapping air-quality in real-time, air pollution solutions are increasingly involving citizens….(More)”
Open data aims to boost food security prospects
Mark Kinver at BBC News: “Rothamsted Research, a leading agricultural research institution, is attempting to make data from long-term experiments available to all.
In partnership with a data consultancy, it is developing a method to make complex results accessible and usable.
The institution is a member of the Godan Initiative that aims to make data available to the scientific community.
In September, Godan called on the public to sign its global petition to open agricultural research data.
“The continuing challenge we face is that the raw data alone is not sufficient enough on its own for people to make sense of it,” said Chris Rawlings, head of computational and systems biology at Rothamsted Research.
“This is because the long-term experiments are very complex, and they are looking at agriculture and agricultural ecosystems, so you need to know a lot about what the intentions of the studies are, how they are being used, and the changes that have taken place over time.”
However, he added: “Even with this level of complexity, we do see a significant number of users contacting us or developing links with us.”
One size fits all
The ability to provide open data to all is one of the research organisation’s national capabilities, and forms a defining principle of its web portal to the experiments carried out at its North Wyke Farm Platform in North Devon.
Rothamsted worked in partnership with Tessella, a data consultancy, on the data collected from the experiments, which focused on livestock pastures.
The information being collected, as often as every 15 minutes, includes water run-off levels, soil moisture, meteorological data, and soil nutrients, and this is expected to run for decades.
“The data is quite varied and quite diverse, and [Rothamsted] wants to make this data available to the wider research community,” explained Tessella’s Andrew Bowen.
“What Rothamsted needed was a way to store it and a way to present it in a portal in which people could see what they had to offer.”
He told BBC News that there were a number of challenges that needed to be tackled.
One was the management of the data, and the team from Tessella adopted an “agile scrum” approach.
“Basically, what you do is draw up a list of the requirements, of what you need, and we break the project down into short iterations, starting with the highest priority,” he said.
“This means that you are able to take a more exploratory approach to the process of developing software. This is very well suited to the research environment.”…(More)”
Understanding the four types of AI, from reactive robots to self-aware beings
The Conversation: “…We need to overcome the boundaries that define the four different types of artificial intelligence, the barriers that separate machines from us – and us from them.
Type I AI: Reactive machines
The most basic types of AI systems are purely reactive, and have the ability neither to form memories nor to use past experiences to inform current decisions. Deep Blue, IBM’s chess-playing supercomputer, which beat international grandmaster Garry Kasparov in the late 1990s, is the perfect example of this type of machine.
Deep Blue can identify the pieces on a chess board and know how each moves. It can make predictions about what moves might be next for it and its opponent. And it can choose the best moves from among the possibilities.
But it doesn’t have any concept of the past, nor any memory of what has happened before. Apart from a rarely used chess-specific rule against repeating the same move three times, Deep Blue ignores everything before the present moment. All it does is look at the pieces on the chess board as it stands right now, and choose from possible next moves.
This type of intelligence involves the computer perceiving the world directly and acting on what it sees. It doesn’t rely on an internal concept of the world. In a seminal paper, AI researcher Rodney Brooks argued that we should only build machines like this. His main reason was that people are not very good at programming accurate simulated worlds for computers to use, what is called in AI scholarship a “representation” of the world….
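A purely reactive player can be sketched as a function of the current state alone: score each legal move from the present position, pick the best, and remember nothing. The sketch below is a generic one-ply illustration of that idea, not Deep Blue’s actual algorithm, which searched many moves ahead with a hand-tuned evaluation function.

```python
# Sketch of a purely reactive agent: its choice depends only on the
# current state; no history is consulted or stored.
from typing import Callable, Iterable, TypeVar

State = TypeVar("State")
Move = TypeVar("Move")

def reactive_choice(
    state: State,
    legal_moves: Callable[[State], Iterable[Move]],
    apply_move: Callable[[State, Move], State],
    evaluate: Callable[[State], float],
) -> Move:
    # Score every successor of the *current* state and take the best.
    return max(legal_moves(state), key=lambda m: evaluate(apply_move(state, m)))

# Toy usage on a number game: from state 10, prefer states near 13.
best = reactive_choice(
    state=10,
    legal_moves=lambda s: [-1, +1, +2],
    apply_move=lambda s, m: s + m,
    evaluate=lambda s: -abs(s - 13),
)
print(best)  # +2
```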
Type II AI: Limited memory
This Type II class contains machines that can look into the past. Self-driving cars do some of this already. For example, they observe other cars’ speed and direction. That can’t be done in just one moment, but rather requires identifying specific objects and monitoring them over time.
These observations are added to the self-driving cars’ preprogrammed representations of the world, which also include lane markings, traffic lights and other important elements, like curves in the road. They’re included when the car decides when to change lanes, to avoid cutting off another driver or being hit by a nearby car.
But these simple pieces of information about the past are only transient. They aren’t saved as part of the car’s library of experience it can learn from, the way human drivers compile experience over years behind the wheel….
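That transience can be sketched as a short sliding window of observations: enough recent readings to estimate another car’s speed and heading, with older readings discarded rather than accumulated into lasting experience. This is a toy illustration, not any vendor’s actual tracking code.

```python
# Toy sketch of "limited memory": a small sliding window of observations
# supports estimating another car's velocity, but old readings fall out
# and nothing is retained as long-term experience.
from collections import deque

class TransientTracker:
    def __init__(self, window: int = 5):
        self.window = deque(maxlen=window)  # recent (t, x, y) observations

    def observe(self, t: float, x: float, y: float) -> None:
        self.window.append((t, x, y))       # oldest reading silently dropped

    def velocity(self):
        if len(self.window) < 2:
            return None
        (t0, x0, y0), (t1, x1, y1) = self.window[0], self.window[-1]
        dt = t1 - t0
        return (x1 - x0) / dt, (y1 - y0) / dt

tracker = TransientTracker()
for t in range(6):
    tracker.observe(t, x=2.0 * t, y=0.5 * t)  # car moving at (2.0, 0.5) m/s
print(tracker.velocity())  # approximately (2.0, 0.5)
```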
Type III AI: Theory of mind
We might stop here, and call this point the important divide between the machines we have and the machines we will build in the future. However, it is better to be more specific and discuss the types of representations machines need to form, and what they need to be about.
Machines in the next, more advanced, class not only form representations about the world, but also about other agents or entities in the world. In psychology, this is called “theory of mind” – the understanding that people, creatures and objects in the world can have thoughts and emotions that affect their own behavior.
This is crucial to how we humans formed societies, because theory of mind allowed us to have social interactions. Without understanding each other’s motives and intentions, and without taking into account what somebody else knows either about me or the environment, working together is at best difficult, at worst impossible.
If AI systems are indeed ever to walk among us, they’ll have to be able to understand that each of us has thoughts and feelings and expectations for how we’ll be treated. And they’ll have to adjust their behavior accordingly.
Type IV AI: Self-awareness
The final step of AI development is to build systems that can form representations about themselves. Ultimately, we AI researchers will have to not only understand consciousness, but build machines that have it….
While we are probably far from creating machines that are self-aware, we should focus our efforts toward understanding memory, learning and the ability to base decisions on past experiences….(More)”
Beyond nudging: it’s time for a second generation of behaviourally-informed social policy
Katherine Curchin at LSE Blog: “…behavioural scientists are calling for a second generation of behaviourally-informed policy. In some policy areas, nudges simply aren’t enough. Behavioural research shows stronger action is required to attack the underlying cause of problems. For example, many scholars have argued that behavioural insights provide a rationale for regulation to protect consumers from manipulation by private sector companies. But what might a second generation of behaviourally-informed social policy look like?
Behavioural insights could provide a justification to change the trajectory of income support policy. Since the 1990s policy attention has focused on the moral character of benefits recipients. Inspired by Lawrence Mead’s paternalist philosophy, governments have tried to increase the resolve of the unemployed to work their way out of poverty. More and more behavioural requirements have been attached to benefits to motivate people to fulfil their obligations to society.
But behavioural research now suggests that these harsh policies are misguided. Behavioural science supports the idea that people often make poor decisions and do things which are not in their long term interests. But the weakness of individuals’ moral constitution isn’t so much the problem as the unequal distribution of opportunities in society. There are circumstances in which humans are unlikely to flourish no matter how motivated they are.
Normal human psychological limitations – our limited cognitive capacity, limited attention and limited self-control – interact with environment to produce the behaviour that advocates of harsh welfare regimes attribute to permissive welfare. In their book Scarcity, Sendhil Mullainathan and Eldar Shafir argue that the experience of deprivation creates a mindset that makes it harder to process information, pay attention, make good decisions, plan for the future, and resist temptations.
Importantly, behavioural scientists have demonstrated that this mindset can be temporarily created in the laboratory by placing subjects in artificial situations which induce the feeling of not having enough. As a consequence, experimental subjects from middle-class backgrounds suddenly display the short-term thinking and irrational decision making often attributed to a culture of poverty.
Tying inadequate income support to a list of behavioural conditions will most punish those who are suffering most. Empirical studies of welfare conditionality have found that benefit claimants often do not comprehend the complicated rules that apply to them. Some are being punished for lack of understanding rather than deliberate non-compliance.
Behavioural insights can be used to mount a case for a more generous, less punitive approach to income support. The starting point is to acknowledge that some of Mead’s psychological assumptions have turned out to be wrong. The nature of the cognitive machinery humans share imposes limits on how self-disciplined and conscientious we can reasonably expect people living in adverse circumstances to be. We have placed too much emphasis on personal responsibility in recent decades. Why should people internalize the consequences of their behaviour when this behaviour is to a large extent the product of their environment?…(More)”
The Risk to Civil Liberties of Fighting Crime With Big Data
In the New York Times: “…Sharing data, both among the parts of a big police department and between the police and the private sector, “is a force multiplier,” he said.
Companies working with the military and intelligence agencies have long practiced these kinds of techniques, which the companies are bringing to domestic policing, in much the way surplus military gear has beefed up American SWAT teams.
Palantir first built up its business by offering products like maps of social networks of extremist bombers and terrorist money launderers, and figuring out efficient driving routes to avoid improvised explosive devices.
Palantir used similar data-sifting techniques in New Orleans to spot individuals most associated with murders. Law enforcement departments around Salt Lake City used Palantir to allow common access to 40,000 arrest photos, 520,000 case reports and information like highway and airport data — building human maps of suspected criminal networks.
People in the predictive business sometimes compare what they do to controlling the other side’s “OODA loop,” a term first developed by a fighter pilot and military strategist named John Boyd.
OODA stands for “observe, orient, decide, act” and is a means of managing information in battle.
“Whether it’s war or crime, you have to get inside the other side’s decision cycle and control their environment,” said Robert Stasio, a project manager for cyberanalysis at IBM, and a former United States government intelligence official. “Criminals can learn to anticipate what you’re going to do and shift where they’re working, employ more lookouts.”
IBM sells tools that also enable police to become less predictable, for example, by taking different routes into an area identified as a crime hotspot. It has also conducted studies that show changing tastes among online criminals — for example, a move from hacking retailers’ computers to stealing health care data, which can be used to file for federal tax refunds.
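The article names the goal (becoming less predictable) but not the method. One simple way to avoid settling into a learnable pattern is weighted random selection among acceptable routes rather than always taking the single best one; the sketch below is hypothetical, not IBM’s product.

```python
# Hypothetical sketch of "less predictable" routing: rather than always
# taking the single cheapest route into a hotspot (easy for lookouts to
# learn), sample among good routes, favoring cheaper ones.
import random

routes = {            # route name -> travel cost (assumed values)
    "main_street": 4.0,
    "riverside": 5.0,
    "back_alley": 6.5,
}

def pick_route(costs: dict) -> str:
    # Weight each route by the inverse of its cost, then sample:
    # short routes are favored, but no route becomes a fixed habit.
    names = list(costs)
    weights = [1.0 / costs[n] for n in names]
    return random.choices(names, weights=weights, k=1)[0]

print(pick_route(routes))
```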
But there are worries about what military-type data analysis means for civil liberties, even among the companies that get rich on it.
“It definitely presents challenges to the less sophisticated type of criminal, but it’s creating a lot of what is called ‘Big Brother’s little helpers,’” Mr. Bowman said. For now, he added, much of the data abundance problem is that “most police aren’t very good at this.”…(More)”
How to ensure smart cities benefit everyone
By 2030, 60 percent of the world’s population is expected to live in mega-cities. How all those people live, and what their lives are like, will depend on important choices leaders make today and in the coming years.
Technology has the power to help people live in communities that are more responsive to their needs and that can actually improve their lives. For example, Beijing, notorious for air pollution, is testing a 23-foot-tall air purifier that vacuums up smog, filters the bad particles and releases clear air.
This isn’t a vision of life like on “The Jetsons.” It’s real urban communities responding in real-time to changing weather, times of day and citizen needs. These efforts can span entire communities. They can range from monitoring traffic to keep cars moving efficiently to measuring air quality to warn residents (or turn on massive air purifiers) when pollution levels climb.
Using data and electronic sensors in this way is often referred to as building “smart cities,” which are the subject of a major global push to improve how cities function. In part a response to incoherent infrastructure design and urban planning of the past, smart cities promise real-time monitoring, analysis and improvement of city decision-making. The results, proponents say, will improve efficiency, environmental sustainability and citizen engagement.
Smart city projects are big investments that are supposed to drive social transformation. Decisions made early in the process determine what exactly will change. But most research and planning regarding smart cities is driven by the technology, rather than the needs of the citizens. Little attention is given to the social, policy and organizational changes that will be required to ensure smart cities are not just technologically savvy but intelligently adaptive to their residents’ needs. Design will make the difference between smart city projects that deliver on their promise and projects that reinforce, or even widen, the existing gaps in the unequal ways cities serve their residents.
City benefits from efficiency
A key feature of smart cities is that they create efficiency. Well-designed technology tools can benefit government agencies, the environment and residents. Smart cities can improve the efficiency of city services by eliminating redundancies, finding ways to save money and streamlining workers’ responsibilities. The results can provide higher-quality services at lower cost….
Environmental effects
Another way to save money involves real-time monitoring of energy use, which can also identify opportunities for environmental improvement.
The city of Chicago has begun implementing an “Array of Things” initiative by installing boxes on municipal light poles with sensors and cameras that can capture air quality, sound levels, temperature, water levels on streets and gutters, and traffic.
The data collected are expected to serve as a sort of “fitness tracker for the city,” identifying ways to save energy, address urban flooding and improve living conditions.
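As a toy illustration of the “fitness tracker” idea, readings from such sensor boxes can be reduced to rolling statistics that flag when conditions drift well outside the recent norm. The sensor name and the two-standard-deviation rule below are assumptions for illustration, not the Array of Things’ actual analytics.

```python
# Toy sketch: flag sensor readings that drift well above the recent
# average -- a crude "fitness" signal for a city block.
from collections import defaultdict, deque
from statistics import mean, stdev

WINDOW = 96  # e.g. 24 hours of 15-minute readings (assumed cadence)

history = defaultdict(lambda: deque(maxlen=WINDOW))

def ingest(sensor: str, value: float):
    """Store a reading; return an alert string if it is anomalously high."""
    readings = history[sensor]
    alert = None
    if len(readings) >= 8:  # need a baseline before judging
        mu, sigma = mean(readings), stdev(readings)
        if sigma > 0 and value > mu + 2 * sigma:
            alert = f"{sensor}: {value:.1f} far above recent average {mu:.1f}"
    readings.append(value)
    return alert

for v in [40, 42, 41, 39, 40, 43, 41, 40, 42, 95]:
    msg = ingest("sound_level_db", v)
    if msg:
        print(msg)  # fires on the 95 dB spike
```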
Helping residents
Perhaps the largest potential benefit from smart cities will come from enhancing residents’ quality of life. The opportunities cover a broad range of issues, including housing and transportation, happiness and optimism, educational services, environmental conditions and community relationships.
Efforts along this line can include tracking and mapping residents’ health, using data to fight neighborhood blight, identifying instances of discrimination and deploying autonomous vehicles to increase residents’ safety and mobility….(More)”.