Smart cities need thick data, not big data


Adrian Smith at The Guardian: “…The Smart City is an alluring prospect for many city leaders. Even if you haven’t heard of it, you may have already joined in by looking up bus movements on your phone, accessing Council services online or learning about air contamination levels. By inserting sensors across city infrastructures and creating new data sources – including citizens via their mobile devices – Smart City managers can apply Big Data analysis to monitor and anticipate urban phenomena in new ways, and, so the argument goes, efficiently manage urban activity for the benefit of ‘smart citizens’.

Barcelona has been a pioneering Smart City. The Council’s business partners have been installing sensors and opening data platforms for years. Not everyone is comfortable with this technocratic turn. After Ada Colau was elected Mayor on a mandate of democratising the city and putting citizens centre-stage, digital policy has sought to go ‘beyond the Smart City’. Chief Technology Officer Francesca Bria is opening digital platforms to greater citizen participation and oversight. Worried that the city’s knowledge was being ceded to tech vendors, the Council now promotes technological sovereignty.

On the surface, the noise project in Plaça del Sol is an example of such sovereignty. It even features in Council presentations. Look more deeply, however, and it becomes apparent that neighbourhood activists are really appropriating new technologies into the old-fashioned politics of community development….

What made Plaça del Sol stand out can be traced to a group of technology activists who got in touch with residents early in 2017. The activists were seeking participants in their project called Making Sense, which sought to resurrect a struggling ‘Smart Citizen Kit’ for environmental monitoring. The idea was to provide residents with the tools to measure noise levels, compare them with officially permissible levels, and reduce noise in the square. More than 40 neighbours signed up and installed 25 sensors on balconies and inside apartments.
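
To make the kit’s core function concrete, here is a minimal sketch (not the Smart Citizen Kit’s actual code, and with a hypothetical noise limit; real limits are set by municipal ordinance and vary by zone and time of day) of the comparison the residents were running:

```python
# A minimal sketch, assuming hypothetical readings and a hypothetical
# night-time limit; not the Smart Citizen Kit's actual code.
NIGHT_LIMIT_DB = 45.0  # hypothetical permissible night-time level

def hours_over_limit(hourly_readings_db, limit_db=NIGHT_LIMIT_DB):
    """Return the (hour, dB) pairs whose reading exceeds the limit."""
    return [(hour, db) for hour, db in hourly_readings_db if db > limit_db]

# Hypothetical readings from one balcony sensor over a night.
readings = [("23:00", 52.1), ("01:00", 47.3), ("04:00", 41.8)]
for hour, db in hours_over_limit(readings):
    print(f"{hour}: {db:.1f} dB exceeds the {NIGHT_LIMIT_DB:.0f} dB limit")
```

(One caveat for anyone reproducing this: decibels are logarithmic, so readings averaged over longer periods must be averaged in the energy domain, not arithmetically.)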

The neighbours had what project coordinator Mara Balestrini from Ideas for Change calls ‘a matter of concern’. The earlier Smart Citizen Kit had begun as a technological solution looking for a problem: a crowd-funded gadget for measuring pollution, whose data users could upload to a web-platform for comparison with information from other users. Early adopters found the technology trickier to install than developers had presumed. Even successful users stopped monitoring because there was little community purpose. A new approach was needed. Noise in Plaça del Sol provided a problem for this technology fix….

Anthropologist Clifford Geertz argued many years ago that situations can only be made meaningful through ‘thick description’. Applied to the Smart City, this means data cannot really be explained and used without understanding the contexts in which it arises and gets used. Data can only mobilise people and change things when it becomes thick with social meaning….(More)”

Online gamers control trash-collecting water robot


Springwise: “Urban Rivers is a Chicago-based charity focused on cleaning up the city’s rivers and re-wilding bankside habitats. One of its most visible pieces of work is a floating habitat installed in the middle of the river that runs through the city. An immediate problem that arose after installation was the accumulation of trash. At first, the charity sent someone out on a kayak every other day to clean the habitat. Yet in less than a day, huge amounts of garbage would again be choking the space. The charity’s solution was to create a Trash Task Force. The outcome of the Task Force’s work is the TrashBot, a remote-controlled garbage-collecting robot. The TrashBot allows gamers all over the world to do their bit in cleaning up Chicago’s river.

Anyone interested in playing the cleaning game can sign up via the Urban Rivers website. Future development of the bot will likely focus on wildlife monitoring; the end goal of the game is that no one will want to play it because there is no more garbage to collect.

From crowdsourced ocean data gathered by the fins of surfers’ boards to a solar-powered autonomous drone that gathers waste from harbor waters, the health of the world’s waterways is being improved in a number of ways. The surfboard fins use sensors to monitor sea salinity, acidity levels and wave motion. Those are all important coastal ecosystem factors that could be affected by climate change. The water drones are intelligent and use on-board cameras and sensors to learn about their environment and avoid other craft as they collect garbage from rivers, canals and harbors….(More)”.

Obfuscating with transparency


“These approaches…limit the impact of valuable information in developing policies…”

Under the new policy, studies that do not fully meet transparency criteria would be excluded from use in EPA policy development. This proposal follows unsuccessful attempts to enact the Honest and Open New EPA Science Treatment (HONEST) Act and its predecessor, the Secret Science Reform Act. These approaches undervalue many scientific publications and limit the impact of valuable information in developing policies in the areas that the EPA regulates….In developing effective policies, earnest evaluations of facts and fair-minded assessments of the associated uncertainties are foundational. Policy discussions require an assessment of the likelihood that a particular observation is true and examinations of the short- and long-term consequences of potential actions or inactions, including a wide range of different sorts of costs. People trained in making these judgments, with access to as much relevant information as possible, are crucial to this process. Of course, policy development requires considerations other than those related to science. Such discussions should follow a clear assessment made with access to all of the available evidence. The scientific enterprise should stand up against efforts that distort initiatives aimed at improving scientific practice merely to pursue other agendas…(More)”.

What if a nuke goes off in Washington, D.C.? Simulations of artificial societies help planners cope with the unthinkable


Mitchell Waldrop at Science: “…The point of such models is to avoid describing human affairs from the top down with fixed equations, as is traditionally done in such fields as economics and epidemiology. Instead, outcomes such as a financial crash or the spread of a disease emerge from the bottom up, through the interactions of many individuals, leading to a real-world richness and spontaneity that is otherwise hard to simulate.
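
To make “bottom up” concrete, here is a toy agent-based model (not NPS1, and with purely hypothetical parameters) in which an outbreak curve emerges from simple per-agent rules rather than from a global equation:

```python
# A toy agent-based epidemic, a sketch of the bottom-up idea: no equation
# for the aggregate curve is written down; it emerges from agent rules.
# Population size, contact rate and probabilities are all hypothetical.
import random

random.seed(42)
N_AGENTS, STEPS = 1000, 60
CONTACTS, P_TRANSMIT, P_RECOVER = 5, 0.04, 0.1

# Each agent is just a state: 'S' susceptible, 'I' infectious, 'R' recovered.
agents = ["I" if i < 5 else "S" for i in range(N_AGENTS)]

for step in range(STEPS):
    infectious = [i for i, s in enumerate(agents) if s == "I"]
    for i in infectious:
        # Each infectious agent meets a few random others this step...
        for j in random.sample(range(N_AGENTS), CONTACTS):
            if agents[j] == "S" and random.random() < P_TRANSMIT:
                agents[j] = "I"
        # ...and may recover.
        if random.random() < P_RECOVER:
            agents[i] = "R"
    if step % 10 == 0:
        print(f"step {step}: {agents.count('I')} infectious")
```

Models like NPS1 attach far richer state to each agent (location, household, daily schedule) and far more realistic interaction rules, but the principle is the same.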

That kind of detail is exactly what emergency managers need, says Christopher Barrett, a computer scientist who directs the Biocomplexity Institute at Virginia Polytechnic Institute and State University (Virginia Tech) in Blacksburg, which developed the NPS1 model for the government. NPS1 can warn managers, for example, that a power failure at point X might well lead to a surprise traffic jam at point Y. If they decide to deploy mobile cell towers in the early hours of the crisis to restore communications, NPS1 can tell them whether more civilians will take to the roads, or fewer. “Agent-based models are how you get all these pieces sorted out and look at the interactions,” Barrett says.

The downside is that models like NPS1 tend to be big—each of the model’s initial runs kept a 500-microprocessor computing cluster busy for a day and a half—forcing the agents to be relatively simple-minded. “There’s a fundamental trade-off between the complexity of individual agents and the size of the simulation,” says Jonathan Pfautz, who funds agent-based modeling of social behavior as a program manager at the Defense Advanced Research Projects Agency in Arlington, Virginia.

But computers keep getting bigger and more powerful, as do the data sets used to populate and calibrate the models. In fields as diverse as economics, transportation, public health, and urban planning, more and more decision-makers are taking agent-based models seriously. “They’re the most flexible and detailed models out there,” says Ira Longini, who models epidemics at the University of Florida in Gainesville, “which makes them by far the most effective in understanding and directing policy.”

The roots of agent-based modeling go back at least to the 1940s, when computer pioneers such as Alan Turing experimented with locally interacting bits of software to model complex behavior in physics and biology. But the current wave of development didn’t get underway until the mid-1990s….(More)”.

Modernizing Crime Statistics: New Systems for Measuring Crime


(Second) Report by the National Academies of Sciences, Engineering, and Medicine: “To derive statistics about crime – to estimate its levels and trends, assess its costs to and impacts on society, and inform law enforcement approaches to prevent it – a conceptual framework for defining and thinking about crime is virtually a prerequisite. Developing and maintaining such a framework is no easy task, because the mechanics of crime are ever evolving, tied to shifts and developments in technology, society, and legislation.

Interest in understanding crime surged in the 1920s, which proved to be a pivotal decade for the collection of nationwide crime statistics. Now established as a permanent agency, the Census Bureau commissioned the drafting of a manual for preparing crime statistics—intended for use by the police, corrections departments, and courts alike. The new manual sought to solve a perennial problem by suggesting a standard taxonomy of crime. Shortly after the Census Bureau issued its manual, the International Association of Chiefs of Police in convention adopted a resolution to create a Committee on Uniform Crime Records—to begin the process of describing what a national system of data on crimes known to the police might look like.

The first report provided a comprehensive reassessment of what is meant by crime in U.S. crime statistics and recommended a new classification of crime to organize measurement efforts. This second report examines methodological and implementation issues and presents a conceptual blueprint for modernizing crime statistics….(More)”.

UK can lead the way on ethical AI, says Lords Committee


Lords Select Committee: “The UK is in a strong position to be a world leader in the development of artificial intelligence (AI). This position, coupled with the wider adoption of AI, could deliver a major boost to the economy for years to come. The best way to do this is to put ethics at the centre of AI’s development and use, concludes a report by the House of Lords Select Committee on Artificial Intelligence, AI in the UK: ready, willing and able?, published today….

One of the recommendations of the report is for a cross-sector AI Code to be established, which can be adopted nationally, and internationally. The Committee’s suggested five principles for such a code are:

  1. Artificial intelligence should be developed for the common good and benefit of humanity.
  2. Artificial intelligence should operate on principles of intelligibility and fairness.
  3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
  4. All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

Other conclusions from the report include:

  • Many jobs will be enhanced by AI, many will disappear and many new, as-yet-unknown jobs will be created. Significant Government investment in skills and training will be necessary to mitigate the negative effects of AI. Retraining will become a lifelong necessity.
  • Individuals need to be able to have greater personal control over their data, and the way in which it is used. The ways in which data is gathered and accessed need to change, so that everyone can have fair and reasonable access to data, while citizens and consumers can protect their privacy and personal agency. This means using established concepts, such as open data, ethics advisory boards and data protection legislation, and developing new frameworks and mechanisms, such as data portability and data trusts.
  • The monopolisation of data by big technology companies must be avoided, and greater competition is required. The Government, with the Competition and Markets Authority, must review the use of data by large technology companies operating in the UK.
  • The prejudices of the past must not be unwittingly built into automated systems. The Government should incentivise the development of new approaches to the auditing of datasets used in AI, and also encourage greater diversity in the training and recruitment of AI specialists.
  • Transparency in AI is needed. The industry, through the AI Council, should establish a voluntary mechanism to inform consumers when AI is being used to make significant or sensitive decisions.
  • At earlier stages of education, children need to be adequately prepared for working with, and using, AI. The ethical design and use of AI should become an integral part of the curriculum.
  • The Government should be bold and use targeted procurement to provide a boost to AI development and deployment. It could encourage the development of solutions to public policy challenges through speculative investment. There have been impressive advances in AI for healthcare, which the NHS should capitalise on.
  • It is not currently clear whether existing liability law will be sufficient when AI systems malfunction or cause harm to users, and clarity in this area is needed. The Committee recommend that the Law Commission investigate this issue.
  • The Government needs to draw up a national policy framework, in lockstep with the Industrial Strategy, to ensure the coordination and successful delivery of AI policy in the UK….(More)”.

From Texts to Tweets to Satellites: The Power of Big Data to Fill Gender Data Gaps


At the UN Foundation Blog: “Twitter posts, credit card purchases, phone calls, and satellites are all part of our day-to-day digital landscape.

Detailed data, known broadly as “big data” because of the massive amounts of passively collected and high-frequency information that such interactions generate, are produced every time we use one of these technologies. These digital traces have great potential and have already developed a track record for application in global development and humanitarian response.

Data2X has focused particularly on what big data can tell us about the lives of women and girls in resource-poor settings. Our research, released today in a new report, Big Data and the Well-Being of Women and Girls, demonstrates how four big data sources can be harnessed to fill gender data gaps and inform policy aimed at mitigating global gender inequality. Big data can complement traditional surveys and other data sources, offering a glimpse into dimensions of girls’ and women’s lives that have otherwise been overlooked and providing a level of precision and timeliness that policymakers need to make actionable decisions.

Here are three findings from our report that underscore the power and potential offered by big data to fill gender data gaps:

  1. Social media data can improve understanding of the mental health of girls and women.

Mental health conditions, from anxiety to depression, are thought to be significant contributors to the global burden of disease, particularly for young women, though precise data on mental health is sparse in most countries. However, research by the Georgia Institute of Technology, commissioned by Data2X, finds that social media provides an accurate barometer of mental health status….

  2. Cell phone and credit card records can illustrate women’s economic and social patterns – and track impacts of shocks in the economy.

Our spending priorities and social habits often indicate economic status, and these activities can also expose economic disparities between women and men.

By compiling cell phone and credit card records, our research partners at MIT traced patterns of women’s expenditures, spending priorities, and physical mobility. The research found that women have less mobility diversity than men, live further away from city centers, and report less total expenditure per capita…..
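
The excerpt does not spell out the metric, but a common way to quantify “mobility diversity” is the entropy of a person’s visits across locations. A sketch under that assumption, using made-up cell-tower records (illustrative only, not necessarily the MIT team’s method):

```python
# A hedged sketch: Shannon entropy (in bits) of a user's visit
# distribution over cell towers as a "mobility diversity" score.
# Higher entropy = visits spread more evenly over more places.
# The tower IDs and visit logs below are hypothetical.
import math
from collections import Counter

def mobility_diversity(visited_towers):
    """Shannon entropy of the visit distribution over towers."""
    counts = Counter(visited_towers)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

user_a = ["t1", "t1", "t2", "t3", "t4", "t2"]  # visits spread widely
user_b = ["t1", "t1", "t1", "t1", "t2", "t1"]  # mostly one location
print(round(mobility_diversity(user_a), 2))  # 1.92
print(round(mobility_diversity(user_b), 2))  # 0.65
```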

  3. Satellite imagery can map rivers and roads, but it can also measure gender inequality.

Satellite imagery has the power to capture high-resolution, real-time data on everything from natural landscape features, like vegetation and river flows, to human infrastructure, like roads and schools. Research by our partners at the Flowminder Foundation finds that it is also able to measure gender inequality….(More)”.

Participatory Budgeting: Step to Building Active Citizenship or a Distraction from Democratic Backsliding?


David Sasaki: “Is there any there there? That’s what we wanted to uncover beneath the hype and skepticism surrounding participatory budgeting, an innovation in democracy that began in Brazil in 1989 and has quickly spread to nearly every corner of the world like a viral hashtag….We ended up selecting two groups of consultants for two phases of work. The first phase was led by three academic researchers — Brian Wampler, Mike Touchton and Stephanie McNulty — to synthesize what we know broadly about PB’s impact and where there are gaps in the evidence. mySociety led the second phase, which was originally intended to identify the opportunities and challenges faced by civil society organizations and public officials that implement participatory budgeting. However, a number of unforeseen circumstances, including contested elections in Kenya and a major earthquake in Mexico, shifted mySociety’s focus to take a global, field-wide perspective.

In the end, we were left with two reports that were similar in scope and differed in perspective. Together they make for compelling reading. And while they come from different perspectives, they settle on similar recommendations. I’ll focus on just three: 1) the need for better research, 2) the lack of global coordination, and 3) the emerging opportunity to link natural resource governance with participatory budgeting….

As we consider some preliminary opportunities to advance participatory budgeting, we are clear-eyed about the risks and challenges. In the face of democratic backsliding and the concern that liberal democracy may not survive the 21st century, are these efforts to deepen local democracy merely a distraction from a larger threat, or is this a way to build active citizenship? Also, implementing PB is expensive — both in terms of money and time; is it worth the investment? Is PB just the latest checkbox for governments that want a reputation for supporting citizen participation without investing in the values and process it entails? Just like the proliferation of fake “consultation meetings,” fake PB could merely exacerbate our disappointment with democracy. What should we make of the rise of participatory budgeting in quasi-authoritarian contexts like China and Russia? Is PB a tool for undemocratic central governments to keep local governments in check while giving citizens a simulacrum of democratic participation? Crucially, without intentional efforts to be inclusive like we’ve seen in Boston, PB could merely direct public resources to those neighborhoods with the most outspoken and powerful residents.

On the other hand, we don’t want to dismiss the significant opportunities that come with PB’s rapid global expansion. For example, what happens when social movements lose their momentum between election cycles? Participatory budgeting could create a civic space for social movements to pursue concrete outcomes while engaging with neighbors and public officials. (In China, it has even helped address the urban-rural divide on perspectives toward development policy.) Meanwhile, social media have exacerbated our human tendency to complain, but participatory budgeting requires us to shift our perspective from complaints to engaging with others on solutions. It could even serve as a gateway to deeper forms of democratic participation and increased trust between governments, civil society organizations, and citizens. Perhaps participatory budgeting is the first step we need to rebuild our civic infrastructure and make space for more diverse voices to steer our complex public institutions.

Until we have more research and evidence, however, these possibilities remain speculative….(More)”.

Behavioral Economics: Are Nudges Cost-Effective?


Carla Fried at UCLA Anderson Review: “Behavioral science does not suffer from a lack of academic focus. A Google Scholar search for the term delivers more than three million results.

While there is an abundance of research into how human nature can muck up our decision-making process and the potential for well-placed nudges to help guide us to better outcomes, the field has kept rather mum on a basic question: Are behavioral nudges cost-effective?

That’s an ever more salient question as the art of the nudge is increasingly being woven into public policy initiatives. In 2009, the Obama administration set up a nudge unit within the White House Office of Information and Regulatory Affairs, and a year later the U.K. government launched its own unit. Harvard’s Cass Sunstein, co-author of the book Nudge, headed the U.S. effort. His co-author, the University of Chicago’s Richard Thaler — who won the 2017 Nobel Prize in Economics — helped develop the U.K.’s Behavioural Insights Team. Nudge units are now humming away in other countries, including Germany and Singapore, as well as at the World Bank, various United Nations agencies and the Organisation for Economic Co-operation and Development (OECD).

Given the interest in the potential for behavioral science to improve public policy outcomes, a team of nine experts, including UCLA Anderson’s Shlomo Benartzi, Sunstein and Thaler, set out to explore the cost-effectiveness of behavioral nudges relative to more traditional forms of government interventions.

In addition to conducting their own experiments, the researchers looked at published research that addressed four areas where public policy initiatives aim to move the needle to improve individuals’ choices: saving for retirement, applying to college, energy conservation and flu vaccinations.

For each topic, they culled studies that focused on both nudge approaches and more traditional mandates such as tax breaks, education and financial incentives, and calculated cost-benefit estimates for both types of studies. Research used in this study was published between 2000 and 2015. All cost estimates were inflation-adjusted…
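
The excerpt does not reproduce the authors’ calculations, but the comparison rests on a simple ratio: outcome gained per dollar of implementation cost. A sketch with hypothetical figures (not the paper’s estimates):

```python
# A sketch of a cost-effectiveness comparison; all figures are
# hypothetical, not estimates from the study itself.
def effect_per_dollar(total_effect, total_cost):
    """Outcome units produced (e.g., extra dollars saved for
    retirement) per dollar of implementation cost."""
    return total_effect / total_cost

# Hypothetical retirement-savings interventions for one cohort:
interventions = {
    "auto-enrolment nudge": {"effect": 1_000_000, "cost": 10_000},
    "tax incentive":        {"effect": 2_000_000, "cost": 500_000},
}
for name, x in interventions.items():
    ratio = effect_per_dollar(x["effect"], x["cost"])
    print(f"{name}: ${ratio:.0f} of additional savings per $1 spent")
```

On these made-up inputs the nudge returns $100 per dollar spent against $4 for the tax incentive, which echoes the kind of return-on-investment comparison the study draws.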

The study itself should serve as a nudge for governments to consider adding nudging to their policy toolkits, as this approach consistently delivered a high return on investment, relative to traditional mandates and policies….(More)”.

Making sense of evidence: A guide to using evidence in policy


Handbook by the Government of New Zealand: “…helps you take a structured approach to using evidence at every stage of the policy and programme development cycle. Whether you work for central or local government, or the community and voluntary sector, you’ll find advice to help you:

  • understand different types and sources of evidence
  • know what you can learn from evidence
  • appraise evidence and rate its quality
  • decide how to select and use evidence to the best effect
  • take into account different cultural values and knowledge systems
  • be transparent about how you’ve considered evidence in your policy development work…(More)”

(See also the Summary; this handbook is a companion to Making sense of evaluation: A handbook for everyone.)