Building Better Cities with Civic Technology


Mapping by Kate Gasporro: “…The field of civic technology is relatively new, and there are limited strategies for measuring the effectiveness of these tools. Scholars and practitioners are eager to communicate benefits, including improved efficiency and transparency, but platforms and cities are having difficulty measuring the impacts of civic technology on infrastructure delivery. Even though civic technology platforms write case studies and provide anecdotal information to market their tools, this information does not communicate the challenges and failures that local governments face when implementing these new technologies. At the same time, the nascency of such tools means local governments are still trying to understand how to leverage and protect the enormous amounts of data that civic technology tools acquire.

By mapping the landscape of civic technology, we can see more clearly how eParticipation is being used to address public service challenges, including infrastructure delivery. Although many scholars and practitioners have created independent categories for eParticipation, these categorization frameworks follow a similar pattern. At one end of the spectrum, eParticipation efforts provide public service information and relevant updates to citizens, or allow citizens to contact their officials, in a unidirectional flow of information. At the other end, eParticipation efforts allow for deliberative democracy, where citizens share decision-making with local government officials. Of the dozen categorization frameworks we found, we selected the most comprehensive one accepted by practitioners. This framework draws from public participation practices and identifies five categories:

  • eInforming: One-way communication providing online information to citizens (in the form of a website) or to government (via ePetitions)
  • eConsulting: Limited two-way communication where citizens can voice their opinions and provide feedback
  • eInvolving: Two-way communication where citizens go through an online process to capture public concerns
  • eCollaborating: Enhanced two-way communication that allows citizens to develop alternative solutions and identify the preferred solution, but decision making remains the government’s responsibility
  • eEmpowerment: Advanced two-way communication that allows citizens to influence and make decisions as co-producers of policies…

After surveying the civic technology space, we found 24 tools that use eParticipation for infrastructure delivery. We map these technologies according to their intended use phase in infrastructure delivery and their type of eParticipation. The horizontal axis divides the space into the different infrastructure delivery phases and the vertical axis shows the five eParticipation categories. Together, these axes show how civic technology is attempting to include citizens throughout infrastructure delivery. The majority of the civic technologies available operate as eInforming and eConsulting tools, allowing citizens to provide information to local governments about infrastructure issues. This information is then channeled into the project selection and prioritization process that occurs during the planning phase. A few technologies span multiple infrastructure phases because of their ability to combine several eParticipation functions to address each phase. Based on this cursory map, we see that there are parts of the infrastructure delivery process served by only a few civic technologies. This is often because there are fewer opportunities to influence decision making during the later phases….
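
A minimal sketch of how such a two-axis map could be represented and queried in code, assuming an entirely illustrative set of tool names, phases, and category assignments (none of which come from the study’s actual dataset):

```python
from collections import defaultdict

# The five eParticipation categories from the framework described above.
E_PARTICIPATION = ["eInforming", "eConsulting", "eInvolving", "eCollaborating", "eEmpowerment"]

# Simplified infrastructure delivery phases (horizontal axis of the map).
PHASES = ["planning", "design", "construction", "operation"]

# Hypothetical tool records: (tool name, phases it targets, eParticipation category).
tools = [
    ("IssueReporterApp", ["planning"], "eConsulting"),
    ("BudgetVotingTool", ["planning", "design"], "eCollaborating"),
    ("ProjectUpdatePortal", ["construction"], "eInforming"),
]

# Build the two-axis map: phase -> category -> tools.
grid = defaultdict(lambda: defaultdict(list))
for name, phases, category in tools:
    for phase in phases:
        grid[phase][category].append(name)

# Print the map; empty cells make the sparsely covered spaces easy to spot.
for phase in PHASES:
    for category in E_PARTICIPATION:
        cell = ", ".join(grid[phase][category]) or "-"
        print(f"{phase:12s} | {category:15s} | {cell}")
```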

[Chart: the 24 eParticipation tools mapped by infrastructure delivery phase (horizontal axis) and eParticipation category (vertical axis)]

…(More)”.

Getting Serious About Evidence-Based Public Management


Philip Joyce at Governing: “In a column in this space in 2015, the late Paul L. Posner, who was one of the most thoughtful observers of public management and intergovernmental relations of the last half-century, decried the disappearance of report cards of government management. In particular, he issued an appeal for someone to move into the space that had been occupied by the Government Performance Project, the decade-long effort funded by the Pew Charitable Trusts to assess the management of states and large local governments.

If anything, what Posner advocated is needed even more today. In an era in which the call for evidence-based decision-making is ubiquitous in government, we have been lacking any real analysis, or even description, of what states and local governments are doing. A couple of recent notable efforts, however, have moved to partially fill this void at the state level.

First, a 2017 report by Pew and the MacArthur Foundation looked across the states at ways in which evidence-based policymaking was used in human services. The study looked at six types of actions that could be undertaken by states and identified states that were engaging, in some way, across four specific policy areas (behavioral health, child welfare, criminal justice and juvenile justice): defining levels of evidence (40 states); inventorying existing programs (50); comparing costs and benefits at a program level (17); reporting outcomes in the budget (42); targeting funds to evidence-based programs (50); and requiring action through state law (34)….

The second notable effort is an ongoing study of the use of data and evidence in the states that was launched recently by the National Association of State Budget Officers (NASBO). Previously, no one had attempted to summarize and categorize all of the initiatives underway across the 50 states, including those originating in laws as well as executive orders. NASBO’s inventory of “Statewide Initiatives to Advance the Use of Data & Evidence for Decision-Making” is part of a set of resources aimed at providing state officials and other interested parties with a summary demonstrating the breadth of these initiatives.

The resulting “living” inventory, which is updated as additional practices are discovered, categorizes these state efforts into five types, listing a total of 90 as of this writing: data analytics (13 initiatives in 9 states), evidence-based policymaking (12 initiatives in 10 states), performance budgeting (18 initiatives in 16 states), performance management (27 initiatives in 24 states) and process improvement (20 initiatives in 19 states).

NASBO acknowledges that it is difficult to draw a bright line between these categories and classifies each initiative according to the one that appears most dominant. Nevertheless, this inventory provides a very useful catalogue of what states report they are doing, with links to further resources, making it valuable for those considering launching similar initiatives….(More)”.

Crowd-mapping gender equality – a powerful tool for shaping a better city launches in Melbourne


Nicole Kalms at The Conversation: “Inequity in cities has a long history. The importance of social and community planning to meet the challenge of creating people-centred cities looms large. While planners, government and designers have long understood the problem, uncovering the many important marginalised stories is an enormous task.

Technology – so often bemoaned – has provided an unexpected and powerful primary tool for designers and makers of cities. Crowd-mapping asks the community to anonymously engage and map their experiences using their smartphones via a web app. The new Gender Equality Map, launched today in two pilot locations in Melbourne, focuses on where people experience equality or inequality in their neighbourhood.

How does it work?

[Image: Participants can map their experience of equality or inequality in their neighbourhood using locator pins. Author provided]

Crowd-mapping generates geolocative data, made up of points “dropped” at precise geographical locations. The data can then be analysed and synthesised for insights, tendencies and “hotspots”.
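
As a rough, self-contained illustration of the idea (the coordinates, tags and grid size below are invented for the example and are not data from the Melbourne pilot), nearby points can be snapped to coarse grid cells so that clusters of similar reports surface as candidate hotspots:

```python
from collections import Counter

# Hypothetical crowd-mapped reports: (latitude, longitude, experience tag).
reports = [
    (-37.8136, 144.9631, "inequality"),
    (-37.8139, 144.9633, "inequality"),
    (-37.8102, 144.9700, "equality"),
]

CELL = 0.001  # grid cell size in degrees (~100 m); an arbitrary choice here

def cell_of(lat, lon):
    """Snap a point to a coarse grid cell so nearby reports group together."""
    return (round(lat / CELL) * CELL, round(lon / CELL) * CELL)

# Count how many reports of each kind fall in each grid cell.
counts = Counter((cell_of(lat, lon), tag) for lat, lon, tag in reports)

# Cells with several "inequality" reports are candidate hotspots for planners.
for (cell, tag), n in counts.most_common():
    print(f"cell {cell}: {n} '{tag}' report(s)")
```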

The diversity of its applications shows the adaptability of the method. The digital, community-based method of crowd-mapping has been used across the globe. Under-represented citizens have embraced the opportunity to tell their stories as a way to engage with and change their experience of cities….(More)”

Better “nowcasting” can reveal what weather is about to hit within 500 meters


MIT Technology Review: “Weather forecasting is impressively accurate given how changeable and chaotic Earth’s climate can be. It’s not unusual to get 10-day forecasts with a reasonable level of accuracy.

But there is still much to be done. One challenge for meteorologists is to improve their “nowcasting,” the ability to forecast weather in the next six hours or so at a spatial resolution of a square kilometer or less.

In areas where the weather can change rapidly, that is difficult. And there is much at stake. Agricultural activity is increasingly dependent on nowcasting, and the safety of many sporting events depends on it too. Then there is the risk that sudden rainfall could lead to flash flooding, a growing problem in many areas because of climate change and urbanization. That has implications for infrastructure, such as sewage management, and for safety, since this kind of flooding can kill.

So meteorologists would dearly love to have a better way to make their nowcasts.

Enter Blandine Bianchi from EPFL in Lausanne, Switzerland, and a few colleagues, who have developed a method for combining meteorological data from several sources to produce nowcasts with improved accuracy. Their work has the potential to change the utility of this kind of forecasting for everyone from farmers and gardeners to emergency services and sewage engineers.

Current forecasting is limited by the data and the scale on which it is gathered and processed. For example, satellite data has a spatial resolution of 50 to 100 km and allows the tracking and forecasting of large cloud cells over a time scale of six to nine hours. By contrast, radar data is updated every five minutes, with a spatial resolution of about a kilometer, and leads to predictions on the time scale of one to three hours. Another source of data is the microwave links used by telecommunications companies, which are degraded by rainfall….(More)”
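
The excerpt doesn’t detail Bianchi’s method, so the following is only a schematic sketch of the general idea behind multi-source fusion: weight each source’s rain estimate by the inverse of an assumed error variance, so that finer-resolution, more trusted data dominates. All numbers are invented for illustration.

```python
# Illustrative rain-rate estimates (mm/h) for one grid cell at one time step.
# Each source carries an assumed error variance reflecting its resolution/quality.
estimates = {
    "radar":          {"rain_mm_h": 4.2, "error_var": 1.0},  # ~1 km cells, 5-minute updates
    "satellite":      {"rain_mm_h": 3.0, "error_var": 9.0},  # 50-100 km cells
    "microwave_link": {"rain_mm_h": 5.1, "error_var": 2.5},  # inferred from signal attenuation
}

# Inverse-variance weighting: more precise sources get proportionally more weight.
weights = {src: 1.0 / e["error_var"] for src, e in estimates.items()}
total_weight = sum(weights.values())

fused = sum(weights[src] * e["rain_mm_h"] for src, e in estimates.items()) / total_weight
print(f"fused nowcast for this cell: {fused:.2f} mm/h")
```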

The role of blockchain, cryptoeconomics, and collective intelligence in building the future of justice


Blog by Federico Ast at Thomson Reuters: “Human communities of every era have had to solve the problem of social order. For this, they developed governance and legal systems. They did it with the technologies and systems of belief of their time….

A better justice system may not come from further streamlining existing processes but from fundamentally rethinking them from a first principles perspective.

In the last decade, we have witnessed how collective intelligence could be leveraged to produce an encyclopaedia like Wikipedia, a transport system like Uber, a restaurant rating system like Yelp!, and a hotel system like Airbnb. These companies innovated by crowdsourcing value creation. Instead of relying on an in-house team of restaurant critics, as the Michelin Guide does, Yelp! crowdsourced ratings from its users.

Satoshi Nakamoto’s invention of Bitcoin (and the underlying blockchain technology) may be seen as the next step in the rise of the collaborative economy. The Bitcoin Network proved that, given the right incentives, anonymous users could cooperate in creating and updating a distributed ledger which could act as a monetary system. A nationless system, inherently global, and native to the Internet Age.

Cryptoeconomics is a new field of study that leverages cryptography, computer science and game theory to build secure distributed systems. It is the science that underlies the incentive system of open distributed ledgers. But its potential goes well beyond cryptocurrencies.

Kleros is a dispute resolution system which relies on cryptoeconomics. It uses a system of incentives based on “focal points”, a concept developed by game theorist Thomas Schelling, winner of the 2005 Nobel Prize in Economics. Using a clever mechanism design, it seeks to produce a set of incentives for randomly selected users to adjudicate different types of disputes in a fast, affordable and secure way. Users who adjudicate disputes honestly will make money. Users who try to abuse the system will lose money.
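
A heavily simplified sketch of that incentive logic (this is not Kleros’s actual protocol, which involves token staking, appeals and more elaborate payout rules): each juror stakes a deposit and votes, the majority ruling serves as the focal point, and jurors who voted against it forfeit their stakes to those who voted with it.

```python
def settle_round(votes, stake):
    """Redistribute stakes after one simplified adjudication round.

    votes: dict mapping juror -> the ruling they voted for
    stake: the deposit each juror put at risk
    Returns a dict mapping juror -> net payoff.
    """
    # The majority ruling acts as the Schelling focal point jurors coordinate on.
    tally = {}
    for ruling in votes.values():
        tally[ruling] = tally.get(ruling, 0) + 1
    winning = max(tally, key=tally.get)

    coherent = [j for j, r in votes.items() if r == winning]
    incoherent = [j for j, r in votes.items() if r != winning]

    # Jurors voting against the focal point lose their stake; the rest split the pot.
    reward = stake * len(incoherent) / len(coherent) if coherent else 0.0
    payoffs = {j: reward for j in coherent}
    payoffs.update({j: -stake for j in incoherent})
    return payoffs

# Example: five jurors, one defects from the majority ruling and loses the stake.
print(settle_round({"a": "refund", "b": "refund", "c": "refund", "d": "refund", "e": "reject"}, stake=10))
# -> {'a': 2.5, 'b': 2.5, 'c': 2.5, 'd': 2.5, 'e': -10}
```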

Kleros does not seek to compete with governments or traditional arbitration systems, but to provide a new method that leverages the wisdom of the crowd to resolve many disputes of the global digital economy for which existing methods fall short: e-commerce, crowdfunding and many types of small claims are among the early adopters….(More)”.

The future’s so bright, I gotta wear blinders


Nicholas Carr’s blog: “A few years ago, the technology critic Michael Sacasas introduced the term “Borg Complex” to describe the attitude and rhetoric of modern-day utopians who believe that computer technology is an unstoppable force for good and that anyone who resists or even looks critically at the expanding hegemony of the digital is a benighted fool. (The Borg is an alien race in Star Trek that sucks up the minds of other races, telling its victims that “resistance is futile.”) Those afflicted with the complex, Sacasas observed, rely on a set of largely specious assertions to dismiss concerns about any ill effects of technological progress. The Borgers are quick, for example, to make grandiose claims about the coming benefits of new technologies (remember MOOCs?) while dismissing past cultural achievements with contempt (“I don’t really give a shit if literary novels go away”).

To Sacasas’s list of such obfuscating rhetorical devices, I would add the assertion that we are “only at the beginning.” By perpetually refreshing the illusion that progress is just getting under way, gadget worshippers like Kelly are able to wave away the problems that progress is causing. Any ill effect can be explained, and dismissed, as just a temporary bug in the system, which will soon be fixed by our benevolent engineers. (If you look at Mark Zuckerberg’s responses to Facebook’s problems over the years, you’ll find that they are all variations on this theme.) Any attempt to put constraints on technologists and technology companies becomes, in this view, a short-sighted and possibly disastrous obstruction of technology’s march toward a brighter future for everyone — what Kelly is still calling the “long boom.” You ain’t seen nothing yet, so stay out of our way and let us work our magic.

In his books Empire and Communications (1950) and The Bias of Communication (1951), the Canadian historian Harold Innis argued that all communication systems incorporate biases, which shape how people communicate and hence how they think. These biases can, in the long run, exert a profound influence over the organization of society and the course of history. “Bias,” it seems to me, is exactly the right word. The media we use to communicate push us to communicate in certain ways, reflecting, among other things, the workings of the underlying technologies and the financial and political interests of the businesses or governments that promulgate the technologies. (For a simple but important example, think of the way personal correspondence has been changed by the shift from letters delivered through the mail to emails delivered via the internet to messages delivered through smartphones.) A bias is an inclination. Its effects are not inevitable, but they can be strong. To temper them requires awareness and, yes, resistance.

For much of this year, I’ve been exploring the biases of digital media, trying to trace the pressures that the media exert on us as individuals and as a society. I’m far from done, but it’s clear to me that the biases exist and that at this point they have manifested themselves in unmistakable ways. Not only are we well beyond the beginning, but we can see where we’re heading — and where we’ll continue to head if we don’t consciously adjust our course….(More)”.

Don’t Believe the Algorithm


Hannah Fry at the Wall Street Journal: “The Notting Hill Carnival is Europe’s largest street party. A celebration of black British culture, it attracts up to two million revelers, and thousands of police. At last year’s event, the Metropolitan Police Service of London deployed a new type of detective: a facial-recognition algorithm that searched the crowd for more than 500 people wanted for arrest or barred from attending. Driving around in a van rigged with closed-circuit TVs, the police hoped to catch potentially dangerous criminals and prevent future crimes.

It didn’t go well. Of the 96 people flagged by the algorithm, only one was a correct match. Some errors were obvious, such as the young woman identified as a bald male suspect. In those cases, the police dismissed the match and the carnival-goers never knew they had been flagged. But many were stopped and questioned before being released. And the one “correct” match? At the time of the carnival, the person had already been arrested and questioned, and was no longer wanted.

Given the paltry success rate, you might expect the Metropolitan Police Service to be sheepish about its experiment. On the contrary, Cressida Dick, the highest-ranking police officer in Britain, said she was “completely comfortable” with deploying such technology, arguing that the public expects law enforcement to use cutting-edge systems. For Dick, the appeal of the algorithm overshadowed its lack of efficacy.

She’s not alone. A similar system tested in Wales was correct only 7% of the time: Of 2,470 soccer fans flagged by the algorithm, only 173 were actual matches. The Welsh police defended the technology in a blog post, saying, “Of course no facial recognition system is 100% accurate under all conditions.” Britain’s police force is expanding the use of the technology in the coming months, and other police departments are following suit. The NYPD is said to be seeking access to the full database of drivers’ licenses to assist with its facial-recognition program….(More)”.
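
A quick back-of-the-envelope check of the figures quoted above shows just how low the precision of both deployments was (only the numbers reported in the article are used here):

```python
# Precision = correct matches / total people flagged, using the figures quoted above.
trials = {
    "Notting Hill Carnival": (1, 96),
    "Wales soccer match trial": (173, 2470),
}

for name, (correct, flagged) in trials.items():
    print(f"{name}: {correct}/{flagged} = {correct / flagged:.1%} of flags were genuine matches")
# Notting Hill Carnival: 1/96 = 1.0%; Wales trial: 173/2470 = 7.0%
```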

Doing good data science


Mike Loukides, Hilary Mason and DJ Patil at O’Reilly: “(This post is the first in a series on data ethics) The hard thing about being an ethical data scientist isn’t understanding ethics. It’s the junction between ethical ideas and practice. It’s doing good data science.

There has been a lot of healthy discussion about data ethics lately. We want to be clear: that discussion is good, and necessary. But it’s also not the biggest problem we face. We already have good standards for data ethics. The ACM’s code of ethics, which dates back to 1993, is clear, concise, and surprisingly forward-thinking; 25 years later, it’s a great start for anyone thinking about ethics. The American Statistical Association has a good set of ethical guidelines for working with data. So, we’re not working in a vacuum.

And, while there are always exceptions, we believe that most people want to be fair. Data scientists and software developers don’t want to harm the people using their products. There are exceptions, of course; we call them criminals and con artists. Defining “fairness” is difficult, and perhaps impossible, given the many crosscutting layers of “fairness” that we might be concerned with. But we don’t have to solve that problem in advance, and it’s not going to be solved in a simple statement of ethical principles, anyway.

The problem we face is different: how do we put ethical principles into practice? We’re not talking about an abstract commitment to being fair. Ethical principles are worse than useless if we don’t allow them to change our practice, if they don’t have any effect on what we do day-to-day. For data scientists, whether you’re doing classical data analysis or leading-edge AI, that’s a big challenge. We need to understand how to build the software systems that implement fairness. That’s what we mean by doing good data science.

Any code of data ethics will tell you that you shouldn’t collect data from experimental subjects without informed consent. But that code won’t tell you how to implement “informed consent.” Informed consent is easy when you’re interviewing a few dozen people in person for a psychology experiment. Informed consent means something different when someone clicks on an item in an online catalog (hello, Amazon), and ads for that item start following them around ad infinitum. Do you use a pop-up to ask for permission to use their choice in targeted advertising? How many customers would you lose? Informed consent means something yet again when you’re asking someone to fill out a profile for a social site, and you might (or might not) use that data for any number of experimental purposes. Do you pop up a consent form in impenetrable legalese that basically says “we will use your data, but we don’t know for what”? Do you phrase this agreement as an opt-out, and hide it somewhere on the site where nobody will find it?…
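
As one concrete, deliberately simplified illustration of what “implementing informed consent” can mean in practice, consent can be recorded per purpose and checked at the moment data is used, rather than assumed from a blanket agreement. The field names and purposes below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class UserRecord:
    user_id: str
    profile: dict
    # Purposes the user explicitly opted into, e.g. {"targeted_ads", "research"}.
    consented_purposes: set = field(default_factory=set)

def usable_for(records, purpose):
    """Keep only records whose owners consented to this specific purpose.

    The point is that consent is checked per purpose at the moment of use,
    not inferred from a blanket terms-of-service agreement.
    """
    return [r for r in records if purpose in r.consented_purposes]

users = [
    UserRecord("u1", {"item_viewed": "lamp"}, {"targeted_ads"}),
    UserRecord("u2", {"item_viewed": "desk"}, set()),  # never opted in
]

# Only u1's data may feed the ad-targeting pipeline; u2's is excluded.
print([r.user_id for r in usable_for(users, "targeted_ads")])  # -> ['u1']
```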

To put ethical principles into practice, we need space to be ethical. We need the ability to have conversations about what ethics means, what it will cost, and what solutions to implement. As technologists, we frequently share best practices at conferences, write blog posts, and develop open source technologies—but we rarely discuss problems such as how to obtain informed consent.

There are several facets to this space that we need to think about.

First, we need corporate cultures in which discussions about fairness, about the proper use of data, and about the harm that can be done by inappropriate use of data can be considered. In turn, this means that we can’t rush products out the door without thinking about how they’re used. We can’t allow “internet time” to mean ignoring the consequences. Indeed, computer security has shown us the consequences of ignoring the consequences: many companies that have never taken the time to implement good security practices and safeguards are now paying with damage to their reputations and their finances. We need to do the same when thinking about issues like fairness, accountability, and unintended consequences….(More)”.

Exploring New Labscapes: Converging and Diverging on Social Innovation Labs


Essay by Marlieke Kieboom: “…The question ‘what is a (social innovation) lab?’ is as old as the lab community itself and seems to return at every (social innovation) lab gathering. It came up at the very first event of its kind (Kennisland’s Lab2: Lab for Labs, Amsterdam 2013) and has been debated at every subsequent event since under hashtags like #socinnlabs, #sociallabs and #psilabs (see MaRS’s Labs for Systems Change — 2014, Nesta’s Labworks — 2015, EU Policy lab’s Lab Connections — 2016 and ESADE’s Labs for Social Innovation — 2017).

However, the concept has remained roughly the same since we saw the first wave of labs (Helsinki Design Lab, MindLab and Reos’ Change Labs) in the early 2010s. Social innovation labs are permanent or short-term structures/projects/events that use a variety of experimental methods to support collaboration between stakeholders to collectively address social challenges at a systemic level. Stakeholders range from citizens and community action groups to businesses, universities and public administrations. Their specific characteristics (e.g. developing experimental user-led research methods, building innovation capacity, convening multi-disciplinary teams, working to reach scale) and shapes (public sector innovation labs, social innovation labs, digital service labs, policy labs) are well described in many publications (e.g. Lab Matters, 2014; Labs for Social Innovation, 2017).

As Nesta neatly shows, innovation labs are part of a family, or a movement, of connected experimental, innovative approaches like service design, behavioural insights, citizen engagement, and so on.

[Image: Spot the labs (Source: https://www.nesta.org.uk/blog/landscape-of-innovation-approaches/)]

So why does this question keep coming back? The roots of the confusion and debates may lie in the word ‘social’. The medical, technological, and business sectors know exactly what they aim for in their innovation labs. They are ‘controlled-for’ environments where experimentation leads to developing, testing and scaling futuristic (mostly for profit) products, like self-driving cars, cancer medicines, drug test strips and cultured meat. Some of these products contribute to a more just, equal, sustainable world, while others don’t.

For working on societal issues like climate change, immigration patterns or a drug overdose crisis, lab settings are and should be unmistakably more open and porous. Complex, systemic challenges are impossible to capture between four lab walls, nor should we even try, as they arguably arose from isolated, closed, and disconnected socio-economic interactions. Value creation for these types of challenges therefore lies outside closed, competitive, measurable spaces: in forging new collaborations, open-sourcing methodologies, encouraging curious mindsets and diversifying social movements. Consequently, social lab outcomes are less measurable and concrete, ranging from reframing existing (socio-cultural) paradigms, to designing new procurement procedures and policies, to delivering new (digital and non-digital) public services. Try to ‘randomize-control-trial’ that!…(More)”.

AI Nationalism


Blog by Ian Hogarth: “The central prediction I want to make and defend in this post is that continued rapid progress in machine learning will drive the emergence of a new kind of geopolitics; I have been calling it AI Nationalism. Machine learning is an omni-use technology that will come to touch all sectors and parts of society.

The transformation of both the economy and the military by machine learning will create instability at the national and international level, forcing governments to act. AI policy will become the single most important area of government policy. An accelerated arms race will emerge between key countries, and we will see increased protectionist state action to support national champions, block takeovers by foreign firms and attract talent. I use Google, DeepMind and the UK as a specific example of this issue.

This arms race will potentially speed up the pace of AI development and shorten the timescale for getting to AGI. Although there will be many common aspects to this techno-nationalist agenda, there will also be important state-specific policies. There is a difference between predicting that something will happen and believing it is a good thing. Nationalism is a dangerous path, particularly when the international order and international norms will be in flux as a result, and in the concluding section I discuss how a period of AI Nationalism might transition to one of global cooperation where AI is treated as a global public good….(More)”.