Advancing Urban Health and Wellbeing Through Collective and Artificial Intelligence: A Systems Approach 3.0


Policy brief by Franz Gatzweiler: “Many problems of urban health and wellbeing, such as pollution, obesity, ageing, mental health, cardiovascular diseases, infectious diseases, inequality and poverty (WHO 2016), are highly complex and beyond the reach of individual problem-solving capabilities. Biodiversity loss, climate change, and urban health problems emerge at aggregate scales and are unpredictable. They are the consequence of complex interactions between many individual agents and their environments across urban sectors and scales. Another challenge of complex urban health problems is the knowledge approach we apply to understand and solve them. We are challenged to create a new, innovative knowledge approach to understand and solve the problems of urban health. The positivist approach of separating cause from effect, or observer from observed, is insufficient when human agents are both part of the problem and the solution.

Problems emerging from complexity can only be solved collectively by applying rules which govern complexity. For example, the law of requisite variety (Ashby 1960) tells us that we need as much variety in our problem-solving toolbox as there are different types of problems to be solved, and we need to address these problems at the respective scale. No individual has the intelligence to solve emergent problems of urban health alone….
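Ashby's law can be stated quantitatively: a regulator can reduce the variety of outcomes only in proportion to the variety of its own responses. The toy sketch below (a minimal illustration, not drawn from the brief itself; the function name and the urban-health framing of the example are our own) makes that arithmetic concrete.

```python
# Toy illustration of Ashby's law of requisite variety: a regulator maps
# each disturbance to a response; the best any regulator can do is hold
# the number of distinct outcomes down to ceil(|disturbances| / |responses|).
def min_outcome_variety(n_disturbances: int, n_responses: int) -> int:
    """Lower bound on outcome variety achievable by any regulator."""
    return -(-n_disturbances // n_responses)  # ceiling division

# With 9 kinds of urban-health disturbance but only 3 policy responses,
# no regulator can narrow outcomes below 3 distinct states.
print(min_outcome_variety(9, 3))  # → 3

# Only with at least as many responses as disturbances can the regulator
# drive outcome variety down to a single desired state.
print(min_outcome_variety(9, 9))  # → 1
```

This is the formal sense in which the brief argues that a problem-solving toolbox must be at least as varied as the problems it faces.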

  • Complex problems of urban health and wellbeing cause millions of premature deaths annually and are beyond the reach of individual problem-solving capabilities.
  • Collective and artificial intelligence (CI+AI) working together can address the complex challenges of urban health.
  • The systems approach (SA) is an adaptive, intelligent and intelligence-creating, “data-metabolic” mechanism for solving such complex challenges.
  • Design principles have been identified to successfully create CI and AI. Data metabolic costs are the limiting factor.
  • Collaborative action to build an “urban brain” by means of next-generation systems approaches is required to save lives in the face of failure to tackle complex urban health challenges….(More)”.

Mapping service design and policy design


UK Policy Lab: “…Over the summer in the Policy Lab we have started mapping different policy options, showing the variety of ways policy-makers might use their power to influence people’s actions and behaviours. We have grouped these into seven categories, from low-level to large-scale interventions.

Carrots or sticks

These styles of intervention include traditional law-making powers (sticks) or softer influencing powers (carrots) such as system stewardship. In reality, policy-making is much more complex than most people imagine. From this starting point we’ve created a grid of 28 different ways policy-makers operate at different stages of maturity. This is still work in progress, so we would very much welcome your thoughts. We are currently building examples for each of the styles from across government. The design choices policy-makers make early in policy development shape how a policy is delivered and the kind of results that can be achieved. We have played around a lot with the language, and will continue to test this in the Lab. However, it should be clear from the array of possibilities that determining which course of action, and which levers, will deliver the outcomes needed in any particular circumstance requires great skill and judgement…(More)”.

Voice or chatter? Making ICTs work for transformative citizen engagement


Research Report Summary by Making All Voices Count: “What are the conditions in democratic governance that make information and communication technology (ICT)-mediated citizen engagement transformative? While substantial scholarship exists on the role of the Internet and digital technologies in triggering moments of political disruption and cascading upheavals, academic interest in the sort of deep change that transforms institutional cultures of democratic governance, occurring in ‘slow time’, has been relatively muted.

This study attempts to fill this gap. It is inspired by the idea of participation in everyday democracy and seeks to explore how ICT-mediated citizen engagement can promote democratic governance and amplify citizen voice.

ICT-mediated citizen engagement is defined by this study as comprising digitally-mediated information outreach, dialogue, consultation, collaboration and decision-making, initiated either by government or by citizens, towards greater government accountability and responsiveness.

The study involved empirical explorations of citizen engagement initiatives in eight sites – two in Asia (India and Philippines), one in Africa (South Africa), three in South America (Brazil, Colombia, Uruguay) and two in Europe (Netherlands and Spain).

This summary of the larger Research Report presents recommendations for how public policies and programmes can promote ICTs for citizen engagement and transformative citizenship. In doing so, it provides an overview of the discussion the authors undertake on three inter-related dimensions, namely:

  • calibrating digitally mediated citizen participation as a measure of political empowerment and equality
  • designing techno-public spaces as bastions of inclusive democracy
  • ensuring that the rule of law upholds democratic principles in digitally mediated governance…(More. Full research report)

The Use of Big Data Analytics by the IRS: Efficient Solutions or the End of Privacy as We Know It?


Kimberly A. Houser and Debra Sanders in the Vanderbilt Journal of Entertainment and Technology Law: “This Article examines the privacy issues resulting from the IRS’s big data analytics program as well as the potential violations of federal law. Although historically the IRS chose tax returns to audit based on internal mathematical mistakes or mismatches with third-party reports (such as W-2s), the IRS is now engaging in data mining of public and commercial data pools (including social media) and creating highly detailed profiles of taxpayers upon which to run data analytics. This Article argues that current IRS practices, mostly unknown to the general public, are violating fair information practices. This lack of transparency and accountability not only violates federal law regarding the government’s data collection activities and use of predictive algorithms, but may also result in discrimination. While the potential efficiencies that big data analytics provides may appear to be a panacea for the IRS’s budget woes, unchecked, these activities are a significant threat to privacy. Other concerns regarding the IRS’s entrée into big data are raised, including the potential for political targeting, data breaches, and the misuse of such information. This Article intends to bring attention to these privacy concerns and contribute to the academic and policy discussions about the risks presented by the IRS’s data collection, mining and analytics activities….(More)”.

How to Regulate Artificial Intelligence


Oren Etzioni in the New York Times: “…we should regulate the tangible impact of A.I. systems (for example, the safety of autonomous vehicles) rather than trying to define and rein in the amorphous and rapidly developing field of A.I.

I propose three rules for artificial intelligence systems that are inspired by, yet develop further, the “three laws of robotics” that the writer Isaac Asimov introduced in 1942: A robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given it by human beings, except when such orders would conflict with the previous law; and a robot must protect its own existence as long as such protection does not conflict with the previous two laws.

These three laws are elegant but ambiguous: What, exactly, constitutes harm when it comes to A.I.? I suggest a more concrete basis for avoiding A.I. harm, based on three rules of my own.

First, an A.I. system must be subject to the full gamut of laws that apply to its human operator. This rule would cover private, corporate and government systems. We don’t want A.I. to engage in cyberbullying, stock manipulation or terrorist threats; we don’t want the F.B.I. to release A.I. systems that entrap people into committing crimes. We don’t want autonomous vehicles that drive through red lights, or worse, A.I. weapons that violate international treaties.

Our common law should be amended so that we can’t claim that our A.I. system did something that we couldn’t understand or anticipate. Simply put, “My A.I. did it” should not excuse illegal behavior.

My second rule is that an A.I. system must clearly disclose that it is not human. As we have seen in the case of bots — computer programs that can engage in increasingly sophisticated dialogue with real people — society needs assurances that A.I. systems are clearly labeled as such. In 2016, a bot known as Jill Watson, which served as a teaching assistant for an online course at Georgia Tech, fooled students into thinking it was human. A more serious example is the widespread use of pro-Trump political bots on social media in the days leading up to the 2016 elections, according to researchers at Oxford….

My third rule is that an A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information…(More)”

Algorithms in the Criminal Justice System: Assessing the Use of Risk Assessments in Sentencing


Priscilla Guo, Danielle Kehl, and Sam Kessler at Responsive Communities (Harvard): “In the summer of 2016, some unusual headlines began appearing in news outlets across the United States. “Secret Algorithms That Predict Future Criminals Get a Thumbs Up From the Wisconsin Supreme Court,” read one. Another declared: “There’s software used across the country to predict future criminals. And it’s biased against blacks.” These news stories (and others like them) drew attention to a previously obscure but fast-growing area in the field of criminal justice: the use of risk assessment software, powered by sophisticated and sometimes proprietary algorithms, to predict whether individual criminals are likely candidates for recidivism. In recent years, these programs have spread like wildfire throughout the American judicial system. They are now being used in a broad capacity, in areas ranging from pre-trial risk assessment to sentencing and probation hearings. This paper focuses on the latest—and perhaps most concerning—use of these risk assessment tools: their incorporation into the criminal sentencing process, a development which raises fundamental legal and ethical questions about fairness, accountability, and transparency. The goal is to provide an overview of these issues and offer a set of key considerations and questions for further research that can help local policymakers who are currently implementing or considering implementing similar systems. We start by putting this trend in context: the history of actuarial risk in the American legal system and the evolution of algorithmic risk assessments as the latest incarnation of a much broader trend. We go on to discuss how these tools are used in sentencing specifically and how that differs from other contexts like pre-trial risk assessment. 
We then delve into the legal and policy questions raised by the use of risk assessment software in sentencing decisions, including the potential for constitutional challenges under the Due Process and Equal Protection clauses of the Fourteenth Amendment. Finally, we summarize the challenges that these systems create for law and policymakers in the United States, and outline a series of possible best practices to ensure that these systems are deployed in a manner that promotes fairness, transparency, and accountability in the criminal justice system….(More)”.
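At their core, the actuarial instruments the paper discusses are weighted checklists: each risk factor contributes a weight to a score, and cut-offs map the score to a risk band that judges see at sentencing. The sketch below is a deliberately simplistic illustration of that mechanism only; the factor names, weights, and cut-offs are invented, and real instruments are proprietary and far more complex, which is precisely the transparency problem the authors raise.

```python
# Toy sketch of an actuarial risk score: a weighted checklist mapped to a
# risk band. All factor names, weights, and cut-offs are invented for
# illustration; they are not drawn from any real instrument.
WEIGHTS = {
    "prior_convictions": 2.0,
    "age_under_25": 1.5,
    "unstable_employment": 1.0,
}

CUTOFFS = [(2.0, "low"), (4.0, "medium"), (float("inf"), "high")]

def risk_band(factors: dict) -> str:
    """Sum the weights of the factors that are present, then bucket the score."""
    score = sum(WEIGHTS[name] for name, present in factors.items() if present)
    for cutoff, band in CUTOFFS:
        if score <= cutoff:
            return band

print(risk_band({"prior_convictions": True,
                 "age_under_25": False,
                 "unstable_employment": True}))  # → medium
```

Even this trivial version shows why the choice of factors and weights matters: a factor correlated with race or poverty feeds directly into the final band, which is the source of the disparate-impact concerns described above.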

Crowdsourcing the Charlottesville Investigation


Internet sleuths got to work, and by Monday morning they were naming names and calling for arrests.

The name of the helmeted man went viral after New York Daily News columnist Shaun King posted a series of photos on Twitter and Facebook that more clearly showed his face and connected him to photos from a Facebook account. “Neck moles gave it away,” King wrote in his posts, which were shared more than 77,000 times. But the name of the red-bearded assailant was less clear: some on Twitter claimed it was a Texas man who goes by a Nordic alias online. Others were sure it was a Michigan man who, according to Facebook, attended high school with other white nationalist demonstrators depicted in photos from Charlottesville.

After being contacted for comment by The Marshall Project, the Michigan man removed his Facebook page from public view.

Such speculation, especially when it is not conclusive, has created new challenges for law enforcement. There is the obvious risk of false identification. In 2013, internet users wrongly identified university student Sunil Tripathi as a suspect in the Boston marathon bombing, prompting the internet forum Reddit to issue an apology for fostering “online witch hunts.” Already, an Arkansas professor was misidentified as a torch-bearing protester, though not a criminal suspect, at the Charlottesville rallies.

Beyond the cost to misidentified suspects, the crowdsourced identification of criminal suspects is both a benefit and burden to investigators.

“If someone says: ‘hey, I have a picture of someone assaulting another person, and committing a hate crime,’ that’s great,” said Sgt. Sean Whitcomb, the spokesman for the Seattle Police Department, which used social media to help identify the pilot of a drone that crashed into a 2015 Pride Parade. (The man was convicted in January.) “But saying, ‘I am pretty sure that this person is so and so’. Well, ‘pretty sure’ is not going to cut it.”

Still, credible information can help police establish probable cause, which means they can ask a judge to sign off on either a search warrant, an arrest warrant, or both….(More)”.

E-residency and blockchain


Clare Sullivan and Eric Burger in Computer Law & Security Review: “In December 2014, Estonia became the first nation to open its digital borders to enable anyone, anywhere in the world to apply to become an e-Resident. Estonian e-Residency is essentially a commercial initiative. The e-ID issued to Estonian e-Residents enables commercial activities with the public and private sectors. It does not provide citizenship in its traditional sense, and the e-ID provided to e-Residents is not a travel document. However, in many ways it is an international ‘passport’ to the virtual world. E-Residency is a profound change and the recent announcement that the Estonian government is now partnering with Bitnation to offer a public notary service to Estonian e-Residents based on blockchain technology is of significance. The application of blockchain to e-Residency has the potential to fundamentally change the way identity information is controlled and authenticated. This paper examines the legal, policy, and technical implications of this development….(More)”.
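Blockchain-based notarization of the kind described above generally works by anchoring a cryptographic hash of a document in an append-only, hash-linked chain, so that the record's existence at a point in time can later be proven and any tampering detected. The sketch below illustrates that general mechanism only; it is not Bitnation's or Estonia's actual protocol, and the function names are our own.

```python
import hashlib
import json
import time

# Minimal sketch of blockchain-style notarization: each block anchors a
# document's hash and links to the previous block, so any later change to
# the record breaks the chain.
def block_hash(block: dict) -> str:
    """Deterministic hash of a block (sorted keys for stable serialization)."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def notarize(chain: list, document: bytes) -> dict:
    """Append a block anchoring the document's digest to the chain."""
    block = {
        "doc_digest": hashlib.sha256(document).hexdigest(),
        "prev_hash": block_hash(chain[-1]) if chain else "0" * 64,
        "timestamp": time.time(),
    }
    chain.append(block)
    return block

def verify(chain: list, index: int, document: bytes) -> bool:
    """Check the document against its block and the chain's internal links."""
    doc_ok = chain[index]["doc_digest"] == hashlib.sha256(document).hexdigest()
    links_ok = all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
                   for i in range(1, len(chain)))
    return doc_ok and links_ok

chain: list = []
notarize(chain, b"articles of incorporation, v1")
notarize(chain, b"power of attorney")
print(verify(chain, 0, b"articles of incorporation, v1"))  # → True
print(verify(chain, 0, b"tampered document"))              # → False
```

Note that only the digest, not the document itself, goes on the chain; the e-Resident retains the document and discloses it only when proof is needed.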


Democratic Resilience for a Populist Age


Helmut K. Anheier at Project Syndicate: “… many democracies are plagued by serious maladies – such as electoral gerrymandering, voter suppression, fraud and corruption, violations of the rule of law, and threats to judicial independence and press freedom – there is little agreement about which solutions should be pursued.

How to make our democracies more resilient, if not altogether immune, to anti-democratic threats is a central question of our time. …

Democratic resilience demands that citizens do more than bemoan deficiencies and passively await constitutional reform. It requires openness to change and innovation. Such changes may occur incrementally, but their aggregate effect can be immense…

Governments and citizens thus have a rich set of options – such as diversity quotas, automatic voter registration, and online referenda – for addressing democratic deficiencies. Moreover, there are measures that can also help citizens mount a defense of democracy against authoritarian assaults.

To that end, organizations can be created to channel protest and dissent into the democratic process, so that certain voices are not driven to the political fringe. And watchdog groups can oversee deliberative assemblies and co-governance efforts – such as participatory budgeting – to give citizens more direct access to decision-making. At the same time, core governance institutions, like central banks and electoral commissions, should be depoliticized, to prevent their capture by populist opportunists.

When properly applied, these measures can encourage consensus building and thwart special interests. Moreover, such policies can boost public trust and give citizens a greater sense of ownership vis-à-vis their government.

Of course, some political innovations that work in one context may cause real damage in another. Referenda, for example, are easily manipulated by demagogues. Assemblies can become gridlocked, and quotas can restrict voters’ choices. Fixing contemporary democracy will inevitably require experimentation and adaptation.

Still, recent research can help us along the way. The Governance Report 2017 has compiled a diverse list of democratic tools that can be applied in different contexts around the globe – by governments, policymakers, civil-society leaders, and citizens.

In his contribution to the report, German sociologist Claus Offe, Professor Emeritus of the Hertie School and Humboldt University, identifies two fundamental priorities for all democracies. The first is to secure all citizens’ basic rights and ability to participate in civic life; the second is to provide a just and open society with opportunities for all citizens. As it happens, these two imperatives are linked: democratic government should be “of,” “by,” and “for” the people….(More)”.

Chicago police see less violent crime after using predictive code


Jon Fingas at Engadget: “Law enforcement has been trying predictive policing software for a while now, but how well does it work when it’s put to a tough test? Potentially very well, according to Chicago police. The city’s 7th District police report that their use of predictive algorithms helped reduce the number of shootings by 39 percent year-over-year in the first seven months of 2017, with murders dropping by 33 percent. Three other districts didn’t witness as dramatic a change, but they still saw 15 to 29 percent reductions in shootings and a corresponding 9 to 18 percent drop in murders.

It mainly comes down to knowing where and when to deploy officers. One of the tools used in the 7th District, HunchLab, blends crime statistics with socioeconomic data, weather info and business locations to determine where crimes are likely to happen. Other tools (such as the Strategic Subject List and ShotSpotter) look at gang affiliation, drug arrest history and gunfire detection sensors.
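The "blending" described above can be pictured as scoring each patrol area by combining its features into a single risk number and ranking areas for deployment. The sketch below is a toy illustration of that idea; the feature names, weights, and beat identifiers are invented and bear no relation to HunchLab's actual (proprietary) model.

```python
# Toy sketch of hotspot scoring: combine per-cell features into one risk
# score and rank cells so patrols go to the highest-risk areas first.
# Features, weights, and beat names are invented for illustration.
WEIGHTS = {"recent_shootings": 3.0, "bar_density": 1.0,
           "vacancy_rate": 0.5, "rain_tonight": -0.8}

def cell_score(features: dict) -> float:
    """Weighted sum of a cell's feature values."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

cells = {
    "beat_712": {"recent_shootings": 4, "bar_density": 2.0,
                 "vacancy_rate": 0.3, "rain_tonight": 1},
    "beat_713": {"recent_shootings": 1, "bar_density": 0.5,
                 "vacancy_rate": 0.1, "rain_tonight": 1},
}

ranked = sorted(cells, key=lambda c: cell_score(cells[c]), reverse=True)
print(ranked)  # → ['beat_712', 'beat_713']
```

Real systems fit such weights from historical data rather than setting them by hand, which is where the fairness concerns raised later in the piece come in: the model inherits whatever biases the historical record contains.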

If the performance holds, it’ll suggest that predictive policing can save lives when crime rates are particularly high, as they have been on Chicago’s South Side. However, both the Chicago Police Department and academics are quick to stress that algorithms are just one part of a larger solution. Officers still have to be present, and this doesn’t tackle the underlying issues that cause crime, such as limited access to education and a lack of economic opportunity. Still, any successful reduction in violence is bound to be appreciated….(More)”.