Stefaan Verhulst
Press Release: “The Mobility Data Collaborative (the Collaborative), a multi-sector forum with the goal of creating a framework to improve mobility through data, launches today…
New mobility services, such as shared cars, bikes, and scooters, are emerging and integrating into the urban transportation landscape across the globe. Data generated by these new mobility services offers an exciting opportunity to inform local policies and infrastructure planning. The Collaborative brings together key members from the public and private sectors to develop best practices to harness the potential of this valuable data to support safe, equitable, and livable streets.
The Collaborative will leverage the knowledge of its current and future members to solve the complex challenges facing shared mobility operators and the public agencies who manage access to infrastructure that these new services require. A critical component of this collaboration is providing an open and impartial forum for sharing information and developing best practices.
Membership is open to public agencies, nonprofits, academic institutions and private companies….(More)”.
MIT Technology Review: “The city told its employees to shut down their computers as a precaution this weekend after an attempted cyberattack on Friday.
The news: New Orleans spotted suspicious activity in its networks at around 5 a.m. on Friday, with a spike in the attempted attacks at 8 a.m. It detected phishing attempts and ransomware, Kim LaGrue, the city’s head of IT, later told reporters. Once they were confident the city was under attack, the team shut down its servers and computers. City authorities then filed a declaration of a state of emergency with the Civil District Court, and pulled local, state, and federal authorities into a (still pending) investigation of the incident. The city is still working to recover data from the attack but will be open as usual from this morning, Mayor LaToya Cantrell said on Twitter.
Was it ransomware? The nature of the attack is still something of a mystery. Cantrell confirmed that ransomware had been detected, but the city hasn’t received any demands for ransom money.
The positives: New Orleans was at least fairly well prepared for this attack, thanks to training for this scenario and its ability to operate many of its services without internet access, officials told reporters.
A familiar story: New Orleans is just the latest government to face ransomware attacks, after nearly two dozen cities in Texas were targeted in August, plus Louisiana in November (causing the governor to declare a state of emergency). The phenomenon goes beyond the US, too: in October Johannesburg became the biggest city yet to face a ransomware attack.…(More)”.
Daniel Arribas-Bel at Catapult: ‘When trying to understand something as complex as the city, every bit of data helps create a better picture. Researchers, practitioners and policymakers gather as much information as they can to represent every aspect of their city – from noise levels captured by open-source sensors and studies of social isolation using tweets to where the latest hipster coffee shop has opened. Exploration and creativity seem to have no limits.
But what about imagery?
You might well ask, what type of images? How do you analyse them? What’s the point anyway?
Let’s start with the why. Images contain visual cues that encode a host of socio-economic information. Imagine a picture of a street with potholes outside a derelict house next to a burnt-out car. It may be easy to make some fairly sweeping assumptions about the average income of its resident population. Or the image of a street with a trendy barber-shop next door to a coffee-shop with bare concrete feature walls on one side, and an independent record shop on the other. Again, it may be possible to describe the character of this area.
These are just some of the many kinds of signals embedded in image data. In fact, there is an entire literature in geography and sociology that documents these associations (see, for example, Cityscapes by Daniel Aaron Silver and Terry Nichols Clark for a sociology approach and The Predictive Postcode by Richard Webber and Roger Burrows for a geography perspective). Imagine if we could figure out ways to condense such information into formal descriptors of cities that help us measure aspects that traditional datasets can’t, or to update them more frequently than standard sources currently allow…(More)”.
Damian J. Ruck, Luke J. Matthews, Thanos Kyritsis, Quentin D. Atkinson & R. Alexander Bentley at Nature Human Behaviour: “National democracy is a rare thing in human history and its stability has long been tied to the cultural values of citizens. Yet it has not been established whether changing cultural values made modern democracy possible or whether those values were a response to democratic institutions. Here we combine longitudinal data and cohort information of nearly 500,000 individuals from 109 nations to track the co-evolution of democratic values and institutions over the last century.
We find that cultural values of openness towards diversity predict a shift towards democracy and that nations with low institutional confidence are prone to political instability. In addition, the presence of democratic institutions did not predict any substantive changes in the measured cultural values. These results hold when accounting for other factors, including gross domestic product per capita and non-independence between nations due to shared cultural ancestry. Cultural values lead to, rather than follow, the emergence of democracy. This indicates that current stable democracies will be under threat, should cultural values of openness to diversity and institutional confidence substantially decline… (More).”
Paper by Howard M. Erichson: “In Ashcroft v. Iqbal, building on Bell Atlantic v. Twombly, the Supreme Court instructed district courts to treat a complaint’s conclusions differently from allegations of fact. Facts, but not conclusions, are assumed true for purposes of a motion to dismiss. The Court did little to help judges or lawyers understand the elusive distinction, and, indeed, obscured the distinction with its language. The Court said it was distinguishing “legal conclusions” from factual allegations. The application in Twombly and Iqbal, however, shows that the relevant distinction is not between law and fact, but rather between different types of factual assertions. This essay, written for a symposium on the tenth anniversary of Ashcroft v. Iqbal, explores the definitional problem with the conclusion-fact distinction and examines how district courts have applied the distinction in recent cases….(More)”.
Report by Caitlin Chin at Brookings: “When it comes to gender stereotypes in occupational roles, artificial intelligence (AI) has the potential to either mitigate historical bias or heighten it. In the case of the Word2vec model, AI appears to do both.
Word2vec is a publicly available algorithmic model built on millions of words scraped from online Google News articles, which computer scientists commonly use to analyze word associations. In 2016, Microsoft and Boston University researchers revealed that the model picked up gender stereotypes existing in online news sources—and furthermore, that these biased word associations were overwhelmingly job related. Upon discovering this problem, the researchers neutralized the biased word correlations in their specific algorithm, writing that “in a small way debiased word embeddings can hopefully contribute to reducing gender bias in society.”
Their study draws attention to a broader issue with artificial intelligence: Because algorithms often emulate the training datasets that they are built upon, biased input datasets could generate flawed outputs. Because many contemporary employers utilize predictive algorithms to scan resumes, direct targeted advertising, or even conduct face- or voice-recognition-based interviews, it is crucial to consider whether popular hiring tools might be susceptible to the same cultural biases that the researchers discovered in Word2vec.
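The debiasing step the researchers describe (identifying a "gender direction" in the embedding space and removing its component from job-related word vectors) can be sketched in a few lines. This is a toy illustration only: the vectors below are invented values, not actual Word2vec embeddings, and the real work operates in 300 dimensions over many word pairs.

```python
# Toy sketch of embedding debiasing: find a gender direction and project
# it out of a job-related word vector. Vectors are made up for clarity.
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv)

# Hypothetical 3-d embeddings; dimension 0 plays the role of a gender axis.
emb = {
    "he":         [ 1.0, 0.2, 0.1],
    "she":        [-1.0, 0.2, 0.1],
    "programmer": [ 0.6, 0.9, 0.3],  # biased: leans toward "he"
}

# Gender direction: the difference between the "he" and "she" vectors.
g = [a - b for a, b in zip(emb["he"], emb["she"])]

def debias(v, g):
    """Remove the component of v along direction g (orthogonal projection)."""
    scale = sum(a * b for a, b in zip(v, g)) / sum(a * a for a in g)
    return [a - scale * b for a, b in zip(v, g)]

before = cosine(emb["programmer"], g)          # ~0.53: noticeably gendered
after = cosine(debias(emb["programmer"], g), g)  # 0.0: orthogonal to gender
```

After projection, "programmer" carries no signal along the gender direction while its other components are untouched, which is the intuition behind "debiased word embeddings" in the study quoted above.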
In this paper, I discuss how hiring is a multi-layered and opaque process and how it will become more difficult to assess employer intent as recruitment processes move online. Because intent is a critical aspect of employment discrimination law, I ultimately suggest four ways to include it in the discussion surrounding algorithmic bias….(More)”
This report from The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative is part of “AI and Bias,” a series that explores ways to mitigate possible biases and create a pathway toward greater fairness in AI and emerging technologies.
Book by Denise Feldner: “This book offers readers a deeper understanding of cyberspace and of how institutions and industries are reinventing themselves, helping them excel in the transition to a fully digitally connected global economy. Though technology plays a key part in this regard, societal acceptance is the most important underlying condition, as the transition poses pressing challenges that cut across companies, developers, governments and workers.
The book explores the challenges and opportunities involved, current and potential future concepts, critical reflections and best practices. It addresses connected societies, new opportunities for governments, the role of trust in digital networks, and future education networks. In turn, a number of representative case studies demonstrate the current state of development in practice….(More)”.
Beth Noveck at The GovLab: “…The term big health data refers to the ability to gather and analyze vast quantities of online information about health, wellness and lifestyle. It includes not only our medical records but data from apps that track what we buy, how often we exercise and how well we sleep, among many other things. It provides an ocean of information about how healthy or ill we are, and unsurprisingly, doctors, medical researchers, healthcare organizations, insurance companies and governments are keen to get access to it. Should they be allowed to?
It’s a huge question, and AARP is partnering with GovLab to learn what older Americans think about it. AARP is a non-profit organization — the largest in the nation and the world — dedicated to empowering Americans to choose how they live as they age. In 2018 it had more than 38 million members. It is a key voice in policymaking in the United States, because it represents the views of people aged over 50 in this country.
Starting today, AARP and the GovLab are using the Internet to capture which issues AARP members feel are the most urgent confronting them, and to discover what worries people more: the use of big health data or the failure to use it.
The answers are not simple. On the one hand, increasing the use and sharing of data could enable doctors to make better diagnoses and interventions to prevent disease and make us healthier. It could lead medical researchers to find cures faster, while the creation of health data businesses could strengthen the economy.
On the other hand, the collection, sharing, and use of big health data could reveal sensitive personal information over which we have little control. This data could be sold without our consent, and be used by entities for surveillance or discrimination, rather than to promote well-being….(More)”.
IAPP: “The French data protection authority, the CNIL, has published a report that explores emerging issues of data protection and freedoms in democracy, technology and citizen participation. The report discusses the emergence of civic technologies and the data protection issues associated with them. The CNIL also uses the report to propose its recommendations for creating “an environment of trust” with civic tech “that allows everyone to exercise their citizenship while respecting their rights and freedoms.” Those recommendations include more implementation of EU General Data Protection Regulation principles and considerations for improved digital communication and participation….(More)”.
Analysis of the state of the art and review of literature by Gianluca Misuraca et al: “This report presents the… results of the review of literature, based on almost 500 academic and grey literature sources, as well as the analysis of digital government policies in the EU Member States, providing a synthetic overview of the main themes and topics of the digital government discourse.
The report depicts the variety of existing conceptualisations and definitions of the digital government phenomenon, the measured and expected effects of the application of more disruptive innovations and emerging technologies in government, as well as key drivers and barriers for transforming the public sector. Overall, the literature review shows that many sources appear overly optimistic with regard to the impact of digital government transformation, although the majority of them are based on normative views or expectations, rather than empirically tested insights.
The authors therefore caution that digital government transformation should be researched empirically and with a due differentiation between evidence and hope. In this respect, the report paves the way to in-depth analysis of the effects that can be generated by digital innovation in public sector organisations. A digital transformation that implies the redesign of the tools and methods used in the machinery of government will in fact require a significant change in the institutional frameworks that regulate and help coordinate the governance systems in which such changing processes are implemented…(More)”.