How to predict citizen engagement in urban innovation projects?


Blogpost by Julien Carbonnell: “Citizen engagement in decision-making has proven to be a key factor for success in a smart city project and a must-have of contemporary democratic regimes. As daily internet users, inhabitants widely inform themselves about their political representatives’ achievements during the mandate, interact with each other on social networks, and form opinions by word of mouth on messaging apps and phone calls.

Unfortunately, most smart-city rankings lack the resources to evaluate the citizen engagement dynamics around the urban innovations deployed. Indeed, this data can’t be found on official open data portals, which focus instead on cities’ infrastructure and quality of life: the number of metro stations, the length of bike lanes, air pollution, and tap water quality. Some rankings also include field investigations, such as the amount of investment in a given urban area and the communication dynamics around a new smart city project.

While this kind of formal information provides a good overview of the official state of development of a city, it gives no insight from the inhabitants themselves, nor does it sound out a city’s street vibes.

So, I’ve been working on filling this gap for the last three years, and I share in Democracy Studio all the elements of my method and the tools built for conducting such analysis. Notably, I have been collecting inhabitants’ responses to a survey study in three case-study cities: Taipei (Taiwan), Tel Aviv (Israel), and Tallinn (Estonia). I collected 366 answers by contacting inhabitants randomly online (Facebook groups, direct messages on LinkedIn, and messaging apps) and in person, at events related to my field of interest (smart-city and urban innovation startups). The resulting variables were fed into machine learning models, which ultimately produced a very satisfying prediction of citizen engagement in my case studies….(More)”.
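To make the modelling step concrete, here is a minimal sketch of how survey-derived variables might feed a classifier. The feature names, synthetic labels, and model choice are hypothetical illustrations, not Carbonnell’s actual pipeline:

```python
# Illustrative sketch: predicting a binary "engaged" label from survey
# features with scikit-learn. All features and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 366  # same order of magnitude as the survey described above

# Hypothetical survey-derived features: e.g. age, daily internet hours,
# social-media activity score, attendance at civic events.
X = rng.normal(size=(n, 4))
# Synthetic label loosely driven by the last two features.
y = (X[:, 2] + X[:, 3] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print(round(acc, 2))
```

With real survey responses, the features would be the answers themselves, and held-out accuracy would indicate how well engagement can be predicted.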

The One-Earth Balance Sheet


Essay by Andrew Sheng: “Modern science arose by breaking down complex problems into their parts. As Alvin Toffler, an American writer and futurist, wrote in his 1984 foreword to the chemist Ilya Prigogine’s classic book “Order out of Chaos”: “One of the most highly developed skills in contemporary Western civilization is dissection: the split-up of problems into their smallest possible components. We are good at it. So good, we often forget to put the pieces back together again.”

Specialization produces efficiency in production and output. But one unfortunate result is that silos yield only a partial perspective from specialist knowledge; very few take a system-wide view of how the parts relate to the whole. When the parts do not fit or work together, the system may break down. As the behavioral economist Daniel Kahneman put it: “We can be blind to the obvious, and we are also blind to our blindness.”

Silos make group collective action more difficult; nation-states, tribes, communities and groups have different ways of knowing and different repositories of knowledge. A new collective mental map is needed, one that moves away from classical Newtonian science, with its linear and mechanical worldview, toward a systems-view of life. The ecologists Fritjof Capra and Pier Luigi Luisi argue that “the major problems of our time — energy, the environment, climate change, food security, financial security — cannot be understood in isolation. They are systemic problems, which means that they are all interconnected and interdependent.”

“Siloed thinking created many of our problems with inequality, injustice and planetary damage.”

A complex, non-linear, systemic view of life sees the whole as a constant interaction between the small and the large: diverse parts that are cooperating and competing at the same time. This organic view of life coincides with the ancient perspective found in numerous cultures — including Chinese, Indian, native Australian and Amerindian — that man and nature are one.

In short, modern Western science has begun to return to the pre-Enlightenment worldview that saw man, God and Earth in somewhat mystic entanglement. The late Chinese scientist Qian Xuesen argued the world was made up of “open giant complex systems” operating within larger open giant complex systems. Human beings themselves are open giant complex systems — every brain has billions of neurons connected to each other through trillions of pathways — continually exchanging and processing information with other humans and the environment. Life is much more complex, dynamic and uncertain than we once assumed.

To describe this dynamic, complex and uncertain systemic whole, we need to evolve trans-disciplinary thinking that integrates the natural, social, biological sciences and arts by transcending disciplinary boundaries. Qian concluded that the only way to describe such systemic complexity and uncertainty is to integrate quantitative with qualitative narratives, exactly what the Nobel Laureate Robert Shiller advocates for in “Narrative Economics.”…(More)”.

The Inevitable Weaponization of App Data Is Here


Joseph Cox at VICE: “…After years of warning from researchers, journalists, and even governments, someone used highly sensitive location data from a smartphone app to track and publicly harass a specific person. In this case, Catholic Substack publication The Pillar said it used location data ultimately tied to Grindr to trace the movements of a priest, and then outed him publicly as potentially gay without his consent. The Washington Post reported on Tuesday that the outing led to his resignation….

The data itself didn’t contain each mobile phone user’s real name, but The Pillar and its partner were able to pinpoint which device belonged to the priest, Burrill, by observing one that appeared at the USCCB staff residence and headquarters, at the locations of meetings he attended, as well as at his family lake house and at an apartment that lists him as a resident. In other words, they managed to do what experts have long said is easy to do: unmask a specific person and their movements across time from a supposedly anonymous dataset.
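The re-identification technique described here boils down to intersecting the sets of pseudonymous device IDs observed at a handful of locations known to be tied to the target. A minimal sketch, with invented device IDs and place names:

```python
# Sketch of re-identification from "anonymous" location data:
# intersect the device IDs seen at several known anchor locations.
from collections import defaultdict

# (device_id, place) sightings from a hypothetical location dataset.
sightings = [
    ("dev_a", "office"), ("dev_b", "office"), ("dev_c", "office"),
    ("dev_a", "residence"), ("dev_c", "residence"),
    ("dev_a", "lake_house"), ("dev_b", "lake_house"),
]

devices_at = defaultdict(set)
for device, place in sightings:
    devices_at[place].add(device)

# A device that shows up at every anchor location is almost certainly
# the target, even though the dataset carries no real names.
anchors = ["office", "residence", "lake_house"]
candidates = set.intersection(*(devices_at[p] for p in anchors))
print(sorted(candidates))  # only "dev_a" appears at all three anchors
```

With enough anchor locations, the candidate set typically shrinks to a single device, which is why experts call such datasets only nominally anonymous.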

A Grindr spokesperson told Motherboard in an emailed statement that “Grindr’s response is aligned with the editorial story published by the Washington Post which describes the original blog post from The Pillar as homophobic and full of unsubstantiated innuendo. The alleged activities listed in that unattributed blog post are infeasible from a technical standpoint and incredibly unlikely to occur. There is absolutely no evidence supporting the allegations of improper data collection or usage related to the Grindr app as purported.”…

“The research from The Pillar aligns to the reality that Grindr has historically treated user data with almost no care or concern, and dozens of potential ad tech vendors could have ingested the data that led to the doxxing,” Zach Edwards, a researcher who has closely followed the supply chain of various sources of data, told Motherboard in an online chat. “No one should be doxxed and outed for adult consenting relationships, but Grindr never treated their own users with the respect they deserve, and the Grindr app has shared user data to dozens of ad tech and analytics vendors for years.”…(More)”.

What Is Behavioral Data Science and How to Get into It?


Blogpost by Ganna Pogrebna: “Behavioral Data Science is a new, emerging, interdisciplinary field, which combines techniques from the behavioral sciences, such as psychology, economics, sociology, and business, with computational approaches from computer science, statistics, data-centric engineering, information systems research and mathematics, all in order to better model, understand and predict behavior.

Behavioral Data Science lies at the interface of all these disciplines (and a growing list of others) — all interested in combining deep knowledge about the questions underlying human, algorithmic, and systems behavior with increasing quantities of data. The kinds of questions this field engages are not only exciting and challenging, but also timely.

Behavioral Data Science can address all these issues (and many more), partly because of the availability of new data sources and partly because of the emergence of new hybrid models that merge behavioral-science and data-science models. The main advantage of these models is that they upgrade machine learning techniques, which essentially operate as black boxes, into fully tractable and explainable ones. Specifically, while a deep learning model can accurately predict which product or brand people will select, it will not tell you what exactly drives people’s preferences; hybrid models, such as anthropomorphic learning, can provide this insight….(More)”
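The contrast between black-box prediction and explainable modelling can be illustrated with a simple interpretable model whose fitted coefficients name the drivers of a choice. This is only an analogy on invented data; the hybrid “anthropomorphic learning” models mentioned above are not publicly specified:

```python
# Illustrative contrast: an interpretable model exposes *which*
# features drive a choice, where a deep net would only predict it.
# Data and feature names are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
# Hypothetical drivers of a product choice: price sensitivity,
# brand familiarity, peer influence.
X = rng.normal(size=(n, 3))
y = (1.5 * X[:, 1] - 1.0 * X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)
# The signed coefficients say which feature drives the choice and in
# which direction -- here brand pulls toward "buy", price away from it.
drivers = dict(zip(["price", "brand", "peers"], model.coef_[0].round(2)))
print(drivers)
```

A black-box classifier trained on the same data might predict equally well, but it would offer no such direct reading of what drives preferences.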

Enhancing teacher deployment in Sierra Leone: Using spatial analysis to address disparity


Blog by Paul Atherton and Alasdair Mackintosh: “Sierra Leone has made significant progress towards educational targets in recent years, but is still struggling to ensure equitable access to quality teachers for all its learners. The government is exploring innovative solutions to tackle this problem. In support of this, Fab Inc. has brought its expertise in data science and education systems, merging the two to use spatial analysis to unpack and explore this challenge….

Figure 1: Pupil-teacher ratio for primary education by district (left); and within Kailahun district, Sierra Leone, by chiefdom (right), 2020.


Source: Mackintosh, A., A. Ramirez, P. Atherton, V. Collis, M. Mason-Sesay, & C. Bart-Williams. 2019. Education Workforce Spatial Analysis in Sierra Leone. Research and Policy Paper. Education Workforce Initiative. The Education Commission.

…Spatial analysis, also referred to as geospatial analysis, is a set of techniques to explain patterns and behaviours in terms of geography and locations. It uses geographical features, such as distances, travel times and school neighbourhoods, to identify relationships and patterns.
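A basic primitive behind such distance-based features is the great-circle (haversine) distance between two coordinates, for instance between a school and a population point. The coordinates below are rough illustrative values, not taken from the Sierra Leone dataset:

```python
# Haversine great-circle distance, a building block of spatial analysis.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km: mean Earth radius

# Roughly Freetown to Kailahun town.
d = haversine_km(8.4657, -13.2317, 8.2773, -10.5739)
print(round(d))
```

Real analyses typically go further, replacing straight-line distance with travel times over road networks, but the straight-line version is the usual starting point.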

Our team, using its expertise in both data science and education systems, examined issues linked to remoteness to produce a clearer picture of Sierra Leone’s teacher shortage. To see how the current education workforce was distributed across the country, and how well it served local populations, we drew on geo-processed population data from the Grid-3 initiative and the Government of Sierra Leone’s Education Data Hub. The project benefited from close collaboration with the Ministry and Teaching Service Commission (TSC).

Our analysis focused on teacher development, training and the deployment of new teachers across regions, drawing on exam data. Surveys of teacher training colleges (TTCs) were conducted to assess how many future teachers will need to be trained to make up for shortages. Gender and subject speciality were analysed to better address local imbalances. The team developed a matching algorithm for teacher deployment, to illustrate how schools’ needs, including aspects of qualifications and subject specialisms, can be matched to teachers’ preferences, including aspects of language and family connections, to improve allocation of both current and future teachers….(More)”
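The matching algorithm itself is not published; as a toy illustration of the idea, here is a brute-force assignment that maximises total teacher-school compatibility over an invented score matrix (real systems would use scalable methods such as the Hungarian algorithm or deferred acceptance):

```python
# Toy teacher-to-school matching: pick the assignment with the highest
# total compatibility score. Scores and names are invented.
from itertools import permutations

teachers = ["T1", "T2", "T3"]
schools = ["S1", "S2", "S3"]
# score[i][j]: how well teacher i fits school j, combining the school's
# needs (subject, qualifications) with the teacher's preferences
# (language, family connections).
score = [
    [7, 2, 5],
    [3, 8, 4],
    [6, 1, 9],
]

# Brute force over all assignments (fine for tiny examples only).
best = max(
    permutations(range(3)),
    key=lambda perm: sum(score[i][perm[i]] for i in range(3)),
)
assignment = {teachers[i]: schools[best[i]] for i in range(3)}
print(assignment)  # the total-score-maximising allocation
```

The point of such a formulation is that both sides' constraints enter one score, so improving the data on either side directly improves the allocation.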

Are we all social scientists now? The rise of citizen social science raises more questions about social science than it answers


Blog by Alexandra Albert: “…In many instances, people outside the academy can, and do, do social research, even when they do not consider what they are doing to be social research, since that is perceived to be the preserve of ‘experts’. What is it about social science that makes it a skilful and expert activity, and how or why is it practised in a way that makes it difficult to do? Citizen social science (CSS) produces tensions between the ideals of including social actors in the generation of information about the everyday, and the notion that many participants do not necessarily feel entitled, or empowered, to participate in the analysis of this information, or in the interpretation of what it means. For example, in the case of the Empty Houses project, set up to explore some of the issues discussed here in more detail, some participants suggested they did not feel comfortable reporting on empty houses because they found them hard to identify and assumed that some prior knowledge or ‘expertise’ was required. CSS is the perfect place to interrogate these tensions, since it challenges the closed nature of social science.

Second, CSS blurs the roles of researcher and researched, creating new responsibilities for participants and researchers alike. A notable distinction between expert and non-expert in social science research lies in critiquing the approach and in interpreting or analysing the data. However, the way traditional social science is done, with critical analysis the preserve of the trained expert, means that many participants do not feel it is their role to do the analysis. Does the professionalisation of observational techniques constitute a different category of sociological data, such that people need to be trained in formal and distinct sociological ways of collecting and analysing data? This is a challenge for research design and execution in CSS, and for the potentially new perspectives that participating in CSS can engender.

Third, in addressing social worlds, CSS questions whether such observations are just a regular part of people’s everyday lives, or whether they entail a more active form of practice in observing everyday life. In this sense, what does it really mean to participate? Is there a distinction between ‘active’ and ‘passive’ observation? Arguably participating in a project is never just about this – it’s more of a conscious choice, and therefore, in some respects, a burden of some sort. This further raises the issue of how to appropriately compensate participants for their time and energy, potentially as co-researchers in a project and co-authors on papers?

Finally, while CSS can rearrange the power dynamics of citizenship, research and knowing, narratives of ‘duty’ to take part, and to ‘do your bit’, necessarily place a greater burden on the individual and raise questions about the supposed emancipatory potential of participatory methods such as CSS….(More)”

Why We Should End the Data Economy


Essay by Carissa Véliz: “…The data economy undermines equality and fairness. You and your neighbor are no longer treated as equal citizens. You aren’t given an equal opportunity because you are treated differently on the basis of your data. The ads and content you have access to, the prices you pay for the same services, and even how long you wait when you call customer service depend on your data.

We are much better at collecting personal data than we are at keeping it safe. But personal data is a serious threat, and we shouldn’t be collecting it in the first place if we are incapable of keeping it safe. Using smartphone location data acquired from a data broker, reporters from The New York Times were able to track military officials with security clearances, powerful lawyers and their guests, and even the president of the United States (through the phone of someone believed to be a Secret Service agent).

Our current data economy is based on collecting as much personal data as possible, storing it indefinitely, and selling it to the highest bidder. Having so much sensitive data circulating freely is reckless. By designing our economy around surveillance, we are building a dangerous structure for social control that is at odds with freedom. In the surveillance society we are constructing, there is no such thing as under the radar. It shouldn’t be up to us to constantly opt out of data collection. The default matters, and the default should be no data collection…(More)”.

Is there a role for consent in privacy?


Article by Robert Gellman: “After decades, we still talk about the role of notice and choice in privacy. Yet there seems to be broad recognition that notice and choice do nothing for the privacy of consumers. Some American businesses cling to notice and choice because they hate all the alternatives. Some legislators draft laws with elements of notice and choice, either because it’s easier to draft a law that way, because they don’t know any better or because they carry water for business.

For present purposes, I will talk about notice and choice generically as consent. Consent is a broader concept than choice, but the difference doesn’t matter for the point I want to make. How you frame consent is complex. There are many alternatives and many approaches. It’s not just a matter of opt-in or opt-out. While I’m discarding issues, I also want to acknowledge and set aside the eight basic Fair Information Practices (FIPs). There is no notice-and-choice principle in the FIPs, and the FIPs are not specifically important here.

Until recently, my view was that consent in almost any form is pretty much death for consumer privacy. No matter how you structure it, websites and others will find a way to wheedle consent from consumers. Those who want to exploit consumer data will cajole, pressure, threaten, mystify, obscure, entice or otherwise coax consumers to agree.

Suddenly, I’m not as sure of my conclusion about consent. What changed my mind? There is a new data point from Apple’s App Tracking Transparency framework. Apple requires mobile application developers to obtain opt-in consent before serving targeted advertising via Apple’s Identifier for Advertisers. Early reports suggest consumers are saying “NO” in overwhelming numbers — overwhelming as in more than 90%.

It isn’t this strong consumer reaction that makes me think consent might possibly have a place. I want to highlight a different aspect of the Apple framework….(More)”.

Engaging with the public about algorithmic transparency in the public sector


Blog by the Centre for Data Ethics and Innovation (UK): “To move forward the recommendation we made in our review into bias in algorithmic decision-making, we have been working with the Central Digital and Data Office (CDDO) and BritainThinks to scope what a transparency obligation could look like in practice, and in particular, which transparency measures would be most effective at increasing public understanding of the use of algorithms in the public sector.

Due to the low levels of awareness about the use of algorithms in the public sector (CDEI polling in July 2020 found that 38% of the public were not aware that algorithmic systems were used to support decisions using personal data), we opted for a deliberative public engagement approach. This involved spending time gradually building up participants’ understanding and knowledge about algorithm use in the public sector and discussing their expectations for transparency, and co-designing solutions together. 

For this project, we worked with a diverse range of 36 members of the UK public, spending over five hours engaging with them over a three-week period. We focused on three use cases chosen to test a range of emotive responses: policing, parking and recruitment.

The final stage was an in-depth co-design session, where participants worked collaboratively to review and iterate prototypes in order to develop a practical approach to transparency that reflected their expectations and needs for greater openness in the public sector use of algorithms. 

What did we find? 

Our research confirmed that awareness and understanding of the use of algorithms in the public sector were fairly low. Algorithmic transparency in the public sector was not a front-of-mind topic for most participants.

However, once participants were introduced to specific examples of potential public sector algorithms, they felt strongly that transparency information should be made available to the public, both citizens and experts. This included desires for: a description of the algorithm; why an algorithm was being used; contact details for more information; the data used; human oversight; potential risks; and the technicalities of the algorithm…(More)”.

To regulate AI, try playing in a sandbox


Article by Dan McCarthy: “For an increasing number of regulators, researchers, and tech developers, the word “sandbox” is just as likely to evoke rulemaking and compliance as it is to conjure images of children digging, playing, and building. Which is kinda the point.

That’s thanks to the rise of regulatory sandboxes, which allow organizations to develop and test new technologies in a low-stakes, monitored environment before rolling them out to the general public. 

Supporters, from both the regulatory and the business sides, say sandboxes can strike the right balance of reining in potentially harmful technologies without kneecapping technological progress. They can also help regulators build technological competency and clarify how they’ll enforce laws that apply to tech. And while regulatory sandboxes originated in financial services, there’s growing interest in using them to police artificial intelligence—an urgent task as AI is expanding its reach while remaining largely unregulated. 

Even for all of its promise, experts told us, the approach should be viewed not as a silver bullet for AI regulation, but instead as a potential step in the right direction. 

Rashida Richardson, an AI researcher and visiting scholar at Rutgers Law School, is generally critical of AI regulatory sandboxes, but still said “it’s worth testing out ideas like this, because there is not going to be any universal model to AI regulation, and to figure out the right configuration of policy, you need to see theoretical ideas in practice.” 

But waiting for the theoretical to become concrete will take time. For example, in April, the European Union proposed AI regulation that would establish regulatory sandboxes to help the EU achieve its aim of responsible AI innovation, mentioning the word “sandbox” 38 times, compared to related terms like “impact assessment” (13 mentions) and “audit” (four). But it will likely take years for the EU’s proposal to become law. 

In the US, some well-known AI experts are working on an AI sandbox prototype, but regulators are not yet in the picture. However, the world’s first and (so far) only AI-specific regulatory sandbox did roll out in Norway this March, as a way to help companies comply with AI-specific provisions of the EU’s General Data Protection Regulation (GDPR). The project provides an early window into how the approach can work in practice.

“It’s a place for mutual learning—if you can learn earlier in the [product development] process, that is not only good for your compliance risk, but it’s really great for building a great product,” according to Erlend Andreas Gjære, CEO and cofounder of Secure Practice, an information security (“infosec”) startup that is one of four participants in Norway’s new AI regulatory sandbox….(More)”