How can data stop homelessness before it starts?


Article by Andrea Danes and Jessica Chamba: “When homelessness in Maidstone, England, soared by 58% over just five years, the Borough Council sought to shift its focus from crisis response to building early-intervention and prevention capacity. Working with EY teams and our UK technology partner, Xantura, the council created and implemented a data-focused tool — called OneView — that enabled the council to tackle their challenges in a new way.

Specifically, OneView’s predictive analytics and natural language generation capabilities enabled participating agencies in Maidstone to bring together their data to identify residents who were at risk of homelessness, and then to intervene before they were actually living on the street. In the initial pilot year, almost 100 households were prevented from becoming homeless, even as the COVID-19 pandemic took hold and grew. And, overall, the rate of homelessness fell by 40%.

As evidenced by the Maidstone model, data analytics and predictive modeling will play an indispensable role in enabling us to realize a very big vision — a world in which everyone has a reliable roof over their heads.

Against that backdrop, it’s important to stress that the roadmap for preventing homelessness has to contain components beyond just better avenues for using data. It must also include shrewd approaches for dealing with complex issues such as funding, standards, governance, cultural differences and informed consent to permit the exchange of personal information, among others. Perhaps most importantly, the work needs to be championed by organizational and governmental leaders who believe transformative, systemic change is possible and are committed to achieving it.

Introducing the Smart Safety Net

To move forward, human services organizations need to look beyond modernizing service delivery to transforming it, and to evolve from integration to intuitive design. New technologies provide opportunities to truly rethink and redesign in ways that would have been impossible in the past.

A Smart Safety Net can shape a bold new future for social care. Doing so will require broad, fundamental changes at an organizational level, more collaboration across agencies, data integration and greater care coordination. At its heart, a Smart Safety Net entails:

  • A system-wide approach to addressing the needs of each individual and family, including pooled funding that supports coordination so that, for example, users in one program are automatically enrolled in other programs for which they are eligible.
  • Human-centered design that genuinely integrates the recipients of services (patients, clients, customers, etc.), as well as their experiences and insights, into the creation and implementation of policies, systems and services that affect them.
  • Data-driven policy, services, workflows, automation and security to improve processes, save money and facilitate accurate, real-time decision-making, especially to advance the overarching priority of nearly every program and service: early intervention and prevention.
  • Frontline case workers who are supported and empowered to focus on their core purpose. With a lower administrative burden, they are able to invest more time in building relationships with vulnerable constituents and act as “coaches” to improve people’s lives.
  • Outcomes-based commissioning of services, measured against a more holistic wellbeing framework, from an ecosystem of public, private and not-for-profit providers, with government acting as system stewards and service integrators…(More)”.
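OneView's actual model and features are not public, but the general shape of the risk-flagging pipeline the article describes — pooling records from several agencies, scoring each household, and surfacing those who cross an intervention threshold — can be sketched. Everything below (agency feeds, indicator names, weights, threshold) is hypothetical and purely illustrative.

```python
# Illustrative sketch only: OneView's real model and data are proprietary.
# We merge records from hypothetical agency feeds keyed by household ID,
# compute a simple weighted risk score, and flag households for early outreach.

RISK_WEIGHTS = {              # hypothetical weights, not OneView's
    "rent_arrears_months": 0.15,
    "benefit_sanction": 0.30,
    "prior_eviction_notice": 0.40,
}

def merge_agency_records(*feeds):
    """Merge per-agency dicts {household_id: {indicator: value}} into one view."""
    merged = {}
    for feed in feeds:
        for household_id, indicators in feed.items():
            merged.setdefault(household_id, {}).update(indicators)
    return merged

def risk_score(indicators):
    """Weighted sum of risk indicators, capped at 1.0."""
    score = sum(RISK_WEIGHTS.get(name, 0.0) * float(value)
                for name, value in indicators.items())
    return min(score, 1.0)

def flag_households(merged, threshold=0.5):
    """Return household IDs whose score crosses the intervention threshold."""
    return sorted(h for h, ind in merged.items() if risk_score(ind) >= threshold)

housing_feed = {"H1": {"rent_arrears_months": 3}, "H2": {"rent_arrears_months": 1}}
benefits_feed = {"H1": {"benefit_sanction": 1}, "H3": {"prior_eviction_notice": 1}}

merged = merge_agency_records(housing_feed, benefits_feed)
print(flag_households(merged))  # H1 scores 0.15*3 + 0.30 = 0.75 and is flagged
```

In a real deployment the weights would be learned from historical outcomes rather than hand-set, and the governance, consent and data-sharing questions the article raises would dominate the engineering effort.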

Stories to Work By


Essay by William E. Spriggs: “In Charlie Chaplin’s 1936 film Modern Times, humans in a factory are reduced to adjuncts to a massive series of cogs and belts. Overlords bark commands from afar to a servant class, and Chaplin’s hapless hero is literally consumed by the machine … and then spit out by it. In the film, the bosses have all the power, and machines keep workers in check.

Modern Times’s dystopian narrative remains with us today. In particular, it is still held by many policymakers who assume that increasing technological progress, whether mechanical or informational, inevitably means that ordinary workers will lose. This view perpetuates itself when policies that could give workers more power in times of technological change are overlooked, while those that disempower workers are adopted. If we are to truly consider science policy for the future, we need to understand how this narrative about workers and technology functions, where it is misleading, and how deliberate policies can build a better world for all….

Today’s tales of pending technological dystopia—echoed in economics papers as well as in movies and news reports—blind us to the lessons we could glean from the massive disruptions of earlier periods of even greater change. Today the threat of AI is portrayed as revolutionary, and previous technological change as slow and inconsequential—but this was never the case. These narratives of technological inevitability limit the tools we have at our disposal to promote equality and opportunity.

The challenges we face today are far from insurmountable: technology is not destiny. Workers are not doomed to be Chaplin’s victim of technology with one toe caught in the gears of progress. We have choices, and the central challenge of science and technology policy for the next century will be confronting those choices head on. Policymakers should focus on the fundamental tasks of shaping how technology is deployed and enacting the economic rules we need to ensure that technology works for us all, rather than only the few….(More)”.

Facial Expressions Do Not Reveal Emotions


Lisa Feldman Barrett at Scientific American: “Do your facial movements broadcast your emotions to other people? If you think the answer is yes, think again. This question is under contentious debate. Some experts maintain that people around the world make specific, recognizable faces that express certain emotions, such as smiling in happiness, scowling in anger and gasping with widened eyes in fear. They point to hundreds of studies that appear to demonstrate that smiles, frowns, and so on are universal facial expressions of emotion. They also often cite Charles Darwin’s 1872 book The Expression of the Emotions in Man and Animals to support the claim that universal expressions evolved by natural selection.

Other scientists point to a mountain of counterevidence showing that facial movements during emotions vary too widely to be universal beacons of emotional meaning. People may smile in hatred when plotting their enemy’s downfall and scowl in delight when they hear a bad pun. In Melanesian culture, a wide-eyed gasping face is a symbol of aggression, not fear. These experts say the alleged universal expressions just represent cultural stereotypes. To be clear, both sides in the debate acknowledge that facial movements vary for a given emotion; the disagreement is about whether there is enough uniformity to detect what someone is feeling.

This debate is not just academic; the outcome has serious consequences. Today you can be turned down for a job because a so-called emotion-reading system watching you on camera applied artificial intelligence to evaluate your facial movements unfavorably during an interview. In a U.S. court of law, a judge or jury may sometimes hand down a harsher sentence, even death, if they think a defendant’s face showed a lack of remorse. Children in preschools across the country are taught to recognize smiles as happiness, scowls as anger and other expressive stereotypes from books, games and posters of disembodied faces. And for children on the autism spectrum, some of whom have difficulty perceiving emotion in others, these teachings do not translate to better communication….Emotion AI systems, therefore, do not detect emotions. They detect physical signals, such as facial muscle movements, not the psychological meaning of those signals. The conflation of movement and meaning is deeply embedded in Western culture and in science. An example is a recent high-profile study that applied machine learning to more than six million internet videos of faces. The human raters, who trained the AI system, were asked to label facial movements in the videos, but the only labels they were given to use were emotion words, such as “angry,” rather than physical descriptions, such as “scowling.” Moreover, there was no objective way to confirm what, if anything, the anonymous people in the videos were feeling in those moments…(More)”.
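The movement-versus-meaning distinction Barrett draws can be made concrete with facial action-unit codes of the kind used in the Facial Action Coding System (FACS): a movement-level label describes what the face does, while an emotion label asserts what the person feels. The mapping below is an illustrative sketch, not a real taxonomy; the point is that forcing raters to pick a single emotion word discards the ambiguity the article's examples highlight.

```python
# Sketch of the labeling distinction: physical (movement) labels vs. emotion
# labels. The "possible meanings" lists are illustrative, drawn from the
# article's own examples, and are not an empirical mapping.

PHYSICAL_DESCRIPTION = {   # movement-level labels (what the face does)
    "AU4":  "brow lowerer (scowl)",
    "AU12": "lip corner puller (smile)",
}

# One movement is compatible with many mental states, so a rater forced to
# choose one emotion word (e.g., "angry") erases that one-to-many structure.
POSSIBLE_MEANINGS = {
    "AU4":  ["anger", "concentration", "delight at a bad pun"],
    "AU12": ["happiness", "politeness", "hatred while plotting"],
}

def describe(action_unit):
    """Report a movement with the range of states it could plausibly signal."""
    movement = PHYSICAL_DESCRIPTION[action_unit]
    meanings = POSSIBLE_MEANINGS[action_unit]
    return f"{movement}: could signal {', '.join(meanings)}"

print(describe("AU4"))
```

A labeling scheme that records the movement code and treats the emotion as a separate, uncertain inference would avoid baking the conflation into the training data itself.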

Citizen power mobilized to fight against mosquito-borne diseases


GigaBlog: “Just out in GigaByte is the latest data release from Mosquito Alert, a citizen science system for investigating and managing disease-carrying mosquitoes; it is part of our WHO-sponsored series on vector-borne human diseases. The release presents 13,700 new database records in the Global Biodiversity Information Facility (GBIF) repository, all linked to photographs submitted by citizen volunteers and validated by entomological experts to determine whether they provide evidence of the presence of any of the mosquito vectors of top concern in Europe. This is the latest paper in a new special issue presenting biodiversity data for research on human health, incentivising data sharing to fill particular species and geographic gaps. As big fans of citizen science (and Mosquito Alert), it’s great to see this new data showcased in the series.

Vector-borne diseases account for more than 17% of all infectious diseases in humans. There are large gaps in knowledge related to these vectors, and data mobilization campaigns are required to improve data coverage to help research on vector-borne diseases and human health. As part of these efforts, GigaScience Press has partnered with GBIF, supported by TDR, the Special Programme for Research and Training in Tropical Diseases hosted at the World Health Organization, to launch the “Vectors of human disease” thematic series. To incentivise the sharing of this extremely important data, Article Processing Charges have been waived to assist with the global call for novel data. This effort has already led to the release of newly digitised location data for over 600,000 vector specimens observed across the Americas and Europe.

Beyond crediting such a large number of volunteers, this large public collection of validated mosquito images can be used to train machine-learning models for vector detection and classification. Sharing the data in this novel manner meant the authors of these papers had to set up a new credit system to evaluate contributions from multiple and diverse collaborators, including university researchers, entomologists, and non-academics such as independent researchers and citizen scientists. In the GigaByte paper these are acknowledged through collaborative authorship for the Mosquito Alert Digital Entomology Network and the Mosquito Alert Community…(More)”.
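Because the records land in GBIF, anyone can retrieve them through GBIF's public occurrence-search REST API. A minimal sketch of building such a query follows; the endpoint and parameter names match GBIF's documented API, while the species and country are just example choices (the tiger mosquito in Spain, where Mosquito Alert began).

```python
# Minimal sketch of querying GBIF's public occurrence API for vector records.
# Endpoint and parameters follow GBIF's REST API; the species is an example.
from urllib.parse import urlencode
from urllib.request import urlopen
import json

GBIF_OCCURRENCE_API = "https://api.gbif.org/v1/occurrence/search"

def build_query(scientific_name, country=None, limit=20):
    """Build a GBIF occurrence-search URL for a given vector species."""
    params = {"scientificName": scientific_name, "limit": limit}
    if country:
        params["country"] = country  # ISO 3166-1 alpha-2 code
    return f"{GBIF_OCCURRENCE_API}?{urlencode(params)}"

def fetch_occurrences(url):
    """Fetch and decode one page of occurrence records (requires network)."""
    with urlopen(url) as resp:
        return json.load(resp)["results"]

url = build_query("Aedes albopictus", country="ES", limit=5)
print(url)
# records = fetch_occurrences(url)  # uncomment when online
```

Each returned record carries coordinates, dates and, for Mosquito Alert submissions, links back to the validated photographs, which is what makes the dataset usable for training detection models.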

The Future of Open Data: Law, Technology and Media


Book edited by Pamela Robinson and Teresa Scassa: “The Future of Open Data flows from a multi-year Social Sciences and Humanities Research Council (SSHRC) Partnership Grant project that set out to explore open government geospatial data from an interdisciplinary perspective. Researchers on the grant adopted a critical social science perspective grounded in the imperative that the research should be relevant to government and civil society partners in the field.

This book builds on the knowledge developed during the course of the grant and asks the question, “What is the future of open data?” The contributors’ insights into the future of open data combine observations from five years of research about the Canadian open data community with a critical perspective on what could and should happen as open data efforts evolve.

Each of the chapters in this book addresses different issues and each is grounded in distinct disciplinary or interdisciplinary perspectives. The opening chapter reflects on the origins of open data in Canada and how it has progressed to the present date, taking into account how the Indigenous data sovereignty movement intersects with open data. A series of chapters address some of the pitfalls and opportunities of open data and consider how the changing data context may impact sources of open data, limits on open data, and even liability for open data. Another group of chapters considers new landscapes for open data, including open data in the global South, the data priorities of local governments, and the emerging context for rural open data…(More)”.

What makes administrative data research-ready?


Paper by Louise Mc Grath-Lone et al: “Administrative data are a valuable research resource, but are under-utilised in the UK due to governance, technical and other barriers (e.g., the time and effort taken to gain secure data access). In recent years, there has been considerable government investment in making administrative data “research-ready”, but there is no definition of what this term means. A common understanding of what constitutes research-ready administrative data is needed to establish clear principles and frameworks for their development and the realisation of their full research potential…Overall, we screened 2,375 records and identified 38 relevant studies published between 2012 and 2021. Most related to administrative data from the UK and US and particularly to health data. The term research-ready was used inconsistently in the literature and there was some conflation with the concept of data being ready for statistical analysis. From the thematic analysis, we identified five defining characteristics of research-ready administrative data: (a) accessible, (b) broad, (c) curated, (d) documented and (e) enhanced for research purposes…
Our proposed characteristics of research-ready administrative data could act as a starting point to help data owners and researchers develop common principles and standards. In the more immediate term, the proposed characteristics are a useful framework for cataloguing existing research-ready administrative databases and relevant resources that can support their development…(More)”.

Public Data Commons: A public-interest framework for B2G data sharing in the Data Act


Policy Brief by Alek Tarkowski & Francesco Vogelezang: “It is by now a truism that data is a crucial resource in the digital era. Yet today access to data and the capacity to make use of data and to benefit from it are unevenly distributed. A new understanding of data is needed, one that takes into account society-wide data sharing and value creation. This would redress power asymmetries related to data ownership and the capacity to use it, and fill the public value gap with regard to data-driven growth and innovation.

Public institutions are also in a unique position to safeguard the rule of law, ensure democratic control and accountability, and drive the use of data to generate non-economic value.

The “data sharing for public good” narratives have been presented for over a decade, arguing that privately-owned big data should be used for the public interest. The idea of the commons has attracted the attention of policymakers interested in developing institutional responses that can advance public interest goals. The concept of the data commons offers a generative model of property that is well-aligned with the ambitions of the European data strategy. And by employing the idea of the data commons, the public debate can be shifted beyond an opposition between treating data as a commodity or protecting it as the object of fundamental rights.

The European Union is uniquely positioned to deliver a data governance framework that ensures Business-to-Government (B2G) data sharing in the public interest. The policy vision for such a framework has been presented in the European strategy for data, and specific recommendations for a robust B2G data sharing model have been made by the Commission’s high-level expert group.

There are three connected objectives that must be achieved through a B2G data sharing framework. Firstly, access to data and the capacity to make use of it needs to be ensured for a broader range of actors. Secondly, exclusive corporate control over data needs to be reduced. And thirdly, the information power of the state and its generative capacity should be strengthened.

Yet the current proposal for the Data Act fails to meet these goals, due to a narrow B2G data sharing mandate limited only to situations of public emergency and exceptional need.

This policy brief therefore presents a model for public interest B2G data sharing, aimed to complement the current proposal. This framework would also create a robust baseline for sectoral regulations, like the recently proposed Regulation on the European Health Data Space. The proposal includes the creation of the European Public Data Commons, a body that acts as a recipient and clearinghouse for the data made available…(More)”.

Open data: The building block of 21st century (open) science


Paper by Corina Pascu and Jean-Claude Burgelman: “Given this irreversibility of data-driven and reproducible science and the role machines will play in it, it is foreseeable that the production of scientific knowledge will be more like a constant flow of updated data-driven outputs, rather than a unique publication/article of some sort. Indeed, the future of scholarly publishing will be based more on the publication of data/insights, with the article as a narrative.

For open data to be valuable, reproducibility is a sine qua non (King 2011; Piwowar, Vision and Whitlock 2011) and—equally important, as most of the societal grand challenges require several sciences to work together—essential for interdisciplinarity.

This trend correlates with the ongoing epistemic shift in the rationale of science: from demonstrating the absolute truth via a unique narrative (article or publication), to the best possible understanding of what is needed at a given moment to move forward in the production of knowledge to address problem “X” (de Regt 2017).

Science in the 21st century will thus be more “liquid,” enabled by open science and data practices and supported or even co-produced by artificial intelligence (AI) tools and services: a continuous flow of knowledge produced and used by (mainly) machines and people. In this paradigm, an article will be the “atomic” entity and often the least important output of the knowledge stream and scholarship production. Publishing will offer in the first place a platform where all parts of the knowledge stream will be made available as such via peer review.

The new frontier in open science, and where most future revenue will be made, will be value-added data services (such as mining, intelligence, and networking) for people and machines. The use of AI is on the rise in society, but also in all aspects of research and science: what can be put in an algorithm will be put; the machines and deep learning add factor “X.”

AI services for science are already being developed along the research process: data discovery and analysis and knowledge extraction out of research artefacts are accelerated with the use of AI. AI technologies also help to maximize the efficiency of the publishing process and make peer review more objective (Table 1).

Table 1. Examples of AI services for science already being developed. (Abbreviation: AI, artificial intelligence. Source: Authors’ research based on public sources, 2021.)

Ultimately, actionable knowledge and translation of its benefits to society will be handled by humans in the “machine era” for decades to come. But as computers are indispensable research assistants, we need to make what we publish understandable to them.

The availability of data that are “FAIR by design” and shared Application Programming Interfaces (APIs) will allow new ways of collaboration between scientists and machines to make the best use of research digital objects of any kind. The more findable, accessible, interoperable, and reusable (FAIR) data resources will become available, the more it will be possible to use AI to extract and analyze new valuable information. The main challenge is to master the interoperability and quality of research data…(More)”.

How can digital public technologies accelerate progress on the Sustainable Development Goals?


Report by George Ingram, John W. McArthur, and Priya Vora: “…There is no singular relationship between access to digital technologies and SDG outcomes. Country- and issue-specific assessments are essential. Sound approaches will frequently depend on the underlying physical infrastructure and economic systems. Rwanda, for instance, has made tremendous progress on SDG health indicators despite high rates of income poverty and internet poverty. This contrasts with Burkina Faso, which has lower income poverty and internet poverty but higher child mortality.

We draw from an OECD typology to identify three layers of a digital ecosystem: Physical infrastructure, platform infrastructure, and apps-level products. Physical and platform layers of digital infrastructure provide the rules, standards, and security guarantees so that local market innovators and governments can develop new ideas more rapidly to meet ever-changing circumstances. We emphasize five forms of DPT platform infrastructure that can play important roles in supporting SDG acceleration:

  • Personal identification and registration infrastructure allows citizens and organizations to have equal access to basic rights and services;
  • Payments infrastructure enables efficient resource transfer with low transaction costs;
  • Knowledge infrastructure links educational resources and data sets in an open or permissioned way;
  • Data exchange infrastructure enables interoperability of independent databases; and
  • Mapping infrastructure intersects with data exchange platforms to empower geospatially enabled diagnostics and service delivery opportunities.

Each of these platform types can contribute directly or indirectly to a range of SDG outcomes. For example, a person’s ability to register their identity with public sector entities is fundamental to everything from a birth certificate (SDG target 16.9) to a land title (SDG 1.4), bank account (SDG 8.10), driver’s license, or government-sponsored social protection (SDG 1.3). It can also ensure access to publicly available basic services, such as access to public schools (SDG 4.1) and health clinics (SDG 3.8).

At least three levers can help “level the playing field” such that a wide array of service providers can use the physical and platform layers of digital infrastructure equally: (1) public ownership and governance; (2) public regulation; and (3) open code, standards, and protocols. In practice, DPTs are typically built and deployed through a mix of levers, enabling different public and private actors to extract benefits through unique pathways….(More)”.

Facebook-owner Meta to share more political ad targeting data


Article by Elizabeth Culliford: “Facebook owner Meta Platforms Inc (FB.O) will share more data on targeting choices made by advertisers running political and social-issue ads in its public ad database, it said on Monday.

Meta said it would also include detailed targeting information for these individual ads in its “Facebook Open Research and Transparency” database used by academic researchers, in an expansion of a pilot launched last year.

“Instead of analyzing how an ad was delivered by Facebook, it’s really going and looking at an advertiser strategy for what they were trying to do,” said Jeff King, Meta’s vice president of business integrity, in a phone interview.

The social media giant has faced pressure in recent years to provide transparency around targeted advertising on its platforms, particularly around elections. In 2018, it launched a public ad library, though some researchers criticized it for glitches and a lack of detailed targeting data. Meta said the ad library will soon show a summary of targeting information for social issue, electoral or political ads run by a page….The company has run various programs with external researchers as part of its transparency efforts. Last year, it said a technical error meant flawed data had been provided to academics in its “Social Science One” project…(More)”.