Patients are Pooling Data to Make Diabetes Research More Representative


Blog by Tracy Kariuki: “Saira Khan-Gallo knows how overwhelming managing and living healthily with diabetes can be. As a person living with type 1 diabetes for over two decades, she understands how tracking glucose levels, blood pressure, blood cholesterol, insulin intake, and, and, and…could all feel like drowning in an infinite pool of numbers.

But that doesn’t need to be the case. This is why Tidepool, a non-profit tech organization composed of caregivers and other people living with diabetes such as Gallo, is transforming diabetes data management. Its data visualization platform enables users to make sense of the data and derive insights into their health status….

Through its Big Data Donation Project, Tidepool has been supporting the advancement of diabetes research by sharing anonymized data from people living with diabetes with researchers.

To date, more than 40,000 individuals have chosen to donate data uploaded from their diabetes devices, such as blood glucose meters, insulin pumps, and continuous glucose monitors, which Tidepool then shares with students, academics, researchers, and industry partners, making the database larger than many clinical trials. For instance, Oregon Health & Science University has used datasets collected from Tidepool to build an algorithm that predicts hypoglycemia (low blood sugar), with the goal of advancing closed-loop therapy for diabetes management…(More)”.

A new way to look at data privacy


Article by Adam Zewe: “Imagine that a team of scientists has developed a machine-learning model that can predict whether a patient has cancer from lung scan images. They want to share this model with hospitals around the world so clinicians can start using it in diagnosis.

But there’s a problem. To teach their model how to predict cancer, they showed it millions of real lung scan images, a process called training. Those sensitive data, which are now encoded into the inner workings of the model, could potentially be extracted by a malicious agent. The scientists can prevent this by adding noise, or generic randomness, to the model, making it harder for an adversary to guess the original data. However, perturbation reduces a model’s accuracy, so the less noise one needs to add, the better.

MIT researchers have developed a technique that enables the user to potentially add the smallest amount of noise possible, while still ensuring the sensitive data are protected.

The researchers created a new privacy metric, which they call Probably Approximately Correct (PAC) Privacy, and built a framework based on this metric that can automatically determine the minimal amount of noise that needs to be added. Moreover, this framework does not need knowledge of the inner workings of a model or its training process, which makes it easier to use for different types of models and applications.

In several cases, the researchers show that the amount of noise required to protect sensitive data from adversaries is far less with PAC Privacy than with other approaches. This could help engineers create machine-learning models that provably hide training data, while maintaining accuracy in real-world settings…

A fundamental question in data privacy is: How much sensitive data could an adversary recover from a machine-learning model with noise added to it?

Differential Privacy, one popular privacy definition, says privacy is achieved if an adversary who observes the released model cannot infer whether an arbitrary individual’s data were used in the training process. But provably preventing an adversary from distinguishing data usage often requires large amounts of noise to obscure it. This noise reduces the model’s accuracy.
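To make the noise-calibration idea concrete, here is a minimal sketch of the classic Gaussian mechanism used in differential privacy, with invented numbers. It illustrates the general trade-off described above; it is not code from the MIT work:

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng=None):
    """Release `value` with Gaussian noise giving (epsilon, delta)-DP.

    `sensitivity` is the L2 sensitivity: the most any single person's
    data can change `value`. A smaller epsilon means stronger privacy,
    which forces a larger noise scale sigma.
    """
    rng = rng or np.random.default_rng()
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(0.0, sigma, size=np.shape(value))

# Example: privately release the mean of a bounded attribute.
data = np.clip([4.2, 5.1, 3.8, 6.0, 4.9], 0, 10)
true_mean = data.mean()
# One person can shift the mean of n values in [0, 10] by at most 10/n.
sensitivity = 10 / len(data)
print(true_mean, gaussian_mechanism(true_mean, sensitivity, epsilon=1.0, delta=1e-5))
```

Note how the noise scale grows as epsilon shrinks: the stronger the privacy guarantee, the more accuracy is sacrificed, which is exactly the tension PAC Privacy tries to ease.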

PAC Privacy looks at the problem a bit differently. It characterizes how hard it would be for an adversary to reconstruct any part of randomly sampled or generated sensitive data after noise has been added, rather than only focusing on the distinguishability problem…(More)”
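The black-box flavor of this approach can be sketched as follows: run the mechanism many times on resampled data, measure how much its output varies, and scale the noise to that spread. This is only a loose illustration of the sampling idea described in the article (the actual PAC Privacy framework derives its noise from formal bounds), and all names and numbers below are invented:

```python
import numpy as np

def pac_style_noise(mechanism, sample_data, n_trials=200, scale=1.0, rng=None):
    """Illustrative black-box noise calibration.

    Repeatedly runs `mechanism` on bootstrap resamples of the data and
    returns a per-coordinate noise level proportional to how much the
    output varies. No knowledge of the mechanism's internals is needed.
    """
    rng = rng or np.random.default_rng()
    outputs = np.array([
        mechanism(rng.choice(sample_data, size=len(sample_data), replace=True))
        for _ in range(n_trials)
    ])
    return scale * outputs.std(axis=0)

# Toy mechanism: a two-number summary of a dataset.
summary = lambda d: np.array([d.mean(), d.max()])
data = np.random.default_rng(0).normal(50, 10, size=1000)
sigma = pac_style_noise(summary, data)
release = summary(data) + np.random.default_rng(1).normal(0.0, sigma)
print(sigma, release)
```

The intuition: an output that barely changes when the underlying data are resampled leaks little about any individual record, so it needs little noise; a highly data-sensitive output needs more.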

AI and the automation of work


Essay by Benedict Evans: “…We should start by remembering that we’ve been automating work for 200 years. Every time we go through a wave of automation, whole classes of jobs go away, but new classes of jobs get created. There is frictional pain and dislocation in that process, and sometimes the new jobs go to different people in different places, but over time the total number of jobs doesn’t go down, and we have all become more prosperous.

When this is happening to your own generation, it seems natural and intuitive to worry that this time, there aren’t going to be those new jobs. We can see the jobs that are going away, but we can’t predict what the new jobs will be, and often they don’t exist yet. We know (or should know), empirically, that there always have been those new jobs in the past, and that they weren’t predictable either: no-one in 1800 would have predicted that in 1900 a million Americans would work on ‘railways’ and no-one in 1900 would have predicted ‘video post-production’ or ‘software engineer’ as employment categories. But it seems insufficient to take it on faith that this will happen now just because it always has in the past. How do you know it will happen this time? Is this different?

At this point, any first-year economics student will tell us that this is answered by, amongst other things, the ‘Lump of Labour’ fallacy.

The Lump of Labour fallacy is the misconception that there is a fixed amount of work to be done, and that if some work is taken by a machine then there will be less work for people. But if it becomes cheaper to use a machine to make, say, a pair of shoes, then the shoes are cheaper, more people can buy shoes, and they have more money to spend on other things besides, and we discover new things we need or want, and new jobs. The efficiency gain isn’t confined to the shoe: generally, it ripples outward through the economy and creates new prosperity and new jobs. So, we don’t know what the new jobs will be, but we have a model that says, not just that there always have been new jobs, but why that is inherent in the process. Don’t worry about AI!

The most fundamental challenge to this model today, I think, is to say that no, what’s really been happening for the last 200 years of automation is that we’ve been moving up the scale of human capability…(More)”.

Why Citizen-Driven Policy Making Is No Longer A Fringe Idea


Article by Tatjana Buklijas: “Deliberative democracy is a term that would have been met with blank stares in academic and political circles just a few decades ago.

Yet this approach, which examines ways to directly connect citizens with decision-making processes, has now become central to many calls for government reform across the world. 

This surge in interest was driven, first, by the 2008 financial crisis. After the banking crash, there was a crisis of trust in democratic institutions. In Europe and the United States, populist political movements helped push public sentiment in an increasingly anti-establishment direction. 

The second was the perceived inability of representative democracy to effectively respond to long-term, intergenerational challenges, such as climate change and environmental decline. 

Within the past few years, hundreds of citizens’ assemblies, juries and other forms of ‘minipublics’ have met to learn, deliberate and produce recommendations on topics ranging from housing shortages and COVID-19 policies to climate action.

One of the most recent assemblies in the United Kingdom was the People’s Plan for Nature that produced a vision for the future of nature, and the actions society must take to protect and renew it. 

When it comes to climate action, experts argue that we need to move beyond showpiece national and international goal-setting, and bring decision-making closer to home. 

Scholars say that local and regional minipublics should be used much more frequently to produce climate policies, as this is where citizens experience the impact of the changing climate and act to make everyday changes.

While some policymakers are critical of deliberative democracy and see these processes as redundant alongside existing deliberative bodies, such as national parliaments, others are more supportive. They view them as a way to get a better understanding both of what the public thinks and of how it might choose to implement change, after being given the chance to learn and deliberate on key questions.

Research has shown that the cognitive diversity of minipublics ensures a better quality of decision-making than that of more experienced but more homogenous traditional decision-making bodies…(More)”.

Destination? Care Blocks!


Blog by Natalia González Alarcón, Hannah Chafetz, Diana Rodríguez Franco, Uma Kalkar, Bapu Vaitla, & Stefaan G. Verhulst: ““Time poverty” caused by an overload of unpaid care work, such as washing, cleaning, cooking, and caring for care-receivers, is a structural consequence of gender inequality. In the City of Bogotá, 1.2 million women (30% of the city’s female population) carry out unpaid care work full-time. If such work were compensated, it would represent 13% of Bogotá’s GDP and 20% of the country’s GDP. Moreover, the care burden falls disproportionately on women’s shoulders and prevents them from furthering their education, achieving financial autonomy, participating in their communities, and tending to their personal wellbeing.

To address the care burden and its spillover consequences on women’s economic autonomy, well-being and political participation, in October 2020, Bogotá Mayor Claudia López launched the Care Block Initiative. Care Blocks, or Manzanas del cuidado, are centralized areas for women’s economic, social, medical, educational, and personal well-being and advancement. They provide services simultaneously for caregivers and care-receivers.

As the program expands from 19 existing Care Blocks to 45 Care Blocks by the end of 2035, decision-makers face another issue: mobility is a critical and often limiting factor for women when accessing Care Blocks in Bogotá.

On May 19th, 2023, The GovLab, Data2X, and the Secretariat for Women’s Affairs in the City Government of Bogotá co-hosted a studio that aimed to scope a purposeful and gender-conscious data collaborative addressing the mobility-related issues that limit access to Care Blocks in Bogotá. Convening experts across the gender, mobility, policy, and data ecosystems, the studio focused on (1) prioritizing the critical questions as they relate to mobility and access to Care Blocks and (2) identifying the data sources and actors that could be tapped to set up a new data collaborative…(More)”.

Can AI help governments clean out bureaucratic “Sludge”?


Blog by Abhi Nemani: “Government services often entail a plethora of paperwork and processes that can be exasperating and time-consuming for citizens. Whether it’s applying for a passport, filing taxes, or registering a business, chances are one has encountered some form of sludge.

Sludge is a term coined by Cass Sunstein, a legal scholar and former administrator of the White House Office of Information and Regulatory Affairs, in his straightforward book Sludge, to describe unnecessarily effortful processes, bureaucratic procedures, and other barriers to desirable outcomes in government services…

So how can sludge be reduced or eliminated in government services? Sunstein suggests that one way to achieve this is to conduct Sludge Audits, which are systematic evaluations of the costs and benefits of existing or proposed sludge. He also recommends that governments adopt ethical principles and guidelines for the design and use of public services. He argues that by reducing sludge, governments can enhance the quality of life and well-being of their citizens.
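To give a feel for what a Sludge Audit quantifies, here is a toy back-of-the-envelope calculation in Python. Every figure and scenario below is invented for illustration and does not come from Sunstein’s book or any real audit:

```python
# Toy sludge audit: quantify the time cost of a hypothetical permit process.
applicants_per_year = 50_000
hours_per_application = 3.5   # form-filling, document gathering, queueing
avg_hourly_value = 25.0       # dollars; opportunity cost of applicants' time
abandonment_rate = 0.12       # share who give up and forgo the benefit

time_cost = applicants_per_year * hours_per_application * avg_hourly_value
foregone = int(applicants_per_year * abandonment_rate)

print(f"Annual time cost of the process: ${time_cost:,.0f}")
print(f"Applicants who abandon it each year: {foregone:,}")
# An audit would weigh these costs against whatever errors or fraud the
# paperwork actually prevents, then target the steps with the worst ratio.
```

Even this crude arithmetic makes the point: a few hours of burden multiplied across tens of thousands of applicants adds up to millions of dollars of lost time, which is the kind of figure a Sludge Audit surfaces.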

One example of sludge reduction in government is the simplification and automation of tax filing in some countries. According to a study by the World Bank, countries that have implemented electronic tax filing systems have reduced the time and cost of tax compliance for businesses and individuals. The study also found that electronic tax filing systems have improved tax administration efficiency, transparency, and revenue collection. Some countries, such as Estonia and Chile, have gone further by pre-filling tax returns with information from various sources, such as employers, banks, and other government agencies. This reduces the burden on taxpayers to provide or verify data, and increases the accuracy and completeness of tax returns.

Future Opportunities for AI in Cutting Sludge

AI technology is rapidly evolving, and its potential applications are manifold. Here are a few opportunities for further AI deployment:

  • AI-assisted policy design: AI can analyze vast amounts of data to inform policy design, identifying areas of administrative burden and suggesting improvements.
  • Smart contracts and blockchain: These technologies could automate complex procedures, such as contract execution or asset transfer, reducing the need for paperwork.
  • Enhanced citizen engagement: AI could personalize government services, making them more accessible and less burdensome.

Key Takeaways:

  • AI could play a significant role in policy design, contract execution, and citizen engagement.
  • These technologies hold the potential to significantly reduce sludge…(More)”.

When What’s Right Is Also Wrong: The Pandemic As A Corporate Social Responsibility Paradox


Article by Heidi Reed: “When the COVID-19 pandemic first hit, businesses were faced with difficult decisions where making the ‘right choice’ just wasn’t possible. For example, if a business chose to shut down, it might protect employees from catching COVID, but at the same time, it would leave them without a paycheck. This was particularly true in the U.S., where the government played a more limited role in regulating business behavior, leaving managers and owners to make hard choices.

In this way, the pandemic is a societal paradox in which the social objectives of public health and economic prosperity are both interdependent and contradictory. How does the public judge businesses then when they make decisions favoring one social objective over another? To answer this question, I qualitatively surveyed the American public at the start of the COVID-19 crisis about what they considered to be responsible and irresponsible business behavior in response to the pandemic. Analyzing their answers led me to create the 4R Model of Moral Sensemaking of Competing Social Problems.

The 4R Model relies on two dimensions: the extent to which people prioritize one social problem over another and the extent to which they exhibit psychological discomfort (i.e., cognitive dissonance). In the first mode, Reconcile, people view the problems as compatible. There is then no need to prioritize, and no resulting dissonance. These people think, “Businesses can just convert to making masks to help the cause and still make a profit.”

The second mode, Resign, similarly does not prioritize one problem over another; however, the problems are seen as competing, suggesting a high level of cognitive dissonance. These people might say, “It’s dangerous to stay open, but if the business closes, people will lose their jobs. Both decisions are bad.”

In the third mode, Ranking, people use prioritization to reduce cognitive dissonance. These people say things like, “I understand people will be fired, but it’s more important to stop the virus.”

In the fourth and final mode, Rectify, people start by ranking but show signs of lingering dissonance as they acknowledge the harm created by prioritizing one problem over another. Unlike with the Resign mode, they try to find ways to reduce this harm. A common response in this mode would be, “Businesses should shut down, but they should also try to help employees file for unemployment.”

The 4R model has strong implications for other grand challenges where there may be competing social objectives such as in addressing climate change. To this end, the typology helps corporate social responsibility (CSR) decision-makers understand how they may be judged when businesses are forced to re- or de-prioritize CSR dimensions. In other words, it helps us understand how people make moral sense of business behavior when the right thing to do is paradoxically also the wrong thing…(More)”

Brazil launches participatory national planning process


Article by Tarson Núñez and Luiza Jardim: “At a time when signs of a crisis in democracy are prevalent around the world, the Brazilian government is seeking to expand and deepen the active participation of citizens in its decisions. The new administration of Luiz Inácio Lula da Silva believes that more democracy is needed to rebuild citizens’ trust in political processes, and it has just launched one of its main initiatives, the Participatory Pluriannual Plan (PPA Participativo). The PPA sets the goals and objectives for Brazil over the next four years, and Lula is determined not only to allow but to facilitate public participation in its development. 

On May 11, the federal government held the first state plenary for the Participatory PPA, an assembly open to all citizens, social movements and civil society organizations. Participants at the state plenaries are able to discuss proposals and deliberate on the government’s public policies. Over the next two months, government officials will travel to the capitals of the country’s 26 states as well as the federal district (the capital of Brazil) to listen to people present their priorities. If they prefer, people can also submit their suggestions through a digital platform (Decidim, accessible only to people in Brazil) or the Interconselhos Forum, which brings together various councils and civil society groups…(More)”.

From the Economic Graph to Economic Insights: Building the Infrastructure for Delivering Labor Market Insights from LinkedIn Data


Blog by Patrick Driscoll and Akash Kaura: “LinkedIn’s vision is to create economic opportunity for every member of the global workforce. Since its inception in 2015, the Economic Graph Research and Insights (EGRI) team has worked to make this vision a reality by generating a range of labor market insights.

In this post, we’ll describe how the EGRI Data Foundations team (Team Asimov) leverages LinkedIn’s cutting-edge data infrastructure tools such as Unified Metrics Platform, Pinot, and DataHub to ensure we can deliver data and insights robustly, securely, and at scale to a myriad of partners. We will illustrate this through a case study of how we built the pipeline for our most well-known and oft-cited flagship metric: the LinkedIn Hiring Rate…(More)”.
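For readers unfamiliar with the metric, a hiring-rate computation can be sketched roughly as: count the members who start a new position in a period and divide by the membership in scope. The toy pandas version below is our own illustration with invented data, not LinkedIn’s actual pipeline (which runs on the platform tools named above at a vastly larger scale):

```python
import pandas as pd

# Toy position-change events standing in for standardized profile updates.
events = pd.DataFrame({
    "member_id": [1, 2, 2, 3, 4],
    "new_employer_start": pd.to_datetime(
        ["2023-01-15", "2023-01-20", "2023-02-03", "2023-02-10", "2023-02-28"]),
})
total_members = 1_000  # denominator: all members in scope for the period

monthly_hires = (events
                 .assign(month=events["new_employer_start"].dt.to_period("M"))
                 .groupby("month")["member_id"].nunique())
hiring_rate = monthly_hires / total_members
print(hiring_rate)
```

The engineering challenge the post describes is less the arithmetic than doing it reliably at scale: one consistent metric definition (Unified Metrics Platform), fast serving (Pinot), and discoverable lineage (DataHub).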

From LogFrames to Logarithms – A Travel Log


Article by Karl Steinacker and Michael Kubach: “…Today, authorities all over the world are experimenting with predictive algorithms. That sounds technical and innocent, but as we dive deeper into the issue, we realise that the real meaning is rather specific: fraud detection systems in social welfare payment systems. In the meantime, the hitherto banned terminology has made its comeback: welfare and social safety nets have, for some years now, been en vogue again. But in the centuries-old Western tradition, welfare recipients must be monitored and, if necessary, sanctioned, while those who work and contribute must be assured that there is no waste. So it comes as no surprise that even today’s algorithms focus on the prime suspect: the individual fraudster, the undeserving poor.

Fraud detection systems promise that the taxpayer will no longer fall victim to fraud and that efficiency gains can be redirected to serve more people. Yet the true extent of welfare fraud is regularly exaggerated, while the costs of such systems are routinely underestimated; estimated losses are rarely weighed against the investment required. What prevails is the principle of detecting and punishing fraudsters. Other issues rank low as well, for example how to distinguish between honest mistakes and deliberate fraud. And the more time caseworkers spend entering and analysing data in front of a computer screen, the less time and inclination they have to talk to real people and to understand the context of their lives at the margins of society.

As a result, hundreds of thousands of people are routinely being scored. Take Denmark: a system called Udbetaling Danmark was created in 2012 to streamline the payment of welfare benefits. Its fraud control algorithms can access the personal data of millions of citizens, not all of whom receive welfare payments. In contrast to the hundreds of thousands affected by this data mining, the number of cases referred to the police for further investigation is minute. 

In the city of Rotterdam in the Netherlands, data on 30,000 welfare recipients are analysed every year to flag suspected welfare cheats. However, an analysis of its machine-learning-based scoring system showed systemic discrimination with regard to ethnicity, age, gender, and parenthood, and revealed other fundamental flaws that make the system both inaccurate and unfair. What might appear to a caseworker as a vulnerability is treated by the machine as grounds for suspicion. Despite the scale of data used to calculate risk scores, the output of the system is no better than random guessing. Yet the consequences of being flagged by the “suspicion machine” can be drastic, with fraud controllers empowered to turn the lives of suspects inside out.
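The two failure modes named here, predictions no better than chance and uneven treatment across groups, are exactly what an independent audit can test for. Below is a minimal sketch of such checks on synthetic data; the labels, scores, group attribute, and threshold are all invented stand-ins, not the Rotterdam investigators’ actual material:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Synthetic stand-ins: true fraud labels and a model's risk scores.
y_true = rng.binomial(1, 0.03, size=30_000)   # ~3% actual fraud
risk_scores = rng.uniform(size=30_000)         # hypothetical model output
group = rng.binomial(1, 0.4, size=30_000)      # a protected attribute

# 1) Predictive value: an AUC near 0.5 means scores are no better than chance.
print("AUC:", roc_auc_score(y_true, risk_scores))

# 2) Disparate impact: compare flag rates across groups at the audit threshold.
flagged = risk_scores > np.quantile(risk_scores, 0.9)  # top 10% investigated
for g in (0, 1):
    print(f"group {g} flag rate: {flagged[group == g].mean():.3f}")
```

An AUC close to 0.5 combined with markedly different flag rates between groups would reproduce, in miniature, the Rotterdam findings: a system that fails to predict fraud while still burdening some groups more than others.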

As reported by the World Bank, the recent Covid-19 pandemic provided a great push to implement digital social welfare systems in the global South. In fact, for the World Bank, so-called Digital Public Infrastructure (DPI), enabling “Digitizing Government to Person Payments (G2Px)”, is as fundamental for social and economic development today as physical infrastructure was for previous generations. Hence, the World Bank finances systems around the globe modelled after the Indian Aadhaar system, under which more than a billion people have been registered biometrically. Aadhaar has become, for all intents and purposes, a pre-condition for 800 million Indian citizens to receive subsidised food and other assistance.

Important international aid organisations behave no differently from states. The World Food Programme alone holds data on more than 40 million people in its SCOPE database. Unfortunately, WFP, like other UN organisations, is not subject to data protection laws or the jurisdiction of courts. This makes the communities it works with particularly vulnerable.

In most places, the social will become the metric, with algorithms determining the operational conduit for delivering, controlling and withholding assistance, especially welfare payments. In other places, the power of algorithms may go even further, as part of trust systems, creditworthiness scoring, and social credit. Social credit systems for individuals are highly controversial, as they require mass surveillance to track behaviour beyond financial solvency. A citizen’s social credit score might suffer not only from incomplete or inaccurate data, but also from assessments of political loyalty and conformist social behaviour…(More)”.