Michael Sanders et al. in Behavioral Public Policy: “The use of behavioural sciences in government has expanded and matured in the last decade. Since the Behavioural Insights Team (BIT) has been part of this movement, we sketch out the history of the team and the current state of behavioural public policy, recognising that other works have already told this story in detail. We then set out two clusters of issues that have emerged from our work at BIT. The first cluster concerns current challenges facing behavioural public policy: the long-term effects of interventions; repeated exposure effects; problems with proxy measures; spillovers and general equilibrium effects and unintended consequences; cultural variation; ‘reverse impact’; and the replication crisis. The second cluster concerns opportunities: influencing the behaviour of government itself; scaling interventions; social diffusion; nudging organisations; and dealing with thorny problems. We conclude that the field will need to address these challenges and take these opportunities in order to realise the full potential of behavioural public policy….(More)”.
Odd Numbers: Algorithms alone can’t meaningfully hold other algorithms accountable
Frank Pasquale at Real Life Magazine: “Algorithms increasingly govern our social world, transforming data into scores or rankings that decide who gets credit, jobs, dates, policing, and much more. The field of “algorithmic accountability” has arisen to highlight the problems with such methods of classifying people, and it has great promise: Cutting-edge work in critical algorithm studies applies social theory to current events; law and policy experts seem to publish new articles daily on how artificial intelligence shapes our lives; and a growing community of researchers has developed a field known as “Fairness, Accountability, and Transparency in Machine Learning.”
The social scientists, attorneys, and computer scientists promoting algorithmic accountability aspire to advance knowledge and promote justice. But what should such “accountability” more specifically consist of? Who will define it? At a two-day, interdisciplinary roundtable on AI ethics I recently attended, such questions featured prominently, and humanists, policy experts, and lawyers engaged in a free-wheeling discussion about topics ranging from robot arms races to computationally planned economies. But at the end of the event, an emissary from a group funded by Elon Musk and Peter Thiel among others pronounced our work useless. “You have no common methodology,” he informed us (apparently unaware that that’s the point of an interdisciplinary meeting). “We have a great deal of money to fund real research on AI ethics and policy”— which he thought of as dry, economistic modeling of competition and cooperation via technology — “but this is not the right group.” He then gratuitously lashed out at academics in attendance as “rent seekers,” largely because we had the temerity to advance distinctive disciplinary perspectives rather than fall in line with his research agenda.
Most corporate contacts and philanthrocapitalists are more polite, but their sense of what is realistic and what is utopian, what is worth studying and what is mere ideology, is strongly shaping algorithmic accountability research in both social science and computer science. This influence in the realm of ideas has powerful effects beyond it. Energy that could be put into better public transit systems is instead diverted to perfect the coding of self-driving cars. Anti-surveillance activism transmogrifies into proposals to improve facial recognition systems to better recognize all faces. To help payday-loan seekers, developers might design data-segmentation protocols to show them what personal information they should reveal to get a lower interest rate. But the idea that such self-monitoring and data curation can be a trap, disciplining the user in ever finer-grained ways, remains less explored. Trying to make these games fairer, the research elides the possibility of rejecting them altogether….(More)”.
Searching for the Smart City’s Democratic Future
Article by Bianca Wylie at the Center for International Governance Innovation: “There is a striking blue building on Toronto’s eastern waterfront. Wrapped top to bottom in bright, beautiful artwork by Montreal illustrator Cecile Gariepy, the building — a former fish-processing plant — stands out alongside the neighbouring parking lots and a congested highway. It’s been given a second life as an office for Sidewalk Labs — a sister company to Google that is proposing a smart city development in Toronto. Perhaps ironically, the office is like the smart city itself: something old repackaged to be light, fresh and novel.
“Our mission is really to use technology to redefine urban life in the twenty-first century.”
Dan Doctoroff, CEO of Sidewalk Labs, shared this mission in an interview with Freakonomics Radio. The phrase is a variant of the marketing language used by the smart city industry at large. Put more simply, the term “smart city” is usually used to describe the use of technology and data in cities.
No matter the words chosen to describe it, the smart city model has a flaw at its core: corporations are seeking to exert influence on urban spaces and democratic governance. And because most governments don’t have policies in place to regulate smart city development — in particular, projects driven by the fast-paced technology sector — this presents a growing global governance concern.
This is where the story usually descends into warnings of smart city dystopia or failure. Loads of recent articles have detailed the science fiction-style city-of-the-future and speculated about the perils of mass data collection, and for good reason — these are important concepts that warrant discussion. It’s time, however, to push past dystopian narratives and explore solutions for the challenges that smart cities present in Toronto and globally…(More)”.
A roadmap for restoring trust in Big Data
Americans Want to Share Their Medical Data. So Why Can’t They?
Eleni Manis at RealClearHealth: “Americans are willing to share personal data — even sensitive medical data — to advance the common good. A recent Stanford University study found that 93 percent of medical trial participants in the United States are willing to share their medical data with university scientists and 82 percent are willing to share with scientists at for-profit companies. In contrast, less than a third are concerned that their data might be stolen or used for marketing purposes.
However, the majority of regulations surrounding medical data focus on individuals’ ability to restrict the use of their medical data, with scant attention paid to supporting the ability to share personal data for the common good. Policymakers can begin to right this balance by establishing a national medical data donor registry that lets individuals contribute their medical data to support research after their deaths. Doing so would help medical researchers pursue cures and improve health care outcomes for all Americans.
Increased medical data sharing facilitates advances in medical science in three key ways. First, de-identified participant-level data can be used to understand the results of trials, enabling researchers to better explicate the relationship between treatments and outcomes. Second, researchers can use shared data to verify studies and identify cases of data fraud and research misconduct in the medical community. For example, one researcher recently discovered a prolific Japanese anesthesiologist had falsified data for almost two decades. Third, shared data can be combined and supplemented to support new studies and discoveries.
Despite these benefits, researchers, research funders, and regulators have struggled to establish a norm for sharing clinical research data. In some cases, regulatory obstacles are to blame. HIPAA — the federal law regulating medical data — blocks some sharing on grounds of patient privacy, while federal and state regulations governing data sharing are inconsistent. Researchers themselves have a proprietary interest in data they produce, while academic researchers seeking to maximize publications may guard data jealously.
Though funding bodies are aware of this tension, they are unable to resolve it on their own. The National Institutes of Health, for example, requires a data sharing plan for big-ticket funding but recognizes that proprietary interests may make sharing impossible….(More)”.
#TrendingLaws: How can Machine Learning and Network Analysis help us identify the “influencers” of Constitutions?
Data science techniques allow us to use methods like network science and machine learning to uncover patterns and insights that are hard for humans to see. Just as we can map influential users on Twitter — and patterns of relations between places to predict how diseases will spread — we can identify which countries have influenced each other in the past and how legal provisions relate to one another.
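To make the network framing concrete, here is a minimal, purely illustrative sketch in Python using networkx. The edge list is invented for illustration (it is not the study's data), and PageRank stands in for whatever centrality or influence measure the researchers actually used; an edge from X to Y is read as "X's constitution borrows language from Y's", so heavily borrowed-from constitutions rank highly.

```python
# Illustrative sketch only: a toy network of constitutional "borrowing".
# The edges below are hypothetical placeholders, not findings from the study.
import networkx as nx

# Edge (X, Y) means "provisions in X's constitution appear to borrow from Y's".
# Constitutions that are borrowed from widely accumulate incoming edges.
borrowing_edges = [
    ("Weimar Germany 1919", "Mexico 1917"),
    ("Spain 1931", "Mexico 1917"),
    ("Mexico 1917", "USA 1787"),
    ("Senegal 1963", "France 1958"),
]

G = nx.DiGraph(borrowing_edges)

# PageRank treats each "borrows from" edge like a citation, so the score is a
# rough proxy for how influential a constitution has been downstream.
scores = nx.pagerank(G, alpha=0.85)
for constitution, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{constitution}: {score:.3f}")
```

In practice, most of the effort lies in constructing the edge list in the first place (for example, by detecting when one constitution's provisions echo another's) before any network measure is applied.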
One way UNICEF fulfills its mission is through advocacy with national governments — to enshrine rights for minorities, notably children, formally in law. Perhaps the most renowned example of this is the Convention on the Rights of the Child (CRC).
Constitutions, such as Mexico’s 1917 constitution — the first to limit the employment of children — are critical to formalizing rights for vulnerable populations. National constitutions describe the role of a country’s institutions, its character in the eyes of the world, as well as the rights of its citizens.
From a scientific standpoint, the work is an important first step in showing that network analysis and machine learning techniques can be used to better understand the dynamics of caring for and protecting the rights of children — critical to the work we do in a complex and interconnected world. It shows the significant and positive policy implications of using data science to uphold children’s rights.
What the Research Shows: Through this research, we uncovered:
- A network of relationships between countries and their constitutions.
- A natural progression of laws — where fundamental rights are a necessary precursor to more specific rights for minorities.
- The effect of key historical events in changing legal norms….(More)”.
The Democratization of Data Science
Jonathan Cornelissen in Harvard Business Review: “Want to catch tax cheats? The government of Rwanda does — and it’s finding them by studying anomalies in revenue-collection data.
Want to understand how American culture is changing? So does a budding sociologist in Indiana. He’s using data science to find patterns in the massive amounts of text people use each day to express their worldviews — patterns that no individual reader would be able to recognize.
Intelligent people find new uses for data science every day. Still, despite the explosion of interest in the data collected by just about every sector of American business — from financial companies and health care firms to management consultancies and the government — many organizations continue to relegate data-science knowledge to a small number of employees.
That’s a mistake — and in the long run, it’s unsustainable. Think of it this way: Very few companies expect only professional writers to know how to write. So why ask only professional data scientists to understand and analyze data, at least at a basic level?
Relegating all data knowledge to a handful of people within a company is problematic on many levels. Data scientists find it frustrating because it’s hard for them to communicate their findings to colleagues who lack basic data literacy. Business stakeholders are unhappy because data requests take too long to fulfill and often fail to answer the original questions. In some cases, that’s because the questioner failed to explain the question properly to the data scientist.
Why would non–data scientists need to learn data science? That’s like asking why non-accountants should be expected to stay within budget.
These days every industry is drenched in data, and the organizations that succeed are those that most quickly make sense of their data in order to adapt to what’s coming. The best way to enable fast discovery and deeper insights is to disperse data science expertise across an organization.
Companies that want to compete in the age of data need to do three things: share data tools, spread data skills, and spread data responsibility…(More)”.
From Code to Cure
David J. Craig at Columbia Magazine: “Armed with enormous amounts of clinical data, teams of computer scientists, statisticians, and physicians are rewriting the rules of medical research….
The deluge is upon us.
We are living in the age of big data, and with every link we click, every message we send, and every movement we make, we generate torrents of information.
In the past two years, the world has produced more than 90 percent of all the digital data that has ever been created. New technologies churn out an estimated 2.5 quintillion bytes per day. Data pours in from social media and cell phones, weather satellites and space telescopes, digital cameras and video feeds, medical records and library collections. Technologies monitor the number of steps we walk each day, the structural integrity of dams and bridges, and the barely perceptible tremors that indicate a person is developing Parkinson’s disease. These are the building blocks of our knowledge economy.
This tsunami of information is also providing opportunities to study the world in entirely new ways. Nowhere is this more evident than in medicine. Today, breakthroughs are being made not just in labs but on laptops, as biomedical researchers trained in mathematics, computer science, and statistics use powerful new analytic tools to glean insights from enormous data sets and help doctors prevent, treat, and cure disease.
“The medical field is going through a major period of transformation, and many of the changes are driven by information technology,” says George Hripcsak ’85PS, ’00PH, a physician who chairs the Department of Biomedical Informatics at Columbia University Irving Medical Center (CUIMC). “Diagnostic techniques like genomic screening and high-resolution imaging are generating more raw data than we’ve ever handled before. At the same time, researchers are increasingly looking outside the confines of their own laboratories and clinics for data, because they recognize that by analyzing the huge streams of digital information now available online they can make discoveries that were never possible before.” …
Consider, for example, what one young computer scientist, Nicholas Tatonetti, has been able to accomplish in recent years by mining an FDA database of prescription-drug side effects. The archive, which contains millions of reports of adverse drug reactions that physicians have observed in their patients, is continuously monitored by government scientists whose job it is to spot problems and pull drugs off the market if necessary. And yet by drilling down into the database with his own analytic tools, Tatonetti has found evidence that dozens of commonly prescribed drugs may interact in dangerous ways that have previously gone unnoticed. Among his most alarming findings: the antibiotic ceftriaxone, when taken with the heartburn medication lansoprazole, can trigger a type of heart arrhythmia called QT prolongation, which is known to cause otherwise healthy people to suddenly drop dead…(More)”
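The excerpt does not spell out how such signals are found, but the basic screening idea can be sketched in a few lines of Python. Everything below is a toy illustration under stated assumptions: the reports are invented, and the crude "reaction rate versus baseline" comparison stands in for the disproportionality statistics (such as the proportional reporting ratio) and the careful controls that real pharmacovigilance work uses.

```python
# Toy sketch of screening adverse-event reports for drug-pair signals.
# Each report lists the drugs a patient took and the reactions observed.
# All data here are invented placeholders.
from collections import Counter
from itertools import combinations

reports = [
    {"drugs": {"ceftriaxone", "lansoprazole"}, "reactions": {"QT prolongation"}},
    {"drugs": {"ceftriaxone"}, "reactions": {"rash"}},
    {"drugs": {"lansoprazole"}, "reactions": {"headache"}},
    {"drugs": {"ceftriaxone", "lansoprazole"}, "reactions": {"QT prolongation"}},
    {"drugs": {"ibuprofen"}, "reactions": {"nausea"}},
]

reaction_of_interest = "QT prolongation"
pair_reports = Counter()  # reports that mention a given drug pair
pair_hits = Counter()     # ...in which the reaction of interest also appears

for report in reports:
    for pair in combinations(sorted(report["drugs"]), 2):
        pair_reports[pair] += 1
        if reaction_of_interest in report["reactions"]:
            pair_hits[pair] += 1

# Baseline: how often the reaction appears across all reports.
baseline = sum(reaction_of_interest in r["reactions"] for r in reports) / len(reports)

# Flag pairs whose reaction rate is well above baseline. This is only a crude
# signal; real analyses use PRR/ROR-style statistics with confidence intervals.
for pair, n in pair_reports.items():
    rate = pair_hits[pair] / n
    if n >= 2 and rate > 2 * baseline:
        print(pair, f"reports={n}", f"rate={rate:.2f}", f"baseline={baseline:.2f}")
```

A real analysis over the full FDA archive would also need to de-duplicate reports, normalize drug names, and correct for the sheer number of drug pairs being tested at once.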
Our misguided love affair with techno-politics
The Economist: “What might happen if technology, which smothers us with its bounty as consumers, made the same inroads into politics? Might data-driven recommendations suggest “policies we may like” just as Amazon recommends books? Would we swipe right to pick candidates in elections or answers in referendums? Could businesses expand into every cranny of political and social life, replete with ® and ™ at each turn? What would this mean for political discourse and individual freedom?
This dystopian yet all-too-imaginable world has been conjured up by Giuseppe Porcaro in his novel “Disco Sour”. The story takes place in the near future, after a terrible war and breakdown of nations, when the (fictional) illegitimate son of Roman Polanski creates an app called Plebiscitum that works like Tinder for politics.
Mr Porcaro—who comes armed with a doctorate in political geography—uses the plot to consider questions of politics in the networked age. The Economist’s Open Future initiative asked him to reply to five questions in around 100 words each. An excerpt from the book appears thereafter.
* * *
The Economist: In your novel, an entrepreneur attempts to replace elections with an app that asks people to vote on individual policies. Is that science fiction or prediction? And were you influenced by Italy’s Five Star Movement?
Giuseppe Porcaro: The idea of imagining a Tinder-style app replacing elections came up because I see connections between the evolution of dating habits and 21st-century politics. A new sort of “tinderpolitics” kicks in when instant gratification substitutes for substantive participation. Think about tweet trolling, for example.
Italy’s Five Star Movement was certainly another inspiration, as it has been a pioneer in using an online platform to successfully create a sort of new political mass movement. Another one was an Australian political party called Flux. They aim to replace the world’s elected legislatures with a new system known as issue-based direct democracy.
The Economist: Is it too cynical to suggest that a more direct relationship between citizens and policymaking would lead to a more reactionary political landscape? Or does the ideal of liberal democracy depend on an ideal citizenry that simply doesn’t exist?
Mr Porcaro: It would be cynical to blame citizens for getting too close to influencing decision-making. That would go against the very essence of the “liberal democracy ideal”. However, I am critical of the pervasive idea that technology can provide quick fixes to bridge the gap between citizens and government. By applying computational thinking to democracy, reducing it to extreme individualisation and instant participation, we forget that democracy is not simply the result of an election or the mathematical sum of individual votes. Citizens risk entering a vicious circle in which reactionary politics become ever easier to push through.
The Economist: Modern representative democracy was in some ways a response to the industrial revolution. If AI and automation radically alter the world we live in, will we have to update the way democracy works too—and if so, how?
Mr Porcaro: Democracy has already morphed several times. Nineteenth-century liberal democracy was shaken by universal suffrage and adapted to the Fordist mode of production with the mass party. May 1968 challenged that model. Today, the massive availability of data and the increasing power of decision-making algorithms will change political institutions once again.
The policy “production” process might be utterly redesigned. Data collected by devices we use on a daily basis (such as vehicles, domestic appliances and wearable sensors) will provide evidence about the drivers of personal voting choices, or the accountability of government decisions. …(More)
This surprising, everyday tool might hold the key to changing human behavior
Annabelle Timsit at Quartz: “To be a person in the modern world is to worry about your relationship with your phone. According to critics, smartphones are making us ill-mannered and sore-necked, dragging parents’ attention away from their kids, and destroying an entire generation.
But phones don’t have to be bad. With 4.68 billion people forecast to become mobile phone users by 2019, nonprofits and social science researchers are exploring new ways to turn our love of screens into a force for good. One increasingly popular option: Using texting to help change human behavior.
Texting: A unique tool
The short message service (SMS) was invented in the late 1980s, and the first text message was sent in 1992. (Engineer Neil Papworth sent “merry Christmas” to then-Vodafone director Richard Jarvis.) In the decades since, texting has emerged as the preferred communication method for many, and in particular younger generations. While that kind of habit-forming can be problematic—47% of US smartphone users say they “couldn’t live without” the device—our attachment to our phones also makes text-based programs a good way to encourage people to make better choices.
“Texting, because it’s anchored in mobile phones, has the ability to be with you all the time, and that gives us an enormous flexibility on precision,” says Todd Rose, director of the Mind, Brain, & Education Program at the Harvard Graduate School of Education. “When people lead busy lives, they need timely, targeted, actionable information.”
And who is busier than a parent? Text-based programs can help current or would-be moms and dads with everything from medication pickup to childhood development. Text4Baby, for example, messages pregnant women and young moms with health information and reminders about upcoming doctor visits. Vroom, an app for building babies’ brains, sends parents research-based prompts to help them build positive relationships with their children (for example, by suggesting they ask toddlers to describe how they’re feeling based on the weather). Muse, an AI-powered app, uses machine learning and big data to try and help parents raise creative, motivated, emotionally intelligent kids. As Jenny Anderson writes in Quartz: “There is ample evidence that we can modify parents’ behavior through technological nudges.”
Research suggests text-based programs may also be helpful in supporting young children’s academic and cognitive development. …Texts aren’t just being used to help out parents. Non-governmental organizations (NGOs) have also used them to encourage civic participation in kids and young adults. Open Progress, for example, has an all-volunteer community called “text troop” that messages young adults across the US, reminding them to register to vote and helping them find their polling location.
Text-based programs are also useful in the field of nutrition, where private companies and public-health organizations have embraced them as a way to give advice on healthy eating and weight loss. The National Cancer Institute runs a text-based program called SmokefreeTXT that sends US adults between three and five messages per day for up to eight weeks, to help them quit smoking.
Texting programs can be a good way to nudge people toward improving their mental health, too. Crisis Text Line, for example, was the first national 24/7 crisis-intervention hotline to conduct counseling conversations entirely over text…(More)”.