NYC’s New Tech to Track Every Homeless Person in the City


Wired: “New York is facing a crisis. The city that never sleeps has become the city with the most people who have no home to sleep in. As rising rents outpace income growth across the five boroughs, some 62,000 people, nearly 40 percent of them children, live in homeless shelters—rates the city hasn’t seen since the Great Depression.

As New York City Mayor Bill de Blasio faces reelection in November, his reputation and electoral prospects depend in part on his ability to reverse this troubling trend. In the mayor’s estimation, combatting homelessness effectively will require opening 90 new shelters across the city and expanding the number of outreach workers who canvass the streets every day offering aid and housing. The effort will also require having the technology in place to ensure that work happens as efficiently as possible. To that end, the city is rolling out a new tool, StreetSmart, that aims to give city agencies and non-profit groups a comprehensive view of all of the data being collected on New York’s homeless on a daily basis.

Think of StreetSmart as a customer relationship management system for the homeless. Every day in New York, some 400 outreach workers walk the streets checking in on homeless people and collecting information about their health, income, demographics, and history in the shelter system, among other data points. The workers get to know this vulnerable population and build trust in the hope of one day placing them in some type of housing.


Traditionally, outreach workers have entered information about every encounter into a database, keeping running case files. But those databases never talked to each other. One outreach worker in the Bronx might never know she was talking to the same person who’d checked into a Brooklyn shelter a week prior. More importantly, the worker might never know why that person left. What’s more, systems used by city agencies and non-profits seldom overlapped, complicating efforts to keep track of individuals….
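The siloing problem described above is, at bottom, a record-linkage problem. The sketch below is purely illustrative (the article does not describe StreetSmart’s actual matching logic, and every name and field here is invented); it shows how encounter records held in separate databases can be keyed to a single person:

```python
# Illustrative only: hypothetical records and a crude match key,
# not StreetSmart's real schema or matching algorithm.

def normalize(record):
    """Build a rough match key from name and date of birth."""
    name = " ".join(record["name"].lower().split())
    return (name, record["dob"])

def link_records(*databases):
    """Merge per-agency case files into one history per person."""
    people = {}
    for db in databases:
        for record in db:
            people.setdefault(normalize(record), []).append(record)
    return people

# One person, seen by Bronx outreach and by a Brooklyn shelter a week apart:
bronx_outreach = [{"name": "Jane  Doe", "dob": "1980-03-14",
                   "event": "street contact", "borough": "Bronx"}]
brooklyn_shelter = [{"name": "jane doe", "dob": "1980-03-14",
                     "event": "shelter intake", "borough": "Brooklyn"}]

histories = link_records(bronx_outreach, brooklyn_shelter)
# In siloed databases these would look like two strangers; linked,
# they form one case history with two encounters.
```

Real-world matching is far messier (misspellings, missing birth dates, people who decline to give a name), which is part of why a shared system rather than ad hoc joins matters.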

The big promise of StreetSmart extends beyond its ability to help outreach workers in the moment. The aggregation of all this information could also help the city proactively design fixes to problems it wouldn’t have otherwise seen. The tool has a map feature that shows where encampments are popping up and where outreach workers are having the most interactions. It can also be used to assess how effective different housing facilities are at keeping people off the streets….(More)”.

The Nudge Wars: A Glimpse into the Modern Socialist Calculation Debate


Paper by Abigail Devereaux: “Nudge theory, the preferences-neutral subset of modern behavioral economic policy, is premised on irrational decision-making at the level of the individual agent. We demonstrate how Hayek’s epistemological argument, developed primarily during the socialist calculation debate in response to claims made by fellow economists in favor of central planning, can be extended to show how nudge theory requires social architects to have access to fundamentally unascertainable implicit and local knowledge. We draw parallels between the socialist calculation debate and nudge theoretical arguments throughout, particularly the “libertarian socialism” of H. D. Dickinson and the “libertarian paternalism” of Cass Sunstein and Richard Thaler. We discuss the theory of creative and computable economics in order to demonstrate how nudges are provably not preferences-neutral, as even in a state of theoretically perfect information about current preferences, policy-makers cannot access information about how preferences may change in the future. We conclude by noting that making it cheaper to engage in some methods of decision-making is analogous to subsidizing some goods. Therefore, the practical consequences of implementing nudge theory could erode the ability of individuals to make good decisions by destroying the kinds of knowledge-encoding institutions that endogenously emerge to assist agent decision-making….(More)”

Beyond Networks – Interlocutory Coalitions, the European and Global Legal Orders


Book by Gianluca Sgueo: “….explores the activism promoted by organised networks of civil society actors in opening up possibilities for more democratic supranational governance. It examines the positive and negative impact that such networks of civil society actors – named “interlocutory coalitions” – may have on the convergence of principles of administrative governance across the European legal system and other supranational legal systems.

The book takes two main controversial aspects into account: the first relates to the convergence between administrative rules pertaining to different supranational regulatory systems. Traditionally, the spread of methods of administrative governance has been depicted primarily against the background of the interactions between the domestic and the supranational arena, both from a top-down and bottom-up perspective. However, the exploration of interactions occurring at the supranational level between legal regimes is still not grounded on adequate empirical evidence. The second controversial aspect considered in this book consists of the role of civil society actors operating at the supranational level. In its discussion of the first aspect, the book focuses on the relations between European administrative law and the administrative principles of law pertaining to other supranational regulatory regimes and regulators, including the World Bank, the International Monetary Fund, the World Trade Organization, the United Nations, the Organization for Economic Cooperation and Development, the Asian Development Bank, and the Council of Europe. The examination of the second aspect involves the exploration of the still little examined, but crucial, role of civil society organised networks in shaping global administrative law. These “interlocutory coalitions” include NGOs, think tanks, foundations, universities, and occasionally activists with no formal connections to civil society organisations. The book describes such interlocutory coalitions as drivers of harmonized principles of participatory democracy at the European and global levels. However, interlocutory coalitions show a number of tensions (e.g. the governability of coalitions, the competition among them) that may hamper the impact they have on the reconfiguration of individuals’ rights, entitlements and responsibilities in the global arena…(More)”

Big Data Science: Opportunities and Challenges to Address Minority Health and Health Disparities in the 21st Century


Xinzhi Zhang et al in Ethnicity and Disease: “Addressing minority health and health disparities has been a missing piece of the puzzle in Big Data science. This article focuses on three priority opportunities that Big Data science may offer to the reduction of health and health care disparities. One opportunity is to incorporate standardized information on demographic and social determinants in electronic health records in order to target ways to improve quality of care for the most disadvantaged popula­tions over time. A second opportunity is to enhance public health surveillance by linking geographical variables and social determinants of health for geographically defined populations to clinical data and health outcomes. Third and most impor­tantly, Big Data science may lead to a better understanding of the etiology of health disparities and understanding of minority health in order to guide intervention devel­opment. However, the promise of Big Data needs to be considered in light of significant challenges that threaten to widen health dis­parities. Care must be taken to incorporate diverse populations to realize the potential benefits. Specific recommendations include investing in data collection on small sample populations, building a diverse workforce pipeline for data science, actively seeking to reduce digital divides, developing novel ways to assure digital data privacy for small populations, and promoting widespread data sharing to benefit under-resourced minority-serving institutions and minority researchers. With deliberate efforts, Big Data presents a dramatic opportunity for re­ducing health disparities but without active engagement, it risks further widening them….(More)”

Twitter and Tear Gas: The Power and Fragility of Networked Protest


Book by Zeynep Tufekci: “A firsthand account and incisive analysis of modern protest, revealing internet-fueled social movements’ greatest strengths and frequent challenges….
To understand a thwarted Turkish coup, an anti–Wall Street encampment, and a packed Tahrir Square, we must first comprehend the power and the weaknesses of using new technologies to mobilize large numbers of people. An incisive observer, writer, and participant in today’s social movements, Zeynep Tufekci explains in this accessible and compelling book the nuanced trajectories of modern protests—how they form, how they operate differently from past protests, and why they have difficulty persisting in their long-term quests for change.

Tufekci speaks from direct experience, combining on-the-ground interviews with insightful analysis. She describes how the internet helped the Zapatista uprisings in Mexico, the necessity of remote Twitter users to organize medical supplies during Arab Spring, the refusal to use bullhorns in the Occupy Movement that started in New York, and the empowering effect of tear gas in Istanbul’s Gezi Park. These details from life inside social movements complete a moving investigation of authority, technology, and culture—and offer essential insights into the future of governance….(More)”

Using big data to understand consumer behaviour on ethical issues


Phani Kumar Chintakayala and C. William Young in the Journal of Consumer Ethics: “The Consumer Data Research Centre (CDRC) was established by the UK Economic and Social Research Council and launched its data services in 2015. The project is led by the University of Leeds and UCL, with partners at the Universities of Liverpool and Oxford. It is working with consumer-related organisations and businesses to open up their data resources to trusted researchers, enabling them to carry out important social and economic research….

Over the last few years there has been much talk about how so-called “big data” is the future and if you are not exploiting it, you are losing your competitive advantage. So what is there in the latest wave of enthusiasm on big data to help organisations, researchers and ethical consumers?…

Examples of the types of research being piloted using data from the food sector by CDRC include the consumption of milk and egg products. The results clearly indicate that not all sustainable products are considered the same by consumers, and consumption behaviour varies across sustainable product categories. A linked data analysis was carried out by combining sales data of organic milk and free range eggs from a retailer with over 300 stores across the UK, green and ethical attitude data from CDRC’s data partner, and socio-demographic and deprivation data from open sources. The analysis revealed that, in general, the consumers with deeper green and ethical attitudes are the most likely consumers of sustainable products. Deprivation has a negative effect on the consumption of sustainable products. Price, as expected, has a negative effect, but the impact varies across products. Convenience stores have a significant negative effect on the consumption of sustainable products. The influences of socio-demographic characteristics such as gender, age and ethnicity seem to vary by product category….
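The linked-data analysis described above can be caricatured in a few lines. The store records below are invented, as is the join; only the qualitative pattern (deprivation depressing organic sales) echoes the reported finding:

```python
# Stylized sketch of a linked-data analysis. All numbers are invented;
# the real CDRC data are not public here.
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Hypothetical per-store records: an area deprivation index joined
# (by store postcode, say) onto the organic share of milk sales.
stores = [
    {"deprivation": 1, "organic_share": 0.22},
    {"deprivation": 2, "organic_share": 0.18},
    {"deprivation": 3, "organic_share": 0.15},
    {"deprivation": 4, "organic_share": 0.09},
    {"deprivation": 5, "organic_share": 0.06},
]

r = pearson([s["deprivation"] for s in stores],
            [s["organic_share"] for s in stores])
# r comes out strongly negative for this toy data, mirroring the
# reported negative effect of deprivation on sustainable purchases.
```

A correlation like this is exactly the kind of result that must remain research-question-led: with enough linked variables, some spurious association will always turn up.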

Big data can help organisations, researchers and ethical consumers understand the ethics around consumer behaviour and products. The opportunity to link different types of data is exciting, but research must be question-led to avoid digging for non-existent causal links. Methods and access to data are still a barrier, but open access is key to solving this. Big data will probably only help in filling in the details of our knowledge on ethical consumption and on products, but this can only help our decision making…(More)”.

The rise of the algorithm need not be bad news for humans


 at the Financial Times: “The science and technology committee of the House of Commons published the responses to its inquiry on “algorithms in decision-making” on April 26. They vary in length, detail and approach, but share one important feature — the belief that human intervention may be unavoidable, indeed welcome, when it comes to trusting algorithmic decisions….

In a society in which algorithms and other automated processes are increasingly apparent, the important question, addressed by the select committee, is the extent to which we can trust such brainless technologies, which are regularly taking decisions instead of us. Now that white-collar jobs are being replaced, we may all be at the mercy of algorithmic errors — an unfair attribution of responsibility, say, or some other Kafkaesque computer-generated disaster. The best protection against such misfires is to put human intelligence back into the equation.

Trust depends on delivery, transparency and accountability. You trust your doctor, for instance, if they do what they are supposed to do, if you can see what they are doing and if they take responsibility when things go wrong. The same holds true for algorithms. We trust them when it is clear what they are designed to deliver, when it is transparent whether or not they are delivering it, and, finally, when someone is accountable — or at least morally responsible, if not legally liable — if things go wrong.

This is where humans come in. First, to design the right sorts of algorithms and so to minimise risk. Second, since even the best algorithm can sometimes go wrong, or be fed the wrong data or in some other way misused, we need to ensure that not all decisions are left to brainless machines. Third, while some crucial decisions may indeed be too complex for any human to cope with, we should nevertheless oversee and manage such decision-making processes. And fourth, the fact that a decision is taken by an algorithm is not grounds for disregarding the insight and understanding that only humans can bring when things go awry.

In short, we need a system of design, control, transparency and accountability overseen by humans. And this need not mean spurning the help provided by digital technologies. After all, while a computer may play chess better than a human, a human in tandem with a computer is unbeatable….(More).

Solving a Global Digital Identity Crisis


Seth Berkley at MIT Technology Review: “In developing countries, one in three children under age five has no record of their existence. Technology can help….Digital identities have become an integral part of modern life, but things like e-passports, digital health records, or Apple Pay really only provide faster, easier, or sometimes smarter ways of accessing services that are already available.

In developing countries it’s a different story. There, digital ID technology can have a profound impact on people’s lives by enabling them to access vital and often life-saving services for the very first time….The challenge is that in poor countries, an increasing number of people live under the radar, invisible to the often archaic, paper-based methods used to certify births, deaths, and marriages. One in three children under age five does not officially exist because their birth wasn’t registered. Even when it is, many don’t have proof in the form of birth certificates. This can have a lasting impact on children’s lives, leaving them vulnerable to neglect and abuse.

In light of this, it is difficult to see how we will meet the SDG16 deadline without a radical solution. What we need are new and affordable digital ID technologies capable of working in poorly resourced settings—for example, where there is no reliable electricity—and yet able to leapfrog current approaches to reach everyone, whether they’re living in remote villages or urban slums.

Such technologies are already emerging as part of efforts to increase global childhood vaccination coverage, with small-scale trials across Africa and Asia. With 86 percent of infants now having access to routine immunization—where they receive all three doses of a diphtheria-pertussis-tetanus vaccine—there are obvious advantages of building on an existing system with such a broad reach.

These systems were designed to help the World Health Organization, UNICEF, and my organization, Gavi, the Vaccine Alliance, close the gap on the one in seven infants still missing out. But they can also be used to help us achieve SDG16.

One, called MyChild, helps countries transition from paper to digital. At first glance it looks like a typical paper booklet on which workers can record health-record details about the child, such as vaccinations, deworming, or nutritional supplements. But each booklet contains a unique identification number and tear-out slips that are collected and scanned later. This means that even if a child’s birth hasn’t been registered, a unique digital record will follow them through childhood. Developed by Swedish startup Shifo, this system has been used to register more than 95,000 infants in Uganda, Afghanistan, and the Gambia, enabling health workers to follow up either in person or using text reminders to parents.
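The article notes only that each booklet carries a unique identification number; it does not say how Shifo’s numbers are constructed. Paper-to-digital ID schemes of this kind commonly append a check digit, so that a mistyped or mis-scanned number fails validation instead of silently attaching records to the wrong child. A generic Luhn-style sketch, not Shifo’s actual scheme:

```python
# Generic check-digit sketch (Luhn algorithm). Hypothetical example:
# the source does not specify how MyChild booklet IDs are built.

def luhn_check_digit(number: str) -> str:
    """Compute the Luhn check digit for a numeric ID string."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 0:      # these positions are doubled once the
            d *= 2          # check digit is appended on the right
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

def is_valid(full_id: str) -> bool:
    """True if the last digit is the correct check digit for the rest."""
    return luhn_check_digit(full_id[:-1]) == full_id[-1]

booklet_id = "7992739871"
full_id = booklet_id + luhn_check_digit(booklet_id)
# A single mis-scanned digit in full_id now fails is_valid().
```

Luhn catches every single-digit error and most adjacent transpositions, which is why payment cards and many national ID formats use it; any such scheme would matter most in exactly the low-connectivity settings the article describes, where errors cannot be corrected against a live database.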

Another system, called Khushi Baby, is entirely paperless and involves giving each child a digital necklace that contains a unique ID number on a near-field communication chip. This can be scanned by community health workers using a cell phone, enabling them to update a child’s digital health records even in remote areas with no cell coverage. Trials in the Indian state of Rajasthan have been carried out across 100 villages to track more than 15,000 vaccination events. An organization called ID2020 is exploring the use of blockchain technology to create access to a unique identity for those who currently lack one….(More)”

Algorithmic accountability


 at TechCrunch: “When Netflix recommends you watch “Grace and Frankie” after you’ve finished “Love,” an algorithm decided that would be the next logical thing for you to watch. And when Google shows you one search result ahead of another, an algorithm made a decision that one page was more important than the other. Oh, and when a photo app decides you’d look better with lighter skin, a seriously biased algorithm that a real person developed made that call.

Algorithms are sets of rules that computers follow in order to solve problems and make decisions about a particular course of action. Whether it’s the type of information we receive, the information people see about us, the jobs we get hired to do, the credit cards we get approved for, and, down the road, the driverless cars that either see us or don’t see us, algorithms are increasingly becoming a big part of our lives.
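A minimal concrete instance of that definition: a toy ranking rule of the kind search engines apply, with an invented scoring policy. The point is that the rule itself is a human design choice:

```python
# Toy "algorithm" in the article's sense: an explicit rule a computer
# follows to rank options. The scoring policy here is invented;
# real search and recommendation systems are far more elaborate.

def rank_pages(pages):
    """Order pages by a fixed rule: inbound links first, then recency."""
    return sorted(pages,
                  key=lambda p: (p["inbound_links"], p["updated"]),
                  reverse=True)

pages = [
    {"url": "a.example", "inbound_links": 12, "updated": 2016},
    {"url": "b.example", "inbound_links": 40, "updated": 2014},
    {"url": "c.example", "inbound_links": 12, "updated": 2017},
]
ordered = rank_pages(pages)
# b.example ranks first purely because the rule says links outweigh
# recency -- a decision a person made, and one place bias can enter.
```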

But there is an inherent problem with algorithms that begins at the most basic level and persists throughout their adoption: human bias that is baked into these machine-based decision-makers.

You may remember that time when Uber’s self-driving car ran a red light in San Francisco, or when Google’s photo app labeled images of black people as gorillas. The Massachusetts Registry of Motor Vehicles’ facial-recognition algorithm mistakenly tagged someone as a criminal and revoked their driver’s license. And Microsoft’s bot Tay went rogue and decided to become a white supremacist. Those were algorithms at their worst. They have also recently been thrust into the spotlight with the troubles around fake news stories surfacing in Google search results and on Facebook.

But algorithms going rogue have much greater implications; they can result in life-altering consequences for unsuspecting people. Think about how scary it could be with algorithmically biased self-driving cars, drones and other sorts of automated vehicles. Consider robots that are algorithmically biased against black people or don’t properly recognize people who are not cisgender white people, and then make a decision on the basis that the person is not human.

Another important element to consider is the role algorithms play in determining what we see in the world, as well as how people see us. Consider algorithmic software that predicts future criminals and just so happens to be biased against black people.

A variety of issues can arise as a result of bad or erroneous data, good but biased data because there’s not enough of it, or an inflexible model that can’t account for different scenarios.
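Those failure modes are easy to reproduce in miniature. In the deliberately naive sketch below (all data invented), a one-threshold classifier is fit on a sample dominated by one group and then misfires on the group it barely saw:

```python
# Toy illustration of "good but biased data because there's not enough
# of it": a one-feature threshold classifier fit on a skewed sample.

def fit_threshold(samples):
    """'Train' by splitting halfway between the two class means."""
    pos = [x for x, label in samples if label]
    neg = [x for x, label in samples if not label]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

# Group A dominates the training data; group B's positive cases sit at
# lower feature values, but only one of them appears in the sample.
training = [(8.0, True), (9.0, True), (8.5, True),
            (2.0, False), (1.5, False), (2.5, False),
            (5.0, True)]   # the lone group-B positive

threshold = fit_threshold(training)

def classify(x):
    return x > threshold

# At deployment, group B's positives look like this -- and get rejected:
group_b_positives = [4.0, 4.5, 5.0]
errors = [x for x in group_b_positives if not classify(x)]
```

Nothing in the model is malicious; the harm comes entirely from the sample it was handed, which is why accountability has to cover the data pipeline and not just the code.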

The dilemma is figuring out what to do about these problematic algorithmic outcomes. Many researchers and academics are actively exploring how to increase algorithmic accountability. What would it mean if tech companies provided their code in order to make these algorithmic decisions more transparent? Furthermore, what would happen if some type of government board would be in charge of reviewing them?…(More)”.

The Next Great Experiment


“A collection of essays from technologists and scholars about how machines are reshaping civil society” in the Atlantic: “Technology is changing the way people think about—and participate in—democratic society. What does that mean for democracy?…

We are witnessing, on a massive scale, diminishing faith in institutions of all kinds. People don’t trust the government. They don’t trust banks and other corporations. They certainly don’t trust the news media.

At the same time, we are living through a period of profound technological change. Along with the rise of bioengineering, networked devices, autonomous robots, space exploration, and machine learning, the mobile internet is recontextualizing how we relate to one another, dramatically changing the way people seek and share information, and reconfiguring how we express our will as citizens in a democratic society.

But trust is a requisite for democratic governance. And now, many are growing disillusioned with democracy itself.

Disentangling the complex forces that are driving these changes can help us better understand what ails democracies today, and potentially guide us toward compelling solutions. That’s why we asked more than two dozen people who think deeply about the intersection of technology and civics to reflect on two straightforward questions: Is technology hurting democracy? And can technology help save democracy?

We received an overwhelming response. Our contributors widely view 2017 as a moment of reckoning. They are concerned with many aspects of democratic life and put a spotlight in particular on correcting institutional failures that have contributed most to inequality of access—to education, information, and voting—as well as to ideological divisiveness and the spread of misinformation. They also offer concrete solutions for how citizens, corporations, and governmental bodies can improve the free flow of reliable information, pull one another out of ever-deepening partisan echo chambers, rebuild spaces for robust and civil discourse, and shore up the integrity of the voting process itself.

Despite the unanimous sense of urgency, the authors of these essays are cautiously optimistic, too. Everyone who participated in this series believes there is hope yet—for democracy, and for the institutions that support it. They also believe that technology can help, though it will take time and money to make it so. Democracy can still thrive in this uncertain age, they argue, but not without deliberate and immediate action from the people who believe it is worth protecting.

We’ll publish a new essay every day for the next several weeks, beginning with Shannon Vallor’s “Lessons From Isaac Asimov’s Multivac.”…(More)”