What Is Human-Centric Design?


Zack Quaintance at GovTech: “…Government services, like all services, have historically used some form of design to deploy user-facing components. The design portion of this equation is nothing new. What Olesund says is new, however, is the human-centric component.

“In the past, government services were often designed from the perspective and need of the government institution, not necessarily with the needs or desires of residents or constituents in mind,” said Olesund. “This might lead, for example, to an accumulation of stats and requirements for residents, or utilization of outdated technology because the government institution is locked into a contract.”

Basically, government has never set out to design its services to be clunky or hard to use. These qualities have, however, grown out of the legally complex frameworks that governments must adhere to, which can lead them to prioritize the needs of the institution over those of the people using the services.

Change, however, is underway. Human-centric design is one of the main priorities of the U.S. Digital Service (USDS) and 18F, a pair of organizations created under the Obama administration with missions that largely involve making government services more accessible to the citizenry through efficient use of tech.

Although the needs of state and municipal governments are more localized, the gov tech work done at the federal level by the USDS and 18F has at times served as a benchmark or guidepost for smaller government agencies.

“They both redesign services to make them digital and user-friendly,” Olesund said. “But they also do a lot of work creating frameworks and best practices for other government agencies to adopt in order to achieve some of the broader systemic change.”

One of the most tangible examples of human-centered design at the state or local level can be found at Michigan’s Department of Health and Human Services, which recently worked with the Detroit-based design studio Civilla to reduce its paper services application from 40 pages, some 18,000 words and 1,000 questions down to 18 pages, 3,904 words and 213 questions. Currently, Civilla is working with the nonprofit civic tech group Code for America to bring the same level of human-centered design progress to the state’s digital services.

Other work is underway in San Francisco’s City Hall and within the state of California. A number of cities also have iTeams funded through Bloomberg Philanthropies with a mission to innovate in ways that solve ongoing municipal problems, work that often requires human-centric design….(More)”.

Data rights are civic rights: a participatory framework for GDPR in the US?


Elena Souris and Hollie Russon Gilman at Vox: “…While online rights are coming into question, it’s worth considering how those will overlap with offline rights and civic engagement.

The two may initially seem completely separate, but democracy itself depends on information and communication, and a balance of privacy (secret ballot) and transparency. As communication moves almost entirely to networked online technology platforms, the governance questions surrounding data and privacy have far-reaching civic and political implications for how people interact with all aspects of their lives, from commerce and government services to their friends, families, and communities. That is why we need a conversation about data protections, empowering users with their own information, and transparency — ultimately, data rights are now civic rights…

What could a golden mean in the US look like? Is it possible to take principles of the GDPR and apply a more community based, citizen-centric approach across states and localities in the United States? Could a US version of the GDPR be designed in a way that included public participation? Perhaps there could be an ongoing participatory role? Most of all, the questions underpinning data regulation need to serve as an impetus for an honest conversation about equity across digital access, digital literacy, and now digital privacy.

Across the country, we’re already seeing successful experiments with a more citizen-inclusive democracy, with localities and cities rising as engines of American re-innovation and laboratories of participatory democracy. Thanks to our federalist system, states are already paving the way for greater electoral reform, from public financing of campaigns to experiments with structures such as ranked-choice voting.

In these local federalist experiments, civic participation is slowly becoming a crucial tool. Innovations from participatory budgeting to interactive policy co-production sessions are giving people in communities a direct say in public policies. For example, the Rural Climate Dialogues in Minnesota empower rural residents to impact policy on long-term climate mitigation. Bowling Green, Kentucky, recently used the online deliberation platform Polis to identify common policy areas for consensus building. Scholars have been writing about various potential participatory models for our digital lives as well, including civic trusts.

Can we take these principles and begin a serious conversation for how to translate the best privacy practices, tools, and methods to ensure that people’s valuable online and offline resources — including their trust, attention span, and vital information — are also protected and honored? Since the people are a primary stakeholder in the conversation about civic data and data privacy, they should have a seat at the table.

Including citizens and residents in these conversations could have a big policy impact. First, working toward a participatory governance framework for civic data would enable people to understand the value of their data in the open market. Second, it would provide greater transparency to the value of networks — an individual’s social graph, a valuable asset, which, until now, people are generating in aggregate without anything in return. Third, it could amplify concerns of more vulnerable data users, including elderly or tech-illiterate citizens — and even refugees and international migrants, as Andrew Young and Stefaan Verhulst recently argued in the Stanford Social Innovation Review.

There are already templates and road maps for responsible data, but talking to those users themselves with a participatory governance approach could make them even more effective. Finally, citizens can help answer tough questions about what we value and when and how we need to make ethical choices with data.

Because data-collecting organizations will have to comply abroad soon, the GDPR is a good opportunity for the American social sector to consider data rights as civic rights and incorporate a participatory process to meet this challenge. Instead of simply assuming regulatory agencies will pave the way, a more participatory data framework could foster an ongoing process of civic empowerment and make the outcome more effective. It’s too soon to know the precise forms or mechanisms new data regulation should take. Instead of a rigid, predetermined format, the process needs to be community-driven by design — ensuring traditionally marginalized communities are front and center in this conversation, not only the elites who already hold the microphone.

It won’t be easy. Building a participatory governance structure for civic data will require empathy, compromise, and potentially challenging the preconceived relationship between people, institutions, and their information. The interplay between our online and offline selves is a continuous process of learning by trial and error. But if we simply replicate the top-down structures of the past, we can’t evolve toward a truly empowered digital democratic future. Instead, let’s use the GDPR as an opening in the United States for advancing the principles of a more transparent and participatory democracy….(More)”.

Friends with Academic Benefits


The new or interesting story isn’t just that Valerie, Betsy, and Steve’s friends had different social and academic impacts, but that they had various types of friendship networks. My research points to the importance of network structure—that is, the relationships among their friends—for college students’ success. Different network structures result from students’ experiences—such as race- and class-based marginalization on this predominantly White campus—and shape students’ experiences by helping or hindering them academically and socially.

I used social network techniques to analyze the friendship networks of 67 MU students and found they clumped into three distinctive types—tight-knitters, compartmentalizers, and samplers. Tight-knitters have one densely woven friendship group in which nearly all their friends are friends with one another. Compartmentalizers’ friends form two to four clusters, where friends know each other within clusters but rarely across them. And samplers make a friend or two from a variety of places, but the friends remain unconnected to each other. As shown in the figures, tight-knitters’ networks resemble a ball of yarn, compartmentalizers’ a bow-tie, and samplers’ a daisy. In these network maps, the person I interviewed is at the center and every other dot represents a friend, with lines representing connections among friends (that is, whether the person I interviewed believed that the two people knew each other). During the interviews, participants defined what friendship meant to them and listed as many friends as they liked (ranging from three to 45).
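The three network types can be made concrete with a small sketch. Assuming each interviewee's network is recorded as a list of friends plus the ties among those friends, a rough classifier might look at the density of friend-to-friend ties and the number of clusters those ties form. The function names, thresholds, and component-counting heuristic below are illustrative assumptions, not the study's actual method.

```python
# Hypothetical sketch: labeling an ego network as a "tight-knitter",
# "compartmentalizer", or "sampler" from ties among an interviewee's
# friends. Thresholds are invented for illustration.

def friend_components(friends, ties):
    """Group friends into clusters connected by friend-to-friend ties."""
    # adjacency among friends only (the interviewee connects to everyone)
    adj = {f: set() for f in friends}
    for a, b in ties:
        adj[a].add(b)
        adj[b].add(a)
    seen, components = set(), []
    for f in friends:
        if f in seen:
            continue
        stack, comp = [f], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        components.append(comp)
    return components

def classify_network(friends, ties):
    n = len(friends)
    if n < 2:
        return "sampler"
    # share of possible friend pairs who know each other
    density = 2 * len(ties) / (n * (n - 1))
    comps = friend_components(friends, ties)
    clusters = [c for c in comps if len(c) > 1]
    isolates = sum(1 for c in comps if len(c) == 1)
    if density > 0.5:
        return "tight-knitter"       # ball of yarn: most friends are tied
    if 2 <= len(clusters) <= 4 and isolates < n / 2:
        return "compartmentalizer"   # bow-tie: a few distinct clusters
    return "sampler"                 # daisy: friends mostly unconnected
```

For instance, four friends who all know one another classify as a tight-knitter network, while two separate trios of mutual friends classify as a compartmentalizer.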

The students’ friendship network types influenced how friends mattered for their academic and social successes and failures. Like Valerie, most Black and Latina/o students were tight-knitters. Their dense friendship networks provided a sense of home as a minority on a predominantly White campus. Tight-knit networks could provide academic support and motivation (as they did for Valerie) or pull students down academically if their friends lacked academic skills and motivation. Most White students were compartmentalizers like Betsy, and they succeeded with moderate levels of social support from friends and with social support and academic support from different clusters. Samplers came from a range of class and race backgrounds. Like Steve, samplers typically succeeded academically without relying on their friends. Friends were fun people who neither helped nor hurt them academically. Socially, however, samplers reported feeling lonely and lacking social support….(More)”.

Participatory Budgeting: Step to Building Active Citizenship or a Distraction from Democratic Backsliding?


David Sasaki: “Is there any there there? That’s what we wanted to uncover beneath the hype and skepticism surrounding participatory budgeting, an innovation in democracy that began in Brazil in 1989 and has quickly spread to nearly every corner of the world like a viral hashtag….We ended up selecting two groups of consultants for two phases of work. The first phase was led by three academic researchers — Brian Wampler, Mike Touchton and Stephanie McNulty — to synthesize what we know broadly about PB’s impact and where there are gaps in the evidence. mySociety led the second phase, which originally intended to identify the opportunities and challenges faced by civil society organizations and public officials that implement participatory budgeting. However, a number of unforeseen circumstances, including contested elections in Kenya and a major earthquake in Mexico, shifted mySociety’s focus to take a global, field-wide perspective.

In the end, we were left with two reports that were similar in scope and differed in perspective. Together they make for compelling reading. And while they come from different perspectives, they settle on similar recommendations. I’ll focus on just three: 1) the need for better research, 2) the lack of global coordination, and 3) the emerging opportunity to link natural resource governance with participatory budgeting….

As we consider some preliminary opportunities to advance participatory budgeting, we are clear-eyed about the risks and challenges. In the face of democratic backsliding and the concern that liberal democracy may not survive the 21st century, are these efforts to deepen local democracy merely a distraction from a larger threat, or is this a way to build active citizenship? Also, implementing PB is expensive — both in terms of money and time; is it worth the investment? Is PB just the latest checkbox for governments that want a reputation for supporting citizen participation without investing in the values and process it entails? Just like the proliferation of fake “consultation meetings,” fake PB could merely exacerbate our disappointment with democracy. What should we make of the rise of participatory budgeting in quasi-authoritarian contexts like China and Russia? Is PB a tool for undemocratic central governments to keep local governments in check while giving citizens a simulacrum of democratic participation? Crucially, without intentional efforts to be inclusive like we’ve seen in Boston, PB could merely direct public resources to those neighborhoods with the most outspoken and powerful residents.

On the other hand, we don’t want to dismiss the significant opportunities that come with PB’s rapid global expansion. For example, what happens when social movements lose their momentum between election cycles? Participatory budgeting could create a civic space for social movements to pursue concrete outcomes while engaging with neighbors and public officials. (In China, it has even helped address the urban-rural divide on perspectives toward development policy.) Meanwhile, social media have exacerbated our human tendency to complain, but participatory budgeting requires us to shift our perspective from complaints to engaging with others on solutions. It could even serve as a gateway to deeper forms of democratic participation and increased trust between governments, civil society organizations, and citizens. Perhaps participatory budgeting is the first step we need to rebuild our civic infrastructure and make space for more diverse voices to steer our complex public institutions.

Until we have more research and evidence, however, these possibilities remain speculative….(More)”.

Behavioral Economics: Are Nudges Cost-Effective?


Carla Fried at UCLA Anderson Review: “Behavioral science does not suffer from a lack of academic focus. A Google Scholar search for the term delivers more than three million results.

While there is an abundance of research into how human nature can muck up our decision-making process and the potential for well-placed nudges to help guide us to better outcomes, the field has kept rather mum on a basic question: Are behavioral nudges cost-effective?

That’s an ever more salient question as the art of the nudge is increasingly woven into public policy initiatives. In 2009, the Obama administration set up a nudge unit within the White House Office of Information and Regulatory Affairs, and a year later the U.K. government launched its own unit. Harvard’s Cass Sunstein, co-author of the book Nudge, headed the U.S. effort. His co-author, the University of Chicago’s Richard Thaler — who won the 2017 Nobel Prize in Economics — helped develop the U.K.’s Behavioural Insights Team. Nudge units are now humming away in other countries, including Germany and Singapore, as well as at the World Bank, various United Nations agencies and the Organisation for Economic Co-operation and Development (OECD).

Given the interest in the potential for behavioral science to improve public policy outcomes, a team of nine experts, including UCLA Anderson’s Shlomo Benartzi, Sunstein and Thaler, set out to explore the cost-effectiveness of behavioral nudges relative to more traditional forms of government interventions.

In addition to conducting their own experiments, the researchers looked at published research that addressed four areas where public policy initiatives aim to move the needle to improve individuals’ choices: saving for retirement, applying to college, energy conservation and flu vaccinations.

For each topic, they culled studies that focused on both nudge approaches and more traditional mandates such as tax breaks, education and financial incentives, and calculated cost-benefit estimates for both types of studies. Research used in this study was published between 2000 and 2015. All cost estimates were inflation-adjusted…
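The comparison described above can be illustrated with a minimal sketch: restate each intervention's historical cost in common base-year dollars, then divide the outcome gained by that real cost. The function names, CPI values, and program figures below are invented for illustration and are not numbers from the study.

```python
# Hypothetical sketch of a cost-effectiveness comparison between a
# nudge and a traditional incentive. All figures are made up.

def adjust_for_inflation(cost, year_cpi, base_cpi):
    """Restate a historical cost in base-year dollars via a CPI ratio."""
    return cost * (base_cpi / year_cpi)

def effectiveness_per_dollar(outcome_gain, cost, year_cpi, base_cpi):
    """Outcome units gained per inflation-adjusted program dollar."""
    real_cost = adjust_for_inflation(cost, year_cpi, base_cpi)
    return outcome_gain / real_cost

# e.g. extra retirement savings generated per program dollar spent,
# for a cheap nudge versus a costly tax incentive with the same effect:
nudge = effectiveness_per_dollar(outcome_gain=100.0, cost=2.0,
                                 year_cpi=218.1, base_cpi=237.0)
tax_incentive = effectiveness_per_dollar(outcome_gain=100.0, cost=60.0,
                                         year_cpi=218.1, base_cpi=237.0)
```

Under these invented figures, the nudge delivers roughly thirty times the return per dollar, which is the kind of ratio the researchers' framework is designed to surface.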

The study itself should serve as a nudge for governments to consider adding nudging to their policy toolkits, as this approach consistently delivered a high return on investment, relative to traditional mandates and policies….(More)”.

A New Model for Industry-Academic Partnerships


Working Paper by Gary King and Nathaniel Persily: “The mission of the academic social sciences is to understand and ameliorate society’s greatest challenges. The data held by private companies holds vast potential to further this mission. Yet, because of its interaction with highly politicized issues, customer privacy, proprietary content, and differing goals of firms and academics, these data are often inaccessible to university researchers.

We propose here a new model for industry-academic partnerships that addresses these problems via a novel organizational structure: Respected scholars form a commission which, as a trusted third party, receives access to all relevant firm information and systems, and then recruits independent academics to do research in specific areas following standard peer review protocols organized and funded by nonprofit foundations.

We also report on a partnership we helped forge under this model to make data available about the extremely visible and highly politicized issues surrounding the impact of social media on elections and democracy. In our partnership, Facebook will provide privacy-preserving data and access; seven major politically and substantively diverse nonprofit foundations will fund the research; and the Social Science Research Council will oversee the peer review process for funding and data access….(More)”.

Democracy is in danger when the census undercounts vulnerable populations


Emily Klancher Merchant at The Conversation: “The 2020 U.S. Census is still two years away, but experts and civil rights groups are already disputing the results. At issue is whether the census will fulfill the Census Bureau’s mandate to “count everyone once, only once, and in the right place.”

The task is hardly as simple as it seems and has serious political consequences. Recent changes to the 2020 census, such as asking about citizenship status, will make populations already vulnerable to undercounting even more likely to be missed. These vulnerable populations include the young, poor, nonwhite, non-English-speaking, foreign-born and transient.

An accurate count is critical to the functioning of the U.S. government. Census data determine how the power and resources of the federal government are distributed across the 50 states. This includes seats in the House, votes in the Electoral College and funds for federal programs. Census data also guide the drawing of congressional and other voting districts and the enforcement of civil and voting rights laws.

Places where large numbers of people go uncounted get less than their fair share of political representation and federal resources. When specific racial and ethnic groups are undercounted, it is harder to identify and rectify violations of their civil rights. My research on the international history of demography demonstrates that the question of how to equitably count the population is not new, nor is it unique to the United States. The experience of the United States and other countries may hold important lessons as the Census Bureau finalizes its plans for the 2020 count.

Let’s take a look at that history….

In 1790, the United States became the first country to take a regular census. Following World War II, the U.S. government began to promote census-taking in other countries. U.S. leaders believed data about the size and location of populations throughout the Western Hemisphere could help the government plan defense. What’s more, U.S. businesses could also use the data to identify potential markets and labor forces in nearby countries.

The U.S. government began investing in a program called the Census of the Americas. Through this program, the State Department provided financial support and the Census Bureau provided technical assistance to Western Hemisphere countries taking censuses in 1950.

United Nations demographers also viewed the Census of the Americas as an opportunity. Data that were standardized across countries could serve as the basis for projections of world population growth and the calculation of social and economic indicators. They also hoped that censuses would provide useful information to newly established governments. The U.N. turned the Census of the Americas into a global affair, recommending that “all Member States planning population censuses about 1950 use comparable schedules so far as possible.” Since 1960, the U.N. has sponsored a World Census Program every 10 years. The 2020 World Census Program will be the seventh round….

Not all countries went along with the program. For example, Lebanon’s Christian rulers feared that a census would show Christians to be a minority, undermining the legitimacy of their government. However, for the 65 sovereign countries taking censuses between 1945 and 1954, leaders faced the same question the U.S. faces today: How can we make sure that everyone has an equal chance of being counted?…(More)”.

Algorithmic Sovereignty


Thesis by Denis Roio: “This thesis describes a practice-based research journey across various projects dealing with the design of algorithms, to highlight the governance implications of design choices made in them. The research provides answers and documents methodologies to address the urgent need for more awareness of decisions made by algorithms about the social and economic context in which we live. Algorithms constitute a foundational basis across different fields of study: policy making, governance, art and technology. The ability to understand what is inscribed in such algorithms, what the consequences of their execution are and what agency is left for the living world is crucial. Yet there is a lack of interdisciplinary and practice-based literature, while specialised treatises are too narrow to relate to the broader context in which algorithms are enacted.

This thesis advances the awareness of algorithms and related aspects of sovereignty through a series of projects documented as participatory action research. One of the projects described, Devuan, led to the realisation of a new, worldwide-renowned operating system. Another project, “sup”, consists of a minimalist approach to mission-critical software and literate programming to enhance the security and reliability of applications. Another project, D-CENT, consisted of a three-year program of cutting-edge research funded by the EU commission on the emerging dynamics of participatory democracy connected to the technologies adopted by citizen organizations.

My original contribution to knowledge lies within the function that the research underpinning these projects has on the ability to gain a better understanding of sociopolitical aspects connected to the design and management of algorithms. It suggests that we can improve the design and regulation of future public, private and common spaces which are increasingly governed by algorithms by understanding not only economical and legal implications, but also the connections between design choices and the sociopolitical context for their development and execution….(More)”.

How Democracy Can Survive Big Data


Colin Koopman in The New York Times: “…The challenge of designing ethics into data technologies is formidable. This is in part because it requires overcoming a century-long ethos of data science: Develop first, question later. Datafication first, regulation afterward. A glimpse at the history of data science shows as much.

The techniques that Cambridge Analytica uses to produce its psychometric profiles are the cutting edge of data-driven methodologies first devised 100 years ago. The science of personality research was born in 1917. That year, in the midst of America’s fevered entry into war, Robert Sessions Woodworth of Columbia University created the Personal Data Sheet, a questionnaire that promised to assess the personalities of Army recruits. The war ended before Woodworth’s psychological instrument was ready for deployment, but the Army had envisioned its use according to the precedent set by the intelligence tests it had been administering to new recruits under the direction of Robert Yerkes, a professor of psychology at Harvard at the time. The data these tests could produce would help decide who should go to the fronts, who was fit to lead and who should stay well behind the lines.

The stakes of those wartime decisions were particularly stark, but the aftermath of those psychometric instruments is even more unsettling. As the century progressed, such tests — I.Q. tests, college placement exams, predictive behavioral assessments — would affect the lives of millions of Americans. Schoolchildren who may have once or twice acted out in such a way as to prompt a psychometric evaluation could find themselves labeled, setting them on an inescapable track through the education system.

Researchers like Woodworth and Yerkes (or their Stanford colleague Lewis Terman, who formalized the first SAT) did not anticipate the deep consequences of their work; they were too busy pursuing the great intellectual challenges of their day, much like Mr. Zuckerberg in his pursuit of the next great social media platform. Or like Cambridge Analytica’s Christopher Wylie, the twentysomething data scientist who helped build psychometric profiles of two-thirds of all Americans by leveraging personal information gained through uninformed consent. All of these researchers were, quite understandably, obsessed with the great data science challenges of their generation. Their failure to consider the consequences of their pursuits, however, is not so much their fault as it is our collective failing.

For the past 100 years we have been chasing visions of data with a singular passion. Many of the best minds of each new generation have devoted themselves to delivering on the inspired data science promises of their day: intelligence testing, building the computer, cracking the genetic code, creating the internet, and now this. We have in the course of a single century built an entire society, economy and culture that runs on information. Yet we have hardly begun to engineer data ethics appropriate for our extraordinary information carnival. If we do not do so soon, data will drive democracy, and we may well lose our chance to do anything about it….(More)”.

Reapplying behavioral symmetry: public choice and choice architecture


Michael David Thomas in Public Choice: “New justifications for government intervention based on behavioral psychology rely on a behavioral asymmetry between expert policymakers and market participants. Public choice theory applied the behavioral symmetry assumption to policy making in order to illustrate how special interests corrupt the suppositions of benevolence on the part of policy makers. Cognitive problems associated with market choices have been used to argue for even more intervention.

If behavioral symmetry is applied to the experts and not just to market participants, problems with this approach to public policy formation become clear. Manipulation, cognitive capture, and expert bias are among the problems associated with a behavioral theory of market failure. The application of behavioral symmetry to the expanding role of choice architecture will help to limit the bias in behavioral policy. Since experts are also subject to cognitive failures, policy must include an evaluation of expert error. Like the rent-seeking literature before it, a theory of cognitive capture points out the systematic problems with a theory of asymmetry between policy experts and citizens when it comes to policy making….(More)”.