Findings of the Big Data and Privacy Working Group Review


John Podesta at the White House Blog: “Over the past several days, severe storms have battered Arkansas, Oklahoma, Mississippi and other states. Dozens of people have been killed and entire neighborhoods turned to rubble and debris as tornadoes have touched down across the region. Natural disasters like these present a host of challenges for first responders. How many people are affected, injured, or dead? Where can they find food, shelter, and medical attention? What critical infrastructure might have been damaged?
Drawing on open government data sources, including Census demographics and NOAA weather data, along with their own demographic databases, Esri, a geospatial technology company, has created a real-time map showing where the twisters have been spotted and how the storm systems are moving. They have also used these data to show how many people live in the affected area, and summarize potential impacts from the storms. It’s a powerful tool for emergency services and communities. And it’s driven by big data technology.
In January, President Obama asked me to lead a wide-ranging review of “big data” and privacy—to explore how these technologies are changing our economy, our government, and our society, and to consider their implications for our personal privacy. Together with Secretary of Commerce Penny Pritzker, Secretary of Energy Ernest Moniz, the President’s Science Advisor John Holdren, the President’s Economic Advisor Jeff Zients, and other senior officials, our review sought to understand what is genuinely new and different about big data and to consider how best to encourage the potential of these technologies while minimizing risks to privacy and core American values.
Over the course of 90 days, we met with academic researchers and privacy advocates, with regulators and the technology industry, with advertisers and civil rights groups. The President’s Council of Advisors on Science and Technology conducted a parallel study of the technological trends underpinning big data. The White House Office of Science and Technology Policy jointly organized three university conferences at MIT, NYU, and U.C. Berkeley. We issued a formal Request for Information seeking public comment, and hosted a survey to generate even more public input.
Today, we presented our findings to the President. We knew better than to try to answer every question about big data in three months. But we are able to draw important conclusions and make concrete recommendations for Administration attention and policy development in a few key areas.
There are a few technological trends that bear drawing out. The declining cost of collection, storage, and processing of data, combined with new sources of data like sensors, cameras, and geospatial technologies, mean that we live in a world of near-ubiquitous data collection. All this data is being crunched at a speed that is increasingly approaching real-time, meaning that big data algorithms could soon have immediate effects on decisions being made about our lives.
The big data revolution presents incredible opportunities in virtually every sector of the economy and every corner of society.
Big data is saving lives. Infections are dangerous—even deadly—for many babies born prematurely. By collecting and analyzing millions of data points from a neonatal intensive care unit (NICU), one study was able to identify factors, like slight increases in body temperature and heart rate, that serve as early warning signs an infection may be taking root—subtle changes that even the most experienced doctors wouldn’t have noticed on their own.
Big data is making the economy work better. Jet engines and delivery trucks now come outfitted with sensors that continuously monitor hundreds of data points and send automatic alerts when maintenance is needed. Utility companies are starting to use big data to predict periods of peak electric demand, adjusting the grid to be more efficient and potentially averting brown-outs.
Big data is making government work better and saving taxpayer dollars. The Centers for Medicare and Medicaid Services have begun using predictive analytics—a big data technique—to flag likely instances of reimbursement fraud before claims are paid. The Fraud Prevention System helps identify the highest-risk health care providers for waste, fraud, and abuse in real time and has already stopped, prevented, or identified $115 million in fraudulent payments.
But big data raises serious questions, too, about how we protect our privacy and other values in a world where data collection is increasingly ubiquitous and where analysis is conducted at speeds approaching real time. In particular, our review raised the question of whether the “notice and consent” framework, in which a user grants permission for a service to collect and use information about them, still allows us to meaningfully control our privacy as data about us is increasingly used and reused in ways that could not have been anticipated when it was collected.
Big data raises other concerns, as well. One significant finding of our review was the potential for big data analytics to lead to discriminatory outcomes and to circumvent longstanding civil rights protections in housing, employment, credit, and the consumer marketplace.
No matter how quickly technology advances, it remains within our power to ensure that we both encourage innovation and protect our values through law, policy, and the practices we encourage in the public and private sector. To that end, we make six actionable policy recommendations in our report to the President:
Advance the Consumer Privacy Bill of Rights. Consumers deserve clear, understandable, reasonable standards for how their personal information is used in the big data era. We recommend the Department of Commerce take appropriate consultative steps to seek stakeholder and public comment on what changes, if any, are needed to the Consumer Privacy Bill of Rights, first proposed by the President in 2012, and to prepare draft legislative text for consideration by stakeholders and submission by the President to Congress.
Pass National Data Breach Legislation. Big data technologies make it possible to store significantly more data, and further derive intimate insights into a person’s character, habits, preferences, and activities. That makes the potential impacts of data breaches at businesses or other organizations even more serious. A patchwork of state laws currently governs requirements for reporting data breaches. Congress should pass legislation that provides for a single national data breach standard, along the lines of the Administration’s 2011 Cybersecurity legislative proposal.
Extend Privacy Protections to non-U.S. Persons. Privacy is a worldwide value that should be reflected in how the federal government handles personally identifiable information about non-U.S. citizens. The Office of Management and Budget should work with departments and agencies to apply the Privacy Act of 1974 to non-U.S. persons where practicable, or to establish alternative privacy policies that apply appropriate and meaningful protections to personal information regardless of a person’s nationality.
Ensure Data Collected on Students in School Is Used for Educational Purposes. Big data and other technological innovations, including new online course platforms that provide students real-time feedback, promise to transform education by personalizing learning. At the same time, the federal government must ensure educational data linked to individual students gathered in school is used for educational purposes, and protect students against their data being shared or used inappropriately.
Expand Technical Expertise to Stop Discrimination. The detailed personal profiles held about many consumers, combined with automated, algorithm-driven decision-making, could lead—intentionally or inadvertently—to discriminatory outcomes, or what some are already calling “digital redlining.” The federal government’s lead civil rights and consumer protection agencies should expand their technical expertise to be able to identify practices and outcomes facilitated by big data analytics that have a discriminatory impact on protected classes, and develop a plan for investigating and resolving violations of law.
Amend the Electronic Communications Privacy Act. The laws that govern protections afforded to our communications were written before email, the internet, and cloud computing came into wide use. Congress should amend ECPA to ensure the standard of protection for online, digital content is consistent with that afforded in the physical world—including by removing archaic distinctions between email left unread or over a certain age.
We also identify several broader areas ripe for further study, debate, and public engagement that, collectively, we hope will spark a national conversation about how to harness big data for the public good. We conclude that we must find a way to preserve our privacy values in both the domestic and international marketplace. We urgently need to build capacity in the federal government to identify and prevent new modes of discrimination that could be enabled by big data. We must ensure that law enforcement agencies using big data technologies do so responsibly, and that our fundamental privacy rights remain protected. Finally, we recognize that data is a valuable public resource, and call for continuing the Administration’s efforts to open more government data sources and make investments in research and technology.
While big data presents new challenges, it also presents immense opportunities to improve lives, and the United States is perhaps better suited to lead this conversation than any other nation on earth. Our innovative spirit, technological know-how, and deep commitment to values of privacy, fairness, non-discrimination, and self-determination will help us harness the benefits of the big data revolution and encourage the free flow of information while working with our international partners to protect personal privacy. This review is but one piece of that effort, and we hope it spurs a conversation about big data across the country and around the world.
Read the Big Data Report.
See the fact sheet from today’s announcement.

Can technology end homelessness?


Geekwire: “At the heart of Seattle’s Pioneer Square neighborhood exists a unique juxtaposition.
Inside a two-story brick building is the Impact Hub co-working space and business incubator, a place where entrepreneurs are busily working on ideas to improve the world we live in.
But walk outside the Impact Hub’s doors, and you’ll enter an entirely different world.
Homelessness. Drugs. Violence.
Now, those two contrasting scenes are coming together.
This weekend, more than 100 developers, designers, entrepreneurs and do-gooders will team up at the Impact Hub for the first-ever Hack to End Homelessness, a four-day event that encourages participants to envision and create ideas to alleviate the homelessness problem in Seattle.
The Washington Low Income Housing Alliance, Real Change and several other local homeless services and advocacy groups have already submitted project proposals, which range from an e-commerce site showcasing artwork of homeless youth to a social network focusing on low-end mobile phones for people who are homeless.
Seattle has certainly made an effort to fix its homelessness problem. Back in 2005, the Committee to End Homelessness established a 10-year plan to dramatically reduce the number of people without homes in the region. By the end of 2014, the goal was to “virtually end” homelessness in King County.
But fast-forward to today and that hasn’t exactly come to fruition. There are more than 2,300 people in Seattle sleeping in the streets — up 16 percent from 2013 — and city data shows nearly 10,000 households checking into shelters or transitional housing last year. Thousands of others may not be on the streets or in shelters, yet still live without a permanent place to sleep at night.
While some efforts of the committee have helped curb homelessness, it’s clear that there is still a problem — one that has likely been affected by rising rent prices in the area.
Candace Faber, one of the event organizers, said that her team has been shocked by the growth of homelessness in the Seattle metropolitan area. They’re worried not only about how many people do not have a permanent home, but what kind of impact the problem is having on the city as a whole.
“With Seattle experiencing the highest rent hikes in the nation, we’re concerned that, without action, our city will not be able to remain the dynamic, affordable place it is now,” Faber said. “We don’t want to lose our entrepreneurial spirit or wind up with a situation like San Francisco, where you can’t afford to innovate without serious VC backing and there’s serious tension between the housing community and tech workers.”
That raises the question: How, exactly, can technology fix the homeless problem? The stories of these Seattle entrepreneurs help provide the answer.

FROM SHELTERS TO STARTUPS

Kyle Kesterson knows a thing or two about being homeless.
That’s because the Freak’n Genius co-founder and CEO spent his childhood living in 14 different homes and apartments, in addition to a bevy of shelters and transitional houses. The moving around and lack of permanent housing made going to school difficult, and finding acceptance anywhere was nearly impossible.
“I was always the new kid, the poor kid, and the smallest kid,” Kesterson says now. “You just become the target of getting picked on.”
By the time he was 15, Kesterson realized that school wasn’t a place that fit his learning style. So, he dropped out to help run his parents’ house-cleaning business in Seattle.
That’s when Kesterson, now a talented artist and designer, further developed his creative skills. The Yuba City, Calif. native would spend hours locked in a room perusing deviantART.com, a new Internet community where other artists from around the world were sharing their own work and receiving feedback.

So now Kesterson, who plans on attending the final presentations at the Hack to End Homelessness event on Sunday, is using his own experiences to teach youth about finding solutions to problems with an entrepreneurial lens. When it comes to helping at-risk youth, or those who are homeless, Kesterson says it’s about finding a thriving and supportive environment — the same one he surrounded himself with while surfing through deviantART 14 years ago.
“Externally, our environment plays a significant role in either setting people up for growth and success, or weighing people down, sucking the life out of them, and eventually leaving them at or near rock bottom,” he said.
For Kesterson, it’s entrepreneurs who can help create these environments for people, and show them that they have the ability and power to solve problems and truly make a difference.
“Entrepreneurs need to first focus on the external and internal environments of those who are homeless,” he said. “Support, help, and inspire. Become a part of their network to mentor them and help them make connections for the challenges they face, the way we lean on our own mentor networks.”

FIXING THE ROOT

Lindsay Caron Epstein has always, in some shape or form, been an entrepreneur at heart.
She figured out a way to survive after moving to Arizona from New Jersey with only $100. She figured out how to land a few minimum wage jobs and eventually start a non-profit community center for at-risk youth at just 22 years old.
And now, Caron is using her entrepreneurial spirit to help figure out ways to fix social challenges like homelessness.
The 36-year-old is CEO and founder of ActivateHub, a startup working alongside other socially-conscious companies in Seattle’s Fledge Accelerator. ActivateHub is a “community building social action network,” or a place where people can find local events put on by NGOs and other organizations working on a wide variety of issues.
Caron found the inspiration to start the company after organizing programs for troubled youth in Arizona and studying the homelessness problem while in school. She became fascinated with how communities were built in a way that could help people and pull them out of tough situations, but there didn’t appear to be an easy way for people to get involved.
“If you do a Google search for poverty, homelessness, climate change — any issue you care about — you’ll just find news articles and blogs,” Caron explained. “You don’t find who in your community is working on those problems and you don’t find out how you can get involved.”
Caron says her company can help those who may not have a home or anything to do. ActivateHub, she said, might give them a reason to become engaged in something and create a sense of value in the community.
“It gives people a reason to clean up and enables them to make connections,” said Caron, who will also be attending this weekend’s event. “Some people need that inspiration and purpose to change their situation, and a lot of times that motivation isn’t there.”
Of course, ActivateHub isn’t going to solve the homelessness problem by itself. Caron knows this and thinks that entrepreneurs can help by focusing on more preventative measures. Sure, technology can be used to help connect homeless people to certain resources, but there’s a deeper issue at hand for Caron…”

Mapping the Intersection Between Social Media and Open Spaces in California


Stamen Design: “Last month, Stamen launched parks.stamen.com, a project we created in partnership with the Electric Roadrunner Lab, with the goal of revealing the diversity of social media activity that happens inside parks and other open spaces in California. If you haven’t already looked at the site, please go visit it now! Find your favorite park, or the parks that are nearest to you, or just stroll between random parks using the wander button. For more background about the goals of the project, read Eric’s blog post: A Conversation About California Parks.
In this post I’d like to describe some of the algorithms we use to collect the social media data that feeds the park pages. Currently we collect data from four social media platforms: Twitter, Foursquare, Flickr, and Instagram. We chose these because they all have public APIs (Application Programming Interfaces) that are easy to work with, and we expect they will provide a view into the different facets of each park, and the diverse communities who enjoy these parks. Each social media service creates its own unique geographies, and its own way of representing these parks. For example, the kinds of photos you upload to Instagram might be different from the photos you upload to Flickr. The way you describe experiences using Twitter might be different from the moments you document by checking into Foursquare. In the future we may add more feeds, but for now there’s a lot we can learn from these four.
Through the course of collecting data from these social network services, I also found that each service’s public API imposes certain constraints on our queries, producing its own intricate patterns. Thus, the quirks of how each API was written result in distinct and fascinating geometries. Also, since we are only interested in parks for this project, the process of culling non-park-related content further produces unusual and interesting patterns. Rural areas have large parks that cover huge areas, while cities have lots of (relatively) tiny parks, which creates its own challenges for how we query the APIs.
Broadly, we followed a similar approach for all the social media services. First, we grab the geocoded data from the APIs. This ignores any media that don’t have a latitude and longitude associated with them. In Foursquare, almost all checkins have a latitude and longitude, and for Flickr and Instagram most photos have a location associated with them. However, for Twitter, only around 1% of all tweets have geographic coordinates. But as we will see, even 1% still results in a whole lot of tweets!
After grabbing the social media data, we intersect it with the outlines of parks and open spaces in California, using polygons from the California Protected Areas Database maintained by GreenInfo Network. Everything that doesn’t intersect one of these parks, we throw away. The following maps represent the data as it looks before the filtering process.
But enough talking, let’s look at some maps!”
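The filtering step Stamen describes above (grab geocoded posts, intersect them with park polygons, throw the rest away) can be sketched in a few lines. Here is a minimal, hypothetical Python version using geopandas; the file name and the sample points are illustrative and this is not Stamen's actual pipeline.

```python
# Minimal sketch of the park-filtering step described above, not Stamen's
# actual pipeline. Assumes a GeoJSON export of park/open-space polygons
# (e.g. from the California Protected Areas Database); the file name and
# sample points below are illustrative.
import geopandas as gpd
from shapely.geometry import Point

parks = gpd.read_file("cpad_open_spaces.geojson").to_crs(epsg=4326)

posts = [
    {"id": 1, "lon": -122.4862, "lat": 37.7694},  # a photo near Golden Gate Park
    {"id": 2, "lon": -122.4194, "lat": 37.7749},  # a tweet downtown, likely not in a park
]
points = gpd.GeoDataFrame(
    posts,
    geometry=[Point(p["lon"], p["lat"]) for p in posts],
    crs="EPSG:4326",
)

# Keep only posts whose coordinates fall inside a park polygon; everything
# else is discarded, as the post describes.
in_parks = gpd.sjoin(points, parks, how="inner", predicate="within")
print(in_parks["id"].tolist())
```

The same spatial join works for any of the four feeds once their records have been reduced to an id plus a longitude/latitude pair.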

Using data to treat the sickest and most expensive patients


Dan Gorenstein for Marketplace (radio):  “Driving to a big data conference a few weeks back, Dr. Jeffrey Brenner brought his compact SUV to a full stop – in the middle of a short highway entrance ramp in downtown Philadelphia…

Here’s what you need to know about Dr. Jeffrey Brenner: He really likes to figure out how things work. And he’s willing to go to extremes to do it – so far that he’s risking his health policy celebrity status.
Perhaps it’s not the smartest move from a guy who just last fall was named a MacArthur Genius, but this month, Brenner began to test his theory for treating some of the sickest and most expensive patients.
“We can actually take the sickest and most complicated patients, go to their bedside, go to their home, go with them to their appointments and help them for about 90 days and dramatically improve outcomes and reduce cost,” he says.
That’s the theory anyway. Like many ideas when it comes to treating the sickest patients, there’s little data to back up that it works.
Brenner’s willing to risk his reputation precisely because he’s not positive his approach for treating folks who cycle in and out of the healthcare system — “super-utilizers” — actually works.
“It’s really easy for me at this point having gotten a MacArthur award to simply declare what we do works and to drive this work forward without rigorously testing it,” Brenner said. “We are not going to do that,” he said. “We don’t think that’s the right thing to do. So we are going to do a randomized controlled trial on our work and prove whether it works and how well it works.”
Helping lower costs and improve care for the super-utilizers is one of the most pressing policy questions in healthcare today. And given its importance, there is a striking lack of data in the field.
People like to call randomized controlled trials (RCTs) the gold standard of scientific testing because two groups are randomly assigned – one gets the treatment, while the other doesn’t – and researchers closely monitor differences.
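For illustration only, here is a toy sketch of that design: randomly assign patients to a treatment group or a control group, then compare an outcome. Every identifier and number below is invented, and this is in no way a stand-in for the actual trial described later in the piece.

```python
# Toy illustration of the randomized controlled trial design described above:
# randomly assign patients to treatment or control, then compare an outcome.
# All identifiers and costs are invented for illustration.
import random
import statistics

random.seed(42)
patients = [f"patient_{i}" for i in range(200)]
random.shuffle(patients)

treatment = set(patients[:100])   # e.g. receive the 90-day intensive intervention
control = set(patients[100:])     # receive usual care

# Hypothetical post-study costs (treatment assumed somewhat cheaper on average).
cost = {p: random.gauss(30000 if p in treatment else 34000, 5000) for p in patients}

avg_treatment = statistics.mean(cost[p] for p in treatment)
avg_control = statistics.mean(cost[p] for p in control)
print(f"treatment: ${avg_treatment:,.0f}   control: ${avg_control:,.0f}")
```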
But a 2012 British Medical Journal article found that, over the last 25 years, a total of six RCTs have focused on care delivery for super-utilizers.


…Every major health insurance company – Medicare and Medicaid, too – has spent billions on programs for super-utilizers. The absence of rigorous evidence raises the question: Is all this effort built on health policy quicksand?
Not being 100 percent sure can be dangerous, says Duke behavioral scientist Peter Ubel, particularly in healthcare.
Ubel said back in the 1980s and 90s doctors prescribed certain drugs for irregular heartbeats. The medication, he said, made those weird rhythms go away, leaving beautiful-looking EKGs.
“But no one had tested whether people receiving these drugs actually lived longer, and many people thought, ‘Why would you do that? We can look at their cardiogram and see that they’re getting better,’” Ubel said. “Finally when somebody put that evidence to the test of a randomized trial, it turned out that these drugs killed people.”
WellPoint’s Nussbaum said he hoped Brenner’s project would inspire others to follow his lead and insert data into the discussion.
“I believe more people should be bold in challenging the status quo of our delivery system,” Nussbaum said. “The Jeff Brenners of the world should be embraced. We should be advocating for them to take on these studies.”
So why aren’t more healthcare luminaries putting their brilliance to the test? There are a couple of reasons.
Harvard economist Kate Baicker said until now there have been few personal incentives pushing people.
“If you’re focused on branding and spreading your brand, you have no incentive to say, ‘How good is my brand after all?’” she said.
And Venrock healthcare venture capitalist Bob Kocher said no one would fault Brenner if he put his brand before science, an age-old practice in this business.
“Healthcare has benefitted from the fact that you don’t understand it. It’s a bit of an art, and it hasn’t been a science,” he said. “You made money in healthcare by putting a banner outside your building saying you are a top something without having to justify whether you really are top at whatever you do.”
Duke’s Ubel said it’s too easy – and frankly, wrong – to say the main reason doctors avoid these rigorous studies is because they’re afraid to lose money and status. He said doctors aren’t immune from the very human trap of being sure their own ideas are right.
He says psychologists call it confirmation bias.
“Everything you see is filtered through your hopes, your expectations and your pre-existing beliefs,” Ubel said. “And that’s why I might look at a grilled cheese sandwich and see a grilled cheese sandwich and you might see an image of Jesus.”
Even with all these hurdles, MIT economist Amy Finkelstein – who is running the RCT with Brenner – sees change coming.
“Providers have a lot more incentive now than they used to,” she said. “They have much more skin in the game.”
Finkelstein said hospital readmission penalties and new ways to pay doctors are bringing market incentives that have long been missing.
Brenner said he accepts that the truth of what he’s doing in Camden may be messier than the myth.

Cyberlibertarians’ Digital Deletion of the Left


in Jacobin: “The digital revolution, we are told everywhere today, produces democracy. It gives “power to the people” and dethrones authoritarians; it levels the playing field for distribution of information critical to political engagement; it destabilizes hierarchies, decentralizes what had been centralized, democratizes what was the domain of elites.
Most on the Left would endorse these ends. The widespread availability of tools whose uses are harmonious with leftist goals would, one might think, accompany broad advancement of those goals in some form. Yet the Left today is scattered, nearly toothless in most advanced democracies. If digital communication technology promotes leftist values, why has its spread coincided with such a stark decline in the Left’s political fortunes?
Part of this disconnect between advancing technology and a retreating left can be explained by the advent of cyberlibertarianism, a view that widespread computerization naturally produces democracy and freedom.
In the 1990s, UK media theorists Richard Barbrook and Andy Cameron, US journalist Paulina Borsook, and US philosopher of technology Langdon Winner introduced the term to describe a prominent worldview in Silicon Valley and digital culture generally; a related analysis can be found more recently in Stanford communication scholar Fred Turner’s work. While cyberlibertarianism can be defined as a general digital utopianism, summed up by a simple slogan like “computerization will set us free” or “computers provide the solution to any and all problems,” these writers note a specific political formation — one Winner describes as “ecstatic enthusiasm for electronically mediated forms of living with radical, right-wing libertarian ideas about the proper definition of freedom, social life, economics, and politics.”
There are overt libertarians who are also digital utopians — figures like Jimmy Wales, Eric Raymond, John Perry Barlow, Kevin Kelly, Peter Thiel, Elon Musk, Julian Assange, Dread Pirate Roberts, and Sergey Brin, and the members of the Technology Liberation Front who explicitly describe themselves as cyberlibertarians. But the term also describes a wider ideological formation in which people embrace digital utopianism as compatible or even identical with leftist politics opposed to neoliberalism.
In perhaps the most pointed form of cyberlibertarianism, computer expertise is seen as directly applicable to social questions.  In The Cultural Logic of Computation, I argue that computational practices are intrinsically hierarchical and shaped by identification with power. To the extent that algorithmic forms of reason and social organization can be said to have an inherent politics, these have long been understood as compatible with political formations on the Right rather than the Left.
Yet today, “hacktivists” and other promoters of the liberatory nature of mass computerization are prominent political voices, despite their overall political commitments remaining quite unclear. They are championed by partisans of both the Right and the Left as if they obviously serve the political ends of each. One need only reflect on the leftist support for a project like Open Source software to notice the strange and under-examined convergence of the Right and Left around specifically digital practices whose underlying motivations are often explicitly libertarian. Open Source is a deliberate commercialization of Richard Stallman’s largely noncommercial notion of Free Software (see Stallman himself on the distinction). Open Source is widely celebrated by libertarians and corporations, and was started by libertarian Eric Raymond and programmer Bruce Perens, with support from businessman and corporate sympathizer Tim O’Reilly. Today the term Open Source has wide currency as a political imperative outside the software development community, despite its place on the Right-Left spectrum being at best ambiguous, and at worst explicitly libertarian and pro-corporate.
When computers are involved, otherwise brilliant leftists who carefully examine the political commitments of most everyone they side with suddenly throw their lot in with libertarians — even when those libertarians explicitly disavow Left principles in their work…”

Twitter Can Now Predict Crime, and This Raises Serious Questions


Motherboard: “Police departments in New York City may soon be using geo-tagged tweets to predict crime. It sounds like a far-fetched sci-fi scenario a la Minority Report, but when I contacted Dr. Matthew Gerber, the University of Virginia researcher behind the technology, he explained that the system is far more mathematical than metaphysical.
The system Gerber has devised is an amalgam of both old and new techniques. Currently, many police departments target hot spots for criminal activity based on actual occurrences of crime. This approach, called kernel density estimation (KDE), involves pairing a historical crime record with a geographic location and using a probability function to calculate the possibility of future crimes occurring in that area. While KDE is a serviceable approach to anticipating crime, it pales in comparison to the dynamism of Twitter’s real-time data stream, according to Dr. Gerber’s research paper “Predicting Crime Using Twitter and Kernel Density Estimation”.
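As a rough illustration of the KDE idea (not Gerber's code or data), the sketch below fits a two-dimensional density to made-up historical crime coordinates and scores a grid of locations; the densest cells stand in for the predicted hot spots.

```python
# Illustrative sketch of kernel density estimation (KDE) for crime hot spots.
# Coordinates and counts are invented; this is not Gerber's implementation.
import numpy as np
from scipy.stats import gaussian_kde

# Historical crime locations as (longitude, latitude) pairs, clustered for demo purposes.
rng = np.random.default_rng(0)
crimes = rng.normal(loc=[[-87.63, 41.88]], scale=0.02, size=(500, 2))

kde = gaussian_kde(crimes.T)            # fit a 2-D density to past incidents

# Evaluate the density on a coarse grid covering the study area.
lons = np.linspace(-87.70, -87.56, 50)
lats = np.linspace(41.82, 41.94, 50)
grid = np.array(np.meshgrid(lons, lats)).reshape(2, -1)
density = kde(grid).reshape(50, 50)

# The cells with the highest estimated density are the predicted hot spots.
hot = np.unravel_index(np.argmax(density), density.shape)
print("hottest cell:", lons[hot[1]], lats[hot[0]])
```

The Twitter-based variant the article goes on to describe adds features derived from geotagged tweet text to this same kind of spatial model.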
Dr. Gerber’s approach is similar to KDE, but deals in the ethereal realm of data and language, not paperwork. The system involves mapping the Twitter environment, much like how police currently map the physical environment with KDE. The big difference is that Gerber is looking at what people are talking about in real time, as well as what they do after the fact, and seeing how well they match up. The algorithms look for certain language that is likely to indicate the imminent occurrence of a crime in the area, Gerber says. “We might observe people talking about going out, getting drunk, going to bars, sporting events, and so on—we know that these sort of events correlate with crime, and that’s what the models are picking up on.”
Once this data is collected, the GPS tags in tweets allow Gerber and his team to pin them to a virtual map and outline hot spots for potential crime. However, not everyone who tweets about hitting the club later is going to commit a crime. Gerber tests the accuracy of his approach by comparing Twitter-based KDE predictions with traditional KDE predictions based on police data alone. The big question is, does it work? For Gerber, the answer is a firm “sometimes.” “It helps for some, and it hurts for others,” he says.
According to the study’s results, Twitter-based KDE analysis yielded improvements in predictive accuracy over traditional KDE for stalking, criminal damage, and gambling. Arson, kidnapping, and intimidation, on the other hand, showed a decrease in accuracy from traditional KDE analysis. It’s not clear why these crimes are harder to predict using Twitter, but the study notes that the issue may lie with the kind of language used on Twitter, which is characterized by shorthand and informal language that can be difficult for algorithms to parse.
This kind of approach to high-tech crime prevention brings up the familiar debate over privacy and the use of users’ data for purposes they didn’t explicitly agree to. The case becomes especially sensitive when data will be used by police to track down criminals. On this point, though he acknowledges post-Snowden societal skepticism regarding data harvesting for state purposes, Gerber is indifferent. “People sign up to have their tweets GPS tagged. It’s an opt-in thing, and if you don’t do it, your tweets won’t be collected in this way,” he says. “Twitter is a public service, and I think people are pretty aware of that.”…

Is Participatory Budgeting Real Democracy?


Anna Clark in NextCity: “Drawing from a practice pioneered 25 years ago in Porto Alegre, Brazil and imported to North America via progressive leaders in Toronto and Quebec, participatory budgeting cracks open the closed-door process of fiscal decision-making in cities, letting citizens vote on exactly how government money is spent in their community. It’s an auspicious departure from traditional ways of allocating tax dollars, let alone in Chicago, which has long been known for deeply entrenched machine politics. As Alderman Joe Moore puts it, in Chicago, “so many decisions are made from the top down.”
Participatory budgeting works pretty simply in the 49th Ward. Instead of Moore deciding how to spend $1.3 million in “menu money” that is allotted annually to each of Chicago’s 50 council members for capital improvements, the councilman opens up a public process to determine how to spend $1 million of the allotment. The remaining $300,000 is socked away in the bank for emergencies and cost overruns.
And the unusual vote on $1 million in menu money is open to a wider swath of the community than your standard Election Day: you don’t have to be a citizen to cast a ballot, and the voting age is sixteen.
Thanks to the process, Rogers Park can now boast of a new community garden, dozens of underpass murals, heating shelters at three transit stations, hundreds of tree plantings, an outdoor shower at Loyola Park, a $110,000 dog park, and eye-catching “You Are Here” neighborhood information boards at transit station entrances.

Another prominent supporter of participatory budgeting? The White House. In December—about eight months after Joe Moore met with President Barack Obama about bringing participatory budgeting to the federal level—PB became an option for determining how to spend community development block-grant money from the Department of Housing and Urban Development. The Obama administration also declared that, in a yet-to-be-detailed partnership, it will help create tools that can be used for participatory budgeting on a local level.
All this activity has so far added up to $45 million in tax dollars allocated to 203 voter-approved projects across the country. Some 46,000 people and 500 organizations nationwide have been part of the decision-making, according to the nonprofit Participatory Budgeting Project.
….
But to fulfill this vision, the process needs resources behind it—enough funds for projects to demonstrate a visible community benefit, and ample capacity from the facilitators of the process (whether it’s district officials or city hall) to truly reach out to the community. Without intention and capacity, PB risks duplicating the process of elections for ordinary representative democracy, where white middle- and upper-class voters are far more likely to vote and therefore enjoy an outsized influence on their neighborhood.

Participatory budgeting works differently for every city. In Porto Alegre, Brazil, where the process was created a generation ago by the Workers’ Party to give disadvantaged people a stronger voice in government, as many as 50,000 people vote on how to spend public money each year. More than $700 million has been funneled through the process since its inception. Vallejo, Calif., embraced participatory budgeting in 2012 after emerging from bankruptcy as part of its citywide reinvention. In its first PB vote in May 2013, 3,917 residents voted over the course of a week at 13 polling locations. That translated into four percent of the city’s eligible voters—a tiny number, but a much higher percentage than previous PB processes in Chicago and New York.
But the 5th Ward in Hyde Park, a South Side neighborhood that’s home to the University of Chicago, dropped PB in December, citing low turnout in neighborhood assemblies and residents who felt the process was too much work to be worthwhile. “They said it was very time consuming, a lot of meetings, and that they thought the neighborhood groups that they had were active enough to do it without having all of the expenses that were associated with it,” Alderman Leslie Hairston told the Hyde Park Herald. In 2013, its first year with participatory budgeting, the 5th Ward held a PB vote that saw only 100 ballots cast.
Josh Lerner of the Participatory Budgeting Project says low turnout is a problem that can be solved through outreach and promotion. “It is challenging to do this without capacity,” he said. Internationally, according to Lerner, PB is part of a city administration, with a whole office coordinating the process. Without the backing from City Hall in Porto Alegre, participatory budgeting would have a hard time attracting the tens of thousands who now count themselves as part of the process. And even with the support from City Hall, the 50,000 participants represent less than one percent of the city’s population of 1.4 million.

So what’s next for participatory budgeting in Rogers Park and beyond?
Well, first off, Rahm Emanuel’s new Manager of Participatory Budgeting will be responsible for supporting council districts if and when they opt to go participatory. There won’t be a requirement to do so, but if a district wishes to follow the 49th, they will have high-level backup from City Hall.
But this new manager—as well as Chicago’s aldermen and engaged citizens—must understand that there is no one-size-fits-all formula for participatory budgeting. The process must be adapted to the unique needs and culture of each district if it is to resonate with locals. And timing is key for rolling out the process.
While still in the hazy early days, federal support through the new White House initiative may also prove crucial in streamlining the participatory budgeting process, easing the burden on local leaders and citizens, and ultimately generating better participation—and, therefore, better on-the-ground results in communities around the country.
One of the key lessons of participatory budgeting—as with democracy more broadly—is that efficiency is not the highest value in the public sphere. It would be much easier and more cost-effective for aldermen to return to the old days and simply check off the boxes for where they think menu money should be spent. “We could sign off on menu money in a couple hours, a couple days,” Vandercook said. By choosing the participatory path, aldermen effectively create more work for themselves. They risk low rates of participation and the possibility that winning projects may not be the most worthy. Scalability, too, is a problem — the larger the community served by the process, the more difficult it is to ensure that both the process and the resulting projects reflect the needs of the entire community.
Nonetheless, participatory budgeting serves a harder-to-measure purpose that may well be, in the final accounting, more important. It is a profound civic education for citizens, who dig into both the limits and possibilities of public money. They experience what their elected leaders must navigate every day. But it’s also a civic education for council members and city staff who may find that they are engaging with those they represent more than they ever had before, learning about what they value most. Owen Burgh, chief of staff for Alderman Joe Arena in Chicago’s 45th Ward, told the Participatory Budgeting Project, “I was really surprised by the amazing knowledge base we have among our volunteers. So many of our volunteers came to the process with a background where they understood some principles of traffic management, community development and urban planning. It was very refreshing. Usually, in an alderman’s office, people contact us to fix an isolated problem. Through this process, we discussed not just what needed to be fixed but what we wanted our community to be.”
The participatory budgeting process expands the scope and depth of civic spaces in the community, where elected leaders work with—not for—residents. Even for those who do not show up to vote, there is an empowerment that comes simply in knowing that they could; the sincere invitation to participate matters, whether or not it is accepted…”

Feds see innovation decline within government


Federal Times: “Support for innovation is declining across the government, according to a report by the Partnership for Public Service released April 23.
Federal employee answers to three innovation-related questions on the annual Federal Employee Viewpoint Survey dropped from 61.5 out of 100 in 2012 to 59.4 out of 100 in 2013, according to the report, produced in partnership with Deloitte.

This chart, extracted from the Partnership for Public Service report, shows the slow but steady decline of innovation measures. (Partnership for Public Service)

While 90 percent of employees surveyed report they are always looking for better ways to do their jobs, only 54.7 percent feel encouraged to do so and only 33.4 percent believe their agency rewards creativity and innovation.
“The bottom line is that federal workers are motivated to improve the way they do their work, but they do not feel supported by their organizations,” the report said.
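As a back-of-the-envelope check, if one assumes the innovation score is simply the average of the three question-level results quoted above (our assumption; the article does not spell out the formula), the arithmetic lands on the 2013 figure:

```python
# Assumption: the innovation score is the plain average of the three
# question-level results quoted above (the formula is not stated in the article).
question_scores = [90.0, 54.7, 33.4]   # percent positive on each of the three items
index = sum(question_scores) / len(question_scores)
print(round(index, 1))                 # prints 59.4, matching the reported 2013 score
```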
Dave Dye, a director of human capital at Deloitte, LLP, said the report is a message to agency leaders to pay attention and have discussions on innovation and make concerted efforts to enhance innovation in their areas.
“It’s not that leaders have to be innovative in their own right; it means they need to set up environments for people to feel that innovation is encouraged, rewarded and respected,” Dye said.
Most agencies saw a decline in their “innovation score” according to the report, including:
■ The Army saw one of the largest drops in its innovation score – from 64.2 out of 100 in 2012 to 60.1 out of 100 in 2013.
■ NASA – which had the highest score at 76.0 out of 100 in 2013 – also dropped from 76.5 in 2012.
■ The Financial Crimes Enforcement Network at the Treasury Department saw one of the largest drops among component agencies, from 63.8 out of 100 in 2012 to 52.0 in 2013.
Some agencies that have shown improvement are the National Science Foundation and the Peace Corps. Some NASA facilities also saw improvement, including the John C. Stennis Space Center in Mississippi and the George C. Marshall Space Flight Center in Alabama.
The ongoing effects of sequestration, budget cuts and threat of furloughs may also have had a dampening effect on federal employees, Dye said.
“When people feel safer or more sure about what’s going on, they are going to better focus on the mission,” he said.
Agency managers should also work to improve their work environments to build trust and confidence in their workforce by showing concern for people’s careers and supporting development opportunities while recognizing good work, according to Dye.
The report recommends that agencies recognize employees at team meetings or with more formal awards to highlight innovation and creativity and reward success. Managers should make sure to share specific goals, provide a forum for open discussion and work to build trust among the workforce that is needed to help spur innovation.”

Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens


Paper by Martin Gilens and Benjamin I. Page: “Who governs? Who really rules? To what extent is the broad body of U.S. citizens sovereign, semi-sovereign, or largely powerless? These questions have animated much important work in the study of American politics.
While this body of research is rich and variegated, it can loosely be divided into four families of theories: Majoritarian Electoral Democracy, Economic Elite Domination, and two types of interest group pluralism – Majoritarian Pluralism, in which the interests of all citizens are more or less equally represented, and Biased Pluralism, in which corporations, business associations, and professional groups predominate. Each of these perspectives makes different predictions about the independent influence upon U.S. policy making of four sets of actors: the Average Citizen or “median voter,” Economic Elites, and Mass-based or Business-oriented Interest Groups or industries.
Each of these theoretical traditions has given rise to a large body of literature. Each is supported by a great deal of empirical evidence – some of it quantitative, some historical, some observational – concerning the importance of various sets of actors (or, all too often, a single set of actors) in U.S. policy making. This literature has made important contributions to our understanding of how American politics works and has helped illuminate how democratic or undemocratic (in various senses) our policy making process actually is. Until very recently, however, it has been impossible to test the differing predictions of these theories against each other within a single statistical model that permits one to analyze the independent effects of each set of actors upon policy outcomes.
Here – in a tentative and preliminary way – we offer such a test, bringing a unique data set to bear on the problem. Our measures are far from perfect, but we hope that this first step will help inspire further research into what we see as some of the most fundamental questions about American politics…”

Five Reasons for Choice-Preserving Approaches


Cass Sunstein in Nudges vs. Shoves: “Psychologists and behavioral economists have identified many sources of human error, including self-control problems, “present bias,” unrealistic optimism, and limited attention. Building on these underlying findings, a great deal of work has explored the possibility of enlisting libertarian paternalism, or nudges, to make people’s lives go better. Nudges preserve freedom of choice and thus allow people to go their own way. But in light of behavioral findings, there has also been increasing interest in asking whether mandates and bans have a fresh justification. The motivation for that question is clear: If we know that people’s choices lead them in the wrong direction, why should we insist on, or adopt a precommitment to, approaches that preserve freedom of choice? Some skeptics, notably Professors Ryan Bubb and Richard Pildes, object that behavioral economists have “trimmed their sails” by adopting an unjustified presumption in favor of choice-preserving approaches.

It should be agreed that if a mandate would increase social welfare, suitably defined, there is a strong argument on its behalf.  No one believes that nudges are a sufficient approach to violent crime.  In the face of a standard market failure, coercion has a standard justification; consider the problem of air pollution.  We know that there are “behavioral market failures” as well.  If people suffer from unrealistic optimism, limited attention, or a problem of self-control, and if the result is a serious welfare loss, there is an argument for some kind of public response.  We could certainly imagine cases in which the best approach is a mandate or a ban, because that response is preferable, from the standpoint of social welfare, to any alternative, including nudges.
Nonetheless, there are many reasons to think that if improving social welfare is the goal, nudges have significant advantages and are often the best approach.  They may well have high benefits without high costs, and in any case their net benefits may be higher than those of alternative approaches.  Five points are especially important.
First, choice-preserving approaches make sense in the face of heterogeneity.  By allowing people to go their own way, they reduce the high costs potentially associated with one-size-fits-all solutions, which mandates often impose.  Second, those who favor nudges are alert to the important fact that public officials have limited information and may themselves err.  If nudges are based on mistakes, the damage is likely to be less severe than in the case of mandates, because nudges can be ignored or dismissed.  Third, nudges respond to the fact that public officials may be improperly affected by the influence of well-organized private groups (the public choice problem).  If so, the fact that people can go their own way provides an important safeguard, at least when compared with mandates.  Fourth, nudges have the advantage of avoiding the welfare loss that people experience when they are deprived of the ability to choose.  In some cases, that loss might be severe.  Fifth, nudges recognize that freedom of choice can be seen, and often is seen, as an intrinsic good, which government should respect if it is to treat people with dignity….”