Sammies finalists are harnessing technology to help the public


Lisa Rein in the Washington Post: “One team of federal agents led Medicare investigations that resulted in more than 600 convictions in South Florida, recovering hundreds of millions of dollars. Another official boosted access to burial sites for veterans across the country. And one guided an initiative to provide safe drinking water to 5 million people in Uganda and Kenya. These are some of the 33 individuals and teams of federal employees nominated for the 13th annual Samuel J. Heyman Service to America Medals, among the highest honors in government. The 2014 finalists reflect the achievements of public servants in fields from housing to climate change, their work conducted in Washington and locations as far-flung as Antarctica and Alabama…
Many of them have excelled in harnessing new technology in ways that are pushing the limits of what government thought was possible even a few years ago. Michael Byrne of the Federal Communications Commission, for example, put detailed data about broadband availability in the hands of citizens and policymakers using interactive online maps and other visualizations. At the Environmental Protection Agency, Douglas James Norton made water quality data that had never been public available on the Web for citizens, scientists and state agencies.”

Out in the Open: An Open Source Website That Gives Voters a Platform to Influence Politicians


Klint Finley in Wired: “This is the decade of the protest. The Arab Spring. The Occupy Movement. And now the student demonstrations in Taiwan.
Argentine political scientist Pia Mancini says we’re caught in a “crisis of representation.” Most of these protests have popped up in countries that are at least nominally democratic, but so many people are still unhappy with their elected leaders. The problem, Mancini says, is that elected officials have drifted so far from the people they represent that it’s too hard for the average person to be heard.
“If you want to participate in the political system as it is, it’s really costly,” she says. “You need to study politics in university, and become a party member and work your way up. But not every citizen can devote their lives to politics.”

That’s why Mancini started the Net Democracy foundation, a not-for-profit that explores ways of improving civic engagement through technology. The foundation’s first project is something called Democracy OS, an online platform for debating and voting on political issues, and it’s already finding a place in the world. The federal government in Mexico is using this open-source tool to gather feedback on a proposed public data policy, and in Tunisia, a non-government organization called iWatch has adopted it in an effort to give the people a stronger voice.
Mancini’s dissatisfaction with electoral politics stems from her experience working for the Argentine political party Unión Celeste y Blanco from 2010 until 2012. “I saw some practices that I thought were harmful to societies,” she says. Parties were too interested in the appearances of the candidates, and not interested enough in their ideas. Worse, citizens were only consulted for their opinions once every two to four years, meaning politicians could get away with quite a bit in the meantime.
Democracy OS is designed to address that problem by getting citizens directly involved in debating specific proposals when their representatives are actually voting on them. It operates on three levels: one for gathering information about political issues, one for public debate about those issues, and one for actually voting on specific proposals.
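To make those three levels concrete, here is a minimal sketch in Python of how a platform like this might structure a single proposal. The names and fields are hypothetical illustrations, not Democracy OS’s actual data model or API.
```python
# Hypothetical sketch of the three-level structure described above:
# information, debate, and voting. Not Democracy OS's real schema.
from dataclasses import dataclass, field

@dataclass
class Proposal:
    title: str
    background: str                                # level 1: information
    arguments: list = field(default_factory=list)  # level 2: public debate
    votes: dict = field(default_factory=dict)      # level 3: voter id -> choice

    def tally(self):
        """Count the votes cast for each option."""
        counts = {}
        for choice in self.votes.values():
            counts[choice] = counts.get(choice, 0) + 1
        return counts

bill = Proposal("Open data policy", "Should the city publish its datasets by default?")
bill.arguments.append("Pro: transparency builds trust.")
bill.votes["citizen-42"] = "yes"
print(bill.tally())  # {'yes': 1}
```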
Various communities now use a tool called Madison to discuss policy documents, and many activists and community organizations have adopted Loomio to make decisions internally. But Democracy OS aims higher: to provide a common platform for any city, state, or government to actually put proposals to a vote. “We’re able to actually overthrow governments, but we’re not using technology to decide what to do next,” Mancini says. “So the risk is that we create power vacuums that get filled with groups that are already very well organized. So now we need to take it a bit further. We need to decide what democracy for the internet era looks like.”

Software Shop as Political Party

Today Net Democracy is more than just a software development shop. It’s also a local political party based in Buenos Aires. Two years ago, the foundation started pitching the first prototype of the software to existing political parties as a way for them to gather feedback from constituents, but it didn’t go over well. “They said: ‘Thank you, this is cool, but we’re not interested,’” Mancini remembers. “So we decided to start our own political party.”
The Net Democracy Party hasn’t won any seats yet, but it promises that if it does, it will use Democracy OS to enable any local registered voter to tell party representatives how to vote. Mancini says the party representatives will always vote the way constituents tell them to vote through the software.

She also uses the term “net democracy” to refer to the type of democracy that the party advocates, a form of delegative democracy that attempts to strike a balance between representative democracy and direct democracy. “We’re not saying everyone should vote on every issue all the time,” Mancini explains. “What we’re saying is that issues should be open for everyone to participate.”
Individuals will also be able to delegate their votes to other people. “So, if you’re not comfortable voting on health issues, you can delegate to someone else to vote for you in that area,” she says. “That way people with a lot of experience in an issue, like a community leader who doesn’t have lobbyist access to the system, can build more political capital.”
She envisions a future where decisions are made on two levels. Decisions that involve specific knowledge — macroeconomics, tax reforms, judiciary regulations, penal code, etc. — or that affect human rights are delegated “upwards” to representatives. But then decisions related to local issues — transport, urban development, city codes, etc. — can be delegated “downwards” to the citizens.
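In code, that kind of per-topic delegation reduces to following a chain of trust until someone has cast a direct vote. Here is a minimal sketch, assuming a simple table of delegations; it shows the general “liquid democracy” mechanism, not the party’s actual implementation.
```python
# Illustrative per-topic vote delegation: walk a voter's delegation chain
# for a topic until someone in the chain has voted directly.
def resolve_vote(voter, topic, delegations, direct_votes, max_hops=10):
    current = voter
    for _ in range(max_hops):  # bound the walk to guard against cycles
        if (current, topic) in direct_votes:
            return direct_votes[(current, topic)]
        nxt = delegations.get((current, topic))
        if nxt is None:
            return None        # nobody along the chain voted on this topic
        current = nxt
    return None

delegations = {("alice", "health"): "dr_bob"}  # Alice delegates health votes to Bob
direct_votes = {("dr_bob", "health"): "yes"}
print(resolve_vote("alice", "health", delegations, direct_votes))  # yes
```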

The Secret Ballot Conundrum

Ensuring the integrity of the votes gathered via Democracy OS will be a real challenge. The U.S. non-profit organization Black Box Voting has long criticized electronic voting schemes as inherently flawed. “Our criticism of internet voting is that it is not transparent and cannot be made publicly transparent,” says Black Box Voting founder Bev Harris. “With transparency for election integrity defined as public ability to see and authenticate four things: who can vote, who did vote, vote count, and chain of custody.”
In short, there’s no known way to do a secret ballot online because any system for verifying that the votes were counted properly will inevitably reveal who voted for what.
Democracy OS deals with that by simply doing away with secret ballots. For now, the Net Democracy party will have people sign up for Democracy OS accounts in person with their government-issued ID cards. “There is a lot to be said about how anonymity allows you to speak more freely,” Mancini says. “But in the end, we decided to prioritize the reliability, accountability and transparency of the system. We believe that by making our arguments and decisions public we are fostering a civic culture. We will be more responsible for what we say and do if it’s public.”
But making binding decisions based on these online discussions would be problematic, since they would skew not just towards those tech savvy enough to use the software, but also towards those willing to have their names attached to their votes publicly. Fortunately, the software isn’t yet being used to gather real votes, just to gather public feedback….”

The Universe Is Programmable. We Need an API for Everything


Keith Axline in Wired: “Think about it like this: In the Book of Genesis, God is the ultimate programmer, creating all of existence in a monster six-day hackathon.
Or, if you don’t like Biblical metaphors, you can think about it in simpler terms. Robert Moses was a programmer, shaping and re-shaping the layout of New York City for more than 50 years. Drug developers are programmers, twiddling enzymes to cure what ails us. Even pickup artists and conmen are programmers, running social scripts on people to elicit certain emotional results.

The point is that, much like the computer on your desk or the iPhone in your hand, the entire Universe is programmable. Just as you can build apps for your smartphones and new services for the internet, so can you shape and re-shape almost anything in this world, from landscapes and buildings to medicines and surgeries to, well, ideas — as long as you know the code.
That may sound like little more than an exercise in semantics. But it’s actually a meaningful shift in thinking. If we look at the Universe as programmable, we can start treating it like software. In short, we can improve almost everything we do with the same simple techniques that have remade the creation of software in recent years, things like APIs, open source code, and the massively popular code-sharing service GitHub.
The great thing about the modern software world is that you don’t have to build everything from scratch. Apple provides APIs, or application programming interfaces, that can help you build apps on their devices. And though Tim Cook and company only give you part of what you need, you can find all sorts of other helpful tools elsewhere, thanks to the open source software community.
The same is true if you’re building, say, an online social network. There are countless open source software tools you can use as the basic building blocks — many of them open sourced by Facebook. If you’re creating almost any piece of software, you can find tools and documentation that will help you fashion at least a small part of it. Chances are, someone has been there before, and they’ve left some instructions for you.
Now we need to discover and document the APIs for the Universe. We need a standard way of organizing our knowledge and sharing it with the world at large, a problem for which programmers already have good solutions. We need to give everyone a way of handling tasks the way we build software. Such a system, if it can ever exist, is still years away — decades at the very least — and the average Joe is hardly ready for it. But this is changing. Nowadays, programming skills and the DIY ethos are slowly spreading throughout the population. Everyone is becoming a programmer. The next step is to realize that everything is a program.

What Is an API?

The API may sound like just another arcane computer acronym. But it’s really one of the most profound metaphors of our time, an idea hiding beneath the surface of each piece of tech we use every day, from iPhone apps to Facebook. To understand what APIs are and why they’re useful, let’s look at how programmers operate.
If I’m building a smartphone app, I’m gonna need — among so many other things — a way of validating a signup form on a webpage to make sure a user doesn’t, say, mistype their email address. That validation has nothing to do with the guts of my app, and it’s surprisingly complicated, so I don’t really want to build it from scratch. Apple doesn’t help me with that, so I start looking on the web for software frameworks, plugins, Software Development Kits (SDKs) — anything that will help me build my signup tool.
Hopefully, I’ll find one. And if I do, chances are it will include some sort of documentation or “Readme file” explaining how this piece of code is supposed to be used so that I can tailor it to my app. This Readme file should contain installation instructions as well as the API for the code. Basically, an API lays out the code’s inputs and outputs. It shows me what I have to send the code and what it will spit back out. It shows how I bolt it onto my signup form. So the name is actually quite explanatory: Application Programming Interface. An API is essentially an instruction manual for a piece of software.
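To make that concrete, here is a toy version of such a signup validator, with its “API” — the inputs it takes and the outputs it returns — spelled out in the docstring. It illustrates the idea only; it is not any particular library.
```python
# Toy signup validator. The docstring is its API: what you send it,
# and what it spits back out.
import re

def validate_email(address):
    """Input: an email address as a string.
    Output: (True, "") if it looks valid, otherwise (False, reason).
    """
    if "@" not in address:
        return False, "missing @"
    if not re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", address):
        return False, "malformed address"
    return True, ""

print(validate_email("user@example.com"))  # (True, '')
print(validate_email("user.example.com"))  # (False, 'missing @')
```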
Now, let’s combine this with the idea that everything is an application: molecules, galaxies, dogs, people, emotional states, abstract concepts like chaos. If you do something to any of these things, they’ll respond in some way. Like software, they have inputs and outputs. What we need to do is discover and document their APIs.
We aren’t dealing with software code here. Inputs and outputs can themselves be anything. But we can closely document these inputs and their outputs — take what we know about how we interface with something and record it in a standard way so that it can be used over and over again. We can create a Readme file for everything.
We can start by doing this in small, relatively easy ways. How about APIs for our cities? New Zealand just open sourced aerial images of about 95 percent of its land. We could write APIs for what we know about building in those areas, from properties of the soil to seasonal weather patterns to zoning laws. All this knowledge exists but it hasn’t been organized and packaged for use by anyone who is interested. And we could go still further — much further.
For example, between the science community, the medical industry and the billions of human experiences, we could probably have a pretty extensive API mapped out of the human stomach — one that I’d love to access when I’m up at 3am with abdominal pains. Maybe my microbiome is out of whack and there’s something I have on hand that I could ingest to make it better. Or what if we cracked the API for the signals between our eyes and our brain? We wouldn’t need to worry about looking like Glassholes to get access to always-on augmented reality. We could just get an implant. Yes, these APIs will be slightly different for everyone, but that brings me to the next thing we need.

A GitHub for Everything

We don’t just need a Readme for the Universe. We need a way of sharing this Readme and changing it as need be. In short, we need a system like GitHub, the popular online service that lets people share and collaborate on software code.
Let’s go back to the form validator I found earlier. Say I made some modifications to it that I think other programmers would find useful. If the validator is on GitHub, I can create a separate but related version — a fork — that people can find and contribute to, in the same way I first did with the original software.

GitHub not only enables this collaboration, but every change is logged into separate versions. If someone were so inclined, they could go back and replay the building of the validator, from the very first save all the way up to my changes and whoever changes it after me. This creates a tree of knowledge, with giant groups of people creating and merging branches, working on their small section and then giving it back to the whole.
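In code, that replayable history is just a chain of saves, each pointing at its parent. A bare-bones sketch follows; real Git stores content-addressed snapshots, so this shows only the concept, not the implementation.
```python
# Minimal version chain: each saved change points at its parent, so the full
# history can be replayed from the first save onward. A fork is simply a new
# chain that shares an ancestor with the original.
class Version:
    def __init__(self, content, parent=None):
        self.content = content
        self.parent = parent

    def history(self):
        """Replay the chain of changes, oldest first."""
        chain, node = [], self
        while node:
            chain.append(node.content)
            node = node.parent
        return list(reversed(chain))

original = Version("validator v1")
my_fork = Version("validator v1 + stricter checks", parent=original)
print(my_fork.history())  # ['validator v1', 'validator v1 + stricter checks']
```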
We should be able to funnel all existing knowledge of how things work — not just software code — into a similar system. That way, if my brain-eye interface needs to be different, I (or my personal eye technician) can “fork” the API. In a way, this sort of thing is already starting to happen. People are using GitHub to share government laws, policy documents, Gregorian chants, and the list goes on. The ultimate goal should be to share everything.
Yes, this idea is similar to what you see on sites like Wikipedia, but the stuff that’s shared on Wikipedia doesn’t let you build much more than another piece of text. We don’t just need to know what things are. We need to know how they work in ways that let us operate on them.

The Open Source Epiphany

If you’ve never programmed, all this can sound a bit, well, abstract. But once you enter the coding world, getting a loose grasp on the fundamentals of programming, you instantly see the utility of open source software. “Oooohhh, I don’t have to build this all myself,” you say. “Thank God for the open source community.” Because so many smart people contribute to open source, it helps get the less knowledgeable up to speed quickly. Those acolytes then pay it forward with their own contributions once they’ve learned enough.
Today, more and more people are jumping on this train. More and more people are becoming programmers of some shape or form. It wasn’t so long ago that basic knowledge of HTML was considered specialized geek speak. But now, it’s a common requirement for almost any desk job. Gone are the days when kids made fun of their parents for not being able to set the clock on the VCR. Now they get mocked for mis-cropping their Facebook profile photos.
These changes are all part of the tech takeover of our lives that is trickling down to the masses. It’s like how the widespread use of cars brought a general mechanical understanding of engines to dads everywhere. And this general increase in aptitude is accelerating along with the technology itself.
Steps are being taken to make programming a skill that most kids get early in school along with general reading, writing, and math. In the not too distant future, people will need to program in some form for their daily lives. Imagine the world before the average person knew how to write a letter, or divide two numbers, compared to now. A similar leap is around the corner…”

EU: Have your say on Future and Emerging Technologies!


European Commission: “Do you have a great idea for a new technology that is not possible yet? Do you think it can become realistic by putting Europe’s best minds on the task? Share your view and the European Commission – via the Future and Emerging Technologies (FET) programme (@fet_eu, #FET_eu) – can make it happen. The consultation is open till 15 June 2014.

The aim of the public consultation launched today is to identify promising and potentially game-changing directions for future research in any technological domain.

Vice-President of the European Commission Neelie Kroes (@NeelieKroesEU), responsible for the Digital Agenda, said: “From protecting the environment to curing disease – the choices and investments we make today will make a difference to the jobs and lives we enjoy tomorrow. Researchers and entrepreneurs, innovators, creators or interested bystanders – whoever you are, I hope you will take this opportunity to take part in determining Europe’s future.”

The consultation is organised as a series of discussions, in which contributors can suggest ideas for a new FET Proactive initiative or discuss the 9 research topics identified in the previous consultation to determine whether they are still relevant today.

The ideas collected via the public consultation will contribute to future FET work programmes, notably the next one (2016-17). This participative process has already been used to draft the current work programme (2014-15).

Background

€2.7 billion will be invested in Future and Emerging Technologies (FET) under the new research programme Horizon 2020 (#H2020, 2014-2020). This represents a nearly threefold increase in budget compared to the previous research programme, FP7. FET actions are part of the Excellent Science pillar of Horizon 2020.

The objective of FET is to foster radical new technologies by exploring novel and high-risk ideas building on scientific foundations. By providing flexible support to goal-oriented and interdisciplinary collaborative research, and by adopting innovative research practices, FET research seizes the opportunities that will deliver long-term benefit for our society and economy.

FET Proactive initiatives aim to mobilise interdisciplinary communities around promising long-term technological visions. They build up the necessary base of knowledge and know-how for kick-starting a future technology line that will benefit Europe’s future industries and citizens in the decades to come. FET Proactive initiatives complement the FET Open scheme, which funds small-scale projects on future technology, and the FET Flagships, which are large-scale initiatives to tackle ambitious interdisciplinary science and technology goals.

FET previously launched an online consultation (2012-13) to identify research topics for the current work programme. Around 160 ideas were submitted. The European Commission did an exhaustive analysis and produced an informal clustering of these ideas into broad topics. Nine topics were identified as candidates for a FET Proactive initiative. Three are included in the current programme, namely Global Systems Science; Knowing, Doing, Being; and Quantum Simulation.”

The false promise of the digital humanities


Adam Kirsch in the New Republic: “The humanities are in crisis again, or still. But there is one big exception: digital humanities, which is a growth industry. In 2009, the nascent field was the talk of the Modern Language Association (MLA) convention: “among all the contending subfields,” a reporter wrote about that year’s gathering, “the digital humanities seem like the first ‘next big thing’ in a long time.” Even earlier, the National Endowment for the Humanities created its Office of Digital Humanities to help fund projects. And digital humanities continues to go from strength to strength, thanks in part to the Mellon Foundation, which has seeded programs at a number of universities with large grants, most recently $1 million to the University of Rochester to create a graduate fellowship.

Despite all this enthusiasm, the question of what the digital humanities is has yet to be given a satisfactory answer. Indeed, no one asks it more often than the digital humanists themselves. The recent proliferation of books on the subject, from sourcebooks and anthologies to critical manifestos, is a sign of a field suffering an identity crisis, trying to determine what, if anything, unites the disparate activities carried on under its banner. “Nowadays,” writes Stephen Ramsay in Defining Digital Humanities, “the term can mean anything from media studies to electronic art, from data mining to edutech, from scholarly editing to anarchic blogging, while inviting code junkies, digital artists, standards wonks, transhumanists, game theorists, free culture advocates, archivists, librarians, and edupunks under its capacious canvas.”

Within this range of approaches, we can distinguish a minimalist and a maximalist understanding of digital humanities. On the one hand, it can be simply the application of computer technology to traditional scholarly functions, such as the editing of texts. An exemplary project of this kind is the Rossetti Archive created by Jerome McGann, an online repository of texts and images related to the career of Dante Gabriel Rossetti: this is essentially an open-ended, universally accessible scholarly edition. To others, however, digital humanities represents a paradigm shift in the way we think about culture itself, spurring a change not just in the medium of humanistic work but also in its very substance. At their most starry-eyed, some digital humanists, such as the authors of the jargon-laden manifesto and handbook Digital_Humanities, want to suggest that the addition of the high-powered adjective to the long-suffering noun signals nothing less than an epoch in human history: “We live in one of those rare moments of opportunity for the humanities, not unlike other great eras of cultural-historical transformation such as the shift from the scroll to the codex, the invention of movable type, the encounter with the New World, and the Industrial Revolution.”

The language here is the language of scholarship, but the spirit is the spirit of salesmanship: the very same kind of hyperbolic, hard-sell approach we are so accustomed to hearing about the Internet, or about Apple’s latest utterly revolutionary product. Fundamental to this kind of persuasion is the undertone of menace, the threat of historical illegitimacy and obsolescence. Here is the future, we are made to understand: we can either get on board or stand athwart it and get run over. The same kind of revolutionary rhetoric appears again and again in the new books on the digital humanities, from writers with very different degrees of scholarly commitment and intellectual sophistication.

In Uncharted, Erez Aiden and Jean-Baptiste Michel, the creators of the Google Ngram Viewer, an online tool that allows you to map the frequency of words in all the printed matter digitized by Google, talk up the “big data revolution”: “Its consequences will transform how we look at ourselves…. Big data is going to change the humanities, transform the social sciences, and renegotiate the relationship between the world of commerce and the ivory tower.” These breathless prophecies are just hype. But at the other end of the spectrum, even McGann, one of the pioneers of what used to be called “humanities computing,” uses the high language of inevitability: “Here is surely a truth now universally acknowledged: that the whole of our cultural inheritance has to be recurated and reedited in digital forms and institutional structures.”
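For readers who have never used the Ngram Viewer mentioned above, what it computes is simple: a word’s share of all the words printed in a given year. Below is a toy version over a made-up two-year corpus; the real tool runs over millions of scanned books.
```python
# Toy ngram frequency: a word's share of all words printed in each year.
from collections import Counter

corpus = {  # made-up stand-in for a year-by-year book corpus
    1900: "the records of the past are on paper",
    2000: "big data and more big data is everywhere",
}

def ngram_frequency(word):
    freq = {}
    for year, text in corpus.items():
        counts = Counter(text.split())
        freq[year] = counts[word] / sum(counts.values())
    return freq

print(ngram_frequency("data"))  # {1900: 0.0, 2000: 0.25}
```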

If ever there were a chance to see the ideological construction of reality at work, digital humanities is it. Right before our eyes, options are foreclosed and demands enforced; a future is constructed as though it were being discovered. By now we are used to this process, since over the last twenty years the proliferation of new technologies has totally discredited the idea of opting out of “the future.”…

The promise and perils of giving the public a policy ‘nudge’


Nicholas Biddle and Katherine Curchin at the Conversation: “…These behavioural insights are more than just intellectual curiosities. They are increasingly being used by policymakers inspired by Richard Thaler and Cass Sunstein’s bestselling manifesto for libertarian paternalism, Nudge.
The British and New South Wales governments have set up behavioural insights units. Many other governments around Australia are following their lead.
Most of the attention so far has been on how behavioural insights could be employed to make people slimmer, greener, more altruistic or better savers. However, it’s time we started thinking and talking about the impact these ideas could have on social policy – programs and payments that aim to reduce disadvantage and narrow divergence in opportunity.
While applying behavioural insights can potentially improve the efficiency and effectiveness of social policy, unscrupulous or poorly thought through applications could be disturbing and damaging. It would appear behavioural insights inspired the UK government’s so-called “Nudge Unit” to force job seekers to undergo bogus personality tests – on pain of losing benefits if they refused.
The idea seemed to be that because people readily believe that any vaguely worded combination of character traits applies to them – which is why people connect with their star sign – the results of a fake psychometric test can dupe them into believing they have a go-getting personality.
In our view, this is not how behavioural insights should be applied. This UK example seems to be a particularly troubling case of the use of “nudges” in conjunction with, rather than instead of, coercion. This is the worst of both worlds: not libertarian paternalism, but authoritarian paternalism.
Ironically, this instance betrays a questionable understanding of behavioural insights or at the very least a very short-term focus. Research tells us that co-operative behaviour depends on the perception of fairness and successful framing requires trust.
Dishonest interventions, which make the government seem both unfair and untrustworthy, should have the longer-term effect of undermining its ability to elicit cooperation and successfully frame information.
Some critics have assumed nudge is inherently conservative or neoliberal. Yet these insights could inform progressive reform in many ways.
For example, taking behavioural insights seriously would encourage a redesign of employment services. There is plenty of scope for thinking more rigorously about how job seekers’ interactions with employment services unintentionally inhibit their motivation to search for work.

Beware accidental nudges

More than just a nudge here or there, behavioural insights can be used to reflect on almost all government decisions. Too often governments accidentally nudge citizens in the opposite direction to where they want them to go.
Take the disappointing take-up of the Matched Savings Scheme, which is part of New Income Management in the Northern Territory. It matches welfare recipients’ savings dollar-for-dollar up to a maximum of A$500 and is meant to get people into the habit of saving regularly.
No doubt saving is extremely hard for people on very low incomes. But another reason so few people embraced the savings program may be a quirk in its design: people had to save money out of their non-income-managed funds, but the $500 reward they received from the government went into their income-managed account.
To some people this appears to have signalled the government’s bad faith. It said to them: even if you demonstrate your responsibility with money, we still won’t trust you.
The Matched Savings Scheme was intended to be a carrot, not a stick. It was supposed to complement the coercive element of income management by giving welfare recipients an incentive to improve their budgeting. Instead it was perceived as an invitation to welfare recipients to be complicit in their own humiliation.
The promise of an extra $500 would have been a strong lure for Homo economicus, but it wasn’t for Homo sapiens. People out of work or on income support are no more or less rational than merchant bankers or economics professors. Their circumstances and choices are different though.
The idiosyncrasies of human decision-making don’t mean that the human brain is fundamentally flawed. Most of the biases that we mentioned earlier are adaptive. But they do mean that policy makers need to appreciate how we differ from rational utility maximisers.
Real humans are not worse than economic man. We’re just different and we deserve policies made for Homo sapiens, not Homo economicus.”

Findings of the Big Data and Privacy Working Group Review


John Podesta at the White House Blog: “Over the past several days, severe storms have battered Arkansas, Oklahoma, Mississippi and other states. Dozens of people have been killed and entire neighborhoods turned to rubble and debris as tornadoes have touched down across the region. Natural disasters like these present a host of challenges for first responders. How many people are affected, injured, or dead? Where can they find food, shelter, and medical attention? What critical infrastructure might have been damaged?
Drawing on open government data sources, including Census demographics and NOAA weather data, along with their own demographic databases, Esri, a geospatial technology company, has created a real-time map showing where the twisters have been spotted and how the storm systems are moving. They have also used these data to show how many people live in the affected area, and summarize potential impacts from the storms. It’s a powerful tool for emergency services and communities. And it’s driven by big data technology.
In January, President Obama asked me to lead a wide-ranging review of “big data” and privacy—to explore how these technologies are changing our economy, our government, and our society, and to consider their implications for our personal privacy. Together with Secretary of Commerce Penny Pritzker, Secretary of Energy Ernest Moniz, the President’s Science Advisor John Holdren, the President’s Economic Advisor Jeff Zients, and other senior officials, our review sought to understand what is genuinely new and different about big data and to consider how best to encourage the potential of these technologies while minimizing risks to privacy and core American values.
Over the course of 90 days, we met with academic researchers and privacy advocates, with regulators and the technology industry, with advertisers and civil rights groups. The President’s Council of Advisors on Science and Technology conducted a parallel study of the technological trends underpinning big data. The White House Office of Science and Technology Policy jointly organized three university conferences at MIT, NYU, and U.C. Berkeley. We issued a formal Request for Information seeking public comment, and hosted a survey to generate even more public input.
Today, we presented our findings to the President. We knew better than to try to answer every question about big data in three months. But we are able to draw important conclusions and make concrete recommendations for Administration attention and policy development in a few key areas.
There are a few technological trends that bear drawing out. The declining cost of collection, storage, and processing of data, combined with new sources of data like sensors, cameras, and geospatial technologies, mean that we live in a world of near-ubiquitous data collection. All this data is being crunched at a speed that is increasingly approaching real-time, meaning that big data algorithms could soon have immediate effects on decisions being made about our lives.
The big data revolution presents incredible opportunities in virtually every sector of the economy and every corner of society.
Big data is saving lives. Infections are dangerous—even deadly—for many babies born prematurely. By collecting and analyzing millions of data points from a NICU, one study was able to identify factors, like slight increases in body temperature and heart rate, that serve as early warning signs an infection may be taking root—subtle changes that even the most experienced doctors wouldn’t have noticed on their own.
Big data is making the economy work better. Jet engines and delivery trucks now come outfitted with sensors that continuously monitor hundreds of data points and send automatic alerts when maintenance is needed. Utility companies are starting to use big data to predict periods of peak electric demand, adjusting the grid to be more efficient and potentially averting brown-outs.
Big data is making government work better and saving taxpayer dollars. The Centers for Medicare and Medicaid Services have begun using predictive analytics—a big data technique—to flag likely instances of reimbursement fraud before claims are paid. The Fraud Prevention System helps identify the highest-risk health care providers for waste, fraud, and abuse in real time and has already stopped, prevented, or identified $115 million in fraudulent payments.
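The post doesn’t describe how the Fraud Prevention System’s models actually work, but the general shape of this kind of flagging is easy to sketch: score each provider against their peers and surface the outliers for human review. The rule below, flagging anyone billing more than three times the median, is a deliberately crude illustration, not CMS’s methodology.
```python
# Crude illustration of predictive flagging: surface providers whose billing
# far exceeds the median of their peers. Real systems use much richer models.
from statistics import median

claims_per_provider = {"A": 120, "B": 135, "C": 980, "D": 110, "E": 125}

def flag_for_review(amounts, multiple=3.0):
    """Return providers billing more than `multiple` times the median."""
    m = median(amounts.values())
    return [provider for provider, total in amounts.items() if total > multiple * m]

print(flag_for_review(claims_per_provider))  # ['C']
```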
But big data raises serious questions, too, about how we protect our privacy and other values in a world where data collection is increasingly ubiquitous and where analysis is conducted at speeds approaching real time. In particular, our review raised the question of whether the “notice and consent” framework, in which a user grants permission for a service to collect and use information about them, still allows us to meaningfully control our privacy as data about us is increasingly used and reused in ways that could not have been anticipated when it was collected.
Big data raises other concerns, as well. One significant finding of our review was the potential for big data analytics to lead to discriminatory outcomes and to circumvent longstanding civil rights protections in housing, employment, credit, and the consumer marketplace.
No matter how quickly technology advances, it remains within our power to ensure that we both encourage innovation and protect our values through law, policy, and the practices we encourage in the public and private sector. To that end, we make six actionable policy recommendations in our report to the President:
Advance the Consumer Privacy Bill of Rights. Consumers deserve clear, understandable, reasonable standards for how their personal information is used in the big data era. We recommend the Department of Commerce take appropriate consultative steps to seek stakeholder and public comment on what changes, if any, are needed to the Consumer Privacy Bill of Rights, first proposed by the President in 2012, and to prepare draft legislative text for consideration by stakeholders and submission by the President to Congress.
Pass National Data Breach Legislation. Big data technologies make it possible to store significantly more data, and further derive intimate insights into a person’s character, habits, preferences, and activities. That makes the potential impacts of data breaches at businesses or other organizations even more serious. A patchwork of state laws currently governs requirements for reporting data breaches. Congress should pass legislation that provides for a single national data breach standard, along the lines of the Administration’s 2011 Cybersecurity legislative proposal.
Extend Privacy Protections to non-U.S. Persons. Privacy is a worldwide value that should be reflected in how the federal government handles personally identifiable information about non-U.S. citizens. The Office of Management and Budget should work with departments and agencies to apply the Privacy Act of 1974 to non-U.S. persons where practicable, or to establish alternative privacy policies that apply appropriate and meaningful protections to personal information regardless of a person’s nationality.
Ensure Data Collected on Students in School is used for Educational Purposes. Big data and other technological innovations, including new online course platforms that provide students real time feedback, promise to transform education by personalizing learning. At the same time, the federal government must ensure educational data linked to individual students gathered in school is used for educational purposes, and protect students against their data being shared or used inappropriately.
Expand Technical Expertise to Stop Discrimination. The detailed personal profiles held about many consumers, combined with automated, algorithm-driven decision-making, could lead—intentionally or inadvertently—to discriminatory outcomes, or what some are already calling “digital redlining.” The federal government’s lead civil rights and consumer protection agencies should expand their technical expertise to be able to identify practices and outcomes facilitated by big data analytics that have a discriminatory impact on protected classes, and develop a plan for investigating and resolving violations of law.
Amend the Electronic Communications Privacy Act. The laws that govern protections afforded to our communications were written before email, the internet, and cloud computing came into wide use. Congress should amend ECPA to ensure the standard of protection for online, digital content is consistent with that afforded in the physical world—including by removing archaic distinctions between email left unread or over a certain age.
We also identify several broader areas ripe for further study, debate, and public engagement that, collectively, we hope will spark a national conversation about how to harness big data for the public good. We conclude that we must find a way to preserve our privacy values in both the domestic and international marketplace. We urgently need to build capacity in the federal government to identify and prevent new modes of discrimination that could be enabled by big data. We must ensure that law enforcement agencies using big data technologies do so responsibly, and that our fundamental privacy rights remain protected. Finally, we recognize that data is a valuable public resource, and call for continuing the Administration’s efforts to open more government data sources and make investments in research and technology.
While big data presents new challenges, it also presents immense opportunities to improve lives, and the United States is perhaps better suited to lead this conversation than any other nation on earth. Our innovative spirit, technological know-how, and deep commitment to values of privacy, fairness, non-discrimination, and self-determination will help us harness the benefits of the big data revolution and encourage the free flow of information while working with our international partners to protect personal privacy. This review is but one piece of that effort, and we hope it spurs a conversation about big data across the country and around the world.
Read the Big Data Report.
See the fact sheet from today’s announcement.

Can technology end homelessness?


Geekwire: “At the heart of Seattle’s Pioneer Square neighborhood exists a unique juxtaposition.
Inside a two-story brick building is the Impact Hub co-working space and business incubator, a place where entrepreneurs are busily working on ideas to improve the world we live in.
But walk outside the Impact Hub’s doors, and you’ll enter an entirely different world.
Homelessness. Drugs. Violence.
Now, those two contrasting scenes are coming together.
This weekend, more than 100 developers, designers, entrepreneurs and do-gooders will team up at the Impact Hub for the first-ever Hack to End Homelessness, a four-day event that encourages participants to envision and create ideas to alleviate the homelessness problem in Seattle.
The Washington Low Income Housing Alliance, Real Change and several other local homeless services and advocacy groups have already submitted project proposals, which range from an e-commerce site showcasing artwork of homeless youth to a social network focusing on low-end mobile phones for people who are homeless.
Seattle has certainly made an effort to fix its homelessness problem. Back in 2005, the Committee to End Homelessness established a 10-year plan to dramatically reduce the number of people without homes in the region. By the end of 2014, the goal was to “virtually end,” homelessness in King County.
But fast-forward to today and that hasn’t exactly come to fruition. There are more than 2,300 people in Seattle sleeping in the streets — up 16 percent from 2013 — and city data shows nearly 10,000 households checking into shelters or transitional housing last year. Thousands of others may not be on the streets or in shelters, yet still live without a permanent place to sleep at night.
While some efforts of the committee have helped curb homelessness, it’s clear that there is still a problem — one that has likely been affected by rising rent prices in the area.
Candace Faber, one of the event organizers, said that her team has been shocked by the growth of homelessness in the Seattle metropolitan area. They’re worried not only about how many people do not have a permanent home, but what kind of impact the problem is having on the city as a whole.
“With Seattle experiencing the highest rent hikes in the nation, we’re concerned that, without action, our city will not be able to remain the dynamic, affordable place it is now,” Faber said. “We don’t want to lose our entrepreneurial spirit or wind up with a situation like San Francisco, where you can’t afford to innovate without serious VC backing and there’s serious tension between the housing community and tech workers.”
That raises the question: How, exactly, can technology fix the homelessness problem? The stories of these Seattle entrepreneurs help provide the answer.

FROM SHELTERS TO STARTUPS

Kyle Kesterson knows a thing or two about being homeless.
That’s because the Freak’n Genius co-founder and CEO spent his childhood living in 14 different homes and apartments, in addition to a bevy of shelters and transitional houses. The moving around and lack of permanent housing made going to school difficult, and finding acceptance anywhere was nearly impossible.
“I was always the new kid, the poor kid, and the smallest kid,” Kesterson says now. “You just become the target of getting picked on.”
By the time he was 15, Kesterson realized that school wasn’t a place that fit his learning style. So, he dropped out to help run his parents’ house-cleaning business in Seattle.
That’s when Kesterson, now a talented artist and designer, further developed his creative skills. The Yuba City, Calif. native would spend hours locked in a room perusing deviantART.com, a new Internet community where other artists from around the world were sharing their own work and receiving feedback.

So now Kesterson, who plans on attending the final presentations at the Hack to End Homelessness event on Sunday, is using his own experiences to teach youth about finding solutions to problems with an entrepreneurial lens. When it comes to helping at-risk youth, or those that are homeless, Kesterson says it’s about finding a thriving and supportive environment — the same one he surrounded himself with while surfing through deviantART 14 years ago.
“Externally, our environment plays a significant role in either setting people up for growth and success, or weighting people down, sucking the life out of them, and eventually leaving them at or near rock bottom,” he said.
For Kesterson, it’s entrepreneurs who can help create these environments for people, and show them that they have the ability and power to solve problems and truly make a difference.
“Entrepreneurs need to first focus on the external and internal environments of those that are homeless,” he said. “Support, help, and inspire. Become a part of their network to mentor and make connections with the challenges they are faced with the way we lean on our own mentor networks.”

FIXING THE ROOT

Lindsay Caron Epstein has always, in some shape or form, been an entrepreneur at heart.
She figured out a way to survive after moving to Arizona from New Jersey with only $100. She figured out how to land a few minimum wage jobs and eventually start a non-profit community center for at-risk youth at just 22 years old.
And now, Caron is using her entrepreneurial spirit to help figure out ways to fix social challenges like homelessness.
The 36-year-old is CEO and founder of ActivateHub, a startup working alongside other socially-conscious companies in Seattle’s Fledge Accelerator. ActivateHub is a “community building social action network,” or a place where people can find local events put on by NGOs and other organizations working on a wide variety of issues.
Caron found the inspiration to start the company after organizing programs for troubled youth in Arizona and studying the homelessness problem while in school. She became fascinated with how communities were built in a way that could help people and pull them out of tough situations, but there didn’t appear to be an easy way for people to get involved.
“If you do a Google search for poverty, homelessness, climate change — any issue you care about — you’ll just find news articles and blogs,” Caron explained. “You don’t find who in your community is working on those problems and you don’t find out how you can get involved.”
Caron says her company can help those who may not have a home or anything to do. ActivateHub, she said, might give them a reason to become engaged in something and create a sense of value in the community.
“It gives people a reason to clean up and enables them to make connections,” said Caron, who will also be attending this weekend’s event. “Some people need that inspiration and purpose to change their situation, and a lot of times that motivation isn’t there.”
Of course, ActivateHub alone isn’t going to solve the homelessness problem by itself. Caron knows this and thinks that entrepreneurs can help by focusing on more preventative measures. Sure, technology can be used to help connect homeless people to certain resources, but there’s a deeper issue at hand for Caron…”

Mapping the Intersection Between Social Media and Open Spaces in California


Stamen Design: “Last month, Stamen launched parks.stamen.com, a project we created in partnership with the Electric Roadrunner Lab, with the goal of revealing the diversity of social media activity that happens inside parks and other open spaces in California. If you haven’t already looked at the site, please go visit it now! Find your favorite park, or the parks that are nearest to you, or just stroll between random parks using the wander button. For more background about the goals of the project, read Eric’s blog post: A Conversation About California Parks.
In this post I’d like to describe some of the algorithms we use to collect the social media data that feeds the park pages. Currently we collect data from four social media platforms: Twitter, Foursquare, Flickr, and Instagram. We chose these because they all have public APIs (Application Programming Interfaces) that are easy to work with, and we expect they will provide a view into the different facets of each park, and the diverse communities who enjoy these parks. Each social media service creates its own unique geographies, and its own way of representing these parks. For example, the kinds of photos you upload to Instagram might be different from the photos you upload to Flickr. The way you describe experiences using Twitter might be different from the moments you document by checking into Foursquare. In the future we may add more feeds, but for now there’s a lot we can learn from these four.
Through the course of collecting data from these social network services, I also found that each service’s public API imposes certain constraints on our queries, producing their own intricate patterns. Thus, the quirks of how each API was written result in distinct and fascinating geometries. Also, since we are only interested in parks for this project, the process of culling non-park-related content further produces unusual and interesting patterns. Rural areas have large parks that cover huge areas, while cities have lots of (relatively) tiny parks, which creates its own challenges for how we query the APIs.
Broadly, we followed a similar approach for all the social media services. First, we grab the geocoded data from the APIs. This ignores any media that don’t have a latitude and longitude associated with them. In Foursquare, almost all checkins have a latitude and longitude, and for Flickr and Instagram most photos have a location associated with them. However, for Twitter, only around 1% of all tweets have geographic coordinates. But as we will see, even 1% still results in a whole lot of tweets!
After grabbing the social media data, we intersect it with the outlines of parks and open spaces in California, using polygons from the California Protected Areas Database maintained by GreenInfo Network. Everything that doesn’t intersect one of these parks, we throw away. The following maps represent the data as it looks before the filtering process.
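Before the maps, a condensed sketch of that filtering step, which is a standard point-in-polygon test. This uses the Shapely library, with a made-up rectangle standing in for a real park outline from the California Protected Areas Database.
```python
# Keep only geocoded posts that fall inside a park polygon.
# The "park" here is a made-up bounding box, not real CPAD data.
from shapely.geometry import Point, Polygon

park = Polygon([(-122.51, 37.76), (-122.45, 37.76),
                (-122.45, 37.78), (-122.51, 37.78)])

posts = [
    {"text": "picnic in the park", "lon": -122.48, "lat": 37.77},   # inside
    {"text": "stuck at the office", "lon": -122.40, "lat": 37.79},  # outside
]

in_park = [p for p in posts if park.contains(Point(p["lon"], p["lat"]))]
print([p["text"] for p in in_park])  # ['picnic in the park']
```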
But enough talking, let’s look at some maps!”

Using data to treat the sickest and most expensive patients


Dan Gorenstein for Marketplace (radio):  “Driving to a big data conference a few weeks back, Dr. Jeffrey Brenner brought his compact SUV to a full stop – in the middle of a short highway entrance ramp in downtown Philadelphia…

Here’s what you need to know about Dr. Jeffrey Brenner: He really likes to figure out how things work. And he’s willing to go to extremes to do it – so far that he’s risking his health policy celebrity status.
Perhaps it’s not the smartest move from a guy who just last fall was named a MacArthur Genius, but this month, Brenner began to test his theory for treating some of the sickest and most expensive patients.
“We can actually take the sickest and most complicated patients, go to their bedside, go to their home, go with them to their appointments and help them for about 90 days and dramatically improve outcomes and reduce cost,” he says.
That’s the theory anyway. Like many ideas when it comes to treating the sickest patients, there’s little data to back up that it works.
Brenner’s willing to risk his reputation precisely because he’s not positive his approach for treating folks who cycle in and out of the healthcare system — “super-utilizers” — actually works.
“It’s really easy for me at this point having gotten a MacArthur award to simply declare what we do works and to drive this work forward without rigorously testing it,” Brenner said. “We are not going to do that,” he said. “We don’t think that’s the right thing to do. So we are going to do a randomized controlled trial on our work and prove whether it works and how well it works.”
Helping lower costs and improve care for the super-utilizers is one of the most pressing policy questions in healthcare today. And given its importance, there is a striking lack of data in the field.
People like to call randomized controlled trials (RCTs) the gold standard of scientific testing because two groups are randomly assigned – one gets the treatment, while the other doesn’t – and researchers closely monitor differences.
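The mechanism itself is disarmingly simple, which is part of its power. Here is a toy illustration of the random assignment step — the generic mechanism, not the design of Brenner’s Camden trial.
```python
# Toy random assignment for an RCT: shuffle the enrollees, then split them
# into a treatment group and a control group.
import random

random.seed(0)  # fixed seed so the example is reproducible
patients = [f"patient_{i}" for i in range(10)]
random.shuffle(patients)

treatment, control = patients[:5], patients[5:]
print("treatment:", treatment)
print("control:  ", control)
```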
But a 2012 British Medical Journal article found that over the last 25 years, a total of six RCTs have focused on care delivery for super-utilizers.

Randomized Controlled Trials (RCTs)

…Every major health insurance company – Medicare and Medicaid, too – has spent billions on programs for super-utilizers. The absence of rigorous evidence raises the question: Is all this effort built on health policy quicksand?
Not being 100 percent sure can be dangerous, says Duke behavioral scientist Peter Ubel, particularly in healthcare.
Ubel said back in the 1980s and 90s doctors prescribed certain drugs for irregular heartbeats. The medication, he said, made those weird rhythms go away, leaving beautiful-looking EKGs.
“But no one had tested whether people receiving these drugs actually lived longer, and many people thought, ‘Why would you do that? We can look at their cardiogram and see that they’re getting better,’” Ubel said. “Finally when somebody put that evidence to the test of a randomized trial, it turned out that these drugs killed people.”
WellPoint’s Nussbaum said he hoped Brenner’s project would inspire others to follow his lead and insert data into the discussion.
“I believe more people should be bold in challenging the status quo of our delivery system,” Nussbaum said. “The Jeff Brenners of the world should be embraced. We should be advocating for them to take on these studies.”
So why aren’t more healthcare luminaries putting their brilliance to the test? There are a couple of reasons.
Harvard economist Kate Baicker said until now there have been few personal incentives pushing people.
“If you’re focused on branding and spreading your brand, you have no incentive to say, ‘How good is my brand after all?’” she said.
And Venrock healthcare venture capitalist Bob Kocher said no one would fault Brenner if he put his brand before science, an age-old practice in this business.
“Healthcare has benefitted from the fact that you don’t understand it. It’s a bit of an art, and it hasn’t been a science,” he said. “You made money in healthcare by putting a banner outside your building saying you are a top something without having to justify whether you really are top at whatever you do.”
Duke’s Ubel said it’s too easy – and frankly, wrong – to say the main reason doctors avoid these rigorous studies is because they’re afraid to lose money and status. He said doctors aren’t immune from the very human trap of being sure their own ideas are right.
He says psychologists call it confirmation bias.
“Everything you see is filtered through your hopes, your expectations and your pre-existing beliefs,” Ubel said. “And that’s why I might look at a grilled cheese sandwich and see a grilled cheese sandwich and you might see an image of Jesus,” he says.
Even with all these hurdles, MIT economist Amy Finkelstein – who is running the RCT with Brenner – sees change coming.
“Providers have a lot more incentive now than they used to,” she said. “They have much more skin in the game.”
Finkelstein said hospital readmission penalties and new ways to pay doctors are bringing market incentives that have long been missing.
Brenner said he accepts that the truth of what he’s doing in Camden may be messier than the myth.