How Much Transparency Do We Really Want?


William Galston in the Wall Street Journal: “Transparency is very nearly the opposite of privacy. In the current controversy, it is a demand that the government make public matters it conducts in private and wants to keep private.


The argument for disclosure goes like this: If the government is acting in the name of the people, the people need to know what their government is doing. How else can they judge these activities? Democratic government means accountability to the public, and accountability requires disclosure. History testifies to the link between secrecy and the abuse of public power. Without disclosure, the people will find it difficult to restrain government’s excesses—most importantly, secret activities that could endanger our liberties.
Government transparency has a distinguished history. In 1795, Immanuel Kant propounded what is often called the principle of publicity: Roughly, if you cannot reveal the principle that guides your policy without undermining that policy, then the policy itself is fatally flawed from a moral point of view.
Little more than a century later, in his famous “Fourteen Points” speech about U.S. war aims and the principles that would guide the peace, President Woodrow Wilson called for “Open covenants of peace, openly arrived at, after which there shall be no private international understandings of any kind but diplomacy shall proceed always frankly and in the public view.”…
Yet the relation between collective security and individual liberty is not zero-sum. Because another 9/11-scale terrorist event might well lead to even more intrusive antiterrorism measures, reducing the likelihood of such an event could end up preventing serious infringements on liberty. Up to a point, liberty and security can be mutually reinforcing. But at what point do they become opposed?
This is not a judgment that can be left to experts in the executive branch. Ultimately, the people, acting through their elected representatives, must decide—and it is hard to see how they can do so unless all representatives, not just a select few, have the information they need to participate in such a decision. As we learned in the 1970s, however, public deliberation on intelligence matters is anything but cost-free.”
 

A Modern Approach to Open Data


At the Sunlight Foundation blog: “Last year, a group of us who work daily with open government data — Josh Tauberer of GovTrack.us, Derek Willis at The New York Times, and myself — decided to stop each building the same basic tools over and over, and start building a foundation we could share.
We set up a small home at github.com/unitedstates, and kicked it off with a couple of projects to gather data on the people and work of Congress. Using a mix of automation and curation, they gather basic information from all over the government — THOMAS.gov, the House and Senate, the Congressional Bioguide, GPO’s FDSys, and others — that everyone needs to report, analyze, or build nearly anything to do with Congress.
Once we centralized this work and started maintaining it publicly, we began getting contributions nearly immediately. People educated us on identifiers, fixed typos, and gathered new data. Chris Wilson built an impressive interactive visualization of the Senate’s budget amendments by extending our collector to find and link the text of amendments.
This is an unusual, and occasionally chaotic, model for an open data project. github.com/unitedstates is a neutral space; GitHub’s permissions system allows many of us to share the keys, so no one person or institution controls it. What this means is that while we all benefit from each other’s work, no one is dependent or “downstream” from anyone else. It’s a shared commons in the public domain.
There are a few principles that have helped make the unitedstates project something that’s worth our time:…”
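Because the @unitedstates repositories publish their data as plain YAML and JSON files in the public domain, consuming them takes nothing more than a standard parser. The sketch below is illustrative only: the record layout loosely mimics the congress-legislators repository, but the sample data and exact field names are assumptions, not the project's actual schema.

```python
import json

# Hypothetical excerpt mimicking the layout of the
# unitedstates/congress-legislators data files (the real repository
# ships YAML/JSON; these records and field names are illustrative).
sample = """
[
  {"id": {"bioguide": "A000001"},
   "name": {"first": "Jane", "last": "Doe"},
   "terms": [{"type": "rep", "state": "NY", "start": "2011-01-05"}]},
  {"id": {"bioguide": "B000002"},
   "name": {"first": "John", "last": "Roe"},
   "terms": [{"type": "sen", "state": "CA", "start": "2009-01-06"}]}
]
"""

legislators = json.loads(sample)

# Index by Bioguide ID, one of the identifiers such projects
# standardize on, so records can be joined across datasets.
by_bioguide = {leg["id"]["bioguide"]: leg for leg in legislators}

# The last entry in "terms" is the most recent term served.
current_term = by_bioguide["B000002"]["terms"][-1]
print(current_term["state"])
```

Keeping the raw files this simple is part of why the commons model works: any contributor can read, diff, and fix the data without special tooling.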

Is Connectivity A Human Right?


Mark Zuckerberg (Facebook): “For almost ten years, Facebook has been on a mission to make the world more open and connected. Today we connect more than 1.15 billion people each month, but as we started thinking about connecting the next 5 billion, we realized something important: the vast majority of people in the world don’t have access to the internet.
Today, only 2.7 billion people are online — a little more than one third of the world. Internet adoption is growing by less than 9% each year, which is slow considering how early we are in the internet’s development. Even though projections show most people will get smartphones in the next decade, most still won’t have data access because data plans remain far more expensive than the smartphones themselves.
Below, I’ll share a rough proposal for how we can connect the next 5 billion people, and a rough plan to work together as an industry to get there. We’ll discuss how we can make internet access more affordable by making it more efficient to deliver data, how we can use less data by improving the efficiency of the apps we build, and how we can help businesses drive internet access by developing a new model to get people online.
I call this a “rough plan” because, like many long-term technology projects, we expect the details to evolve. It may be possible to achieve more than we lay out here, but it may also be more challenging than we predict. The specific technical work will evolve as people contribute better ideas, and we welcome all feedback on how to improve this.
Connecting the world is one of the greatest challenges of our generation. This is just one small step toward achieving that goal. I’m excited to work together to make this a reality.”

Civic Innovation Fellowships Go Global


Some thoughts from Panthea Lee of Reboot: “In recent years, civic innovation fellowships have shown great promise to improve the relationships between citizens and government. In the United States, Code for America and the Presidential Innovation Fellows have demonstrated the positive impact a small group of technologists can have working hand-in-hand with government. With the launch of Code for All, Code for Europe, Code4Kenya, and Code4Africa, among others, the model is going global.
But despite the increasing popularity of civic innovation fellowships, there are few templates for how a “Code for” program can be adapted to a different context. In the US, the success of Code for America has drawn on a wealth of tech talent eager to volunteer skills, public and private support, and the active participation of municipal governments. Elsewhere, new “Code for” programs will surely have to operate within a different set of capacities and constraints.”

White House Expands Guidance on Promoting Open Data


NextGov: “White House officials have announced expanded technical guidance to help agencies make more data accessible to the public in machine-readable formats.
Following up on President Obama’s May executive order linking the pursuit of open data to economic growth, innovation and government efficiency, two budget and science office spokesmen on Friday published a blog post highlighting new instructions and answers to frequently asked questions.
Nick Sinai, deputy chief technology officer at the Office of Science and Technology Policy, and Dominic Sale, supervisory policy analyst at the Office of Management and Budget, noted that the policy now in place means that all “newly generated government data will be required to be made available in open, machine-readable formats, greatly enhancing their accessibility and usefulness, while ensuring privacy and security.”
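“Machine-readable” here means formats a program can parse and act on without a human in the loop. As a minimal sketch of what that looks like in practice, the snippet below serializes a dataset listing to JSON; the field names loosely follow the spirit of the Project Open Data metadata schema that accompanied the policy, but the specific record and URL are illustrative assumptions, not an agency's actual catalog.

```python
import json

# Hypothetical dataset listing; field names are loosely modeled on
# open-data catalog conventions and should be treated as illustrative.
dataset = {
    "title": "Building Permits Issued",
    "description": "Monthly counts of permits issued by the agency.",
    "accessLevel": "public",
    "distribution": [
        {
            "mediaType": "text/csv",
            "downloadURL": "https://example.gov/permits.csv",
        }
    ],
}

catalog = {"dataset": [dataset]}

# Serializing to JSON is what makes the listing machine-readable:
# any consumer can discover and fetch the data programmatically,
# rather than scraping it out of a PDF or web page.
print(json.dumps(catalog, indent=2))
```

The contrast with a PDF table is the point of the policy: the same information, published as structured JSON or CSV, can feed apps and analyses directly.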

Strengthening Local Capacity for Data-Driven Decisionmaking


A report by the National Neighborhood Indicators Partnership (NNIP): “A large share of public decisions that shape the fundamental character of American life are made at the local level; for example, decisions about controlling crime, maintaining housing quality, targeting social services, revitalizing low-income neighborhoods, allocating health care, and deploying early childhood programs. Enormous benefits would be gained if a much larger share of these decisions were based on sound data and analysis.
In the mid-1990s, a movement began to address the need for data for local decisionmaking. Civic leaders in several cities funded local groups to start assembling neighborhood and address-level data from multiple local agencies. For the first time, it became possible to track changing neighborhood conditions, using a variety of indicators, year by year between censuses. These new data intermediaries pledged to use their data in practical ways to support policymaking and community building and give priority to the interests of distressed neighborhoods. Their theme was “democratizing data,” which in practice meant making the data accessible to residents and community groups (Sawicki and Craig 1996).

The initial groups that took on this work formed the National Neighborhood Indicators Partnership (NNIP) to further develop these capacities and spread them to other cities. By 2012, NNIP partners were established in 37 cities, and similar capacities were in development in a number of others. The Urban Institute (UI) serves as the secretariat for the network. This report documents a strategic planning process undertaken by NNIP in 2012 and early 2013. The network’s leadership and funders re-examined the NNIP model in the context of 15 years of local partner experiences and the dramatic changes in technology and policy approaches that have occurred over that period. The first three sections explain NNIP functions and institutional structures and examine the potential role for NNIP in advancing the community information field in today’s environment.”

Using Crowdsourcing In Government


Daren C. Brabham for IBM Center for The Business of Government: “The growing interest in “engaging the crowd” to identify or develop innovative solutions to public problems has been inspired by similar efforts in the commercial world.  There, crowdsourcing has been successfully used to design innovative consumer products or solve complex scientific problems, ranging from custom-designed T-shirts to mapping genetic DNA strands.
The Obama administration, as well as many state and local governments, has been adapting these crowdsourcing techniques with some success.  This report provides a strategic view of crowdsourcing and identifies four specific types:

  • Type 1:  Knowledge Discovery and Management. Collecting knowledge reported by an on-line community, such as the reporting of earth tremors or potholes to a central source.
  • Type 2:  Distributed Human Intelligence Tasking. Distributing “micro-tasks” that require human intelligence to solve, such as transcribing handwritten historical documents into electronic files.
  • Type 3:  Broadcast Search. Broadcasting a problem-solving challenge widely on the internet and providing an award for the solution, such as NASA’s prize for an algorithm to predict solar flares.
  • Type 4:  Peer-Vetted Creative Production. Creating peer-vetted solutions, where an on-line community both proposes possible solutions and is empowered to collectively choose among the solutions.

By understanding the different types, which require different approaches, public managers will have a better chance of success.  Dr. Brabham focuses on the strategic design process rather than on the specific technical tools that can be used for crowdsourcing.  He sets forth ten emerging best practices for implementing a crowdsourcing initiative.”

Do you want to live in a smart city?


Jane Wakefield from BBC News: “In the future everything in a city, from the electricity grid, to the sewer pipes to roads, buildings and cars will be connected to the network. Buildings will turn off the lights for you, self-driving cars will find you that sought-after parking space, even the rubbish bins will be smart. But how do we get to this smarter future? Who will be monitoring and controlling the sensors that will increasingly be on every building, lamp-post and pipe in the city?…
There is another chapter in the smart city story – and this one is being written by citizens, who are using apps, DIY sensors, smartphones and the web to solve the city problems that matter to them.
Don’t Flush Me is a neat little DIY sensor and app which is single-handedly helping to solve one of New York’s biggest water issues.
Every time there is heavy rain in the city, raw sewage is pumped into the harbour at a rate of 27 billion gallons each year.
Using an Arduino processor, a sensor that measures water levels in the sewer overflows, and a smartphone app, Don’t Flush Me lets people know when it is ‘safe to flush’.
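The decision the app makes is, at heart, simple thresholding on the sensor readings. The sketch below illustrates the idea in Python; the threshold value and the reading format are assumptions for illustration, not the project's actual calibration.

```python
# Sketch of a "safe to flush" decision: compare recent sewer water
# levels against an assumed overflow threshold. Both the threshold
# and the readings are hypothetical; the real project calibrates
# against actual combined-sewer-overflow behavior.
OVERFLOW_THRESHOLD = 0.8  # fraction of overflow capacity (assumed)

def safe_to_flush(level_readings):
    """Return True if every recent reading stays below the threshold."""
    return all(level < OVERFLOW_THRESHOLD for level in level_readings)

print(safe_to_flush([0.4, 0.5, 0.45]))  # dry weather: True
print(safe_to_flush([0.7, 0.85, 0.9]))  # heavy rain: False
```

The value of the project lies less in the logic than in the plumbing around it: a cheap sensor in the right place, pushing readings to people at the moment their behavior can make a difference.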
Meanwhile, Egg, a community-led sensor network, is alerting people to an often hidden problem in our cities.
Researchers estimate that two million people die each year as a result of air pollution, and as cities become more overcrowded, the problem is likely to get worse.
Egg is compiling data about air quality by selling cheap sensors that people put outside their homes, where they collect readings of the polluting gases nitrogen dioxide (NO2) and carbon monoxide (CO)…
The reality is that most smart city projects are currently pretty small scale – creating tech hubs or green areas of the city, experimenting with smart electricity grids or introducing electric buses or bike-sharing schemes.”

Collaboration In Biology's Century


Todd Sherer, Chief Executive Officer of The Michael J. Fox Foundation for Parkinson’s Research, in Forbes: “The problem is, we all still work in a system that feeds on secrecy and competition. It’s hard enough work just to dream up win/win collaborative structures; getting them off the ground can feel like pushing a boulder up a hill. Yet there is no doubt that the realities of today’s research environment — everything from the accumulation of big data to the ever-shrinking availability of funds — demand new models for collaboration. Call it “collaboration 2.0.”…I share a few recent examples in the hope of increasing the reach of these initiatives, inspiring others like them, and encouraging frank commentary on how they’re working.
Open-Access Data
The successes of collaborations in the traditional sense, coupled with advanced techniques such as genomic sequencing, have yielded masses of data. Consortia of clinical sites around the world are working together to collect and characterize data and biospecimens through standardized methods, leading to ever-larger pools — more like Great Lakes — of data. Study investigators draw their own conclusions, but there is so much more to discover than any individual lab has the bandwidth for….
Crowdsourcing
A great way to grow engagement with resources you’re willing to share? Ask for it. Collaboration 2.0 casts a wide net. We dipped our toe in the crowdsourcing waters earlier this year with our Parkinson’s Data Challenge, which asked anyone interested to download a set of data that had been collected from PD patients and controls using smart phones. …
Cross-Disciplinary Collaboration 2.0
The more we uncover about the interconnectedness and complexity of the human system, the more proof we are gathering that findings and treatments for one disease may provide invaluable insights for others. We’ve seen some really intriguing crosstalk between the Parkinson’s and Alzheimer’s disease research communities recently…
The results should be: More ideas. More discovery. Better health.”
 
 
 

A collaborative way to get to the heart of 3D printing problems


PSFK: “Because most of us only see the finished product of a 3D printing project, it’s easy to forget that things can, and do, go wrong with this miracle technology.
3D printing is constantly evolving, reaching exciting new heights and touching every industry you can think of – but all this progress has left a trail of mangled plastic and devastated machines in its wake.
The Art of 3D Print Failure is a Flickr group that aims to document these failures because, after all, mistakes are how we learn and how we make sure the same thing doesn’t happen the next time around. It can also help newcomers to 3D printing avoid mistakes before they even make them!”