Amy Dockser Marcus in the Wall Street Journal: “Hackathons, the high-octane, all-night problem-solving sessions popularized by the software-coding community, are making their way into the more traditional world of health care. At the Massachusetts Institute of Technology, a recent event called Hacking Medicine’s Grand Hackfest attracted more than 450 people to work for one weekend on possible solutions to problems involving diabetes, rare diseases, global health and information technology used at hospitals.
Health institutions such as New York-Presbyterian Hospital and Brigham and Women’s Hospital in Boston have held hackathons. MIT, meantime, has co-sponsored health hackathons in India, Spain and Uganda.
Hackathons of all kinds are increasingly popular. Intel Corp. recently bought a group that organizes them. Companies hoping to spark creative thinking sponsor them. And student-run hackathons have turned into intercollegiate competitions.
But in health care, where change typically comes much more slowly than in Silicon Valley, they represent a cultural shift. To solve a problem, scientists and doctors can spend years painstakingly running experiments, gathering data, applying for grants and publishing results. So the idea of an event where people give two-minute pitches describing a problem, then join a team of strangers to come up with a solution in the course of one weekend is radical.
“We are not trying to replace the medical culture with Facebook culture,” said Elliot Cohen, who wore a hoodie over a button-down dress shirt at the MIT event in March and helped start MIT Hacking Medicine while at business school. “But we want to try to blend them more.”
Mr. Cohen is co-founder and chief technology officer of PillPack, a pharmacy that sends customers personalized packages of their medications; the company got its start at a hackathon.
At MIT’s health-hack, physicians, researchers, students and a smattering of people wearing Google Glass sprawled on the floor of MIT’s Media Lab and at tables with a view of the Boston skyline. At one table, a group of college students, laptops plastered with stickers, pulled juice boxes and snacks out of backpacks, trash piling up next to them as they feverishly wrote code.
Nupur Garg, an emergency-room physician and one of the eventual winners, finished her hospital shift at 2 a.m. Saturday in New York, drove to Boston and arrived at MIT in time to pitch the need for a way to capture images of patients’ ears and throats that can be shared with specialists to help make diagnoses. She and her team immediately started working on a prototype for the device, testing early versions on anyone who stopped by their table.
Dr. Garg and teammate Nancy Liang, who runs a company that makes Web apps for 3-D printers, caught a few hours of sleep in a dorm room Saturday night. They came up with the idea for their product’s name—MedSnap—later that night while watching students use cellphone cameras to send SnapChats to one another. “There was no time to conduct surveys on what was the best name,” said Ms. Liang. “Many ideas happen after midnight.”
Winning teams in each category received $1,000, as well as access to the hackathon’s sponsors for advice and pilot projects.
Yet even supporters say hackathons can’t solve medicine’s challenges overnight. Harlan Krumholz, a professor at Yale School of Medicine who ran a months-long trial that found telemonitoring didn’t reduce hospitalizations or deaths among cardiology patients, said he supports the problem-solving ethos of hackathons. But he added that “improvements require a long-term commitment, not just a weekend.”
Ned McCague, a data scientist at Blue Cross Blue Shield of Massachusetts, served as a mentor at the hackathon. He said he wasn’t representing his employer, but he used his professional experiences to push groups to think about the potential customer. “They have a good idea and are excited about it, but they haven’t thought about who is paying for it,” he said.
Zen Chu, a senior lecturer in health-care innovation and entrepreneur-in-residence at MIT, and one of the founders of Hacking Medicine, said more than a dozen startups conceived since the first hackathon, in 2011, are still in operation. Some received venture-capital funding.
The upsides of hackathons were made clear to Sharon Moalem, a physician who studies rare diseases. He had spent years developing a mobile app that can take pictures of faces to help diagnose rare genetic conditions, but was stumped on how to give the images a standard size scale to make comparisons. At the hackathon, Dr. Moalem said he was approached by an MIT student who suggested sticking a coin on the subject’s forehead. Since quarters have a standard measurement, it “creates a scale,” said Dr. Moalem.
Dr. Moalem said he had never considered such a simple, elegant solution. The team went on to write code to help standardize facial measurements based on the dimensions of a coin and a credit card.
“Sometimes when you are too close to something, you stop seeing solutions, you only see problems,” Dr. Moalem said. “I needed to step outside my own silo.”
Book Review: 'The Rule of Nobody' by Philip K. Howard
Stuart Taylor Jr in the Wall Street Journal: “Amid the liberal-conservative ideological clash that paralyzes our government, it’s always refreshing to encounter the views of Philip K. Howard, whose ideology is common sense spiked with a sense of urgency. In “The Rule of Nobody,” Mr. Howard shows how federal, state and local laws and regulations have programmed officials of both parties to follow rules so detailed, rigid and, often, obsolete as to leave little room for human judgment. He argues passionately that we will never solve our social problems until we abandon what he calls a misguided legal philosophy of seeking to put government on regulatory autopilot. He also predicts that our legal-governmental structure is “headed toward a stall and then a frightening plummet toward insolvency and political chaos.”
Mr. Howard, a big-firm lawyer who heads the nonpartisan government-reform coalition Common Good, is no conventional deregulator. But he warns that the “cumulative complexity” of the dense rulebooks that prescribe “every nuance of how law is implemented” leaves good officials without the freedom to do what makes sense on the ground. Stripped of the authority that they should have, he adds, officials have little accountability for bad results. More broadly, he argues that the very structure of our democracy is so clogged by deep thickets of dysfunctional law that it will only get worse unless conservatives and liberals alike cast off their distrust of human discretion.
The rulebooks should be “radically simplified,” Mr. Howard says, on matters ranging from enforcing school discipline to protecting nursing-home residents, from operating safe soup kitchens to building the nation’s infrastructure: Projects now often require multi-year, 5,000-page environmental impact statements before anything can begin to be constructed. Unduly detailed rules should be replaced by general principles, he says, that take their meaning from society’s norms and values and embrace the need for official discretion and responsibility.
Mr. Howard serves up a rich menu of anecdotes, including both the small-scale activities of a neighborhood and the vast administrative structures that govern national life. After a tree fell into a stream and caused flooding during a winter storm, Franklin Township, N.J., was barred from pulling the tree out until it had spent 12 days and $12,000 for the permits and engineering work that a state environmental rule required for altering any natural condition in a “C-1 stream.” The “Volcker Rule,” designed to prevent banks from using federally insured deposits to speculate in securities, was shaped by five federal agencies and countless banking lobbyists into 963 “almost unintelligible” pages. In New York City, “disciplining a student potentially requires 66 separate steps, including several levels of potential appeals”; meanwhile, civil-service rules make it virtually impossible to terminate thousands of incompetent employees. Children’s lemonade stands in several states have been closed down for lack of a vendor’s license.

Conservatives as well as liberals like detailed rules—complete with tedious forms, endless studies and wasteful legal hearings—because they don’t trust each other with discretion. Corporations like them because they provide not only certainty but also “a barrier to entry for potential competitors,” by raising the cost of doing business to prohibitive levels for small businesses with fresh ideas and other new entrants to markets. Public employees like them because detailed rules “absolve them of responsibility.” And, adds Mr. Howard, “lawsuits [have] exploded in this rules-based regime,” shifting legal power to “self-interested plaintiffs’ lawyers,” who have learned that they “could sue for the moon and extract settlements even in cases (as with some asbestos claims) that were fraudulent.”
So habituated have we become to such stuff, Mr. Howard says, that government’s “self-inflicted ineptitude is accepted as a state of nature, as if spending an average of eight years on environmental reviews—which should be a national scandal—were an unavoidable mountain range.” Common-sensical laws would place outer boundaries on acceptable conduct based on reasonable norms that are “far better at preventing abuse of power than today’s regulatory minefield.”
As Mr. Howard notes, his book is part of a centuries-old rules-versus-principles debate. The philosophers and writers whom he quotes approvingly include Aristotle, James Madison, Isaiah Berlin and Roscoe Pound, a prominent Harvard law professor and dean who condemned “mechanical jurisprudence” and championed broad official discretion. Berlin, for his part, warned against “monstrous bureaucratic machines, built in accordance with the rules that ignore the teeming variety of the living world, the untidy and asymmetrical inner lives of men, and crush them into conformity.” Mr. Howard juxtaposes today’s roughly 100 million words of federal law and regulations with Madison’s warning that laws should not be “so voluminous that they cannot be read, or so incoherent that they cannot be understood.”…
Let’s get geeks into government
Gillian Tett in the Financial Times: “Fifteen years ago, Brett Goldstein seemed to be just another tech entrepreneur. He was working as IT director of OpenTable, then a start-up website for restaurant bookings. The company was thriving – and subsequently did a very successful initial public offering. Life looked very sweet for Goldstein. But when the World Trade Center was attacked in 2001, Goldstein had a moment of epiphany. “I spent seven years working in a startup but, directly after 9/11, I knew I didn’t want my whole story to be about how I helped people make restaurant reservations. I wanted to work in public service, to give something back,” he recalls – not just by throwing cash into a charity tin, but by doing public service. So he swerved: in 2006, he attended the Chicago police academy and then worked for a year as a cop in one of the city’s toughest neighbourhoods. Later he pulled the disparate parts of his life together and used his number-crunching skills to build the first predictive data system for the Chicago police (and one of the first in any western police force), to indicate where crime was likely to break out.
This was such a success that Goldstein was asked by Rahm Emanuel, the city’s mayor, to create predictive data systems for the wider Chicago government. The fruits of this effort – which include a website known as “WindyGrid” – went live a couple of years ago, to considerable acclaim inside the techie scene.
This tale might seem unremarkable. We are all used to hearing politicians, business leaders and management consultants declare that the computing revolution is transforming our lives. And as my colleague Tim Harford pointed out in these pages last week, the idea of using big data is now wildly fashionable in the business and academic worlds….
In America when top bankers become rich, they often want to “give back” by having a second career in public service: just think of all those Wall Street financiers who have popped up at the US Treasury in recent years. But hoodie-wearing geeks do not usually do the same. Sure, there are some former techie business leaders who are indirectly helping government. Steve Case, a co-founder of AOL, has supported White House projects to boost entrepreneurship and combat joblessness. Tech entrepreneurs also make huge donations to philanthropy. Facebook’s Mark Zuckerberg, for example, has given funds to Newark education. And the whizz-kids have also occasionally been summoned by the White House in times of crisis. When there was a disastrous launch of the government’s healthcare website late last year, the Obama administration enlisted the help of some of the techies who had been involved with the president’s election campaign.
But what you do not see is many tech entrepreneurs doing what Goldstein did: deciding to spend a few years in public service, as a government employee. There aren’t many Zuckerberg types striding along the corridors of federal or local government.
. . .
It is not difficult to work out why. To most young entrepreneurs, the idea of working in a state bureaucracy sounds like utter hell. But if there was ever a time when it might make sense for more techies to give back by doing stints of public service, that moment is now. The civilian public sector badly needs savvier tech skills (just look at the disaster of that healthcare website for evidence of this). And as the sector’s founders become wealthier and more powerful, they need to show that they remain connected to society as a whole. It would be smart political sense.
So I applaud what Goldstein has done. I also welcome that he is now trying to persuade his peers to do the same, and that places such as the University of Chicago (where he teaches) and New York University are trying to get more young techies to think about working for government in between doing those dazzling IPOs. “It is important to see more tech entrepreneurs in public service. I am always encouraging people I know to do a ‘stint in government’. I tell them that giving back cannot just be about giving money; we need people from the tech world to actually work in government,” Goldstein says.
But what is really needed is for more technology CEOs and leaders to get involved by actively talking about the value of public service – or even encouraging their employees to interrupt their private-sector careers with the occasional spell as a government employee (even if it is not in a sector quite as challenging as the police). Who knows? Maybe it could be Sheryl Sandberg’s next big campaigning mission. After all, if she does ever jump back to Washington, that could have a powerful demonstration effect for techie women and men. And shake DC a little too.”
Eight (No, Nine!) Problems With Big Data
Gary Marcus and Ernest Davis in the New York Times: “Big data is suddenly everywhere. Everyone seems to be collecting it, analyzing it, making money from it and celebrating (or fearing) its powers. Whether we’re talking about analyzing zillions of Google search queries to predict flu outbreaks, or zillions of phone records to detect signs of terrorist activity, or zillions of airline stats to find the best time to buy plane tickets, big data is on the case. By combining the power of modern computing with the plentiful data of the digital era, it promises to solve virtually any problem — crime, public health, the evolution of grammar, the perils of dating — just by crunching the numbers.
Or so its champions allege. “In the next two decades,” the journalist Patrick Tucker writes in the latest big data manifesto, “The Naked Future,” “we will be able to predict huge areas of the future with far greater accuracy than ever before in human history, including events long thought to be beyond the realm of human inference.” Statistical correlations have never sounded so good.
Is big data really all it’s cracked up to be? There is no doubt that big data is a valuable tool that has already had a critical impact in certain areas. For instance, almost every successful artificial intelligence computer program in the last 20 years, from Google’s search engine to the I.B.M. “Jeopardy!” champion Watson, has involved the substantial crunching of large bodies of data. But precisely because of its newfound popularity and growing use, we need to be levelheaded about what big data can — and can’t — do.
The first thing to note is that although big data is very good at detecting correlations, especially subtle correlations that an analysis of smaller data sets might miss, it never tells us which correlations are meaningful. A big data analysis might reveal, for instance, that from 2006 to 2011 the United States murder rate was well correlated with the market share of Internet Explorer: Both went down sharply. But it’s hard to imagine there is any causal relationship between the two. Likewise, from 1998 to 2007 the number of new cases of autism diagnosed was extremely well correlated with sales of organic food (both went up sharply), but identifying the correlation won’t by itself tell us whether diet has anything to do with autism.
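To see how easily such a pattern arises, here is a minimal sketch (with made-up numbers standing in for the real series) showing that two declining trends can correlate almost perfectly with no causal link between them:

```python
# Toy illustration with hypothetical values: two series that both
# decline from 2006 to 2011 correlate strongly despite having no
# causal relationship.
import numpy as np

murder_rate = np.array([5.8, 5.7, 5.4, 5.0, 4.8, 4.7])       # made-up values
ie_share    = np.array([88.0, 78.0, 68.0, 60.0, 52.0, 43.0])  # made-up values

r = np.corrcoef(murder_rate, ie_share)[0, 1]
print(f"Pearson r = {r:.2f}")  # near +1.0: strong correlation, zero causation
```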
Second, big data can work well as an adjunct to scientific inquiry but rarely succeeds as a wholesale replacement. Molecular biologists, for example, would very much like to be able to infer the three-dimensional structure of proteins from their underlying DNA sequence, and scientists working on the problem use big data as one tool among many. But no scientist thinks you can solve this problem by crunching data alone, no matter how powerful the statistical analysis; you will always need to start with an analysis that relies on an understanding of physics and biochemistry.
Third, many tools that are based on big data can be easily gamed. For example, big data programs for grading student essays often rely on measures like sentence length and word sophistication, which are found to correlate well with the scores given by human graders. But once students figure out how such a program works, they start writing long sentences and using obscure words, rather than learning how to actually formulate and write clear, coherent text. Even Google’s celebrated search engine, rightly seen as a big data success story, is not immune to “Google bombing” and “spamdexing,” wily techniques for artificially elevating website search placement.
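A deliberately naive scorer of the kind the authors describe makes the vulnerability concrete. The features and word list below are invented for illustration; the point is only that rewarding sentence length and uncommon words invites padding and thesaurus abuse:

```python
# Naive essay scorer: rewards average sentence length and the share of
# words outside a small "common words" list. Padding sentences and
# swapping in obscure vocabulary inflates the score without improving
# the writing in any real sense.
COMMON = {"the", "a", "an", "is", "was", "and", "of", "to", "in", "it", "that"}

def naive_score(essay: str) -> float:
    sentences = [s for s in essay.split(".") if s.strip()]
    words = essay.lower().replace(".", "").split()
    avg_sentence_len = len(words) / max(len(sentences), 1)
    rare_fraction = sum(w not in COMMON for w in words) / max(len(words), 1)
    return avg_sentence_len * (1 + rare_fraction)

print(naive_score("The dog ran to the park. It was fun."))            # modest score
print(naive_score("Perambulating canines enthusiastically frequent "
                  "verdant recreational expanses notwithstanding."))  # higher score
```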
Fourth, even when the results of a big data analysis aren’t intentionally gamed, they often turn out to be less robust than they initially seem. Consider Google Flu Trends, once the poster child for big data. In 2009, Google reported — to considerable fanfare — that by analyzing flu-related search queries, it had been able to detect the spread of the flu as accurately as, and more quickly than, the Centers for Disease Control and Prevention. A few years later, though, Google Flu Trends began to falter; for the last two years it has made more bad predictions than good ones.
As a recent article in the journal Science explained, one major contributing cause of the failures of Google Flu Trends may have been that the Google search engine itself constantly changes, such that patterns in data collected at one time do not necessarily apply to data collected at another time. As the statistician Kaiser Fung has noted, collections of big data that rely on web hits often merge data that was collected in different ways and with different purposes — sometimes to ill effect. It can be risky to draw conclusions from data sets of this kind.
A fifth concern might be called the echo-chamber effect, which also stems from the fact that much of big data comes from the web. Whenever the source of information for a big data analysis is itself a product of big data, opportunities for vicious cycles abound. Consider translation programs like Google Translate, which draw on many pairs of parallel texts from different languages — for example, the same Wikipedia entry in two different languages — to discern the patterns of translation between those languages. This is a perfectly reasonable strategy, except for the fact that with some of the less common languages, many of the Wikipedia articles themselves may have been written using Google Translate. In those cases, any initial errors in Google Translate infect Wikipedia, which is fed back into Google Translate, reinforcing the error.
A sixth worry is the risk of too many correlations. If you look 100 times for correlations between two variables, you risk finding, purely by chance, about five bogus correlations that appear statistically significant — even though there is no actual meaningful connection between the variables. Absent careful supervision, the magnitudes of big data can greatly amplify such errors.
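This is easy to demonstrate in simulation. The sketch below (sample sizes and seed chosen arbitrarily) runs 100 correlation tests on pairs of variables that are independent by construction and still finds roughly five “significant” results:

```python
# Multiple-comparisons simulation: 100 correlation tests between pairs
# of independent random variables. At the conventional p < 0.05
# threshold, about 5 will look "significant" purely by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
false_positives = 0
for _ in range(100):
    x = rng.normal(size=30)
    y = rng.normal(size=30)  # generated independently of x
    _, p = stats.pearsonr(x, y)
    if p < 0.05:
        false_positives += 1

print(false_positives)  # around 5, despite zero real relationships
```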
Seventh, big data is prone to giving scientific-sounding solutions to hopelessly imprecise questions. In the past few months, for instance, there have been two separate attempts to rank people in terms of their “historical importance” or “cultural contributions,” based on data drawn from Wikipedia. One is the book “Who’s Bigger? Where Historical Figures Really Rank,” by the computer scientist Steven Skiena and the engineer Charles Ward. The other is an M.I.T. Media Lab project called Pantheon.
Both efforts get many things right — Jesus, Lincoln and Shakespeare were surely important people — but both also make some egregious errors. “Who’s Bigger?” claims that Francis Scott Key was the 19th most important poet in history; Pantheon has claimed that Nostradamus was the 20th most important writer in history, well ahead of Jane Austen (78th) and George Eliot (380th). Worse, both projects suggest a misleading degree of scientific precision with evaluations that are inherently vague, or even meaningless. Big data can reduce anything to a single number, but you shouldn’t be fooled by the appearance of exactitude.
Finally, big data is at its best when analyzing things that are extremely common, but often falls short when analyzing things that are less common. For instance, programs that use big data to deal with text, such as search engines and translation programs, often rely heavily on something called trigrams: sequences of three words in a row (like “in a row”). Reliable statistical information can be compiled about common trigrams, precisely because they appear frequently. But no existing body of data will ever be large enough to include all the trigrams that people might use, because of the continuing inventiveness of language.
To select an example more or less at random, a book review that the actor Rob Lowe recently wrote for this newspaper contained nine trigrams such as “dumbed-down escapist fare” that had never before appeared anywhere in all the petabytes of text indexed by Google. To witness the limitations that big data can have with novelty, Google-translate “dumbed-down escapist fare” into German and then back into English: out comes the incoherent “scaled-flight fare.” That is a long way from what Mr. Lowe intended — and from big data’s aspirations for translation.
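For readers curious what a trigram model actually counts, here is a minimal sketch of extracting and tallying trigrams (the toy corpus is ours, not the authors’). Frequent trigrams accumulate reliable counts; a novel phrase like Mr. Lowe’s simply never appears:

```python
# Minimal trigram extraction and counting, the statistic that search
# engines and translation systems lean on. Novel phrases have a count
# of zero no matter how large the corpus.
from collections import Counter

def trigrams(text):
    words = text.lower().split()
    return zip(words, words[1:], words[2:])

corpus = "three words in a row like in a row appears in a row of text"
counts = Counter(trigrams(corpus))
print(counts.most_common(2))  # ('in', 'a', 'row') dominates this toy corpus
```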
Wait, we almost forgot one last problem: the hype….
Smart cities are here today — and getting smarter
Computer World: “Smart cities aren’t a science fiction, far-off-in-the-future concept. They’re here today, with municipal governments already using technologies that include wireless networks, big data/analytics, mobile applications, Web portals, social media, sensors/tracking products and other tools.
These smart city efforts have lofty goals: Enhancing the quality of life for citizens, improving government processes and reducing energy consumption, among others. Indeed, cities are already seeing some tangible benefits.
But creating a smart city comes with daunting challenges, including the need to provide effective data security and privacy, and to ensure that myriad departments work in harmony.
What makes a city smart? As with any buzz term, the definition varies. But in general, it refers to using information and communications technologies to deliver sustainable economic development and a higher quality of life, while engaging citizens and effectively managing natural resources.
Making cities smarter will become increasingly important. For the first time ever, the majority of the world’s population resides in a city, and this proportion continues to grow, according to the World Health Organization, the coordinating authority for health within the United Nations.
A hundred years ago, two out of every 10 people lived in an urban area, the organization says. As recently as 1990, less than 40% of the global population lived in a city — but by 2010 more than half of all people lived in an urban area. By 2050, the proportion of city dwellers is expected to rise to 70%.
As many city populations continue to grow, here’s what five U.S. cities are doing to help manage it all:
Scottsdale, Ariz.
The city of Scottsdale, Ariz., has several initiatives underway.
One is MyScottsdale, a mobile application the city deployed in the summer of 2013 that allows citizens to report cracked sidewalks, broken street lights and traffic lights, road and sewer issues, graffiti and other problems in the community….”
Coke Creates Volunteering App For Local Do-Gooders
PSFK: “If you’ve ever wanted to volunteer some time but didn’t know where to look, Coke Romania has the app for you. After teaming up with digital marketing company McCann Bucharest, Coke just created a new app that shows good Samaritans local volunteer opportunities. ‘Radar For Good’ scans your location and brings up NGOs, soup kitchens, orphanages, or libraries that want help right now.
Any opportunity that ‘Radar For Good’ discovers is a site that is definitely looking for volunteers at that moment. The app shows company names, websites, and contact information, as well as directions from where you are. It even allows you to save your favorite organizations for future reference, and has options to receive notifications from those companies.
Coca-Cola has numerous iOS apps, most of which deal with their soda products, but ‘Radar For Good’ is the first of its kind. While the app currently only works in Romania, Coke’s innovative creation has opened doors for similar mobile apps to get started in the United States.”
Open Data: What Is It and Why Should You Care?
Jason Shueh at Government Technology: “Though the debate about open data in government is an evolving one, it is indisputably here to stay — it can be heard in both houses of Congress, in state legislatures, and in city halls around the nation.
Already, 39 states and 46 localities provide data sets to data.gov, the federal government’s online open data repository. And 30 jurisdictions, including the federal government, have taken the additional step of institutionalizing their practices in formal open data policies.
Though the term “open data” is spoken of frequently — and has been since President Obama took office in 2009 — what it is and why it’s important isn’t always clear. That’s understandable, perhaps, given that open data lacks a unified definition.
“People tend to conflate it with big data,” said Emily Shaw, the national policy manager at the Sunlight Foundation, “and I think it’s useful to think about how it’s different from big data in the sense that open data is the idea that public information should be accessible to the public online.”
Shaw said the foundation, a Washington, D.C., non-profit advocacy group promoting open and transparent government, believes the term open data can be applied to a variety of information created or collected by public entities. Among the benefits of open data are improved measurement of policies, better government efficiency, deeper analytical insights, greater citizen participation, and a boost to local companies by way of products and services that use government data (think civic apps and software programs).
“The way I personally think of open data,” Shaw said, “is that it is a manifestation of the idea of open government.”
What Makes Data Open
For governments hoping to adopt open data in policy and in practice, simply making data available to the public isn’t enough to make that data useful. Open data, though straightforward in principle, requires a specific approach based on the agency or organization releasing it, the kind of data being released and, perhaps most importantly, its targeted audience.
According to the foundation’s California Open Data Handbook, published in collaboration with Stewards of Change Institute, a national group supporting innovation in human services, data must first be both “technically open” and “legally open.” The guide defines the terms in this way:
- Technically open: [data] available in a machine-readable standard format, which means it can be retrieved and meaningfully processed by a computer application.
- Legally open: [data] explicitly licensed in a way that permits commercial and non-commercial use and re-use without restrictions.
Technically open means that data is easily accessible to its intended audience. If the intended users are developers and programmers, Shaw said, the data should be presented within an application programming interface (API); if it’s intended for researchers in academia, data might be structured in a bulk download; and if it’s aimed at the average citizen, data should be available without requiring software purchases.
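As a concrete illustration of what “technically open” buys you, consider this sketch of a program consuming machine-readable data (the URL and field names are hypothetical):

```python
# Sketch of consuming "technically open" data: a hypothetical portal
# endpoint returns machine-readable JSON that a program can retrieve
# and meaningfully process. A scanned PDF of the same records would
# defeat this loop entirely.
import json
from urllib.request import urlopen

URL = "https://data.example.gov/api/permits.json"  # hypothetical endpoint

with urlopen(URL) as resp:
    records = json.load(resp)

# "status" is an illustrative field name, not from any real schema.
open_permits = [r for r in records if r.get("status") == "open"]
print(len(open_permits))
```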
….
4 Steps to Open Data
Creating open data isn’t without its complexities. There are many tasks that need to happen before an open data project ever begins. A full endorsement from leadership is one prerequisite. Adding the project into the workflow is another. And, as with any government project, fears and misunderstandings must be allayed.
With those table stakes in place, the handbook prescribes four steps: choosing a set of data, attaching an open license, making it available in a proper format and ensuring the data is discoverable.
1. Choose a Data Set
Choosing a data set can appear daunting, but it doesn’t have to be. Shaw said ample resources are available from the foundation and others on how to get started with this — see our list of open data resources for more information. In the case of selecting a data set, or sets, she referred to the foundation’s recently updated guidelines that urge identifying data sets based on goals and the demand from citizen feedback.
2. Attach an Open License
Open licenses dispel ambiguity and encourage use. However, licensing needs to be proactive: users should not be forced to request the information in order to use it — a common symptom of data accessed through the Freedom of Information Act. Tips for reference can be found at Opendefinition.org, a site that has a list of examples and links to open licenses that meet the definition of open use.
3. Format the Data to Your Audience
As previously stated, Shaw recommends tailoring the format of data to the audience, with the ideal being that data is packaged in formats that can be digested by all users: developers, civic hackers, department staff, researchers and citizens. This could mean it’s put into APIs, spreadsheet docs, text and zip files, FTP servers and torrent networking systems (a way to download files from different sources). The file type and the system for download all depends on the audience.
“Part of learning about what formats government should offer data in is to engage with the prospective users,” Shaw said.
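To make the format question concrete, here is a small sketch (records and field names invented for illustration) that writes the same data once as JSON for developers and once as CSV for spreadsheet users:

```python
# One data set, two audiences: JSON for developers building against an
# API, CSV for citizens and staff who work in spreadsheets. The records
# and field names are invented for illustration.
import csv
import json

records = [
    {"neighborhood": "Downtown", "service_requests": 124},
    {"neighborhood": "Riverside", "service_requests": 87},
]

with open("requests.json", "w") as f:
    json.dump(records, f, indent=2)

with open("requests.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["neighborhood", "service_requests"])
    writer.writeheader()
    writer.writerows(records)
```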
4. Make it Discoverable
If open data is strewn across multiple download links and wedged into various nooks and crannies of a website, it probably won’t be found. Shaw recommends a centralized hub that acts as a one-stop shop for all open data downloads. In many jurisdictions, these Web pages and websites have been called “portals;” they are the online repositories for a jurisdiction’s open data publishing.
“It is important for thinking about how people can become aware of what their governments hold. If the government doesn’t make it easy for people to know what kinds of data is publicly available on the website, it doesn’t matter what format it’s in,” Shaw said. She pointed to public participation — a recurring theme in open data development — to incorporate into the process to improve accessibility.
Examples of portals can be found in numerous cities across the U.S., such as San Francisco, New York, Los Angeles, Chicago and Sacramento, Calif.
“Government Entrepreneur” is Not an Oxymoron
But imagine if the road that led to the Seattle City Council ridesharing hearings this month — with rulings that sharply curtail UberX, Lyft, and Sidecar’s operations there — had been a vastly different one. Imagine that public leaders had conceived and built a platform to provide this new, shared model of transit. Or at the very least, that instead of having a revolution of the current transit regime done to Seattle public leaders, it was done with them. Amidst the acrimony, it seems hard to imagine that public leaders could envision and operate such a platform, or that private innovators could work with them more collaboratively on it — but it’s not impossible. What would it take? Answer: more public entrepreneurs.
The idea of “public entrepreneurship” may sound to you like it belongs on a list of oxymorons right alongside “government intelligence.” But it doesn’t. Public entrepreneurs around the world are improving our lives, inventing entirely new ways to serve the public. They are using sensors to detect potholes; word pedometers to help students learn; harnessing behavioral economics to encourage organ donation; crowdsourcing patent review; and transforming Medellin, Colombia with cable cars. They are coding in civic hackathons and competing in the Bloomberg challenge. They are partnering with an Office of New Urban Mechanics in Boston or in Philadelphia, co-developing products in San Francisco’s Entrepreneurship-in-Residence program, or deploying some of the more than $430 million invested into civic-tech in the last two years.
There is, however, a big problem with public entrepreneurs: there just aren’t enough of them. Without more public entrepreneurship, it’s hard to imagine meeting our public challenges or making the most of private innovation. One might argue that bungled healthcare website roll-outs or internet spying are evidence of too much activity on the part of public leaders, but I would argue that what they really show is too little entrepreneurial skill and judgment.
The solution to creating more public entrepreneurs is straightforward: train them. But, by and large, we don’t. Consider Howard Stevenson’s definition of entrepreneurship: “the pursuit of opportunity without regard to resources currently controlled.” We could teach that approach to people heading towards the public sector. But now consider the following list of terms: “acknowledgement of multiple constituencies,” “risk reduction,” “formal planning,” “coordination,” “efficiency measures,” “clearly defined responsibility,” and “organizational culture.” It reads like a list of the kinds of concepts we would want a new public official to know; like it might be drawn from an interview evaluation form or graduate school syllabus. In fact, it’s from Stevenson’s list of pressures that pull managers away from entrepreneurship and towards administration. Of course, that’s not all bad. We must have more great public administrators. But with all our challenges and amidst all the dynamism, we are going to need more than analysts and strategists in the public sector; we need inventors and builders, too.
Public entrepreneurship is not simply innovation in the public sector (though it makes use of innovation), and it’s not just policy reform (though it can help drive reform). Public entrepreneurs build something from nothing with resources — be they financial capital or human talent or new rules — they didn’t command. In Boston, I worked with many amazing public managers and a handful of outstanding public entrepreneurs. Chris Osgood and Nigel Jacob brought the country’s first major-city mobile 311 app to life, and they are public entrepreneurs. They created Citizens Connect in 2009 by bringing together iPhones on loan, a local coder and the most under-tapped resource in the public sector: the public. They transformed the way basic neighborhood issues are reported and responded to (20% of all constituent cases in Boston are now reported over smartphones), and their model is now accessible to 40 towns in Massachusetts and cities across the country. The Mayor’s team in Boston that started up the One Fund in the days after the Marathon bombings was also made up of public entrepreneurs. We built the organization from PayPal and a Post Office Box, and it went on to channel $61 million from donors to victims and survivors in just 75 days. It still operates today….
It’s worth noting that public entrepreneurship, perhaps newly buzzworthy, is not actually new. Elinor Ostrom (44 years before her Nobel Prize) observed public entrepreneurs inventing new models in the 1960s. Back when Ronald Reagan was president, Peter Drucker wrote that it was entrepreneurship that would keep public service “flexible and self-renewing.” And almost two decades have passed since David Osborne and Ted Gaebler’s “Reinventing Government” (then the handbook for public officials) carried the promising subtitle: “How the Entrepreneurial Spirit is Transforming the Public Sector”. Public entrepreneurship, though not nearly as widespread as its private complement, or perhaps as fashionable as its “social” counterpart (focused on non-profits and their ecosystem), has been around for a while and so have those who practiced it.
But still today, we mostly train future public leaders to be public administrators. We school them in performance management and leave them too inclined to run from risk instead of managing it. And we communicate often, explicitly or not, to private entrepreneurs that government officials are failures and dinosaurs. It’s easy to see how that road led to Seattle this month, but hard to see how it empowers public officials to take on the enormous challenges that still lie ahead of us, or how it enables the public to help them.”
The GovLab Index: Privacy and Security
Please find below the latest installment in The GovLab Index series, inspired by the Harper’s Index. “The GovLab Index: Privacy and Security” examines the attitudes and concerns of American citizens regarding online privacy. Previous installments include Designing for Behavior Change, The Networked Public, Measuring Impact with Evidence, Open Data, The Data Universe, Participation and Civic Engagement and Trust in Institutions.
Globally
- Percentage of people who feel the Internet is eroding their personal privacy: 56%
- Internet users who feel comfortable sharing personal data with an app: 37%
- Number of users who consider it important to know when an app is gathering information about them: 70%
- How many people in the online world use privacy tools to disguise their identity or location: 28%, or 415 million people
- Country with the highest penetration of general anonymity tools among Internet users: Indonesia, where 42% of users surveyed use proxy servers
- Percentage of China’s online population that disguises their online location to bypass governmental filters: 34%
In the United States
Over the Years
- In 1996, percentage of the American public who were categorized as having “high privacy concerns”: 25%
- Those with “medium privacy concerns”: 59%
- Those who were unconcerned with privacy: 16%
- In 1998, number of computer users concerned about threats to personal privacy: 87%
- In 2001, those who reported “medium to high” privacy concerns: 88%
- Individuals who are unconcerned about privacy: 18% in 1990, down to 10% in 2004
- How many online American adults are more concerned about their privacy in 2014 than they were a year ago, indicating rising privacy concerns: 64%
- Number of respondents in 2012 who believe they have control over their personal information: 35%, downward trend for seven years
- How many respondents in 2012 continue to perceive privacy and the protection of their personal information as very important or important to the overall trust equation: 78%, upward trend for seven years
- How many consumers in 2013 trust that their bank is committed to ensuring the privacy of their personal information is protected: 35%, down from 48% in 2004
Privacy Concerns and Beliefs
- How many Internet users worry about their privacy online: 92%
- Those who report that their level of concern has increased from 2013 to 2014: 7 in 10
- How many are at least sometimes worried when shopping online: 93%, up from 89% in 2012
- Those who have some concerns when banking online: 90%, up from 86% in 2012
- Number of Internet users who are worried about the amount of personal information about them online: 50%, up from 33% in 2009
- Those who report that their photograph is available online: 66%
- Their birthdate: 50%
- Home address: 30%
- Cell number: 24%
- A video: 21%
- Political affiliation: 20%
- Consumers who are concerned about companies tracking their activities: 58%
- Those who are concerned about the government tracking their activities: 38%
- How many users surveyed felt that the National Security Agency (NSA) overstepped its bounds in light of recent NSA revelations: 44%
- Respondents who are comfortable with advertisers using their web browsing history to tailor advertisements as long as it is not tied to any other personally identifiable information: 36%, up from 29% in 2012
- Percentage of voters who do not want political campaigns to tailor their advertisements based on their interests: 86%
- Percentage of respondents who do not want news tailored to their interests: 56%
- Percentage of users who are worried that their information will be stolen by hackers: 75%
- Those who are worried about companies tracking their browsing history for targeted advertising: 54%
- How many consumers say they do not trust businesses with their personal information online: 54%
- Top 3 most trusted companies for privacy identified by consumers from across 25 different industries in 2012: American Express, Hewlett Packard and Amazon
- Most trusted industries for privacy: Healthcare, Consumer Products and Banking
- Least trusted industries for privacy: Internet and Social Media, Non-Profits and Toys
- Respondents who admit to sharing their personal information with companies they did not trust in 2012 for reasons such as convenience when making a purchase: 63%
- Percentage of users who say they prefer free online services supported by targeted ads: 61%
- Those who prefer paid online services without targeted ads: 33%
- How many Internet users believe that it is not possible to be completely anonymous online: 59%
- Those who believe complete online anonymity is still possible: 37%
- Those who say people should have the ability to use the Internet anonymously: 59%
- Percentage of Internet users who believe that current laws are not good enough in protecting people’s privacy online: 68%
- Those who believe current laws provide reasonable protection: 24%
FULL LIST at http://thegovlab.org/the-govlab-index-privacy-and-trust/
Open Government: Building Trust and Civic Engagement
Gavin Newsom and Zachary Bookman in the Huffington Post: “Daily life has become inseparable from new technologies. Our phones and tablets let us shop from the couch, track how many miles we run, and keep in touch with friends across town and around the world – benefits barely possible a decade ago.
With respect to our communities, Uber and Lyft now shuttle us around town, reducing street traffic and parking problems. Adopt-a-Hydrant apps coordinate efforts to dig out hydrants after snowstorms, saving firefighters time when battling blazes. Change.org helps millions petition for and effect social and political change.
Yet as a sector, government typically embraces technology well behind the consumer curve. This leads to disheartening stories, like veterans waiting months or years for disability claims due to outdated technology, or the troubled rollout of the Healthcare.gov website. This is changing.
Cities and states are now the driving force in a national movement to harness technology to share a wealth of government information and data. Many forward-thinking local governments now provide the public with effective tools to make sense of all this data.
This is the Open Government movement.
For too long, government information has been locked away in agencies, departments, and archaic IT systems. Senior administrators often have to request the data they need to do their jobs from system operators. Elected officials, in turn, often have to request data from these administrators. The public remains in the dark, and when data is released, it appears in the form of inaccessible or incomprehensible facts and figures.
Governments keep massive volumes of data, from 500-page budget documents to population statistics to neighborhood crime rates. Although raw data is a necessary component of Open Government, for it to empower citizens and officials, the data must be transformed into meaningful and actionable insights. Governments must both publish information in “machine readable” format and give people the tools to understand and act on it.
New platforms can transform data from legacy systems into meaningful visualizations. Instant, web-based access to this information not only saves time and money, but also helps governments make faster and better decisions. This allows them to serve their communities and builds trust with citizens.
Leading governments like Palo Alto have begun employing technology to leverage these benefits. Even the City of Bell, California, which made headlines in 2010 when senior administrators siphoned millions of dollars from the general fund, is now leveraging cloud technology to turn a new page in its history. The city has presented its financial information in an easily accessible, interactive platform at Bell.OpenGov.com. Citizens and officials alike can see vivid, user-generated charts and graphs that show where money goes, what services are offered to residents, and how much those services cost.
In 2009, San Francisco became an early adopter of the open data movement when an executive order made open and machine-readable the default for our consolidated government. That simple order spurred an entirely new industry and the City of San Francisco has been adopting apps like the San Francisco Heat Vulnerability Index and Neighborhood Score ever since. The former identifies areas vulnerable to heat waves with the hope of better preparedness, while the latter provides an overall health and sustainability score, block-by-block for every neighborhood in the city. These new apps use local, state, federal, and private data sets to allow residents to see how their neighborhoods rank.
The California State Lands Commission, responsible for the stewardship of the state’s lands, waterways, and natural resources, is getting in on the Open Government movement too. The Commission now publishes five years of expense and revenue data at CAStateLands.opengov.com (which just launched today!). California residents can now see how the state generates nearly half a billion dollars in revenue from oil and gas contracts, mineral royalties, and leasing programs. The State can now communicate how it manages those resources, so that citizens understand how their government works for them.
The Open Government movement provides a framework for improved public administration and a path for more trust and engagement. Governments have been challenged to do better, and now they can.”