New Report from the Knight Foundation: “Since 2007, Knight Foundation has run or funded nearly a dozen open contests, many over multiple years, choosing some 400 winners from almost 25,000 entries, and granting more than $75 million to individuals, businesses, schools and nonprofits. The winners believe, as we do, that democracy thrives when people and communities are informed and engaged. The contests reflect the full diversity of our program areas: journalism and media innovation, engaging communities and fostering the arts. Over the past seven years, we have learned a lot about how good contests work, what they can do, and what the challenges are. Though contests represent less than 20 percent of our grant-making, they have improved our traditional programs in myriad ways.
A 2009 McKinsey & Company report, “And the winner is…,” put it this way: “Every leading philanthropist should consider the opportunity to use prizes to help achieve their mission, and to accept the challenge of fully exploiting this powerful tool.” But of America’s more than 76,000 grant-making foundations, only a handful, maybe 100 at most, have embraced the use of contests. That means 99.9 percent do not.
Sharing these lessons here is an invitation to others to consider how contests, when appropriate, might widen their networks, deepen the work they already do, and broaden their definition of philanthropic giving.
Before you launch and manage your own contests, you might want to consider the six major lessons we’ve learned about how contests have improved our philanthropy.
1. They bring in new blood and new ideas.
2. They create value beyond the winners.
3. They help organizations spot emerging trends.
4. They challenge routines and entrenched foundation behaviors.
5. They complement existing philanthropy strategies.
6. They create new ways to engage communities.
…Depending upon the competition, the odds of winning one of Knight’s contests range from one in six at best to worse than one in 100. But if you think of your contest only as a funnel spitting out a handful of winning ideas, you overlook what’s really happening. A good contest is more like a megaphone for a cause.”
Data Science for Social Good
Data Science for Social Good: “By analyzing data from police reports to website clicks to sensor signals, governments are starting to spot problems in real-time and design programs to maximize impact. More nonprofits are measuring whether or not they’re helping people, and experimenting to find interventions that work.
None of this is inevitable, however.
We’re just realizing the potential of using data for social impact and face several hurdles to its widespread adoption:
- Most governments and nonprofits simply don’t know what’s possible yet. They have data – but often not enough and maybe not the right kind.
- There are too few data scientists out there – and too many spending their days optimizing ads instead of bettering lives.
To make an impact, we need to show social good organizations the power of data and analytics. We need to work on analytics projects that have high social impact. And we need to expose data scientists to the problems that really matter.
The fellowship
That’s exactly why we’re doing the Eric and Wendy Schmidt Data Science for Social Good summer fellowship at the University of Chicago.
We want to bring three dozen aspiring data scientists to Chicago, and have them work on data science projects with social impact.
Working closely with governments and nonprofits, fellows will take on real-world problems in education, health, energy, transportation, and more.
Over the next three months, they’ll apply their coding, machine learning, and quantitative skills, collaborate in a fast-paced atmosphere, and learn from mentors in industry, academia, and the Obama campaign.
The program is led by a strong interdisciplinary team from the Computation Institute and the Harris School of Public Policy at the University of Chicago.”
Metadata Liberation Movement
Holman Jenkins in the Wall Street Journal: “The biggest problem, then, with metadata surveillance may simply be that the wrong agencies are in charge of it. One particular reason why this matters is that the potential of metadata surveillance might actually be quite large but is being squandered by secret agencies whose narrow interest is only looking for terrorists….
“Big data” is only as good as the algorithms used to find out things worth finding out. The efficacy and refinement of big-data techniques are advanced by repetition, by giving more chances to find something worth knowing. Bringing metadata out of its black box wouldn’t only be a way to improve public trust in what government is doing. It would be a way to get more real value for society out of techniques that are being squandered on a fairly minor threat.
Bringing metadata out of the black box would open up new worlds of possibility—from anticipating traffic jams to locating missing persons after a disaster. It would also create an opportunity to make big data more consistent with the constitutional prohibition of unwarranted search and seizure. In the first instance, with the computer withholding identifying details of the individuals involved, any red flag could be examined by a law-enforcement officer to see, based on accumulated experience, whether the indication is of interest.
If so, a warrant could be obtained to expose the identities involved. If not, the record could immediately be expunged. All this could take place in a reasonably aboveboard, legal fashion, open to inspection in court when and if charges are brought or—this would be a good idea—a court is informed of investigations that led to no action.
Our guess is that big data techniques would pop up way too many false positives at first, and only considerable learning and practice would allow such techniques to become a useful tool. At the same time, bringing metadata surveillance out of the shadows would help the Googles, Verizons and Facebooks defend themselves from a wholly unwarranted suspicion that user privacy is somehow better protected by French or British or (heavens) Chinese companies from their own governments than U.S. data is from the U.S. government.
Most of all, it would allow these techniques to be put to work on solving problems that are actual problems for most Americans, which terrorism isn’t.”
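Mechanically, what Jenkins proposes is a pseudonymization layer: analysts work on masked identifiers, rules raise red flags, and the mapping back to real identities is released only against a recorded warrant. Here is a minimal Python sketch of that workflow, assuming hashed call records and a simple calling-frequency rule; the class, field names, and threshold are illustrative, not anything the column specifies.

```python
# Minimal sketch of warrant-gated metadata analysis (illustrative, not from
# the column): analysts see only pseudonyms; identities require a warrant.
import hashlib
from collections import Counter

class MetadataStore:
    def __init__(self, records):
        # records: (subscriber, callee, timestamp) tuples of call metadata
        self._key = {}    # pseudonym -> identity, withheld from analysts
        self.events = []  # pseudonymized events analysts may query
        for subscriber, callee, ts in records:
            p = self._pseudonym(subscriber)
            self._key[p] = subscriber
            self.events.append((p, self._pseudonym(callee), ts))

    def _pseudonym(self, identity):
        return hashlib.sha256(identity.encode()).hexdigest()[:12]

    def flag_heavy_callers(self, threshold):
        # A stand-in for any red-flag rule: pseudonyms with many calls.
        counts = Counter(caller for caller, _, _ in self.events)
        return [p for p, n in counts.items() if n >= threshold]

    def unmask(self, pseudonym, warrant_id=None):
        # Identity is exposed only against a warrant; otherwise the
        # request is refused and the pseudonym stays sealed.
        if warrant_id is None:
            raise PermissionError("unmasking requires a warrant")
        return self._key[pseudonym]
```

A flag that proves uninteresting is simply never unmasked, which is the column’s point about expunging records rather than exposing them.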
Feedback Labs
Feedback Labs: “If you find yourself asking the following three questions, then you have come to the right place:
- “What do citizens want?”
- “Are they getting it?”
- “If not, how will things change?”
Much excellent work has been done over recent years to answer the first and second questions. Our goal is to catalyze that work and make it matter by focusing on the third question – “How will things change?”
Aid, philanthropy, and government programs are often designed, implemented, and evaluated by experts. We think that citizens should increasingly be in the driver’s seat. Experts are still important, but in many cases their role needs to shift from making the decisions to enriching and informing conversations among citizens.
What will Feedback Labs do?
Based on what we have heard so far, we think we can add value in three ways:
- Frame the issues – for example, what exactly do we mean by feedback loops? What works and what doesn’t? What is the evidence for impact?
- Help close the feedback loop – uncover approaches that are succeeding at finding out what people want and whether they are getting it, then help close the loop by understanding (and in some cases funding) what it takes to translate citizen voice into real changes in programs.
- Facilitate mainstreaming – i.e., help aid, philanthropy, and government organizations adopt feedback loops in their normal course of operation. We want to make feedback loops the norm rather than the exception.
Historically we have often assumed that the flow of knowledge is from the richer countries to the poorer. But learning goes both ways, and in the case of feedback loops, some of the most innovative approaches are being pioneered in developing countries. So we plan to support work both internationally and domestically.”
9 models to scale open data – past, present and future
Ones that are working now
1) Form a community to enter new data. OpenStreetMap and MusicBrainz are two big examples. It works because the community is the originator of the data. That said, neither has dominated its industry as much as I thought they would have by now.
2) Sell tools to an upstream generator of open data. This is what CKAN does for central governments (and what the new ScraperWiki CKAN tool helps with). It’s what mySociety does when selling FixMyStreet installs to local councils, thereby publishing their potholes as RSS feeds.
3) Use open data (quietly). Every organisation does this and never talks about it. It’s key to quite old data resellers like Bloomberg. It is what most of ScraperWiki’s professional services customers ask us to do. The value to society is enormous and invisible. The big flaw is that it doesn’t help scale supply of open data.
4) Sell tools to downstream users. This isn’t necessarily open data specific – existing software like spreadsheets and Business Intelligence can be used with open or closed data. Lots of open data is on the web, so tools like the new ScraperWiki which work well with web data are particularly suited to it.
Ones that haven’t worked
5) Collaborative curation. ScraperWiki started as an audacious attempt to create an open data curation community, based on editing scraping code in a wiki. In its original form (now called ScraperWiki Classic) this didn’t scale. …With a few exceptions, notably OpenCorporates, there aren’t yet open data curation projects.
6) General purpose data marketplaces, particularly ones that are mainly reusing open data, haven’t taken off. They might do one day; however, I think they need well-adopted higher-level standards for data formatting and syncing first (perhaps something like dat, perhaps something based on CSV files).
Ones I expect more of in the future
These are quite exciting models which I expect to see a lot more of.
7) Give labour/money to upstream to help them create better data. This is quite new. The only, and most excellent, example of it is the UK’s National Archives curating the Statute Law Database. They do the work with the help of staff seconded from commercial legal publishers and other parts of Government.
It’s clever because it generates money for upstream, which people trust the most, and which has the most ability to improve data quality.
8) Viral open data licensing. MySQL made lots of money this way, offering proprietary dual licenses of GPL’d software to embedded systems makers. In data, this could use OKFN’s Open Database License, and organisations would pay when they wanted to mix the open data with their own closed data. I don’t know anyone actively using it, although Chris Taggart from OpenCorporates mentioned this model to me years ago.
9) Corporations release data for strategic advantage. Companies are starting to release their own data for strategic gain. This is very new. Expect more of it.
What Happens When Everyone Makes Maps?
Laura Mallonee in the Atlantic: “On a spring Sunday in a Soho penthouse, ten people have gathered for a digital mapping “Edit-A-Thon.” Potted plants grow to the ceiling and soft cork carpets the floor. At a long wooden table, an energetic woman named Liz Barry is showing me how to map my neighborhood. “This is what you’ll see when you look at OpenStreetMap,” she says.
Though visually similar to Google’s, the map on the screen gives users unfettered access to its underlying data — anyone can edit it. Barry lives in Williamsburg, and she’s added many of the neighborhood’s boutiques and restaurants herself. “Sometimes when I’m tired at the end of the day and can’t work anymore, I just edit OpenStreetMap,” she says. “Kind of a weird habit.” Barry then shows me the map’s “guts.” I naively assume it will be something technical and daunting, but it’s just an editable version of the same map, with tools that let you draw roads, identify landmarks, and even label your own house.”
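The “unfettered access” is literal: anyone can query OpenStreetMap’s raw nodes, ways, and tags. As a minimal sketch, assuming Python with the requests library, the public Overpass API will return, say, every cafe node in a small bounding box (the box and tag below are illustrative):

```python
# Minimal sketch: read OpenStreetMap's underlying data via the public
# Overpass API. The bounding box (part of lower Manhattan) is illustrative.
import requests

query = """
[out:json][timeout:25];
node["amenity"="cafe"](40.72,-74.01,40.73,-73.99);
out body;
"""
resp = requests.post("https://overpass-api.de/api/interpreter",
                     data={"data": query})
resp.raise_for_status()
for node in resp.json()["elements"]:
    print(node["id"], node.get("tags", {}).get("name", "(unnamed)"))
```

Writes go the other way, through OpenStreetMap’s authenticated editing API, which is what in-browser editors like the one Barry demonstrates use under the hood.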
Crowdsourcing—Harnessing the Masses to Advance Health and Medicine
A systematic review of the literature in the Journal of General Internal Medicine: “Crowdsourcing research allows investigators to engage thousands of people to provide either data or data analysis. However, prior work has not documented the use of crowdsourcing in health and medical research. We sought to systematically review the literature to describe the scope of crowdsourcing in health research and to create a taxonomy to characterize past uses of this methodology for health and medical research…
Twenty-one health-related studies utilizing crowdsourcing met eligibility criteria. Four distinct types of crowdsourcing tasks were identified: problem solving, data processing, surveillance/monitoring, and surveying. …
Utilizing crowdsourcing can improve the quality, cost, and speed of a research project while engaging large segments of the public and creating novel science. Standardized guidelines are needed on crowdsourcing metrics that should be collected and reported to provide clarity and comparability in methods.”
Why We Collaborate
NPR and TED Radio Hour:
The Internet as a tool allows for really brilliant people to do things that they weren’t really able to do in the past. — Jimmy Wales
“The world has over a trillion hours a year of free time to commit to shared projects,” says Professor Clay Shirky. But what motivates dozens, thousands, even millions of people to come together on the Internet and commit their time to a project for free? What is the key to making a successful collaboration work? In this hour, TED speakers unravel the ideas behind the mystery of mass collaborations that build a better world.
The Science of Familiar Strangers: Society’s Hidden Social Network
The Physics arXiv Blog: “We’ve all experienced the sense of being familiar with somebody without knowing their name or even having spoken to them. These so-called “familiar strangers” are the people we see every day on the bus on the way to work, in the sandwich shop at lunchtime, or in the local restaurant or supermarket in the evening.
These people are the bedrock of society and a rich source of social potential as neighbours, friends, or even lovers.
But while many researchers have studied the network of intentional links between individuals—using mobile-phone records, for example—little work has been done on these unintentional links, which form a kind of hidden social network.
Today, that changes thanks to the work of Lijun Sun at the Future Cities Laboratory in Singapore and a few pals who have analysed the passive interactions between 3 million residents on Singapore’s bus network (about 55 per cent of the city’s population). “This is the first time that such a large network of encounters has been identified and analyzed,” they say.
The results are a fascinating insight into this hidden network of familiar strangers and the effects it has on people….
Perhaps the most interesting result involves the way this hidden network knits society together. Sun and co say that the data hints that the connections between familiar strangers grow stronger over time. So seeing each other more often increases the chances that familiar strangers will become socially connected.
That’s a fascinating insight into the hidden social network in which we are all embedded. It’s important because it has implications for our understanding of the way things like epidemics can spread through cities.
Perhaps more interesting is the insight it gives into how links form within communities and how they can be strengthened. With the widespread adoption of smart cards on transport systems throughout the world, this kind of study can easily be repeated in many cities, which may help to tease apart some of the factors that make them so different.”
Ref: arxiv.org/abs/1301.5979: Understanding Metropolitan Patterns of Daily Encounters
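The object at the heart of the paper is easy to reconstruct in outline: two passengers are linked whenever their rides on the same vehicle overlap in time, and repeated overlaps strengthen the link. Here is a minimal Python sketch with networkx, using made-up records in place of Singapore’s smart-card data (field names and values are illustrative):

```python
# Minimal sketch of an encounter network from smart-card-style trip records:
# an edge's weight counts how often two passengers shared a vehicle.
from collections import defaultdict
from itertools import combinations
import networkx as nx

# (passenger, vehicle, board_time, alight_time) -- illustrative records
trips = [
    ("p1", "bus7", 100, 130), ("p2", "bus7", 110, 140),
    ("p1", "bus7", 500, 530), ("p2", "bus7", 505, 520),
    ("p3", "bus9", 100, 120),
]

by_vehicle = defaultdict(list)
for pid, vid, on, off in trips:
    by_vehicle[vid].append((pid, on, off))

G = nx.Graph()
for riders in by_vehicle.values():
    for (a, on_a, off_a), (b, on_b, off_b) in combinations(riders, 2):
        if a != b and on_a < off_b and on_b < off_a:  # rides overlapped
            if G.has_edge(a, b):
                G[a][b]["weight"] += 1
            else:
                G.add_edge(a, b, weight=1)

print(G["p1"]["p2"]["weight"])  # 2: p1 and p2 are "familiar strangers"
```

High-weight edges are the familiar strangers; per the summary above, it is this repeated, passive co-presence that makes an eventual social connection more likely.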