By Aleks Krotoski: “The World Wide Web is the most revolutionary innovation of our time. In the last decade, it has utterly transformed our lives. But what real effects is it having on our social world? What does it mean to be a modern family when dinner table conversations take place over smartphones? What happens to privacy when we readily share our personal lives with friends and corporations? Are our Facebook updates and Twitterings inspiring revolution or are they just a symptom of our global narcissism? What counts as celebrity, when everyone can have a following or be a paparazzo? And what happens to relationships when love, sex and hate can be mediated by a computer? Social psychologist Aleks Krotoski has spent a decade untangling the effects of the Web on how we work, live and play. In this groundbreaking book, she uncovers how much humanity has – and hasn’t – changed because of our increasingly co-dependent relationship with the computer. In Untangling the Web, she tells the story of how the network became woven into our lives, and what it means to be alive in the age of the Internet.” Blog: http://untanglingtheweb.tumblr.com/
The Power of Hackathons
Woodrow Wilson International Center for Scholars: “The Commons Lab of the Science and Technology Innovation Program is proud to announce the release of The Power of Hackathons: A Roadmap for Sustainable Open Innovation. Hackathons are collaborative events that have long been part of programmer culture, where people gather in person, online or both to work together on a problem. This could involve creating an application, improving an existing one or testing a platform.
In recent years, government agencies at multiple levels have started holding hackathon events of their own. For this brief, author Zachary Bastian interviewed agency staff, hackathon planners and hackathon participants to better understand how these events can be structured. The fundamental lesson was that a hackathon is not a panacea, but should instead be part of a broader strategy centered on open data and innovation.
The full brief can be found here”
Why you should never trust a data visualisation
John Burn-Murdoch in The Guardian: “An excellent blogpost has been receiving a lot of attention over the last week. Pete Warden, an experienced data scientist and author for O’Reilly on all things data, writes:
The wonderful thing about being a data scientist is that I get all of the credibility of genuine science, with none of the irritating peer review or reproducibility worries … I thought I was publishing an entertaining view of some data I’d extracted, but it was treated like a scientific study.
This is an important acknowledgement of a very real problem, but in my view Warden has the wrong target in his crosshairs. Data presented in any medium is a powerful tool and must be used responsibly, but it is when information is expressed visually that the risks are highest.
The central example Warden uses is his visualisation of Facebook friend networks across the United States, which proved extremely popular and was even cited in the New York Times as evidence for growing social division.
As he explains in his post, the methodology behind his underlying network graph is perfectly defensible, but the subsequent clustering process was “produced by me squinting at all the lines, coloring in some areas that seemed more connected in a paint program, and picking silly names for the areas”. The exercise was only ever intended as a bit of fun with a large and interesting dataset, so there really shouldn’t be any problem here.
But there is: humans are visual creatures. Peer-reviewed studies have shown that we can consume information more quickly when it is expressed in diagrams than when it is presented as text.
Even something as simple as colour scheme can have a marked impact on the perceived credibility of information presented visually – often a considerably more marked impact than the actual authority of the data source.
Another great example of this phenomenon was the Washington Post’s ‘map of the world’s most and least racially tolerant countries’, which went viral back in May of this year. It was widely accepted as an objective, scientific piece of work, despite a number of social scientists identifying flaws in the methodology and the underlying data itself.”
The Role of Digital Media in Participatory Politics
Interview with Joseph Kahne, chair of the MacArthur Foundation’s Research Network on Youth and Participatory Politics: “What we found was that many games provided civic learning opportunities, such as opportunities to take on the role of a leader—the president, for example—or opportunities to help others. There also were simulations where players had opportunities to work on a societal issue and to learn about institutional processes—how a legislature works, for example. And we found that when games provided those kinds of civic learning opportunities, playing them was associated with much higher commitments to civic engagement. We think some of the relationship was due to youth with civic interests choosing to play those games, and that some of the relationship was due to these games orienting youth towards the potential of civic activity….”
Why Contests Improve Philanthropy
New Report from the Knight Foundation: “Since 2007, Knight Foundation has run or funded nearly a dozen open contests, many over multiple years, choosing some 400 winners from almost 25,000 entries, and granting more than $75 million to individuals, businesses, schools and nonprofits. The winners believe, as we do, that democracy thrives when people and communities are informed and engaged. The contests reflect the full diversity of our program areas: journalism and media innovation, engaging communities and fostering the arts. Over the past seven years, we have learned a lot about how good contests work, what they can do, and what the challenges are. Though contests represent less than 20 percent of our grant-making, they have improved our traditional programs in myriad ways.
A 2009 McKinsey & Company report, “And the winner is…,” put it this way: “Every leading philanthropist should consider the opportunity to use prizes to help achieve their mission, and to accept the challenge of fully exploiting this powerful tool.” But of America’s more than 76,000 grant-making foundations, only a handful, maybe 100 at most, have embraced the use of contests. That means 99.9 percent do not.
Sharing these lessons here is an invitation to others to consider how contests, when appropriate, might widen their networks, deepen the work they already do, and broaden their definition of philanthropic giving.
Before you launch and manage your own contests, you might want to consider the six major lessons we’ve learned about how contests improved our philanthropy.
1. They bring in new blood and new ideas.
2. They create value beyond the winners.
3. They help organizations spot emerging trends.
4. They challenge routines and entrenched foundation behaviors.
5. They complement existing philanthropy strategies.
6. They create new ways to engage communities.
…Depending upon the competition, the odds of winning one of Knight’s contests range from one in six at best to worse than one in 100. But if you think of your contest only as a funnel spitting out a handful of winning ideas, you overlook what’s really happening. A good contest is less a funnel than a megaphone for a cause.”
Data Science for Social Good
Data Science for Social Good: “By analyzing data from police reports to website clicks to sensor signals, governments are starting to spot problems in real-time and design programs to maximize impact. More nonprofits are measuring whether or not they’re helping people, and experimenting to find interventions that work.
None of this is inevitable, however.
We’re just realizing the potential of using data for social impact and face several hurdles to its widespread adoption:
- Most governments and nonprofits simply don’t know what’s possible yet. They have data – but often not enough and maybe not the right kind.
- There are too few data scientists out there – and too many spending their days optimizing ads instead of bettering lives.
To make an impact, we need to show social good organizations the power of data and analytics. We need to work on analytics projects that have high social impact. And we need to expose data scientists to the problems that really matter.
The fellowship
That’s exactly why we’re doing the Eric and Wendy Schmidt Data Science for Social Good summer fellowship at the University of Chicago.
We want to bring three dozen aspiring data scientists to Chicago, and have them work on data science projects with social impact.
Working closely with governments and nonprofits, fellows will take on real-world problems in education, health, energy, transportation, and more.
Over the next three months, they’ll apply their coding, machine learning, and quantitative skills, collaborate in a fast-paced atmosphere, and learn from mentors in industry, academia, and the Obama campaign.
The program is led by a strong interdisciplinary team from the Computation Institute and the Harris School of Public Policy at the University of Chicago.”
Metadata Liberation Movement
Holman Jenkins in the Wall Street Journal: “The biggest problem, then, with metadata surveillance may simply be that the wrong agencies are in charge of it. This matters in part because the potential of metadata surveillance might actually be quite large but is being squandered by secret agencies whose narrow interest is looking only for terrorists….
“Big data” is only as good as the algorithms used to find out things worth finding out. The efficacy and refinement of big-data techniques are advanced by repetition, by giving more chances to find something worth knowing. Bringing metadata out of its black box wouldn’t only be a way to improve public trust in what government is doing. It would be a way to get more real value for society out of techniques that are being squandered on a fairly minor threat.
Bringing metadata out of the black box would open up new worlds of possibility—from anticipating traffic jams to locating missing persons after a disaster. It would also create an opportunity to make big data more consistent with the constitutional prohibition of unwarranted search and seizure. In the first instance, with the computer withholding identifying details of the individuals involved, any red flag could be examined by a law-enforcement officer to see, based on accumulated experience, whether the indication is of interest.
If so, a warrant could be obtained to expose the identities involved. If not, the record could immediately be expunged. All this could take place in a reasonably aboveboard, legal fashion, open to inspection in court when and if charges are brought or—this would be a good idea—a court is informed of investigations that led to no action.
Our guess is that big data techniques would pop up way too many false positives at first, and only considerable learning and practice would allow such techniques to become a useful tool. At the same time, bringing metadata surveillance out of the shadows would help the Googles, Verizons and Facebooks defend themselves from a wholly unwarranted suspicion that user privacy is somehow better protected by French or British or (heavens) Chinese companies from their own governments than U.S. data is from the U.S. government.
Most of all, it would allow these techniques to be put to work on solving problems that are actual problems for most Americans, which terrorism isn’t.”
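The sealed-identity scheme Jenkins describes, where analysts see patterns but a separate authority holds the mapping back to real people, is essentially keyed pseudonymization. Below is a minimal sketch of that idea in Python; the escrow arrangement, key, and phone numbers are hypothetical illustrations, not anything specified in the article:

```python
import hmac
import hashlib

# Hypothetical escrowed key: in the scheme described above this would be
# held by a court or oversight body, never by the analysts themselves.
ESCROW_KEY = b"held-by-the-court"

def pseudonym(identifier: str) -> str:
    """Stable keyed pseudonym: the same identifier always yields the same
    token, so calling patterns survive analysis, but without the escrow
    key a token cannot be reversed to a real identity."""
    return hmac.new(ESCROW_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

# Analysts would see only pseudonymized call pairs like these.
calls = [("202-555-0101", "202-555-0199"),
         ("202-555-0101", "202-555-0142")]
for caller, callee in calls:
    print(pseudonym(caller), "->", pseudonym(callee))
```

In this sketch, exposing an identity behind a red-flagged token would require the key holder (under a warrant, in Jenkins’s telling), and expunging a record would simply mean deleting the pseudonymized rows.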
Feedback Labs
Feedback Labs: “If you find yourself asking the following three questions, then you have come to the right place:
- “What do citizens want?”
- “Are they getting it?”
- “If not, how will things change?”
Much excellent work has been done over recent years to answer the first and second questions. Our goal is to catalyze that work and make it matter by focusing on the third question – “How will things change?”
Aid, philanthropy, and government programs are often designed, implemented and evaluated by experts. We think that citizens should increasingly be in the driver’s seat. Experts are still important, but in many cases their role needs to shift from decision-maker to someone who enriches and informs conversations among citizens.
What will Feedback Labs do?
Based on what we have heard so far, we think we can add value in three ways:
- Frame the issues – for example, what exactly do we mean by feedback loops? What works and what doesn’t? What is the evidence for impact?
- Help close the feedback loop – uncover approaches that are succeeding at finding out what people want and whether they are getting it, and then help close the loop by understanding (and in some cases funding) what it takes to translate citizen voice into real changes in programs.
- Facilitate mainstreaming – i.e., help aid, philanthropy and government organizations adopt feedback loops in their normal course of operation. We want to make feedback loops the norm rather than the exception.
Historically we have often assumed that the flow of knowledge is from the richer countries to the poorer. But learning goes both ways, and in the case of feedback loops, some of the most innovative approaches are being pioneered in developing countries. So we plan to support work both internationally and domestically.”
9 models to scale open data – past, present and future
Ones that are working now
1) Form a community to enter in new data. OpenStreetMap and MusicBrainz are two big examples. It works because the community is the originator of the data. That said, neither has dominated its industry as much as I thought it would have by now.
2) Sell tools to an upstream generator of open data. This is what CKAN does for central governments (and what the new ScraperWiki CKAN tool helps with). It’s what mySociety does when it sells FixMyStreet installs to local councils, thereby publishing their potholes as RSS feeds.
3) Use open data (quietly). Every organisation does this and never talks about it. It’s key to long-established data resellers like Bloomberg. It is what most of ScraperWiki’s professional services customers ask us to do. The value to society is enormous and invisible. The big flaw is that it doesn’t help scale the supply of open data.
4) Sell tools to downstream users. This isn’t necessarily open data specific – existing software like spreadsheets and Business Intelligence can be used with open or closed data. Lots of open data is on the web, so tools like the new ScraperWiki which work well with web data are particularly suited to it.
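Models 3 and 4 are easy to picture in practice: most web-published open data is one HTTP fetch away from a spreadsheet or script. Here is a minimal sketch in Python, with a placeholder URL and an assumed column name standing in for any real portal export:

```python
import pandas as pd

# Placeholder URL: substitute a CSV export from any open data portal
# (a CKAN instance, data.gov.uk, a council's pothole feed, etc.).
URL = "https://example.org/open-data/potholes.csv"

# pandas reads CSV straight from a URL, which is how a lot of quiet,
# downstream reuse of open data actually happens.
reports = pd.read_csv(URL)

# A typical downstream question, assuming the export has a
# 'neighbourhood' column: where are reports concentrated?
print(reports["neighbourhood"].value_counts().head(10))
```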
Ones that haven’t worked
5) Collaborative curation. ScraperWiki started as an audacious attempt to create an open data curation community, based on editing scraping code in a wiki. In its original form (now called ScraperWiki Classic) this didn’t scale. …With a few exceptions, notably OpenCorporates, there aren’t yet many open data curation projects.
6) General-purpose data marketplaces, particularly ones that are mainly reusing open data, haven’t taken off. They might one day, but I think they need well-adopted higher-level standards for data formatting and syncing first (perhaps something like dat, perhaps something based on CSV files).
Ones I expect more of in the future
These are quite exciting models which I expect to see a lot more of.
7) Give labour/money to upstream to help them create better data. This is quite new. The only, and most excellent, example of it is the UK’s National Archives curating the Statute Law Database. They do the work with the help of staff seconded from commercial legal publishers and other parts of Government.
It’s clever because it channels money to the upstream source, which people trust the most, and which is best placed to improve data quality.
8) Viral open data licensing. MySQL made lots of money this way, selling proprietary dual licenses for its GPL’d software to embedded-systems makers. In data this could use OKFN’s Open Database License, and organisations would pay when they wanted to mix the open data with their own closed data. I don’t know anyone actively using it, although Chris Taggart from OpenCorporates mentioned this model to me years ago.
9) Corporations release data for strategic advantage. Companies are starting to release their own data for strategic gain. This is very new. Expect more of it.”
What Happens When Everyone Makes Maps?
Laura Mallonee in the Atlantic: “On a spring Sunday in a Soho penthouse, ten people have gathered for a digital mapping “Edit-A-Thon.” Potted plants grow to the ceiling and soft cork carpets the floor. At a long wooden table, an energetic woman named Liz Barry is showing me how to map my neighborhood. “This is what you’ll see when you look at OpenStreetMap,” she says.
Though visually similar to Google’s, the map on the screen gives users unfettered access to its underlying data — anyone can edit it. Barry lives in Williamsburg, and she’s added many of the neighborhood’s boutiques and restaurants herself. “Sometimes when I’m tired at the end of the day and can’t work anymore, I just edit OpenStreetMap,” she says. “Kind of a weird habit.” Barry then shows me the map’s “guts.” I naively assume it will be something technical and daunting, but it’s just an editable version of the same map, with tools that let you draw roads, identify landmarks, and even label your own house.”
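Barry’s point about the map’s editable “guts” extends to programmatic access: anyone can pull OpenStreetMap’s raw data, for instance through the public Overpass API. A minimal sketch follows; the bounding box only roughly covers Williamsburg, while the amenity tag and endpoint are standard OSM conventions:

```python
import requests

# Overpass QL: fetch cafe nodes inside a bounding box given as
# (south, west, north, east), roughly Williamsburg, Brooklyn.
query = """
[out:json][timeout:25];
node["amenity"="cafe"](40.70,-73.97,40.73,-73.93);
out body;
"""

resp = requests.get("https://overpass-api.de/api/interpreter",
                    params={"data": query})
resp.raise_for_status()

# Each element carries its coordinates plus free-form community tags.
for node in resp.json()["elements"]:
    name = node.get("tags", {}).get("name", "(unnamed)")
    print(name, node["lat"], node["lon"])
```

Every name this prints is something a contributor like Barry typed in by hand.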