Information Now: Open Access and the Public Good


Podcast from SMARTech (Georgia Tech): “Every year, the international academic and research community dedicates a week in October to discuss, debate, and learn more about Open Access. Open Access in the academic sense refers to the free, immediate, and online access to the results of scholarly research, primarily academic, peer-reviewed journal articles. In the United States, the movement in support of Open Access has, in the last decade, been growing dramatically. Because of this growing interest in Open Access, a group of academic librarians from the Georgia Tech library, Wendy Hagenmaier (Digital Collections Archivist), Fred Rascoe (Scholarly Communication Librarian), and Lizzy Rolando (Research Data Librarian), got together to talk to folks in the thick of it, to try and unravel some of the different concerns and benefits of Open Access. But we didn’t just want to talk about Open Access for journal articles – we wanted to examine more broadly what it means to be “open”, what open information is, and what relationship open information has to the public good. In this podcast, we talk with different people who have seen and experienced open information and open access in practice. In the first act, Dan Cohen from the DPLA speaks about efforts to expand public access to archival and library collections. In the second, we’ll hear an argument from Christine George about why things sometimes need to be closed, if we want them to be open in the future. Third, Kari Watkins speaks about a specific example of when a government agency decided, against legitimate concerns, to make transit data open, and why it worked for them. Fourth, Peter Suber from Harvard University will give us the background on the Open Access movement, some myths that have been dispelled, and why it is important for academic researchers to take the leap to make their research openly accessible. And finally, we’ll hear from Michael Chang, a researcher who did take that leap and helped start an Open Access journal, and why he sees openness in research as his obligation.”

See also Personal Guide to Open Access

Are We Puppets in a Wired World?


Sue Halpern in The New York Review of Books: “Also not obvious was how the Web would evolve, though its open architecture virtually assured that it would. The original Web, the Web of static homepages, documents laden with “hot links,” and electronic storefronts, segued into Web 2.0, which, by providing the means for people without technical knowledge to easily share information, recast the Internet as a global social forum with sites like Facebook, Twitter, Foursquare, and Instagram.
Once that happened, people began to make aspects of their private lives public, letting others know, for example, when they were shopping at H&M and dining at Olive Garden, letting others know what they thought of the selection at that particular branch of H&M and the waitstaff at that Olive Garden, then modeling their new jeans for all to see and sharing pictures of their antipasti and lobster ravioli—to say nothing of sharing pictures of their girlfriends, babies, and drunken classmates, or chronicling life as a high-paid escort, or worrying about skin lesions or seeking a cure for insomnia or rating professors, and on and on.
The social Web celebrated, rewarded, routinized, and normalized this kind of living out loud, all the while anesthetizing many of its participants. Although they likely knew that these disclosures were funding the new information economy, they didn’t especially care…
The assumption that decisions made by machines that have assessed reams of real-world information are more accurate than those made by people, with their foibles and prejudices, may be correct generally and wrong in the particular; and for those unfortunate souls who might never commit another crime even if the algorithm says they will, there is little recourse. In any case, computers are not “neutral”; algorithms reflect the biases of their creators, which is to say that prediction cedes an awful lot of power to the algorithm creators, who are human after all. Some of the time, too, proprietary algorithms, like the ones used by Google and Twitter and Facebook, are intentionally biased to produce results that benefit the company, not the user, and some of the time algorithms can be gamed. (There is an entire industry devoted to “optimizing” Google searches, for example.)
But the real bias inherent in algorithms is that they are, by nature, reductive. They are intended to sift through complicated, seemingly discrete information and make some sort of sense of it, which is the definition of reductive.”

The End of Hypocrisy


New article by Henry Farrell and Martha Finnemore in Foreign Affairs: “The U.S. government seems outraged that people are leaking classified materials about its less attractive behavior. It certainly acts that way: three years ago, after Chelsea Manning, an army private then known as Bradley Manning, turned over hundreds of thousands of classified cables to the anti-secrecy group WikiLeaks, U.S. authorities imprisoned the soldier under conditions that the UN special rapporteur on torture deemed cruel and inhumane. The Senate’s top Republican, Mitch McConnell, appearing on Meet the Press shortly thereafter, called WikiLeaks’ founder, Julian Assange, “a high-tech terrorist.””
More recently, following the disclosures about U.S. spying programs by Edward Snowden, a former National Security Agency analyst, U.S. officials spent a great deal of diplomatic capital trying to convince other countries to deny Snowden refuge. And U.S. President Barack Obama canceled a long-anticipated summit with Russian President Vladimir Putin when Putin refused to comply.
Despite such efforts, however, the U.S. establishment has often struggled to explain exactly why these leakers pose such an enormous threat. Indeed, nothing in the Manning and Snowden leaks should have shocked those who were paying attention…
The deeper threat that leakers such as Manning and Snowden pose is more subtle than a direct assault on U.S. national security: they undermine Washington’s ability to act hypocritically and get away with it. Their danger lies not in the new information that they reveal but in the documented confirmation they provide of what the United States is actually doing and why…”

IRM releases United States report for public comment


“The Open Government Partnership’s Independent Reporting Mechanism (IRM) has launched its eight progress reports for public comment; this one is on the United States and can be found below….
The United States’ action plan was highly varied and, in many respects, ambitious and innovative, and significant progress was made on most of the commitments. While OGP implementation in the United States drew inspiration from an unprecedented consultation on open government during the implementation of the 2009 Open Government Directive, the dedicated public consultation for the OGP action plan was more limited and arguably more targeted.
Several of the commitments in the action plan focused on improving transparency; however, open government progress has been slower in controversial areas such as national security, ethics reform, declassification of documents, and Freedom of Information Act reform.
The United States completed half of the commitments in its action plan, while the other half saw limited or substantial progress.
Due to the nature of the US government, wherein federal agencies are to some degree independent of the White House, much of the best participation took place within agencies. There were several notable examples of participation and collaboration at this level, including the commitments around the Extractive Industries Transparency Initiative, the National Dialogue on Federal Website Policy, and NASA’s Space Apps competition.
This report is a draft for public comment.  All interested parties are encouraged to comment on this blog or to send public comments to [email protected] until November 14. Comments will be collated and published, except where the requestor asks to be anonymous. Where substantive factual errors are identified, comments will be integrated into a final version of the report.”
 

United States IRM Report

Text messages are saving Swedes from cardiac arrest


Philip A. Stephenson in Quartz: “Sweden has found a faster way to treat people experiencing cardiac emergencies through a text message and a few thousand volunteers.

A program called SMSlivräddare (or SMSLifesaver; link in Swedish) solicits people who’ve been trained in cardiopulmonary resuscitation (CPR). When a Stockholm resident dials 112 for emergency services, a text message is sent to all volunteers within 500 meters of the person in need. The volunteer can then arrive at the location within the crucial first minutes to perform lifesaving CPR. The odds of surviving cardiac arrest drop 10% for every minute it takes first responders to arrive…
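
The dispatch step lends itself to a short sketch: find every registered volunteer within the alert radius of the caller’s position and message them. The Python below is a minimal illustration only; the registry layout, phone numbers, and function names are assumptions for the example, not SMSlivräddare’s actual implementation.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def volunteers_to_alert(registry, incident_lat, incident_lon, radius_m=500):
    """Return every registered volunteer within radius_m of the incident."""
    return [v for v in registry
            if haversine_m(v["lat"], v["lon"], incident_lat, incident_lon) <= radius_m]

# Hypothetical registry: volunteers' phone numbers and last-known positions.
registry = [
    {"phone": "+46700000001", "lat": 59.3293, "lon": 18.0686},  # ~30 m away
    {"phone": "+46700000002", "lat": 59.3600, "lon": 18.1000},  # several km away
]
for v in volunteers_to_alert(registry, 59.3295, 18.0690):
    print(f"SMS to {v['phone']}: suspected cardiac arrest nearby, please respond")
```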

With ambulance resources stretched thin, the average response time is some eight minutes, allowing SMSlivräddare volunteers to reach victims before ambulances in 54% of cases.

Through a combination of techniques, including SMSlivräddare, Stockholm County has seen survival rates after cardiac arrest rise from 3% to nearly 11% over the last decade. Local officials have also enlisted fire and police departments to respond to cardiac emergencies, but the Lifesavers routinely arrive before them as well.

Currently, 9,600 Stockholm residents are registered SMSlivräddare volunteers, and there are plans to increase enrollment further. An estimated 200,000 Swedes have completed the necessary CPR training and could potentially join the program….

Medical officials in other countries, including Scotland, are now considering similar community-based programs for cardiac arrest.”

The "crowd computing" revolution


Michael Copeland in The Atlantic: “Software might be eating the world, but Rob Miller, a professor of computer science at MIT, foresees a “crowd computing” revolution that makes workers and machines colleagues rather than competitors….
Miller studies human-computer interaction, specifically a field called crowd computing. A play on the more common term “cloud computing,” crowd computing is software that employs a group of people to do small tasks and solve a problem better than an algorithm or a single expert. Examples of crowd computing include Wikipedia, Amazon’s Mechanical Turk (where tasks that computers can’t do are outsourced to an online community of workers), and Facebook’s photo-tagging feature.
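
The aggregation pattern at the heart of many such systems is easy to sketch: give the same microtask to several people and accept an answer only when enough of them agree. Here is a minimal Python illustration; the function name and the 60% agreement threshold are choices made for the example, not taken from any particular platform.

```python
from collections import Counter

def crowd_answer(worker_answers, min_agreement=0.6):
    """Aggregate redundant answers to one microtask.

    Returns the majority answer when enough workers agree; returns None
    when they don't, signalling the task needs more workers or an expert.
    """
    top, votes = Counter(worker_answers).most_common(1)[0]
    return top if votes / len(worker_answers) >= min_agreement else None

# Three workers tag the same photo: two agree, so the crowd's label wins.
print(crowd_answer(["cat", "cat", "dog"]))  # -> cat
print(crowd_answer(["cat", "dog"]))         # -> None (no consensus at 60%)
```
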
But just as humans are better than computers at some things, Miller concedes that algorithms have surpassed human capability in several fields. Take a look at libraries, which now have advanced digital databases, eliminating the need for most human reference librarians. There’s also flight search, where algorithms are much better than people at finding the cheapest fare.
That said, more complicated tasks even in those fields can get tricky for a computer.
“For complex flight search, people are still better,” Miller says. A site called Flightfox lets travelers input a complex trip while a group of experts help find the cheapest or most convenient combination of flights. “There are travel agents and frequent flyers in that crowd, people with expertise at working angles of the airfare system that are not covered by the flight searches and may never be covered because they involve so many complex intersecting rules that are very hard to code.”
Social and cultural understanding is another area in which humans will always exceed computers, Miller says. People are constantly inventing new slang, watching the latest viral videos and movies, or partaking in some other cultural phenomenon together. That’s something that an algorithm won’t ever be able to catch up to. “There’s always going to be a frontier of human understanding that leads the machines,” he says.
A post-employee economy where every task is automated by a computer is something Miller does not see happening, nor does he want it to happen. Instead, he considers the relationship between human and machine symbiotic. Both machines and humans benefit in crowd computing: “the machine wants to acquire data so it can train and get better. The crowd is improved in many ways, like through pay or education,” Miller says. And finally, the end users “get the benefit of a more accurate and fast answer.”
Miller’s User Interface Design Group at MIT has made several programs illustrating how this symbiosis between user, crowd and machine works. Most recently, the MIT group created Cobi, a tool that taps into an academic community to plan a large-scale conference. The software allows members to identify papers they want presented and which authors are experts in specific fields. A scheduling tool combines the community’s input with an algorithm that finds the best times to meet.
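
The article doesn’t detail Cobi’s algorithm, but a toy greedy scheduler conveys the general idea of turning community interest into a clash-minimizing timetable. Everything below (session names, slots, and the data shape) is hypothetical:

```python
def schedule_sessions(sessions, slots, interest):
    """Greedily assign sessions to time slots, trying to keep each
    session's interested attendees free of clashes.

    interest maps a session to the set of attendee ids who want to see it.
    """
    assignment = {}
    busy = {slot: set() for slot in slots}  # attendees already booked per slot
    # Place the most-requested sessions first.
    for session in sorted(sessions, key=lambda s: -len(interest.get(s, set()))):
        wanted = interest.get(session, set())
        # Choose the slot where the fewest interested attendees are busy.
        best = min(slots, key=lambda slot: len(busy[slot] & wanted))
        assignment[session] = best
        busy[best] |= wanted
    return assignment

# Toy input: attendee 2 wants both A and B, so they land in different slots.
interest = {"A": {1, 2}, "B": {2, 3}, "C": {4}}
print(schedule_sessions(["A", "B", "C"], ["Mon 09:00", "Mon 11:00"], interest))
# -> {'A': 'Mon 09:00', 'B': 'Mon 11:00', 'C': 'Mon 09:00'}
```
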
Programs more practical for everyday users include Adrenaline, a camera driven by a crowd, and Soylent, a word processing tool that allows people to do interactive document shortening and proofreading. The Adrenaline camera took a video and then had a crowd on call to very quickly identify the best still in that video, whether it was the best group portrait, mid-air jump, or angle of somebody’s face. Soylent likewise used workers on Mechanical Turk to proofread and shorten text in Microsoft Word. In the process, Miller and his students found that the crowd caught errors that neither a single expert proofreader nor the program (with spell and grammar check turned on) could find.
“It shows this is the essential thing that human beings bring that algorithms do not,” Miller said.
That said, you can’t just use any crowd for any task. “It does depend on having appropriate expertise in the crowd. If [the text] had been about computational biology, they might not have caught [the error]. The crowd does have to have skills.”
Going forward, Miller thinks that software will increasingly use the power of the crowd. “In the next 10 or 20 years it will be more likely we already have a crowd,” he says. “There will already be these communities and they will have needs, some of which will be satisfied by software and some which will require human help and human attention. I think a lot of these algorithms and system techniques that are being developed by all these startups, who are experimenting with it in their own spaces, are going to be things that we’ll just naturally pick up and use as tools.”
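
For the curious, the Soylent team described its crowd proofreading as a staged pipeline called Find-Fix-Verify: some workers flag problem spans, others propose fixes, and a final group votes before a fix is applied. The sketch below is a toy rendering of that pattern with plain functions standing in for crowd workers; it illustrates the idea rather than reproducing Soylent’s code.

```python
from collections import Counter

def find_fix_verify(text, finders, fixers, verifiers):
    """Toy staged crowd pipeline in the spirit of Soylent's Find-Fix-Verify.

    finders flag problem spans (kept only if flagged independently twice),
    fixers propose a rewrite for each kept span, and verifiers vote on the
    winning rewrite before it is applied to the text.
    """
    flags = Counter(span for worker in finders for span in worker(text))
    for span in (s for s, n in flags.items() if n >= 2):
        candidates = [worker(span) for worker in fixers]
        best = max(candidates, key=lambda c: sum(v(span, c) for v in verifiers))
        if sum(v(span, best) for v in verifiers) > len(verifiers) / 2:
            text = text.replace(span, best)
    return text

# Stand-in "workers": plain functions simulating crowd responses.
finders = [lambda t: ["teh"], lambda t: ["teh"], lambda t: []]
fixers = [lambda s: "the", lambda s: "teh"]
verifiers = [lambda s, c: c == "the"] * 3
print(find_fix_verify("teh quick brown fox", finders, fixers, verifiers))
# -> "the quick brown fox"
```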

Residents remix their neighborhood’s streets through online platform


Springwise: “City residents may not have degrees in urban planning, but their everyday use of high streets, parks and main roads means they have some valuable input into what’s best for their local environment. A new website called Streetmix is helping to empower citizens, enabling them to become architects with an easy-to-use street-building platform.
Developed by Code for America, the site greets users with a colorful cartoon representation of a typical street, split into segments of varying widths. Designers can then swap each piece for roads, cycle paths, pedestrian areas, bus stops, bike racks and other amenities, as well as alter their dimensions. Users can create their own perfect high street or use the exact measurements of their own neighborhood to come up with new proposals for planned construction work. Indeed, Streetmix has already been used by residents and organizations to demonstrate how to better use the local space available. Kansas City’s Bike Walk KC has used the platform to show how new bike lanes could figure in an upcoming study of traffic flow in the region, while New Zealand’s Transport Blog has presented several alternatives to current street layouts in Auckland.
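
Conceptually, a street in such a tool reduces to an ordered list of typed segments whose combined width must fit the available right-of-way. Here is a minimal sketch of that data model; the class, segment types, and widths are assumptions for illustration, and Streetmix’s real format may differ.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    kind: str       # e.g. "sidewalk", "bike-lane", "drive-lane", "bus-stop"
    width_m: float  # segment width in metres

def fits(segments, right_of_way_m):
    """Check whether the chosen segments fit the street's total width."""
    total = sum(s.width_m for s in segments)
    return total <= right_of_way_m, total

# A hypothetical remix of a 20 m street: wide sidewalks plus bike lanes.
street = [
    Segment("sidewalk", 4.0),
    Segment("bike-lane", 2.0),
    Segment("drive-lane", 3.5),
    Segment("drive-lane", 3.5),
    Segment("bike-lane", 2.0),
    Segment("sidewalk", 4.0),
]
ok, total = fits(street, right_of_way_m=20.0)
print(f"{total:.1f} m of 20.0 m used; fits: {ok}")
```
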
Streetmix is an easy-to-use visualization tool that can help amateurs present their ideas to local authorities in a more coherent way, potentially increasing the chances of politicians hearing calls for change. Are there other ways to help laymen express complex ideas more eloquently?”
Spotted by Murtaza Patel, written by Springwise

A Data Revolution for Poverty Eradication


Report from devint.org: “The High Level Panel on the Post–2015 Development Agenda called for a data revolution for sustainable development, with a new international initiative to improve the quality of statistics and information available to citizens. It recommended actively taking advantage of new technology, crowdsourcing, and improved connectivity to empower people with information on the progress towards the targets. Development Initiatives believes there are a number of steps that should be put in place in order to deliver the ambition set out by the Panel.
The data revolution should be seen as a basis on which greater openness and a wider transparency revolution can be built. The openness movement – one of the most exciting and promising developments of the last decade – is starting to transform the citizen-state compact. Rich and developing country governments are adapting the way they do business, recognising that greater transparency and participation leads to more effective, efficient, and equitable management of scarce public resources. Increased openness of data has the potential to democratise access to information, empowering individuals with the knowledge they need to tackle the problems that they face. To realise this bold ambition, the revolution will need to reach beyond the niche data and statistical communities, sell the importance of the revolution to a wide range of actors (governments, donors, CSOs and the media) and leverage the potential of open data to deliver more usable information.”

7 Tactics for 21st-Century Cities


Abhi Nemani, co-director of Code for America: “Be it the burden placed on them by shrinking federal support, or the opportunity presented by modern technology, 21st-century cities are finding new ways to do things. For four years, Code for America has worked with dozens of cities, each finding creative ways to solve neighborhood problems, build local capacity and steward a national network. These aren’t one-offs. Cities are championing fundamental, institutional reforms to commit to an ongoing innovation agenda.
Here are a few of the ways:

  1. …Create an office of new urban mechanics or appoint a chief innovation officer…
  2. …Appoint a chief data officer or create an office of performance management/enhancement…
  3. …Adopt the Gov.UK Design Principles, and require plain, human language on every interface….
  4. …Share open source technology with a sister city or change procurement rules to make it easier to redeploy civic tech….
  5. …Work with the local civic tech community and engage citizens for their feedback on city policy through events, tech and existing forums…
  6. …Create an open data policy and adopt open data specifications (a sketch of a specification-style dataset entry follows this list)…
  7. …Attract tech talent into city leadership, and create training opportunities citywide to level up the tech literacy for city staff…”
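
To make the sixth tactic concrete, here is a hypothetical example of the kind of dataset entry an open data specification standardizes. The field names are loosely modeled on DCAT-style catalog schemas; they are illustrative, not drawn from any particular city’s policy.

```python
import json

# A hypothetical catalog entry of the kind an open data specification
# standardizes; field names are loosely modeled on DCAT-style schemas
# and are illustrative, not authoritative.
dataset = {
    "title": "Building Permits Issued",
    "description": "All building permits issued by the city since 2010.",
    "keyword": ["permits", "construction", "housing"],
    "modified": "2013-10-01",
    "publisher": {"name": "City Department of Buildings"},
    "accessLevel": "public",
    "distribution": [
        {"format": "CSV", "downloadURL": "https://data.example.gov/permits.csv"}
    ],
}
print(json.dumps(dataset, indent=2))
```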

From open data to open democracy


Article: “Such debates further underscore the complexities of open data and where it might lead. While open data may be viewed by some inside and outside government as a technically focused and largely incremental project based upon information formatting and accessibility (with the degree of openness subject to a myriad of security and confidentiality provisions), such an approach greatly limits its potential. Indeed, the growing ubiquity of mobile and smart devices, the advent of open source operating systems and social media platforms, and the growing commitment by governments themselves to expansive public engagement objectives, all suggest a widening scope.
Yet, what will incentivize the typical citizen to access open data and to partake in collective efforts to create public value? It is here where our digital culture may well fall short, emphasizing individualized service and convenience at the expense of civic responsibility and community-mindedness. For one American academic, this “citizenship deficit” erodes democratic legitimacy and renders our politics more polarized and less discursive. For other observers in Europe, notions of the digital divide are giving rise to new “data divides.”
The politics and practicalities of data privacy often bring further confusion. While privacy advocates call for greater protection and a culture of data activism among Internet users themselves, the networked ethos of online communities and commercialization fuels speed and sharing, often with little understanding of the ramifications of doing so. Differences between consumerism and citizenship are subtle yet profoundly important, while increasingly blurred and overlooked.
A key conundrum provincially and federally, within the Westminster confines of parliamentary democracy, is that open data is being hatched mainly from within the executive branch, whereas the legislative branch watches and withers. In devising genuine democratic openness, politicians and their parties must do more than post expenses online: they must become partners and advocates for renewal. A lesson of open source technology, however, is that systemic change demands an informed and engaged civil society, disgruntled with the status quo but also determined to act anew.
Most often, such actions are highly localized, even in a virtual world, giving rise to the purpose and meaning of smarter and more intelligent communities. And in Canada it bears noting that we see communities both large and small embracing open data and other forms of online experimentation such as participatory budgeting. It is often within small but connected communities where a virtuous cycle of online and in-person identities and actions can deepen and impact decision-making most directly.
How, then, do we reconcile traditional notions of top-down political federalism and national leadership with this bottom-up approach to community engagement and democratic renewal? Shifting from open data to open democracy is likely to be an uneven, diverse, and at times messy affair. Better this way than attempting to ordain top-down change in a centralized and standardized manner.”