Wikipedia Recent Changes Map



The Verge: “By watching a new visualization, known plainly as the Wikipedia Recent Changes Map, viewers can see the location of every unregistered Wikipedia user who makes a change to the open encyclopedia. It provides a voyeuristic look at the rate that knowledge is contributed to the website, giving you the faintest impression of the Spaniard interested in the television show Jackass or the Brazilian who defaced the page on the Jersey Devil to feature a photograph of the new pope. Though the visualization moves quickly, it’s only displaying about one-fifth of the edits being made: Wikipedia doesn’t reveal location data for registered users, and unregistered users make up just 15 to 20 percent of all contribution, according to studies of the website.”
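The mechanics are simple to reproduce: Wikipedia publishes a public feed of every edit, and edits by unregistered users carry a bare IP address as the username, which a GeoIP lookup can place on a map. Below is a minimal sketch of that pipeline in Python, using Wikimedia’s EventStreams endpoint; whether this matches the map’s own data source is an assumption, and the GeoIP step is left as a comment.

```python
# Sketch: watch Wikipedia's recent-changes stream and pick out anonymous edits,
# whose usernames are bare IP addresses that a GeoIP lookup could place on a map.
# Assumes the public Wikimedia EventStreams endpoint; the real map's data source
# and its geolocation step are not shown here.
import ipaddress
import json

import requests

STREAM_URL = "https://stream.wikimedia.org/v2/stream/recentchange"

def is_ip(username: str) -> bool:
    """Anonymous editors are identified by their IP address."""
    try:
        ipaddress.ip_address(username)
        return True
    except ValueError:
        return False

def watch_anonymous_edits():
    # EventStreams uses Server-Sent Events: lines prefixed with "data: " carry JSON.
    with requests.get(STREAM_URL, stream=True, timeout=60) as resp:
        for raw in resp.iter_lines(decode_unicode=True):
            if not raw or not raw.startswith("data: "):
                continue
            try:
                change = json.loads(raw[len("data: "):])
            except json.JSONDecodeError:
                continue  # skip keep-alives and partial lines
            user = change.get("user", "")
            if change.get("type") == "edit" and is_ip(user):
                # A GeoIP database (e.g. MaxMind's) would turn `user` into lat/lon here.
                print(f'{user} edited "{change.get("title")}" on {change.get("wiki")}')

if __name__ == "__main__":
    watch_anonymous_edits()
```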

Social networks as evolutionary game theory


In the Financial Times: “FT Alphaville has been taking a closer look at the collaborative economy, and noting the stellar growth this mysterious sector has been experiencing of late.
An important question to consider, however, is to what degree is this growth being driven by a genuine rise in reciprocity and altruism in the economy — or to what degree is this just the result of natural opportunism…
Which begs the question why should anyone put a free good out there for the taking anyway? And why is it that in most collaborative models there are very few examples of people abusing the system?
With respects to the free issue, internet pioneer Jaron Lanier believes this is because there isn’t really any such thing as free at all. What appears free is usually a veiled reciprocity or exploitation in disguise….
Lanier controversially believes users should be paid for that contribution. But in doing so we would argue that he forgets that the relationship Facebook has with its users is in fact much more reciprocal than exploitative. Users get a free platform, Facebook gets their data.
What’s more, as the BBC’s tech expert Bill Thompson has commented before, user content doesn’t really have much value on its own. It is only when that data is pooled together on a massive scale which allows the economies of scale to make sense. At least in a way that “the system” feels keen to reward. It is not independent data that has value, it is networked data that the system is demanding. Consequently, there is possibly some form of social benefit associated with contributing data to the platform, which is yet to be recognised….
A rise in collaboration, however, suggests there is more chance of personal survival if everyone collaborates together (and does not cheat the system). There is less incentive to cheat the system. In the current human economy context then, has collaboration ended up being the best pay-off for all?
And in that context has social media, big data and the rise of networked communities simply encouraged participants in the universal survival game of prisoner’s dilemma to take the option that’s best for all?
We obviously have no idea if that’s the case, but it seems a useful thought experiment for us all to run through.”
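The “best pay-off for all” framing is the textbook iterated prisoner’s dilemma: a single round rewards defection, but repeated rounds with reciprocity make mutual cooperation the higher-scoring outcome. A small self-contained simulation of that standard result (the payoff numbers are the conventional 5/3/1/0 values, not anything taken from the FT piece):

```python
# Sketch: why repetition can make cooperation pay, using textbook
# prisoner's dilemma payoffs (5 temptation, 3 reward, 1 punishment, 0 sucker).
PAYOFF = {  # (my move, their move) -> my payoff; "C" = cooperate, "D" = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def always_defect(opponent_history):
    return "D"

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return opponent_history[-1] if opponent_history else "C"

def play(strategy_a, strategy_b, rounds=200):
    score_a = score_b = 0
    seen_by_a, seen_by_b = [], []  # the moves each player has seen from the other
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

if __name__ == "__main__":
    # Two reciprocators earn 3 points per round; two defectors earn only 1,
    # which is the intuition behind collaboration as the best pay-off for all.
    print("tit-for-tat vs tit-for-tat:", play(tit_for_tat, tit_for_tat))
    print("defect vs defect:          ", play(always_defect, always_defect))
    print("tit-for-tat vs defect:     ", play(tit_for_tat, always_defect))
```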
 

OpenData Latinoamérica


Mariano Blejman and Miguel Paz @ IJNet Blog: “We need a central repository where you can share the data that you have proved to be reliable. Our answer to this need: OpenData Latinoamérica, which we are leading as ICFJ Knight International Journalism Fellows.
Inspired by the open data portal created by ICFJ Knight International Journalism Fellow Justin Arenstein in Africa, OpenData Latinoamérica aims to improve the use of data in this region where data sets too often fail to show up where they should, and when they do, are scattered about the web at governmental repositories and multiple independent repositories where the data is removed too quickly.

The portal will be used at two big upcoming events: Bolivia’s first DataBootCamp and the Conferencia de Datos Abiertos (Open Data Conference) in Montevideo, Uruguay. Then, we’ll hold a series of hackathons and scrape-athons in Chile, which is in a period of presidential elections in which citizens increasingly demand greater transparency. Releasing data and developing applications for accountability will be the key.”

Global Internet Policy Observatory (GIPO)


European Commission Press Release: “The Commission today unveiled plans for the Global Internet Policy Observatory (GIPO), an online platform to improve knowledge of and participation of all stakeholders across the world in debates and decisions on Internet policies. GIPO will be developed by the Commission and a core alliance of countries and Non Governmental Organisations involved in Internet governance. Brazil, the African Union, Switzerland, the Association for Progressive Communication, Diplo Foundation and the Internet Society have agreed to cooperate or have expressed their interest to be involved in the project.
The Global Internet Policy Observatory will act as a clearinghouse for monitoring Internet policy, regulatory and technological developments across the world.
It will:

  • automatically monitor Internet-related policy developments at the global level, making full use of “big data” technologies;
  • identify links between different fora and discussions, with the objective to overcome “policy silos”;
  • help contextualise information, for example by collecting existing academic information on a specific topic, highlighting the historical and current position of the main actors on a particular issue, identifying the interests of different actors in various policy fields;
  • identify policy trends, via quantitative and qualitative methods such as semantic and sentiment analysis;
  • provide easy-to-use briefings and reports by incorporating modern visualisation techniques;”
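Of the functions listed, “identify policy trends, via … semantic and sentiment analysis” is the most concrete. The toy sketch below illustrates only the general idea, scoring hypothetical headlines with a tiny hand-made lexicon and averaging by topic; GIPO’s actual methods, data, and vocabulary are not described in the release, and everything here is invented for illustration.

```python
# Toy illustration of "sentiment analysis to identify policy trends":
# score hypothetical headlines with a tiny lexicon and average by topic.
# Real systems would use trained models and proper topic detection.
from collections import defaultdict

LEXICON = {"supports": 1, "expands": 1, "protects": 1,
           "restricts": -1, "blocks": -1, "censors": -1}

# Hypothetical (topic, headline) pairs standing in for monitored policy items.
HEADLINES = [
    ("net neutrality", "Regulator supports open access rules"),
    ("net neutrality", "Commission protects end-user choice"),
    ("surveillance", "Ministry restricts encryption tools"),
    ("surveillance", "Court blocks appeal against data retention"),
]

def sentiment(text: str) -> int:
    return sum(LEXICON.get(word.lower(), 0) for word in text.split())

def trend_by_topic(items):
    totals, counts = defaultdict(int), defaultdict(int)
    for topic, headline in items:
        totals[topic] += sentiment(headline)
        counts[topic] += 1
    return {topic: totals[topic] / counts[topic] for topic in totals}

if __name__ == "__main__":
    print(trend_by_topic(HEADLINES))  # {'net neutrality': 1.0, 'surveillance': -1.0}
```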

What the Obama Campaign's Chief Data Scientist Is Up to Now


Alexis Madrigal in The Atlantic: “By all accounts, Rayid Ghani’s data work for President Obama’s reelection campaign was brilliant and unprecedented. Ghani probably could have written a ticket to work at any company in the world, or simply collected speaking fees for a few years telling companies how to harness the power of data like the campaign did.
But instead, Ghani headed to the University of Chicago to bring sophisticated data analysis to difficult social problems. Working with the Computation Institute and the Harris School of Public Policy, Ghani will serve as the chief data scientist for the Urban Center for Computation and Data.”

Challenge: Visualizing Online Takedown Requests


visualizing.org: “The free flow of information defines the Internet. Innovations like Wikipedia and crowdsourcing owe their existence to and are powered by the resulting streams of knowledge and ideas. Indeed, more information means more choice, more freedom, and ultimately more power for the individual and society. But — citing reasons like defamation, national security, and copyright infringement — governments, corporations, and other organizations at times may regulate and restrict information online. By blocking or filtering sites, issuing court orders limiting access to information, enacting legislation or pressuring technology and communication companies, governments and other organizations aim to censor one of the most important means of free expression in the world. What does this mean and to what extent should attempts to censor online content be permitted?…
We challenge you to visualize the removal requests in Google’s Transparency Report. What in this data should be communicated to the general public? Are there any trends or patterns in types of requests that have been complied with? Have legal and policy environments shaped what information is available and/or restricted in different countries? The data set on government requests (~1 thousand rows) provides summaries broken down by country, Google product, and reason. The data set on copyright requests, however, is much larger (~1 million rows) and includes each individual request. Use one or both data sets, by themselves or with other open data sets. We’re excited to partner with Google for this challenge, and we’re offering $5,000 in prizes.”


Deadline: Thursday, June 27, 2013, 11:59 pm EDT
Winner Announced: Thursday, July 11, 2013

More at http://visualizing.org/contests/visualizing-online-takedown-requests
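As a starting point for an entry, the smaller government-requests data set can be explored with a few lines of pandas. The file name and the “Country” column below are assumptions about the CSV layout rather than documentation of Google’s export, so check the headers of the downloaded file first.

```python
# Starting-point sketch for the challenge: total removal requests per country
# from the Transparency Report's government-requests export. The file name and
# the "Country" column are assumptions; verify against the actual CSV headers.
import pandas as pd
import matplotlib.pyplot as plt

requests = pd.read_csv("google-government-removal-requests.csv")

# Sum whatever numeric request-count columns the export provides, per country.
counts = (requests
          .groupby("Country")
          .sum(numeric_only=True)
          .sum(axis=1)
          .sort_values(ascending=False))

counts.head(20).plot(kind="barh", figsize=(8, 6),
                     title="Top 20 countries by removal requests (assumed columns)")
plt.tight_layout()
plt.show()
```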

New Open Data Executive Order and Policy


The White House: “The Obama Administration today took groundbreaking new steps to make information generated and stored by the Federal Government more open and accessible to innovators and the public, to fuel entrepreneurship and economic growth while increasing government transparency and efficiency.
Today’s actions—including an Executive Order signed by the President and an Open Data Policy released by the Office of Management and Budget and the Office of Science and Technology Policy—declare that information is a valuable national asset whose value is multiplied when it is made easily accessible to the public.  The Executive Order requires that, going forward, data generated by the government be made available in open, machine-readable formats, while appropriately safeguarding privacy, confidentiality, and security.
The move will make troves of previously inaccessible or unmanageable data easily available to entrepreneurs, researchers, and others who can use those files to generate new products and services, build businesses, and create jobs….
Along with the Executive Order and Open Data Policy, the Administration announced a series of complementary actions:
• A new Data.Gov.  In the months ahead, Data.gov, the powerful central hub for open government data, will launch new services that include improved visualization, mapping tools, better context to help locate and understand these data, and robust Application Programming Interface (API) access for developers.
• New open source tools to make data more open and accessible.  The US Chief Information Officer and the US Chief Technology Officer are releasing free, open source tools on Github, a site that allows communities of developers to collaboratively develop solutions.  This effort, known as Project Open Data, can accelerate the adoption of open data practices by providing plug-and-play tools and best practices to help agencies improve the management and release of open data.  For example, one tool released today automatically converts simple spreadsheets and databases into APIs for easier consumption by developers.  Anyone, from government agencies to private citizens to local governments and for-profit companies, can freely use and adapt these tools starting immediately.
• Building a 21st century digital government.  As part of the Administration’s Digital Government Strategy and Open Data Initiatives in health, energy, education, public safety, finance, and global development, agencies have been working to unlock data from the vaults of government, while continuing to protect privacy and national security.  Newly available or improved data sets from these initiatives will be released today and over the coming weeks as part of the one year anniversary of the Digital Government Strategy.
• Continued engagement with entrepreneurs and innovators to leverage government data.  The Administration has convened and will continue to bring together companies, organizations, and civil society for a variety of summits to highlight how these innovators use open data to positively impact the public and address important national challenges.  In June, Federal agencies will participate in the fourth annual Health Datapalooza, hosted by the nonprofit Health Data Consortium, which will bring together more than 1,800 entrepreneurs, innovators, clinicians, patient advocates, and policymakers for information sessions, presentations, and “code-a-thons” focused on how the power of data can be harnessed to help save lives and improve healthcare for all Americans.
For more information on open data highlights across government visit: http://www.whitehouse.gov/administration/eop/ostp/library/docsreports”
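One of the Project Open Data bullets above mentions a tool that converts simple spreadsheets into APIs. The sketch below shows only the general shape of that idea, serving the rows of a CSV as JSON over HTTP; it is not the released tool, and the file name and route are invented for illustration.

```python
# Illustration of the "spreadsheet -> API" idea: load a CSV and serve its rows
# as JSON over HTTP. A toy sketch, not the Project Open Data tool itself;
# "data.csv" and the /rows route are made up for the example.
import csv
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

with open("data.csv", newline="") as f:
    ROWS = list(csv.DictReader(f))  # one dict per spreadsheet row

class CsvApi(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/rows":
            body = json.dumps(ROWS).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), CsvApi).serve_forever()
```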

Open government data shines a light on hospital billing and health care costs


Alex Howard: “If transparency is the best disinfectant, casting sunlight upon the cost of care in hospitals across the United States will make the health care system itself healthier.
The Department of Health and Human Services has released open data that compares the billing for the 100 most common treatments and procedures performed at more than 3,000 hospitals in the U.S. The Medicare provider charge data shows significant variation within communities and across the country for the same procedures.
One hospital charged $8,000, another $38,000 — for the same condition. This data is enabling newspapers like the Washington Post to show people the actual costs of health care and create interactive features that enable people to search for individual hospitals and see how they compare. The New York Times explored the potential reasons behind wild disparities in billing at length today, from sicker patients to longer hospitalizations to higher labor costs.”
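The variation Howard describes is straightforward to surface once the HHS file is loaded: group the average charges by procedure and compare the spread across hospitals. The column names below (“DRG Definition”, “Average Covered Charges”) are assumptions about the Medicare charge file’s layout, so verify them against the actual download.

```python
# Sketch: quantify the billing variation in the Medicare provider charge data
# by comparing the cheapest and most expensive average charge for each procedure.
# Column names ("DRG Definition", "Average Covered Charges") are assumptions
# about the HHS CSV layout; verify against the downloaded file.
import pandas as pd

charges = pd.read_csv("medicare-provider-charges.csv")

spread = (charges
          .groupby("DRG Definition")["Average Covered Charges"]
          .agg(["min", "max", "median"]))
spread["max_to_min_ratio"] = spread["max"] / spread["min"]

# Procedures with the widest gap between hospitals, echoing the article's
# "$8,000 at one hospital, $38,000 at another" example.
print(spread.sort_values("max_to_min_ratio", ascending=False).head(10))
```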

The Uncertain Relationship Between Open Data and Accountability


Tiago Peixoto’s response to Yu and Robinson’s paper on “The New Ambiguity of ‘Open Government’”: “By looking at the nature of data that may be disclosed by governments, Harlan Yu and David Robinson provide an analytical framework that evinces the ambiguities underlying the term “open government data.” While agreeing with their core analysis, I contend that the authors ignore the enabling conditions under which transparency may lead to accountability, notably the publicity and political agency conditions. I argue that the authors also overlook the role of participatory mechanisms as an essential element in unlocking the potential for open data to produce better government decisions and policies. Finally, I conduct an empirical analysis of the publicity and political agency conditions in countries that have launched open data efforts, highlighting the challenges associated with open data as a path to accountability.”
 

The Commodification of Patient Opinion: the Digital Patient Experience Economy in the Age of Big Data


Paper by Deborah Lupton, from the University of Sydney’s Department of Sociology and Social Policy. Abstract: “As part of the digital health phenomenon, a plethora of interactive digital platforms have been established in recent years to elicit lay people’s experiences of illness and healthcare. The function of these platforms, as expressed on the main pages of their websites, is to provide the tools and forums whereby patients and caregivers, and in some cases medical practitioners, can share their experiences with others, benefit from the support and knowledge of other contributors and contribute to large aggregated data archives as part of developing better medical treatments and services and conducting medical research.
However what may not always be readily apparent to the users of these platforms are the growing commercial uses by many of the platforms’ owners of the archives of the data they contribute. This article examines this phenomenon of what I term ‘the digital patient experience economy’. In so doing I discuss such aspects as prosumption, the phenomena of big data and metric assemblages, the discourse and ethic of sharing and the commercialisation of affective labour via such platforms. I argue that via these online platforms patients’ opinions and experiences may be expressed in more diverse and accessible forums than ever before, but simultaneously they have become exploited in novel ways.”