Study to examine Australian businesses’ use of government data


ComputerWorld: “New York University’s GovLab and the federal Department of Communications have embarked on a study of how Australian organisations are employing government data sets.

The ‘Open Data 500’ study was launched today at the Locate15 conference. It aims to provide a basis for assessing the value of open data, encourage the development of new businesses built on it, and stimulate discussion about how to make government data more useful to businesses and not-for-profit organisations.

The study is part of a series of studies taking place under the auspices of the OD500 Global Network.

“This study will help ensure the focus of Government is on the publication of high value datasets, with an emphasis on quality rather than quantity,” a statement issued by the Department of Communications said.

“Open Data 500 advances the government’s policy of increasing the number of high value public datasets in Australia in an effort to drive productivity and innovation, as well as its commitment to greater consultation with private sector stakeholders on open data,” Communications Minister Malcolm Turnbull said in remarks prepared for the Locate15 conference….(More)”

New take on game theory offers clues on why we cooperate


Alexander J Stewart at The Conversation: “Why do people cooperate? This isn’t a question anyone seriously asks. The answer is obvious: we cooperate because doing so is usually synergistic. It creates more benefit for less cost and makes our lives easier and better.
Maybe it’s better to ask why people don’t always cooperate. But the answer here seems obvious too: we don’t cooperate if we think we can get away with it, saving ourselves the effort of working with someone else while still gaining the benefits of others’ cooperation. And, perhaps, we withhold cooperation as punishment for others’ past refusal to collaborate with us.
Since there are good reasons to cooperate – and good reasons not to do so – we are left with a question without an obvious answer: under what conditions will people cooperate?
Despite its seeming simplicity, this question is very complicated, from both a theoretical and an experimental point of view. The answer matters a great deal to anyone trying to create an environment that fosters cooperation, from corporate managers and government bureaucrats to parents of unruly siblings.
New research into game theory I’ve conducted with Joshua Plotkin offers some answers – but raises a lot of questions of its own too.
Traditionally, research into game theory – the study of strategic decision making – focused either on whether a rational player should cooperate in a one-off interaction or on looking for the “winning solutions” that allow an individual who wants to cooperate to make the best decisions across repeated interactions.
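The repeated-interaction setting can be made concrete with the classic model behind this line of research, the iterated prisoner’s dilemma. The sketch below is purely illustrative – the payoff values and strategies are the textbook defaults, not the authors’ actual model:

```python
# Minimal iterated prisoner's dilemma: the standard payoff matrix gives
# (my payoff, opponent payoff) for each pair of moves, where 'C' means
# cooperate and 'D' means defect.
PAYOFFS = {
    ('C', 'C'): (3, 3),  # mutual cooperation: synergistic
    ('C', 'D'): (0, 5),  # I cooperate, opponent free-rides
    ('D', 'C'): (5, 0),
    ('D', 'D'): (1, 1),  # mutual defection: everyone worse off
}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return their_history[-1] if their_history else 'C'

def always_defect(my_history, their_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    """Run repeated interactions and return the two cumulative scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat, 10))    # (30, 30)
print(play(tit_for_tat, always_defect, 10))  # (9, 14)
```

Two reciprocal cooperators jointly earn 60 points, while a defector exploiting a reciprocator leaves only 23 on the table between them – a toy version of why cooperation pays, and why punishment of defectors emerges.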
Our more recent inquiries aim to understand the subtle dynamics of behavioral change when there are an infinite number of potential strategies (much like life) and the game payoffs are constantly shifting (also much like life).
By investigating this in more detail, we can better learn how to incentivize people to cooperate – whether by setting the allowance we give kids for doing chores, by rewarding teamwork in school and at work or even by how we tax to pay for public benefits such as healthcare and education.
What emerges from our studies is a complex and fascinating picture: the amount of cooperation we see in large groups is in constant flux, and incentives that mean well can inadvertently lead to less rather than more cooperative behavior….(More)”

The Algorithmic Self


Frank Pasquale in The Hedgehog Review:“…For many technology enthusiasts, the answer to the obesity epidemic—and many other problems—lies in computational countermeasures to the wiles of the food scientists. App developers are pioneering behavioristic interventions to make calorie counting and exercise prompts automatic. For example, users of a new gadget, the Pavlok wristband, can program it to give them an electronic shock if they miss exercise targets. But can such stimuli break through the blooming, buzzing distractions of instant gratification on offer in so many rival games and apps? Moreover, is there another way of conceptualizing our relationship to our surroundings than as a suboptimal system of stimulus and response?
Some of our subtlest, most incisive cultural critics have offered alternatives. Rather than acquiesce to our manipulability, they urge us to become more conscious of its sources—be they intrusive advertisements or computers that we (think we) control. For example, Sherry Turkle, founder and director of the MIT Initiative on Technology and Self, sees excessive engagement with gadgets as a substitution of the “machinic” for the human—the “cheap date” of robotized interaction standing in for the more unpredictable but ultimately challenging and rewarding negotiation of friendship, love, and collegiality. In The Glass Cage, Nicholas Carr critiques the replacement of human skill with computer mediation that, while initially liberating, threatens to sap the reserves of ingenuity and creativity that enabled the computation in the first place.
Beyond the psychological, there is a political dimension, too. Legal theorist and Georgetown University law professor Julie Cohen warns of the dangers of “modulation,” which enables advertisers, media executives, political consultants, and intelligence operatives to deploy opaque algorithms to monitor and manipulate behavior. Cultural critic Rob Horning ups the ante on the concerns of Cohen and Turkle with a series of essays dissecting feedback loops among surveillance entities, the capture of important information, and self-readjusting computational interventions designed to channel behavior and thought into ever-narrower channels. Horning also criticizes Carr for failing to emphasize the almost irresistible economic logic behind algorithmic self-making—at first for competitive advantage, then, ultimately, for survival.
To negotiate contemporary algorithms of reputation and search—ranging from résumé optimization on LinkedIn to strategic Facebook status updates to OkCupid profile grooming—we are increasingly called on to adopt an algorithmic self, one well practiced in strategic self-promotion. This algorithmic selfhood may be critical to finding job opportunities (or even maintaining a reliable circle of friends and family) in an era of accelerating social change. But it can also become self-defeating. Consider, for instance, the self-promoter whose status updates on Facebook or LinkedIn gradually tip from informative to annoying. Or the search-engine-optimizing website whose tactics become a bit too aggressive, thereby causing it to run afoul of Google’s web spam team and consequently sink into obscurity. The algorithms remain stubbornly opaque amid rapidly changing social norms. A cyber-vertigo results, as we are pressed to promote our algorithmic selves but puzzled over the best way to do so….(More)
 

How’s the Weather There? Crowdsourcing App Promises Better Forecasts


Rachel Metz  at MIT Technology Review: “An app called Sunshine wants you to help it create more accurate, localized weather forecasts.
The app, currently in a private beta test, combines data from the National Oceanic and Atmospheric Administration (NOAA) with atmospheric pressure readings captured by a smartphone. The latest iPhones, and some Android smartphones, include barometers for measuring atmospheric pressure. These sensors are generally used to determine elevation for navigation, but changes in air pressure can also signal changes in the weather.
Sunshine will also rely on users to report sudden weather hazards like fog, cofounder Katerina Stroponiati says. About 250 people spread across the Bay Area, New York, and Dallas are now using Sunshine, she says, and the team behind it plans to release the app publicly at the end of March for the iPhone. It will be free, though some features may eventually cost extra.
While weather predictions have gotten more accurate over the years, they’re far from perfect. Weather information usually isn’t localized, either. The goal of Sunshine is to better serve places like its home base of San Francisco, where weather can be markedly different over just a few blocks.
Stroponiati aims for Sunshine to get enough people sending in data—three per square mile would be needed, according to experiments the team has conducted—that the app can be used to make weather prediction more accurate than it tends to be today. Some other apps, like PressureNet and WeatherSignal, already gather data entered manually by users, but they don’t yet offer crowdsourced forecasts….(More)
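The core signal such an app could use is simple: a sustained fall in barometric pressure often precedes a storm front, while a rise suggests clearing. The sketch below is a hypothetical illustration – the threshold, window, and function name are assumptions, not Sunshine’s actual algorithm:

```python
# Classify a phone's barometer trace: a drop of a couple of hPa over a few
# hours is a rough rule of thumb for an approaching low-pressure system.
# The 2.0 hPa threshold here is illustrative, not Sunshine's real model.

def pressure_trend(readings_hpa, drop_threshold=2.0):
    """Classify a chronological series of pressure readings in hPa."""
    if len(readings_hpa) < 2:
        return "insufficient data"
    change = readings_hpa[-1] - readings_hpa[0]
    if change <= -drop_threshold:
        return "falling: possible storm approaching"
    if change >= drop_threshold:
        return "rising: clearing likely"
    return "steady"

# Simulated hourly readings from one phone over six hours.
print(pressure_trend([1015.2, 1014.6, 1013.1, 1012.4, 1011.8, 1010.9]))
# falling: possible storm approaching
```

In a crowdsourced setting, readings like these from the roughly three phones per square mile the team estimates it needs would be aggregated per neighborhood before classification, smoothing out noise from any single device.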
 

The Data Disclosure Decision


“The CIO Council Innovation Committee has released its first Open Data case study, The Data Disclosure Decision, showcasing the Department of Education (Education) Disclosure Review Board.
The Department of Education is a national warehouse for open data across a decentralized educational system, managing and exchanging education-related data from across the country. Education collects large amounts of aggregate data at the state, district, and school level, disaggregated by a number of demographic variables. A majority of the data Education collects is considered personally identifiable information (PII), making data disclosure avoidance plans a mandatory component of Education’s data releases. With its expansive data sets and a need to protect sensitive information, Education quickly recognized the need to organize and standardize its data disclosure protocol.
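One common building block of disclosure avoidance is primary cell suppression: withholding aggregate counts that fall below a minimum cell size, since very small groups can make individuals identifiable. The sketch below is a generic illustration of that idea – the threshold, function name, and data are invented, and Education’s actual protocol is described in the case study itself:

```python
# Primary cell suppression: counts below a minimum cell size are replaced
# with a suppression flag before an aggregate table is released. The
# threshold of 10 and the data here are purely illustrative.

def suppress_small_cells(table, min_cell_size=10):
    """Replace counts below the minimum cell size with a suppression flag."""
    return {
        group: (count if count >= min_cell_size else "<suppressed>")
        for group, count in table.items()
    }

enrollment_by_group = {
    "Group A": 412,
    "Group B": 57,
    "Group C": 4,   # too small: releasing it could identify students
}
print(suppress_small_cells(enrollment_by_group))
# {'Group A': 412, 'Group B': 57, 'Group C': '<suppressed>'}
```

Real disclosure review goes further (for example, guarding against recovering a suppressed cell by subtraction from row totals), which is why a standing review board, rather than a single rule, governs each release.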
Education formally established the Disclosure Review Board when Secretary of Education Arne Duncan signed its charter in August 2013. Since its inception, the Disclosure Review Board has achieved substantial successes and has greatly increased the volume and quality of data being released. Education’s Disclosure Review Board continues to learn through its open data journey and to improve its approach through cultural change and leadership buy-in.
Learn more about Education’s Disclosure Review Board’s story by reading The Data Disclosure Decision, where you will find the full account of their experience and what they learned along the way.

Civic Media Project


Site and Book edited by Eric Gordon and Paul Mihailidis: “Civic life comprises the attention and actions an individual devotes to a common good. Participating in a human rights rally, creating and sharing a video online about unfair labor practices, connecting with neighbors after a natural disaster: these are all civic actions wherein the actor seeks to benefit a perceived common good. But where and how civic life takes place is an open question. The lines between the private and the public, the self-interested and the civic are blurring as digital cultures transform means and patterns of communication around the world.

As the definition of civic life is in flux, there is urgency in defining and questioning the mediated practices that compose it. Civic media are the mediated practices of designing, building, implementing or using digital tools to intervene in or participate in civic life. The Civic Media Project (CMP) is a collection of short case studies from scholars and practitioners from all over the world that range from the descriptive to the analytical, from the single tool to the national program, from the enthusiastic to the critical. What binds them together is not a particular technology or domain (e.g., government or social movements), but rather the intentionality of achieving a common good. Each of the case studies collected in this project reflects the practices associated with the intentional effort of one or many individuals to benefit or disrupt a community or institution outside of one’s intimate and professional spheres.

As the examples of civic media continue to grow every day, the Civic Media Project is intended as a living resource. New cases will be added on a regular basis after they have gone through an editorial process. Most importantly, the CMP is meant to be a place for conversation and debate about what counts as civic, what makes a citizen, what practices are novel, and what are the political, social and cultural implications of the integration of technology into civic lives.

How to Use the Site

Case studies are divided into four sections: Play + Creativity, Systems + Design, Learning + Engagement, and Community + Action. Each section contains about 25 case studies that address the themes of the section. But there is considerable crossover and thematic overlap between sections as well. For those adventurous readers, the Tag Cloud provides a more granular entry point to the material and a more diverse set of connections.

We have also developed a curriculum that provides some suggestions for educators interested in using the Civic Media Project and other resources to explore the conceptual and practical implications of civic media examples.

One of the most valuable elements of this project is the dialogue about the case studies. We have asked all of the project’s contributors to write in-depth reviews of others’ contributions, and we also invite all readers to comment on cases and reviews. Do not be intimidated by the long “featured comments” in the Disqus section—these formal reviews should be understood as part of the critical commentary that makes each of these cases come alive through discussion and debate.

The Book

Civic Media: Technology, Design, Practice is forthcoming from MIT Press and will serve as the print book companion to the Civic Media Project. The book identifies the emerging field of Civic Media by bringing together leading scholars and practitioners from a diversity of disciplines to shape theory, identify problems and articulate opportunities.  The book includes 19 chapters (and 25 case studies) from fields as diverse as philosophy, communications, education, sociology, media studies, art, policy and philanthropy, and attempts to find common language and common purpose through the investigation of civic media….(More)”

Apple’s ResearchKit Is a New Way to Do Medical Research


Wired: “….Apple announced a new software framework it hopes will help turn the 700 million iPhones in users’ hands into medical diagnostic tools.

ResearchKit is an open-source framework that lets medical researchers create diagnostic apps that tap into the screens and accelerometers on the iPhone, as well as data from HealthKit apps. The first five apps built with ResearchKit are available today, and they’re built to help diagnose various disorders.

Apple Senior Vice President of Operations Jeff Williams detailed some of the specialized applications available at launch. They include the mPower app, which is built to gauge the effects of Parkinson’s disease and was developed in conjunction with the University of Rochester, Xuanwu Hospital at Capital Medical University in Beijing, and Sage Bionetworks.

On stage, Williams demoed tests within the app that could measure hand tremors by using an iPhone touchscreen, vocal trembling using the microphone, and a walking balance test.
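One plausible way a phone could quantify tremor from its motion sensors is to find the dominant frequency of the accelerometer trace, since Parkinsonian rest tremor typically falls around 4–6 Hz. The sketch below is an illustration of that general technique, not mPower’s actual analysis; the function name and parameters are assumptions:

```python
# Estimate the dominant frequency of an accelerometer trace via FFT.
# A strong peak in the 4-6 Hz band would be consistent with Parkinsonian
# rest tremor. Purely illustrative: not the mPower app's real pipeline.
import numpy as np

def dominant_frequency(samples, sample_rate_hz):
    """Return the strongest non-DC frequency (Hz) in a sensor trace."""
    spectrum = np.abs(np.fft.rfft(samples - np.mean(samples)))  # remove DC offset
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)
    return freqs[np.argmax(spectrum)]

# Synthetic 5-second trace: a 5 Hz tremor plus noise, sampled at 100 Hz.
rate = 100
t = np.arange(0, 5, 1.0 / rate)
trace = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
print(round(dominant_frequency(trace, rate), 1))  # 5.0
```

The same spectral approach could, in principle, be applied to the microphone signal for vocal trembling; the clinical interpretation of such features is exactly what studies like mPower are designed to validate.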

Williams said he hopes ResearchKit can help address a few problems with medical research in its current state, such as limited patient participation, infrequent data sampling, and one-way communication from the patient to a medical professional. The ResearchKit apps are designed to be more interactive and allow a patient to control when and with whom to share data.

Along with the mPower demo, Williams mentioned a few more apps that will be available immediately for iOS: a diabetes-diagnostic app from Massachusetts General Hospital; an app to diagnose heart disease from Stanford and the University of Oxford; an Asthma Health app from Mount Sinai Hospital and Weill Cornell Medical College; and an app to help victims of breast cancer made by the Dana-Farber Cancer Institute, UCLA School of Public Health, Penn Medicine, and Sage Bionetworks.

Williams also stressed that customers would be able to control the data shared by each ResearchKit app, and that sensitive data would only be visible by medical researchers….(More)”

New portal to crowdsource captions, transcripts of old photos, national archives


Irene Tham at The Straits Times: “Wanted: history enthusiasts to caption old photographs and transcribe handwritten manuscripts that contain a piece of Singapore’s history.

They are invited to contribute to an upcoming portal that will carry some 3,000 unidentified photographs dating back to the late 1800s, and 3,000 pages of Straits Settlements Records including letters written during Sir Stamford Raffles’ administration of Singapore.

These are collections from the Government and individuals waiting to be “tagged” on the new portal – The Citizen Archivist Project at www.nas.gov.sg/citizenarchivist….

Without tagging – such as by photo captioning and digital transcription – these records cannot be searched. There are over 140,000 photos and about one million pages of Straits Settlements Records in total that cannot be searched today.

“The key challenge is that they were written in elaborate cursive penmanship which is not machine-readable,” said Dr Yaacob, adding that the knowledge and wisdom of the public can be tapped to make these documents more accessible.

Mr Arthur Fong (West Coast GRC) had asked how the Government could get young people interested in history, and Dr Yaacob said this initiative was something they would enjoy.

Portal users must first log in using their existing Facebook, Google or National Library Board accounts. Contributions will be saved in users’ profiles, automatically created upon signing in.

Transcript contributions work in a similar way to Wikipedia: contributed text is uploaded to the portal immediately.

However, the National Archives will take up to three days to review photo caption contributions. Approved captions will be uploaded on its website at www.nas.gov.sg/archivesonline….(More)”

On the importance of being negative


Stephen Curry in The Guardian: “The latest paper from my group, published just over a week ago in the open access journal PeerJ, reports an unusual result. It was not the result we were looking for because it was negative: our experiment failed.

Nevertheless I am pleased with the paper – negative results matter. Their value lies in mapping out blind alleys, warning other investigators not to waste their time or at least to tread carefully. The only trouble is, it can be hard to get them published.

The scientific literature has long been skewed by a preponderance of positive results, largely because journals are keen to nurture their reputations for publishing significant, exciting research – new discoveries that change the way we think about the world. They have tended to look askance at manuscripts reporting beautiful hypotheses undone by the ugly fact of experimental failure. Scientific reporting inverts the traditional values of news media: good news sells. This tendency is reinforced within academic culture because our reward mechanisms are so strongly geared to publication in the most prestigious journals. In the worst cases it can foster fraudulent or sloppy practices by scientists and journals. A complete record of reporting positive and negative results is at the heart of the AllTrials campaign to challenge the distortion of clinical trials for commercial gain….

Normally that would have been that. Our data would have sat on the computer hard-drive till the machine decayed to obsolescence and was thrown out. But now it’s easier to publish negative results, so we did. The change has come about because of the rise of online publishing through open access, which aims to make research freely available on the internet.

The most significant change is the emergence of new titles from nimble-footed publishers aiming to leverage the reduced costs of publishing digitally rather than on paper. They have created open access journals that judge research only on its originality and competency; in contrast to more traditional outlets, no attempt is made to pre-judge significance. These journals include titles such as PLOS ONE (the originator of the concept), F1000 Research, ScienceOpen, and Scientific Reports, as well as new pre-print servers, such as PeerJ Preprints or bioRxiv, which are seeking to emulate the success of the arXiv that has long served physics and maths researchers.

As far as I know, these outlets were not designed specifically for negative results but the shift in the review criteria – and their lower costs – have opened up new opportunities and negative results are now creeping out of the laboratory in greater numbers. PLOS ONE has recently started to highlight collections of papers reporting negative findings; Elsevier, one of the more established publishers, has evidently sensed an opportunity and just launched a new journal dedicated to negative results in the plant sciences….(More)”