The Algorithmic Self


Frank Pasquale in The Hedgehog Review: “…For many technology enthusiasts, the answer to the obesity epidemic—and many other problems—lies in computational countermeasures to the wiles of the food scientists. App developers are pioneering behavioristic interventions to make calorie counting and exercise prompts automatic. For example, users of a new gadget, the Pavlok wristband, can program it to give them an electronic shock if they miss exercise targets. But can such stimuli break through the blooming, buzzing distractions of instant gratification on offer in so many rival games and apps? Moreover, is there another way of conceptualizing our relationship to our surroundings than as a suboptimal system of stimulus and response?
Some of our subtlest, most incisive cultural critics have offered alternatives. Rather than acquiesce to our manipulability, they urge us to become more conscious of its sources—be they intrusive advertisements or computers that we (think we) control. For example, Sherry Turkle, founder and director of the MIT Initiative on Technology and Self, sees excessive engagement with gadgets as a substitution of the “machinic” for the human—the “cheap date” of robotized interaction standing in for the more unpredictable but ultimately challenging and rewarding negotiation of friendship, love, and collegiality. In The Glass Cage, Nicholas Carr critiques the replacement of human skill with computer mediation that, while initially liberating, threatens to sap the reserves of ingenuity and creativity that enabled the computation in the first place.
Beyond the psychological, there is a political dimension, too. Legal theorist and Georgetown University law professor Julie Cohen warns of the dangers of “modulation,” which enables advertisers, media executives, political consultants, and intelligence operatives to deploy opaque algorithms to monitor and manipulate behavior. Cultural critic Rob Horning ups the ante on the concerns of Cohen and Turkle with a series of essays dissecting feedback loops among surveillance entities, the capture of important information, and self-readjusting computational interventions designed to channel behavior and thought into ever-narrower channels. Horning also criticizes Carr for failing to emphasize the almost irresistible economic logic behind algorithmic self-making—at first for competitive advantage, then, ultimately, for survival.
To negotiate contemporary algorithms of reputation and search—ranging from resumé optimization on LinkedIn to strategic Facebook status updates to OkCupid profile grooming—we are increasingly called on to adopt an algorithmic self, one well practiced in strategic self-promotion. This algorithmic selfhood may be critical to finding job opportunities (or even maintaining a reliable circle of friends and family) in an era of accelerating social change. But it can also become self-defeating. Consider, for instance, the self-promoter whose status updates on Facebook or LinkedIn gradually tip from informative to annoying. Or the search-engine-optimizing website whose tactics become a bit too aggressive, thereby causing it to run afoul of Google’s web spam team and consequently sink into obscurity. The algorithms remain stubbornly opaque amid rapidly changing social norms. A cyber-vertigo results, as we are pressed to promote our algorithmic selves but puzzled over the best way to do so….(More)
 

Civic Media Project


Site and Book edited by Eric Gordon and Paul Mihailidis: “Civic life is comprised of the attention and actions an individual devotes to a common good. Participating in a human rights rally, creating and sharing a video online about unfair labor practices, connecting with neighbors after a natural disaster: these are all civic actions wherein the actor seeks to benefit a perceived common good. But where and how civic life takes place is an open question. The lines between the private and the public, the self-interested and the civic are blurring as digital cultures transform means and patterns of communication around the world.

As the definition of civic life is in flux, there is urgency in defining and questioning the mediated practices that compose it. Civic media are the mediated practices of designing, building, implementing or using digital tools to intervene in or participate in civic life. The Civic Media Project (CMP) is a collection of short case studies from scholars and practitioners from all over the world that range from the descriptive to the analytical, from the single tool to the national program, from the enthusiastic to the critical. What binds them together is not a particular technology or domain (i.e. government or social movements), but rather the intentionality of achieving a common good. Each of the case studies collected in this project reflects the practices associated with the intentional effort of one or many individuals to benefit or disrupt a community or institution outside of one’s intimate and professional spheres.

As the examples of civic media continue to grow every day, the Civic Media Project is intended as a living resource. New cases will be added on a regular basis after they have gone through an editorial process. Most importantly, the CMP is meant to be a place for conversation and debate about what counts as civic, what makes a citizen, what practices are novel, and what are the political, social and cultural implications of the integration of technology into civic lives.

How to Use the Site

Case studies are divided into four sections: Play + Creativity, Systems + Design, Learning + Engagement, and Community + Action. Each section contains about 25 case studies that address the themes of the section. But there is considerable crossover and thematic overlap between sections as well. For those adventurous readers, the Tag Cloud provides a more granular entry point to the material and a more diverse set of connections.

We have also developed a curriculum that provides some suggestions for educators interested in using the Civic Media Project and other resources to explore the conceptual and practical implications of civic media examples.

One of the most valuable elements of this project is the dialogue about the case studies. We have asked all of the project’s contributors to write in-depth reviews of others’ contributions, and we also invite all readers to comment on cases and reviews. Do not be intimidated by the long “featured comments” in the Disqus section—these formal reviews should be understood as part of the critical commentary that makes each of these cases come alive through discussion and debate.

The Book

Civic Media: Technology, Design, Practice is forthcoming from MIT Press and will serve as the print companion to the Civic Media Project. The book identifies the emerging field of Civic Media by bringing together leading scholars and practitioners from a diversity of disciplines to shape theory, identify problems and articulate opportunities. The book includes 19 chapters (and 25 case studies) from fields as diverse as philosophy, communications, education, sociology, media studies, art, policy and philanthropy, and attempts to find common language and common purpose through the investigation of civic media….(More)”

On the importance of being negative


Stephen Curry in The Guardian: “The latest paper from my group, published just over a week ago in the open access journal PeerJ, reports an unusual result. It was not the result we were looking for because it was negative: our experiment failed.

Nevertheless I am pleased with the paper – negative results matter. Their value lies in mapping out blind alleys, warning other investigators not to waste their time or at least to tread carefully. The only trouble is, it can be hard to get them published.

The scientific literature has long been skewed by a preponderance of positive results, largely because journals are keen to nurture their reputations for publishing significant, exciting research – new discoveries that change the way we think about the world. They have tended to look askance at manuscripts reporting beautiful hypotheses undone by the ugly fact of experimental failure. Scientific reporting inverts the traditional values of news media: good news sells. This tendency is reinforced within academic culture because our reward mechanisms are so strongly geared to publication in the most prestigious journals. In the worst cases it can foster fraudulent or sloppy practices by scientists and journals. A complete record of reporting positive and negative results is at the heart of the AllTrials campaign to challenge the distortion of clinical trials for commercial gain….

Normally that would have been that. Our data would have sat on the computer hard-drive till the machine decayed to obsolescence and was thrown out. But now it’s easier to publish negative results, so we did. The change has come about because of the rise of online publishing through open access, which aims to make research freely available on the internet.

The most significant change is the emergence of new titles from nimble-footed publishers aiming to leverage the reduced costs of publishing digitally rather than on paper. They have created open access journals that judge research only on its originality and competency; in contrast to more traditional outlets, no attempt is made to pre-judge significance. These journals include titles such as PLOS ONE (the originator of the concept), F1000 Research, ScienceOpen, and Scientific Reports, as well as new pre-print servers, such as PeerJ Preprints or bioRxiv, which are seeking to emulate the success of the arXiv that has long served physics and maths researchers.

As far as I know, these outlets were not designed specifically for negative results, but the shift in review criteria – and their lower costs – has opened up new opportunities, and negative results are now creeping out of the laboratory in greater numbers. PLOS ONE has recently started to highlight collections of papers reporting negative findings; Elsevier, one of the more established publishers, has evidently sensed an opportunity and just launched a new journal dedicated to negative results in the plant sciences….(More)”

Tweets Can Predict Health Insurance Exchange Enrollment


PennMedicine: “An increase in Twitter sentiment (the positivity or negativity of tweets) is associated with an increase in state-level enrollment in the Affordable Care Act’s (ACA) health insurance marketplaces — a phenomenon that points to use of the social media platform as a real-time gauge of public opinion and provides a way for marketplaces to quickly identify enrollment changes and emerging issues. Although Twitter has been previously used to measure public perception on a range of health topics, this study, led by researchers at the Perelman School of Medicine at the University of Pennsylvania and published online in the Journal of Medical Internet Research, is the first to look at its relationship with the new national health insurance marketplace enrollment.

The study examined 977,303 ACA and “Obamacare”-related tweets — along with those directed toward the Twitter handle for HealthCare.gov and the 17 state-based marketplace Twitter accounts — in March 2014, then tested a correlation of Twitter sentiment with marketplace enrollment by state. Tweet sentiment was determined using the National Research Council (NRC) sentiment lexicon, which contains more than 54,000 words with corresponding sentiment weights ranging from positive to negative. For example, the word “excellent” has a positive sentiment weight, and is more positive than the word “good,” but the word “awful” is negative. Using this lexicon, researchers found that a 0.10 increase in the sentiment of tweets was associated with a nine percent increase in health insurance marketplace enrollment at the state level. While a 0.10 increase may seem small, these numbers indicate a significant correlation between Twitter sentiment and enrollment based on a continuum of sentiment scores examined across nearly one million tweets.
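To make the lexicon approach concrete, here is a minimal Python sketch of lexicon-based tweet scoring. The tiny word-weight map and sample tweets are invented stand-ins, not the actual NRC lexicon or the study's data or code.

```python
# Illustrative sketch of lexicon-based tweet sentiment scoring.
# The tiny lexicon and tweets below are invented; the study itself used the
# ~54,000-word NRC sentiment lexicon and roughly one million real tweets.

# Hypothetical word -> sentiment weight map (positive > 0, negative < 0).
MINI_LEXICON = {
    "excellent": 0.9,
    "good": 0.5,
    "helpful": 0.4,
    "confusing": -0.4,
    "awful": -0.8,
}

def tweet_sentiment(text):
    """Average the weights of lexicon words appearing in one tweet."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    weights = [MINI_LEXICON[w] for w in words if w in MINI_LEXICON]
    return sum(weights) / len(weights) if weights else 0.0

def mean_state_sentiment(tweets):
    """Mean sentiment across all tweets collected for one state."""
    return sum(tweet_sentiment(t) for t in tweets) / len(tweets)

if __name__ == "__main__":
    state_tweets = [
        "Signing up was excellent and the helpline was helpful!",
        "The marketplace website was confusing today.",
        "Good experience enrolling this week.",
    ]
    score = mean_state_sentiment(state_tweets)
    # The study reports that a 0.10 rise in a state's mean tweet sentiment
    # was associated with roughly a 9% rise in marketplace enrollment.
    print("Mean sentiment for this state: {:+.3f}".format(score))
```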

“The correlation between Twitter sentiment and the number of eligible individuals who enrolled in a marketplace plan highlights the potential for Twitter to be a real-time monitoring strategy for future enrollment periods,” said first author Charlene A. Wong, MD, a Robert Wood Johnson Foundation Clinical Scholar and Fellow in Penn’s Leonard Davis Institute of Health Economics. “This would be especially valuable for quickly identifying emerging issues and making adjustments, instead of having to wait weeks or months for that information to be released in enrollment reports, for example.”…(More)”

“Data on the Web” Best Practices


W3C First Public Working Draft: “…The best practices described below have been developed to encourage and enable the continued expansion of the Web as a medium for the exchange of data. The growth of open data by governments across the world [OKFN-INDEX], the increasing publication of research data encouraged by organizations like the Research Data Alliance [RDA], the harvesting and analysis of social media, crowd-sourcing of information, the provision of important cultural heritage collections such as at the Bibliothèque nationale de France [BNF] and the sustained growth in the Linked Open Data Cloud [LODC], provide some examples of this phenomenon.

In broad terms, data publishers aim to share data either openly or with controlled access. Data consumers (who may also be producers themselves) want to be able to find and use data, especially if it is accurate, regularly updated and guaranteed to be available at all times. This creates a fundamental need for a common understanding between data publishers and data consumers. Without this agreement, data publishers’ efforts may be incompatible with data consumers’ desires.

Publishing data on the Web creates new challenges, such as how to represent, describe, and make data available in a way that makes it easy to find and understand. In this context, it becomes crucial to provide guidance to publishers that will improve consistency in the way data is managed, thus promoting the re-use of data and fostering trust in it among developers, whatever technology they choose to use, increasing the potential for genuine innovation.

This document sets out a series of best practices that will help publishers and consumers face the new challenges and opportunities posed by data on the Web.

Best practices cover different aspects related to data publishing and consumption, like data formats, data access, data identification and metadata. In order to delimit the scope and elicit the required features for Data on the Web Best Practices, the DWBP working group compiled a set of use cases [UCR] that represent scenarios of how data is commonly published on the Web and how it is used. The set of requirements derived from these use cases was used to guide the development of the best practices.
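As a rough illustration of what publishing data with accompanying metadata can look like in practice, the Python sketch below writes a small CSV file together with a separate JSON metadata record. The file names and metadata fields are invented and are not prescribed by the W3C draft or any particular vocabulary.

```python
# Illustrative sketch: publish a small tabular dataset as CSV plus a JSON
# metadata record describing it. File names and metadata fields are invented
# for illustration and are not prescribed by the W3C draft.
import csv
import json
from datetime import date

rows = [
    {"country": "FR", "datasets_published": 120},
    {"country": "BR", "datasets_published": 95},
]

# 1. The data itself, in a simple and widely supported format (CSV).
with open("open-data-index.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["country", "datasets_published"])
    writer.writeheader()
    writer.writerows(rows)

# 2. A separate metadata record so consumers can find and understand the
#    dataset: title, description, licence, freshness and distribution format.
metadata = {
    "title": "Open data index (sample)",
    "description": "Illustrative counts of published datasets per country.",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "modified": date.today().isoformat(),
    "distribution": {"format": "text/csv", "downloadURL": "open-data-index.csv"},
}
with open("open-data-index.metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)
```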

The Best Practices proposed in this document are intended to serve a more general purpose than the practices suggested in Best Practices for Publishing Linked Data [LD-BP], since this document is domain-independent and, whilst it recommends the use of Linked Data, it also promotes best practices for data on the Web in formats such as CSV and JSON. The Best Practices related to the use of vocabularies incorporate practices that stem from Best Practices for Publishing Linked Data where appropriate….(More)

New research project to map the impact of open budget data


Jonathan Gray at Open Knowledge: “…a new research project to examine the impact of open budget data, undertaken as a collaboration between Open Knowledge and the Digital Methods Initiative at the University of Amsterdam, supported by the Global Initiative for Financial Transparency (GIFT).

The project will include an empirical mapping of who is active around open budget data around the world, and what the main issues, opportunities and challenges are according to different actors. On the basis of this mapping it will provide a review of the various definitions and conceptions of open budget data, arguments for why it matters, best practices for publication and engagement, as well as applications and outcomes in different countries around the world.

As well as drawing on Open Knowledge’s extensive experience and expertise around open budget data (through projects such as Open Spending), it will utilise innovative tools and methods developed at the University of Amsterdam to harness evidence from the web, social media and collections of documents to inform and enrich our analysis.

As part of this project we’re launching a collaborative bibliography of existing research and literature on open budget data and associated topics which we hope will become a useful resource for other organisations, advocates, policy-makers, and researchers working in this area. If you have suggestions for items to add, please do get in touch.

This project follows on from other research projects we’ve conducted around this area – including on data standards for fiscal transparency, on technology for transparent and accountable public finance, and on mapping the open spending community….(More)”

US government and private sector developing ‘precrime’ system to anticipate cyber-attacks


Martin Anderson at The Stack: “The USA’s Office of the Director of National Intelligence (ODNI) is soliciting the involvement of the private and academic sectors in developing a new ‘precrime’ computer system capable of predicting cyber-incursions before they happen, based on the processing of ‘massive data streams from diverse data sets’ – including social media and possibly deanonymised Bitcoin transactions….
At its core the predictive technologies to be developed in association with the private sector and academia over 3-5 years are charged with the mission ‘to invest in high-risk/high-payoff research that has the potential to provide the U.S. with an overwhelming intelligence advantage over our future adversaries’.
The R&D program is intended to generate completely automated, human-free prediction systems for four categories of event: unauthorised access, Denial of Service (DoS), malicious code, and scans and probes seeking access to systems.
The CAUSE project is an unclassified program, and participating companies and organisations will not be granted access to NSA intercepts. The scope of the project, in any case, seems focused on the analysis of publicly available Big Data, including web searches, social media exchanges and trawling ungovernable avalanches of information in which clues to future maleficent actions are believed to be discernible.
Program manager Robert Rahmer says: “It is anticipated that teams will be multidisciplinary and might include computer scientists, data scientists, social and behavioral scientists, mathematicians, statisticians, content extraction experts, information theorists, and cyber-security subject matter experts having applied experience with cyber capabilities.”
Battelle, one of the concerns interested in participating in CAUSE, is interested in employing Hadoop and Apache Spark as an approach to the data mountain, and includes in its preliminary proposal an intent to ‘de-anonymize Bitcoin sale/purchase activity to capture communication exchanges more accurately within threat-actor forums…’.
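To give a sense of what processing such data mountains with Spark might involve, here is a minimal, hypothetical PySpark sketch that scans a collection of forum posts for a watch-list of terms and counts flagged posts per author. The input path, schema and keyword list are invented; this illustrates the general technique only, not Battelle's or IARPA's actual pipeline.

```python
# Hypothetical PySpark sketch: scan a large collection of forum posts for a
# watch-list of terms and count flagged posts per author. The input path,
# schema (author, text) and keyword list are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

WATCH_TERMS = ["exploit", "zero-day", "botnet", "ddos"]

spark = SparkSession.builder.appName("keyword-scan-sketch").getOrCreate()

# Newline-delimited JSON dump of posts, e.g. {"author": "...", "text": "..."}.
posts = spark.read.json("hdfs:///data/forum_posts/*.json")

# Case-insensitive regular expression matching any watch-list term.
pattern = "(?i)(" + "|".join(WATCH_TERMS) + ")"
flagged = posts.filter(F.col("text").rlike(pattern))

# Crude per-author "signal": how many of their posts contain a watched term.
signals = (
    flagged.groupBy("author")
    .agg(F.count("*").alias("flagged_posts"))
    .orderBy(F.desc("flagged_posts"))
)

signals.show(20, truncate=False)
spark.stop()
```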
Identifying and categorising quality signal in the ‘white noise’ of Big Data is a central plank in CAUSE, and IARPA maintains several offices to deal with different aspects of it. Its pointedly-named ‘Office for Anticipating Surprise’ frames the CAUSE project best, since it initiated it. The OAS is occupied with ‘Detecting and forecasting the emergence of new technical capabilities’, ‘Early warning of social and economic crises, disease outbreaks, insider threats, and cyber attacks’ and ‘Probabilistic forecasts of major geopolitical trends and rare events’.
Another department involved is the Office of Incisive Analysis, which is attempting to break down the ‘data static’ problem into manageable mission stages:
1) Large data volumes and varieties – “Providing powerful new sources of information from massive, noisy data that currently overwhelm analysts”
2) Social-Cultural and Linguistic Factors – “Analyzing language and speech to produce insights into groups and organizations.”
3) Improving Analytic Processes – “Dramatic enhancements to the analytic process at the individual and group level.”
The Office of Smart Collection develops ‘new sensor and transmission technologies’, with ‘Innovative approaches to gain access to denied environments’ as part of its core mission, while the Office of Safe and Secure Operations concerns itself with ‘Revolutionary advances in science and engineering to solve problems intractable with today’s computers’.
The CAUSE program, which attracted 150 developers, organisations, academics and private companies to the initial event, will announce specific figures about funding later in the year, and practice ‘predictions’ from participants will begin in the summer, in an accelerating and stage-managed program over five years….(More)”

Why Information Grows: The Evolution of Order, from Atoms to Economies


Forthcoming book: “In Why Information Grows, rising star César Hidalgo offers a radical interpretation of global economics. While economists often turn to measures like GDP or per-capita income, César Hidalgo turns to information theory to explain the success or failure of a country’s economic performance. Through a radical rethinking of what the economy is, Hidalgo shows that natural constraints in our ability to accumulate knowledge, knowhow and information explain the evolution of social and economic complexity. This is a rare tour de force, linking economics, sociology, physics, biology and information theory, to explain the evolution of social and economic systems as a consequence of the physical embodiment of information in a world where knowledge is quite literally power.
César Hidalgo leads the Macro Connections group at the MIT Media Lab. A trained statistical physicist and an expert on Networks and Complex Systems, he also has extensive experience in the field of economic development and has pioneered research on how big data impacts economic decision-making….(More)”

The Ubiquitous Internet: User and Industry Perspectives


New book edited by Anja Bechmann and Stine Lomborg: “This book presents state of the art theoretical and empirical research on the ubiquitous internet: its everyday users and its economic stakeholders. The book offers a 360-degree media analysis of the contemporary terrain of the internet by examining both user and industry perspectives and their relation to one another. Contributors consider user practices in terms of internet at your fingertips—the abundance, free flow, and interconnectivity of data. They then consider industry’s use of user data and standards in commodification and value-creation….

Contents:
Introduction
Part I: Users and Usage Patterns
1. Next Generation Users: Changing Access to the Internet – Grant Blank and William H. Dutton
2. The Internet in My Pocket – Stine Lomborg
3. Managing the Interoperable Self – Anja Bechmann
4. The Dynamics of Real-Time Contentious Politics: How Ubiquitous Internet Shapes and Transforms Popular Protest in China – Jun Liu
Part II: Commercialization, Standards, and Politics
5. Histories of Ubiquitous Web Standardization – Indrek Ibrus
6. Mobile Internet: The Politics of Code and Networks – Lela Mosemghvdlishvili
7. Predictive Algorithms and Personalization Services on Social Network Sites: Implications for Users and Society – Robert Bodle
8. The Digital Transformation of Physical Retailing: Sellers, Customers, and the Ubiquitous Internet – Joseph Turow
Conclusion…(More)

Measuring government impact in a social media world


Arthur Mickoleit & Ryan Androsoff at OECD Insights: “There is hardly a government around the world that has not yet felt the impact of social media on how it communicates and engages with citizens. And while the most prominent early adopters in the public sector have tended to be politicians (think of US President Barack Obama’s impressive use of social media during his 2008 campaign), government offices are also increasingly jumping on the bandwagon. Yes, we are talking about those – mostly bricks-and-mortar – institutions that often toil away from the public gaze, managing the public administration in our countries. As the world changes, they too are increasingly engaging in a very public way through social media.
Research from our recent OECD working paper “Social Media Use by Governments” shows that as of November 2014, out of 34 OECD countries, 28 have a Twitter account for the office representing the top executive institution (head of state, head of government, or government as a whole), and 21 have a Facebook account….
 
But what is the impact governments can or should expect from social media? Is it all just vanity and peer pressure? Surely not.
Take the Spanish national police force (e.g. on Twitter, Facebook & YouTube), a great example of using social media to build long-term engagement, trust and a better public service. The thing so many governments yearn for, in this case the Spanish police seem to have managed well.
Or take the Danish “tax daddy” on Twitter – @Skattefar. It started out as the national tax administration’s quest to make it easier for everyone to submit correct tax filings; it is now one of the best examples around of a tax agency gone social.
Government administrations can use social media for internal purposes too. The Government of Canada used public platforms like Twitter and internal platforms like GCpedia and GCconnex to conduct a major employee engagement exercise (Blueprint 2020) to develop a vision for the future of the Canadian federal public service.
And when it comes to raising efficiency in the public sector, read this account of a Dutch research facility’s Director who decided to stop email. Not reduce it, but stop it altogether and replace it with social media.
There are so many other examples that could be cited. But the major question is how can we even begin to appraise the impact of these different initiatives? Because as we’ve known since the 19th century, “if you cannot measure it, you cannot improve it” (quote usually attributed to Lord Kelvin). Some aspects of impact measurement for social media can be borrowed from the private sector with regards to presence, popularity, penetration, and perception. But it’s around purpose that impact measurement agendas will split between the private sector and government. Virtually all companies will want to calculate the return on social media investments based on whether it helps them improve their financial returns. That’s different in the public sector where purpose is rarely defined in commercial terms.
A good impact assessment for social media in the public sector therefore needs to be built around its unique purpose-orientation. This is much more difficult to measure and it will involve a mix of quantitative data (e.g. reach of target audience) and qualitative data (e.g. case studies describing tangible impact). Social Media Use by Governments proposes a framework to start looking at social media measurement in gradual steps – from measuring presence, to popularity, to penetration, to perception, and finally, to purpose-orientation. The aim of this framework is to help governments develop truly relevant metrics and start treating social media activity by governments with the same public management rigour that is applied to other government activities. A table summarising the framework is available in the original post.
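As a toy illustration of the quantitative end of such a framework, the Python sketch below computes presence, popularity and penetration figures from invented numbers; perception and purpose-orientation depend on qualitative evidence and are deliberately left out.

```python
# Toy sketch of the quantitative side of the measurement framework:
# presence, popularity and penetration. All figures are invented; perception
# and purpose-orientation rest on qualitative evidence and are left out here.

accounts = {
    "twitter": {"exists": True, "followers": 250_000},
    "facebook": {"exists": True, "followers": 90_000},
}
target_audience = 5_000_000    # e.g. the adult population the agency serves
followers_in_target = 180_000  # followers estimated to belong to that audience

presence = sum(1 for a in accounts.values() if a["exists"])   # platforms used
popularity = sum(a["followers"] for a in accounts.values())   # raw audience size
penetration = followers_in_target / target_audience           # share of target reached

print("Presence:    active on {} platform(s)".format(presence))
print("Popularity:  {:,} followers overall".format(popularity))
print("Penetration: {:.1%} of the target audience reached".format(penetration))
```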
This is far from an exact science, but we are beginning the work collaborating with member and partner governments to develop a toolkit that will help decision-makers implement the OECD Recommendation on Digital Government Strategies, including on the issue of social media metrics…(More)”.