How Our Days Became Numbered


Review by Clive Cookson in the Financial Times of ‘How Our Days Became Numbered’ by Dan Bouk: “The unemployed lumber worker whose 1939 portrait adorns the cover of How Our Days Became Numbered has a “face fit for a film star”, as Dan Bouk puts it. But he is not there for his looks. Bouk wants us to focus on his bulging bicep, across which is tattooed “SSN 535-07-5248”: his social security number.

The photograph of Thomas Cave by documentary photographer Dorothea Lange illustrates the high-water mark of American respect for statistical labelling. Cave was so proud of his newly issued number that he had it inked forever on his arm.

When the Roosevelt administration introduced the federal social security system in the 1930s, it worked out rates of contribution and benefit on the basis of statistical practices already established by life insurance companies. The industry is at the heart of Bouk’s history of personal data collection and analysis — because it worked out how to measure and predict the health of ordinary Americans in the late 19th and early 20th centuries. (More)”

Harnessing Mistrust for Civic Action


Ethan Zuckerman: “…One predictable consequence of mistrust in institutions is a decrease in participation. Fewer than 37% of eligible US voters participated in the 2014 Congressional election. Turnout in parliamentary and national elections across Europe is higher than the US’s dismal rates, but has steadily declined since 1979, with turnout for the 2014 European parliamentary elections dropping below 43%. It’s a mistake to blame low turnout on distracted or uninterested voters when a better explanation exists: why vote if you don’t believe the US Congress or European Parliament is capable of making meaningful change in the world?

In his 2012 book “Twilight of the Elites”, Christopher Hayes suggests that the political tension of our time is not between left and right, but between institutionalists and insurrectionists. Institutionalists believe we can fix the world’s problems by strengthening and revitalizing the institutions we have. Insurrectionists believe we need to abandon these broken institutions and replace them with new, less corrupted ones, or with nothing at all. The institutionalists show up to vote in elections, but they’re being crowded out by the insurrectionists, who take to the streets to protest or, more worryingly, disengage entirely from civic life.

Conventional wisdom suggests that insurrectionists will grow up, stop protesting and start voting. But we may have reached a tipping point where the cultural zeitgeist favors insurrection. My students at MIT don’t want to work for banks, for Google or for universities – they want to build startups that disrupt banks, Google and universities.

The future of democracy depends on finding effective ways for people who mistrust institutions to make change in their communities, their nations and the world as a whole. The real danger is not that our broken institutions are toppled by a wave of digital disruption, but that a generation disengages from politics and civics as a whole.

It’s time to stop criticizing youth for their failure to vote and time to start celebrating the ways insurrectionists are actually trying to change the world. Those who mistrust institutions aren’t just ignoring them. Some are building new systems designed to make existing institutions obsolete. Others are becoming the fiercest and most engaged critics of our institutions, while the most radical are building new systems that resist centralization and concentration of power.

Those outraged by government and corporate complicity in surveillance of the internet have the option of lobbying their governments to forbid these violations of privacy, or of building and spreading tools that make it vastly harder for US and European governments to read our mail and track our online behavior. We need both better laws and better tools. But we must recognize that the programmers who build systems like Tor, PGP and TextSecure are engaged in civics as surely as anyone crafting a party’s political platform. The same goes for entrepreneurs building better electric cars rather than fighting to legislate carbon taxes. As people lose faith in institutions, they seek change less through passing and enforcing laws, and more through building new technologies and businesses whose adoption has the same benefits as wisely crafted and enforced laws….(More)”

Weathernews thinks crowdsourcing is the future of weather


Andrew Freedman at Mashable: “The weather forecast of the future will be crowdsourced, if one Japanese weather firm sees its vision fulfilled.

On Monday, Weathernews Inc. of Japan announced a partnership with the Chinese firm Moji to bring Weathernews’ technology to the latter company’s popular MoWeather app.

The benefit for Weathernews, in addition to more users and entry into the Chinese market, is access to more data that can then be turned into weather forecasts.

The company says that this additional user base, when added to its existing users, will make Weathernews “the largest crowdsourced weather service in the world,” with 420 million users across 175 countries.

…So far, though, mobile phones have not proven to be more reliable weather sensors than the network of thousands of far more expensive, specialized surface weather observation sites throughout the world. But crowdsourcing’s day in the sun may be close at hand: as Weathernews leaders were quick to point out to Mashable in an interview, the existing weather observing network on which most forecasts rely has significant drawbacks that make crowdsourcing especially appealing outside the U.S.

For example, most surface weather stations are in wealthy nations, primarily in North America and Europe. There’s a giant forecasting blind spot over much of Africa, where many countries lack a national weather agency. However, these countries do have rapidly growing mobile phone networks that, if utilized in certain ways, could provide a way to fill in data gaps and make weather forecasts more accurate, too.

“At Weathernews, we have a core belief that more weather data is better,” said Weathernews managing director Tomohiro Ishibashi.

“So having access to the additional datasets from MoWeather’s vast user community allows us to provide more accurate and safer weather forecasting for all,” he said. “Our advanced algorithms analyze these new datasets and put them in our existing computer forecasting models.”
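Weathernews does not publish its algorithms, so as a purely illustrative sketch, one common way to fold noisy crowdsourced phone readings into a forecast model is to snap each observation to a grid cell and take a robust per-cell statistic (a median suppresses faulty sensors better than a mean); all coordinates, readings, and names below are hypothetical:

```python
from collections import defaultdict
from statistics import median

def grid_cell(lat, lon, size_deg=0.5):
    """Snap a coordinate to the centre of a grid cell `size_deg` degrees wide."""
    return (round(lat / size_deg) * size_deg, round(lon / size_deg) * size_deg)

def aggregate(readings, size_deg=0.5):
    """readings: list of (lat, lon, pressure_hPa) tuples from phone barometers.

    Returns one robust pressure value per grid cell, the kind of gridded
    field a forecasting model could assimilate.
    """
    cells = defaultdict(list)
    for lat, lon, hpa in readings:
        cells[grid_cell(lat, lon, size_deg)].append(hpa)
    # Median per cell: a single bad sensor cannot drag the cell value far.
    return {cell: median(vals) for cell, vals in cells.items()}

# Hypothetical readings near Nairobi, one from a faulty sensor (900 hPa).
readings = [
    (-1.29, 36.82, 1012.1),
    (-1.30, 36.81, 1011.8),
    (-1.28, 36.83, 900.0),   # outlier: broken or miscalibrated phone
    (-1.31, 36.80, 1012.4),
]
print(aggregate(readings))
```

Note how the outlier barely moves the cell's value; with a mean it would have pulled the estimate down by tens of hectopascals.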

Weathernews is trying to use observations that most weather companies might regard as interesting but not worth the effort to tailor for computer modeling. For example, photos of clouds are a potential way to ground truth weather satellite imagery, Ishibashi told Mashable.

“For us the picture of the sky… has a lot of information,” he said. (The company’s website refers to such observations as “eye-servation.”)…

Compared to Weathernews’ ambitions, AccuWeather’s recent decision to incorporate crowdsourced data into its iOS app seems more traditional, like a TV weather forecaster adding a few new “weather watchers” to their station’s network during local television’s heyday in the 1980s and 90s.

Now, we’re all weather watchers….(More)”

Datafication and empowerment: How the open data movement re-articulates notions of democracy, participation, and journalism


Paper by Stefan Baack in Big Data & Society: “This article shows how activists in the open data movement re-articulate notions of democracy, participation, and journalism by applying practices and values from open source culture to the creation and use of data. Focusing on the Open Knowledge Foundation Germany and drawing from a combination of interviews and content analysis, it argues that this process leads activists to develop new rationalities around datafication that can support the agency of datafied publics. Three modulations of open source are identified: First, by regarding data as a prerequisite for generating knowledge, activists transform the sharing of source code to include the sharing of raw data. Sharing raw data should break the interpretative monopoly of governments and would allow people to make their own interpretation of data about public issues. Second, activists connect this idea to an open and flexible form of representative democracy by applying the open source model of participation to political participation. Third, activists acknowledge that intermediaries are necessary to make raw data accessible to the public. This leads them to an interest in transforming journalism to become an intermediary in this sense. At the same time, they try to act as intermediaries themselves and develop civic technologies to put their ideas into practice. The article concludes by suggesting that the practices and ideas of open data activists are relevant because they illustrate the connection between datafication and open source culture and help to understand how datafication might support the agency of publics and actors outside big government and big business….(More)”

Crowdsourcing: a survey of applications


Paper by Jayshri Namdeorao Ganthade and Sunil R. Gupta: “Crowdsourcing, itself a multidisciplinary field, can be well served by incorporating theories and methods from affective computing. We present various applications based on crowdsourcing. Research on its principles and methods can enable general problems to be solved via human computation systems. Crowdsourcing is an act of outsourcing tasks to a large group of people through an open request via the Internet. It has become popular among social scientists as a way to recruit research participants from the general public for studies. Crowdsourcing is introduced as a new online distributed problem-solving model in which networked people collaborate to complete a task and produce a result. However, the idea of crowdsourcing is not new, and can be traced back to Charles Darwin. Darwin was interested in studying the universality of facial expressions in conveying emotions. This required a large amount of data, so he had to draw on a global population to reach more general conclusions.
This paper provides an introduction to crowdsourcing, guidelines for using it, and its applications in various fields. Finally, the article draws conclusions based on these applications….(More)”.

AI tool turns complicated legal contracts into simple visual charts


Springwise: “We have seen a host of work-related apps that aim to make tedious office tasks more approachable — there is a plugin that can find files without knowing the title, and a tracking tool that analyzes competitors’ online strategies. Joining this is Beagle, an intelligent contract-analysis tool that provides users with a graphical summary of lengthy documents in seconds. It is a time-saving tool that translates complicated documents from elusive legal language into comprehensive visual summaries.

The Beagle system is powered by self-learning artificial intelligence which learns the client’s preferences and adapts accordingly. Users begin by dropping a file into the app. The AI — trained by lawyers and NLP experts — then converts the information into a single-page document. It processes the contract at a rate of one page per 0.05 seconds and highlights key information, displaying it in easy-to-read graphs and charts. The system also comes with built-in collaboration tools so multiple users can edit and export the files….(More)”

Digital government evolution: From transformation to contextualization


Paper by Tomasz Janowski in Government Information Quarterly: “The Digital Government landscape is continuously changing to reflect how governments are trying to find innovative digital solutions to social, economic, political and other pressures, and how they transform themselves in the process. Understanding and predicting such changes is important for policymakers, government executives, researchers and all those who prepare, make, implement or evaluate Digital Government decisions. This article argues that the concept of Digital Government evolves toward more complexity and greater contextualization and specialization, similar to evolution-like processes that lead to changes in cultures and societies. To this end, the article presents a four-stage Digital Government Evolution Model comprising Digitization (Technology in Government), Transformation (Electronic Government), Engagement (Electronic Governance) and Contextualization (Policy-Driven Electronic Governance) stages; provides some evidence in support of this model drawing upon the study of the Digital Government literature published in Government Information Quarterly between 1992 and 2014; and presents a Digital Government Stage Analysis Framework to explain the evolution. As the article consolidates a representative body of the Digital Government literature, it could also be used for defining and integrating future research in the area….(More)”

From Data to Impact: How the Governance Data Community Can Understand Users and Influence Government Decisions


Nicole Anand at Reboot: “Governance data initiatives are proliferating. And we’re making progress: As a community, we’ve moved from a focus on generating data to caring more about how that data is used. But are these efforts having the impact that we want? Are they influencing how governments make decisions?

Those of us who work with governance data (that is, data on public services or, say, legislative or fiscal issues) recognize its potential to increase government accountability. Yet as a community, we don’t know enough about what impact we’ve had. The one thing we do know is that the impact so far is more limited than we’d like—given our own expectations and the investments that donors have made.

In partnership with the Open Society Foundations’ (OSF) Information Program, we set out to investigate these questions, which we see as increasingly pressing as we expand our own work in this area. Today, we are excited to share the results of a new scoping study that presents further research insights, as well as implications and recommendations for donors….(More)”

We are data: the future of machine intelligence


Douglas Coupland in the Financial Times: “…But what if the rise of Artificial Intuition instead blossoms under the aegis of theology or political ideology? With politics we can see an interesting scenario developing in Europe, where Google is by far the dominant search engine. What is interesting there is that people are perfectly free to use Yahoo or Bing yet they choose to stick with Google and then they get worried about Google having too much power — which is an unusual relationship dynamic, like an old married couple. Maybe Google could be carved up into baby Googles? But no. How do you break apart a search engine? AT&T was broken into seven more or less regional entities in 1982 but you can’t really do that with a search engine. Germany gets gaming? France gets porn? Holland gets commerce? It’s not a pie that can be sliced.

The time to fix this data search inequity isn’t right now, either. The time to fix this problem was 20 years ago, and the only country that got it right was China, which now has its own search engine and social networking systems. But were the British or Spanish governments — or any other government — to say, “OK, we’re making our own proprietary national search engine”, that would somehow be far scarier than having a private company running things. (If you want paranoia, let your government control what you can and can’t access — which is what you basically have in China. Irony!)

The tendency in theocracies would almost invariably be one of intense censorship, extreme limitations of access, as well as machine intelligence endlessly scouring its system in search of apostasy and dissent. The Americans, on the other hand, are desperately trying to implement a two-tiered system to monetise information in the same way they’ve monetised medicine, agriculture, food and criminality. One almost gets misty-eyed looking at North Koreans who, if nothing else, have yet to have their neurons reconfigured, thus turning them into a nation of click junkies. But even if they did have an internet, it would have only one site to visit, and its name would be gloriousleader.nk.

. . .

To summarise. Everyone, basically, wants access to and control over what you will become, both as a physical and metadata entity. We are also on our way to a world of concrete walls surrounding any number of niche beliefs. On our journey, we get to watch machine intelligence become profoundly more intelligent while, as a society, we get to watch one labour category after another be systematically burped out of the labour pool. (Doug’s Law: An app is only successful if it puts a lot of people out of work.)…(More)”

Scientists Are Hoarding Data And It’s Ruining Medical Research


Ben Goldacre at Buzzfeed: “We like to imagine that science is a world of clean answers, with priestly personnel in white coats, emitting perfect outputs, from glass and metal buildings full of blinking lights.

The reality is a mess. A collection of papers published on Wednesday — on one of the most commonly used medical treatments in the world — show just how bad things have become. But they also give hope.

The papers are about deworming pills that kill parasites in the gut, at extremely low cost. In developing countries, battles over the usefulness of these drugs have become so contentious that some people call them “The Worm Wars.”…

This “deworm everybody” approach has been driven by a single, hugely influential trial published in 2004 by two economists, Edward Miguel and Michael Kremer. This trial, done in Kenya, found that deworming whole schools improved children’s health, school performance, and school attendance. What’s more, these benefits apparently extended to children in schools several miles away, even when those children didn’t get any deworming tablets (presumably, people assumed, by interrupting worm transmission from one child to the next).

A decade later, in 2013, these two economists did something that very few researchers have ever done. They handed over their entire dataset to independent researchers on the other side of the world, so that their analyses could be checked in public. What happened next has every right to kick through a revolution in science and medicine….

This kind of statistical replication is almost vanishingly rare. A recent study set out to find all well-documented cases in which the raw data from a randomized trial had been reanalysed. It found just 37, out of many thousands. What’s more, only five were conducted by entirely independent researchers, people not involved in the original trial.

These reanalyses were more than mere academic fun and games. The ultimate outcomes of the trials changed, with terrifying frequency: One-third of them were so different that the take-home message of the trial shifted.

This matters. Medical trials aren’t conducted out of an abstract philosophical interest, for the intellectual benefit of some rarefied class in ivory towers. Researchers do trials as a service, to find out what works, because they intend to act on the results. It matters that trials get an answer that is not just accurate, but also reliable.

So here we have an odd situation. Independent reanalysis can improve the results of clinical trials, and help us not go down blind alleys, or give the wrong treatment to the wrong people. It’s pretty cheap, compared to the phenomenal administrative cost of conducting a trial. And it spots problems at an alarmingly high rate.

And yet, this kind of independent check is almost never done. Why not? Partly, it’s resources. But more than that, when people do request raw data, all too often the original researchers duck, dive, or simply ignore requests….

Two years ago I published a book on problems in medicine. Front and center in this howl was “publication bias,” the problem of clinical trial results being routinely and legally withheld from doctors, researchers, and patients. The best available evidence — from dozens of studies chasing results for completed trials — shows that around half of all clinical trials fail to report their results. The same is true of industry trials and academic trials. What’s more, trials with positive results are about twice as likely to post results, so we see a biased half of the literature.
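The arithmetic behind that “biased half” can be checked with a toy simulation. The rates below are illustrative assumptions chosen only to satisfy the article’s two figures: roughly half of all trials report results, and positive trials report about twice as often as negative ones:

```python
import random

random.seed(42)  # reproducible illustration

N = 10_000                 # simulated trials
true_positive_rate = 0.5   # assume half of trials are genuinely positive
p_report_positive = 2 / 3  # positive trials report at this rate...
p_report_negative = 1 / 3  # ...negative trials at half that rate
# Overall: 0.5 * (2/3) + 0.5 * (1/3) = 0.5, i.e. half of trials report.

published_positive = published_negative = 0
for _ in range(N):
    positive = random.random() < true_positive_rate
    p_report = p_report_positive if positive else p_report_negative
    if random.random() < p_report:
        if positive:
            published_positive += 1
        else:
            published_negative += 1

published = published_positive + published_negative
print(f"Fraction of trials published: {published / N:.2f}")
print(f"Positive share of the published literature: "
      f"{published_positive / published:.2f}")
```

Although half the simulated trials are positive, about two-thirds of the *published* ones are: a reader of the literature sees an effect rate inflated purely by selective reporting.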

This is a cancer at the core of evidence-based medicine. When half the evidence is withheld, doctors and patients cannot make informed decisions about which treatment is best. When I wrote about this, various people from the pharmaceutical industry cropped up to claim that the problem was all in the past. So I befriended some campaigners, we assembled a group of senior academics, and started the AllTrials.net campaign with one clear message: “All trials must be registered, with their full methods and results reported.”

Dozens of academic studies had been published on the issue, and that alone clearly wasn’t enough. So we started collecting signatures, and we now have more than 85,000 supporters. At the same time we sought out institutional support. Eighty patient groups signed up in the first month, with hundreds more since then. Some of the biggest research funders, and even government bodies, have now signed up.

This week we’re announcing support from a group of 85 pension funds and asset managers, representing more than 3.5 trillion euros in funds, who will be asking the pharma companies they invest in to make plans to ensure that all trials — past, present, and future — report their results properly. Next week, after two years of activity in Europe, we launch our campaign in the U.S….(More)”