Towards Scalable Governance: Sensemaking and Cooperation in the Age of Social Media


Iyad Rahwan in Philosophy & Technology: “Cybernetics, or self-governance of animal and machine, requires the ability to sense the world and to act on it in an appropriate manner. Likewise, self-governance of a human society requires groups of people to collectively sense and act on their environment. I argue that the evolution of political systems is characterized by a series of innovations that attempt to solve (among others) two ‘scalability’ problems: scaling up a group’s ability to make sense of an increasingly complex world, and to cooperate in increasingly larger groups. I then explore some recent efforts toward using the Internet and social media to provide alternative means for addressing these scalability challenges, under the banners of crowdsourcing and computer-supported argumentation. I present some lessons from those efforts about the limits of technology, and the research directions more likely to bear fruit….(More)”

From policing to news, how algorithms are changing our lives


Carl Miller at The National: “First, write out the numbers one to 100 in 10 rows. Cross out the one. Then circle the two, and cross out all of the multiples of two. Circle the three, and do likewise. Follow those instructions, and you’ve just completed the first three steps of an algorithm, and an incredibly ancient one. Twenty-three centuries ago, Eratosthenes was sitting in the great library of Alexandria, using this process (it is called Eratosthenes’ Sieve) to find and separate prime numbers. Algorithms are nothing new; indeed, even the word itself is old. Fifteen centuries after Eratosthenes, Algoritmi de numero Indorum appeared on the bookshelves of European monks, and with it, the word to describe something very simple in essence: follow a series of fixed steps, in order, to achieve a given answer to a given problem. That’s it; that’s an algorithm. Simple.
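Those instructions translate almost directly into code. Here is a minimal Python sketch of the sieve Miller describes, finding the primes up to 100; the function and variable names are illustrative rather than taken from the article.

```python
def sieve_of_eratosthenes(limit=100):
    """Return the prime numbers up to `limit` using Eratosthenes' Sieve."""
    # Start by assuming every number from 2 upwards could be prime.
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False  # 0 and 1 are crossed out by definition

    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            # "Circle" n, then cross out every remaining multiple of n.
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False

    return [n for n, prime in enumerate(is_prime) if prime]

print(sieve_of_eratosthenes(100))  # [2, 3, 5, 7, 11, ..., 97]
```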

Except, of course, that the story of algorithms is not so simple, nor so humble. In the shocked wake of Donald Trump’s victory in the United States presidential election, a culprit needed to be found to explain what had happened. What had, against the odds, and in the face of thousands of polls, caused this tectonic shift in US political opinion? Soon the finger was pointed. On social media, and especially on Facebook, it was alleged that pro-Trump stories, based on inaccurate information, had spread like wildfire, often eclipsing real news and honestly checked facts.
But no human editor was thrust into the spotlight. What took centre stage was an algorithm: Facebook’s news algorithm. It was this, critics said, that was responsible for allowing the “fake news” to circulate. This algorithm wasn’t humbly finding prime numbers; it was responsible for the news that you saw (and of course didn’t see) on the largest source of news in the world. This algorithm had somehow risen to become more powerful than any newspaper editor in the world, powerful enough to possibly throw an election.
So why all the fuss? Something is now happening in society that is throwing algorithms into the spotlight. They have taken on a new significance, even an allure and mystique. Algorithms are simply tools, but a web of new technologies is vastly increasing the power that these tools have over our lives. The startling leaps forward in artificial intelligence have meant that algorithms have learned how to learn, and have become capable of accomplishing tasks and tackling problems that they were never able to achieve before. Their learning is fuelled by more data than ever before, collected, stored and connected with the constellations of sensors, data farms and services that have ushered in the age of big data.

Algorithms are also doing more things: welding, driving or cooking, thanks to robotics. Wherever there is some kind of exciting innovation happening, algorithms are rarely far away. They are being used in more fields, for more things, than ever before, and are incomparably, incomprehensibly more capable than the algorithms recognisable to Eratosthenes….(More)”

The data-driven social worker


NESTA: “Newcastle City Council has been using data to change the way it delivers long-term social work to vulnerable children and families.

Social workers have data analysts working alongside them. This is helping them to identify common factors among types of cases, understand the root causes of social problems and create more effective (and earlier) interventions.

What is it?

Social work teams have an embedded data analyst, whose role is to look for hypotheses to test and analyses to perform that offer insight into how best to support families.

Their role is not purely quantitative; they are expected to identify patterns and undertake deep-dive or case-study analysis. The data analysts also test what works, measuring the success of externally commissioned services along with their cost.

While each social worker only has knowledge of their own individual cases, data analysts have a bird’s-eye view of the whole team’s activity, enabling them to look across sets of families for common patterns.

How does it work?

Data analysts are responsible for maintaining ChildStat, a data dashboard that social workers use to help manage their caseloads. The data insights found by the embedded analysts can highlight the need to work in a different way.

For example, one unit works with children at risk of physical abuse. Case file analysis of the parents’ mental health histories found that 20% of children had a parent with a personality disorder, while 60-70% had a parent who had themselves experienced sexual or physical abuse as a child.

Traditional social work methods may not have uncovered this insight, which led Newcastle to look for new responses to working with these types of families.

Data analysis has also helped to identify the factors that are most predictive of a child becoming NEET (not in education, employment or training), enabling the team to review their approach to working with families and focus on earlier intervention….(More)”

Big Data Coming In Faster Than Biomedical Researchers Can Process It


Richard Harris at NPR: “Biomedical research is going big-time: Megaprojects that collect vast stores of data are proliferating rapidly. But scientists’ ability to make sense of all that information isn’t keeping up.

This conundrum took center stage at a meeting of patient advocates, called Partnering For Cures, in New York City on Nov. 15.

On the one hand, there’s an embarrassment of riches, as billions of dollars are spent on these megaprojects.

There’s the White House’s Cancer Moonshot (which seeks to make 10 years of progress in cancer research over the next five years), the Precision Medicine Initiative (which is trying to recruit a million Americans to glean hints about health and disease from their data), the BRAIN Initiative (to map the neural circuits and understand the mechanics of thought and memory) and the International Human Cell Atlas Initiative (to identify and describe all human cell types).

“It’s not just that any one data repository is growing exponentially; the number of data repositories is growing exponentially,” said Dr. Atul Butte, who leads the Institute for Computational Health Sciences at the University of California, San Francisco.

One of the most remarkable efforts is the federal government’s push to get doctors and hospitals to put medical records in digital form. That shift to electronic records is costing billions of dollars, including more than $28 billion in federal incentives alone to hospitals, doctors and others to adopt them. The investment is creating a vast data repository that could potentially be mined for clues about health and disease, much as websites and merchants gather data about you to personalize the online ads you see and for other commercial purposes.

But, unlike the data scientists at Google and Facebook, medical researchers have done almost nothing as yet to systematically analyze the information in these records, Butte said. “As a country, I think we’re investing close to zero analyzing any of that data,” he said.

Prospecting for hints about health and disease isn’t going to be easy. The raw data aren’t very robust and reliable. Electronic medical records are often kept in databases that aren’t compatible with one another, at least without a struggle. Some of the potentially revealing details are also kept as free-form notes, which can be hard to extract and interpret. Errors commonly creep into these records….(More)”

How Should a Society Be?


Brian Christian: “This is another example where AI—in this case, machine-learning methods—intersects with these ethical and civic questions in an ultimately promising and potentially productive way. As a society we have these values in maxim form, like equal opportunity, justice, fairness, and in many ways they’re deliberately vague. This deliberate flexibility and ambiguity are what allow them to function as a living document that stays relevant. But here we are in this world where we have to say of some machine-learning model, is this racially fair? We have to define these terms, computationally or numerically.

It’s problematic in the short term because we have no idea what we’re doing; we don’t have a way to approach that problem yet. In the slightly longer term—five or ten years—there’s a profound opportunity to come together as a polis and get precise about what we mean by justice or fairness with respect to certain protected classes. Does that mean it’s got an equal false positive rate? Does that mean it has an equal false negative rate? What is the tradeoff that we’re willing to make? What are the constraints that we want to put on this model-building process? That’s a profound question, and we haven’t needed to address it until now. There’s going to be a civic conversation in the next few years about how to make these concepts explicit….(More) (Video)”
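To see why those questions have to be answered in code, here is a small, hypothetical Python sketch that computes a classifier’s false positive and false negative rates separately for each protected group; the record format, group labels and field names are invented for illustration and are not from the original conversation.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute false positive and false negative rates per protected group.

    Each record is a dict with keys 'group', 'label' (true outcome, 0/1)
    and 'prediction' (model output, 0/1). All names are illustrative.
    """
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for r in records:
        c = counts[r["group"]]
        if r["label"] == 1:
            c["pos"] += 1
            if r["prediction"] == 0:
                c["fn"] += 1  # a true case the model missed
        else:
            c["neg"] += 1
            if r["prediction"] == 1:
                c["fp"] += 1  # someone flagged who should not have been

    return {
        group: {
            "false_positive_rate": c["fp"] / c["neg"] if c["neg"] else None,
            "false_negative_rate": c["fn"] / c["pos"] if c["pos"] else None,
        }
        for group, c in counts.items()
    }

# Toy example comparing two groups, A and B.
records = [
    {"group": "A", "label": 1, "prediction": 1},
    {"group": "A", "label": 0, "prediction": 1},
    {"group": "B", "label": 1, "prediction": 0},
    {"group": "B", "label": 0, "prediction": 0},
]
print(error_rates_by_group(records))
# {'A': {'false_positive_rate': 1.0, 'false_negative_rate': 0.0},
#  'B': {'false_positive_rate': 0.0, 'false_negative_rate': 1.0}}
```

Deciding whether those rates should be equal across groups, and what to trade away when they cannot all be equalised at once, is exactly the civic conversation the excerpt anticipates.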

Making the Case for Evidence-Based Decision-Making


Jennifer Brooks in Stanford Social Innovation Review: “After 15 years of building linkages between evidence, policy, and practice in social programs for children and families, I have one thing to say about our efforts to promote evidence-based decision-making: We have failed to capture the hearts and minds of the majority of decision-makers in the United States.

I’ve worked with state and federal leadership, as well as program administrators in the public and nonprofit spheres. Most of them just aren’t with us. They aren’t convinced that the payoffs of evidence-based practice (the method that uses rigorous tests to assess the efficacy of a given intervention) are worth the extra difficulty or expense of implementing those practices.

Why haven’t we gotten more traction for evidence-based decision-making? Three key reasons: 1) we have wasted time debating whether randomized control trials are the optimal approach, rather than building demand for more data-based decision-making; 2) we oversold the availability of evidence-based practices and underestimated what it takes to scale them; and 3) we did all this without ever asking what problems decision-makers are trying to solve.

If we want to gain momentum for evidence-based practice, we need to focus more on figuring out how to implement such approaches on a larger scale, in a way that uses data to improve programs on an ongoing basis….

We must start by understanding and analyzing the problem the decision-maker wants to solve. We need to offer more than lists of evidence-based strategies or interventions. What outcomes do the decision-makers want to achieve? And what do data tell us about why we aren’t getting those outcomes with current methods?…

None of the following ideas is rocket science, nor am I the first person to say them, but they do suggest ways that we can move beyond our current approaches in promoting evidence-based practice.

1. We need better data.

As Michele Jolin pointed out recently, few federal programs have sufficient resources to build or use evidence. There are limited resources for evaluation and other evidence-building activities, which too often are seen as “extras.” Moreover, many programs at the local, state, and national level have minimal information to use for program management and even fewer staff with the skills required to use it effectively…

2. We should attend equally to practices and to the systems in which they sit.

Systems improvements without changes in practice won’t improve outcomes, but without systems reforms, evidence-based practices will have difficulty scaling up. …

3. You get what you pay for.

One fear I have is that we don’t actually know whether we can get better outcomes in our public systems without spending more money. And yet cost-savings seem to be what we promise when we sell the idea of evidence-based practice to legislatures and budget directors….

4. We need to hold people accountable for program results and promote ongoing improvement.

There is an inherent tension between using data for accountability and using it for program improvement….(More)”

OpenStreetMap in Israel and Palestine – ‘Game changer’ or reproducer of contested cartographies?


Christian Bittner in Political Geography: “In Israel and Palestine, map-making practices have always been entangled with contradictory spatial identities and imbalanced power resources. Although an Israeli narrative has largely dominated the ‘cartographic battlefield’, the latest chapter of this story has not been written yet: collaborative forms of web 2.0 cartographies have restructured power relations in mapping practices and challenged traditional monopolies on map and spatial data production. Thus, we can expect web 2.0 cartographies to be a ‘game changer’ for cartography in Palestine and Israel.

In this paper, I review this assumption with the popular example of OpenStreetMap (OSM). Following a mixed methods approach, I comparatively analyze the genesis of OSM in Israel and Palestine. Although nationalist motives do not play a significant role on either side, it turns out that the project is dominated by Israeli and international mappers, whereas Palestinians have hardly contributed to OSM. As a result, social fragmentations and imbalances between Israel and Palestine are largely reproduced through OSM data. Discussing the low involvement of Palestinians, I argue that OSM’s ground truth paradigm might be a watershed for participation. Presumably, the project’s data are less meaningful in some local contexts than in others. Moreover, the seemingly apolitical approach to map only ‘facts on the ground’ reaffirms present spatio-social order and thus the power relations behind it. Within a Palestinian narrative, however, many aspects of the factual material space might appear not as neutral physical objects but as results of suppression, in which case, any ‘accurate’ spatial representation, such as OSM, becomes objectionable….(More)”
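The abstract does not spell out the quantitative side of the analysis, but one simple proxy for who maps where is to count the distinct contributors in a regional OSM extract. The sketch below, which assumes small .osm XML extracts have been downloaded for each area of interest, uses only the Python standard library; it is an illustration, not the paper’s actual method.

```python
import xml.etree.ElementTree as ET
from collections import Counter

def contributors(osm_xml_path):
    """Count how many OSM objects each username last edited in an .osm extract."""
    counts = Counter()
    # OSM XML elements (node, way, relation) carry a 'user' attribute naming
    # the account behind the current version of that object.
    for _, elem in ET.iterparse(osm_xml_path, events=("end",)):
        if elem.tag in ("node", "way", "relation"):
            user = elem.get("user")
            if user:
                counts[user] += 1
            elem.clear()  # keep memory use flat on larger extracts
    return counts

# Hypothetical usage: compare the contributor base of two regional extracts.
# ramallah = contributors("ramallah.osm")
# tel_aviv = contributors("tel_aviv.osm")
# print(len(ramallah), "vs", len(tel_aviv), "distinct mappers")
```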

How the Circle Line rogue train was caught with data


Daniel Sim at the Data.gov.sg Blog: “Singapore’s MRT Circle Line was hit by a spate of mysterious disruptions in recent months, causing much confusion and distress to thousands of commuters.

Like most of my colleagues, I take a train on the Circle Line to my office at one-north every morning. So on November 5, when my team was given the chance to investigate the cause, I volunteered without hesitation.

 From prior investigations by train operator SMRT and the Land Transport Authority (LTA), we already knew that the incidents were caused by some form of signal interference, which led to loss of signals in some trains. The signal loss would trigger the emergency brake safety feature in those trains and cause them to stop randomly along the tracks.

But the incidents — which first happened in August — seemed to occur at random, making it difficult for the investigation team to pinpoint the exact cause.

We were given a dataset compiled by SMRT that contained the following information:

  • Date and time of each incident
  • Location of incident
  • ID of train involved
  • Direction of train…

LTA and SMRT eventually published a joint press release on November 11 to share the findings with the public….

When we first started, my colleagues and I were hoping to find patterns that may be of interest to the cross-agency investigation team, which included many officers at LTA, SMRT and DSTA. The tidy incident logs provided by SMRT and LTA were instrumental in getting us off to a good start, as minimal cleaning up was required before we could import and analyse the data. We were also gratified by the effective follow-up investigations by LTA and DSTA that confirmed the hardware problems on PV46.

From the data science perspective, we were lucky that incidents happened so close to one another. That allowed us to identify both the problem and the culprit in such a short time. If the incidents were more isolated, the zigzag pattern would have been less apparent, and it would have taken us more time — and data — to solve the mystery….(More).”
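The excerpt does not include the team’s code, but the heart of the analysis can be sketched in a few lines of Python: plot each incident’s position along the line against the time it occurred, and look for the zigzag trace of a single train shuttling back and forth. The CSV file and column names below mirror the fields listed above but are otherwise assumed.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumed columns: 'datetime', 'station', 'train_id', 'direction',
# mirroring the incident fields provided by SMRT.
incidents = pd.read_csv("circle_line_incidents.csv", parse_dates=["datetime"])

# In a real analysis the stations would be ordered by their actual sequence
# along the Circle Line; here each station is simply assigned an index.
station_order = {name: i for i, name in enumerate(incidents["station"].unique())}
incidents["position"] = incidents["station"].map(station_order)

fig, ax = plt.subplots(figsize=(10, 4))
for direction, group in incidents.groupby("direction"):
    ax.scatter(group["datetime"], group["position"], label=direction, s=20)

# A single "rogue" train running up and down the line would leave a zigzag
# trace connecting successive incidents in time and space.
ax.set_xlabel("Time of incident")
ax.set_ylabel("Position along the Circle Line")
ax.legend(title="Direction of affected train")
plt.tight_layout()
plt.show()
```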

Technocracy in America: Rise of the Info-State


Book by Parag Khanna: “American democracy just isn’t good enough anymore. A costly election has done more to divide American society than unite it, while trust in government—and democracy itself—is plummeting. But there are better systems out there, and America would be wise to learn from them. In this provocative manifesto, globalization scholar Parag Khanna tours cutting-edge nations from Switzerland to Singapore to reveal the inner workings that allow them to lead the way in managing the volatility of a fast-changing world while delivering superior welfare and prosperity for their citizens.

The ideal form of government for the complex 21st century is what Khanna calls a “direct technocracy,” one led by experts but perpetually consulting the people through a combination of democracy and data. From a seven-member presidency and a restructured cabinet to replacing the Senate with an Assembly of Governors, Technocracy in America is full of sensible proposals that have been proven to work in the world’s most successful societies. Americans have a choice about whom they elect as president, but they should not wait any longer to redesign their political system along the lines of Khanna’s pragmatic vision….(More)”

The Crowd is Always There: A Marketplace for Crowdsourcing Crisis Response


Presentation by Patrick Meier at the Emergency Social Data Summit organized by the Red Cross …on “Collaborative Crisis Mapping” (the slides are available here): “What I want to expand on is the notion of a “marketplace for crowdsourcing” that I introduced at the Summit. The idea stems from my experience in the field of conflict early warning, the Ushahidi-Haiti deployment and my observations of the Ushahidi-DC and Ushahidi-Russia initiatives.

The crowd is always there. Paid Search & Rescue (SAR) teams and salaried emergency responders aren’t. Nor can they be on the corners of every street, whether that’s in Port-au-Prince, Haiti, Washington DC or Sukkur, Pakistan. But the real first responders, the disaster-affected communities, are always there. Moreover, not all communities are equally affected by a crisis. The challenge is to link those who are most affected with those who are less affected (at least until external help arrives).

This is precisely what PIC Net and the Washington Post did when they partnered to deploy this Ushahidi platform in response to the massive snow storm that paralyzed Washington DC earlier this year. They provided a way for affected residents to map their needs and for those less affected to map the resources they could share to help others. You don’t need to be a professional disaster responder to help your neighbor dig out their car.

More recently, friends at Global Voices launched the most ambitious crowdsourcing initiative in Russia in response to the massive forest fires. But they didn’t use this Ushahidi platform to map the fires. Instead, they customized the public map so that those who needed help could find those who wanted to help. In effect, they created an online marketplace to crowdsource crisis response. You don’t need professional certification in disaster response to drive someone’s grandparents to the next town over.

There’s a lot that disaster-affected populations can do (and already do) to help each other out in times of crisis. What may help is to combine the crowdsourcing of crisis information with what I call crowdfeeding in order to create an efficient marketplace for crowdsourcing response. By crowdfeeding, I mean taking crowdsourced information and feeding it right back to the crowd. Surely they need that information as much if not more than external, paid responders who won’t get to the scene for hours or days….(More)”