‘Everyone sees everything’: Overhauling Ukraine’s corrupt contracting sector


Open Contracting Stories: “When Yuriy Bugay, a Maidan revolutionary, showed up for work at Kiev’s public procurement office for the first time, it wasn’t the most uplifting sight. The 27-year-old had left his job in the private sector after joining a group of activists during the protests in Kiev’s main square, with dreams of reforming Ukraine’s dysfunctional public institutions. They chose one of the country’s most broken sectors, public procurement, as their starting point, and within a year, their project had been adopted by Ukraine’s economy ministry, Bugay’s new employer.

…The initial team behind the reform was made up of an eclectic bunch of several hundred volunteers that included NGO workers, tech experts, businesspeople and civil servants. They decided the best way to make government deals more open was to create an e-procurement system, which they called ProZorro (meaning “transparent” in Ukrainian). Built on open source software, the system has been designed to make it possible for government bodies to conduct procurement deals electronically, in a transparent manner, while also making the state’s information about public contracts easily accessible online for anyone to see. Although it was initially conceived as a tool for fighting corruption, the potential benefits of the system are much broader — increasing competition, reducing the time and money spent on contracting processes, helping buyers make better decisions and making procurement fairer for suppliers….

In its pilot phase, ProZorro saved over UAH 1.5 billion (US$55 million) for more than 3,900 government agencies and state-owned enterprises across Ukraine. This pilot, which won a prestigious World Procurement Award in 2016, was so successful that Ukraine’s parliament passed a new public procurement law requiring all government contracting to be carried out via ProZorro from 1 August 2016. Since then, potential savings to the procurement budget have snowballed. As of November 2016, they stand at an estimated UAH 5.97 billion (US$233 million), with more than 15,000 buyers and 47,000 commercial suppliers using the new system.

At the same time, the team behind the project has evolved and professionalized….(More)”

A Guide to Data Innovation for Development – From idea to proof-of-concept


Press Release: “UNDP and UN Global Pulse today released a comprehensive guide on how to integrate new sources of data into development and humanitarian work.

New and emerging data sources such as mobile phone data, social media, remote sensors and satellites have the potential to improve the work of governments and development organizations across the globe.

Entitled ‘A Guide to Data Innovation for Development – From idea to proof-of-concept,’ this publication was developed by practitioners for practitioners. It provides staff of UN agencies and international non-governmental organizations with step-by-step guidance for working with new sources of data.

The guide is the result of a collaboration between UNDP and UN Global Pulse, with support from UN Volunteers. Led by UNDP innovation teams in Europe and Central Asia and the Arab States, six UNDP offices – in Armenia, Egypt, Kosovo[1], fYR Macedonia, Sudan and Tunisia – each completed data innovation projects applicable to development challenges on the ground.

The publication builds on these successful case trials and on the expertise of data innovators from UNDP and UN Global Pulse who managed the design and development of those projects.

It provides practical guidance for jump-starting a data innovation project, from the design phase through the creation of a proof-of-concept.

The guide is structured into three sections – (I) Explore the Problem & System, (II) Assemble the Team and (III) Create the Workplan. Each section comprises a series of tools for completing the steps needed to initiate and design a data innovation project, to engage the right partners and to make sure that adequate privacy and protection mechanisms are applied.

…Download ‘A Guide to Data Innovation for Development – From idea to proof-of-concept’ here.”

Scaling accountability through vertically integrated civil society policy monitoring and advocacy


Working paper by Jonathan Fox: “…argues that the growing field of transparency, participation and accountability (TPA) needs a conceptual reboot, to address the limited traction gained so far on the path to accountability. To inform more strategic approaches and to identify the drivers of more sustainable institutional change, fresh analytical work is needed.

The paper makes the case for one among several possible strategic approaches by distinguishing between ‘scaling up’ and ‘taking scale into account’, going on to examine the different ways that ‘scale’ is used across fields.

It goes on to explain and discuss the strategy of vertical integration, which involves multi-level coordination by civil society organisations of policy monitoring and advocacy, grounded in broad pro-accountability constituencies. Vertical integration is discussed from several different angles, from its roots in political economy to its relationship with citizen voice, its capacity for multi-directional communication, and its relationship with feedback loops.

To spell out how this strategy can empower pro-accountability actors, the paper contrasts varied terms of engagement between state and society, proposing a focus on collaborative coalitions as an alternative to the conventional dichotomy between confrontation and constructive engagement.

The paper continues by reviewing existing multi-level approaches, summarising nine cases – three each in the Philippines, Mexico and India – to demonstrate what can be revealed when TPA initiatives are seen through the lens of scale.

It concludes with a set of broad analytical questions for discussion, followed by testable hypotheses proposed to inform future research agendas. (Download the paper here, and a short summary here)…(More)”

Just good enough data: Figuring data citizenships through air pollution sensing and data stories


Jennifer Gabrys, Helen Pritchard, and Benjamin Barratt in Big Data & Society: “Citizen sensing, or the use of low-cost and accessible digital technologies to monitor environments, has contributed to new types of environmental data and data practices. Through a discussion of participatory research into air pollution sensing with residents of northeastern Pennsylvania concerned about the effects of hydraulic fracturing, we examine how new technologies for generating environmental data also give rise to new problems for analysing and making sense of citizen-gathered data. After first outlining the citizen data practices we collaboratively developed with residents for monitoring air quality, we then describe the data stories that we created along with citizens as a method and technique for composing data. We further mobilise the concept of ‘just good enough data’ to discuss the ways in which citizen data gives rise to alternative ways of creating, valuing and interpreting datasets. We specifically consider how environmental data raises different concerns and possibilities in relation to Big Data, which can be distinct from security or social media studies. We then suggest ways in which citizen datasets could generate different practices and interpretive insights that go beyond the usual uses of environmental data for regulation, compliance and modelling to generate expanded data citizenships….(More)”

Maybe the Internet Isn’t a Fantastic Tool for Democracy After All


In New York Magazine: “My favorite story about the internet is the one about the anonymous Japanese guy who liberated Czechoslovakia. In 1989, as open dissent was spreading across the country, dissidents were attempting to coordinate efforts outside the watchful eye of Czechoslovak state security. The internet was a nascent technology, and the cops didn’t use it; modems were banned, and activists were able to use only those they could smuggle over the border, one at a time. Enter our Japanese guy. Bruce Sterling, who first told the story of the Japanese guy in a 1995 Wired article, says he talked to four different people who’d met the quiet stranger, but no one knew his name. What really mattered, anyway, is what he brought with him: “a valise full of brand-new and unmarked 2400-baud Taiwanese modems,” which he handed over to a group of engineering students in Prague before walking away. “The students,” Sterling would later write, “immediately used these red-hot 2400-baud scorcher modems to circulate manifestos, declarations of solidarity, rumors, and riot news.” Unrest expanded, the opposition grew, and within months, the Communist regime collapsed.

Is it true? Were free modems the catalyst for the Velvet Revolution? Probably not. But it’s a good story, the kind whose logic and lesson have become so widely understood — and so foundational to the worldview of Silicon Valley — as to make its truth irrelevant. Isn’t the best way to fortify the town square by giving more people access to it? And isn’t it nice to know, as one storied institution and industry after another falls to the internet’s disrupting sword, that everything will be okay in the end — that there might be some growing pains, but connecting billions of people to one another is both inevitable and good? Free speech will expand, democracy will flower, and we’ll all be rich enough to own MacBooks. The new princes of Silicon Valley will lead us into the rational, algorithmically enhanced, globally free future.

Or, they were going to, until earlier this month. The question we face now is: What happens when the industry destroyed is professional politics, the institutions leveled are the same few that prop up liberal democracy, and the values the internet disseminates are racism, nationalism, and demagoguery?

Powerful undemocratic states like China and Russia have for a while now put the internet to use to mislead the public, create the illusion of mass support, and either render opposition invisible or expose it to targeting…(More)”

Towards Scalable Governance: Sensemaking and Cooperation in the Age of Social Media


Iyad Rahwan in Philosophy & Technology: “Cybernetics, or self-governance of animal and machine, requires the ability to sense the world and to act on it in an appropriate manner. Likewise, self-governance of a human society requires groups of people to collectively sense and act on their environment. I argue that the evolution of political systems is characterized by a series of innovations that attempt to solve (among others) two ‘scalability’ problems: scaling up a group’s ability to make sense of an increasingly complex world, and to cooperate in increasingly larger groups. I then explore some recent efforts toward using the Internet and social media to provide alternative means for addressing these scalability challenges, under the banners of crowdsourcing and computer-supported argumentation. I present some lessons from those efforts about the limits of technology, and the research directions more likely to bear fruit….(More)”

From policing to news, how algorithms are changing our lives


Carl Miller at The National: “First, write out the numbers one to 100 in 10 rows. Cross out the one. Then circle the two, and cross out all of the multiples of two. Circle the three, and do likewise. Follow those instructions, and you’ve just completed the first three steps of an algorithm, and an incredibly ancient one. Twenty-three centuries ago, Eratosthenes sat in the great library of Alexandria, using this process (it is called Eratosthenes’ Sieve) to find and separate prime numbers. Algorithms are nothing new; indeed, even the word itself is old. Fifteen centuries after Eratosthenes, Algoritmi de numero Indorum appeared on the bookshelves of European monks, and with it, the word to describe something very simple in essence: follow a series of fixed steps, in order, to achieve a given answer to a given problem. That’s it, that’s an algorithm. Simple.
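
In modern code, the sieve really is that simple. Here is a minimal Python sketch of the procedure just described (the function name and the limit of 100 are our own choices for illustration):

```python
def eratosthenes_sieve(limit):
    """Return all prime numbers up to and including `limit`."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False  # 0 and 1 are not prime
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            # n is prime: cross out all of its remaining multiples
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False
    return [n for n in range(2, limit + 1) if is_prime[n]]

print(eratosthenes_sieve(100))  # [2, 3, 5, 7, 11, ..., 89, 97]
```

Run on the numbers one to 100, it returns the 25 primes up to 97, exactly as the pencil-and-paper version does.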

Except that, of course, the story of algorithms is not so simple, nor so humble. In the shocked wake of Donald Trump’s victory in the United States presidential election, a culprit needed to be found to explain what had happened. What had, against the odds, and in the face of thousands of polls, caused this tectonic shift in US political opinion? Soon the finger was pointed. On social media, and especially on Facebook, it was alleged that pro-Trump stories, based on inaccurate information, had spread like wildfire, often eclipsing real news and honestly checked facts.
But no human editor was thrust into the spotlight. What took centre stage was an algorithm: Facebook’s news algorithm. It was this, critics said, that was responsible for allowing the “fake news” to circulate. This algorithm wasn’t humbly finding prime numbers; it was responsible for the news that you saw (and of course didn’t see) on the largest source of news in the world. This algorithm had somehow risen to become more powerful than any newspaper editor in the world, powerful enough to possibly throw an election.
So why all the fuss? Something is now happening in society that is throwing algorithms into the spotlight. They have taken on a new significance, even an allure and mystique. Algorithms are simply tools, but a web of new technologies is vastly increasing the power that these tools have over our lives. The startling leaps forward in artificial intelligence have meant that algorithms have learned how to learn, and have become capable of accomplishing tasks and tackling problems that they had never been able to before. Their learning is fuelled by more data than ever before, collected, stored and connected with the constellations of sensors, data farms and services that have ushered in the age of big data.

Algorithms are also doing more things, whether welding, driving or cooking, thanks to robotics. Wherever there is some kind of exciting innovation happening, algorithms are rarely far away. They are being used in more fields, for more things, than ever before and are incomparably, incomprehensibly more capable than the algorithms recognisable to Eratosthenes….(More)”

The data-driven social worker


NESTA: “Newcastle City Council has been using data to change the way it delivers long-term social work to vulnerable children and families.

Social workers have data analysts working alongside them. This is helping them to identify common factors among types of cases, understand the root causes of social problems and create more effective (and earlier) interventions.

What is it?

Social work teams have an embedded data analyst, whose role is to look for hypotheses to test and analyses to perform that offer insight into how best to support families.

Their role is not purely quantitative; they are expected to identify patterns, and undertake deep-dive or case study analysis. The data analysts also test what works, measuring the success of externally commissioned services, along with cost information.

While each social worker only has knowledge of their own individual cases, data analysts have a bird’s-eye view of the whole team’s activity, enabling them to look across sets of families for common patterns.

How does it work?

Data analysts are responsible for maintaining ChildStat, a data dashboard that social workers use to help manage their caseloads. The data insights found by the embedded analysts can highlight the need to work in a different way.

For example, one unit works with children at risk of physical abuse. Case file analysis of the parents’ mental health histories found that 20% of children had a parent with a personality disorder, while 60-70% had a parent who had themselves experienced sexual or physical abuse as a child.

Traditional social work methods may not have uncovered this insight, which led Newcastle to look for new ways of working with these types of families.
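
Newcastle’s own tooling isn’t shown in the piece, but the caseload-level prevalence analysis described above might look something like this pandas sketch (all field names and values are invented for illustration):

```python
import pandas as pd

# Hypothetical case-file extract: one row per child, with parental
# risk factors already coded up by the embedded analyst.
cases = pd.DataFrame({
    "child_id": [101, 102, 103, 104, 105],
    "parent_personality_disorder": [True, False, False, True, False],
    "parent_abused_as_child": [True, True, False, True, True],
})

# The bird's-eye view: prevalence of each risk factor across the caseload.
prevalence = cases.drop(columns="child_id").mean() * 100
print(prevalence.round(1))
# parent_personality_disorder    40.0
# parent_abused_as_child         80.0
```

The point of embedding the analyst is that no individual social worker, seeing only their own cases, could compute figures like these across the whole team’s caseload.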

Data analysis has also helped to identify the factors that are most predictive of a child becoming NEET (not in education, employment or training), enabling the team to review their approach to working with families and focus on earlier intervention….(More)”

Big Data Coming In Faster Than Biomedical Researchers Can Process It


Richard Harris at NPR: “Biomedical research is going big-time: Megaprojects that collect vast stores of data are proliferating rapidly. But scientists’ ability to make sense of all that information isn’t keeping up.

This conundrum took center stage at a meeting of patient advocates, called Partnering For Cures, in New York City on Nov. 15.

On the one hand, there’s an embarrassment of riches, as billions of dollars are spent on these megaprojects.

There’s the White House’s Cancer Moonshot (which seeks to make 10 years of progress in cancer research over the next five years), the Precision Medicine Initiative (which is trying to recruit a million Americans to glean hints about health and disease from their data), the BRAIN Initiative (to map the neural circuits and understand the mechanics of thought and memory) and the International Human Cell Atlas Initiative (to identify and describe all human cell types).

“It’s not just that any one data repository is growing exponentially, the number of data repositories is growing exponentially,” said Dr. Atul Butte, who leads the Institute for Computational Health Sciences at the University of California, San Francisco.

One of the most remarkable efforts is the federal government’s push to get doctors and hospitals to put medical records in digital form. That shift to electronic records is costing billions of dollars — including more than $28 billion in federal incentives alone to hospitals, doctors and others to adopt them. The investment is creating a vast data repository that could potentially be mined for clues about health and disease, the way websites and merchants gather data about you to personalize the online ads you see and for other commercial purposes.

But, unlike the data scientists at Google and Facebook, medical researchers have done almost nothing as yet to systematically analyze the information in these records, Butte said. “As a country, I think we’re investing close to zero analyzing any of that data,” he said.

Prospecting for hints about health and disease isn’t going to be easy. The raw data aren’t very robust or reliable. Electronic medical records are often kept in databases that aren’t compatible with one another, at least not without a struggle. Some of the potentially revealing details are also kept as free-form notes, which can be hard to extract and interpret. Errors commonly creep into these records….(More)”

How Should a Society Be?


Brian Christian: “This is another example where AI—in this case, machine-learning methods—intersects with these ethical and civic questions in an ultimately promising and potentially productive way. As a society we have these values in maxim form, like equal opportunity, justice, fairness, and in many ways they’re deliberately vague. This deliberate flexibility and ambiguity are what allow these values to remain a living document that stays relevant. But here we are in this world where we have to say of some machine-learning model: is this racially fair? We have to define these terms, computationally or numerically.

It’s problematic in the short term because we have no idea what we’re doing; we don’t have a way to approach that problem yet. In the slightly longer term—five or ten years—there’s a profound opportunity to come together as a polis and get precise about what we mean by justice or fairness with respect to certain protected classes. Does that mean it’s got an equal false positive rate? Does that mean it has an equal false negative rate? What is the tradeoff that we’re willing to make? What are the constraints that we want to put on this model-building process? That’s a profound question, and we haven’t needed to address it until now. There’s going to be a civic conversation in the next few years about how to make these concepts explicit….(More) (Video)”
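
To make the tradeoff concrete, here is a minimal Python sketch (with invented data) of the two candidate definitions Christian mentions: checking whether a model’s false positive and false negative rates are equal across protected groups.

```python
import pandas as pd

def error_rates_by_group(df, group, label, pred):
    """False positive rate and false negative rate for each group.

    FPR = share of true negatives the model wrongly flags;
    FNR = share of true positives the model misses.
    """
    out = {}
    for g, rows in df.groupby(group):
        fpr = (rows.loc[rows[label] == 0, pred] == 1).mean()
        fnr = (rows.loc[rows[label] == 1, pred] == 0).mean()
        out[g] = {"FPR": fpr, "FNR": fnr}
    return pd.DataFrame(out).T

# Invented predictions for two groups; a fairness audit would compare
# the rows below and ask which gaps, if any, we are willing to accept.
scores = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "label": [1, 0, 0, 1, 1, 0],
    "pred":  [1, 1, 0, 0, 1, 0],
})
print(error_rates_by_group(scores, "group", "label", "pred"))
#    FPR  FNR
# A  0.5  0.0
# B  0.0  0.5
```

As the toy output shows, a model can equalize one error rate across groups while leaving the other badly skewed, which is exactly the kind of constraint choice the excerpt argues a polis will have to make explicit.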