4 reasons why Data Collaboratives are key to addressing migration


Stefaan Verhulst and Andrew Young at the Migration Data Portal: “If every era poses its dilemmas, then our current decade will surely be defined by questions over the challenges and opportunities of a surge in migration. The issues in addressing migration safely, humanely, and for the benefit of communities of origin and destination are varied and complex, and today’s public policy practices and tools are not adequate. Increasingly, it is clear, we need not only new solutions but also new, more agile, methods for arriving at solutions.

Data are central to meeting these challenges and to enabling public policy innovation in a variety of ways. Yet, for all of data’s potential to address public challenges, the truth remains that most data generated today are in fact collected by the private sector. These data contain tremendous possible insights and avenues for innovation in how we solve public problems. But because of access restrictions, privacy concerns and often limited data science capacity, their vast potential often goes untapped.

Data Collaboratives offer a way around this limitation.

Data Collaboratives: A new form of Public-Private Partnership for a Data Age

Data Collaboratives are an emerging form of partnership, typically between the private and public sectors, but often also involving civil society groups and the education sector. Now in use across various countries and sectors, from health to agriculture to economic development, they allow for the opening and sharing of information held in the private sector, in the process freeing data silos up to serve public ends.

Although still fledgling, we have begun to see instances of Data Collaboratives implemented toward solving specific challenges within the broad and complex refugee and migrant space. As the examples we describe below suggest (which we examine in more detail in the Stanford Social Innovation Review), the use of such Collaboratives is geographically dispersed and diffuse; there is an urgent need to pull together a cohesive body of knowledge to more systematically analyze what works, and what doesn’t.

This is something we have started to do at the GovLab. We have analyzed a wide variety of Data Collaborative efforts, across geographies and sectors, with a goal of understanding when and how they are most effective.

The benefits of Data Collaboratives in the migration field

As part of our research, we have identified four main value propositions for the use of Data Collaboratives in addressing different elements of the multi-faceted migration issue. …(More)”.

Most Maps of the New Ebola Outbreak Are Wrong


Ed Yong in The Atlantic: “Almost all the maps of the outbreak zone that have thus far been released contain mistakes of this kind. Different health organizations all seem to use their own maps, most of which contain significant discrepancies. Things are roughly in the right place, but their exact positions can be off by miles, as can the boundaries between different regions.

Sinai, a cartographer at UCLA, has been working with the Ministry of Health to improve the accuracy of the Congo’s maps, and flew over on Saturday at their request. For each health zone within the outbreak region, Sinai compiled a list of the constituent villages, plotted them using the most up-to-date sources of geographical data, and drew boundaries that include these places and no others. The maps at the top of this piece show the before (left) and after (right) images….

Consider Bikoro, the health zone where the outbreak may have originated, and where most cases are found. Sinai took a list of all Bikoro’s villages, plotted them using the most up-to-date sources of geographical data, and drew a boundary that includes these places and no others. This new shape is roughly similar to the one on current maps, but with critical differences. Notably, existing maps have the village of Ikoko Impenge—one of the epicenters of the outbreak—outside the Bikoro health zone, when it actually lies within the zone.

 “These visualizations are important for communicating the reality on the ground to all levels of the health hierarchy, and to international partners who don’t know the country,” says Mathias Mossoko, the head of disease surveillance data in DRC.

“It’s really important for the outbreak response to have real and accurate data,” adds Bernice Selo, who leads the cartographic work from the Ministry of Health’s command center in Kinshasa. “You need to know exactly where the villages are, where the health facilities are, where the transport routes and waterways are. All of this helps you understand where the outbreak is, where it’s moving, how it’s moving. You can see which villages have the highest risk.”

To be clear, there’s no evidence that these problems are hampering the response to the current outbreak. It’s not like doctors are showing up in the middle of the forest, wondering why they’re in the wrong place. “Everyone on the ground knows where the health zones start and end,” says Sinai. “I don’t think this will make or break the response. But you surely want the most accurate data.”

It feels unusual to not have this information readily at hand, especially in an era when digital maps are so omnipresent and so supposedly truthful. If you search for San Francisco on Google Maps, you can be pretty sure that what comes up is actually where San Francisco is. On Google Street View, you can even walk along a beach at the other end of the world….

But the Congo is a massive country—a quarter the size of the United States with considerably fewer resources. Until very recently, they haven’t had the resources to get accurate geolocalized data. Instead, the boundaries of the health zones and their constituent “health areas,” as well as the position of specific villages, towns, rivers, hospitals, clinics, and other landmarks, are often based on local knowledge and hand-drawn maps. Here’s an example, which I saw when I visited the National Institute for Biomedical Research in March. It does the job, but it’s clearly not to scale….(More)”.

Citizen-generated evidence for a more sustainable and healthy food system


Research Report by Bill Vorley:  “Evidence generation by and with low-income citizens is particularly important if policy makers are to improve understanding of people’s diets and the food systems they use, in particular the informal economy. The informal food economy is the main route for low-income communities to secure their food, and is an important source of employment, especially for women and youth. The very nature of informality means that the realities of poor people’s lives are often invisible to policymakers. This invisibility is a major factor in exclusion and results in frequent mismatches between policy and local realities. This paper focuses on citizen-generated evidence as a means for defending and improving the food system of the poor. It clearly outlines a range of approaches to citizen-generated evidence including primary data collection and citizen access to and use of existing information….(More)”.

Blockchain as a force for good: How this technology could transform the sharing economy


Aaron Fernando at Shareable: “The volatility in the price of cryptocurrencies doesn’t matter to restaurateur Helena Fabiankovic, who started Baba’s Pierogies in Brooklyn with her partner Robert in 2015. Yet she and her business are already positioned to reap the real-world benefits of the technology that underpins these digital currencies — the blockchain — and they will be at the forefront of a sustainable, community-based peer-to-peer energy revolution because of it.

So what does a restaurateur have to do with the blockchain and local energy? Fabiankovic is one of the early participants in the Brooklyn Microgrid, a project of the startup LO3 Energy that uses a combination of innovative technologies — blockchain and smart meters — to operate a virtual microgrid in the borough of Brooklyn in New York City, New York. This microgrid enables residents to buy and sell green energy directly to their neighbors at much better rates than if they only interacted with centralized utility providers.

Just as we don’t pay much attention to the critical infrastructure that powers our digital world and exists just out of sight — from the Automated Clearing House (ACH), which undergirds our financial system, to the undersea cables that enable the Internet to be globally useful — blockchain is likely to change our lives in ways that will eventually be invisible. In the sharing economy, we have traditionally just used existing infrastructure and built platforms and services on top of it. Considering that those undersea cables are owned by private companies with their own motives and that the locations of ACH data centers are heavily classified, there is a lot to be desired in terms of transparency, resilience, and independence from self-interested third parties. That’s where the open-source, decentralized infrastructure of the blockchain offers the sharing economy much promise and potential.

In the case of Brooklyn Microgrid, which is part of an emerging model for shared energy use via the blockchain, this decentralized infrastructure would allow residents like Fabiankovic to save money and make sustainable choices. Shared ownership and community financing for green infrastructure like solar panels are part of the model. “Everyone can pay a different amount and you can get a proportional amount of energy that’s put off by the panel, based on how much that you own,” says Scott Kessler, director of business development at LO3. “It’s really just a way of crowdfunding an asset.”

The type of blockchain used by the Brooklyn Microgrid makes it possible to collect and communicate data from smart meters every second, so that the price of electricity can be updated in real time and users will still transact with each other using U.S. dollars. The core idea of the Brooklyn Microgrid is to utilize a tailored blockchain to align energy consumption with energy production, and to do this with rapidly updated price information that then changes behavior around energy….(More)”.
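The article does not detail LO3’s implementation, but the two mechanisms it names — proportional returns on a crowdfunded panel and per-interval settlement in U.S. dollars at a real-time price — are simple to sketch. The Python below is a minimal illustration under those assumptions; the participant names, ownership shares, usage figures and price are invented, and nothing here reflects LO3’s actual ledger, smart-meter feeds or APIs.

```python
from dataclasses import dataclass

# Hypothetical ownership stakes in a shared, crowdfunded solar panel (shares sum to 1.0).
OWNERSHIP = {"bakery": 0.5, "pierogi_shop": 0.3, "apartment_4b": 0.2}

@dataclass
class MeterReading:
    """One smart-meter sample: net kWh produced (+) or consumed (-) this interval."""
    participant: str
    net_kwh: float

def allocate_panel_output(total_kwh: float, ownership: dict[str, float]) -> dict[str, float]:
    """Split the panel's output in proportion to each owner's crowdfunded share."""
    return {name: total_kwh * share for name, share in ownership.items()}

def settle_interval(readings: list[MeterReading], price_usd_per_kwh: float) -> dict[str, float]:
    """Credit producers and charge consumers in USD at this interval's posted price."""
    return {r.participant: r.net_kwh * price_usd_per_kwh for r in readings}

if __name__ == "__main__":
    # One interval: the shared panel produced 6 kWh, split by ownership share.
    production = allocate_panel_output(6.0, OWNERSHIP)

    # Each participant's net position = allocated production minus their own use.
    usage = {"bakery": 4.0, "pierogi_shop": 1.0, "apartment_4b": 2.5}
    readings = [MeterReading(p, production[p] - usage[p]) for p in OWNERSHIP]

    # Price posted for this interval (illustrative figure, not real market data).
    balances = settle_interval(readings, price_usd_per_kwh=0.18)
    for participant, delta in balances.items():
        print(f"{participant}: {delta:+.2f} USD this interval")
```

In a real deployment the per-interval entries would be recorded on the shared ledger and the price would come from the platform’s market-matching layer rather than a constant.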

Open Data Charter Measurement Guide


Guide by Ana Brandusescu and Danny Lämmerhirt: “We are pleased to announce the launch of our Open Data Charter Measurement Guide. The guide is a collaborative effort of the Charter’s Measurement and Accountability Working Group (MAWG). It analyses the Open Data Charter principles and how they are assessed based on current open government data measurement tools. Governments, civil society, journalists, and researchers may use it to better understand how they can measure open data activities according to the Charter principles.

What can I find in the Measurement Guide?

  • An executive summary for people who want to quickly understand what measurement tools exist and for what principles.
  • An analysis of how each Charter principle is measured, including a comparison of indicators that are currently used to measure each Charter principle and its commitments. This analysis is based on the open data indicators used by the five largest measurement tools — the Web Foundation’s Open Data Barometer, Open Knowledge International’s Global Open Data Index, Open Data Watch’s Open Data Inventory, OECD’s OURdata Index, and the European Open Data Maturity Assessment. For each principle, we also highlight case studies of how Charter adopters have practically implemented the commitments of that principle.
  • Comprehensive indicator tables show how each Charter principle commitment can be measured. These tables are especially helpful when used to compare how different indices approach the same commitment, and where gaps exist. Here, you can see an example of the indicator tables for Principle 1.
  • A methodology section that details how the Working Group conducted the analysis of mapping existing measurements indices against Charter commitments.
  • A recommended list of resources for anyone who wants to read more about measurement and policy.

The Measurement Guide is available online in the form of a GitBook and in a printable PDF version.

Mapping the economy in real time is almost ‘within our grasp’


Delphine Strauss at the Financial Times: “The goal of mapping economic activity in real time, just as we do for weather or traffic, is “closer than ever to being within our grasp”, according to Andy Haldane, the Bank of England’s chief economist. In recent years, “data has become the new oil . . . and data companies have become the new oil giants”, Mr Haldane told an audience at King’s Business School …

But economics and finance have been “rather reticent about fully embracing this oil-rush”, partly because economists have tended to prefer a deductive approach that puts theory ahead of measurement. This needs to change, he said, because relying too much on either theory or real-world data in isolation can lead to serious mistakes in policymaking — as was seen when the global financial crisis exposed the “empirical fragility” of macroeconomic models.

Parts of the private sector and academia have been far swifter to exploit the vast troves of ever-accumulating data now available — 90 per cent of which has been created in the last two years alone. Massachusetts Institute of Technology’s “Billion Prices Project”, name-checked in Mr Haldane’s speech, now collects enough data from online retailers for its commercial arm to provide daily inflation updates for 22 economies….

The UK’s Office for National Statistics — which has faced heavy criticism over the quality of its data in recent years — is experimenting with “web-scraping” to collect price quotes for food and groceries, for example, and making use of VAT data from small businesses to improve its output-based estimates of gross domestic product. In both cases, the increased sample size and granularity could bring considerable benefits on top of existing surveys, Mr Haldane said.
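In outline, the kind of experimental collection Mr Haldane describes can be sketched in a few lines. The Python below is purely illustrative: the retailer URL and page selectors are placeholders, and the geometric-mean (Jevons) aggregation is a common elementary index for web-scraped price quotes rather than the ONS’s actual methodology.

```python
import statistics

import requests
from bs4 import BeautifulSoup

# Placeholder URL and CSS selectors: a real scraper targets specific retailer
# pages and handles pagination, robots.txt, retries and messy markup.
PRODUCT_PAGE = "https://example-grocer.test/category/bread"

def scrape_prices(url: str) -> dict[str, float]:
    """Collect {product name: price} quotes from one (hypothetical) listing page."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    quotes = {}
    for item in soup.select(".product"):  # assumed markup
        name = item.select_one(".name").get_text(strip=True)
        price = float(item.select_one(".price").get_text(strip=True).lstrip("£"))
        quotes[name] = price
    return quotes

def jevons_index(base: dict[str, float], current: dict[str, float]) -> float:
    """Geometric mean of price relatives for items present in both periods,
    a standard elementary aggregate for web-scraped price experiments."""
    common = base.keys() & current.keys()
    relatives = [current[k] / base[k] for k in common]
    return statistics.geometric_mean(relatives) * 100  # base period = 100

if __name__ == "__main__":
    yesterday = scrape_prices(PRODUCT_PAGE)  # in practice: stored from a prior run
    today = scrape_prices(PRODUCT_PAGE)
    print(f"Daily bread-price index: {jevons_index(yesterday, today):.1f}")
```

The gain Mr Haldane points to comes from exactly this kind of pipeline run daily over thousands of products, far beyond what a monthly price-collection survey can sample.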

The BoE itself is trying to make better use of financial data — for example, by using administrative data on owner-occupied mortgages to better understand pricing decisions in the UK housing market. Mr Haldane sees scope to go further with the new data coming on stream on payment, credit and banking flows. …New data sources and techniques could also help policymakers think about human decision-making — which rarely conforms with the rational process assumed in many economic models. Data on music downloads from Spotify, used as an indicator of sentiment, has recently been shown to do at least as well as a standard consumer confidence survey in tracking consumer spending….(More)”.
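The article gives no methodology, but the underlying test — does an alternative indicator track spending at least as well as a survey measure? — is straightforward to illustrate. The sketch below uses synthetic placeholder series and a simple contemporaneous correlation; it shows the shape of such a comparison, not the Bank’s or Spotify’s actual data or results.

```python
import numpy as np
import pandas as pd

# Synthetic placeholder series: stand-ins for a streaming-derived sentiment
# index, a conventional consumer-confidence survey, and retail spending growth.
rng = np.random.default_rng(0)
months = pd.date_range("2015-01", periods=36, freq="MS")
spending = pd.Series(rng.normal(0.3, 1.0, len(months)), index=months)
survey_confidence = spending.shift(1).fillna(0) + rng.normal(0, 0.8, len(months))
streaming_sentiment = spending + rng.normal(0, 0.7, len(months))

def tracking_score(indicator: pd.Series, target: pd.Series) -> float:
    """Contemporaneous correlation between an indicator and the target series."""
    return indicator.corr(target)

print("survey vs spending:   ", round(tracking_score(survey_confidence, spending), 2))
print("streaming vs spending:", round(tracking_score(streaming_sentiment, spending), 2))
```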

Tech Platforms and the Knowledge Problem


Frank Pasquale at American Affairs: “Friedrich von Hayek, the preeminent theorist of laissez-faire, called the “knowledge problem” an insuperable barrier to central planning. Knowledge about the price of supplies and labor, and consumers’ ability and willingness to pay, is so scattered and protean that even the wisest authorities cannot access all of it. No person knows everything about how goods and services in an economy should be priced. No central decision-maker can grasp the idiosyncratic preferences, values, and purchasing power of millions of individuals. That kind of knowledge, Hayek said, is distributed.

In an era of artificial intelligence and mass surveillance, however, the possibility of central planning has reemerged—this time in the form of massive firms. Having logged and analyzed billions of transactions, Amazon knows intimate details about all its customers and suppliers. It can carefully calibrate screen displays to herd buyers toward certain products or shopping practices, or to copy sellers with its own, cheaper, in-house offerings. Mark Zuckerberg aspires to omniscience of consumer desires, by profiling nearly everyone on Facebook, Instagram, and WhatsApp, and then leveraging that data trove to track users across the web and into the real world (via mobile usage and device fingerprinting). You don’t even have to use any of those apps to end up in Facebook/Instagram/WhatsApp files—profiles can be assigned to you. Google’s “database of intentions” is legendary, and antitrust authorities around the world have looked with increasing alarm at its ability to squeeze out rivals from search results once it gains an interest in their lines of business. Google knows not merely what consumers are searching for, but also what other businesses are searching, buying, emailing, planning—a truly unparalleled matching of data-processing capacity to raw communication flows.

Nor is this logic limited to the online context. Concentration is paying dividends for the largest banks (widely assumed to be too big to fail), and major health insurers (now squeezing and expanding the medical supply chain like an accordion). Like the digital giants, these finance and insurance firms not only act as middlemen, taking a cut of transactions, but also aspire to capitalize on the knowledge they have gained from monitoring customers and providers in order to supplant them and directly provide services and investment. If it succeeds, the CVS-Aetna merger betokens intense corporate consolidations that will see more vertical integration of insurers, providers, and a baroque series of middlemen (from pharmaceutical benefit managers to group purchasing organizations) into gargantuan health providers. A CVS doctor may eventually refer a patient to a CVS hospital for a CVS surgery, to be followed up by home health care workers employed by CVS who bring CVS pharmaceuticals—all covered by a CVS/Aetna insurance plan, which might penalize the patient for using any providers outside the CVS network. While such a panoptic firm may sound dystopian, it is a logical outgrowth of health services researchers’ enthusiasm for “integrated delivery systems,” which are supposed to provide “care coordination” and “wraparound services” more efficiently than America’s current, fragmented health care system.

The rise of powerful intermediaries like search engines and insurers may seem like the next logical step in the development of capitalism. But a growing chorus of critics questions the size and scope of leading firms in these fields. The Institute for Local Self-Reliance highlights Amazon’s manipulation of both law and contracts to accumulate unfair advantages. International antitrust authorities have taken Google down a peg, questioning the company’s aggressive use of its search engine and Android operating system to promote its own services (and demote rivals). They also question why Google and Facebook have for years been acquiring companies at a pace of more than two per month. Consumer advocates complain about manipulative advertising. Finance scholars lambaste megabanks for taking advantage of the implicit subsidies that too-big-to-fail status confers….(More)”.

CrowdLaw Manifesto


At the Rockefeller Foundation Bellagio Center this spring, assembled participants met to discuss CrowdLaw, namely how to use technology to improve the quality and effectiveness of law and policymaking through greater public engagement. We put together and signed 12 principles to promote the use of CrowdLaw by local legislatures and national parliaments, calling for legislatures, technologists and the public to participate in creating more open and participatory lawmaking practices. We invite you to sign the Manifesto.

Draft dated May 29, 2018

  1. To improve public trust in democratic institutions, we must improve how we govern in the 21st century.
  2. CrowdLaw is any law, policy-making or public decision-making that offers a meaningful opportunity for the public to participate in one or multiple stages of decision-making, including but not limited to the processes of problem identification, solution identification, proposal drafting, ratification, implementation or evaluation.
  3. CrowdLaw draws on innovative processes and technologies and encompasses diverse forms of engagement among elected representatives, public officials, and those they represent.
  4. When designed well, CrowdLaw may help governing institutions obtain more relevant facts and knowledge as well as more diverse perspectives, opinions and ideas to inform governing at each stage and may help the public exercise political will.
  5. When designed well, CrowdLaw may help democratic institutions build trust and the public to play a more active role in their communities and strengthen both active citizenship and democratic culture.
  6. When designed well, CrowdLaw may enable engagement that is thoughtful, inclusive, informed but also efficient, manageable and sustainable.
  7. Therefore, governing institutions at every level should experiment and iterate with CrowdLaw initiatives in order to create formal processes for diverse members of society to participate in order to improve the legitimacy of decision-making, strengthen public trust and produce better outcomes.
  8. Governing institutions at every level should encourage research and learning about CrowdLaw and its impact on individuals, on institutions and on society.
  9. The public also has a responsibility to improve our democracy by demanding and creating opportunities to engage and then actively contributing expertise, experience, data and opinions.
  10. Technologists should work collaboratively across disciplines to develop, evaluate and iterate varied, ethical and secure CrowdLaw platforms and tools, keeping in mind that different participation mechanisms will achieve different goals.
  11. Governing institutions at every level should encourage collaboration across organizations and sectors to test what works and share good practices.
  12. Governing institutions at every level should create the legal and regulatory frameworks necessary to promote CrowdLaw and better forms of public engagement and usher in a new era of more open, participatory and effective governing.

The CrowdLaw Manifesto has been signed by the following individuals and organizations:

Individuals

  • Victoria Alsina, Senior Fellow at The GovLab and Faculty Associate at Harvard Kennedy School, Harvard University
  • Marta Poblet Balcell, Associate Professor, RMIT University
  • Robert Bjarnason — President & Co-founder, Citizens Foundation; Better Reykjavik
  • Pablo Collada — Former Executive Director, Fundación Ciudadano Inteligente
  • Mukelani Dimba — Co-chair, Open Government Partnership
  • Hélène Landemore, Associate Professor of Political Science, Yale University
  • Shu-Yang Lin, re:architect & co-founder, PDIS.tw
  • José Luis Martí, Vice-Rector for Innovation and Professor of Legal Philosophy, Pompeu Fabra University
  • Jessica Musila — Executive Director, Mzalendo
  • Sabine Romon — Chief Smart City Officer — General Secretariat, Paris City Council
  • Cristiano Ferri Faría — Director, Hacker Lab, Brazilian House of Representatives
  • Nicola Forster — President and Founder, Swiss Forum on Foreign Policy
  • Raffaele Lillo — Chief Data Officer, Digital Transformation Team, Government of Italy
  • Tarik Nesh-Nash — CEO & Co-founder, GovRight; Ashoka Fellow
  • Beth Simone Noveck, Director, The GovLab and Professor at New York University Tandon School of Engineering
  • Ehud Shapiro, Professor of Computer Science and Biology, Weizmann Institute of Science

Organizations

  • Citizens Foundation, Iceland
  • Fundación Ciudadano Inteligente, Chile
  • International School for Transparency, South Africa
  • Mzalendo, Kenya
  • Smart Cities, Paris City Council, Paris, France
  • Hacker Lab, Brazilian House of Representatives, Brazil
  • Swiss Forum on Foreign Policy, Switzerland
  • Digital Transformation Team, Government of Italy, Italy
  • The Governance Lab, New York, United States
  • GovRight, Morocco
  • ICT4Dev, Morocco

How the Math Men Overthrew the Mad Men


Ken Auletta in the New Yorker: “Once, Mad Men ruled advertising. They’ve now been eclipsed by Math Men—the engineers and data scientists whose province is machines, algorithms, pureed data, and artificial intelligence. Yet Math Men are beleaguered, as Mark Zuckerberg demonstrated when he humbled himself before Congress, in April. Math Men’s adoration of data—coupled with their truculence and an arrogant conviction that their “science” is nearly flawless—has aroused government anger, much as Microsoft did two decades ago.

The power of Math Men is awesome. Google and Facebook each has a market value exceeding the combined value of the six largest advertising and marketing holding companies. Together, they claim six out of every ten dollars spent on digital advertising, and nine out of ten new digital ad dollars. They have become more dominant in what is estimated to be an up to two-trillion-dollar annual global advertising and marketing business. Facebook alone generates more ad dollars than all of America’s newspapers, and Google has twice the ad revenues of Facebook.

In the advertising world, Big Data is the Holy Grail, because it enables marketers to target messages to individuals rather than general groups, creating what’s called addressable advertising. And only the digital giants possess state-of-the-art Big Data. “The game is no longer about sending you a mail order catalogue or even about targeting online advertising,” Shoshana Zuboff, a professor of business administration at the Harvard Business School, wrote on faz.net, in 2016. “The game is selling access to the real-time flow of your daily life—your reality—in order to directly influence and modify your behavior for profit.” Success at this “game” flows to those with the “ability to predict the future—specifically the future of behavior,” Zuboff writes. She dubs this “surveillance capitalism.”

However, to thrash just Facebook and Google is to miss the larger truth: everyone in advertising strives to eliminate risk by perfecting targeting data. Protecting privacy is not foremost among the concerns of marketers; protecting and expanding their business is. The business model adopted by ad agencies and their clients parallels Facebook and Google’s. Each aims to massage data to better identify potential customers. Each aims to influence consumer behavior. To appreciate how alike their aims are, sit in an agency or client marketing meeting and you will hear wails about Facebook and Google’s “walled garden,” their unwillingness to share data on their users. When Facebook or Google counter that they must protect “the privacy” of their users, advertisers cry foul: You’re using the data to target ads we paid for—why won’t you share it, so that we can use it in other ad campaigns?…(More)”

Inclusive Innovation in Biohacker Spaces: The Role of Systems and Networks


Paper by Jeremy de Beer and Vipal Jain in Technology Innovation Management Review: “The biohacking movement is changing who can innovate in biotechnology. Driven by principles of inclusivity and open science, the biohacking movement encourages sharing and transparency of data, ideas, and resources. As a result, innovation is now happening outside of traditional research labs, in unconventional spaces – do-it-yourself (DIY) biology labs known as “biohacker spaces”. Labelled like “maker spaces” (which contain the fabrication, metal/woodworking, additive manufacturing/3D printing, digitization, and related tools that “makers” use to tinker with hardware and software), biohacker spaces are attracting a growing number of entrepreneurs, students, scientists, and members of the public.

A biohacker space is a space where people with an interest in biotechnology gather to tinker with biological materials. These spaces, such as Genspace in New York, Biotown in Ottawa, and La Paillasse in Paris, exist outside of traditional academic and research labs with the aim of democratizing and advancing science by providing shared access to tools and resources (Scheifele & Burkett, 2016).

Biohacker spaces hold great potential for promoting innovation. Numerous innovative projects have emerged from these spaces. For example, biohackers have developed cheaper tools and equipment (Crook, 2011; see also Bancroft, 2016). They are also working to develop low-cost medicines for conditions such as diabetes (Ossolo, 2015). There is a general, often unspoken assumption that the openness of biohacker spaces facilitates greater participation in biotechnology research, and therefore, more inclusive innovation. In this article, we explore that assumption using the inclusive innovation framework developed by Schillo and Robinson (2017).

Inclusive innovation requires that opportunities for participation are broadly available to all and that the benefits of innovation are broadly shared by all (CSLS, 2016). In Schillo and Robinson’s framework, there are four dimensions along which innovation may be inclusive:

  1. The people involved in innovation (who)
  2. The type of innovation activities (what)
  3. The range of outcomes to be captured (why)
  4. The governance mechanism of innovation (how)…(More)”.