Don’t Fight Regulation. Reprogram It


Article by Alison Kutler and Antonio Sweet: “Businesspeople too often assume that the relationship between government and the private sector is (and should be) adversarial. They imagine two opposing forces, each trying to set the bounds of its control. But if you envision government and business as platforms that interact with one another, it becomes apparent why the word code applies to both technology and law. A successful business leader works with regulation the way a successful app developer works with another company’s operating system: testing it, finding innovative ways to get results within the system’s constraints, and offering guidance, where possible, to help make the system more efficient, fairer, and more valuable to end users.

Like the computer language of an operating system, legal and regulatory codes follow rules designed to make them widely recognizable to those who are appropriately trained. As legislators, regulators, and other officials write that code, they seek input from stakeholders through hearings and public-comment filings on proposed rules. Policymakers rely on constituents, public filings, and response analysis the way software designers use beta testers, crash reports, and developer feedback — to debug and validate code before deploying it across the entire system.

Unfortunately, policymakers and business leaders don’t always embrace what software developers know about collaborative innovation. Think about how much less a smartphone could do if its manufacturers never worked closely with people outside their engineering departments. When only a small subset of voices is involved, the final code reflects only the needs of the most vocal groups. As a result, the unengaged are stuck with a system that doesn’t take their needs into account or, worse, disables their product.

Policymakers may also benefit from emulating the kind of interoperability that makes software effective. When enterprise systems are too different from one another, users struggle with unfamiliar systems and run into interoperability problems when trying to work across several of them. A product development team can devote massive resources to designing and building something that works perfectly in one operating system, only to have it slow down or completely freeze in another…(More)”.

4 reasons why Data Collaboratives are key to addressing migration


Stefaan Verhulst and Andrew Young at the Migration Data Portal: “If every era poses its dilemmas, then our current decade will surely be defined by questions over the challenges and opportunities of a surge in migration. The issues in addressing migration safely, humanely, and for the benefit of communities of origin and destination are varied and complex, and today’s public policy practices and tools are not adequate. Increasingly, it is clear, we need not only new solutions but also new, more agile, methods for arriving at solutions.

Data are central to meeting these challenges and to enabling public policy innovation in a variety of ways. Yet, for all of data’s potential to address public challenges, the truth remains that most data generated today are in fact collected by the private sector. These data contain tremendous potential insights and avenues for innovation in how we solve public problems. But because of access restrictions, privacy concerns, and often limited data science capacity, their vast potential often goes untapped.

Data Collaboratives offer a way around this limitation.

Data Collaboratives: A new form of Public-Private Partnership for a Data Age

Data Collaboratives are an emerging form of partnership, typically between the private and public sectors, but often also involving civil society groups and the education sector. Now in use across various countries and sectors, from health to agriculture to economic development, they allow information held in the private sector to be opened and shared, in the process unlocking data silos so that the data can serve public ends.

Although the approach is still fledgling, we have begun to see Data Collaboratives implemented to address specific challenges within the broad and complex refugee and migrant space. As the examples we describe below suggest (and examine in more detail in the Stanford Social Innovation Review), the use of such Collaboratives is geographically dispersed and diffuse; there is an urgent need to pull together a cohesive body of knowledge to more systematically analyze what works and what doesn’t.

This is something we have started to do at the GovLab. We have analyzed a wide variety of Data Collaborative efforts, across geographies and sectors, with a goal of understanding when and how they are most effective.

The benefits of Data Collaboratives in the migration field

As part of our research, we have identified four main value propositions for the use of Data Collaboratives in addressing different elements of the multi-faceted migration issue. …(More)”.

Blockchain as a force for good: How this technology could transform the sharing economy


Aaron Fernando at Shareable: “The volatility in the price of cryptocurrencies doesn’t matter to restaurateur Helena Fabiankovic, who started Baba’s Pierogies in Brooklyn with her partner Robert in 2015. Yet she and her business are already positioned to reap the real-world benefits of the technology that underpins these digital currencies — the blockchain — and they will be at the forefront of a sustainable, community-based peer-to-peer energy revolution because of it.

So what does a restaurateur have to do with the blockchain and local energy? Fabiankovic is one of the early participants in the Brooklyn Microgrid, a project of the startup LO3 Energy that uses a combination of innovative technologies — blockchain and smart meters — to operate a virtual microgrid in the borough of Brooklyn in New York City, New York. This microgrid enables residents to buy and sell green energy directly to their neighbors at much better rates than if they only interacted with centralized utility providers.

Just as we don’t pay much attention to the critical infrastructure that powers our digital world and exists just out of sight (from the Automated Clearing House, or ACH, which undergirds our financial system, to the undersea cables that make the Internet globally useful), blockchain is likely to change our lives in ways that will eventually be invisible. In the sharing economy, we have traditionally just used existing infrastructure and built platforms and services on top of it. Considering that those undersea cables are owned by private companies with their own motives and that the locations of ACH data centers are heavily classified, there is a lot to be desired in terms of transparency, resilience, and independence from self-interested third parties. That’s where the open-source, decentralized infrastructure of the blockchain offers the sharing economy much promise and potential.

In the case of Brooklyn Microgrid, which is part of an emerging model for shared energy use via the blockchain, this decentralized infrastructure would allow residents like Fabiankovic to save money and make sustainable choices. Shared ownership and community financing for green infrastructure like solar panels is part of the model. “Everyone can pay a different amount and you can get a proportional amount of energy that’s put off by the panel, based on how much that you own,” says Scott Kessler, director of business development at LO3. “It’s really just a way of crowdfunding an asset.”
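To make the crowdfunding arithmetic concrete, here is a minimal sketch of splitting a panel’s output in proportion to each backer’s contribution. It is illustrative only; all names and figures are assumptions, not the Brooklyn Microgrid’s actual system.

```python
# Illustrative sketch: split a crowdfunded solar panel's output in proportion
# to each backer's contribution. Names and figures are hypothetical.

def allocate_output(contributions_usd, panel_output_kwh):
    """Return each backer's share of the panel's output, in kWh."""
    total = sum(contributions_usd.values())
    return {backer: panel_output_kwh * amount / total
            for backer, amount in contributions_usd.items()}

backers = {"resident_a": 500.0, "resident_b": 250.0, "bakery": 1250.0}
print(allocate_output(backers, panel_output_kwh=40.0))
# {'resident_a': 10.0, 'resident_b': 5.0, 'bakery': 25.0}
```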

The type of blockchain used by the Brooklyn Microgrid makes it possible to collect and communicate data from smart meters every second, so that the price of electricity can be updated in real time while users still transact with each other in U.S. dollars. The core idea of the Brooklyn Microgrid is to use a tailored blockchain to align energy consumption with energy production, and to do this with rapidly updated price information that then changes behavior around energy…(More)”.
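A rough illustration of that pricing loop is sketched below, under assumed rates and a made-up pricing rule rather than LO3 Energy’s actual algorithm: each second, local production is compared with local demand, and the local price settles somewhere between the utility’s buy-back rate and its retail rate.

```python
# Illustrative per-second pricing on a virtual microgrid.
# The rates and the pricing rule are assumptions, not LO3 Energy's algorithm.

UTILITY_BUYBACK_USD_PER_KWH = 0.05  # floor: what the utility pays producers
UTILITY_RETAIL_USD_PER_KWH = 0.20   # ceiling: what the utility charges consumers

def local_price(production_kwh: float, demand_kwh: float) -> float:
    """Price this second's local energy between the floor and the ceiling.

    When local solar covers all demand, the price sits at the floor;
    when there is no local production, it rises to the retail ceiling.
    """
    if demand_kwh <= 0:
        return UTILITY_BUYBACK_USD_PER_KWH
    coverage = min(production_kwh / demand_kwh, 1.0)
    spread = UTILITY_RETAIL_USD_PER_KWH - UTILITY_BUYBACK_USD_PER_KWH
    return UTILITY_RETAIL_USD_PER_KWH - coverage * spread

# One second of smart-meter readings across the neighborhood:
print(round(local_price(production_kwh=0.010, demand_kwh=0.008), 3))  # surplus solar -> 0.05
print(round(local_price(production_kwh=0.004, demand_kwh=0.008), 3))  # half covered  -> 0.125
print(round(local_price(production_kwh=0.000, demand_kwh=0.008), 3))  # no solar      -> 0.2
```

With per-second readings feeding a rule like this, neighbors buying surplus solar would pay less than the utility’s retail price while producers earn more than the buy-back rate, which is the better-rates outcome the article describes.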

Tech Platforms and the Knowledge Problem


Frank Pasquale at American Affairs: “Friedrich von Hayek, the preeminent theorist of laissez-faire, called the “knowledge problem” an insuperable barrier to central planning. Knowledge about the price of supplies and labor, and consumers’ ability and willingness to pay, is so scattered and protean that even the wisest authorities cannot access all of it. No person knows everything about how goods and services in an economy should be priced. No central decision-maker can grasp the idiosyncratic preferences, values, and purchasing power of millions of individuals. That kind of knowledge, Hayek said, is distributed.

In an era of artificial intelligence and mass surveillance, however, the possibility of central planning has reemerged—this time in the form of massive firms. Having logged and analyzed billions of transactions, Amazon knows intimate details about all its customers and suppliers. It can carefully calibrate screen displays to herd buyers toward certain products or shopping practices, or to copy sellers with its own, cheaper, in-house offerings. Mark Zuckerberg aspires to omniscience of consumer desires, by profiling nearly everyone on Facebook, Instagram, and WhatsApp, and then leveraging that data trove to track users across the web and into the real world (via mobile usage and device fingerprinting). You don’t even have to use any of those apps to end up in Facebook/Instagram/WhatsApp files—profiles can be assigned to you. Google’s “database of intentions” is legendary, and antitrust authorities around the world have looked with increasing alarm at its ability to squeeze out rivals from search results once it gains an interest in their lines of business. Google knows not merely what consumers are searching for, but also what other businesses are searching, buying, emailing, planning—a truly unparalleled matching of data-processing capacity to raw communication flows.

Nor is this logic limited to the online context. Concentration is paying dividends for the largest banks (widely assumed to be too big to fail), and major health insurers (now squeezing and expanding the medical supply chain like an accordion). Like the digital giants, these finance and insurance firms not only act as middlemen, taking a cut of transactions, but also aspire to capitalize on the knowledge they have gained from monitoring customers and providers in order to supplant them and directly provide services and investment. If it succeeds, the CVS-Aetna merger betokens intense corporate consolidations that will see more vertical integration of insurers, providers, and a baroque series of middlemen (from pharmaceutical benefit managers to group purchasing organizations) into gargantuan health providers. A CVS doctor may eventually refer a patient to a CVS hospital for a CVS surgery, to be followed up by home health care workers employed by CVS who bring CVS pharmaceuticals—all covered by a CVS/Aetna insurance plan, which might penalize the patient for using any providers outside the CVS network. While such a panoptic firm may sound dystopian, it is a logical outgrowth of health services researchers’ enthusiasm for “integrated delivery systems,” which are supposed to provide “care coordination” and “wraparound services” more efficiently than America’s current, fragmented health care system.

The rise of powerful intermediaries like search engines and insurers may seem like the next logical step in the development of capitalism. But a growing chorus of critics questions the size and scope of leading firms in these fields. The Institute for Local Self-Reliance highlights Amazon’s manipulation of both law and contracts to accumulate unfair advantages. International antitrust authorities have taken Google down a peg, questioning the company’s aggressive use of its search engine and Android operating system to promote its own services (and demote rivals). They also question why Google and Facebook have for years been acquiring companies at a pace of more than two per month. Consumer advocates complain about manipulative advertising. Finance scholars lambaste megabanks for taking advantage of the implicit subsidies that too-big-to-fail status confers….(More)”.

Open data work: understanding open data usage from a practice lens


Paper by Emma Ruijer in the International Review of Administrative Sciences: “During recent years, the amount of data released on platforms by public administrations around the world has exploded. Open government data platforms are aimed at enhancing transparency and participation. Even though the promises of these platforms are high, their full potential has not yet been reached. Scholars have identified technical and quality barriers to open data usage. Although useful, such analyses fail to acknowledge that the meaning of open data also depends on the context and the people involved. In this study, we analyze open data usage from a practice lens – as a social construction that emerges over time in interaction with governments and users in a specific context – to enhance our understanding of the role of context and agency in the development of open data platforms. This study is based on innovative action-based research in which civil servants’ and citizens’ initiatives collaborate to find solutions for public problems using an open data platform. It provides an insider perspective on Open Data Work. The findings show that an absence of a shared cognitive framework for understanding open data and a lack of high-quality datasets can prevent processes of collaborative learning. Our contextual approach stresses the need for open data practices that work on the basis of rich interactions with users rather than government-centric implementations….(More)”.

Smarter Crowdsourcing for Anti-Corruption: A Handbook of Innovative Legal, Technical, and Policy Proposals and a Guide to their Implementation


Paper by Noveck, Beth Simone; Koga, Kaitlin; Aceves Garcia, Rafael; Deleanu, Hannah; Cantú-Pedraza, Dinorah: “Corruption presents a fundamental threat to the stability and prosperity of Mexico, and combating it demands approaches that are both principled and practical. In 2017, the Inter-American Development Bank (IDB) approved project ME-T1351 to support Mexico in its fight against corruption using Open Innovation. Thus, the IDB partnered with the Governance Lab at NYU to support Mexico’s Secretariat of Public Service (Secretaría de la Función Pública) in identifying innovative ideas and then turning them into practical implementation plans for the measurement, detection, and prevention of corruption in Mexico, using the GovLab’s open innovation methodology, Smarter Crowdsourcing.

The purpose of Smarter Crowdsourcing was to identify concrete solutions that include the use of data analysis and technology to tackle corruption in the public sector. This document contains 13 implementation plans laying out practical ways to address corruption. The plans emerged from “Smarter Crowdsourcing Anti-Corruption,” an agile method that begins with robust problem definition, followed by the online sourcing of global expertise to surface innovative solutions. Smarter Crowdsourcing Anti-Corruption focused on six specific challenges: (i) measuring corruption and its costs, (ii) strengthening integrity in the judiciary, (iii) engaging the public in anti-corruption efforts, (iv) whistleblowing, (v) effective prosecution, and (vi) tracking and analyzing money flows…(More)”.

Using Blockchain Technology to Create Positive Social Impact


Randall Minas in Healthcare Informatics: “…Healthcare is yet another area where blockchain can make a substantial impact. Blockchain technology could be used to enable the WHO and CDC to better monitor disease outbreaks over time by creating distributed “ledgers” that are both secure and updated hundreds of times per day. Issued in near real-time, these updates would alert healthcare professionals to spikes in local cases almost immediately. Additionally, using blockchain would allow accurate diagnosis and streamline the isolation of clusters of cases as quickly as possible. Providing blocks of real-time disease information—especially in urban areas—would be invaluable.

In the United States, disease updates are provided in a Morbidity and Mortality Weekly Report (MMWR) from the CDC. This weekly report provides tables of current disease trends for hospitals and public health officials. Another disease reporting mechanism is the National Outbreak Reporting System (NORS), launched in 2009. NORS’ web-based tool provides outbreak data through 2016 and is accessible to the general public. There are two current weaknesses in the NORS reporting system and both can be addressed by blockchain technology.

The first issue lies in the number of steps required to accurately report each outbreak. A health department reports an outbreak to the NORS system, the CDC checks it for accuracy and analyzes the data, and then provides a summary via the MMWR. With blockchain as the technology through which NORS data are reported, every health department in the country could have preliminary data on disease trends at its fingertips without having to wait for the next MMWR publication.

The second issue is the inherent cybersecurity vulnerability of using a web-based platform to monitor disease reporting. As we have seen with cyberattacks both at home and abroad, cybersecurity vulnerabilities underlie most of our modern-day computing infrastructure. Blockchain was designed to be secure because it is decentralized across many computer networks and, since it was designed as a digital ledger, the previous data (or “blocks”) in the blockchain are difficult to alter.
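Why earlier blocks are hard to alter can be shown with a minimal sketch; this is a teaching illustration only, not the NORS system or a production blockchain, and it omits the consensus, networking, and signatures that real deployments rely on. Each block stores a hash of its predecessor, so changing an old outbreak report invalidates every block that follows.

```python
# Minimal hash-chained ledger of outbreak reports (illustrative only:
# no consensus, networking, or signatures as in a real blockchain).
import hashlib
import json


def make_block(report: dict, prev_hash: str) -> dict:
    payload = json.dumps({"report": report, "prev_hash": prev_hash}, sort_keys=True)
    return {"report": report, "prev_hash": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}


def chain_is_valid(chain: list) -> bool:
    for i, block in enumerate(chain):
        payload = json.dumps({"report": block["report"],
                              "prev_hash": block["prev_hash"]}, sort_keys=True)
        if block["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False  # the block's contents no longer match its hash
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False  # the link to the previous block is broken
    return True


chain = [make_block({"county": "A", "cases": 3}, prev_hash="genesis")]
chain.append(make_block({"county": "B", "cases": 7}, prev_hash=chain[-1]["hash"]))

print(chain_is_valid(chain))        # True
chain[0]["report"]["cases"] = 300   # tamper with an earlier report
print(chain_is_valid(chain))        # False: tampering is immediately detectable
```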

Just as malware has been used to gain access to our electricity and water infrastructure, the web-based NORS platform could be hacked; instituting blockchain technology would limit the potential damage of such malware, given the inherent security of the technology. If this does not sound important, imagine the damage and ensuing panic that could be caused by a compromised NORS reporting a widespread Ebola outbreak.

The use of blockchain in monitoring epidemic outbreaks might apply not only to fast-spreading outbreaks like the flu, but also to epidemics that have lasted for decades. Since blockchain allows an unchangeable snapshot of data over time and can be anonymous, partner organizations could record HIV test results, along with the date of each test, on an individual’s “digital ledger.”

Individuals could then exchange their HIV status securely, in an application, before engaging in high-risk behaviors. Since many municipalities provide free or low-cost, anonymous HIV testing, the use of blockchain would allow disease monitoring and exchange of status in a secure and trusted manner. The LGBTQ community and other high-risk communities could use an application to securely exchange HIV status with potential partners. With widespread adoption of this status-exchange system, an individual’s high-risk exposure could be limited, further reducing the spread of the epidemic.

While much of the creative application around blockchain has focused on supply chain-like models, including distribution of renewable energy and local sourcing of goods, it is important also to think innovatively about how blockchain can be used outside of supply chain and accounting.

In healthcare, blockchain has been discussed frequently in relation to electronic health records (EHRs), yet even that may underappreciate the technology’s potential. Leaders in the blockchain arena should invest in application development for epidemic monitoring and disease control using blockchain technology. …(More)”.

International Data Flows and Privacy: The Conflict and its Resolution


World Bank Policy Research Working Paper by Aaditya Mattoo and Joshua P. Meltzer: “The free flow of data across borders underpins today’s globalized economy. But the flow of personal data outside the jurisdiction of national regulators also raises concerns about the protection of privacy. Addressing these legitimate concerns without undermining international integration is a challenge. This paper describes and assesses three types of responses to this challenge: unilateral development of national or regional regulation, such as the European Union’s Data Protection Directive and forthcoming General Data Protection Regulation; international negotiation of trade disciplines, most recently in the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP); and international cooperation involving regulators, most significantly in the EU-U.S. Privacy Shield Agreement.

The paper argues that unilateral restrictions on data flows are costly and can hurt exports, especially of data-processing and other data-based services; international trade rules that limit only the importers’ freedom to regulate cannot address the challenge posed by privacy; and regulatory cooperation that aims at harmonization and mutual recognition is not likely to succeed, given the desirable divergence in national privacy regulation. The way forward is to design trade rules (as the CPTPP seeks to do) that reflect the bargain central to successful international cooperation (as in the EU-US Privacy Shield): regulators in data destination countries would assume legal obligations to protect the privacy of foreign citizens in return for obligations on data source countries not to restrict the flow of data. Existing multilateral rules can help ensure that any such arrangements do not discriminate against and are open to participation by other countries….(More)”.

The promise and peril of military applications of artificial intelligence


Michael C. Horowitz at the Bulletin of the Atomic Scientists: “Artificial intelligence (AI) is having a moment in the national security space. While the public may still equate the notion of artificial intelligence in the military context with the humanoid robots of the Terminator franchise, there has been significant growth in discussions about the national security consequences of artificial intelligence. These discussions span academia, business, and governments, from Oxford philosopher Nick Bostrom’s concern about the existential risk to humanity posed by artificial intelligence to Tesla founder Elon Musk’s concern that artificial intelligence could trigger World War III to Vladimir Putin’s statement that leadership in AI will be essential to global power in the 21st century.

What does this really mean, especially when you move beyond the rhetoric of revolutionary change and think about the real-world consequences of potential applications of artificial intelligence to militaries? Artificial intelligence is not a weapon. Instead, artificial intelligence, from a military perspective, is an enabler, much like electricity and the combustion engine. Thus, the effect of artificial intelligence on military power and international conflict will depend on particular applications of AI for militaries and policymakers. What follows are key issues for thinking about the military consequences of artificial intelligence: principles for evaluating what artificial intelligence “is” and how it compares to technological changes in the past, what militaries might use artificial intelligence for, potential limitations to its use, and the impact of AI military applications on international politics.

The potential promise of AI—including its ability to improve the speed and accuracy of everything from logistics to battlefield planning and to help improve human decision-making—is driving militaries around the world to accelerate their research into and development of AI applications. For the US military, AI offers a new avenue to sustain its military superiority while potentially reducing costs and risk to US soldiers. For others, especially Russia and China, AI offers something potentially even more valuable—the ability to disrupt US military superiority. National competition in AI leadership is as much or more an issue of economic competition and leadership than anything else, but the potential military impact is also clear. There is significant uncertainty about the pace and trajectory of artificial intelligence research, which means it is always possible that the promise of AI will turn into more hype than reality. Moreover, safety and reliability concerns could limit the ways that militaries choose to employ AI…(More)”.

Navigation by Judgment: Why and When Top-Down Management of Foreign Aid Doesn’t Work


Book by Dan Honig: “Foreign aid organizations collectively spend hundreds of billions of dollars annually, with mixed results. Part of the problem in these endeavors lies in their execution. When should foreign aid organizations empower actors on the front lines of delivery to guide aid interventions, and when should distant headquarters lead?

In Navigation by Judgment, Dan Honig argues that high-quality implementation of foreign aid programs often requires contextual information that cannot be seen by those in distant headquarters. Tight controls and a focus on reaching pre-set measurable targets often prevent front-line workers from using skill, local knowledge, and creativity to solve problems in ways that maximize the impact of foreign aid. Drawing on a novel database of over 14,000 discrete development projects across nine aid agencies and eight paired case studies of development projects, Honig concludes that aid agencies will often benefit from giving field agents the authority to use their own judgments to guide aid delivery. This “navigation by judgment” is particularly valuable when environments are unpredictable and when accomplishing an aid program’s goals is hard to accurately measure.

Highlighting a crucial obstacle for effective global aid, Navigation by Judgment shows that the management of aid projects matters for aid effectiveness….(More)”.