On International Day of Democracy, International Leaders Call for More Open Public Institutions


Press Release: “As the United Nations celebrates the International Day of Democracy on September 15 with its theme of ‘Democracy Under Strain,’ The Governance Lab (The GovLab) at the NYU Tandon School of Engineering will unveil its CrowdLaw Manifesto to strengthen public participation in lawmaking by encouraging citizens to help build, shape, and influence the laws and policies that affect their daily lives.

Among its 12 calls to action, addressed to individuals, legislatures, researchers, and technology designers, the manifesto encourages the public to demand, and institutions to create, new mechanisms that harness collective intelligence to improve the quality of lawmaking. It also calls for more research on what works, with the aim of building a global movement for participatory democracy.

The CrowdLaw Manifesto emerged from a collaborative effort of 20 international experts and CrowdLaw community members. At a convening held earlier this year by The GovLab at The Rockefeller Foundation Bellagio Center in Italy, government leaders, academics, NGOs, and technologists formulated the manifesto to detail the initiative’s foundational principles and to encourage wider implementation of CrowdLaw practices that improve governance through 21st-century technology and tools….

“The successes of the CrowdLaw concept – and its remarkably rapid adoption across the world by citizens seeking to effect change – exemplify the powerful force that academia can exert when working in concert with government and citizens,” said NYU Tandon Dean Jelena Kovačević. “On behalf of the NYU Tandon School of Engineering, I proudly sign the CrowdLaw Manifesto and congratulate The GovLab and its collaborators for creating these digital tools and momentum for good government.”…(More)”.

The Use of Regulatory Sandboxes in Europe and Asia


Claus Christensen at Regulation Asia: “Global attention to money-laundering, terrorism financing and financial criminal practices has grown exponentially in recent years. As criminals constantly come up with new tactics, global regulations in the financial world are evolving all the time to try to keep up. At the same time, end users’ expectations are putting companies at commercial risk if they are not prepared to deliver outstanding, digital-first customer experiences through innovative solutions.

Among the many initiatives introduced by global regulators to address these two seemingly contradictory needs, regulatory sandboxes – closed environments that allow live testing of innovations by tech companies under the regulator’s supervision – are among the most popular. As the CEO of a fast-growing regtech company working across both Asia and Europe, I have identified a few differences in how regulators across different jurisdictions are engaging with the industry in general, and with regulatory sandboxes in particular.

Since the launch of ‘Project Innovate’ in 2014, the UK’s FCA (Financial Conduct Authority) has won recognition for the success of its sandbox, where fintech companies can test innovative products, services and business models in a live market environment, while ensuring that appropriate safeguards are in place through temporary authorisation. The FCA advises companies, whether fintech startups or established banks, on which existing regulations might apply to their cutting-edge products.

So far, the sandbox has helped more than 500 companies, with more than 40 firms receiving regulatory authorisation. Project Innovate has bolstered the FCA’s reputation for supporting initiatives that boost competition within financial services, a key part of the regulator’s post-financial-crisis agenda. The success of the initiative in fostering a fertile fintech environment is reflected in the growing number of UK-based challenger banks that are expanding their client bases across Europe. Following its success, the sandbox approach has gone global, with regulators around the world adopting a similar strategy for fintech innovation.

Across Europe, regulators are directly working with financial services providers and taking proactive measures to not only encourage the use of innovative technology in improving their systems, but also to boost adoption by others within the ecosystem…(More)”.

An Overview of National AI Strategies


Medium Article by Tim Dutton: “The race to become the global leader in artificial intelligence (AI) has officially begun. In the past fifteen months, Canada, China, Denmark, the EU Commission, Finland, France, India, Italy, Japan, Mexico, the Nordic-Baltic region, Singapore, South Korea, Sweden, Taiwan, the UAE, and the UK have all released strategies to promote the use and development of AI. No two strategies are alike, with each focusing on different aspects of AI policy: scientific research, talent development, skills and education, public and private sector adoption, ethics and inclusion, standards and regulations, and data and digital infrastructure.

This article summarizes the key policies and goals of each strategy, as well as related policies and initiatives that have been announced since the release of the initial strategies. It also includes countries that have announced their intention to develop a strategy or that have related AI policies in place….(More)”.

World War Web


Special issue of Foreign Affairs: “The last few decades have witnessed the growth of an American-sponsored Internet open to all. But that was then; conditions have changed.

History is filled with supposed lost utopias, and there is no greater cliché than to see one’s own era as a lamentable decline from a previous golden age. Sometimes, however, clichés are right. And as we explored the Internet’s future for this issue’s lead package, it became clear this was one of those times. Contemplating where we have come from digitally and where we are heading, it’s hard not to feel increasingly wistful and nostalgic.

The last few decades have witnessed the growth of an American-sponsored Internet open to all, and that has helped tie the world together, bringing wide-ranging benefits to billions. But that was then; conditions have changed.

Other great powers are contesting U.S. digital leadership, pushing their own national priorities. Security threats appear and evolve constantly. Platforms that were supposed to expand and enrich the marketplace of ideas have been hijacked by trolls and bots and flooded with disinformation. And real power is increasingly concentrated in the hands of a few private tech giants, whose self-interested choices have dramatic consequences for the entire world around them.

Whatever emerges from this melee, it will be different from, and in many ways worse than, what we have now.

Adam Segal paints the big picture well. “The Internet has long been an American project,” he writes. “Yet today, the United States has ceded leadership in cyberspace to China.” What will happen if Beijing continues its online ascent? “The Internet will be less global and less open. A major part of it will run Chinese applications over Chinese-made hardware. And Beijing will reap the economic, diplomatic, national security, and intelligence benefits that once flowed to Washington.”

Nandan Nilekani, a co-founder of Infosys, outlines India’s unique approach to these issues, which is based on treating “digital infrastructure as a public good and data as something that citizens deserve access to.” Helen Dixon, Ireland’s data protection commissioner, presents a European perspective, arguing that giving individuals control over their own data—as the General Data Protection Regulation, the EU’s historic new regulatory effort, aims to do—is essential to restoring the Internet’s promise. And Karen Kornbluh, a veteran U.S. policymaker, describes how the United States dropped the digital ball and what it could do to pick it up again.

Finally, Michèle Flournoy and Michael Sulmeyer explain the new realities of cyberwarfare, and Viktor Mayer-Schönberger and Thomas Ramge consider the problems caused by Big Tech’s hoarding of data and what can be done to address it.

A generation from now, people across the globe will no doubt revel in the benefits the Internet has brought. But the more thoughtful among them will also lament the eclipse of the founders’ idealistic vision and dream of a world connected the way it could—and should—have been….(More)”.

The Risks of Dangerous Dashboards in Basic Education


Lant Pritchett at the Center for Global Development: “On June 1, 2009, Air France flight 447 from Rio de Janeiro to Paris crashed into the Atlantic Ocean, killing all 228 people on board. While the Airbus A330 was flying on auto-pilot, the speed readings received by the on-board navigation computers started to conflict, almost certainly because the pitot tubes responsible for measuring air speed had iced over. Since the auto-pilot could not resolve the conflicting signals and hence did not know how fast the plane was actually going, it turned control of the plane over to the two first officers (the captain was out of the cockpit). Subsequent flight simulator trials replicating the conditions of the flight concluded that had the pilots done nothing at all, everyone would have lived—nothing was actually wrong; only the indicators were faulty, not the actual speed. But, tragically, the pilots didn’t do nothing….

What is the connection to education?

Many countries’ systems of basic education are in “stall” condition.

A recent paper by Beatty et al. (2018) uses information from the Indonesia Family Life Survey, a representative household survey that has been carried out in several waves with the same individuals since 2000 and contains information on whether individuals can answer simple arithmetic questions. Figure 1, showing the relationship between the level of schooling and the probability of answering a typical question correctly, has two shocking results.

First, the likelihood that a person can answer a simple mathematics question correctly differs by only about 20 percentage points between individuals who have completed less than primary school (<PS), who answer correctly (adjusted for guessing) about 20 percent of the time, and those who have completed senior secondary school or more (>=SSS), who answer correctly only about 40 percent of the time. These are simple multiple-choice questions, such as whether 56/84 is the same fraction as (can be reduced to) 2/3, and whether 1/3 - 1/6 equals 1/6. This means that in an entire year of schooling, fewer than 2 additional children per 100 gain the ability to answer simple arithmetic questions.

Second, this incredibly poor performance in 2000 got worse by 2014. …

What has this got to do with education dashboards? The way large bureaucracies prefer to work is to specify process compliance and inputs and then measure those as a means of driving performance. This logistical mode of managing an organization works best when both process compliance and inputs are easily “observable” in the economist’s sense of being easily verifiable, contractible, and adjudicated. This leads to attention to processes and inputs that are “thin” in the Clifford Geertz sense (adopted by James Scott as his primary definition of how a “high modern” bureaucracy, and hence the state, “sees” the world). So in education one would specify easily observable inputs like textbook availability, class size, and school infrastructure. Even if one were talking about the “quality” of schooling, a large bureaucracy would want this, too, reduced to “thin” indicators, like the fraction of teachers with a given type of formal degree, or to process compliance measures, like whether teachers were hired based on some formal assessment.

Those involved in schooling can then become obsessed with their dashboards and the “thin” progress that is being tracked and easily ignore the loud warning signals saying: Stall!…(More)”.
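A back-of-the-envelope check of the arithmetic behind the excerpt's first result, under the assumption (not stated in the excerpt) that roughly twelve grade levels separate the <PS and >=SSS groups:

```latex
% The two sample items reduce as claimed:
\frac{56}{84} = \frac{28 \times 2}{28 \times 3} = \frac{2}{3},
\qquad
\frac{1}{3} - \frac{1}{6} = \frac{2}{6} - \frac{1}{6} = \frac{1}{6}.
% Implied per-grade learning gain (assuming ~12 grades between <PS and >=SSS):
\frac{40\% - 20\%}{12\ \text{grades}} \approx 1.7\ \text{percentage points per grade} < 2.
```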

Searching for the Smart City’s Democratic Future


Article by Bianca Wylie at the Center for International Governance Innovation: “There is a striking blue building on Toronto’s eastern waterfront. Wrapped top to bottom in bright, beautiful artwork by Montreal illustrator Cecile Gariepy, the building — a former fish-processing plant — stands out alongside the neighbouring parking lots and a congested highway. It’s been given a second life as an office for Sidewalk Labs — a sister company to Google that is proposing a smart city development in Toronto. Perhaps ironically, the office is like the smart city itself: something old repackaged to be light, fresh and novel.

“Our mission is really to use technology to redefine urban life in the twenty-first century.”

Dan Doctoroff, CEO of Sidewalk Labs, shared this mission in an interview with Freakonomics Radio. The phrase is a variant of the marketing language used by the smart city industry at large. Put more simply, the term “smart city” is usually used to describe the use of technology and data in cities.

No matter the words chosen to describe it, the smart city model has a flaw at its core: corporations are seeking to exert influence on urban spaces and democratic governance. And because most governments don’t have policies in place to regulate smart city development — in particular, projects driven by the fast-paced technology sector — this presents a growing global governance concern.

This is where the story usually descends into warnings of smart city dystopia or failure. Loads of recent articles have detailed the science-fiction-style city of the future and speculated about the perils of mass data collection, and for good reason — these are important concepts that warrant discussion. It’s time, however, to push past dystopian narratives and explore solutions for the challenges that smart cities present in Toronto and globally…(More)”.

A roadmap for restoring trust in Big Data


Mark Lawler et al. in The Lancet: “The fallout from the Cambridge Analytica–Facebook scandal marks a significant inflection point in the public’s trust concerning Big Data. The health-science community must use this crisis in confidence to redouble its commitment to talk openly and transparently about benefits and risks, and to act decisively to deliver robust, effective governance frameworks under which personal health data can be responsibly used. Activities such as the Innovative Medicines Initiative’s Big Data for Better Outcomes emphasise how a more granular, data-driven understanding of human diseases, including cancer, could underpin innovative therapeutic intervention.
Health Data Research UK is developing national research expertise and infrastructure to maximise the value of health data science for the National Health Service and ultimately British citizens.
Comprehensive data analytics are crucial to national programmes such as the US Cancer Moonshot, the UK’s 100,000 Genomes Project, and other national genomics programmes. Cancer Core Europe, a research partnership between seven leading European oncology centres, has personal data sharing at its core. The Global Alliance for Genomics and Health recently highlighted the need for a global cancer knowledge network to drive evidence-based solutions for a disease that kills more than 8.7 million citizens annually worldwide. These activities risk being fatally undermined by the recent data-harvesting controversy.
We need to restore the public’s trust in data science and emphasise its positive contribution to addressing global health and societal challenges. An opportunity to affirm the value of data science in Europe was afforded by Digital Day 2018, which took place on April 10, 2018, in Brussels, where European Health Ministers signed a declaration of support for linking existing or future genomic databanks across the EU through the Million European Genomes Alliance.
So how do we address evolving challenges in analysis, sharing, and storage of information, ensure transparency and confidentiality, and restore public trust? We must articulate a clear Social Contract, where citizens (as data donors) are at the heart of decision-making. We need to demonstrate integrity, honesty, and transparency as to what happens to data and what level of control people can, or cannot, expect. We must embed ethical rigour in all our data-driven processes. The Framework for Responsible Sharing of Genomic and Health Related Data represents a practical global approach, promoting effective and ethical sharing and use of research or patient data, while safeguarding individual privacy through secure and accountable data transfer…(More)”.

Data Colonialism: Rethinking Big Data’s Relation to the Contemporary Subject


Nick Couldry and Ulises Mejias in Television & New Media (TVNM): “...Data colonialism combines the predatory extractive practices of historical colonialism with the abstract quantification methods of computing. Understanding Big Data from the Global South means understanding capitalism’s current dependence on this new type of appropriation that works at every point in space where people or things are attached to today’s infrastructures of connection. The scale of this transformation means that it is premature to map the forms of capitalism that will emerge from it on a global scale. Just as historical colonialism over the long-run provided the essential preconditions for the emergence of industrial capitalism, so over time, we can expect that data colonialism will provide the preconditions for a new stage of capitalism that as yet we can barely imagine, but for which the appropriation of human life through data will be central.

Right now, the priority is not to speculate about that eventual stage of capitalism, but to resist the data colonialism that is under way. This is how we understand Big Data from the South. Through what we call ‘data relations’ (new types of human relations which enable the extraction of data for commodification), social life all over the globe becomes an ‘open’ resource for extraction that is somehow ‘just there’ for capital. These global flows of data are as expansive as historic colonialism’s appropriation of land, resources, and bodies, although the epicentre has somewhat shifted. Data colonialism involves not one pole of colonial power (‘the West’), but at least two: the USA and China. This complicates our notion of the geography of the Global South, a concept which until now helped situate resistance and disidentification along geographic divisions between former colonizers and colonized. Instead, the new data colonialism works both externally — on a global scale — and internally on its own home populations. The elites of data colonialism (think of Facebook) benefit from colonization in both dimensions, and North-South, East-West divisions no longer matter in the same way.

It is important to acknowledge both the apparent similarities and the significant differences between our argument and the many preceding critical arguments about Big Data…(More)”

Regulatory Technology – Replacing Law with Computer Code


LSE Legal Studies Working Paper by Eva Micheler and Anna Whaley: “Recently, both the Bank of England and the Financial Conduct Authority have carried out experiments using new digital technology for regulatory purposes. The idea is to replace rules written in natural legal language with computer code and to use artificial intelligence for regulatory purposes.

This new way of designing public law is in line with the government’s vision for the UK to become a global leader in digital technology. It is also reflected in the FCA’s business plan.

The article reviews the technology and the advantages and disadvantages of combining the technology with regulatory law. It then informs the discussion from a broader public law perspective. It analyses regulatory technology through criteria developed in the mainstream regulatory discourse. It contributes to that discourse by anticipating problems that will arise as the technology evolves. In addition, the hope is to assist the government in avoiding mistakes that have occurred in the past and creating a better system from the start…(More)”.
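To give a flavour of what replacing legal language with code can mean in practice, here is a minimal, purely illustrative sketch; it is not taken from the paper or from any actual Bank of England or FCA system, and the rule text, threshold, field names, and check function are all hypothetical:

```python
from dataclasses import dataclass

# Hypothetical natural-language rule (illustrative only):
# "A firm must report any single transaction above 10,000 GBP
#  to the regulator within 24 hours of execution."

REPORTING_THRESHOLD_GBP = 10_000   # hypothetical threshold
REPORTING_DEADLINE_HOURS = 24      # hypothetical deadline


@dataclass
class Transaction:
    transaction_id: str
    amount_gbp: float
    hours_since_execution: float
    reported: bool


def is_compliant(tx: Transaction) -> bool:
    """Return True if the transaction satisfies the encoded rule."""
    if tx.amount_gbp <= REPORTING_THRESHOLD_GBP:
        return True  # rule does not apply at or below the threshold
    if tx.reported:
        return True  # an in-scope transaction that has been reported is compliant
    # Unreported and in scope: compliant only while the reporting window is open
    return tx.hours_since_execution <= REPORTING_DEADLINE_HOURS


if __name__ == "__main__":
    sample = [
        Transaction("tx-001", amount_gbp=5_000, hours_since_execution=48, reported=False),
        Transaction("tx-002", amount_gbp=25_000, hours_since_execution=30, reported=False),
        Transaction("tx-003", amount_gbp=25_000, hours_since_execution=30, reported=True),
    ]
    breaches = [tx.transaction_id for tx in sample if not is_compliant(tx)]
    print("Transactions in breach of the encoded rule:", breaches)  # ['tx-002']
```

Even a toy encoding like this fixes one particular interpretation of terms such as "report" and "within 24 hours", which is exactly the kind of translation choice a public-law analysis of regulatory technology has to grapple with.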

What top technologies should the next generation know how to use?


Lottie Waters at Devex: “Technology provides some great opportunities for global development, and a promising future. But for the next generation of professionals to succeed, it’s vital they stay up to date with the latest tech, innovations, and tools.

In a recent report produced by Devex in collaboration with the United States Agency for International Development and DAI, some 86 percent of survey respondents believe the technology, skills, and approaches development professionals will be using in 10 years’ time will be significantly different to today’s.

In fact, “technology for development” is regarded as the sector that will see the most development progress, but is also cited as the one that will see the biggest changes in skills required, according to the survey.

“As different technologies develop, new possibilities will open up that we may not even be aware of yet. These opportunities will bring new people into the development sector and require those in it to be more agile in adapting technologies to meet development challenges,” said one survey respondent.

While “blockchain,” “artificial intelligence,” and “drones” may be the current buzzwords surrounding tech in global development, geographical information systems, or GIS, and big data are actually the top technologies respondents believe the next generation of development professionals should learn how to utilize.

So, how are these technologies currently being used in development, how might this change in the near future, and what will their impact be in the next 10 years? Devex spoke with experts in the field who are already integrating these technologies into their work to find out….(More)”