Regulatory sandbox lessons learned report


Financial Conduct Authority (UK): “The sandbox allows firms to test innovative products, services or business models in a live market environment, while ensuring that appropriate protections are in place. It was established to support the FCA’s objective of promoting effective competition in the interests of consumers and opened for applications in June 2016.

The sandbox has supported 50 firms from 146 applications received across the first two cohorts. This report sets out the sandbox’s overall impact on the market, including the adoption of new technologies, increased access and improved experiences for vulnerable consumers, as well as lessons learnt from individual tests that have been, or are being, conducted as part of the sandbox.

Early indications suggest the sandbox is providing the benefits it set out to achieve, with evidence of it enabling new products to be tested, reducing the time and cost of getting innovative ideas to market, improving access to finance for innovators, and ensuring appropriate safeguards are built into new products and services.

We will be using these learnings to inform any future sandbox developments as well as our ongoing policymaking and supervision work….(More)”.

How Universities Are Tackling Society’s Grand Challenges


Michelle Popowitz and Cristin Dorgelo in Scientific American: “…Universities embarking on Grand Challenge efforts are traversing new terrain—they are making commitments about research deliverables rather than simply committing to invest in efforts related to a particular subject. To mitigate risk, the universities that have entered this space are informally consulting with others regarding effective strategies, but the entire community would benefit from a more formal structure for identifying and sharing “what works.” To address this need, the new Community of Practice for University-Led Grand Challenges—launched at the October 2017 workshop—aims to provide peer support to leaders of university Grand Challenge programs, and to accelerate the adoption of Grand Challenge approaches at more universities supported by cross-sector partnerships.

The university community has identified extensive opportunities for collaboration on these Grand Challenge programs with other sectors:

  • Philanthropy can support the development of new Grand Challenge programs at more universities by establishing planning and administration grant programs, convening experts, and providing funding support for documenting these models through white papers and other publications and for evaluation of these programs over time.
  • Relevant associations and professional development organizations can host learning sessions about Grand Challenges for university leaders and professionals.
  • Companies can collaborate with universities on Grand Challenges research, act as sponsors and hosts for university-led programs and activities, and offer leaders, experts, and other personnel for volunteer advisory roles and tours of duty at universities.
  • Federal, State, and local governments and elected officials can provide support for collaboration among government agencies and offices and the research community on Grand Challenges.

Today’s global society faces pressing, complex challenges across many domains—including health, environment, and social justice. Science (including social sciences), technology, the arts, and humanities have critical roles to play in addressing these challenges and building a bright and prosperous future. Universities are hubs for discovery, building new knowledge, and changing understanding of the world. The public values the role universities play in education; yet as a sector, universities are less effective at highlighting their roles as the catalysts of new industries, homes for the fundamental science that leads to new treatments and products, or sources of the evidence on which policy decisions should be made.

By coming together as universities, collaborating with partners, and aiming for ambitious goals to address problems that might seem unsolvable, universities can show commitment to their communities and become beacons of hope….(More)”.

World’s biggest city database shines light on our increasingly urbanised planet


EU Joint Research Centre: “The JRC has launched a new tool with data on all 10,000 urban centres scattered across the globe. It is the largest and most comprehensive database on cities ever published.

With data derived from the JRC’s Global Human Settlement Layer (GHSL), researchers have discovered that the world has become even more urbanised than previously thought.

Populations in urban areas doubled in Africa and grew by 1.1 billion in Asia between 1990 and 2015.

Globally, more than 400 cities have a population between 1 and 5 million. More than 40 cities have 5 to 10 million people, and there are 32 ‘megacities’ with more than 10 million inhabitants.

There are some promising signs for the environment: cities became 25% greener between 2000 and 2015. And although air pollution in urban centres had been increasing since 1990, the trend reversed between 2000 and 2015.

With every high-density area of at least 50,000 inhabitants covered, the city centres database shows growth in population and built-up areas over the past 40 years. Environmental factors tracked include:

  • ‘Greenness’: the estimated amount of healthy vegetation in the city centre
  • Soil sealing: the covering of the soil surface with materials like concrete and stone, as a result of new buildings, roads and other public and private spaces
  • Air pollution: the level of polluting particles such as PM2.5 in the air
  • Proximity to protected areas: the percentage of natural protected space within 30 km of the city centre’s border
  • Disaster risk-related exposure of population and buildings in low-lying areas and on steep slopes.

The data is free to access and open to everyone. It applies big data analytics and a global, people-based definition of cities, providing support to monitor global urbanisation and the 2030 Sustainable Development Agenda.

The information gained from the GHSL is used to produce population density and settlement maps, which are created from satellite, census and local geographic information….(More)”.

Republics of Makers: From the Digital Commons to a Flat Marginal Cost Society


Mario Carpo at e-flux: “…as the costs of electronic computation have been steadily decreasing for the last forty years at least, many have recently come to the conclusion that, for most practical purposes, the cost of computation is asymptotically tending to zero. Indeed, the current notion of Big Data is based on the assumption that an almost unlimited amount of digital data will soon be available at almost no cost, and similar premises have further fueled the expectation of a forthcoming “zero marginal costs society”: a society where, except for some upfront and overhead costs (the costs of building and maintaining some facilities), many goods and services will be free for all. And indeed, against all odds, an almost zero marginal cost society is already a reality in the case of many services based on the production and delivery of electricity: from the recording, transmission, and processing of electrically encoded digital information (bits) to the production and consumption of electrical power itself. Using renewable energies (solar, wind, hydro), the generation of electrical power is free, except for the cost of building and maintaining installations and infrastructure. And given the recent progress in the micro-management of intelligent electrical grids, it is easy to imagine that in the near future the cost of servicing a network of very small, local hydro-electric generators, for example, could easily be devolved to local communities of prosumers who would take care of those installations as they tend to their living environment, on an almost voluntary, communal basis. This was already often the case during the early stages of electrification, before the rise of AC (alternating current, which, unlike DC, or direct current, could be carried over long distances): AC became the industry’s choice only after Galileo Ferraris’s and Nikola Tesla’s developments in AC technologies in the 1880s.

Likewise, at the micro-scale of the electronic production and processing of bits and bytes of information, the Open Source movement and the phenomenal surge of some crowdsourced digital media (including some so-called social media) in the first decade of the twenty-first century have already proven that a collaborative, zero-cost business model can effectively compete with products priced for profit on a traditional marketplace. As the success of Wikipedia, Linux, or Firefox proves, many are happy to volunteer their time and labor for free when all can profit from the collective work of an entire community without having to pay for it. This is now technically possible precisely because the fixed costs of building, maintaining, and delivering these services are very small; hence, from the point of view of the end-user, negligible.

Yet, regardless of the fixed costs of the infrastructure, content—even user-generated content—has costs, albeit for the time being these are mostly hidden, voluntarily borne, or inadvertently absorbed by the prosumers themselves. For example, the wisdom of Wikipedia is not really a wisdom of crowds: most Wikipedia entries are de facto curated by fairly traditional scholar communities, and these communities can contribute their expertise for free only because their work has already been paid for by others—often by universities. In this sense, Wikipedia is only piggybacking on someone else’s research investments (but multiplying their outreach, which is one reason for its success). Ditto for most Open Source software, as training a software engineer, coder, or hacker takes time and money—an investment for future returns that in many countries around the world is still borne, at least in part, by public institutions….(More)”.

Mobile Devices as Stigmatizing Security Sensors: The GDPR and a Future of Crowdsourced ‘Broken Windows’


Paper by Oskar Josef Gstrein and Gerard Jan Ritsema van Eck: “Various smartphone apps and services are available that encourage users to report where and when they feel they are in an unsafe or threatening environment. This user-generated content may be used to build datasets, which can show areas that are considered ‘bad,’ and to map out ‘safe’ routes through such neighbourhoods.

Despite certain advantages, this data inherently carries the danger that streets or neighbourhoods become stigmatized and that existing prejudices are reinforced. Such stigmas might also result in negative consequences for property values and businesses, causing irreversible damage to certain parts of a municipality. Overcoming such an “evidence-based stigma” — even if based on biased, unreviewed, outdated, or inaccurate data — becomes nearly impossible and raises the question of how such data should be managed….(More)”.

Eight great applications of simulation in the policymaking process


Florence Engasser and Sonia Nasser at Nesta: “In a context where complexity and unpredictability increasingly form part of the decision-making process, our policymakers need new tools to help them experiment, explore different scenarios and weigh the trade-offs of a decision in a safe, pressure-free environment.

Simulation brings the potential for more creative, efficient and effective policymaking

The best way to understand how simulation can be used as a policy method is to look at examples. We’ve found eight really great examples from around the world, giving us a sense of the broad range of applications simulation can have in the policymaking process, from board games through to more traditional modelling techniques applied to new fields, and all the way to virtual reality….(More)”.

Open Data Risk Assessment


Report by the Future of Privacy Forum: “The transparency goals of the open data movement serve important social, economic, and democratic functions in cities like Seattle. At the same time, some municipal datasets about the city and its citizens’ activities carry inherent risks to individual privacy when shared publicly. In 2016, the City of Seattle declared in its Open Data Policy that the city’s data would be “open by preference,” except when doing so may affect individual privacy. To ensure its Open Data Program effectively protects individuals, Seattle committed to performing an annual risk assessment and tasked the Future of Privacy Forum (FPF) with creating and deploying an initial privacy risk assessment methodology for open data.

This Report provides tools and guidance to the City of Seattle and other municipalities navigating the complex policy, operational, technical, organizational, and ethical standards that support privacy-protective open data programs. Although there is a growing body of research regarding open data privacy, open data managers and departmental data owners need to be able to employ a standardized methodology for assessing the privacy risks and benefits of particular datasets internally, without access to a bevy of expert statisticians, privacy lawyers, or philosophers. By optimizing its internal processes and procedures, developing and investing in advanced statistical disclosure control strategies, and following a flexible, risk-based assessment process, the City of Seattle – and other municipalities – can build mature open data programs that maximize the utility and openness of civic data while minimizing privacy risks to individuals and addressing community concerns about ethical challenges, fairness, and equity.

This Report first describes inherent privacy risks in an open data landscape, with an emphasis on potential harms related to re-identification, data quality, and fairness. To address these risks, the Report includes a Model Open Data Benefit-Risk Analysis (“Model Analysis”). The Model Analysis evaluates the types of data contained in a proposed open dataset, the potential benefits – and concomitant risks – of releasing the dataset publicly, and strategies for effective de-identification and risk mitigation. This holistic assessment guides city officials to determine whether to release the dataset openly, in a limited access environment, or to withhold it from publication (absent countervailing public policy considerations). …(More)”.
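To make the decision flow concrete, here is a minimal, illustrative sketch in Python of how a benefit-risk triage of a candidate dataset could be encoded. It is not FPF’s Model Analysis: every field name, weight, and threshold below is a hypothetical placeholder, and the three possible outcomes simply mirror the options named in the Report (open release, limited access, withhold).

```python
# Illustrative only: a toy benefit-risk triage for a proposed open dataset.
# Field names, weights, and thresholds are hypothetical, not FPF's methodology.

from dataclasses import dataclass


@dataclass
class DatasetAssessment:
    contains_direct_identifiers: bool   # e.g. names, addresses
    reidentification_risk: float        # 0.0 (negligible) .. 1.0 (high), after de-identification
    public_benefit: float               # 0.0 (low) .. 1.0 (high), as judged by the data owner


def triage(a: DatasetAssessment) -> str:
    """Return a release recommendation for the assessed dataset."""
    if a.contains_direct_identifiers:
        return "withhold (or de-identify and reassess)"
    # Weigh residual re-identification risk against expected public benefit.
    if a.reidentification_risk > 0.6:
        return "limited access environment"
    if a.public_benefit >= a.reidentification_risk:
        return "release openly"
    return "limited access environment"


if __name__ == "__main__":
    sample = DatasetAssessment(contains_direct_identifiers=False,
                               reidentification_risk=0.2,
                               public_benefit=0.8)
    print(triage(sample))  # -> "release openly"
```

In practice, the Model Analysis described in the Report weighs far more factors (data types, de-identification strategy, community context, countervailing public policy considerations) and relies on human judgement rather than fixed numeric thresholds.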

It’s the (Democracy-Poisoning) Golden Age of Free Speech


Zeynep Tufekci in Wired: “…In today’s networked environment, when anyone can broadcast live or post their thoughts to a social network, it would seem that censorship ought to be impossible. This should be the golden age of free speech.

And sure, it is a golden age of free speech—if you can believe your lying eyes….

The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself. As a result, they don’t look much like the old forms of censorship at all. They look like viral or coordinated harassment campaigns, which harness the dynamics of viral outrage to impose an unbearable and disproportionate cost on the act of speaking out. They look like epidemics of disinformation, meant to undercut the credibility of valid information sources. They look like bot-fueled campaigns of trolling and distraction, or piecemeal leaks of hacked materials, meant to swamp the attention of traditional media.

These tactics usually don’t break any laws or set off any First Amendment alarm bells. But they all serve the same purpose that the old forms of censorship did: They are the best available tools to stop ideas from spreading and gaining purchase. They can also make the big platforms a terrible place to interact with other people.

Even when the big platforms themselves suspend or boot someone off their networks for violating “community standards”—an act that does look to many people like old-fashioned censorship—it’s not technically an infringement on free speech, even if it is a display of immense platform power. Anyone in the world can still read what the far-right troll Tim “Baked Alaska” Gionet has to say on the internet. What Twitter has denied him, by kicking him off, is attention.

Many more of the most noble old ideas about free speech simply don’t compute in the age of social media. John Stuart Mill’s notion that a “marketplace of ideas” will elevate the truth is flatly belied by the virality of fake news. And the famous American saying that “the best cure for bad speech is more speech”—a paraphrase of Supreme Court justice Louis Brandeis—loses all its meaning when speech is at once mass but also nonpublic. How do you respond to what you cannot see? How can you cure the effects of “bad” speech with more speech when you have no means to target the same audience that received the original message?

This is not a call for nostalgia. In the past, marginalized voices had a hard time reaching a mass audience at all. They often never made it past the gatekeepers who put out the evening news, who worked and lived within a few blocks of one another in Manhattan and Washington, DC. The best that dissidents could do, often, was to engineer self-sacrificing public spectacles that those gatekeepers would find hard to ignore—as US civil rights leaders did when they sent schoolchildren out to march on the streets of Birmingham, Alabama, drawing out the most naked forms of Southern police brutality for the cameras.

But back then, every political actor could at least see more or less what everyone else was seeing. Today, even the most powerful elites often cannot effectively convene the right swath of the public to counter viral messages. …(More)”.

Artificial intelligence and smart cities


Essay by Michael Batty at Urban Analytics and City Science: “…The notion of the smart city of course conjures up these images of such an automated future. Much of our thinking about this future, certainly in the more popular press, is about everything ranging from the latest app on our smartphones to driverless cars, while somewhat deeper concerns are about efficiency gains due to the automation of services ranging from transit to the delivery of energy. There is no doubt that routine and repetitive processes – algorithms if you like – are improving at an exponential rate in terms of the data they can process and the speed of execution, faithfully following Moore’s Law.

Pattern recognition techniques that lie at the basis of machine learning are highly routinized iterative schemes where the pattern in question – be it a signature, a face, the environment around a driverless car and so on – is computed as an elaborate averaging procedure which takes a series of elements of the pattern and weights them in such a way that the pattern can be reproduced perfectly by the combinations of elements of the original pattern and the weights. This is in essence the way neural networks work. When one says that they ‘learn’ and that the current focus is on ‘deep learning’, all that is meant is that with complex patterns and environments, many layers of neurons (elements of the pattern) are defined and the iterative procedures are run until there is a convergence with the pattern that is to be explained. Such processes are iterative, additive and not much more than sophisticated averaging, but they use machines that can operate virtually at the speed of light and thus process vast volumes of big data. When these kinds of algorithms can be run in real time – and many already can be – then there is the prospect of many kinds of routine behaviour being displaced. It is in this sense that AI might herald an era of truly disruptive processes. This, according to Brynjolfsson and McAfee, is beginning to happen as we reach the second half of the chess board.
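As a rough illustration of the “sophisticated averaging” Batty describes, the following toy Python sketch trains a single artificial neuron by iteratively nudging its weights until its output converges on a small target pattern. The data, learning rate, and stopping criterion are invented for illustration; real deep learning stacks many such layers over vastly larger datasets.

```python
# Toy "pattern": inputs (elements of the pattern) and the target response.
# All values are illustrative.
samples = [([1.0, 0.0], 1.0),
           ([0.0, 1.0], 0.0),
           ([1.0, 1.0], 1.0)]

weights = [0.0, 0.0]
learning_rate = 0.1

for step in range(1000):                      # iterate until convergence
    total_error = 0.0
    for inputs, target in samples:
        # Weighted combination of the pattern's elements.
        output = sum(w * x for w, x in zip(weights, inputs))
        error = target - output
        total_error += error ** 2
        # Nudge each weight so the combination reproduces the pattern better.
        weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
    if total_error < 1e-6:                    # convergence with the pattern
        break

print(step, [round(w, 3) for w in weights])   # converges to roughly [1.0, 0.0]
```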

The real issue in terms of AI involves problems that are peculiarly human. Much of our work is highly routinized and many of our daily actions and decisions are based on relatively straightforward patterns of stimulus and response. The big questions involve the extent to which those of our behaviours which are not straightforward can be automated. In fact, although machines are able to beat human players in many board games and there is now the prospect of machines beating the very machines that were originally designed to play against humans, the real power of AI may well come from collaboratives of man and machine, working together, rather than ever more powerful machines working by themselves. In the last 10 years, some of my editorials have tracked what is happening in the real-time city – the smart city as it is popularly called – which has become key to many new initiatives in cities. In fact, cities – particularly big cities, world cities – have become the flavour of the month, but the focus has not been on their long-term evolution but on how we use them on a minute-by-minute to week-by-week basis.

Many of the patterns that define the smart city on these short-term cycles can be predicted using AI, largely because they are highly routinized, but even for highly routine patterns there are limits on the extent to which we can explain them and reproduce them. Much advancement in AI within the smart city will come from automation of the routine, such as the use of energy, the delivery of location-based services, transit using information fed to operators and travellers in real time, and so on. I think we will see some quite impressive advances in these areas in the next decade and beyond. But the key issue in urban planning is not just this short term but the long term, and it is here that the prospects for AI are more problematic….(More)”.

Using new data sources for policymaking


Technical report by the Joint Research Centre (JRC) of the European Commission: “… synthesises the results of our work on using new data sources for policy-making. It reflects a recent shift from more general considerations in the area of Big Data to a more dedicated investigation of Citizen Science, and it summarizes the state of play. With this contribution, we start promoting Citizen Science as an integral component of public participation in policy in Europe.

The particular need to focus on the citizen dimension emerged due to (i) the increasing interest in the topic from policy Directorate-Generals (DGs) of the European Commission (EC); (ii) the considerable socio-economic impact policy making has on citizens’ lives and society as a whole; and (iii) the clear potential of citizens’ contributions to increase the relevance of policy making and the effectiveness of policies when addressing societal challenges.

We explicitly concentrate on Citizen Science (or public participation in scientific research) as a way to engage people in practical work, and to develop a mutual understanding between the participants from civil society, research institutions and the public sector by working together on a topic that is of common interest.

Acknowledging this new priority, this report concentrates on the topic of Citizen Science and presents already ongoing collaborations and recent achievements. The presented work particularly addresses environment-related policies, Open Science and aspects of Better Regulation. We then introduce the six phases of the ‘cyclic value chain of Citizen Science’ as a concept to frame citizen engagement in science for policy. We use this structure in order to detail the benefits and challenges of existing approaches – building on the lessons that we have learned so far from our own practical work and on knowledge exchanged with third parties. After outlining additional related policy areas, we sketch the future work that is required in order to overcome the identified challenges, and translate them into actions for ourselves and our partners.

Next steps include the following:

  • Develop a robust methodology for data collection, analysis and use of Citizen Science for EU policy;
  • Provide a platform as an enabling framework for applying this methodology to different policy areas, including the provision of best practices;
  • Offer guidelines for policy DGs in order to promote the use of Citizen Science for policy in Europe;
  • Experiment and evaluate possibilities of overarching methodologies for citizen engagement in science and policy, and their case specifics; and
  • Continue to advance interoperability and knowledge sharing between currently disconnected communities of practice. …(More)”.