
Stefaan Verhulst

Paper by Tarun Ramadorai, Antoine Uettwiller and Ansgar Walther: “We scrape a comprehensive set of US firms’ privacy policies to facilitate research on the supply of data privacy. We analyze these data with the help of expert legal evaluations, and also acquire data on firms’ web tracking activities. We find considerable and systematic variation in privacy policies along multiple dimensions including ease of access, length, readability, and quality, both within and between industries. Motivated by a simple theory of big data acquisition and usage, we analyze the relationship between firm size, knowledge capital intensity, and privacy supply. We find that large firms with intermediate data intensity have longer, legally watertight policies, but are more likely to share user data with third parties….(More)”.
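One of the dimensions the paper measures, readability, is commonly proxied by formulas such as Flesch reading ease, which penalizes long sentences and long words. The sketch below is an illustration of that general idea only, not the authors' code, and its syllable counter is a deliberately crude vowel-group heuristic:

```python
import re

def flesch_reading_ease(text):
    """Flesch reading ease:
    206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    Lower scores mean harder text; dense privacy policies tend to score low."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0

    def syllables(word):
        # crude heuristic: count contiguous vowel groups, at least one per word
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    total_syllables = sum(syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (total_syllables / len(words)))

# Short plain sentences score far higher than boilerplate legalese.
print(flesch_reading_ease("The cat sat. The dog ran."))
print(flesch_reading_ease(
    "Notwithstanding the aforementioned provisions, the undersigned "
    "hereby irrevocably relinquishes all entitlements."))
```

Scoring every scraped policy this way is one simple route to the kind of cross-firm readability comparison the paper describes.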

The Market for Data Privacy

Devin Coldewey at TechCrunch: “A new map of nearly all of Africa shows exactly where the continent’s 1.3 billion people live, down to the meter, which could help everyone from local governments to aid organizations. The map joins others like it from Facebook, created by running satellite imagery through a machine learning model.

It’s not exactly that there was some mystery about where people live, but the degree of precision matters. You may know that a million people live in a given region, and that about half are in the bigger city and another quarter in assorted towns. But that leaves hundreds of thousands only accounted for in the vaguest way.

Fortunately, you can always inspect satellite imagery and pick out the spots where small villages and isolated houses and communities are located. The only problem is that Africa is big. Really big. Manually labeling the satellite imagery even from a single mid-sized country like Gabon or Malawi would take a huge amount of time and effort. And for many applications of the data, such as coordinating the response to a natural disaster or distributing vaccinations, time lost is lives lost.

Better to get it all done at once then, right? That’s the idea behind Facebook’s Population Density Maps project, which had already mapped several countries over the last couple of years before the decision was made to take on the entire African continent….

“The maps from Facebook ensure we focus our volunteers’ time and resources on the places they’re most needed, improving the efficacy of our programs,” said Tyler Radford, executive director of the Humanitarian OpenStreetMap Team, one of the project’s partners.

The core idea is straightforward: Match census data (how many people live in a region) with structure data derived from satellite imagery to get a much better idea of where those people are located.

“With just the census data, the best you can do is assume that people live everywhere in the district – buildings, fields, and forests alike,” said Facebook engineer James Gill. “But once you know the building locations, you can skip the fields and forests and only allocate the population to the buildings. This gives you very detailed 30 meter by 30 meter population maps.”
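The allocation step Gill describes can be sketched in a few lines: spread each district's census count evenly over the grid cells where buildings were detected, instead of over all of its land. (A minimal illustration with invented district names and counts; Facebook's actual pipeline is far more involved.)

```python
from collections import defaultdict

def allocate_population(census, building_cells):
    """Spread each district's census count evenly over its building cells.

    census: {district: population}
    building_cells: {district: [cell_id, ...]}  # 30m x 30m cells with buildings
    Returns {cell_id: estimated_population}.
    """
    estimates = defaultdict(float)
    for district, population in census.items():
        cells = building_cells.get(district, [])
        if not cells:
            continue  # no detected structures: leave this district unallocated
        per_cell = population / len(cells)
        for cell in cells:
            estimates[cell] += per_cell
    return dict(estimates)

# Hypothetical district of 900 people with buildings in only three cells:
census = {"district_a": 900}
cells = {"district_a": ["c1", "c2", "c3"]}
print(allocate_population(census, cells))  # {'c1': 300.0, 'c2': 300.0, 'c3': 300.0}
```

The payoff is exactly the one quoted above: fields and forests get zero population, and everything concentrates in the built-up cells.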

That’s several times more accurate than any extant population map of this size. The analysis is done by a machine learning agent trained on OpenStreetMap data from all over the world, where people have labeled and outlined buildings and other features.

First the huge amount of Africa’s surface that obviously has no structure had to be removed from consideration, reducing the amount of space the team had to evaluate by a factor of a thousand or more. Then, using a region-specific algorithm (because things look a lot different in coastal Morocco than they do in central Chad), the model identifies patches that contain a building….(More)”.

Facebook’s AI team maps the whole population of Africa

Helen Margetts and Cosmina Dorobantu at Nature: “People produce more than 2.5 quintillion bytes of data each day. Businesses are harnessing these riches using artificial intelligence (AI) to add trillions of dollars in value to goods and services each year. Amazon dispatches items it anticipates customers will buy to regional hubs before they are purchased. Thanks to the vast extractive might of Google and Facebook, every bakery and bicycle shop is the beneficiary of personalized targeted advertising.

But governments have been slow to apply AI to hone their policies and services. The reams of data that governments collect about citizens could, in theory, be used to tailor education to the needs of each child or to fit health care to the genetics and lifestyle of each patient. They could help to predict and prevent traffic deaths, street crime or the necessity of taking children into care. Huge costs of floods, disease outbreaks and financial crises could be alleviated using state-of-the-art modelling. All of these services could become cheaper and more effective.

This dream seems rather distant. Governments have long struggled with much simpler technologies. Flagship policies that rely on information technology (IT) regularly flounder. The Affordable Care Act of former US president Barack Obama nearly crumbled in 2013 when HealthCare.gov, the website enabling Americans to enrol in health insurance plans, kept crashing. Universal Credit, the biggest reform to the UK welfare state since the 1940s, is widely regarded as a disaster because of its failure to pay claimants properly. It has also wasted £837 million (US$1.1 billion) on developing one component of its digital system that was swiftly decommissioned. Canada’s Phoenix pay system, introduced in 2016 to overhaul the federal government’s payroll process, has remunerated 62% of employees incorrectly in each fiscal year since its launch. And My Health Record, Australia’s digital health-records system, saw more than 2.5 million people opt out by the end of January this year over privacy, security and efficacy concerns — roughly 1 in 10 of those who were eligible.

Such failures matter. Technological innovation is essential for the state to maintain its position of authority in a data-intensive world. The digital realm is where citizens live and work, shop and play, meet and fight. Prices for goods are increasingly set by software. Work is mediated through online platforms such as Uber and Deliveroo. Voters receive targeted information — and disinformation — through social media.

Thus the core tasks of governments, such as enforcing regulation, setting employment rights and ensuring fair elections, require an understanding of data and algorithms. Here we highlight the main priorities, drawn from our experience of working with policymakers at The Alan Turing Institute in London….(More)”.

Rethink government with AI

Caroline Nickerson at SciStarter: “Citizen science has been around as long as science, but innovative approaches are opening doors to more and deeper forms of public participation.

Below, our editors spotlight a few projects that feature new approaches, novel research, or low-cost instruments. …

Colony B: Unravel the secrets of microscopic life! Colony B is a mobile gaming app developed at McGill University that enables you to contribute to research on microbes. Collect microbes and grow your colony in a fast-paced puzzle game that advances important scientific research.

AirCasting: AirCasting is an open-source, end-to-end solution for collecting, displaying, and sharing health and environmental data using your smartphone. The platform consists of wearable sensors, including a palm-sized air quality monitor called the AirBeam, that detect and report changes in your environment. (Android only.)

LingoBoingo: Getting computers to understand language requires large amounts of linguistic data and “correct” answers to language tasks (what researchers call “gold standard annotations”). Simply by playing language games online, you can help archive languages and create the linguistic data used by researchers to improve language technologies. These games are in English, French, and a new “multi-lingual” category.

TreeSnap: Help our nation’s trees and protect human health in the process. Invasive diseases and pests threaten the health of America’s forests. With the TreeSnap app, you can record the location and health of particular tree species–those unharmed by diseases that have wiped out other species. Scientists then use the collected information to locate candidates for genetic sequencing and breeding programs. Tag trees you find in your community, on your property, or out in the wild to help scientists understand forest health….(More)”.

Innovation Meets Citizen Science

Ben Paynter at FastCompany: “Several years ago, one of the eventual founders of One Concern nearly died in a tragic flood. Today, the company specializes in using artificial intelligence to predict how natural disasters are unfolding in real time on a city-block-level basis, in order to help disaster responders save as many lives as possible….

To fix that, One Concern debuted Flood Concern in late 2018. It creates map-based visualizations of where water surges may hit hardest, up to five days ahead of an impending storm. For cities, that includes not just time-lapse breakdowns of how the water will rise, how fast it could move, and what direction it will be flowing, but also what structures will get swamped or washed away, and how differing mitigation efforts–from levee building to dam releases–will impact each scenario. It’s the winner of Fast Company’s 2019 World Changing Ideas Awards in the AI and Data category.


So far, Flood Concern has been retroactively tested against events like Hurricane Harvey to show that it could have predicted what areas would be most impacted well ahead of the storm. The company, which was founded in Silicon Valley in 2015, started with one of that region’s pressing threats: earthquakes. It’s since earned contracts with cities like San Francisco, Los Angeles, and Cupertino, as well as private insurance companies….

One Concern’s first offering, dubbed Seismic Concern, takes existing information from satellite images and building permits to figure out what kind of ground structures are built on, and what might happen if they started shaking. If a big one hits, the program can extrapolate from the epicenter to suggest the likeliest places for destruction, and then adjust as more data from things like 911 calls and social media gets factored in….(More)”.


This tech tells cities when floods are coming–and what they will destroy

Open Access Book by Ben Green: “Smart cities, where technology is used to solve every problem, are hailed as futuristic urban utopias. We are promised that apps, algorithms, and artificial intelligence will relieve congestion, restore democracy, prevent crime, and improve public services. In The Smart Enough City, Ben Green warns against seeing the city only through the lens of technology; taking an exclusively technical view of urban life will lead to cities that appear smart but under the surface are rife with injustice and inequality. He proposes instead that cities strive to be “smart enough”: to embrace technology as a powerful tool when used in conjunction with other forms of social change—but not to value technology as an end in itself….(More)”.

The Smart Enough City

European Commission: “Following the publication of the draft ethics guidelines in December 2018 to which more than 500 comments were received, the independent expert group presents today their ethics guidelines for trustworthy artificial intelligence.

Trustworthy AI should respect all applicable laws and regulations, as well as a series of requirements; specific assessment lists aim to help verify the application of each of the key requirements:

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  • Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  • Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  • Transparency: The traceability of AI systems should be ensured.
  • Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  • Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

In summer 2019, the Commission will launch a pilot phase involving a wide range of stakeholders. Already today, companies, public administrations and organisations can sign up to the European AI Alliance and receive a notification when the pilot starts.

Following the pilot phase, in early 2020, the AI expert group will review the assessment lists for the key requirements, building on the feedback received. Building on this review, the Commission will evaluate the outcome and propose any next steps….(More)”.

Ethics guidelines for trustworthy AI

Sophie Haigney at The New York Times: “Taja Lindley, a Brooklyn-based interdisciplinary artist and activist, will spend the next year doing an unconventional residency — she’ll be collaborating with the New York City Department of Health and Mental Hygiene, working on a project that deals with unequal birth outcomes and maternal mortality for pregnant and parenting black people in the Bronx.

Ms. Lindley is one of four artists who were selected this year for the City’s Public Artists in Residence program, or PAIR, which is managed by New York City’s Department of Cultural Affairs. The program, which began in 2015, matches artists and public agencies, and the artists are tasked with developing creative projects around social issues.

Ms. Lindley will be working with the Tremont Neighborhood Health Action Center, part of the department of health, in the Bronx. “People who are black are met with skepticism, minimized and dismissed when they seek health care,” Ms. Lindley said, “and the voices of black people can really shift medical practices and city practices, so I’ll really be centering those voices.” She said that performance, film and storytelling are likely to be incorporated in her project.

The other three artists selected this year are the artist Laura Nova, who will be in residence with the Department for the Aging; the artist Julia Weist, who will be in residence with the Department of Records and Information Services; and the artist Janet Zweig, who will be in residence with the Mayor’s Office of Sustainability. Each will receive $40,000. There is a three-month-long research phase and then the artists will spend a minimum of nine months creating and producing their work….(More)”.

Artists as ‘Creative Problem-Solvers’ at City Agencies

Editorial by David Murakami Wood and Torin Monahan of Special Issue of Surveillance and Society: “This editorial introduces this special responsive issue on “platform surveillance.” We develop the term platform surveillance to account for the manifold and often insidious ways that digital platforms fundamentally transform social practices and relations, recasting them as surveillant exchanges whose coordination must be technologically mediated and therefore made exploitable as data. In the process, digital platforms become dominant social structures in their own right, subordinating other institutions, conjuring or sedimenting social divisions and inequalities, and setting the terms upon which individuals, organizations, and governments interact.

Emergent forms of platform capitalism portend new governmentalities, as they gradually draw existing institutions into alignment or harmonization with the logics of platform surveillance while also engendering subjectivities (e.g., the gig-economy worker) that support those logics. Because surveillance is essential to the operations of digital platforms, because it structures the forms of governance and capital that emerge, the field of surveillance studies is uniquely positioned to investigate and theorize these phenomena….(More)”.

Platform Surveillance

Case Study by Cities of Service: “Mexico City was faced with a massive task: drafting a constitution. Mayor Miguel Ángel Mancera, who oversaw the drafting and adoption of the 212-page document, hoped to democratize the process. He appointed a drafting committee made up of city residents and turned to the Laboratório para la Ciudad (LabCDMX) to engage everyday citizens. LabCDMX conducted a comprehensive survey and employed the online platform Change.org to solicit ideas for the new constitution. Several petitioners without a legal or political background seized on the opportunity and made their voices heard with successful proposals on topics like green space, waterway recuperation, and LGBTI rights in a document that will have a lasting impact on Mexico City’s governance….(More)”.

Crowdsourcing a Constitution
