
Stefaan Verhulst

Kevin Guo at TechCrunch: “…Anyone with experience in the artificial intelligence space will tell you that the quality and quantity of training data are among the most important inputs in building real-world-functional AI. This is why today’s large technology companies continue to collect and keep detailed consumer data, despite recent public backlash. From search engines, to social media, to self-driving cars, data — in some cases even more than the underlying technology itself — is what drives value in today’s technology companies.

It should be no surprise then that autonomous vehicle companies do not publicly share data, even in instances of deadly crashes. When it comes to autonomous vehicles, the public interest (making safe self-driving cars available as soon as possible) is clearly at odds with corporate interests (making as much money as possible on the technology).

We need to create industry and regulatory environments in which autonomous vehicle companies compete based upon the quality of their technology — not just upon their ability to spend hundreds of millions of dollars to collect and silo as much data as possible (yes, this is how much gathering this data costs). In today’s environment the inverse is true: autonomous car manufacturers are focused on gathering as many miles of data as possible, with the intention of feeding more information into their models than their competitors, all the while avoiding working together….

This data is diverse, yet public — I am not suggesting that people hand over private, privileged data, but actively pool and combine what the cars are seeing. There’s a reason that many of the autonomous car companies are driving millions of virtual miles — they’re attempting to get as much active driving data as they can. Beyond the fact that they drove those miles, what truly makes that data something that they have to hoard? By sharing these miles, by seeing as much of the world in as much detail as possible, these companies can focus on making smarter, better autonomous vehicles and bringing them to market faster.

If you’re reading this and thinking it’s deeply unfair, I encourage you to once again consider that 40,000 people are preventably dying every year in America alone. If you are not compelled by the massive life-saving potential of the technology, consider that publicly licensable self-driving data sets would accelerate innovation by removing a substantial portion of the capital barrier-to-entry in the space and increasing competition….(More)”

Driven to safety — it’s time to pool our data

Nir Kshetri at The Conversation: “When a Chinese consumer buys a package labeled “Australian beef,” there’s only a 50-50 chance the meat inside is, in fact, Australian beef. It could just as easily contain rat, dog, horse or camel meat – or a mixture of them all. It’s gross and dangerous, but also costly.

Fraud in the global food industry is a multi-billion-dollar problem that has lingered for years, duping consumers and even making them ill. Food manufacturers around the world are concerned – as many as 39 percent of them are worried that their products could be easily counterfeited, and 40 percent say food fraud is hard to detect.

In researching blockchain for more than three years, I have become convinced that this technology’s potential to prevent fraud and strengthen security could fight agricultural fraud and improve food safety. Many companies agree, and are already running various tests, including tracking wine from grape to bottle and even following individual coffee beans through international trade.

Tracing food items

An early trial of a blockchain system to track food from farm to consumer was in 2016, when Walmart collected information about pork being raised in China, where consumers are rightly skeptical about sellers’ claims of what their food is and where it’s from. Employees at a pork farm scanned images of farm inspection reports and livestock health certificates, storing them in a secure online database where the records could not be deleted or modified – only added to.

As the animals moved from farm to slaughter to processing, packaging and then to stores, the drivers of the freight trucks played a key role. At each step, they would collect documents detailing the shipment, storage temperature, other inspection and safety reports, and official stamps from the authorities who reviewed them – just as they normally would. In Walmart’s test, however, the drivers would photograph those documents and upload them to the blockchain-based database. The company controlled the computers running the database, but government agencies’ systems could also be involved, to further ensure data integrity.

As the pork was packaged for sale, a sticker was put on each container, displaying a smartphone-readable code that would link to that meat’s record on the blockchain. Consumers could scan the code right in the store and assure themselves that they were buying exactly what they thought they were. More recent advances in the technology of the stickers themselves have made them more secure and counterfeit-resistant.
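The property doing the work in this test is append-only storage: records can be added but never edited or removed, because each new entry is chained to a hash of the one before it, so altering any earlier record invalidates everything after it. The article does not detail Walmart’s actual implementation; the Python sketch below is only a toy illustration of that idea, with every name invented for the example:

```python
import hashlib
import json
import time


class AppendOnlyLedger:
    """Toy hash-chained ledger: records can be appended, never edited or deleted."""

    def __init__(self):
        self._records = []

    def append(self, product_code: str, document: dict) -> str:
        prev_hash = self._records[-1]["hash"] if self._records else "0" * 64
        entry = {
            "product_code": product_code,
            "document": document,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        # The hash covers prev_hash, so changing any earlier record
        # breaks every hash that follows it.
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._records.append(entry)
        return entry["hash"]

    def history(self, product_code: str) -> list:
        """What a consumer's sticker scan retrieves: every record for one item."""
        return [r for r in self._records if r["product_code"] == product_code]

    def verify(self) -> bool:
        """Recompute the whole chain; any tampering surfaces as a mismatch."""
        prev = "0" * 64
        for r in self._records:
            body = {k: v for k, v in r.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if r["prev_hash"] != prev or r["hash"] != expected:
                return False
            prev = r["hash"]
        return True


ledger = AppendOnlyLedger()
ledger.append("PORK-0001", {"stage": "farm", "inspection": "passed"})
ledger.append("PORK-0001", {"stage": "transport", "storage_temp_c": 4.1})
print(len(ledger.history("PORK-0001")))  # 2
print(ledger.verify())                   # True until any stored record is altered
```

A consumer’s code scan corresponds to `history()`, while `verify()` is the tamper check that auditors – or the government systems the article mentions – could run independently.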

Walmart did similar tests on mangoes imported to the U.S. from Latin America. The company found that it took only 2.2 seconds for consumers to find out an individual fruit’s weight, variety, growing location, time it was harvested, date it passed through U.S. customs, when and where it was sliced, which cold-storage facility the sliced mango was held in and for how long it waited before being delivered to a store….(More)”.

Blockchain systems are tracking food safety and origins

Aliya Ram and Madhumita Murgia at the Financial Times: “The UK government should force Google, Apple, Uber and others to share their mapping data so that other companies can develop autonomous cars, drones and transport apps, according to an influential campaign group. The Open Data Institute, co-founded by Tim Berners-Lee at MIT and Nigel Shadbolt, artificial intelligence professor at the University of Oxford, warned on Tuesday that big tech companies had become “data monopolies”.

The group said the UK’s Geospatial Commission should ask the companies to share map data with rivals and the public sector in a collaborative database or else force them to do so with legislation.

“Google along with all of the other companies like Apple and Uber are trying to deliver an excellent service to their clients and customers,” said Jeni Tennison, chief executive of the Open Data Institute. “The status quo is not optimal because all of the organisations we are talking about are replicating effort. This means that people are overall not getting the best service from the data that is being collected and maintained. The large companies are becoming more like data monopolies and that doesn’t give us the best value from our data.”

On Tuesday, the UK government said its Office for Artificial Intelligence had teamed up with the ODI to pilot two new “data trusts” — legal structures that allow multiple groups to share anonymised information. Data trusts have been described as a good way for small businesses to compete with large rivals that have lots of data, but only a handful have been set up so far.

The trusts will be designed over the next few months and could be used to share data, for example, about cities, the environment, biodiversity and transport. Ms Tennison said the ODI was also working on a data trust with the mayor of London, Sadiq Khan, and local authorities in Greenwich to see how real-time data from the internet of things and sensors could be shared with start-ups to solve problems in the city. London’s transport authority has said ride-hailing apps would be forced to turn over travel data to the government. Uber now provides public access to its data on traffic and travel conditions in the UK….(More) (Full Report)”.

Force Google, Apple and Uber to share mapping data, UK advised

Introductory paper by Wenhong Chen and Anabel Quan-Haase for a Special Issue of the Social Science Computer Review: “The hype around big data does not seem to abate, nor do the scandals. Privacy breaches in the collection, use, and sharing of big data have affected all the major tech players, be it Facebook, Google, Apple, or Uber, and go beyond the corporate world, including governments, municipalities, and educational and health institutions. What has come to light is that, enabled by the rapid growth of social media and mobile apps, various stakeholders collect and use large amounts of data, disregarding ethics and politics.

As big data touch on many realms of daily life and have profound impacts in the social world, the scrutiny around big data practice becomes increasingly relevant. This special issue investigates the ethics and politics of big data using a wide range of theoretical and methodological approaches. Together, the articles provide new understandings of the many dimensions of big data ethics and politics, showing it is important to understand and increase awareness of the biases and limitations inherent in big data analysis and practices….(More)”

Big Data Ethics and Politics: Toward New Understandings

Springwise: “Safe & the City is a free app designed to help users identify which streets are safe for them. Sexual harassment and violent crimes against women in particular are a big problem in many urban environments. This app uses crowdsourced data and crime statistics to help female pedestrians stay safe.

It is a development of traditional navigation apps, but instead of simply providing the fastest route, it also offers information on which route is safest. The Live Map relies on user data: victims can report harassment or assault on the app, and the information is then available to other users to warn them of a potential threat in the area. Incidents can be ranked from a feeling of discomfort or threat, to verbal harassment, to physical assault. Whilst navigating, the Live Map can also alert users to potentially dangerous intersections ahead. This reminds people to stay alert and not focus only on their phone while walking.
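Stripped to its data model, a service like this comes down to geotagged reports carrying one of those three severity tiers, plus a proximity query along the walking route. Safe & the City has not published its internals; the Python sketch below is purely illustrative, with all names and the 100-metre radius chosen for the example:

```python
from dataclasses import dataclass
from enum import IntEnum
import math


class Severity(IntEnum):
    # The three tiers the article describes, lowest to highest.
    DISCOMFORT = 1
    VERBAL_HARASSMENT = 2
    PHYSICAL_ASSAULT = 3


@dataclass
class IncidentReport:
    lat: float
    lon: float
    severity: Severity


def nearby_alerts(reports, lat, lon, radius_m=100.0):
    """Return reports within radius_m of the user's position, worst first."""
    def dist_m(r):
        # Equirectangular approximation - adequate at city scale.
        dx = math.radians(r.lon - lon) * math.cos(math.radians(lat))
        dy = math.radians(r.lat - lat)
        return 6_371_000 * math.hypot(dx, dy)

    hits = [r for r in reports if dist_m(r) <= radius_m]
    return sorted(hits, key=lambda r: r.severity, reverse=True)


reports = [
    IncidentReport(51.5074, -0.1278, Severity.VERBAL_HARASSMENT),
    IncidentReport(51.5075, -0.1280, Severity.DISCOMFORT),
    IncidentReport(51.6000, -0.2000, Severity.PHYSICAL_ASSAULT),  # too far away
]
for r in nearby_alerts(reports, 51.5074, -0.1278):
    print(r.severity.name)
```

Running this check continuously against the user’s position is what turns a static crime map into the kind of en-route alerting the article describes.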

The Safe Sites feature is also a way of incorporating the community. Businesses and organisations can register to be Safe Sites. They will then receive training from SafeSeekers in how to provide the best support and assistance in emergency situations. The locations of such Sites will be available on the app, should a user need one.

The iOS app launched in March 2018 on International Women’s Day. It is currently only available in London…(More)”

Crowdsourced data informs women which streets are safe

Richard Harris at NPR: “A startup genetics company says it’s now offering to sequence your entire genome at no cost to you. In fact, you would own the data and may even be able to make money off it.

Nebula Genomics, created by the prominent Harvard geneticist George Church and his lab colleagues, seeks to upend the usual way genomic information is owned.

Today, companies like 23andMe make some of their money by scanning your genetic patterns and then selling that information to drug companies for use in research. (You choose whether to opt in.)

Church says his new enterprise leaves ownership and control of the data in an individual’s hands. And the genomic analysis Nebula will perform is much more detailed than what 23andMe and similar companies offer.

Nebula will do a full genome sequence, rather than a snapshot of key gene variants. That wider range of genetic information would make the data more appealing to biologists and biotech and pharmaceutical companies….

Church’s approach is part of a trend that’s pushing back against the multibillion-dollar industry to buy and sell medical information. Right now, companies reap those profits and control the data.

“Patients should have the right to decide for themselves whether they want to share their medical data, and, if so, with whom,” Adam Tanner, at Harvard’s Institute for Quantitative Social Science, says in an email. “Efforts to empower people to fine-tune the fate of their medical information are a step in the right direction.” Tanner, author of a book on the subject of the trade in medical data, isn’t involved in Nebula.

The current system is “very paternalistic,” Church says. He aims to give people complete control over who gets access to their data, and let individuals decide whether or not to sell the information, and to whom.

“In this case, everything is private information, stored on your computer or a computer you designate,” Church says. It can be encrypted so nobody can read it, even you, if that’s what you want.

Drug companies interested in studying, say, diabetes patients would ask Nebula to identify people in their system who have the disease. Nebula would then identify those individuals by launching an encrypted search of participants.

People who have indicated they’re interested in selling their genetic data to a company would then be given the option of granting access to the information, along with medical data that person has designated.
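The article sketches a three-step protocol: a buyer states a condition of interest, the broker matches it against owner-held data, and each matched individual decides whether to grant access. Nebula’s actual encrypted-search machinery is not described here; the Python below is only a schematic of that consent flow, with every class and name invented for illustration:

```python
from dataclasses import dataclass, field


@dataclass
class GenomeOwner:
    """An individual holding their own (notionally encrypted) genome and medical data."""
    owner_id: str
    conditions: set                    # e.g. {"diabetes"}; stays on the owner's machine
    willing_to_sell: bool = False      # the owner's standing choice, not the broker's
    grants: dict = field(default_factory=dict)  # buyer -> data the owner designated

    def matches(self, condition: str) -> bool:
        # A real system would run this under encryption on the owner's side;
        # here it is a plain lookup, for illustration only.
        return condition in self.conditions

    def consider_offer(self, buyer: str) -> bool:
        """Consent lives with the owner: access is granted only if they opted in."""
        if self.willing_to_sell:
            self.grants[buyer] = {"genome": "<encrypted blob>",
                                  "medical": "<designated subset>"}
        return self.willing_to_sell


def broker_search(owners, condition, buyer):
    """The broker's role: find matches, then put the decision to each owner."""
    matched = (o for o in owners if o.matches(condition))
    return [o.owner_id for o in matched if o.consider_offer(buyer)]


alice = GenomeOwner("alice", {"diabetes"}, willing_to_sell=True)
bob = GenomeOwner("bob", {"diabetes"})      # matches, but never opted in
carol = GenomeOwner("carol", {"asthma"})
print(broker_search([alice, bob, carol], "diabetes", "pharma_co"))  # ['alice']
```

The design point sits in `consider_offer`: the broker can locate candidates, but data moves only when the owner says yes.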

Other companies are also springing up to help people control — and potentially profit from — their medical data. EncrypGen lets people offer up their genetic data, though customers have to provide their own DNA sequence. Hu-manity.co is also trying to establish a system in which people can sell their medical data to pharmaceutical companies….(More)”.

Startup Offers To Sequence Your Genome Free Of Charge, Then Let You Profit From It

Blog by Giovanni Buttarelli: “…There are few authorities monitoring the impact of new technologies on fundamental rights so closely and intensively as data protection and privacy commissioners. At the International Conference of Data Protection and Privacy Commissioners, the 40th ICDPPC (which the EDPS had the honour to host), they continued the discussion on AI which began in Marrakesh two years ago with a reflection paper prepared by EDPS experts. In the meantime, many national data protection authorities have invested considerable efforts and provided important contributions to the discussion. To name only a few, the data protection authorities from Norway, France, the UK and Schleswig-Holstein have published research and reflections on AI, ethics and fundamental rights. We all see that some applications of AI raise immediate concerns about data protection and privacy; but it also seems generally accepted that there are far wider-reaching ethical implications, as a group of AI researchers also recently concluded. Data protection and privacy commissioners have now made a forceful intervention by adopting a declaration on ethics and data protection in artificial intelligence which spells out six principles for the future development and use of AI – fairness, accountability, transparency, privacy by design, empowerment and non-discrimination – and demands concerted international efforts to implement such governance principles. Conference members will contribute to these efforts, including through a new permanent working group on Ethics and Data Protection in Artificial Intelligence.

The ICDPPC was also chosen by an alliance of NGOs and individuals, The Public Voice, as the moment to launch its own Universal Guidelines on Artificial Intelligence (UGAI). The twelve principles laid down in these guidelines extend and complement those of the ICDPPC declaration.

We are only at the beginning of this debate. More voices will be heard: think tanks such as CIPL are coming forward with their suggestions, and so will many other organisations.

At international level, the Council of Europe has invested efforts in assessing the impact of AI, and has announced a report and guidelines to be published soon. The European Commission has appointed an expert group which will, among other tasks, give recommendations on future-related policy development and on ethical, legal and societal issues related to AI, including socio-economic challenges.

As I already pointed out in an earlier blogpost, it is our responsibility to ensure that the technologies which will determine the way we and future generations communicate, work and live together, are developed in such a way that the respect for fundamental rights and the rule of law are supported and not undermined….(More)”.

What do we learn from Machine Learning?

Gabriella Capone at apolitical (a winner of the 2018 Apolitical Young Thought Leaders competition): “Rain ravaged Gdańsk in 2016, taking the lives of two residents and causing millions of euros in damage. Despite its 700-year history of flooding, the city was overwhelmed by these especially devastating floods. Gdańsk also sits on one of the stretches of European coast most exposed to rising sea levels. It needed a new approach to avoid similar outcomes in the next, inevitable encounter with this worsening problem.

Bringing in citizens to tackle such a difficult issue was not the obvious course of action. Yet this was the proposal of Dr. Marcin Gerwin, an advocate from a neighbouring town who paved the way for Poland’s first participatory budgeting experience.

Mayor Adamowicz of Gdańsk agreed and, within a year, they welcomed about 60 people to the first Citizens Assembly on flood mitigation. Implemented by Dr. Gerwin and a team of coordinators, the Assembly convened over four Saturdays, heard expert testimony, and devised solutions.

The Assembly was not only deliberative and educational but also action-oriented. Mayor Adamowicz committed to implement proposals on which 80% or more of participants agreed. The final 16 proposals included the investment of nearly US$40 million in monitoring systems and infrastructure, subsidies to incentivise individuals to improve water management on their property, and an educational “Do Not Flood” campaign to highlight emergency resources.

It may seem risky to outsource the solving of difficult issues to citizens. Yet, when properly designed, public problem-solving can produce creative resolutions to formidable challenges. Beyond Poland, public problem-solving initiatives in Mexico and the United States are making headway on pervasive issues, from flooding to air pollution, to technology in public spaces.

The GovLab, with support from the Tinker Foundation, is analysing what makes for more successful public problem-solving as part of its City Challenges program. Below, I provide a glimpse into the types of design choices that can amplify the impact of public problem-solving….(More)

It’s time to let citizens tackle the wickedest public problems

Aaron Smith at the Pew Research Center: “Algorithms are all around us, utilizing massive stores of data and complex analytics to make decisions with often significant impacts on humans. They recommend books and movies for us to read and watch, surface news stories they think we might find relevant, estimate the likelihood that a tumor is cancerous and predict whether someone might be a criminal or a worthwhile credit risk. But despite the growing presence of algorithms in many aspects of daily life, a Pew Research Center survey of U.S. adults finds that the public is frequently skeptical of these tools when used in various real-life situations.

This skepticism spans several dimensions. At a broad level, 58% of Americans feel that computer programs will always reflect some level of human bias – although 40% think these programs can be designed in a way that is bias-free. And in various contexts, the public worries that these tools might violate privacy, fail to capture the nuance of complex situations, or simply put the people they are evaluating in an unfair situation. Public perceptions of algorithmic decision-making are also often highly contextual. The survey shows that otherwise similar technologies can be viewed with support or suspicion depending on the circumstances or on the tasks they are assigned to do….

The following are among the major findings.

The public expresses broad concerns about the fairness and acceptability of using computers for decision-making in situations with important real-world consequences

Majorities of Americans find it unacceptable to use algorithms to make decisions with real-world consequences for humans

By and large, the public views these examples of algorithmic decision-making as unfair to the people the computer-based systems are evaluating. Most notably, only around one-third of Americans think that the video job interview and personal finance score algorithms would be fair to job applicants and consumers. When asked directly whether they think the use of these algorithms is acceptable, a majority of the public says that they are not acceptable. Two-thirds of Americans (68%) find the personal finance score algorithm unacceptable, and 67% say the computer-aided video job analysis algorithm is unacceptable….

Attitudes toward algorithmic decision-making can depend heavily on context

Despite the consistencies in some of these responses, the survey also highlights the ways in which Americans’ attitudes toward algorithmic decision-making can depend heavily on the context of those decisions and the characteristics of the people who might be affected….

When it comes to the algorithms that underpin the social media environment, users’ comfort level with sharing their personal information also depends heavily on how and why their data are being used. A 75% majority of social media users say they would be comfortable sharing their data with those sites if it were used to recommend events they might like to attend. But that share falls to just 37% if their data are being used to deliver messages from political campaigns.

Across age groups, social media users are comfortable with their data being used to recommend events - but wary of that data being used for political messaging

In other instances, different types of users offer divergent views about the collection and use of their personal data. For instance, about two-thirds of social media users younger than 50 find it acceptable for social media platforms to use their personal data to recommend connecting with people they might want to know. But that view is shared by fewer than half of users ages 65 and older….(More)”.

Public Attitudes Toward Computer Algorithms

Book by Rob Reich: “Is philanthropy, by its very nature, a threat to today’s democracy? Though we may laud wealthy individuals who give away their money for society’s benefit, Just Giving shows how such generosity not only isn’t the unassailable good we think it to be but might also undermine democratic values and set back aspirations of justice. Big philanthropy is often an exercise of power, the conversion of private assets into public influence. And it is a form of power that is largely unaccountable, often perpetual, and lavishly tax-advantaged. The affluent—and their foundations—reap vast benefits even as they influence policy without accountability. And small philanthropy, or ordinary charitable giving, can be problematic as well. Charity, it turns out, does surprisingly little to provide for those in need and sometimes worsens inequality.

These outcomes are shaped by the policies that define and structure philanthropy. When, how much, and to whom people give is influenced by laws governing everything from the creation of foundations and nonprofits to generous tax exemptions for donations of money and property. Rob Reich asks: What attitude and what policies should democracies have concerning individuals who give money away for public purposes? Philanthropy currently fails democracy in many ways, but Reich argues that it can be redeemed. Differentiating between individual philanthropy and private foundations, he argues that the aim of mass giving should be the decentralization of power in the production of public goods, such as the arts, education, and science. For foundations, the goal should be what Reich terms “discovery,” or long-time-horizon innovations that enhance democratic experimentalism. Philanthropy, when properly structured, can play a crucial role in supporting a strong liberal democracy.

Just Giving investigates the ethical and political dimensions of philanthropy and considers how giving might better support democratic values and promote justice….(More)”

Just Giving: Why Philanthropy Is Failing Democracy and How It Can Do Better
