
Stefaan Verhulst

CCCBLab: “In recent years we have been witnessing a constant trickle of news on artificial intelligence, machine learning and computer vision. We are told that machines learn, see, create… and all this builds up a discourse based on novelty, on a possible future and on a series of worries and hopes. It is difficult, sometimes, to figure out in this landscape which are real developments, and which are fantasies or warnings. And, undoubtedly, this fog that surrounds it forms part of the power that we grant, both in the present and on credit, to these tools, and of the negative and positive concerns that they arouse in us. Many of these discourses may fall into the field of false debates or, at least, of the return of old debates. Thus, in the classical artistic field, associated with the discourse on creation and authorship, there is discussion regarding the status to be accorded to images created with these tools. (Yet wasn’t the argument against photography in art that it was an image created automatically and without human participation? And wasn’t that also an argument in favour of taking it and using it to put an end to a certain idea of art?)

Metaphors are essential in the discourse on all digital tools and the power that they have. Are expressions such as “intelligence”, “vision”, “learning”, “neural” and the entire range of similar words the most adequate for defining these types of tools? Probably not, above all if their metaphorical nature is sidestepped. We would not understand them in the same way if we called them tools of probabilistic classification or if instead of saying that an artificial intelligence “has painted” a Rembrandt, we said that it has produced a statistical reproduction of his style (something which is still surprising, and to be celebrated, of course). These names construct an entity for these tools that endows them with a supposed autonomy and independence upon which their future authority is based.

Because that is what it’s about in many discourses: constructing a characterisation that legitimises an objective or non-human capacity in data analysis….
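To make the point about metaphors concrete, here is a minimal sketch, with invented labels and scores, of what “tools of probabilistic classification” means in practice: the system does not “see” a painting, it converts raw model scores into a probability distribution over the labels it was trained on.

```python
import math

def softmax(scores):
    """Turn raw model scores into a probability distribution over labels."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical labels and raw scores a vision model might assign to an image.
labels = ["portrait", "landscape", "still life", "dog"]
raw_scores = [4.1, 1.3, 0.2, -0.5]

for label, p in zip(labels, softmax(raw_scores)):
    print(f"{label:>10}: {p:.1%}")
# The output is not an act of seeing: it is the share of probability mass
# the model assigns to each label, given the data it was trained on.
```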

We now find ourselves at what is probably the moment of the first cultural reception of these tools. From their development in research fields, and the applications already derived from it, we are moving on to their presence in public discourse. It is in this situation and context, in which we do not fully know the breadth and characteristics of these technologies (meaning fears are more abstract and diffuse and, thus, more present and powerful), that it is especially important to understand what we are talking about, to appropriate the tools and to intervene in the discourses. Before their possibilities are restricted and solidified until they seem indisputable, it is necessary to experiment with them and reflect on them, taking advantage of the fact that we can still easily perceive them as in creation, malleable and open.

In our projects The Bad Pupil. Critical pedagogy for artificial intelligences and Latent Spaces. Machinic Imaginations we have tried to approach these tools and their imaginary. In the statement of intentions of the former, we expressed our desire, in the face of the regulatory context and the metaphor of machine learning, to defend the bad pupil as one who escapes the norm. And also how, faced with an artificial intelligence that seeks to replicate the human on inhuman scales, it is necessary to defend and construct a non-mimetic one that produces unexpected relations and images.

Fragment of De zeven werken van barmhartigheid, Meester van Alkmaar, 1504 (Rijksmuseum, Amsterdam) analysed with YOLO9000 | The Bad Pupil – Estampa
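A minimal sketch of the kind of object-detection pass behind this caption, assuming the ultralytics package and a locally saved image file (the filename below is hypothetical); a small modern pretrained YOLO model stands in for the YOLO9000 network used in The Bad Pupil, so the labels and scores it returns will differ.

```python
from ultralytics import YOLO

# Small pretrained COCO model, downloaded on first use; not the YOLO9000
# network used by the project, only a readily available stand-in.
model = YOLO("yolov8n.pt")

# Hypothetical path to a scan of the artwork fragment.
results = model("de_zeven_werken_fragment.jpg")

for result in results:
    for box in result.boxes:
        label = result.names[int(box.cls)]
        confidence = float(box.conf)
        print(f"{label}: {confidence:.2f}")
```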

Both projects are also attempts to appropriate these tools, which means, first of all, escaping industrial barriers and their standards. In this field in which mass data are an asset within reach of big companies, employing quantitatively poor datasets and non-industrial computing capacities is not just a need but a demand….(More)”.

The Bad Pupil

Report by Roberto Mangabeira Unger, Isaac Stanley, Madeleine Gabriel, and Geoff Mulgan: “If economic eras are defined by their most advanced form of production, then we live in a knowledge economy – one where knowledge plays a decisive role in the organisation of production, distribution and consumption.

The era of Fordist mass production that preceded it transformed almost every part of the economy. But the knowledge economy hasn’t spread in the same way. Only some people and places are reaping the benefits.

This is a big problem: it contributes to inequality, stagnation and political alienation. And traditional policy solutions are not sufficient to tackle it. We can’t expect benefits simply to trickle down to the rest of the population, and redistribution alone will not solve the inequalities we are facing.

What’s the alternative? Nesta has been working with Roberto Mangabeira Unger to convene discussions with politicians, researchers, and activists from member countries of the Organisation for Economic Co-operation and Development, to explore policy options for an inclusive knowledge economy. This report presents the results of that collaboration.

We argue that an inclusive knowledge economy requires action to democratise the economy – widening access to capital and productive opportunity, transforming models of ownership, addressing new concentrations of power, and democratising the direction of innovation.

It demands that we establish a social inheritance by reforming education and social security.

And it requires us to create a high-energy democracy, promoting experimental government, and independent and empowered civil society.

Recommendations

This is a broad-ranging agenda. In practice, it focuses on:

  • SMEs and their capacity and skills – greatly accelerating the adoption of new methods and technologies at every level of the economy, including new clean technologies that reduce carbon emissions
  • Transforming industrial policy to cope with the new concentrations of power and to prevent monopoly and predatory behaviours
  • Transforming and disaggregating property rights so that more people can have a stake in productive resources
  • Reforming education to prepare the next generation for the labour market of the future, not the past – cultivating the mindsets, skills and cultures relevant to future jobs
  • Reforming social policy to respond to new patterns of work and need – creating more flexible systems that can cope with rapid change in jobs and skills, with a greater emphasis on reskilling
  • Reforming government and democracy to achieve new levels of participation, agility, experimentation and effectiveness…(More)”
Imagination unleashed: Democratising the knowledge economy

Study by Michael L. Barnett et al. in JAMA: “Is a collective intelligence approach of pooling multiple clinician and medical student diagnoses associated with improvement in diagnostic accuracy in online, structured clinical cases?

Findings  This cross-sectional study analyzing data from the Human Diagnosis Project found that, across a broad range of medical cases and common presenting symptoms, independent differential diagnoses of multiple physicians combined into a weighted list significantly outperformed diagnoses of individual physicians with groups as small as 2, and accuracy increased with larger groups up to 9 physicians. Groups of nonspecialists also significantly outperformed individual specialists solving cases matched to the individual specialist’s specialty….

Main Outcomes and Measures  The primary outcome was diagnostic accuracy, assessed as a correct diagnosis in the top 3 ranked diagnoses for an individual; for groups, the top 3 diagnoses were a collective differential generated using a weighted combination of user diagnoses with a variety of approaches. A version of the McNemar test was used to account for clustering across repeated solvers to compare diagnostic accuracy.
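As an illustration of the pooling step described above, here is a minimal sketch, with a hypothetical case and invented differentials, of one simple rank-weighted way to combine solvers’ ranked lists and check top-3 accuracy; the Human Diagnosis Project study evaluated a variety of weighting approaches, so this is not the study’s exact method.

```python
from collections import defaultdict

def pool_differentials(ranked_lists, weights=(3, 2, 1)):
    """Combine each solver's ranked differential into one weighted collective list.

    One simple rank-weighted scheme; the study tested several approaches.
    """
    scores = defaultdict(float)
    for ranked in ranked_lists:
        for rank, diagnosis in enumerate(ranked[:len(weights)]):
            scores[diagnosis] += weights[rank]
    # Highest-scoring diagnoses first.
    return sorted(scores, key=scores.get, reverse=True)

def correct_in_top3(collective, true_diagnosis):
    """Primary-outcome-style check: is the true diagnosis in the top 3?"""
    return true_diagnosis in collective[:3]

# Hypothetical case with invented differentials from three independent solvers.
solver_differentials = [
    ["pulmonary embolism", "pneumonia", "pericarditis"],
    ["pneumonia", "pulmonary embolism", "bronchitis"],
    ["pulmonary embolism", "pericarditis", "pneumonia"],
]
collective = pool_differentials(solver_differentials)
print(collective[:3])
print(correct_in_top3(collective, "pulmonary embolism"))
```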

Conclusions and Relevance  A collective intelligence approach was associated with higher diagnostic accuracy compared with individuals, including individual specialists whose expertise matched the case diagnosis, across a range of medical cases. Given the few proven strategies to address misdiagnosis, this technique merits further study in clinical settings….(More)”.

Comparative Accuracy of Diagnosis by Collective Intelligence of Multiple Physicians vs Individual Physicians

Department of Commerce: “These are the real stories of the people who recently participated in the Census Bureau initiative called The Opportunity Project—a novel, collaborative effort between government agencies, technology companies, and nongovernment organizations to translate government open data into user-friendly tools that solve real world problems for families, communities, and businesses nationwide. On March 1, they came together to share their projects at The Opportunity Project’s Demo Day. Projects like theirs help veterans, aspiring technologists, and all Americans connect with career and educational opportunities, as Bryan and Olivia did.

One barrier for many American students and workers is the lack of clear data to help match them with educational opportunities and jobs. Students want information on the best courses that lead to high-paying and high-demand jobs. Job seekers want to find the jobs that best match their skills, or where to find new skills that open up career development opportunities. Despite the increasing availability of big data and the long-standing, highly regarded federal statistical system, there remain significant data gaps about basic labor market questions.

  • What is the payoff of a bachelor’s degree versus an apprenticeship, 2-year degree, industry certification, or other credential?
  • What are the jobs of the future?  Which jobs of today also will be the jobs of the future? What skills and experience do companies value most?

The Opportunity Project brings government, communities, and companies like IBM, the veteran-led Shift.org, and Nepris together to create tools to answer simple questions related to education, employment, health, transportation, housing, and many other matters that are critical to helping Americans advance in their lives and careers….(More)”.

New Data Tools Connect American Workers to Education and Job Opportunities

Press Release: “Nearly half of Canadian consumers would be willing to share significant personal information, such as location data and lifestyle information, with their bank and insurer in exchange for lower pricing on products and services, according to a new report from Accenture (NYSE: ACN).

Consumers willing to share personal data in select scenarios. (CNW Group/Accenture)

Accenture’s global Financial Services Consumer Study, based on a survey of 47,000 consumers in 28 countries, including 2,000 Canadians, found that more than half of consumers would share that data for benefits including more-rapid loan approvals, discounts on gym memberships and personalized offers based on current location.

At the same time, however, Canadian consumers believe that privacy is paramount, with nearly three quarters (72 per cent) saying they are very cautious about the privacy of their personal data. In fact, data security breaches were the second-biggest concern for consumers, behind only increasing costs, when asked what would make them leave their bank or insurer.

“Canadian consumers are willing to share their personal data in instances where it makes their lives easier but remain cautious of exactly how their information is being used,” said Robert Vokes, managing director of financial services at Accenture in Canada. “With this in mind, banks and insurers need to deliver hyper-relevant and highly convenient experiences in order to remain relevant, retain trust and win customer loyalty in a digital economy.”

Consumers globally showed strong support for personalized insurance premiums, with 64 per cent interested in receiving adjusted car insurance premiums based on safe driving and 52 per cent in exchange for life insurance premiums tied to a healthy lifestyle. Four in five consumers (79 per cent) would provide personal data, including income, location and lifestyle habits, to their insurer if they believe it would help reduce the possibility of injury or loss.

In banking, 81 per cent of consumers would be willing to share income, location and lifestyle habit data for rapid loan approval, and 76 per cent would do so to receive personalized offers based on their location, such as discounts from a retailer. Approximately two-fifths (42 per cent) of Canadian consumers specifically want their bank to provide updates on how much money they have based on spending that month, and 46 per cent want savings tips based on their spending habits.

Appetite for data sharing differs around the world

Appetite for sharing significant personal data with financial firms was highest in China, with 67 per cent of consumers there willing to share more data for personalized services. Half (50 per cent) of consumers in the U.S. said they were willing to share more data for personalized services, and in Europe — where the General Data Protection Regulation took effect in May — consumers were more skeptical. For instance, only 40 per cent of consumers in both the U.K. and Germany said they would be willing to share more data with banks and insurers in return for personalized services…(More)”.

Nearly Half of Canadian Consumers Willing to Share Significant Personal Data with Banks and Insurers in Exchange for Lower Pricing, Accenture Study Finds

Alex Pasternack in Fast Company: “In the face of all the data abuse, many of us have, quite reasonably, thrown up our hands. But privacy didn’t die. It’s just been beaten up, sold, obscured, diffused unevenly across society. What privacy is and why it matters increasingly depends upon who you are, your age, your income, gender, ethnicity, where you’re from, and where you live. To borrow William Gibson’s famous quote about the future and its unevenness and inequalities, privacy is alive—it’s just not evenly distributed. And while we don’t all care about it the same way—we’re even divided on what exactly privacy is—its harms are still real. Even when our own privacy isn’t violated, privacy violations can still hurt us.

Privacy is personal, from the creepy feeling that our phones are literally listening to the endless parade of data breaches that test our ability to care anymore. It’s the unsettling feeling of giving “consent” without knowing what that means, “agreeing” to contracts we didn’t read with companies we don’t really trust. (Forget about understanding all the details; researchers have shown that most privacy policies surpass the reading level of the average person.)
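The claim about reading level refers to standard readability formulas. Below is a minimal sketch, assuming an invented policy-style sentence and a deliberately crude syllable counter, of how a Flesch-Kincaid grade level can be estimated for a privacy policy’s text.

```python
import re

def count_syllables(word):
    """Very crude syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade level: the U.S. school grade needed to read the text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# Invented policy-style excerpt; real policies run to thousands of words.
policy_excerpt = (
    "We may disclose aggregated, pseudonymized information to affiliates, "
    "service providers, and other third parties for analytics purposes."
)
print(f"Approximate grade level: {flesch_kincaid_grade(policy_excerpt):.1f}")
```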

It’s the data about us that’s harvested, bought, sold, and traded by an obscure army of data brokers without our knowledge, feeding marketers, landlords, employers, immigration officials, insurance companies, debt collectors, as well as stalkers and who knows who else. It’s the body camera or the sports arena or the social network capturing your face for who knows what kind of analysis. Don’t think of personal data as just “data.” As it gets more detailed and more correlated, increasingly, our data is us.

And “privacy” isn’t just privacy. It’s also tied up with security, freedom, social justice, free speech, and free thought. Privacy harms aren’t only personal, but societal. It’s not just the multibillion-dollar industry that aims to nab you and nudge you, but the multibillion-dollar spyware industry that helps governments nab dissidents and send them to prison or worse. It’s the supposedly fair and transparent algorithms that aren’t, turning our personal data into risk scores that can help perpetuate race, class, and gender divides, often without our knowing it.

Privacy is about dark ads bought with dark money and the micro-targeting of voters by overseas propagandists or by political campaigns at home. That kind of influence isn’t just the promise of a shadowy Cambridge Analytica or state-run misinformation campaigns, but also the premise of modern-day digital ad campaigns. (Note that Facebook’s research division later hired one of the researchers behind the Cambridge app.) And as the micro-targeting gets more micro, the tech giants that deal in ads are only getting more macro….(More)”

(This story is part of The Privacy Divide, a series that explores the fault lines and disparities–economic, cultural, philosophical–that have developed around digital privacy and its impact on society.)

Privacy’s not dead. It’s just not evenly distributed

Presentation by Shipi Dhanorkar and Mary Beth Rosson: “The internet connects people who are spatially and temporally separated. One result is new modes of reaching out to, organizing and mobilizing people, including online activism. Internet platforms can be used to mobilize people around specific concerns, short-circuiting structures such as organizational hierarchies or elected officials. These online processes allow consumers and concerned citizens to voice their opinions, often to businesses, other times to civic groups or other authorities. Not surprisingly, this opportunity has encouraged a steady rise in specialized platforms dedicated to online petitioning, e.g., Change.org, Care2 Petitions, MoveOn.org.

These platforms are open to everyone; any individual or group that is affected by a problem or disappointed with the status quo can raise awareness for or against corporate or government policies. Such platforms can empower ordinary citizens to bring about social change by leveraging support from the masses. In this sense, the platforms allow citizens to “crowdsource change”. In this paper, we offer a comparative analysis of the affordances of four online petitioning platforms, and use this analysis to propose ideas for design enhancements to online petitioning platforms….(More)”.

Crowdsourcing Change: A Novel Vantage Point for Investigating Online Petitioning Platforms

Paper by David Atkins: “Interest in behavioral economics has grown steadily within health care. Policy makers, payers, and providers now recognize that the decisions of patients and of their doctors frequently deviate from the strictly “rational” choices that classical economic theory would predict. For example, patients rarely adhere to the medication regimens or health behaviors that would optimize their health outcomes, and clinicians often make decisions that conflict with evidence-based recommendations or even the practices they profess to endorse. The groundbreaking work of psychologist Daniel Kahneman and his collaborator Amos Tversky drew attention to this field, which was accelerated by Kahneman’s 2002 Nobel Prize in economics and his popular 2011 book “Thinking, Fast and Slow,” which reached a much broader audience.

Behavioral economics examines cognitive, psychological, and cultural factors that may influence how we make decisions, resulting in behavior that another Nobel laureate, economist Richard Thaler, has termed “predictably irrational.” Principles from behavioral economics have been adapted to health care, including the role of heuristics (rules of thumb), the importance of framing, and the effects of specific cognitive biases (for example, overconfidence and status quo bias).

These principles have been incorporated into interventions that seek to use these insights to change health-related behaviors—these include nudges, where systems are redesigned to make the preferred choice the default choice (for example, making generic versions the default in electronic prescribing); incentive programs that reward patients for taking their medications on schedule or getting preventive interventions like immunizations; and specific interventions aimed at how clinicians respond to information or make decisions….(More)”.
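As a concrete illustration of the default-choice nudge mentioned above, here is a minimal sketch of a hypothetical e-prescribing rule (the drug table and function names are invented for the example): the generic equivalent is pre-selected, and the prescriber must actively override it to order the brand-name drug.

```python
# Hypothetical generic-equivalent table; a real system would query a formulary.
GENERIC_EQUIVALENTS = {"Lipitor": "atorvastatin", "Prozac": "fluoxetine"}

def build_prescription(drug_name, prescriber_override=False):
    """Return the order to dispense, with the generic pre-selected by default."""
    generic = GENERIC_EQUIVALENTS.get(drug_name)
    if generic and not prescriber_override:
        return {"dispense": generic, "substitution": "default (generic)"}
    return {"dispense": drug_name, "substitution": "prescriber override"}

print(build_prescription("Lipitor"))
print(build_prescription("Lipitor", prescriber_override=True))
```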

So Many Nudges, So Little Time: Can Cost-effectiveness Tell Us When It Is Worthwhile to Try to Change Provider Behavior?

Paper by Lina Khan and David Pozen: “The concept of “information fiduciaries” has surged to the forefront of debates on online platform regulation. Developed by Professor Jack Balkin, the concept is meant to rebalance the relationship between ordinary individuals and the digital companies that accumulate, analyze, and sell their personal data for profit. Just as the law imposes special duties of care, confidentiality, and loyalty on doctors, lawyers, and accountants vis-à-vis their patients and clients, Balkin argues, so too should it impose special duties on corporations such as Facebook, Google, and Twitter vis-à-vis their end users. Over the past several years, this argument has garnered remarkably broad support and essentially zero critical pushback.

This Essay seeks to disrupt the emerging consensus by identifying a number of lurking tensions and ambiguities in the theory of information fiduciaries, as well as a number of reasons to doubt the theory’s capacity to resolve them satisfactorily. Although we agree with Balkin that the harms stemming from dominant online platforms call for legal intervention, we question whether the concept of information fiduciaries is an adequate or apt response to the problems of information insecurity that he stresses, much less to more fundamental problems associated with outsized market share and business models built on pervasive surveillance. We also call attention to the potential costs of adopting an information-fiduciary framework—a framework that, we fear, invites an enervating complacency toward online platforms’ structural power and a premature abandonment of more robust visions of public regulation….(More)”.

A Skeptical View of Information Fiduciaries

Book by Carl Benedikt Frey: “From the Industrial Revolution to the age of artificial intelligence, The Technology Trap takes a sweeping look at the history of technological progress and how it has radically shifted the distribution of economic and political power among society’s members. As Carl Benedikt Frey shows, the Industrial Revolution created unprecedented wealth and prosperity over the long run, but the immediate consequences of mechanization were devastating for large swaths of the population. Middle-income jobs withered, wages stagnated, the labor share of income fell, profits surged, and economic inequality skyrocketed. These trends, Frey documents, broadly mirror those in our current age of automation, which began with the Computer Revolution.

Just as the Industrial Revolution eventually brought about extraordinary benefits for society, artificial intelligence systems have the potential to do the same. But Frey argues that this depends on how the short term is managed. In the nineteenth century, workers violently expressed their concerns over machines taking their jobs. The Luddite uprisings joined a long wave of machinery riots that swept across Europe and China. Today’s despairing middle class has not resorted to physical force, but their frustration has led to rising populism and the increasing fragmentation of society. As middle-class jobs continue to come under pressure, there’s no assurance that positive attitudes to technology will persist.

The Industrial Revolution was a defining moment in history, but few grasped its enormous consequences at the time. The Technology Trap demonstrates that in the midst of another technological revolution, the lessons of the past can help us to more effectively face the present….(More)”.

The Technology Trap: Capital, Labor, and Power in the Age of Automation
