Potential competition impacts from the data asymmetry between Big Tech firms and firms in financial services


Report by the UK Financial Conduct Authority: “Big Tech firms in the UK and around the world have been, and continue to be, under active scrutiny by competition and regulatory authorities. This is because some of these large technology firms may have both the ability and the incentive to shape digital markets by protecting existing market power and extending it into new markets.
Concentration in some digital markets, and Big Tech firms’ key role, has been widely discussed, including in our DP22/05. This reflects both the characteristics of digital markets and the characteristics and behaviours of Big Tech firms themselves. Although Big Tech firms have different business models, common characteristics include their global scale and access to a large installed user base, rich data about their users, advanced data analytics and technology, influence over decision making and defaults, ecosystems of complementary products and strategic behaviours, including acquisition strategies.
Through our work, we aim to mitigate the risk of competition in retail financial markets evolving in a way that results in some Big Tech firms gaining entrenched market power, as seen in other sectors and jurisdictions, while enabling the potential competition benefits that come from Big Tech firms providing challenge to incumbent financial services firms…(More)”.

How do you accidentally run for President of Iceland?


Blog by Anna Andersen: “Content design can have real consequences — for democracy, even…

To run for President of Iceland, you need to be an Icelandic citizen, at least 35 years old, and have 1,500 endorsements.

For the first time in Icelandic history, this endorsement process is digital. Instead of collecting all their signatures on paper the old-fashioned way, candidates can now send people to https://island.is/forsetaframbod to submit their endorsement.

This change has, also for the first time in Icelandic history, given the nation a clear window into who is trying to run — and it’s a remarkably large number. To date, 82 people are collecting endorsements, including a comedian, a model, the world’s first double-arm transplant recipient, and my aunt Helga.

Many of these people are seriously vying for president (yep, my aunt Helga), some of them have undoubtedly signed up as a joke (nope, not the comedian), and at least 11 of them accidentally registered and had no idea that they were collecting endorsements for their candidacy.

“I’m definitely not about to run for president, this was just an accident,” one person told a reporter after having a good laugh about it.

“That’s hilarious!” another person said, thanking the reporter for letting them know that they were in the running.

As a content designer, I was intrigued. How could so many people accidentally start a campaign for President of Iceland?

It turns out, the answer largely has to do with content design. Presidential hopefuls were sending people a link to a page where they could be endorsed, but instead of endorsing the candidate, some people accidentally registered to be a candidate…(More)”.

Russia Clones Wikipedia, Censors It, Bans Original


Article by Jules Roscoe: “Russia has replaced Wikipedia with a state-sponsored encyclopedia that is a clone of the original Russian Wikipedia but which conveniently has been edited to omit things that could cast the Russian government in poor light. Real Russian Wikipedia editors used to refer to the real Wikipedia as Ruwiki; the new one is called Ruviki, has “ruwiki” in its URL, and has copied all Russian-language Wikipedia articles and strictly edited them to comply with Russian laws.

The new articles exclude mentions of “foreign agents,” the Russian government’s designation for any person or entity which expresses opinions about the government and is supported, financially or otherwise, by an outside nation. Prominent “foreign agents” have included a foundation created by Alexei Navalny, a famed Russian opposition leader who died in prison in February, and Memorial, an organization dedicated to preserving the memory of Soviet terror victims, which was liquidated in 2022. The news was first reported by Novaya Gazeta, an independent Russian news outlet that relocated to Latvia after Russia invaded Ukraine in 2022. It was also picked up by Signpost, a publication that follows Wikimedia goings-on.

Both Ruviki articles about these agents include disclaimers about their status as foreign agents. Navalny’s article states he is a “video blogger” known for “involvement in extremist activity or terrorism.” It is worth mentioning that his wife, Yulia Navalnaya, firmly believes he was killed. …(More)”.

The Crime Data Handbook


Book edited by Laura Huey and David Buil-Gil: “Crime research has grown substantially over the past decade, with a rise in evidence-informed approaches to criminal justice, statistics-driven decision-making and predictive analytics. The fuel that has driven this growth is data – and one of its most pressing challenges is the lack of research on the use and interpretation of data sources.

This accessible, engaging book closes that gap for researchers, practitioners and students. International researchers and crime analysts discuss the strengths, perils and opportunities of the data sources and tools now available and their best use in informing sound public policy and criminal justice practice…(More)”.

Debugging Tech Journalism


Essay by Timothy B. Lee: “A huge proportion of tech journalism is characterized by scandals, sensationalism, and shoddy research. Can we fix it?

In November, a few days after Sam Altman was fired — and then rehired — as CEO of OpenAI, Reuters reported on a letter that may have played a role in Altman’s ouster. Several staffers reportedly wrote to the board of directors warning about “a powerful artificial intelligence discovery that they said could threaten humanity.”

The discovery: an AI system called Q* that can solve grade-school math problems.

“Researchers consider math to be a frontier of generative AI development,” the Reuters journalists wrote. Large language models are “good at writing and language translation,” but “conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence.”

This was a bit of a head-scratcher. Computers have been able to perform arithmetic at superhuman levels for decades. The Q* project was reportedly focused on word problems, which have historically been harder than arithmetic for computers to solve. Still, it’s not obvious that solving them would unlock human-level intelligence.

The Reuters article left readers with a vague impression that Q* could be a huge breakthrough in AI — one that might even “threaten humanity.” But it didn’t provide readers with the context to understand what Q* actually was — or to evaluate whether feverish speculation about it was justified.

For example, the Reuters article didn’t mention research OpenAI published last May describing a technique for solving math problems by breaking them down into small steps. In a December article, I dug into this and other recent research to help illuminate what OpenAI is likely working on: a framework that would enable AI systems to search through a large space of possible solutions to a problem…(More)”.

AI chatbots refuse to produce ‘controversial’ output − why that’s a free speech problem


Article by Jordi Calvet-Bademunt and Jacob Mchangama: “Google recently made headlines globally because its chatbot Gemini generated images of people of color instead of white people in historical settings that featured white people. Adobe Firefly’s image creation tool saw similar issues. This led some commentators to complain that AI had gone “woke.” Others suggested these issues resulted from faulty efforts to fight AI bias and better serve a global audience.

The discussions over AI’s political leanings and efforts to fight bias are important. Still, the conversation on AI ignores another crucial issue: What is the AI industry’s approach to free speech, and does it embrace international free speech standards?…In a recent report, we found that generative AI has important shortcomings regarding freedom of expression and access to information.

Generative AI is a type of AI that creates content, like text or images, based on the data it has been trained with. In particular, we found that the use policies of major chatbots do not meet United Nations standards. In practice, this means that AI chatbots often censor output when dealing with issues the companies deem controversial. Without a solid culture of free speech, the companies producing generative AI tools are likely to continue to face backlash in these increasingly polarized times…(More)”.

What is ‘lived experience’?


Article by Patrick J Casey: “Everywhere you turn, there is talk of lived experience. But there is little consensus about what the phrase ‘lived experience’ means, where it came from, and whether it has any value. Although long used by academics, it has become ubiquitous, leaping out of the ivory tower and showing up in activism, government, consulting, as well as popular culture. The Lived Experience Leaders Movement explains that those who have lived experiences have ‘[d]irect, first-hand experience, past or present, of a social issue(s) and/or injustice(s)’. A recent brief from the US Department of Health and Human Services suggests that those who have lived experience have ‘valuable and unique expertise’ that should be consulted in policy work, since engaging those with ‘knowledge based on [their] perspective, personal identities, and history’ can ‘help break down power dynamics’ and advance equity. A search of Twitter reveals a constant stream of use, from assertions like ‘Your research doesn’t override my lived experience,’ to ‘I’m pretty sure you’re not allowed to question someone’s lived experience.’

A recurring theme is a connection between lived experience and identity. A recent nominee for the US Secretary of Labor, Julie Su, is lauded as someone who will ‘bring her lived experience as a daughter of immigrants, a woman of color, and an Asian American to the role’. The Human Rights Campaign asserts that ‘[l]aws and legislation must reflect the lived experiences of LGBTQ people’. An editorial in Nature Mental Health notes that incorporation of ‘people with lived experience’ has ‘taken on the status of a movement’ in the field.

Carried a step further, the notion of lived experience is bound up with what is often called identity politics, as when one claims to be speaking from the standpoint of an identity group – ‘in my lived experience as a…’ or, simply, ‘speaking as a…’ Here, lived experience is often invoked to establish authority and prompt deference from others since, purportedly, only members of a shared identity know what it’s like to have certain kinds of experience or to be a member of that group. Outsiders sense that they shouldn’t criticise what is said because, grounded in lived experience, ‘people’s spoken truths are, in and of themselves, truths.’ Criticism of lived experience might be taken to invalidate or dehumanise others or make them feel unsafe.

So, what is lived experience? Where did it come from? And what does it have to do with identity politics?…(More)”.

On the Manipulation of Information by Governments


Paper by Ariel Karlinsky and Moses Shayo: “Governmental information manipulation has been hard to measure and study systematically. We hand-collect data from official and unofficial sources in 134 countries to estimate misreporting of Covid mortality during 2020-21. We find that between 45%–55% of governments misreported the number of deaths. The lion’s share of misreporting cannot be attributed to a country’s capacity to accurately diagnose and report deaths. Contrary to some theoretical expectations, there is little evidence of governments exaggerating the severity of the pandemic. Misreporting is higher where governments face few social and institutional constraints, in countries holding elections, and in countries with a communist legacy…(More)”

The economic research policymakers actually need


Blog by Jed Kolko: “…The structure of academia just isn’t set up to produce the kind of research many policymakers need. Instead, top academic journal editors and tenure committees reward research that pushes the boundaries of the discipline and makes new theoretical or empirical contributions. And most academic papers presume familiarity with the relevant academic literature, making it difficult for anyone outside of academia to make the best possible use of them.

The most useful research often came instead from regional Federal Reserve banks, non-partisan think-tanks, the corporate sector, and from academics who had the support, freedom, or job security to prioritize policy relevance. It generally fell into three categories:

  1. New measures of the economy
  2. Broad literature reviews
  3. Analyses that directly quantify or simulate policy decisions

If you’re an economic researcher and you want to do work that is actually helpful for policymakers — and increases economists’ influence in government — aim for one of those three buckets.

The pandemic and its aftermath brought an urgent need for data at higher frequency, with greater geographic and sectoral detail, and about ways the economy suddenly changed. Some of the most useful research contributions during that period were new data and measures of the economy: they were valuable as ingredients rather than as recipes or finished meals…(More)”.

The Formalization of Social Precarities


Anthology edited by Murali Shanmugavelan and Aiha Nguyen: “…explores platformization from the point of view of precarious gig workers in the Majority World. In countries like Bangladesh, Brazil, and India — which reinforce social hierarchies via gender, race, and caste — precarious workers are often the most marginalized members of society. Labor platforms made familiar promises to workers in these countries: work would be democratized, and people would have the opportunity to be their own boss. Yet even as platforms have upended the legal relationship between worker and employer, they have leaned into social structures to keep workers precarious — and in fact formalized those social precarities through surveillance and data collection…(More)”.