Suspense and surprise in the book of technology: Understanding innovation dynamics


Paper by Oh-Hyun Kwon, Jisung Yoon, Lav R. Varshney, Woo-Sung Jung, Hyejin Youn: “We envision future technologies through science fiction, strategic planning, or academic research. Yet our expectations do not always match what actually unfolds, much like navigating a story where some events align with expectations while others surprise us. This gap reflects the inherent uncertainty of innovation: how technologies emerge and evolve in unpredictable ways. Here, we elaborate on this inherent uncertainty. We define suspense as capturing accumulated uncertainty, describing events anticipated before their realization, while surprise represents a dramatic shift in understanding when an event occurs unexpectedly. We identify these connections in U.S. patents and show that suspenseful innovations tend to integrate more smoothly into society, achieving higher citations and market value. In contrast, surprising innovations, though often disruptive and groundbreaking, face challenges in adoption due to their extreme novelty. We further show that these categories allow us to identify distinct stages of technology life cycles, suggesting a way to identify the systematic trajectory of technologies and anticipate their future paths…(More)”.

Big data for decision-making in public transport management: A comparison of different data sources


Paper by Valeria Maria Urbano, Marika Arena, and Giovanni Azzone: “The conventional data used to support public transport management have inherent constraints related to scalability, cost, and the potential to capture spatial and temporal variability. These limitations underscore the importance of exploring innovative data sources to complement more traditional ones.

For public transport operators, who are tasked with making pivotal decisions spanning planning, operation, and performance measurement, innovative data sources are a frontier that is still largely unexplored. To fill this gap, this study first establishes a framework for evaluating innovative data sources, highlighting the specific characteristics that data should have to support decision-making in the context of transportation management. Second, a comparative analysis is conducted, using empirical data collected from primary public transport operators in the Lombardy region, with the aim of understanding whether and to what extent different data sources meet the above requirements.

The findings of this study support transport operators in selecting data sources aligned with different decision-making domains, highlighting related benefits and challenges. This underscores the importance of integrating different data sources to exploit their complementarities…(More)”.

Developing a Framework for Collective Data Rights


Report by Jeni Tennison: “Are collective data rights really necessary? Or, do people and communities already have sufficient rights to address harms through equality, public administration or consumer law? Might collective data rights even be harmful by undermining individual data rights or creating unjust collectivities? If we did have collective data rights, what should they look like? And how could they be introduced into legislation?

Data protection law and policy are founded on the notion of individual notice and consent, originating from the handling of personal data gathered for medical and scientific research. However, recent work on data governance has highlighted shortcomings with the notice-and-consent approach, especially in an age of big data and artificial intelligence. This special report considers the need for collective data rights by examining legal remedies currently available in the United Kingdom in three scenarios where the people affected by algorithmic decision-making are not data subjects and therefore do not have individual data protection rights…(More)”.

You Be the Judge: How Taobao Crowdsourced Its Courts


Excerpt from Lizhi Liu’s new book, “From Click to Boom”: “When disputes occur, Taobao encourages buyers and sellers to negotiate with each other first. If the feuding parties cannot reach an agreement and do not want to go to court, they can use one of Taobao’s two judicial channels: asking a Taobao employee to adjudicate or using an online jury panel to arbitrate. This section discusses the second channel, a unique Chinese institutional innovation.

Alibaba’s Public Jury was established in 2012 to crowdsource justice. It uses a Western-style jury-voting mechanism to solve online disputes and controversial issues. These jurors are termed “public assessors” by Taobao. Interestingly, the name “public assessor” was drawn from the Chinese talent show “Super Girl” (similar to “American Idol”), which, after the authority shut down its mass voting system, transitioned to using a small panel of audience representatives (or “public assessors”) to vote for the show’s winner. The public jury was widely used by the main Taobao site by 2020 and is now frequently used by Xianyu, Taobao’s used-goods market.

Why did Taobao introduce the jury system? Certainly, as Taobao expanded, the volume of online disputes surged, posing challenges for the platform to handle all disputes by itself. However, according to a former platform employee responsible for designing this institution, the primary motivation was not the caseload. Instead, it was propelled by the complexity of online disputes that proved challenging for the platform to resolve alone. Consequently, they opted to involve users in adjudicating these cases to ensure a fairer process rather than solely relying on platform intervention.

To form a jury, Taobao randomly chooses each panel of 13 jurors from 4 million volunteer candidates; each juror may participate in up to 40 cases per day. Candidates need to be experienced Taobao users (i.e., registered for more than a year) with a good online reputation (i.e., a sufficiently high credit rating, as discussed below). This requirement is high enough to prevent most dishonest traders from manipulating votes, but low enough to be inclusive and keep the juror pool large. These jurors are unpaid yet motivated to participate. They gain experience points that can translate into different virtual titles or that can be donated to charity by Taobao as real money…(More)”
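The selection mechanism the excerpt describes (a random panel of 13 drawn from a large volunteer pool, with eligibility gated on account age, reputation, and a daily case cap) can be sketched as follows. The field names and the credit threshold are illustrative assumptions, not Taobao's actual implementation:

```python
import random
from dataclasses import dataclass

@dataclass
class Volunteer:
    user_id: str
    years_registered: float
    credit_rating: int   # hypothetical reputation score
    cases_today: int = 0

PANEL_SIZE = 13
DAILY_CASE_LIMIT = 40
MIN_YEARS = 1.0        # registered for more than a year
MIN_CREDIT = 700       # assumed threshold for "sufficiently high" rating

def eligible(v: Volunteer) -> bool:
    # Experienced user, good reputation, and under the daily case cap.
    return (v.years_registered > MIN_YEARS
            and v.credit_rating >= MIN_CREDIT
            and v.cases_today < DAILY_CASE_LIMIT)

def draw_panel(pool: list[Volunteer], rng: random.Random) -> list[Volunteer]:
    # Uniform random sample of 13 jurors from the eligible candidates.
    candidates = [v for v in pool if eligible(v)]
    panel = rng.sample(candidates, PANEL_SIZE)
    for v in panel:
        v.cases_today += 1
    return panel
```

The random draw from a large pool is what makes vote manipulation expensive: a dishonest trader would need to control a meaningful fraction of millions of eligible accounts to sway a 13-member panel.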

Un-Plateauing Corruption Research? Perhaps less necessary, but more exciting than one might think


Article by Dieter Zinnbauer: “There is a sense in the anti-corruption research community that we may have reached some plateau (or less politely, hit a wall). This article argues – at least partly – against this claim.

We may have reached a plateau with regard to some recurring (staid?) scholarly and policy debates that resurface with eerie regularity, tend to suck all oxygen out of the room, yet remain essentially unsettled and irresolvable. Questions aimed at arriving at closure on what constitutes corruption, passing authoritative judgements on what works and what does not, and rather grand pronouncements on whether progress has or has not been made all fall into this category.

At the same time, there is exciting work, often in unexpected places outside the inner ward of the anti-corruption castle, contributing new approaches and fresh-ish insights, and there are promising leads for exciting research on the horizon. Such areas include the underappreciated idiosyncrasies of corruption in the form of inaction rather than action, the use of satellites and remote sensing techniques to better understand and measure corruption, the overlooked role of short-sellers in tackling complex forms of corporate corruption, and the growing phenomenon of integrity capture, the anti-corruption apparatus co-opted for sinister, corrupt purposes.

These are just four examples of the colourful opportunity tapestry for (anti)corruption research moving forward, not in the form of a grand unified project or overarching new idea, but as little stabs of potentiality here, there, and somewhere else surprisingly unbeknownst…(More)”

The Case for Local and Regional Public Engagement in Governing Artificial Intelligence


Article by Stefaan Verhulst and Claudia Chwalisz: “As the Paris AI Action Summit approaches, the world’s attention will once again turn to the urgent questions surrounding how we govern artificial intelligence responsibly. Discussions will inevitably include calls for global coordination and participation, exemplified by several proposals for a Global Citizens’ Assembly on AI. While such initiatives aim to foster inclusivity, the reality is that meaningful deliberation and actionable outcomes often emerge most effectively at the local and regional levels.

Building on earlier reflections in “AI Globalism and AI Localism,” we argue that to govern AI for public benefit, we must prioritize building public engagement capacity closer to the communities where AI systems are deployed. Localized engagement not only ensures relevance to specific cultural, social, and economic contexts but also equips communities with the agency to shape both policy and product development in ways that reflect their needs and values.

While a Global Citizens’ Assembly sounds like a great idea on the surface, there is no public authority with teeth or enforcement mechanisms at that level of governance. The Paris Summit represents an opportunity to rethink existing AI governance frameworks, reorienting them toward an approach that is grounded in lived, local realities and mutually respectful processes of co-creation. Toward that end, we elaborate below on proposals for: local and regional AI assemblies; AI citizens’ assemblies for EU policy; capacity-building programs; and localized data governance models…(More)”.

Reimagining data for Open Source AI: A call to action


Report by Open Source Initiative: “Artificial intelligence (AI) is changing the world at a remarkable pace, with Open Source AI playing a pivotal role in shaping its trajectory. Yet, as AI advances, a fundamental challenge emerges: How do we create a data ecosystem that is not only robust but also equitable and sustainable?

The Open Source Initiative (OSI) and Open Future have taken a significant step toward addressing this challenge by releasing a white paper: “Data Governance in Open Source AI: Enabling Responsible and Systematic Access.” This document is the culmination of a global co-design process, enriched by insights from a vibrant two-day workshop held in Paris in October 2024…

The white paper offers a blueprint for a data ecosystem rooted in fairness, inclusivity and sustainability. It calls for two transformative shifts:

  1. From Open Data to Data Commons: Moving beyond the notion of unrestricted data to a model that balances openness with the rights and needs of all stakeholders.
  2. Broadening the stakeholder universe: Creating collaborative frameworks that unite communities, stewards and creators in equitable data-sharing practices.

To bring these shifts to life, the white paper delves into six critical focus areas:

  • Data preparation
  • Preference signaling and licensing
  • Data stewards and custodians
  • Environmental sustainability
  • Reciprocity and compensation
  • Policy interventions…(More)”

To Bot or Not to Bot? How AI Companions Are Reshaping Human Services and Connection


Essay by Julia Freeland Fisher: “Last year, a Harvard study on chatbots drew a startling conclusion: AI companions significantly reduce loneliness. The researchers found that “synthetic conversation partners,” or bots engineered to be caring and friendly, curbed loneliness on par with interacting with a fellow human. The study was silent, however, on the irony behind these findings: synthetic interaction is not a real, lasting connection. Should the price of curing loneliness really be more isolation?

Missing that subtext is emblematic of our times. Near-term upsides often overshadow long-term consequences. Even with important lessons learned about the harms of social media and big tech over the past two decades, today, optimism about AI’s potential is soaring, at least in some circles.

Bots present an especially tempting fix to long-standing capacity constraints across education, health care, and other social services. AI coaches, tutors, navigators, caseworkers, and assistants could overcome the very real challenges—like cost, recruitment, training, and retention—that have made access to vital forms of high-quality human support perennially hard to scale.

But scaling bots that simulate human support presents new risks. What happens if, across a wide range of “human” services, we trade access to more services for fewer human connections?…(More)”.

Wikenigma – an Encyclopedia of Unknowns


About: “Wikenigma is a unique wiki-based resource specifically dedicated to documenting fundamental gaps in human knowledge.

Listing scientific and academic questions to which no-one, anywhere, has yet been able to provide a definitive answer. [ 1141 so far ]

That’s to say, a compendium of so-called ‘Known Unknowns’.

The idea is to inspire and promote interest in scientific and academic research by highlighting opportunities to investigate problems which no-one has yet been able to solve.

You can start browsing the content via the main menu on the left (or in the ‘Main Menu’ section if you’re using a small-screen device). Alternatively, the search box (above right) will find any articles with details that match your search terms…(More)”.

Overcoming challenges associated with broad sharing of human genomic data


Paper by Jonathan E. LoTempio Jr & Jonathan D. Moreno: “Since the Human Genome Project, the consensus position in genomics has been that data should be shared widely to achieve the greatest societal benefit. This position relies on imprecise definitions of the concept of ‘broad data sharing’. Accordingly, the implementation of data sharing varies among landmark genomic studies. In this Perspective, we identify definitions of broad that have been used interchangeably, despite their distinct implications. We further offer a framework with clarified concepts for genomic data sharing and probe six examples in genomics that produced public data. Finally, we articulate three challenges. First, we explore the need to reinterpret the limits of general research use data. Second, we consider the governance of public data deposition from extant samples. Third, we ask whether, in light of changing concepts of broad, participants should be encouraged to share their status as participants publicly or not. Each of these challenges is followed with recommendations…(More)”.