The Legal Singularity


Book by Abdi Aidid and Benjamin Alarie: “…argue that the proliferation of artificial intelligence–enabled technology – and specifically the advent of legal prediction – is on the verge of radically reconfiguring the law, our institutions, and our society for the better.

Revealing the ways in which our legal institutions underperform and are expensive to administer, the book highlights the negative social consequences associated with our legal status quo. Given the infirmities of the current state of the law and our legal institutions, the silver lining is that there is ample room for improvement. With concerted action, technology can help us to ameliorate the problems of the law and improve our legal institutions. Inspired in part by the concept of the “technological singularity,” The Legal Singularity presents a future state in which technology facilitates the functional “completeness” of law, where the law is at once extraordinarily more complex in its specification than it is today and yet operationally vastly more knowable, fairer, and clearer for its subjects. Aidid and Alarie describe the changes that will culminate in the legal singularity and explore the implications for the law and its institutions…(More)”.

Data can help decarbonize cities – let us explain


Article by Stephen Lorimer and Andrew Collinge: “The University of Birmingham, Alan Turing Institute and Centre for Net Zero are working together, using Faraday, a tool developed by the Centre, to build a more detailed model of energy flows within the district and between it and its 8,000 neighbouring residents. Faraday is a generative AI model trained on one of the UK’s largest smart meter datasets. The model is helping to unlock a more granular view of energy sources and changing energy usage, providing the basis for modelling future energy consumption and local smart grid management.

The partners are investigating the role that trusted data aggregators can play if they can take raw data and desensitize it to a point where it can be shared without eroding consumer privacy or commercial advantage.
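
As a purely illustrative aside (and not a description of Faraday or the partners' actual pipeline), the sketch below shows one simple form such desensitization can take: household smart-meter readings are rolled up to district-level totals, and any district with too few contributing households is suppressed. The field names and the threshold are assumptions made for the example.

```python
# A minimal sketch of what a trusted data aggregator might do to "desensitize"
# raw smart-meter data before sharing it. Hypothetical schema and threshold;
# not the actual Faraday pipeline.
from collections import defaultdict

K_ANONYMITY_THRESHOLD = 10  # hypothetical minimum households per published figure

def aggregate_readings(readings):
    """readings: iterable of dicts like
    {"household_id": "h123", "district": "D1", "kwh": 0.42}.
    Returns district-level kWh totals, omitting districts with too few households."""
    totals = defaultdict(float)
    households = defaultdict(set)
    for r in readings:
        totals[r["district"]] += r["kwh"]
        households[r["district"]].add(r["household_id"])
    return {
        district: round(total, 2)
        for district, total in totals.items()
        if len(households[district]) >= K_ANONYMITY_THRESHOLD
    }

if __name__ == "__main__":
    sample = [{"household_id": f"h{i}", "district": "D1", "kwh": 0.5} for i in range(12)]
    sample.append({"household_id": "h99", "district": "D2", "kwh": 1.2})  # too few households to publish
    print(aggregate_readings(sample))  # {'D1': 6.0} -- D2 is suppressed
```

Real aggregators would layer further protections on top of this, such as differential privacy or contractual controls, but the thresholded roll-up captures the basic idea of sharing insight without sharing raw readings.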

Data is central to both initiatives, and to all cities seeking a renewable energy transition. But there are issues to address, such as common data standards, governance and data competency frameworks (especially across the built environment supply chain)…

Building the governance, standards and culture that deliver confidence in energy data exchange is essential to maximizing the potential of carbon reduction technologies. This framework will ultimately support efficient supply chains and coordinate market activity. There are lessons from the Open Banking initiative, which provided the framework for traditional financial institutions, fintech and regulators to deliver innovation in financial products and services with carefully shared consumer data.

In the energy domain, data sharing offers numerous advantages. It helps overcome barriers in the product supply chain, from materials to low-carbon technologies (heat pumps, smart thermostats, electric vehicle chargers, etc.). Free and Open-Source Software (FOSS) providers can use data to support installers and property owners.

Data interoperability allows third-party products and services to communicate with any end-user device through open or proprietary Internet of Things gateway platforms such as Tuya or IFTTT. A growing bank of post-installation data on the operation of buildings (such as energy efficiency and air quality) will boost confidence in the future quality of retrofits and make for easier decisions on planning approval and grid connections. Finally, data is increasingly considered key in securing the financing and private sector investment crucial to the net zero effort.

None of the above is easy. Organizational and technical complexity can slow progress, but cities must be at the forefront of efforts to coordinate the energy data ecosystem and make the case for “data for decarbonization.”…(More)”.

Health Data Sharing to Support Better Outcomes: Building a Foundation of Stakeholder Trust


A Special Publication from the National Academy of Medicine: “The effective use of data is foundational to the concept of a learning health system—one that leverages and shares data to learn from every patient experience, and feeds the results back to clinicians, patients and families, and health care executives to transform health, health care, and health equity. More than ever, the American health care system is in a position to harness new technologies and new data sources to improve individual and population health.

Learning health systems are driven by multiple stakeholders—patients, clinicians and clinical teams, health care organizations, academic institutions, government, industry, and payers. Each stakeholder group has its own sources of data, its own priorities, and its own goals and needs with respect to sharing that data. However, in America’s current health system, these stakeholders operate in silos without a clear understanding of the motivations and priorities of other groups. The three stakeholder working groups that served as the authors of this Special Publication identified many cultural, ethical, regulatory, and financial barriers to greater data sharing, linkage, and use. What emerged was the foundational role of trust in achieving the full vision of a learning health system.

This Special Publication outlines a number of potentially valuable policy changes and actions that will help drive toward effective, efficient, and ethical data sharing, including more compelling and widespread communication efforts to improve awareness, understanding, and participation in data sharing. Achieving the vision of a learning health system will require eliminating the artificial boundaries that exist today among patient care, health system improvement, and research. Breaking down these barriers will require an unrelenting commitment across multiple stakeholders toward a shared goal of better, more equitable health.

We can improve together by sharing and using data in ways that produce trust and respect. Patients and families deserve nothing less…(More)”.

Data Governance and Policy in Africa


This open access book edited by Bitange Ndemo, Njuguna Ndung’u, Scholastica Odhiambo and Abebe Shimeles: “…examines data governance and its implications for policymaking in Africa. Bringing together economists, lawyers, statisticians, and technology experts, it assesses gaps in both the availability and use of existing data across the continent, and argues that data creation, management and governance need to improve if private and public sectors are to reap the benefits of big data and digital technologies. It also considers lessons from across the globe to assess principles, norms and practices that can guide the development of data governance in Africa….(More)”.

The Early History of Counting


Essay by Keith Houston: “Figuring out when humans began to count systematically, with purpose, is not easy. Our first real clues are a handful of curious, carved bones dating from the final few millennia of the three-million-year expanse of the Old Stone Age, or Paleolithic era. Those bones are humanity's first pocket calculators: For the prehistoric humans who carved them, they were mathematical notebooks and counting aids rolled into one. For the anthropologists who unearthed them thousands of years later, they were proof that our ability to count had manifested itself no later than 40,000 years ago.

In 1973, while excavating a cave in the Lebombo Mountains, near South Africa's border with Swaziland, Peter Beaumont found a small, broken bone with twenty-nine notches carved across it. The so-called Border Cave had been known to archaeologists since 1934, but the discovery during World War II of skeletal remains dating to the Middle Stone Age heralded a site of rare importance. It was not until Beaumont's dig in the 1970s, however, that the cave gave up its most significant treasure: the earliest known tally stick, in the form of a notched, three-inch-long baboon fibula.

On the face of it, the numerical instrument known as the tally stick is exceedingly mundane. Used since before recorded history—still used, in fact, by some cultures—to mark the passing days, or to account for goods or monies given or received, most tally sticks are no more than wooden rods incised with notches along their length. They help their users to count, to remember, and to transfer ownership. All of which is reminiscent of writing, except that writing did not arrive until a scant 5,000 years ago—and so, when the Lebombo bone was determined to be some 42,000 years old, it instantly became one of the most intriguing archaeological artifacts ever found. Not only does it put a date on when Homo sapiens started counting, it also marks the point at which we began to delegate our memories to external devices, thereby unburdening our minds so that they might be used for something else instead. Writing in 1776, the German historian Justus Möser knew nothing of the Lebombo bone, but his musings on tally sticks in general are strikingly apposite:

The notched tally stick itself testifies to the intelligence of our ancestors. No invention is simpler and yet more significant than this…(More)”.

What if You Knew What You Were Missing on Social Media?


Article by Julia Angwin: “Social media can feel like a giant newsstand, with more choices than any newsstand ever. It contains news not only from journalism outlets, but also from your grandma, your friends, celebrities and people in countries you have never visited. It is a bountiful feast.

But so often you don't get to pick from the buffet. On most social media platforms, algorithms use your behavior to narrow down the posts you are shown. If you send a celebrity's post to a friend but breeze past your grandma's, the platform may display more posts like the celebrity's in your feed. Even when you choose which accounts to follow, the algorithm still decides which posts to show you and which to bury.
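
To make the mechanism concrete, here is a deliberately simplified sketch of engagement-driven ranking of the kind described above; it is not any platform's actual algorithm, and the post fields and interaction counts are invented for the example.

```python
# A toy engagement-driven feed ranker: posts from accounts the user has
# interacted with more often are scored higher and shown first.
# Illustrative only; not any real platform's ranking system.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

def rank_feed(posts, interaction_counts):
    """Order posts by how often the user has engaged with each author before.
    interaction_counts: e.g. {"celebrity": 5, "grandma": 1}."""
    return sorted(posts, key=lambda p: interaction_counts.get(p.author, 0), reverse=True)

if __name__ == "__main__":
    feed = [Post("grandma", "Garden update"), Post("celebrity", "New tour dates")]
    history = {"celebrity": 5, "grandma": 1}  # the user shared the celebrity's posts before
    for post in rank_feed(feed, history):
        print(post.author, "-", post.text)  # the celebrity's post is surfaced first
```

Choosing what the scoring function rewards is exactly the kind of decision the rest of the article argues users should be able to make for themselves.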

There are a lot of problems with this model. There is the possibility of being trapped in filter bubbles, where we see only news that confirms our existing beliefs. There are rabbit holes, where algorithms can push people toward more extreme content. And there are engagement-driven algorithms that often reward content that is outrageous or horrifying.

Yet not one of those problems is as damaging as the problem of who controls the algorithms. Never has the power to control public discourse been so completely in the hands of a few profit-seeking corporations with no requirements to serve the public good.

Elon Musk’s takeover of Twitter, which he renamed X, has shown what can happen when an individual pushes a political agenda by controlling a social media company.

Since Mr. Musk bought the platform, he has repeatedly declared that he wants to defeat the “woke mind virus” — which he has struggled to define but largely seems to mean Democratic and progressive policies. He has reinstated accounts that were banned because of the white supremacist and antisemitic views they espoused. He has banned journalists and activists. He has promoted far-right figures such as Tucker Carlson and Andrew Tate, who were kicked off other platforms. He has changed the rules so that users can pay to have some posts boosted by the algorithm, and has purportedly changed the algorithm to boost his own posts. The result, as Charlie Warzel said in The Atlantic, is that the platform is now a “far-right social network” that “advances the interests, prejudices and conspiracy theories of the right wing of American politics.”

The Twitter takeover has been a public reckoning with algorithmic control, but any tech company could do something similar. To prevent those who would hijack algorithms for power, we need a pro-choice movement for algorithms. We, the users, should be able to decide what we read at the newsstand…(More)”.

An AI Model Tested In The Ukraine War Is Helping Assess Damage From The Hawaii Wildfires


Article by Irene Benedicto: “On August 7, 2023, the day before the Maui wildfires started in Hawaii, a constellation of earth-observing satellites took multiple pictures of the island at noon, local time. Everything was quiet, still. The next day, at the same time, the same satellites captured images of fires consuming the island. Planet, a San Francisco-based company that owns the largest fleet of satellites taking pictures of the Earth daily, provided this raw imagery to Microsoft engineers, who used it to train an AI model designed to analyze the impact of disasters. Comparing photographs taken before and after the fire, the AI model created maps that highlighted the most devastated areas of the island.
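
As a rough sketch of the before/after comparison at the heart of that workflow: the real system relies on a trained AI model applied to Planet imagery, whereas the toy version below simply flags image cells that change sharply between two acquisitions; the arrays and threshold are assumptions for illustration only.

```python
# A toy stand-in for the damage-mapping step described above: compare a
# "before" and an "after" image cell by cell and flag the largest changes
# as likely-damaged areas. Illustrative numbers, not real satellite data.
import numpy as np

def damage_map(before, after, threshold=0.3):
    """before, after: 2D arrays of normalized brightness in [0, 1].
    Returns a boolean mask flagging cells whose change exceeds the threshold."""
    change = np.abs(after.astype(float) - before.astype(float))
    return change > threshold

if __name__ == "__main__":
    before = np.array([[0.8, 0.8], [0.7, 0.9]])  # intact rooftops and vegetation
    after = np.array([[0.8, 0.2], [0.1, 0.9]])   # burned areas darken sharply
    print(damage_map(before, after))
    # [[False  True]
    #  [ True False]]  -> the two changed cells are flagged as likely damage
```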

With this information, the Red Cross rearranged its work in the field that same day to respond to the most urgent priorities first, helping evacuate thousands of people affected by one of the deadliest fires in over a century. The Hawaii wildfires have already killed over a hundred people; a hundred more remain missing, and at least 11,000 people have been displaced. The relief efforts are ongoing 10 days after the start of the fire, which burned over 3,200 acres. Hawaii Governor Josh Green estimated the recovery efforts could cost $6 billion.

Planet and Microsoft AI were able to pull and analyze the satellite imagery so quickly because they'd struggled to do so the last time they deployed their system: during the Ukraine war. The successful response in Maui is the result of a year and a half of building a new AI tool that corrected fundamental flaws in the previous system, which didn't accurately recognize collapsed buildings against a background of concrete.

“When Ukraine happened, all the AI models failed miserably,” Juan Lavista, chief scientist at Microsoft AI, told Forbes.

The problem was that the company's previous AI models were mainly trained on natural disasters in the U.S. and Africa. But devastation doesn't look the same when it is caused by war and unfolds in an Eastern European city. “We learned that having one single model that would adapt to every single place on earth was likely impossible,” Lavista said…(More)”.

What is the value of data? A review of empirical methods


Paper by Diane Coyle and Annabel Manley: “With the growing use of digital technologies, data have become core to many organizations' decisions, and their value is widely acknowledged across public and private sectors. Yet few comprehensive empirical approaches to establishing the value of data exist, and there is no consensus about which methods should be applied to specific data types or purposes. This paper examines a range of data valuation methodologies proposed in the existing literature. We propose a typology linking methods to different data types and purposes…(More)”.

Driving Excellence in Official Statistics: Unleashing the Potential of Comprehensive Digital Data Governance


Paper by Hossein Hassani and Steve McFeely: “With the ubiquitous use of digital technologies and the consequent data deluge, official statistics faces new challenges and opportunities. In this context, strengthening official statistics through effective data governance will be crucial to ensure reliability, quality, and access to data. This paper presents a comprehensive framework for digital data governance for official statistics, addressing key components, such as data collection and management, processing and analysis, data sharing and dissemination, as well as privacy and ethical considerations. The framework integrates principles of data governance into digital statistical processes, enabling statistical organizations to navigate the complexities of the digital environment. Drawing on case studies and best practices, the paper highlights successful implementations of digital data governance in official statistics. The paper concludes by discussing future trends and directions, including emerging technologies and opportunities for advancing digital data governance…(More)”.

The Urgent Need to Reimagine Data Consent


Article by Stefaan G. Verhulst, Laura Sandor & Julia Stamm: “Given the significant benefits that can arise from the use and reuse of data to tackle contemporary challenges such as migration, it is worth exploring new approaches to collecting and utilizing data that empower individuals and communities, granting them the ability to determine how their data can be utilized for various personal, community, and societal causes. This need is not specific to migrants alone. It applies to various regions, populations, and fields, ranging from public health and education to urban mobility. There is a pressing demand to involve communities, often already vulnerable, in establishing responsible access to their data that aligns with their expectations, while simultaneously serving the greater public good.

We believe the answer lies in a reimagination of the concept of consent. Traditionally, consent has been the tool of choice to secure agency and individual rights, but that concept, we would suggest, is no longer sufficient in today's era of datafication. Instead, we should strive to establish a new standard of social license. Here, we'll define what we mean by a social license and outline some of the limitations of consent (as it is typically defined and practiced today). Then we'll describe one possible means of securing social license—through participatory decision-making…(More)”.