Computer Science and the Law


Article by Steven M. Bellovin: “There were three U.S. technical/legal developments occurring in approximately 1993 that had a profound effect on the technology industry and on many technologists. More such developments are occurring with increasing frequency.

The three developments were, in fact, technically unrelated. One was a bill before the U.S. Congress for a standardized wiretap interface in phone switches, a concept that spread around the world under the generic name of “lawful intercept.” The second was an update to the copyright statute to adapt to the digital age. While there were some useful changes—caching proxies and ISPs transmitting copyrighted material were no longer to be held liable for making illegal copies of protected content—it also provided an easy way for careless or unscrupulous actors—including bots—to request takedown of perfectly legal material. The third was the infamous Clipper chip, an encryption device that provided a backdoor for the U.S.—and only the U.S.—government.

All three of these developments could be and were debated on purely legal or policy grounds. But there were also technical issues. Thus, one could argue on legal grounds that the Clipper chip granted the government unprecedented powers, powers arguably in violation of the Fourth Amendment to the U.S. Constitution. That, of course, is a U.S. issue—but technologists, including me, pointed out the technical risks of deploying a complex cryptographic protocol, anywhere in the world (and many other countries have since expressed similar desires). Sure enough, Matt Blaze showed how to abuse the Clipper chip to let it do backdoor-free encryption, and at least two other mechanisms for adding backdoors to encryption protocols were shown to have flaws that allowed malefactors to read data that others had encrypted.

These posed a problem: debating some issues intelligently required not just a knowledge of law or of technology, but of both. That is, some problems cannot be discussed purely on technical grounds or purely on legal grounds; the crux of the matter lies in the intersection.

Consider, for example, the difference between content and metadata in a communication. Metadata alone is extremely powerful; indeed, Michael Hayden, former director of both the CIA and the NSA, once said, “We kill people based on metadata.” The combination of content and metadata is of course even more powerful. However, under U.S. law (and the legal reasoning is complex and controversial), the content of a phone call is much more strongly protected than the metadata: who called whom, when, and for how long they spoke. But how does this doctrine apply to the Internet, a network that provides far more powerful abilities to the endpoints in a conversation? (Metadata analysis is not an Internet-specific phenomenon. The militaries of the world have likely been using it for more than a century.) You cannot begin to answer that question without knowing not just how the Internet actually works, but also the legal reasoning behind the difference. It took more than 100 pages for some colleagues and me, three computer scientists and a former Federal prosecutor, to show how the line between content and metadata can be drawn in some cases (and that the Department of Justice’s manuals and some Federal judges got the line wrong), but that in other cases, there is no possible line.1

Newer technologies pose the same sorts of risks…(More)”.

When data disappear: public health pays as US policy strays


Paper by Thomas McAndrew, Andrew A Lover, Garrik Hoyt, and Maimuna S Majumder: “Presidential actions on Jan 20, 2025, by President Donald Trump, including executive orders, have delayed access to or led to the removal of crucial public health data sources in the USA. The continuous collection and maintenance of health data support public health, safety, and security associated with diseases such as seasonal influenza. To show how public health data surveillance enhances public health practice, we analysed data from seven US Government-maintained sources associated with seasonal influenza. We fit two models that forecast the number of national incident influenza hospitalisations in the USA: (1) a data-rich model incorporating data from all seven Government data sources; and (2) a data-poor model built using a single Government hospitalisation data source, representing the minimal required information to produce a forecast of influenza hospitalisations. The data-rich model generated reliable forecasts useful for public health decision making, whereas the predictions using the data-poor model were highly uncertain, rendering them impractical. Thus, health data can serve as a transparent and standardised foundation to improve domestic and global health. Therefore, a plan should be developed to safeguard public health data as a public good…(More)”.
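
The paper’s core comparison can be illustrated with a toy sketch. Nothing below is the authors’ actual model or data: the synthetic surveillance signals, the rolling ordinary-least-squares fit, and all variable names are assumptions for illustration. The sketch only shows the structure of the contrast between a forecast fed by several surveillance sources and one fed by hospitalisations alone.

```python
# Illustrative sketch only, not the study's models: contrast a "data-rich"
# forecast (lagged hospitalisations plus two auxiliary surveillance signals)
# with a "data-poor" forecast (lagged hospitalisations alone), using synthetic
# data and rolling ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
weeks = 150
season = 80 + 60 * np.sin(np.arange(weeks) * 2 * np.pi / 52)      # seasonal driver
hosp = season + rng.normal(0, 10, weeks)                           # weekly hospitalisations
ili_visits = 0.5 * season + rng.normal(0, 8, weeks)                # hypothetical auxiliary signal
lab_positivity = 0.3 * season + rng.normal(0, 8, weeks)            # hypothetical auxiliary signal

def one_step_errors(features):
    """Refit OLS each week and return absolute one-week-ahead forecast errors."""
    errors = []
    for t in range(60, weeks - 1):
        X = np.column_stack([np.ones(t)] + [f[:t] for f in features])
        beta, *_ = np.linalg.lstsq(X, hosp[1:t + 1], rcond=None)   # regress next week's count
        x_next = np.concatenate(([1.0], [f[t] for f in features]))
        errors.append(abs(x_next @ beta - hosp[t + 1]))
    return np.array(errors)

rich = one_step_errors([hosp, ili_visits, lab_positivity])
poor = one_step_errors([hosp])
print(f"data-rich mean absolute error: {rich.mean():.1f}")
print(f"data-poor mean absolute error: {poor.mean():.1f}")
```

In the paper, the data-poor forecasts were not merely less accurate but too uncertain to support decision making; the sketch above only conveys why removing surveillance inputs degrades a forecast, not the magnitude reported by the authors.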

What Happens When AI-Generated Lies Are More Compelling than the Truth?


Essay by Nicholas Carr: “…In George Orwell’s 1984, the functionaries in Big Brother’s Ministry of Truth spend their days rewriting historical records, discarding inconvenient old facts and making up new ones. When the truth gets hazy, tyrants get to define what’s true. The irony here is sharp. Artificial intelligence, perhaps humanity’s greatest monument to logical thinking, may trigger a revolution in perception that overthrows the shared values of reason and rationality we inherited from the Enlightenment.

In 1957, a Russian scientist-turned-folklorist named Yuri Mirolyubov published a translation of an ancient manuscript—a thousand years old, he estimated—in a Russian-language newspaper in San Francisco. Mirolyubov’s Book of Veles told stirring stories of the god Veles, a prominent deity in pre-Christian Slavic mythology. A shapeshifter, magician, and trickster, Veles would visit the mortal world in the form of a bear, sowing mischief wherever he went.

Mirolyubov claimed that the manuscript, written on thin wooden boards bound with leather straps, had been discovered by a Russian soldier in a bombed-out Ukrainian castle in 1919. The soldier had photographed the boards and given the pictures to Mirolyubov, who translated the work into modern Russian. Mirolyubov illustrated his published translation with one of the photographs, though the original boards, he said, had disappeared mysteriously during the Second World War. Though historians and linguists soon dismissed the folklorist’s Book of Veles as a hoax, its renown spread. Today, it’s revered as a holy text by certain neo-pagan and Slavic nationalist cults.

Mythmaking, more than truth seeking, is what seems likely to define the future of media and of the public square.

Myths are works of art. They provide a way of understanding the world that appeals not to reason but to emotion, not to the conscious mind but to the subconscious one. What is most pleasing to our sensibilities—what is most beautiful to us—is what feels most genuine, most worthy of belief. History and psychology both suggest that, in politics as in art, generative AI will succeed in fulfilling the highest aspiration of its creators: to make the virtual feel more authentic than the real…(More)”

Accounting for State Capacity


Essay by Kevin Hawickhorst: “The debates over the Department of Government Efficiency have revealed, if nothing else, that the federal budget is obscure even to the political combatants ostensibly responsible for developing and overseeing it. In the executive branch, Elon Musk highlights that billions of dollars of payments are processed by the Treasury without even a memo line. Meanwhile, in Congress, Republican politicians highlight the incompleteness of the bureaucracy’s spending records, while Democrats bemoan the Trump administration’s dissimulation in ceasing to share budgetary guidance documents. The camp followers of these obscure programs are thousands of federal contractors, pursuing vague goals with indefinite timelines. As soon as the ink on a bill is dry, it seems, Congress loses sight of its initiatives until their eventual success or their all-too-frequent failure.

Contrast this with the 1930s, when the Roosevelt administration provided Congress with hundreds of pages of spending reports every ten days, outlining how tax dollars were being put to use in minute detail. The speed and thoroughness with which these reports were produced is hard to fathom, and yet the administration was actually holding its best information back. FDR’s Treasury had itemized information on hundreds of thousands of projects, down to the individual checks that were written. Incredibly, politicians had better dashboards in the era of punch cards than we have in the era of AI. The decline in government competence runs deeper than our inability to match the speed and economy of New Deal construction: even their accounting was better. What happened?

Political scientists discuss the decline in government competence in terms of “state capacity,” which describes a government’s ability to achieve the goals it pursues. Most political scientists agree that the United States not only suffers from degraded state capacity in absolute terms, but has less state capacity today than in the early twentieth century. A popular theory for this decline blames the excessive proceduralism of the U.S. government: the “cascade of rigidity” or the “procedure fetish.”

But reformers need more than complaints. To rebuild state capacity, reformers need an affirmative vision of what good procedure should look like and, in order to enact it, knowledge of how government procedure is changed. The history of government budgeting and accounting reform illustrates both. There were three major eras of reform to federal accounting in the twentieth century: New Deal reforms of the 1930s, conservative reforms of the 1940s and 1950s, and liberal reforms of the 1960s. This history tells the story of how accounting reforms first built up American state capacity and how later reforms contributed to its gradual decline. These reforms thus offer lessons on rebuilding state capacity today…(More)”.

“R&D” Means Something Different on Capitol Hill


Article by Sheril Kirshenbaum: “My first morning as a scientist-turned-Senate-staffer began with a misunderstanding that would become a metaphor for my impending immersion into the complex world of policymaking. When my new colleagues mentioned “R&D,” I naively assumed they were discussing critical topics related to research and development. After 10 or so confused minutes, I realized they were referring to Republicans and Democrats—my first lesson in the distinctive language and unique dynamics of congressional work. The “R&D” at the center of their world was vastly different from mine.

In the 20 years since, I’ve moved between academic science positions and working on science policy in the Senate, under both Republican and Democratic majorities. My goal during these two decades has remained the same—to promote evidence-based policymaking that advances science and serves the public, regardless of the political landscape. But the transition from scientist to staffer has transformed my understanding of why so many efforts by scientists to influence policy falter. Despite generations of scholarly research to understand how information informs political decisions, scientists and other academics consistently overlook a crucial part of the process: the role of congressional staffers.

The staff hierarchy shapes how scientific information flows to elected officials. Chiefs of staff manage office operations and serve as the member’s closest advisors. Legislative directors oversee all policy matters, while legislative assistants (LAs) handle specific issue portfolios. One or two LAs may be designated as the office “science people,” although they often lack formal scientific training. Committee staffers provide deeper expertise and institutional knowledge on topics within their jurisdiction. In this ecosystem, few dedicated science positions exist, and science-related topics are distributed among staff already juggling multiple responsibilities…(More)”

Farmers win legal fight to bring climate resources back to federal websites


Article by Justine Calma: “After farmers filed suit, the US Department of Agriculture (USDA) has agreed to restore climate information to webpages it took down soon after President Donald Trump took office this year.

The US Department of Justice filed a letter late last night on behalf of the USDA that says the agency “will restore the climate-change-related web content that was removed post-inauguration, including all USDA webpages and interactive tools” that were named in the plaintiffs’ complaint. It says the work is already “underway” and should be mostly done in about two weeks.

If the Trump administration fulfills that commitment, it’ll be a significant victory for farmers and other Americans who rely on scientific data that has disappeared from federal websites since January…(More)”.

Indiana Faces a Data Center Backlash


Article by Matthew Zeitlin: “Indiana has power. Indiana has transmission. Indiana has a business-friendly Republican government. Indiana is close to Chicago but — crucially — not in Illinois. All of this has led to a huge surge of data center development in the “Crossroads of America.” It has also led to an upswell of local opposition.

There are almost 30 active data center proposals in Indiana, plus five that have already been rejected in the past year, according to data collected by the environmentalist group Citizens Action Coalition. Google, Amazon, and Meta have all announced projects in the state since the beginning of 2024.

Nipsco, one of the state’s utilities, has projected 2,600 megawatts worth of new load by the middle of the next decade as its base scenario, mostly attributable to “large economic development projects.” In a more aggressive scenario, it sees 3,200 megawatts of new load — that’s three large nuclear reactors’ worth — by 2028 and 8,600 megawatts by 2035. While short of, say, the almost 36,500 megawatts worth of load growth planned in Georgia for the next decade, it’s still a vast range of outcomes that requires some kind of advanced planning.

That new electricity consumption will likely be powered by fossil fuels. Projected load growth in the state has extended a lifeline to Indiana’s coal-fired power plants, with retirement dates for some of the fleet being pushed out to late in the 2030s. It’s also created a market for new natural gas-fired plants that utilities say are necessary to power the expected new load.

State and local political leaders have greeted these new data center projects with enthusiasm, Ben Inskeep, the program director at CAC, told me. “Economic development is king here,” he said. “That is what all the politicians and regulators say their number one concern is: attracting economic development.”..(More)”.

The Importance of Co-Designing Questions: 10 Lessons from Inquiry-Driven Grantmaking


Article by Hannah Chafetz and Stefaan Verhulst: “How can a question-based approach to philanthropy enable better learning and deeper evaluation across both sides of the partnership and help make progress towards long-term systemic change? That’s what Siegel Family Endowment (Siegel), a family foundation based in New York City, sought to answer by creating an Inquiry-Driven Grantmaking approach

While many philanthropies continue to follow traditional practices that focus on achieving a set of strategic objectives, Siegel employs an inquiry-driven approach, which focuses on answering questions that can accelerate insights and iteration across the systems they seek to change. By framing their goal as “learning” rather than an “outcome” or “metric,” they aim to generate knowledge that can be shared across the whole field and unlock impact beyond the work on individual grants. 

The Siegel approach centers on co-designing and iteratively refining questions with grantees to address evolving strategic priorities, using rapid iteration and stakeholder engagement to generate insights that inform both grantee efforts and the foundation’s decision-making.

Their approach was piloted in 2020, and refined and operationalized in the years that followed. As of 2024, it was applied across the vast majority of their grantmaking portfolio. Laura Maher, Chief of Staff and Director of External Engagement at Siegel Family Endowment, notes: “Before our Inquiry-Driven Grantmaking approach we spent roughly 90% of our time on the grant writing process and 10% checking in with grantees, and now that’s balancing out more.”


Image of the Inquiry-Driven Grantmaking Process from the Siegel Family Endowment

Earlier this year, the DATA4Philanthropy team held two in-depth discussions with Siegel’s Knowledge and Impact team about their Inquiry-Driven Grantmaking approach and what they have learned thus far from applying the new methodology. While the Siegel team notes that there is still much to be learned, there are several takeaways that may be useful to others looking to initiate a questions-led approach.

Below we provide 10 emerging lessons from these discussions…(More)”.

Building Community-Centered AI Collaborations


Article by Michelle Flores Vryn and Meena Das: “AI can only boost the under-resourced nonprofit world if we design it to serve the communities we care about. But as nonprofits consider how to incorporate AI into their work, many look to expertise from the tech sector, expecting tools and implementation advice as well as ethical guidance. Yet when mission-driven entities—with a strong focus on people, communities, and equity—partner solely with tech companies, they may encounter a variety of obstacles, such as:

  1. Limited understanding of community needs: Sector-specific knowledge is essential for aligning AI with nonprofit missions, something many tech companies lack.
  2. Bias in AI models: Without diverse input, AI models may exacerbate biases or misrepresent the communities that nonprofits serve.
  3. Resource constraints: Tech solutions often presume budgets or capacity beyond what nonprofits can bring to bear, creating a reliance on tools that don’t fit the nonprofit context.

We need creative, diverse collaborations across various fields to ensure that technology is deployed in ways that align with nonprofit values, build trust, and serve the greater good. Seeking partners outside of the tech world helps nonprofits develop AI solutions that are context-aware, equitable, and resource-sensitive. Most importantly, nonprofit practitioners must deeply consider our ideal future state: What does an AI-empowered nonprofit sector look like when it truly centers human well-being, community agency, and ethical technology?

Imagining this future means not just reacting to emerging technology but proactively shaping its trajectory. Instead of simply adapting to AI’s capabilities, nonprofits should ask:

  • What problems do we truly need AI to solve?
  • Whose voices must be centered in AI decision-making?
  • How do we ensure AI remains a tool for empowerment rather than control?..(More)”.

Policy Implications of DeepSeek AI’s Talent Base


Brief by Amy Zegart and Emerson Johnston: “Chinese startup DeepSeek’s highly capable R1 and V3 models challenged prevailing beliefs about the United States’ advantage in AI innovation, but public debate focused more on the company’s training data and computing power than human talent. We analyzed data on the 223 authors listed on DeepSeek’s five foundational technical research papers, including information on their research output, citations, and institutional affiliations, to identify notable talent patterns. Nearly all of DeepSeek’s researchers were educated or trained in China, and more than half never left China for schooling or work. Of the quarter or so that did gain some experience in the United States, most returned to China to work on AI development there. These findings challenge the core assumption that the United States holds a natural AI talent lead. Policymakers need to reinvest in competing to attract and retain the world’s best AI talent while bolstering STEM education to maintain competitiveness…(More)”.
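
A minimal sketch of the kind of tally that underlies such a talent analysis, assuming a hypothetical author table; the column names and sample rows below are invented for illustration and are not the brief’s data or code.

```python
# Hypothetical illustration, not the brief's dataset: given a table of paper
# authors with the countries where each was educated and employed, tally the
# share whose entire education and work history is in China.
import pandas as pd

authors = pd.DataFrame({
    "author": ["A", "B", "C", "D"],
    "education_countries": [["China"], ["China", "USA"], ["China"], ["China", "UK"]],
    "work_countries": [["China"], ["USA", "China"], ["China"], ["China"]],
})

def never_left_china(row):
    # Union of every country the author studied or worked in.
    countries = set(row["education_countries"]) | set(row["work_countries"])
    return countries == {"China"}

authors["never_left_china"] = authors.apply(never_left_china, axis=1)
print(f"share who never left China: {authors['never_left_china'].mean():.0%}")
```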