Indiana Faces a Data Center Backlash


Article by Matthew Zeitlin: “Indiana has power. Indiana has transmission. Indiana has a business-friendly Republican government. Indiana is close to Chicago but — crucially — not in Illinois. All of this has led to a huge surge of data center development in the “Crossroads of America.” It has also led to an upswell of local opposition.

There are almost 30 active data center proposals in Indiana, plus five that have already been rejected in the past year, according to data collected by the environmentalist group Citizens Action Coalition. Google, Amazon, and Meta have all announced projects in the state since the beginning of 2024.

Nipsco, one of the state’s utilities, has projected 2,600 megawatts’ worth of new load by the middle of the next decade in its base scenario, mostly attributable to “large economic development projects.” In a more aggressive scenario, it sees 3,200 megawatts of new load — that’s three large nuclear reactors’ worth — by 2028 and 8,600 megawatts by 2035. While short of, say, the almost 36,500 megawatts’ worth of load growth planned in Georgia for the next decade, it’s still a vast range of outcomes that requires some kind of advance planning.

That new electricity consumption will likely be powered by fossil fuels. Projected load growth in the state has extended a lifeline to Indiana’s coal-fired power plants, with retirement dates for some of the fleet being pushed out to late in the 2030s. It’s also created a market for new natural gas-fired plants that utilities say are necessary to power the expected new load.

State and local political leaders have greeted these new data center projects with enthusiasm, Ben Inskeep, the program director at CAC, told me. “Economic development is king here,” he said. “That is what all the politicians and regulators say their number one concern is: attracting economic development.”…(More)”.

Technical Tiers: A New Classification Framework for Global AI Workforce Analysis


Report by Siddhi Pal, Catherine Schneider and Ruggero Marino Lazzaroni: “… introduces a novel three-tiered classification system for global AI talent that addresses significant methodological limitations in existing workforce analyses. By distinguishing between non-technical roles (Category 0), technical software development (Category 1), and advanced deep learning specialization (Category 2), our framework enables precise examination of AI workforce dynamics at a pivotal moment in global AI policy.
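To make the tiering concrete, here is a minimal illustrative sketch of how individual profiles could be bucketed into the three categories. The keyword lists and the `classify_profile` helper are hypothetical and purely for exposition; the report’s actual coding methodology is not reproduced here.

```python
# Illustrative only: hypothetical keyword-based tiering of AI talent profiles.
# Category 0 = non-technical, Category 1 = technical software development,
# Category 2 = advanced deep learning specialization (per the report's tiers).

DEEP_LEARNING_SKILLS = {"deep learning", "pytorch", "tensorflow", "transformers", "reinforcement learning"}
SOFTWARE_SKILLS = {"python", "java", "c++", "sql", "software engineering"}

def classify_profile(skills: set[str]) -> int:
    """Return the (assumed) category for a profile's lower-cased skill set."""
    if skills & DEEP_LEARNING_SKILLS:
        return 2  # advanced deep learning specialization
    if skills & SOFTWARE_SKILLS:
        return 1  # technical software development
    return 0      # non-technical AI-adjacent role

profiles = [
    {"python", "pytorch", "transformers"},    # -> Category 2
    {"java", "sql"},                          # -> Category 1
    {"ai policy", "product management"},      # -> Category 0
]
print([classify_profile(p) for p in profiles])  # [2, 1, 0]
```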

Through our analysis of a sample of 1.6 million individuals in the AI talent pool across 31 countries, we’ve uncovered clear patterns in technical talent distribution that significantly impact Europe’s AI ambitions. Asian nations hold an advantage in specialized AI expertise, with South Korea (27%), Israel (23%), and Japan (20%) maintaining the highest proportions of Category 2 talent. Within Europe, Poland and Germany stand out as leaders in specialized AI talent. This may be connected to their initiatives to attract tech companies and investments in elite research institutions, though further research is needed to confirm these relationships.

Our data also reveals a shifting landscape of global talent flows. Research shows that countries employing points-based immigration systems attract 1.5 times more high-skilled migrants than those using demand-led approaches. This finding takes on new significance in light of recent geopolitical developments affecting scientific research globally. As restrictive policies and funding cuts create uncertainty for researchers in the United States, one of the big destinations for European AI talent, the way nations position their regulatory environments, scientific freedoms, and research infrastructure will increasingly determine their ability to attract and retain specialized AI talent.

The gender analysis in our study illuminates another dimension of competitive advantage. Contrary to the overall AI talent pool, EU countries lead in female representation in highly technical roles (Category 2), occupying seven of the top ten global rankings. Finland, Czechia, and Italy have the highest proportion of female representation in Category 2 roles globally (39%, 31%, and 28%, respectively). This gender diversity represents not merely a social achievement but a potential strategic asset in AI innovation, particularly as global coalitions increasingly emphasize the importance of diverse perspectives in AI development…(More)”.

Hundreds of scholars say U.S. is swiftly heading toward authoritarianism


Article by Frank Langfitt: “A survey of more than 500 political scientists finds that the vast majority think the United States is moving swiftly from liberal democracy toward some form of authoritarianism.

In the benchmark survey, known as Bright Line Watch, U.S.-based professors rate the performance of American democracy on a scale from zero (complete dictatorship) to 100 (perfect democracy). After President Trump’s election in November, scholars gave American democracy a rating of 67. Several weeks into Trump’s second term, that figure plummeted to 55.

“That’s a precipitous drop,” says John Carey, a professor of government at Dartmouth and co-director of Bright Line Watch. “There’s certainly consensus: We’re moving in the wrong direction.”…Not all political scientists view Trump with alarm, but many who, like Carey, focus on democracy and authoritarianism are deeply troubled by Trump’s attempts to expand executive power over his first several months in office.

“We’ve slid into some form of authoritarianism,” says Steven Levitsky, a professor of government at Harvard, and co-author of How Democracies Die. “It is relatively mild compared to some others. It is certainly reversible, but we are no longer living in a liberal democracy.”…Kim Lane Scheppele, a Princeton sociologist who has spent years tracking Hungary, is also deeply concerned: “We are on a very fast slide into what’s called competitive authoritarianism.”

When these scholars use the term “authoritarianism,” they aren’t talking about a system like China’s, a one-party state with no meaningful elections. Instead, they are referring to something called “competitive authoritarianism,” the kind scholars say they see in countries such as Hungary and Turkey.

In a competitive authoritarian system, a leader comes to power democratically and then erodes the system of checks and balances. Typically, the executive fills the civil service and key appointments — including the prosecutor’s office and judiciary — with loyalists. He or she then attacks the media, universities and nongovernmental organizations to blunt public criticism and tilt the electoral playing field in the ruling party’s favor…(More)”.

UAE set to use AI to write laws in world first


Article by Chloe Cornish: “The United Arab Emirates aims to use AI to help write new legislation and review and amend existing laws, in the Gulf state’s most radical attempt to harness a technology into which it has poured billions.

The plan for what state media called “AI-driven regulation” goes further than anything seen elsewhere, AI researchers said, while noting that details were scant. Other governments are trying to use AI to become more efficient, from summarising bills to improving public service delivery, but not to actively suggest changes to current laws by crunching government and legal data.

“This new legislative system, powered by artificial intelligence, will change how we create laws, making the process faster and more precise,” said Sheikh Mohammad bin Rashid Al Maktoum, the Dubai ruler and UAE vice-president, quoted by state media.

Ministers last week approved the creation of a new cabinet unit, the Regulatory Intelligence Office, to oversee the legislative AI push. 

Rony Medaglia, a professor at Copenhagen Business School, said the UAE appeared to have an “underlying ambition to basically turn AI into some sort of co-legislator”, and described the plan as “very bold”.

Abu Dhabi has bet heavily on AI and last year opened a dedicated investment vehicle, MGX, which has backed a $30bn BlackRock AI-infrastructure fund among other investments. MGX has also added an AI observer to its own board.

The UAE plans to use AI to track how laws affect the country’s population and economy by creating a massive database of federal and local laws, together with public sector data such as court judgments and government services.

The AI will “regularly suggest updates to our legislation,” Sheikh Mohammad said, according to state media. The government expects AI to speed up lawmaking by 70 per cent, according to the cabinet meeting readout…(More)”.

So You Want to Be a Dissident?


Essay by Julia Angwin and Ami Fields-Meyer: “…Heimans points to an increasingly hostile digital landscape as one barrier to effective grassroots campaigns. At the dawn of the digital era, in the two-thousands, e-mail transformed the field of political organizing, enabling groups like MoveOn.org to mobilize huge campaigns against the Iraq War, and allowing upstart candidates like Howard Dean and Barack Obama to raise money directly from people instead of relying on Party infrastructure. But now everyone’s e-mail inboxes are overflowing. The tech oligarchs who control the social-media platforms are less willing to support progressive activism. Globally, autocrats have more tools to surveil and disrupt digital campaigns. And regular people are burned out on actions that have failed to remedy fundamental problems in society.

It’s not clear what comes next. Heimans hopes that new tactics will be developed, such as, perhaps, a new online platform that would help organizing, or the strengthening of a progressive-media ecosystem that will engage new participants. “Something will emerge that kind of revitalizes the space.”

There’s an oft-told story about Andrei Sakharov, the celebrated twentieth-century Soviet activist. Sakharov made his name working as a physicist on the development of the U.S.S.R.’s hydrogen bomb, at the height of the Cold War, but shot to global prominence after Leonid Brezhnev’s regime punished him for speaking publicly about the dangers of those weapons, and also about Soviet repression.

When an American friend was visiting Sakharov and his wife, the activist Yelena Bonner, in Moscow, the friend referred to Sakharov as a dissident. Bonner corrected him: “My husband is a physicist, not a dissident.”

This is a fundamental tension of building a principled dissident culture—it risks wrapping people up in a kind of negative identity, a cloak of what they are not. The Soviet dissidents understood their work as a struggle to uphold the laws and rights that were enshrined in the Soviet constitution, not as a fight against a regime.

“They were fastidious about everything they did being consistent with Soviet law,” Benjamin Nathans, a history professor at the University of Pennsylvania and the author of a book on Soviet dissidents, said. “I call it radical civil obedience.”

An affirmative vision of what the world should be is the inspiration for many of those who, in these tempestuous early months of Trump 2.0, have taken meaningful risks—acts of American dissent.

Consider Mariann Budde, the Episcopal bishop who used her pulpit before Trump on Inauguration Day to ask the President’s “mercy” for two vulnerable groups for whom he has reserved his most visceral disdain. For her sins, a congressional ally of the President called for the pastor to be “added to the deportation list.”…(More)”.

Artificial Intelligence and National Security


CRS Report: “Artificial intelligence (AI) is a rapidly growing field of technology with potentially significant implications for national security. As such, the U.S. Department of Defense (DOD) and other nations are developing AI applications for a range of military functions. AI research is underway in the fields of intelligence collection and analysis, logistics, cyber operations, information operations, command and control, and in a variety of semiautonomous and autonomous vehicles.

Already, AI has been incorporated into military operations in Iraq and Syria. Congressional action has the potential to shape the technology’s development further, with budgetary and legislative decisions influencing the growth of military applications as well as the pace of their adoption.

AI technologies present unique challenges for military integration, particularly because the bulk of AI development is happening in the commercial sector. Although AI is not unique in this regard, the defense acquisition process may need to be adapted for acquiring emerging technologies like AI. In addition, many commercial AI applications must undergo significant modification prior to being functional for the military.

A number of cultural issues also challenge AI acquisition, as some commercial AI companies are averse to partnering with DOD due to ethical concerns, and even within the department, there can be resistance to incorporating AI technology into existing weapons systems and processes.

Potential international rivals in the AI market are creating pressure for the United States to compete for innovative military AI applications. China is a leading competitor in this regard, releasing a plan in 2017 to capture the global lead in AI development by 2030. Currently, China is primarily focused on using AI to make faster and more well-informed decisions, as well as on developing a variety of autonomous military vehicles. Russia is also active in military AI development, with a primary focus on robotics.

Although AI has the potential to impart a number of advantages in the military context, it may also introduce distinct challenges. AI technology could, for example, facilitate autonomous operations, lead to more informed military decisionmaking, and increase the speed and scale of military action. However, it may also be unpredictable or vulnerable to unique forms of manipulation. As a result of these factors, analysts hold a broad range of opinions on how influential AI will be in future combat operations. While a small number of analysts believe that the technology will have minimal impact, most believe that AI will have at least an evolutionary—if not revolutionary—effect….(More)”.

Why Data Is Not the New Oil


Blogpost by Alec Stapp: “Data is the new oil,” said Jaron Lanier in a recent op-ed for The New York Times. Lanier’s use of this metaphor is only the latest instance of what has become the dumbest meme in tech policy. As the digital economy becomes more prominent in our lives, it is not unreasonable to seek to understand one of its most important inputs. But this analogy to the physical economy is fundamentally flawed. Worse, introducing regulations premised upon faulty assumptions like this will likely do far more harm than good. Here are seven reasons why “data is the new oil” misses the mark:

1. Oil is rivalrous; data is non-rivalrous

If someone uses a barrel of oil, it can’t be consumed again. But, as Alan McQuinn, a senior policy analyst at the Information Technology and Innovation Foundation, noted, “when consumers ‘pay with data’ to access a website, they still have the same amount of data after the transaction as before. As a result, users have an infinite resource available to them to access free online services.” Imposing restrictions on data collection makes this infinite resource finite. 

2. Oil is excludable; data is non-excludable

Oil is highly excludable because, as a physical commodity, it can be stored in ways that prevent use by non-authorized parties. However, as my colleagues pointed out in a recent comment to the FTC: “While databases may be proprietary, the underlying data usually is not.” They go on to argue that this can lead to under-investment in data collection:

[C]ompanies that have acquired a valuable piece of data will struggle both to prevent their rivals from obtaining the same data as well as to derive competitive advantage from the data. For these reasons, it also means that firms may well be more reluctant to invest in data generation than is socially optimal. In fact, to the extent this is true there is arguably more risk of companies under-investing in data generation than of firms over-investing in order to create data troves with which to monopolize a market. This contrasts with oil, where complete excludability is the norm.

3. Oil is fungible; data is non-fungible

Oil is a commodity, so, by definition, one barrel of oil of a given grade is equivalent to any other barrel of that grade. Data, on the other hand, is heterogeneous. Each person’s data is unique and may consist of a practically unlimited number of different attributes that can be collected into a profile. This means that oil will follow the law of one price, while a dataset’s value will be highly contingent on its particular properties and commercialization potential.

4. Oil has positive marginal costs; data has zero marginal costs

There is a significant expense to producing and distributing an additional barrel of oil (as low as $5.49 per barrel in Saudi Arabia; as high as $21.66 in the U.K.). Data is merely encoded information (bits of 1s and 0s), so gathering, storing, and transferring it is nearly costless (though, to be clear, setting up systems for collecting and processing can be a large fixed cost). Under perfect competition, the market clearing price is equal to the marginal cost of production (hence why data is traded for free services and oil still requires cold, hard cash)….(More)”.
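The pricing logic referenced in point 4 can be stated compactly; this is a minimal sketch of the standard perfect-competition result, using the per-barrel cost figures cited above purely as placeholders.

```latex
% Under perfect competition, price settles at marginal cost: p* = MC.
% An extra barrel of oil has MC > 0 (the cited range is roughly $5.49--$21.66),
% so its competitive price stays positive. An extra copy of a dataset has
% MC close to zero, so the competitive price of access tends toward zero --
% consistent with data being "traded" for free services rather than cash.
\[
  p^{*} = MC, \qquad
  MC_{\text{oil}} > 0 \;\Rightarrow\; p^{*}_{\text{oil}} > 0, \qquad
  MC_{\text{data}} \approx 0 \;\Rightarrow\; p^{*}_{\text{data}} \approx 0.
\]
```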

We are finally getting better at predicting organized conflict


Tate Ryan-Mosley at MIT Technology Review: “People have been trying to predict conflict for hundreds, if not thousands, of years. But it’s hard, largely because scientists can’t agree on its nature or how it arises. The critical factor could be something as apparently innocuous as a booming population or a bad year for crops. Other times a spark ignites a powder keg, as with the assassination of Archduke Franz Ferdinand of Austria in the run-up to World War I.

Political scientists and mathematicians have come up with a slew of different methods for forecasting the next outbreak of violence—but no single model properly captures how conflict behaves. A study published in 2011 by the Peace Research Institute Oslo used a single model to run global conflict forecasts from 2010 to 2050. It estimated a less than 0.05% chance of violence in Syria. Humanitarian organizations, which could have been better prepared had the predictions been more accurate, were caught flat-footed by the outbreak of Syria’s civil war in March 2011. It has since displaced some 13 million people.

Bundling individual models to maximize their strengths and weed out weaknesses has resulted in big improvements. The first public ensemble model, the Early Warning Project, launched in 2013 to forecast new instances of mass killing. Run by researchers at the US Holocaust Memorial Museum and Dartmouth College, it claims 80% accuracy in its predictions.
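As a rough illustration of the ensembling idea described above, the sketch below combines conflict-risk probabilities from several component models by weighted averaging. The model names, outputs, and weights are invented for exposition and are not drawn from the Early Warning Project or any other real system.

```python
# Illustrative ensemble: combine per-region conflict-risk probabilities from
# several hypothetical component models via a weighted average.

def ensemble_risk(model_probs: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of each model's predicted probability of conflict onset."""
    total_weight = sum(weights[m] for m in model_probs)
    return sum(weights[m] * p for m, p in model_probs.items()) / total_weight

# Hypothetical component forecasts for one region (probability of onset next year).
model_probs = {"structural": 0.02, "event_data": 0.10, "expert_survey": 0.05}
weights = {"structural": 1.0, "event_data": 2.0, "expert_survey": 1.0}  # assumed weights

print(f"Ensemble risk: {ensemble_risk(model_probs, weights):.3f}")  # prints approximately 0.068
```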

Improvements in data gathering, translation, and machine learning have further advanced the field. A newer model called ViEWS, built by researchers at Uppsala University, provides a huge boost in granularity. Focusing on conflict in Africa, it offers monthly predictive readouts on multiple regions within a given state. Its threshold for violence is a single death.

Some researchers say there are private—and in some cases, classified—predictive models that are likely far better than anything public. Worries that making predictions public could undermine diplomacy or change the outcome of world events are not unfounded. But that is precisely the point. Public models are good enough to help direct aid to where it is needed and alert those most vulnerable to seek safety. Properly used, they could change things for the better, and save lives in the process….(More)”.

Information Wars: How We Lost the Global Battle Against Disinformation and What We Can Do About It


Book by Richard Stengel: “Disinformation is as old as humanity. When Satan told Eve nothing would happen if she bit the apple, that was disinformation. But the rise of social media has made disinformation even more pervasive and pernicious in our current era. In a disturbing turn of events, governments are increasingly using disinformation to create their own false narratives, and democracies are proving not to be very good at fighting it.

During the final three years of the Obama administration, Richard Stengel, the former editor of Time and an Under Secretary of State, was on the front lines of this new global information war. At the time, he was the single person in government tasked with unpacking, disproving, and combating both ISIS’s messaging and Russian disinformation. Then, in 2016, as the presidential election unfolded, Stengel watched as Donald Trump used disinformation himself, weaponizing the grievances of Americans who felt left out by modernism. In fact, Stengel quickly came to see how all three players had used the same playbook: ISIS sought to make Islam great again; Putin tried to make Russia great again; and we all know about Trump.

In a narrative that is by turns dramatic and eye-opening, Information Wars walks readers through this often frustrating battle. Stengel moves through Russia and Ukraine, Saudi Arabia and Iraq, and introduces characters from Putin to Hillary Clinton, John Kerry and Mohamed bin Salman to show how disinformation is impacting our global society. He illustrates how ISIS terrorized the world using social media, and how the Russians launched a tsunami of disinformation around the annexation of Crimea – a scheme that became the model for their interference with the 2016 presidential election. An urgent book for our times, Information Wars stresses that we must find a way to combat this ever-growing threat to democracy….(More)”.

The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation


Report by Philip Howard and Samantha Bradshaw: “…The report explores the tools, capacities, strategies and resources employed by global ‘cyber troops’, typically government agencies and political parties, to influence public opinion in 70 countries.

Key findings include:

  • Organized social media manipulation has more than doubled since 2017, with 70 countries using computational propaganda to manipulate public opinion.
  • In 45 democracies, politicians and political parties have used computational propaganda tools by amassing fake followers or spreading manipulated media to garner voter support.
  • In 26 authoritarian states, government entities have used computational propaganda as a tool of information control to suppress public opinion and press freedom, discredit criticism and oppositional voices, and drown out political dissent.
  • Foreign influence operations, primarily over Facebook and Twitter, have been attributed to cyber troop activities in seven countries: China, India, Iran, Pakistan, Russia, Saudi Arabia and Venezuela.
  • China has now emerged as a major player in the global disinformation order, using social media platforms to target international audiences with disinformation.
  • 25 countries are working with private companies or strategic communications firms offering computational propaganda as a service.
  • Facebook remains the platform of choice for social media manipulation, with evidence of formally organised campaigns taking place in 56 countries….

The report explores the tools and techniques of computational propaganda, including the use of fake accounts – bots, humans, cyborgs and hacked accounts – to spread disinformation. The report finds:

  • 87% of countries used human accounts
  • 80% of countries used bot accounts
  • 11% of countries used cyborg accounts
  • 7% of countries used hacked or stolen accounts…(More)”.