How to craft fair, transparent data-sharing agreements


Article by Stephanie Kanowitz: “Data collaborations are critical to government decision-making, but actually sharing data can be difficult—not so much the mechanics of the collaboration, but hashing out the rules and policies governing it. A new report offers three resources that will make data sharing more straightforward, foster accountability and build trust among the parties.

“We’ve heard over and over again that one of the biggest barriers to collaboration around data turns out to be data sharing agreements,” said Stefaan Verhulst, co-founder of the Governance Lab at New York University and an author of the November report, “Moving from Idea to Practice.” It’s sometimes a lot to ask stakeholders “to provide access to some of their data,” he said.

To help, Verhulst and other researchers identified three components of successful data-sharing agreements: conducting principled negotiations, establishing the elements of a data-sharing agreement and assessing readiness.

To address the first, the report breaks negotiation into a framework with four tenets: separating people from the problem, focusing on interests rather than positions, identifying options and using objective criteria. From discussions with stakeholders in data-sharing agreement workshops that GovLab held through its Open Data Policy Lab, three principles emerged—fairness, transparency and reciprocity…(More)”.

The new star wars over satellites


Article by Peggy Hollinger: “There is a battle brewing in space. In one corner you have the billionaires building giant satellite broadband constellations in low earth orbit (LEO) — Elon Musk with SpaceX’s Starlink and Jeff Bezos with Project Kuiper. 

In the other corner stand the traditional fixed satellite operators such as ViaSat and SES — but also a number of nations increasingly uncomfortable with the way in which the new space economy is evolving. In other words, with the dominance of US mega constellations in a strategic region of space.

The first shots were fired in late November at the World Radiocommunications Conference in Dubai. Every four years, global regulators and industry meet to review international regulations on the use of radio spectrum. 

For those who have only a vague idea of what spectrum is, it is the name for the radio airwaves that carry data wirelessly to enable a vast range of services — from television broadcasting to WiFi, navigation to mobile communications.

Most people are inclined to think that the airwaves have infinite capacity to connect us. But, like water, spectrum is a finite resource and much of it has already been allocated to specific uses. So operators have to transmit signals on shared bands of spectrum — on the promise that their transmissions will not interfere with others. 

Now SpaceX, Kuiper and others operating in LEO are pushing to loosen rules designed to prevent their signals from interfering with those of traditional operators in higher orbits. These rules impose caps on the power used to transmit signals, which facilitate spectrum sharing but also constrain the amount of data they can send. LEO operators say the rules, designed 25 years ago, are outdated. They argue that new technology would allow higher power levels — and greater capacity for customers — without degrading networks of the traditional fixed satellite systems operating in geostationary orbit, at altitudes of 36,000km.

It is perhaps not a surprise that a proposal to make LEO constellations more competitive drew protests from geo operators. Some, such as US-based Hughes Network Systems, have admitted they are already losing customers to Starlink.

What was surprising, however, was the strong opposition from countries such as Brazil, Indonesia, Japan and others…(More)”.

How Tracking and Technology in Cars Is Being Weaponized by Abusive Partners


Article by Kashmir Hill: “After almost 10 years of marriage, Christine Dowdall wanted out. Her husband was no longer the charming man she had fallen in love with. He had become narcissistic, abusive and unfaithful, she said. After one of their fights turned violent in September 2022, Ms. Dowdall, a real estate agent, fled their home in Covington, La., driving her Mercedes-Benz C300 sedan to her daughter’s house near Shreveport, five hours away. She filed a domestic abuse report with the police two days later.

Her husband, a Drug Enforcement Administration agent, didn’t want to let her go. He called her repeatedly, she said, first pleading with her to return, and then threatening her. She stopped responding to him, she said, even though he texted and called her hundreds of times.

Ms. Dowdall, 59, started occasionally seeing a strange new message on the display in her Mercedes, about a location-based service called “mbrace.” The second time it happened, she took a photograph and searched for the name online.

“I realized, oh my God, that’s him tracking me,” Ms. Dowdall said.

“Mbrace” was part of “Mercedes me” — a suite of connected services for the car, accessible via a smartphone app. Ms. Dowdall had only ever used the Mercedes Me app to make auto loan payments. She hadn’t realized that the service could also be used to track the car’s location. One night, when she visited a male friend’s home, her husband sent the man a message with a thumbs-up emoji. A nearby camera captured his car driving in the area, according to the detective who worked on her case.

Ms. Dowdall called Mercedes customer service repeatedly to try to remove her husband’s digital access to the car, but the loan and title were in his name, a decision the couple had made because he had a better credit score than she did. Even though she was making the payments, had a restraining order against her husband and had been granted sole use of the car during divorce proceedings, Mercedes representatives told her that her husband was the customer so he would be able to keep his access. There was no button she could press to take away the app’s connection to the vehicle.

“This is not the first time that I’ve heard something like this,” one of the representatives told Ms. Dowdall…(More)”.

The Rise of Cyber-Physical Systems


Article by Chandrakant D. Patel: “Cyber-physical systems are a systemic integration of physical and cyber technologies. To name one example, a self-driving car is an integration of physical technologies, such as motors, batteries, actuators, and sensors, and cyber technologies, like communication, computation, inference, and closed-loop control. Data flow from physical to cyber technologies results in systemic integration and the desired driving experience. Cyber-physical systems are becoming prevalent in a range of sectors, such as power, water, waste, transportation, healthcare, agriculture, and manufacturing. We have entered the cyber-physical age. However, we stand unprepared for this moment due to systemic under-allocation in the physical sciences and the lack of a truly multidisciplinary engineering curriculum.

While many factors contribute to the rise of cyber-physical systems, societal challenges stemming from imbalances between supply and demand are becoming among the most prominent. These imbalances are caused by social, economic, and ecological trends that hamper the delivery of basic goods and services. Examples of trends leading to imbalances between supply and demand are resource constraints, an aging population, human capital constraints, a lack of subject matter experts in critical fields, physical security risks, supply-chain and supply-side resiliency, and externalities such as pandemics and environmental pollution.

With respect to the lack of subject matter experts, consider the supply of cardiothoracic surgeons. The United States has about 4,000 cardiothoracic surgeons, a sub-specialization that takes 20 years of education and hands-on training, for a population of 333 million. Similar imbalances in subject matter experts in healthcare, power, water, waste, and transport systems are occurring as a result of the aging population.
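The sense–compute–actuate loop that Patel describes can be sketched in a few lines. The following is a minimal illustrative simulation, not anything from the article: the "physical" layer is a toy vehicle-speed model and the "cyber" layer is a simple proportional controller; all constants (gain, drag, time step) are invented for the example.

```python
# Hypothetical sketch of a cyber-physical closed loop: a simulated
# vehicle's speed (physical layer) regulated by a proportional
# controller (cyber layer). All parameter values are illustrative.

def step_plant(speed, throttle, dt=0.1, drag=0.05):
    """Physical layer: speed responds to throttle minus a drag term."""
    return speed + (throttle - drag * speed) * dt

def controller(target, measured, gain=0.8):
    """Cyber layer: compute an actuation from the sensed error."""
    return gain * (target - measured)

speed, target = 0.0, 20.0
for _ in range(500):          # closed loop: sense, compute, actuate
    throttle = controller(target, speed)
    speed = step_plant(speed, throttle)

print(round(speed, 1))  # → 18.8 (a proportional controller settles just below the 20.0 target)
```

Even this toy loop shows the characteristic coupling: the quality of the outcome depends jointly on the physics (drag), the sensing, and the control law, which is why cyber-physical engineering resists single-discipline training.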
Compounding this challenge is the market-driven pay discrepancy that has attracted our youth to software jobs, such as those in social media, which pay much more relative to the salaries for a resident in general surgery or an early-career civil engineer. While it is possible that the market will shift to value infrastructure- and healthcare-related jobs, the time it takes to train “hands-on” contributors (e.g., engineers and technicians) in physical sciences and life sciences is substantial, ranging from 5 years (technicians requiring industry training) to 20 years (sub-specialized personnel like cardiothoracic surgeons)…(More)”.

Where Did the Open Access Movement Go Wrong?


An Interview with Richard Poynder by Richard Anderson: “…Open access was intended to solve three problems that have long blighted scholarly communication – the problems of accessibility, affordability, and equity. More than 20 years after the Budapest Open Access Initiative (BOAI), we can see that the movement has signally failed to solve the latter two problems. And with the geopolitical situation deteriorating, solving the accessibility problem now also looks to be at risk. The OA dream of “universal open access” remains a dream and seems likely to remain one.

What has been the essence of the OA movement’s failure?

The fundamental problem was that OA advocates did not take ownership of their own movement. They failed, for instance, to establish a central organization (an OA foundation, if you like) to organize and better manage the movement; and they failed to publish a single, canonical definition of open access. This is in contrast to the open source movement, and is an omission I drew attention to in 2006.

This failure to take ownership saw responsibility for OA pass to organizations whose interests are not necessarily in sync with the objectives of the movement.

It did not help that the BOAI definition failed to specify that to be classified as open access, scholarly works needed to be made freely available immediately on publication and that they should remain freely available in perpetuity. Nor did it give sufficient thought to how OA would be funded (and OA advocates still fail to do that).

This allowed publishers to co-opt OA for their own purposes, most notably by introducing embargoes and developing the pay-to-publish gold OA model, with its now infamous article processing charge (APC).

Pay-to-publish OA is now the dominant form of open access and looks set to increase the cost of scholarly publishing and so worsen the affordability problem. Amongst other things, this has disenfranchised unfunded researchers and those based in the global south (notwithstanding APC waiver promises).

What also did not help is that OA advocates passed responsibility for open access over to universities and funders. This was contradictory, because OA was conceived as something that researchers would opt into. The assumption was that once the benefits of open access were explained to them, researchers would voluntarily embrace it – primarily by self-archiving their research in institutional or preprint repositories. But while many researchers were willing to sign petitions in support of open access, few (outside disciplines like physics) proved willing to practice it voluntarily.

In response to this lack of engagement, OA advocates began to petition universities, funders, and governments to introduce OA policies recommending that researchers make their papers open access. When these policies also failed to have the desired effect, OA advocates demanded their colleagues be forced to make their work OA by means of mandates requiring them to do so.

Most universities and funders (certainly in the global north) responded positively to these calls, in the belief that open access would increase the pace of scientific development and allow them to present themselves as forward-thinking, future-embracing organizations. Essentially, they saw it as a way of improving productivity and ROI while enhancing their public image.

While many researchers were willing to sign petitions in support of open access, few proved willing to practice it voluntarily.

But in light of researchers’ continued reluctance to make their works open access, universities and funders began to introduce increasingly bureaucratic rules, sanctions, and reporting tools to ensure compliance, and to manage the more complex billing arrangements that OA has introduced.

So, what had been conceived as a bottom-up movement founded on principles of voluntarism morphed into a top-down system of command and control, and open access evolved into an oppressive bureaucratic process that has failed to address either the affordability or equity problems. And as the process, and the rules around that process, have become ever more complex and oppressive, researchers have tended to become alienated from open access.

As a side benefit for universities and funders OA has allowed them to better micromanage their faculty and fundees, and to monitor their publishing activities in ways not previously possible. This has served to further proletarianize researchers and today they are becoming the academic equivalent of workers on an assembly line. Philip Mirowski has predicted that open access will lead to the deskilling of academic labor. The arrival of generative AI might seem to make that outcome the more likely…

Can these failures be remedied by means of an OA reset? With this aim in mind (and aware of the failures of the movement), OA advocates are now devoting much of their energy to trying to persuade universities, funders, and philanthropists to invest in a network of alternative nonprofit open infrastructures. They envisage these being publicly owned and focused on facilitating a flowering of new diamond OA journals, preprint servers, and Publish, Review, Curate (PRC) initiatives. In the process, they expect commercial publishers will be marginalized and eventually dislodged.

But it is highly unlikely that the large sums of money that would be needed to create these alternative infrastructures will be forthcoming, certainly not at sufficient levels or on anything other than a temporary basis.

While it is true that more papers and preprints are being published open access each year, I am not convinced this is taking us down the road to universal open access, or that there is a global commitment to open access.

Consequently, I do not believe that a meaningful reset is possible: open access has reached an impasse and there is no obvious way forward that could see the objectives of the OA movement fulfilled.

Partly for this reason, we are seeing attempts to rebrand, reinterpret, and/or reimagine open access and its objectives…(More)”.

Rebalancing AI


Article by Daron Acemoglu and Simon Johnson: “Optimistic forecasts regarding the growth implications of AI abound. AI adoption could boost productivity growth by 1.5 percentage points per year over a 10-year period and raise global GDP by 7 percent ($7 trillion in additional output), according to Goldman Sachs. Industry insiders offer even more exuberant estimates, including a supposed 10 percent chance of an “explosive growth” scenario, with global output rising more than 30 percent a year.
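A quick arithmetic check puts these quoted figures in perspective (the calculations below are illustrative compounding, not from the article):

```python
# Goldman Sachs figure: +1.5 percentage points of annual productivity
# growth, compounded over 10 years.
cumulative_boost = 1.015 ** 10 - 1
print(round(cumulative_boost * 100, 1))  # → 16.1 (percent, cumulative)

# A 7 percent rise worth $7 trillion implies roughly $100 trillion of
# global GDP as the base.
implied_gdp = round(7 / 0.07)
print(implied_gdp)  # → 100 (trillions of dollars)

# The "explosive growth" scenario: 30 percent a year compounds to
# nearly 14x output within a decade.
print(round(1.30 ** 10, 1))  # → 13.8 (multiple of today's output)
```

The gap between a 16 percent cumulative boost and a 13.8x explosion illustrates how wide the range of published AI forecasts really is.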

All this techno-optimism draws on the “productivity bandwagon”: a deep-rooted belief that technological change—including automation—drives higher productivity, which raises net wages and generates shared prosperity.

Such optimism is at odds with the historical record and seems particularly inappropriate for the current path of “just let AI happen,” which focuses primarily on automation (replacing people). We must recognize that there is no singular, inevitable path of development for new technology. And, assuming that the goal is to sustainably improve economic outcomes for more people, what policies would put AI development on the right path, with greater focus on enhancing what all workers can do?…(More)”

What Will AI Do to Elections?


Article by Rishi Iyengar: “…Requests to X’s press team on how the platform was preparing for elections in 2024 yielded an automated response: “Busy now, please check back later”—a slight improvement from the initial Musk-era change where the auto-reply was a poop emoji.

X isn’t the only major social media platform with fewer content moderators. Meta, which owns Facebook, Instagram, and WhatsApp, has laid off more than 20,000 employees since November 2022—several of whom worked on trust and safety—while many YouTube employees working on misinformation policy were impacted by layoffs at parent company Google.

There could scarcely be a worse time to skimp on combating harmful content online. More than 50 countries, including the world’s three biggest democracies and Taiwan, an increasingly precarious geopolitical hot spot, are expected to hold national elections in 2024. Seven of the world’s 10 most populous countries—Bangladesh, India, Indonesia, Mexico, Pakistan, Russia, and the United States—will collectively send a third of the world’s population to the polls.

Elections, with their emotionally charged and often tribal dynamics, are where misinformation missteps come home to roost. If social media misinformation is the equivalent of yelling “fire” in a crowded theater, election misinformation is like doing so when there’s a horror movie playing and everyone’s already on edge.

Katie Harbath prefers a different analogy, one that illustrates how nebulous and thorny the issues are and the sheer uncertainty surrounding them. “The metaphor I keep using is a kaleidoscope because there’s so many different aspects to this but depending how you turn the kaleidoscope, the pattern changes of what it’s going to look like,” she said in an interview in October. “And that’s how I feel about life post-2024. … I don’t know where in the kaleidoscope it’s going to land.”

Harbath has become something of an election whisperer to the tech industry, having spent a decade at Facebook from 2011 building the company’s election integrity efforts from scratch. She left in 2021 and founded Anchor Change, a public policy consulting firm that helps other platforms combat misinformation and prepare for elections in particular.

Had she been in her old job, Harbath said, her team would have completed risk assessments of global elections by late 2022 or early 2023 and then spent the rest of the year tailoring Meta’s products to them as well as setting up election “war rooms” where necessary. “Right now, we would be starting to move into execution mode.” She cautions against treating the resources that companies are putting into election integrity as a numbers game—“once you build some of those tools, maintaining them doesn’t take as many people”—but acknowledges that the allocation of resources reveals a company leadership’s priorities.

The companies insist they remain committed to election integrity. YouTube has “heavily invested in the policies and systems that help us successfully support elections around the world,” spokesperson Ivy Choi said in a statement. TikTok said it has a total of 40,000 safety professionals and works with 16 fact-checking organizations across 50 global languages. Meta declined to comment for this story, but a company representative directed Foreign Policy to a recent blog post by Nick Clegg, a former U.K. deputy prime minister who now serves as Meta’s head of global affairs. “We have around 40,000 people working on safety and security, with more than $20 billion invested in teams and technology in this area since 2016,” Clegg wrote in the post.

But there are other troubling signs. YouTube announced last June that it would stop taking down content spreading false claims about the 2020 U.S. election or past elections, and Meta quietly made a similar policy change to its political ad rules in 2022. And as precedent has shown, the platforms tend to have even less cover outside the West, with major blind spots in local languages and context making misinformation and hate speech not only more pervasive but also more dangerous…(More)”.

Forget technology — politicians pose the gravest misinformation threat


Article by Rasmus Nielsen: “This is set to be a big election year, including in India, Mexico, the US, and probably the UK. People will rightly be on their guard for misinformation, but much of the policy discussion on the topic ignores the most important source: members of the political elite.

As a social scientist working on political communication, I have spent years in these debates — which continue to be remarkably disconnected from what we know from research. Academic findings repeatedly underline the actual impact of politics, while policy documents focus persistently on the possible impact of new technologies.

Most recently, Britain’s National Cyber Security Centre (NCSC) has warned of how “AI-created hyper-realistic bots will make the spread of disinformation easier and the manipulation of media for use in deepfake campaigns will likely become more advanced”. This is similar to warnings from many other public authorities, which ignore the misinformation from the most senior levels of domestic politics. In the US, the Washington Post stopped counting after documenting at least 30,573 false or misleading claims made by Donald Trump as president. In the UK, the non-profit FullFact has reported that as many as 50 MPs — including two prime ministers, cabinet ministers and shadow cabinet ministers — failed to correct false, unevidenced or misleading claims in 2022 alone, despite repeated calls to do so.

These are actual problems of misinformation, and the phenomenon is not new. Both George W. Bush’s and Barack Obama’s administrations obfuscated on Afghanistan. Bush’s government and that of his UK counterpart Tony Blair advanced false and misleading claims in the run-up to the Iraq war. Prominent politicians have, over the years, denied the reality of human-induced climate change, proposed quack remedies for Covid-19, and so much more. These are examples of misinformation, and, at their most egregious, of disinformation — defined as spreading false or misleading information for political advantage or profit.

This basic point is strikingly absent from many policy documents — the NCSC report, for example, has nothing to say about domestic politics. It is not alone. Take the US Surgeon General’s 2021 advisory on confronting health misinformation, which calls for a “whole-of-society” approach — and yet contains nothing on politicians and curiously omits the many misleading claims made by the sitting president during the pandemic, including touting hydroxychloroquine as a potential treatment…(More)”.

Eat, Click, Judge: The Rise of Cyber Jurors on China’s Food Apps


Article from Ye Zhanhang: “From unwanted ingredients in takeaway meals and negative restaurant reviews to late deliveries and poor product quality, digital marketplaces teem with minor frustrations. 

But because they affect customer satisfaction and business reputations, several Chinese online shopping platforms have come up with a unique solution: Ordinary users can become “cyber jurors” to deliberate and cast decisive votes in resolving disputes between buyers and sellers.

Though introduced in 2020, the concept has surged in popularity among young Chinese in recent months, primarily fueled by viral cases that users eagerly follow, scrutinizing every detail and deliberation online…

To be eligible for the role, a user must meet certain criteria, including having a verified account, maintaining consumption records within the past three months, and successfully navigating five mock cases as part of an entry test. Cyber jurors don’t receive any money for completing cases but may be rewarded with coupons.

Xianyu, an online secondhand shopping platform, has also introduced a “court” system that assembles a jury of 17 volunteer users to adjudicate disputes between buyers and sellers. 
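The article doesn't describe how Xianyu's 17 votes are combined, but an odd-sized jury suggests simple majority rule. A hypothetical sketch under that assumption (the function name and vote labels are invented for illustration):

```python
from collections import Counter

def adjudicate(votes, jury_size=17):
    """Hypothetical majority-vote resolution of a buyer-seller dispute.
    `votes` is a list of 'buyer' or 'seller' verdicts, one per juror."""
    if len(votes) != jury_size:
        raise ValueError(f"expected {jury_size} votes, got {len(votes)}")
    tally = Counter(votes)
    winner, count = tally.most_common(1)[0]
    return winner, count

verdict, count = adjudicate(["buyer"] * 9 + ["seller"] * 8)
print(verdict, count)  # → buyer 9
```

One design point worth noting: with two possible verdicts and an odd number of jurors, a tie is impossible, so every case resolves decisively.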

Miao Mingyu, a law professor at the University of Chinese Academy of Social Sciences, told China Youth Daily that this public jury function, with its impartial third-party perspective, has the potential to enhance transaction transparency and the fairness of the platform’s evaluation system.

Despite Chinese law prohibiting platforms from removing user reviews of products, Miao noted that this feature has enabled the platform to effectively address unfair negative reviews without violating legal constraints…(More)”.

Charting the Emerging Geography of AI


Article by Bhaskar Chakravorti, Ajay Bhalla, and Ravi Shankar Chaturvedi: “Given the high stakes of this race, which countries are in the lead? Which are gaining on the leaders? How might this hierarchy shape the future of AI? Identifying AI-leading countries is not straightforward, as data, knowledge, algorithms, and models can, in principle, cross borders. Even the U.S.–China rivalry is complicated by the fact that AI researchers from the two countries cooperate — and more so than researchers from any other pair of countries. Open-source models are out there for everyone to use, with licensing accessible even for cutting-edge models. Nonetheless, AI development benefits from scale economies and, as a result, is geographically clustered as many significant inputs are concentrated and don’t cross borders that easily…

Rapidly accumulating pools of data in digital economies around the world are clearly one of the critical drivers of AI development. In 2019, we introduced the idea of “gross data product” of countries determined by the volume, complexity, and accessibility of data consumed alongside the number of active internet users in the country. For this analysis, we recognized that gross data product is an essential asset for AI development — especially for generative AI, which requires massive, diverse datasets — and updated the 2019 analyses as a foundation, adding drivers that are critical for AI development overall. That essential data layer makes the index introduced here distinct from other indicators of AI “vibrancy” or measures of global investments, innovations, and implementation of AI…(More)”.
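An index built from drivers like these is typically a weighted average of normalized scores. A minimal sketch of that construction (country values and equal weights are invented for illustration; the authors' actual data and weighting scheme are not given in the excerpt):

```python
# Hypothetical sketch of a composite, "gross data product"-style index.
# Each driver is assumed pre-normalized to [0, 1]; values are made up.

drivers = ["volume", "complexity", "accessibility", "active_users"]
countries = {
    "A": {"volume": 0.9, "complexity": 0.7, "accessibility": 0.8, "active_users": 0.95},
    "B": {"volume": 0.6, "complexity": 0.9, "accessibility": 0.5, "active_users": 0.70},
}

def index_score(metrics, weights=None):
    """Weighted average of driver scores (equal weights by default)."""
    weights = weights or {d: 1 / len(drivers) for d in drivers}
    return sum(metrics[d] * weights[d] for d in drivers)

for name, metrics in sorted(countries.items()):
    print(name, round(index_score(metrics), 3))
```

The interesting design questions all live outside this sketch: how each driver is measured and normalized, and how the weights are chosen, which is where rankings of AI "vibrancy" tend to diverge.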