Debate and Decide: Innovative Participatory Governance in South Australia 2010–2018


Paper by Matt D. Ryan: “This article provides an account of how innovative participatory governance unfolded in South Australia between 2010 and 2018. In doing so it explores how an ‘interactive’ political leadership style, which scholarship argues is needed in contemporary democracy, played out in practice. Under the leadership of Premier Jay Weatherill this approach to governing, known as ‘debate and decide’, became regarded as one of the most successful examples of democratic innovation globally. Using an archival and media method of analysis, the article finds evidence of the successful application of an interactive political leadership style, but one that was so woven into competitive politics that it was abandoned after a change in government in March 2018. To help sustain interactive political leadership styles the article argues for research into how a broader base of politicians perceives the benefits and risks of innovative participatory governance. It also argues for a focus on developing politicians’ collaborative leadership capabilities. However, the article concludes by asking: if political competition is built into our system of government, are we better off leveraging it, rather than resisting it, in the pursuit of democratic reform?…(More)”.

The New Digital Dark Age


Article by Gina Neff: “For researchers, social media has always represented greater access to data, more democratic involvement in knowledge production, and greater transparency about social behavior. Getting a sense of what was happening—especially during political crises, major media events, or natural disasters—was as easy as looking around a platform like Twitter or Facebook. In 2024, however, that will no longer be possible.

In 2024, we will face a grim digital dark age, as social media platforms transition away from the logic of Web 2.0 and toward one dictated by AI-generated content. Companies have rushed to incorporate large language models (LLMs) into online services, complete with hallucinations (inaccurate, unjustified responses) and mistakes, which have further fractured our trust in online information.

Another aspect of this new digital dark age comes from not being able to see what others are doing. Twitter once pulsed with the publicly readable sentiment of its users. Social researchers loved Twitter data, relying on it because it provided a ready, reasonable approximation of how a significant slice of internet users behaved. However, Elon Musk has now priced researchers out of Twitter data, having announced an end to free access to the platform’s API. This has made it difficult, if not impossible, to obtain the data needed for research on topics such as public health, natural disaster response, political campaigning, and economic activity. It was a harsh reminder that the modern internet has never been free or democratic, but instead walled and controlled.

Closer cooperation with platform companies is not the answer. X, for instance, has filed a suit against independent researchers who pointed out the rise in hate speech on the platform. Recently, it has also been revealed that researchers who used Facebook and Instagram’s data to study the platforms’ role in the US 2020 elections had been granted “independence by permission” by Meta. This means that the company chooses which projects to share its data with and, while the research may be independent, Meta also controls what types of questions are asked and who asks them…(More)”.

What It Takes to Build Democratic Institutions


Article by Daron Acemoglu: “Chile’s failure to draft a new constitution that enjoys widespread support from voters is the predictable result of allowing partisans and ideologues to lead the process. Democratic institutions are built by delivering what ordinary voters expect and demand from government, as the history of Nordic social democracy shows…

There are plenty of good models around to help both developing and industrialized countries build better democratic institutions. But with its abortive attempts to draft a new constitution, Chile is offering a lesson in what to avoid.

Though it is one of the richest countries in Latin America, Chile is still suffering from the legacy of General Augusto Pinochet’s brutal dictatorship and historic inequalities. The country has made some progress in building democratic institutions since the 1988 plebiscite that began the transition from authoritarianism, and education and social programs have reduced income inequality. But major problems remain. There are deep inequalities not just in income, but also in access to government services, high-quality educational resources, and labor-market opportunities. Moreover, Chile still has the constitution that Pinochet imposed in 1980.

Yet while it seems natural to start anew, Chile has gone about it the wrong way. Following a 2020 referendum that showed overwhelming support for drafting a new constitution, it entrusted the process to a convention of elected delegates. But only 43% of voters turned out for the 2021 election to fill the convention, and many of the candidates were from far-left circles with strong ideological commitments to draft a constitution that would crack down on business and establish myriad new rights for different communities. When the resulting document was put to a vote, 62% of Chileans rejected it…(More)”

What does it mean to trust a technology?


Article by Jack Stilgoe: “A survey published in October 2023 revealed what seemed to be a paradox. Over the past decade, self-driving vehicles have improved immeasurably, but public trust in the technology is low and falling. Only 37% of Americans said they would be comfortable riding in a self-driving vehicle, down from 39% in 2022 and 41% in 2021. Those who have used the technology express more enthusiasm, but the rest have seemingly had their confidence shaken by the failure of the technology to live up to its hype.

Purveyors and regulators of any new technology are likely to worry about public trust. In the short term, they worry that people won’t want to make use of new innovations. But they also worry that a public backlash might jeopardize not just a single company but a whole area of technological innovation. Excitement about artificial intelligence (AI) has been accompanied by a concern about the need to “build trust” in the technology. Trust—letting one’s guard down despite incomplete information—is vital, but innovators must not take it for granted. Nor can it be circumvented through clever engineering. When cryptocurrency enthusiasts call their technology “trustless” because they think it solves age-old problems of banking (an unavoidably imperfect social institution), we should at least view them with skepticism.

For those concerned about public trust and new technologies, social science has some important lessons. The first is that people trust people, not things. When we board an airplane or agree to get vaccinated, we are placing our trust not in these objects but in the institutions that govern them. We trust that professionals are well-trained; we trust that regulators have assessed the risks; we trust that, if something goes wrong, someone will be held accountable, harms will be compensated, and mistakes will be rectified. Societies can no longer rely on the face-to-face interactions that once allowed individuals to do business. So it is more important than ever that faceless institutions are designed and continuously monitored to realize the benefits of new technologies while mitigating the risks….(More)”.

Navigating the Metrics Maze: Lessons from Diverse Domains for Federal Chief Data Officers


Paper by the CDO Council: “In the rapidly evolving landscape of government, Federal Chief Data Officers (CDOs) have emerged as crucial leaders tasked with harnessing the power of data to drive organizational success. However, the relative newness of this role brings forth unique challenges, particularly in the realm of measuring and communicating the value of their efforts.

To address this measurement conundrum, this paper delves into lessons from non-data domains such as asset management, inventory management, manufacturing, and customer experience. While these fields share common ground with CDOs in facing critical questions, they stand apart in possessing established performance metrics. Drawing parallels with domains that have successfully navigated similar challenges offers a roadmap for establishing metrics that can transcend organizational boundaries.

By learning from the experiences of other domains and adopting a nuanced approach to metrics, CDOs can pave the way for a clearer understanding of the impact and value of their vital contributions to the data-driven future…(More)”.

How Tracking and Technology in Cars Is Being Weaponized by Abusive Partners


Article by Kashmir Hill: “After almost 10 years of marriage, Christine Dowdall wanted out. Her husband was no longer the charming man she had fallen in love with. He had become narcissistic, abusive and unfaithful, she said. After one of their fights turned violent in September 2022, Ms. Dowdall, a real estate agent, fled their home in Covington, La., driving her Mercedes-Benz C300 sedan to her daughter’s house near Shreveport, five hours away. She filed a domestic abuse report with the police two days later.

Her husband, a Drug Enforcement Administration agent, didn’t want to let her go. He called her repeatedly, she said, first pleading with her to return, and then threatening her. She stopped responding to him, she said, even though he texted and called her hundreds of times.

Ms. Dowdall, 59, started occasionally seeing a strange new message on the display in her Mercedes, about a location-based service called “mbrace.” The second time it happened, she took a photograph and searched for the name online.

“I realized, oh my God, that’s him tracking me,” Ms. Dowdall said.

“Mbrace” was part of “Mercedes me” — a suite of connected services for the car, accessible via a smartphone app. Ms. Dowdall had only ever used the Mercedes Me app to make auto loan payments. She hadn’t realized that the service could also be used to track the car’s location. One night, when she visited a male friend’s home, her husband sent the man a message with a thumbs-up emoji. A nearby camera captured his car driving in the area, according to the detective who worked on her case.

Ms. Dowdall called Mercedes customer service repeatedly to try to remove her husband’s digital access to the car, but the loan and title were in his name, a decision the couple had made because his credit score was better than hers. Even though she was making the payments, had a restraining order against her husband and had been granted sole use of the car during divorce proceedings, Mercedes representatives told her that her husband was the customer so he would be able to keep his access. There was no button she could press to take away the app’s connection to the vehicle.

“This is not the first time that I’ve heard something like this,” one of the representatives told Ms. Dowdall…(More)”.

Where Did the Open Access Movement Go Wrong?


An Interview with Richard Poynder by Richard Anderson: “…Open access was intended to solve three problems that have long blighted scholarly communication – the problems of accessibility, affordability, and equity. 20+ years after the Budapest Open Access Initiative (BOAI) we can see that the movement has signally failed to solve the latter two problems. And with the geopolitical situation deteriorating, solving the accessibility problem now also looks to be at risk. The OA dream of “universal open access” remains a dream and seems likely to remain one.

What has been the essence of the OA movement’s failure?

The fundamental problem was that OA advocates did not take ownership of their own movement. They failed, for instance, to establish a central organization (an OA foundation, if you like) in order to organize and better manage the movement; and they failed to publish a single, canonical definition of open access. This is in contrast to the open source movement, and is an omission I drew attention to in 2006

This failure to take ownership saw responsibility for OA pass to organizations whose interests are not necessarily in sync with the objectives of the movement.

It did not help that the BOAI definition failed to specify that to be classified as open access, scholarly works needed to be made freely available immediately on publication and that they should remain freely available in perpetuity. Nor did it give sufficient thought to how OA would be funded (and OA advocates still fail to do that).

This allowed publishers to co-opt OA for their own purposes, most notably by introducing embargoes and developing the pay-to-publish gold OA model, with its now infamous article processing charge (APC).

Pay-to-publish OA is now the dominant form of open access and looks set to increase the cost of scholarly publishing and so worsen the affordability problem. Amongst other things, this has disenfranchised unfunded researchers and those based in the global south (notwithstanding APC waiver promises).

What also did not help is that OA advocates passed responsibility for open access over to universities and funders. This was contradictory, because OA was conceived as something that researchers would opt into. The assumption was that once the benefits of open access were explained to them, researchers would voluntarily embrace it – primarily by self-archiving their research in institutional or preprint repositories. But while many researchers were willing to sign petitions in support of open access, few (outside disciplines like physics) proved willing to practice it voluntarily.

In response to this lack of engagement, OA advocates began to petition universities, funders, and governments to introduce OA policies recommending that researchers make their papers open access. When these policies also failed to have the desired effect, OA advocates demanded their colleagues be forced to make their work OA by means of mandates requiring them to do so.

Most universities and funders (certainly in the global north) responded positively to these calls, in the belief that open access would increase the pace of scientific development and allow them to present themselves as forward-thinking, future-embracing organizations. Essentially, they saw it as a way of improving productivity and ROI while enhancing their public image.

But in light of researchers’ continued reluctance to make their works open access, universities and funders began to introduce increasingly bureaucratic rules, sanctions, and reporting tools to ensure compliance, and to manage the more complex billing arrangements that OA has introduced.

So, what had been conceived as a bottom-up movement founded on principles of voluntarism morphed into a top-down system of command and control, and open access evolved into an oppressive bureaucratic process that has failed to address either the affordability or equity problems. And as the process, and the rules around that process, have become ever more complex and oppressive, researchers have tended to become alienated from open access.

As a side benefit for universities and funders OA has allowed them to better micromanage their faculty and fundees, and to monitor their publishing activities in ways not previously possible. This has served to further proletarianize researchers and today they are becoming the academic equivalent of workers on an assembly line. Philip Mirowski has predicted that open access will lead to the deskilling of academic labor. The arrival of generative AI might seem to make that outcome the more likely…

Can these failures be remedied by means of an OA reset? With this aim in mind (and aware of the failures of the movement), OA advocates are now devoting much of their energy to trying to persuade universities, funders, and philanthropists to invest in a network of alternative nonprofit open infrastructures. They envisage these being publicly owned and focused on facilitating a flowering of new diamond OA journals, preprint servers, and Publish, Review, Curate (PRC) initiatives. In the process, they expect commercial publishers will be marginalized and eventually dislodged.

But it is highly unlikely that the large sums of money that would be needed to create these alternative infrastructures will be forthcoming, certainly not at sufficient levels or on anything other than a temporary basis.

While it is true that more papers and preprints are being published open access each year, I am not convinced this is taking us down the road to universal open access, or that there is a global commitment to open access.

Consequently, I do not believe that a meaningful reset is possible: open access has reached an impasse and there is no obvious way forward that could see the objectives of the OA movement fulfilled.

Partly for this reason, we are seeing attempts to rebrand, reinterpret, and/or reimagine open access and its objectives…(More)”.

Rebalancing AI


Article by Daron Acemoglu and Simon Johnson: “Optimistic forecasts regarding the growth implications of AI abound. AI adoption could boost productivity growth by 1.5 percentage points per year over a 10-year period and raise global GDP by 7 percent ($7 trillion in additional output), according to Goldman Sachs. Industry insiders offer even more excited estimates, including a supposed 10 percent chance of an “explosive growth” scenario, with global output rising more than 30 percent a year.

All this techno-optimism draws on the “productivity bandwagon”: a deep-rooted belief that technological change—including automation—drives higher productivity, which raises net wages and generates shared prosperity.

Such optimism is at odds with the historical record and seems particularly inappropriate for the current path of “just let AI happen,” which focuses primarily on automation (replacing people). We must recognize that there is no singular, inevitable path of development for new technology. And, assuming that the goal is to sustainably improve economic outcomes for more people, what policies would put AI development on the right path, with greater focus on enhancing what all workers can do?…(More)”

The 2010 Census Confidentiality Protections Failed, Here’s How and Why


Paper by John M. Abowd, et al: “Using only 34 published tables, we reconstruct five variables (census block, sex, age, race, and ethnicity) in the confidential 2010 Census person records. Using the 38-bin age variable tabulated at the census block level, at most 20.1% of reconstructed records can differ from their confidential source on even a single value for these five variables. Using only published data, an attacker can verify that all records in 70% of all census blocks (97 million people) are perfectly reconstructed. The tabular publications in Summary File 1 thus have prohibited disclosure risk similar to the unreleased confidential microdata. Reidentification studies confirm that an attacker can, within blocks with perfect reconstruction accuracy, correctly infer the actual census response on race and ethnicity for 3.4 million vulnerable population uniques (persons with nonmodal characteristics) with 95% accuracy, the same precision as the confidential data achieve and far greater than statistical baselines. The flaw in the 2010 Census framework was the assumption that aggregation prevented accurate microdata reconstruction, justifying weaker disclosure limitation methods than were applied to 2010 Census public microdata. The framework used for 2020 Census publications defends against attacks that are based on reconstruction, as we also demonstrate here. Finally, we show that alternatives to the 2020 Census Disclosure Avoidance System with similar accuracy (enhanced swapping) also fail to protect confidentiality, and those that partially defend against reconstruction attacks (incomplete suppression implementations) destroy the primary statutory use case: data for redistricting all legislatures in the country in compliance with the 1965 Voting Rights Act…(More)”.
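
The core reconstruction idea is easy to illustrate at toy scale. The sketch below is a minimal, hypothetical example (invented block counts and categories, not the paper's 34 tables or its method): it enumerates every set of person records consistent with one block's published marginal tables and shows that, when the tables are detailed enough relative to the block's population, only a single microdata set survives, so the "aggregate" publication effectively reveals the individual records.

```python
from itertools import combinations_with_replacement, product

# Toy illustration of table-based reconstruction (hypothetical values):
# a single census block publishes only marginal counts, yet those
# marginals can pin down the underlying person records exactly.
SEXES = ["M", "F"]
AGE_BINS = ["0-17", "18-64", "65+"]
RACES = ["A", "B"]

# Published tables for one hypothetical block of three people.
published = {
    "total": 3,
    "by_sex": {"M": 2, "F": 1},
    "by_age": {"0-17": 1, "18-64": 1, "65+": 1},
    "by_sex_age": {("M", "0-17"): 1, ("M", "65+"): 1, ("F", "18-64"): 1},
    "by_race": {"A": 2, "B": 1},
    "by_age_race": {("0-17", "A"): 1, ("18-64", "B"): 1, ("65+", "A"): 1},
}

def tabulate(records):
    """Recompute the published tables from a candidate set of person records."""
    tabs = {"total": len(records), "by_sex": {}, "by_age": {},
            "by_sex_age": {}, "by_race": {}, "by_age_race": {}}
    for sex, age, race in records:
        tabs["by_sex"][sex] = tabs["by_sex"].get(sex, 0) + 1
        tabs["by_age"][age] = tabs["by_age"].get(age, 0) + 1
        tabs["by_sex_age"][(sex, age)] = tabs["by_sex_age"].get((sex, age), 0) + 1
        tabs["by_race"][race] = tabs["by_race"].get(race, 0) + 1
        tabs["by_age_race"][(age, race)] = tabs["by_age_race"].get((age, race), 0) + 1
    return tabs

# Enumerate every multiset of three person records and keep the ones
# whose tabulations match every published table exactly.
cells = list(product(SEXES, AGE_BINS, RACES))
solutions = [
    candidate
    for candidate in combinations_with_replacement(cells, published["total"])
    if tabulate(candidate) == published
]

print(f"{len(solutions)} candidate record set(s) reproduce all published tables:")
for records in solutions:
    print(records)
```

Running this prints exactly one surviving candidate, meaning the block's microdata are fully determined by its "anonymous" tables. The actual reconstruction operates at a vastly larger scale and uses solver-based methods rather than brute-force enumeration, but the logic is the same: enough overlapping published counts can leave only one consistent set of records.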

Eat, Click, Judge: The Rise of Cyber Jurors on China’s Food Apps


Article from Ye Zhanhang: “From unwanted ingredients in takeaway meals and negative restaurant reviews to late deliveries and poor product quality, digital marketplaces teem with minor frustrations. 

But because they affect customer satisfaction and business reputations, several Chinese online shopping platforms have come up with a unique solution: Ordinary users can become “cyber jurors” to deliberate and cast decisive votes in resolving disputes between buyers and sellers.

Though introduced in 2020, the concept has surged in popularity among young Chinese in recent months, primarily fueled by viral cases that users eagerly follow, scrutinizing every detail and deliberation online…

To be eligible for the role, a user must meet certain criteria, including having a verified account, maintaining consumption records within the past three months, and successfully navigating five mock cases as part of an entry test. Cyber jurors don’t receive any money for completing cases but may be rewarded with coupons.

Xianyu, an online secondhand shopping platform, has also introduced a “court” system that assembles a jury of 17 volunteer users to adjudicate disputes between buyers and sellers. 

Miao Mingyu, a law professor at the University of Chinese Academy of Social Sciences, told China Youth Daily that this public jury function, with its impartial third-party perspective, has the potential to enhance transaction transparency and the fairness of the platform’s evaluation system.

Despite Chinese law prohibiting platforms from removing user reviews of products, Miao noted that this feature has enabled the platform to effectively address unfair negative reviews without violating legal constraints…(More)”.