Invention and Innovation: A Brief History of Hype and Failure


Book by Vaclav Smil: “The world is never finished catching up with Vaclav Smil. In his latest and perhaps most readable book, Invention and Innovation, the prolific author—a favorite of Bill Gates—pens an insightful and fact-filled jaunt through the history of human invention. Impatient with the hype that so often accompanies innovation, Smil offers in this book a clear-eyed corrective to the overpromises that surround everything from new cures for diseases to AI. He reminds us that even after we go quite far along the invention-development-application trajectory, we may never get anything real to deploy. Or worse, even after we have succeeded by introducing an invention, its future may be marked by underperformance, disappointment, demise, or outright harm.

Drawing on his vast breadth of scientific and historical knowledge, Smil explains the difference between invention and innovation, and looks not only at inventions that failed to dominate as promised (such as the airship, nuclear fission, and supersonic flight), but also at those that turned disastrous (leaded gasoline, DDT, and chlorofluorocarbons). And finally, most importantly, he offers a “wish list” of inventions that we most urgently need to confront the staggering challenges of the twenty-first century.

Filled with engaging examples and pragmatic approaches, this book is a sobering account of the folly that so often attends human ingenuity—and how we can, and must, better align our expectations with reality…(More)”.

The Big Con: How the Consulting Industry Weakens Our Businesses, Infantilizes Our Governments, and Warps Our Economies


Book by Mariana Mazzucato and Rosie Collington: “There is an entrenched relationship between the consulting industry and the way business and government are managed today that must change. Mariana Mazzucato and Rosie Collington show that our economies’ reliance on companies such as McKinsey & Company, Boston Consulting Group, Bain & Company, PwC, Deloitte, KPMG, and EY stunts innovation, obfuscates corporate and political accountability, and impedes our collective mission of halting climate breakdown.

The “Big Con” describes the confidence trick the consulting industry performs in contracts with hollowed-out and risk-averse governments and shareholder value-maximizing firms. It grew from the 1980s and 1990s in the wake of reforms by the neoliberal right and Third Way progressives, and it thrives on the ills of modern capitalism, from financialization and privatization to the climate crisis. It is possible because of the unique power that big consultancies wield through extensive contracts and networks—as advisors, legitimators, and outsourcers—and the illusion that they are objective sources of expertise and capacity. In the end, the Big Con weakens our businesses, infantilizes our governments, and warps our economies.

In The Big Con, Mazzucato and Collington throw back the curtain on the consulting industry. They dive deep into important case studies of consultants taking the reins with disastrous results, such as the debacle of the rollout of HealthCare.gov and the tragic failures of governments to respond adequately to the COVID-19 pandemic. The result is an important and exhilarating intellectual journey into the modern economy’s beating heart. With peerless scholarship and a wealth of original research, Mazzucato and Collington argue brilliantly for building a new system in which public and private sectors work innovatively for the common good…(More)”.

Big Data and Public Policy


Book by Rebecca Moody and Victor Bekkers: “This book provides a comprehensive overview of how the course, content and outcome of policymaking is affected by big data. It scrutinises the notion that big and open data makes policymaking a more rational process, in which policymakers are able to predict, assess and evaluate societal problems. It also examines how policymakers deal with big data, the problems and limitations they face, and how big data shapes policymaking on the ground. The book considers big data from various perspectives, not just the political, but also the technological, legal, institutional and ethical dimensions. The potential of big data use in the public sector is also assessed, as well as the risks and dangers this might pose. Through several extended case studies, it demonstrates the dynamics of big data and public policy. Offering a holistic approach to the study of big data, this book will appeal to students and scholars of public policy, public administration and data science, as well as those interested in governance and politics…(More)”.

The Power of the Stora Rör Swimming Association and Other Local Institutions


Article by Erik Angner: “On a late-summer afternoon of 1938, two eleven-year-old girls waded into the water in Stora Rör harbor on the Baltic island of Öland. They were awaiting their mother, who was returning by ferry from a hospital visit on the mainland. Unbeknownst to the girls, the harbor had been recently dredged. Where there used to be shallow sands, the water was now cold, dark, and deep. The girls couldn’t swim. They drowned mere feet from safety—in full view of a powerless little sister on the beach.

The community was shaken. It resolved that no such tragedy should ever happen again. To make sure every child would learn to swim, the community decided to offer swimming lessons to anyone interested. The Stora Rör Swimming Association, founded that same year, is still going strong. It’s enrolled thousands of children, adolescents, and adults. My grandmother, a physical-education teacher by training, was one of its first instructors. My father, myself, and my children all learned how to swim there.

It’s impossible to know if the association has saved lives. It may well have. The community has been spared, although kids play in and fall into the water all the time. Nationwide, drowning is the leading cause of death for Swedish kids between one and six years of age.

We do know that the association has had many other beneficial effects. It has offered healthy, active outdoor summer activities for generations of kids. The activities of the association remain open to all. Fees are nominal. Children come from families of farmers and refugees, artists and writers, university professors and CEOs of major corporations, locals and tourists…

In economic terms, the Stora Rör Swimming Association is an institution. It’s a set of rules, or “prescriptions,” that humans use to structure all sorts of repeated interactions. These rules can be formalized in a governing document. The constitution of the association says that you have to pay dues if you want to remain a member in good standing, for example. But the rules that define the institution don’t need to be written down. They don’t even need to be formulated in words. “Attend the charity auction and bid on things if you can afford it.” “Volunteer to serve on the board when it’s your turn.” “Treat swimming teachers with respect.” These are all unwritten rules. They may never have been formulated quite like this before. Still, they’re widely—if not universally—followed. And, from an economic perspective, these rules taken together define what sort of thing the Swimming Association is.

Economist Elinor Ostrom studied institutions throughout her career. She wanted to know what institutions do, how and why they work, how they appear and evolve over time, how we can build and improve them, and, finally, how to share that knowledge with the rest of us. She believed in the power of economics to “bring out the best in humans.” The way to do it, she thought, was to help them build community—developing the rich network of relationships that form the fabric of a society…(More)”.

Measuring Partial Democracies: Rules and their Implementation


Paper by Debarati Basu, Shabana Mitra & Archana Purohit: “This paper proposes a new index that focuses on capturing the extent of democracy in a country using not only the existence of rules but also the extent of their implementation. The measure, based on the axiomatically robust framework of Alkire and Foster (J Public Econ 95:476–487, 2011), is able to moderate the existence of democratic rules by their actual implementation. By doing this we have a meaningful way of capturing the notion of a partial democracy within a continuum between non-democratic and democratic, separating out situations when the rules exist but are not implemented well. We construct our index using V-Dem data from 1900 to 2010 for over 100 countries to measure the process of democratization across the world. Our results show that we can track the progress in democratization, even when the regime remains either a democracy or an autocracy. This is the notion of partial democracy that our implementation-based measure captures through a broad-based index that is consistent, replicable, extendable, easy to interpret, and more nuanced in its ability to capture the essence of democracy…(More)”.
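For readers unfamiliar with the Alkire–Foster counting approach, the stylized sketch below illustrates the general idea behind such an index: a rule's formal existence is moderated by how well it is implemented, and the moderated scores are then aggregated against cutoffs to separate democracies, partial democracies, and non-democracies. This is a toy illustration under assumed scores and thresholds, not the authors' actual index; every rule name and number in it is hypothetical.

```python
# Stylized Alkire–Foster-style aggregation: each democratic "rule" only
# counts toward the index to the extent that it is actually implemented.
# All scores, cutoffs, and rule names below are hypothetical illustrations.

rules = {                      # rule: (exists in law? 0/1, implementation score 0..1)
    "free_elections":        (1, 0.9),
    "press_freedom":         (1, 0.4),
    "judicial_independence": (1, 0.2),
    "freedom_of_assembly":   (0, 0.0),
}

dimension_cutoff = 0.5         # a rule "counts" only if existence * implementation >= 0.5
k = 0.5                        # share of dimensions required to call the regime a democracy

moderated = {name: exists * impl for name, (exists, impl) in rules.items()}
achieved = [name for name, score in moderated.items() if score >= dimension_cutoff]

share = len(achieved) / len(rules)
if share >= k:
    label = "democracy"
elif share > 0:
    label = "partial democracy"  # rules exist on paper but are unevenly implemented
else:
    label = "non-democracy"

print(moderated, share, label)
```

Run on these made-up scores, three rules exist on paper but only one clears the implementation-moderated cutoff, so the regime is classified as a partial democracy rather than a democracy, which is the distinction the paper's index is designed to surface.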

Federated machine learning in data-protection-compliant research


Paper by Alissa Brauneck et al.: “In recent years, interest in machine learning (ML) as well as in multi-institutional collaborations has grown, especially in the medical field. However, strict application of data-protection laws reduces the size of training datasets, hurts the performance of ML systems and, in the worst case, can prevent the implementation of research insights in clinical practice. Federated learning can help overcome this bottleneck through decentralised training of ML models within the local data environment, while maintaining the predictive performance of ‘classical’ ML. Thus, federated learning provides immense benefits for cross-institutional collaboration by avoiding the sharing of sensitive personal data. Because existing regulations (especially the General Data Protection Regulation 2016/679 of the European Union, or GDPR) set stringent requirements for medical data and rather vague rules for ML systems, researchers are faced with uncertainty. In this comment, we provide recommendations for researchers who intend to use federated learning, a privacy-preserving ML technique, in their research. We also point to areas where regulations are lacking, discussing some fundamental conceptual problems with ML regulation through the GDPR, related especially to notions of transparency, fairness and error-free data. We then provide an outlook on how implications from data-protection laws can be directly incorporated into federated learning tools…(More)”.
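The comment itself focuses on regulation rather than implementation, but the mechanism it refers to is easy to sketch: in federated learning, each institution trains on its own data and only model parameters travel to a central aggregator, which averages them. The snippet below is a minimal sketch assuming a plain federated-averaging scheme on a linear model with synthetic data; it is not the authors' tooling, and the function names are ours.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's pass: a few gradient-descent steps on a linear model,
    using only the data held locally (raw records never leave the client)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_averaging(clients, rounds=10, dim=3):
    """Server loop: broadcast the global model, collect locally trained
    weights, and average them weighted by each client's sample count."""
    global_w = np.zeros(dim)
    for _ in range(rounds):
        updates = [local_update(global_w, X, y) for X, y in clients]
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        global_w = np.average(updates, axis=0, weights=sizes / sizes.sum())
    return global_w

# Illustrative run with synthetic "institutional" datasets that are never pooled.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for n in (40, 60, 100):                      # three institutions of different sizes
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    clients.append((X, y))

print(federated_averaging(clients))          # approaches true_w without sharing any rows
```

The design choice the paper highlights is visible here: the server only ever sees weight vectors, never patient-level rows, which is what makes the approach attractive under data-protection law even though the aggregated model itself may still raise GDPR questions.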

Work and meaning in the age of AI


Report by Daniel Susskind: “It is often said that work is not only a source of income but also of meaning. In this paper, I explore the theoretical and empirical literature that addresses this relationship between work and meaning. I show that the relationship is far less clear than is commonly supposed: There is great heterogeneity in its nature, both among today’s workers and workers over time. I explain why this relationship matters for policymakers and economists concerned about the impact of technology on work. In the short term, it is important for predicting labour market outcomes of interest. It also matters for understanding how artificial intelligence (AI) affects not only the quantity of work but its quality as well: These new technologies may erode the meaning that people get from their work. In the medium term, if jobs are lost, this relationship also matters for designing bold policy interventions like the ‘Universal Basic Income’ and ‘Job Guarantee Schemes’: Their design, and any choice between them, is heavily dependent on policymakers’—often tacit—assumptions about the nature of this underlying relationship between work and meaning. For instance, policymakers must decide whether to simply focus on replacing lost income alone (as with a Universal Basic Income) or, if they believe that work is an important and non-substitutable source of meaning, on protecting jobs for that additional role as well (as with a Job Guarantee Scheme). In closing, I explore the challenge that the age of AI presents for an important feature of liberal political theory: the idea of ‘neutrality’…(More)”.

Ready, set, share: Researchers brace for new data-sharing rules


Jocelyn Kaiser and Jeffrey Brainard in Science: “…By 2025, new U.S. requirements for data sharing will extend beyond biomedical research to encompass researchers across all scientific disciplines who receive federal research funding. Some funders in the European Union and China have also enacted data-sharing requirements. The new U.S. moves are feeding hopes that a worldwide movement toward increased sharing is in the offing. Supporters think it could speed the pace and reliability of science.

Some scientists may only need to make a few adjustments to comply with the policies. That’s because data sharing is already common in fields such as protein crystallography and astronomy. But in other fields the task could be weighty, because sharing is often an afterthought. For example, a study involving 7750 medical research papers found that just 9% of those published from 2015 to 2020 promised to make their data publicly available, and authors of just 3% actually shared, says lead author Daniel Hamilton of the University of Melbourne, who described the finding at the International Congress on Peer Review and Scientific Publication in September 2022. Even when authors promise to share their data, they often fail to follow through. A study published in PLOS ONE in 2020 found that, of 21,000 journal articles that included data-sharing plans, fewer than 21% provided links to the repository storing the data.

Journals and funders, too, have a mixed record when it comes to supporting data sharing. Research presented at the September 2022 peer-review congress found only about half of the 110 largest public, corporate, and philanthropic funders of health research around the world recommend or require grantees to share data…

“Health research is the field where the ethical obligation to share data is the highest,” says Aidan Tan, a clinician-researcher at the University of Sydney who led the study. “People volunteer in clinical trials and put themselves at risk to advance medical research and ultimately improve human health.”

Across many fields of science, researchers’ support for sharing data has increased during the past decade, surveys show. But given the potential cost and complexity, many are apprehensive about the NIH policy, and other requirements to follow. “How we get there is pretty messy right now,” says Parker Antin, a developmental biologist and associate vice president for research at the University of Arizona. “I’m really not sure whether the total return will justify the cost. But I don’t know of any other way to find out than trying to do it.”

Science offers this guide as researchers prepare to plunge in….(More)”.

The State of Open Data Policy Repository


The State of Open Data Policy Repository is a collection of recent policy developments surrounding open data, data reuse, and data collaboration around the world. 

A refinement of the compilation of policies launched at the Open Data Policy Summit last year, the State of Open Data Policy Online Repository is an interactive resource that looks at recent legislation, directives, and proposals that affect open data and data collaboration all around the world. It captures what kinds of data collaboration issues policymakers are currently focused on and where the momentum for data innovation is heading.

Users can filter policies according to region, country, focus, and type of data sharing. The review has so far surfaced approximately 60 examples of recent legislative acts, proposals, directives, and other policy documents, from which the Open Data Policy Lab draws findings about the need to promote more innovative policy frameworks.

This collection shows that, despite increased interest in the third-wave conception of open data, policy development remains nascent. It is primarily concerned with open data repositories at the expense of alternative forms of collaboration. Most of the policies listed focus on releasing government data, and many nations still lack open data rules or mechanisms to put such policies into practice.

This work reveals a pressing need for institutions to create frameworks that can guide data professionals, since there are worries that inaction may both allow for misuse of data and lead to missed chances to use data…(More)”.

Computational Social Science for the Public Good: Towards a Taxonomy of Governance and Policy Challenges


Chapter by Stefaan G. Verhulst: “Computational Social Science (CSS) has grown exponentially as the process of datafication and computation has increased. This expansion, however, is yet to translate into effective actions to strengthen the public good in the form of policy insights and interventions. This chapter presents 20 limiting factors in how data is accessed and analysed in the field of CSS. The challenges are grouped into the following six categories based on their area of direct impact: Data Ecosystem, Data Governance, Research Design, Computational Structures and Processes, the Scientific Ecosystem, and Societal Impact. Through this chapter, we seek to construct a taxonomy of CSS governance and policy challenges. By first identifying the problems, we can then move to address them effectively through research, funding, and governance agendas that drive stronger outcomes…(More)”. Full Book: Handbook of Computational Social Science for Policy