Paper by Thomas Margoni, Charlotte Ducuing and Luca Schirru: “The Data Act proposal of February 2022 constitutes a central element of a broader and ambitious initiative of the European Commission (EC) to regulate the data economy through the erection of a new general regulatory framework for data and digital markets. The resulting framework may be represented as a model of governance between a pure market-driven model and a fully regulated approach, thereby combining elements that traditionally belong to private law (e.g., property rights, contracts) and public law (e.g., regulatory authorities, limitation of contractual freedom). This article discusses the role of (intellectual) property rights as well as of other forms of rights allocation in data legislation with particular attention to the Data Act proposal. We argue that the proposed Data Act has the potential to play a key role in the way in which data, especially privately held data, may be accessed, used, and shared. Nevertheless, it is only by looking at the whole body of data (and data related) legislation that the broader plan for a data economy can be grasped in its entirety. Additionally, the Data Act proposal may also arguably reveal the elements for a transition from a property-based to a governance-based paradigm in the EU data strategy. Whereas elements of data governance abound, the stickiness of property rights and rhetoric seem however hard to overcome. The resulting regulatory framework, at least for now, is therefore an interesting but not always perfectly coordinated mix of both. Finally, this article suggests that the Data Act Proposal may have missed the chance to properly address the issue of data holders’ power and related information asymmetries, as well as the need for coordination mechanisms…(More)”.
End of data sharing could make Covid-19 harder to control, experts and high-risk patients warn
Article by Sam Whitehead: “…The federal government’s public health emergency that’s been in effect since January 2020 expires May 11. The emergency declaration allowed for sweeping changes in the U.S. health care system, like requiring state and local health departments, hospitals, and commercial labs to regularly share data with federal officials.
But some data-sharing requirements will come to an end and the federal government will lose access to key metrics as a skeptical Congress seems unlikely to grant agencies additional powers. And private projects, like those from The New York Times and Johns Hopkins University, which made covid data understandable and useful for everyday people, stopped collecting data in March.
Public health legal scholars, data experts, former and current federal officials, and patients at high risk of severe covid outcomes worry the scaling back of data access could make it harder to control covid.
There have been improvements in recent years, such as major investments in public health infrastructure and updated data reporting requirements in some states. But concerns remain that the overall shambolic state of U.S. public health data infrastructure could hobble the response to any future threats.
“We’re all less safe when there’s not the national amassing of this information in a timely and coherent way,” said Anne Schuchat, former principal deputy director of the Centers for Disease Control and Prevention.
A lack of data in the early days of the pandemic left federal officials, like Schuchat, with an unclear picture of the rapidly spreading coronavirus. And even as the public health emergency opened the door for data-sharing, the CDC labored for months to expand its authority.
Eventually, more than a year into the pandemic, the CDC gained access to data from private health care settings, such as hospitals and nursing homes, commercial labs, and state and local health departments…(More)”. See also: Why we still need data to understand the COVID-19 pandemic
Why we need applied humanities approaches
Article by Kathryn Strong Hansen: “Since the term “applied humanities” is not especially common, some explanation may be helpful. Applied humanities education prepares students to use humanities knowledge and methods in practice rather than only in theory. As the University of Arizona’s Department of Public and Applied Humanities puts it, the goal is “public enrichment and the direct and tangible improvement of the human condition.” While this goal undoubtedly involves “intrahumanities” outputs like museum and exhibit curation or textual editing, public enrichment through the humanities can also be pursued through science and engineering curricula.
The direct goal of much science education is improving the human condition, such as CRISPR developments opening up possibilities for gene therapies. Similarly, good engineering seeks to improve the human condition, like the LEED-certified building methods that minimize negative impacts on the environment.
Since the humanities concern themselves with the human experience in all its facets, they can offer much to STEM endeavors, and applied humanities approaches have been implemented for many decades. One of the most established applied humanities pursuits is applied linguistics, which has existed as a field of study since about 1948. Another useful and growing example is that of the medical humanities, which provide medical practitioners with training that can help them interact more effectively with patients and navigate the emotional impact of their profession.
While applied approaches might be less widespread or established in other humanities fields, they are just as needed. In part, they are needed because the skills and knowledge of humanities scholars can help students in a multiplicity of fields, including STEM disciplines, to improve their understanding of their subject matter and how it connects to society at large…(More)”.
Behavioral Economics: Policy Impact and Future Directions
Report from the National Academies of Sciences, Engineering, and Medicine: “Behavioral economics – a field based in collaborations among economists and psychologists – focuses on integrating a nuanced understanding of behavior into models of decision-making. Since the mid-20th century, this growing field has produced research in numerous domains and has influenced policymaking, research, and marketing. However, little has been done to assess these contributions and review evidence of their use in the policy arena.
Behavioral Economics: Policy Impact and Future Directions examines the evidence for behavioral economics and its application in six public policy domains: health, retirement benefits, climate change, social safety net benefits, education, and criminal justice. The report concludes that the principles of behavioral economics are indispensable for the design of policy and recommends integrating behavioral specialists into policy development within government units. In addition, the report calls for strengthening research methodology and identifies research priorities for building on the accomplishments of the field to date…(More)”.
How well do the UK government’s ‘areas of research interest’ work as boundary objects to facilitate the use of research in policymaking?
Paper by Annette Boaz and Kathryn Oliver: “Articulating the research priorities of government is one mechanism for promoting the production of relevant research to inform policy. This study focuses on the Areas of Research Interest (ARIs) produced and published by government departments in the UK. Through a qualitative study consisting of interviews with 25 researchers, civil servants, intermediaries and research funders, the authors explored the role of ARIs. Using the concept of boundary objects, the paper considers the ways in which ARIs are used and how they are supported by boundary practices and boundary workers, including through engagement opportunities. The paper addresses the following questions: What boundaries do ARIs cross, intended and otherwise? What characteristics of ARIs enable or hinder this boundary-crossing? And what resources, skills, work or conditions are required for this boundary-crossing to work well? We see the ARIs being used as a boundary object across multiple boundaries, with implications for the ways in which the ARIs are crafted and shared. In the application of ARIs in the UK policy context, we see a constant interplay between boundary objects, practices and people all operating within the confines of existing systems and processes. For example, understanding what was meant by a particular ARI sometimes involved ‘decoding’ work as part of the academic-policy engagement process. While ARIs have an important role to play, they are no magic bullet. Nor do they tell the whole story of governmental research interests. Optimizing the use of research in policymaking requires the galvanisation of a range of mechanisms, including ARIs…(More)”.
The Coming Age of AI-Powered Propaganda
Essay by Josh A. Goldstein and Girish Sastry: “In the seven years since Russian operatives interfered in the 2016 U.S. presidential election, in part by posing as Americans in thousands of fake social media accounts, another technology with the potential to accelerate the spread of propaganda has taken center stage: artificial intelligence, or AI. Much of the concern has focused on the risks of audio and visual “deepfakes,” which use AI to invent images or events that did not actually occur. But another AI capability is just as worrisome. Researchers have warned for years that generative AI systems trained to produce original language—“language models,” for short—could be used by U.S. adversaries to mount influence operations. And now, these models appear to be on the cusp of enabling users to generate a near limitless supply of original text with limited human effort. This could improve the ability of propagandists to persuade unwitting voters, overwhelm online information environments, and personalize phishing emails. The danger is twofold: not only could language models sway beliefs; they could also corrode public trust in the information people rely on to form judgments and make decisions.
The progress of generative AI research has outpaced expectations. Last year, language models were used to generate functional proteins, beat human players in strategy games requiring dialogue, and create online assistants. Conversational language models have come into wide use almost overnight: more than 100 million people used OpenAI’s ChatGPT program in the first two months after it was launched, in December 2022, and millions more have likely used the AI tools that Google and Microsoft introduced soon thereafter. As a result, risks that seemed theoretical only a few years ago now appear increasingly realistic. For example, the AI-powered “chatbot” that powers Microsoft’s Bing search engine has shown itself to be capable of attempting to manipulate users—and even threatening them.
As generative AI tools sweep the world, it is hard to imagine that propagandists will not make use of them to lie and mislead…(More)”.
What AI Means For Animals
Article by Peter Singer and Tse Yip Fai: “The ethics of artificial intelligence has attracted considerable attention, and for good reason. But the ethical implications of AI for billions of nonhuman animals are not often discussed. Given the severe impacts some AI systems have on huge numbers of animals, this lack of attention is deeply troubling.
As more and more AI systems are deployed, they are beginning to directly impact animals in factory farms, zoos, pet care and through drones that target animals. AI also has indirect impacts on animals, both good and bad — it can be used to replace some animal experiments, for example, or to decode animal “languages.” AI can also propagate speciesist biases — try searching “chicken” on any search engine and see if you get more pictures of living chickens or dead ones. While all of these impacts need ethical assessment, the area in which AI has by far the most significant impact on animals is factory farming. The use of AI in factory farms will, in the long run, increase the already huge number of animals who suffer in terrible conditions.
AI systems in factory farms can monitor animals’ body temperature, weight and growth rates and detect parasites, ulcers and injuries. Machine learning models can be created to see how physical parameters relate to rates of growth, disease, mortality and — the ultimate criterion — profitability. The systems can then prescribe treatments for diseases or vary the quantity of food provided. In some cases, they can use their connected physical components to act directly on the animals, emitting sounds to interact with them — giving them electric shocks (when the grazing animal reaches the boundary of the desired area, for example), marking and tagging their bodies or catching and separating them.
You might be thinking that this would benefit the animals — that it means they will get sick less often, and when they do get sick, the problems will be quickly identified and cured, with less room for human error. But the short-term animal welfare benefits brought about by AI are, in our view, clearly outweighed by other consequences…(More)” See also: AI Ethics: The Case for Including Animals.
The Future of Consent: The Coming Revolution in Privacy and Consumer Trust
Report by Ogilvy: “The future of consent will be determined by how we – as individuals, nations, and a global species – evolve our understanding of what counts as meaningful consent. For consumers and users, the greatest challenge lies in connecting consent to a mechanism of relevant, personal control over their data. For businesses and other organizations, the task will be to recast consent as a driver of positive economic outcomes, rather than an obstacle.
In the coming years of digital privacy innovation, regulation, and increasing market maturity, everyone will need to think more deeply about their relationship with consent. As an initial step, we’ve assembled this snapshot on the current and future state of (meaningful) consent: what it means, what the obstacles are, and which critical changes we need to embrace to evolve…(More)”.
A Guide to Adaptive Government: Preparing for Disruption
Report by Nicholas D. Evans: “With disruption now the norm rather than the exception, governments need to rethink business as usual and prepare for business as disrupted.
Government executives and managers should plan for continuous disruption and for how their agencies and departments will operate under continuous turbulence and change. In 2022 alone, the world witnessed war in Ukraine, the continuing effects of the COVID-19 pandemic, and natural disasters such as Hurricane Ian—not to mention energy scarcity, supply chain shortages, the start of a global recession, record highs for inflation, and rising interest rates.
Traditional business continuity and disaster recovery playbooks and many other such earlier approaches—born when disruption was the exception—are no longer sufficient. Rather than operating “business as usual,” government agencies and departments now must plan and operate for “business as disrupted.” One other major pivot point: when these disruptions happen, such as COVID, they bring an opportunity to drive a long awaited or postponed transformation. It is about leveraging that opportunity for change and not simply returning to the status quo. The impact on supply chains during the COVID-19 pandemic and recovery illustrates this insight…
Evans recognizes the importance of pursuing agile principles as foundational in realizing the vision of adaptive government described in this report. Agile government principles serve as a powerful foundation for building “intrinsic agility,” since they encourage key cultural, behavioral, and growth mindset approaches to embed agility and adaptability into organizational norms and processes. Many of the insights, guidance, and recommendations offered in this report complement work pursued by the Agile Government Center (AGC), led by the National Academy of Public Administration in collaboration with our Center, and spearheaded by NAPA Fellow and Center Executive Fellow Ed DeSeve.
This report illustrates the strategic significance of adaptability to government organizations today. The author offers new strategies, techniques, and tools to accelerate digital transformation, and better position government agencies to respond to the next wave of both opportunities and disruptive threats—similar to what our Center, NAPA, and partner organizations refer to as “future shocks.” Adaptability as a core competency can support both innovation and risk management, helping governments to optimize for ever-changing mission needs and ambient conditions. Adaptability represents a powerful enabler for modern government and enterprise organizations.
We hope that this report helps government leaders, academic experts, and other stakeholders to infuse adaptive thinking throughout the public sector, leading to more effective operations, better outcomes, and improved performance in a world where the only constant seems to be the inevitability of change and disruption…(More)”.
Workforce ecosystems and AI
Report by David Kiron, Elizabeth J. Altman, and Christoph Riedl: “Companies increasingly rely on an extended workforce (e.g., contractors, gig workers, professional service firms, complementor organizations, and technologies such as algorithmic management and artificial intelligence) to achieve strategic goals and objectives. When we ask leaders to describe how they define their workforce today, they mention a diverse array of participants, beyond just full- and part-time employees, all contributing in various ways. Many of these leaders observe that their extended workforce now comprises 30-50% of their entire workforce. For example, Novartis has approximately 100,000 employees and counts more than 50,000 other workers as external contributors. Businesses are also increasingly using crowdsourcing platforms to engage external participants in the development of products and services. Managers are thinking about their workforce in terms of who contributes to outcomes, not just by workers’ employment arrangements.
Our ongoing research on workforce ecosystems demonstrates that managing work across organizational boundaries with groups of interdependent actors in a variety of employment relationships creates new opportunities and risks for both workers and businesses. These are not subtle shifts. We define a workforce ecosystem as:
A structure that encompasses actors, from within the organization and beyond, working to create value for an organization. Within the ecosystem, actors work toward individual and collective goals with interdependencies and complementarities among the participants.
The emergence of workforce ecosystems has implications for management theory, organizational behavior, social welfare, and policymaking. In particular, issues surrounding work and worker flexibility, equity, and data governance and transparency pose substantial opportunities for policymaking.
At the same time, artificial intelligence (AI)—which we define broadly to include machine learning and algorithmic management—is playing an increasingly large role within the corporate context. The widespread use of AI is already displacing workers through automation, augmenting human performance at work, and creating new job categories…(More)”.