Toward Equitable Innovation in Health and Medicine: A Framework 


Report by The National Academies: “Advances in biomedical science, data science, engineering, and technology are leading to high-pace innovation with potential to transform health and medicine. These innovations simultaneously raise important ethical and social issues, including how to fairly distribute their benefits and risks. The National Academies of Sciences, Engineering, and Medicine, in collaboration with the National Academy of Medicine, established the Committee on Creating a Framework for Emerging Science, Technology, and Innovation in Health and Medicine to provide leadership and engage broad communities in developing a framework for aligning the development and use of transformative technologies with ethical and equitable principles. The committee’s resulting report describes a governance framework for decisions throughout the innovation life cycle to advance equitable innovation and support an ecosystem that is more responsive to the needs of a broader range of individuals and is better able to recognize and address inequities as they arise…(More)”.

Data Governance and Privacy Challenges in the Digital Healthcare Revolution


Paper by Nargiz Kazimova: “The onset of the COVID-19 pandemic has catalyzed an imperative for digital transformation in the healthcare sector. This study investigates the accelerated shift towards a digitally enhanced healthcare delivery system, advocating for the widespread adoption of telemedicine and the relaxation of regulatory barriers. The paper also scrutinizes the burgeoning use of electronic health records, wearable devices, artificial intelligence, and machine learning, and how these technologies offer promising avenues for improving patient care and medical outcomes. Despite the advancements, the rapid digital integration raises significant privacy and security concerns. The stigma associated with certain illnesses and potential discrimination present serious challenges that digital healthcare innovations can exacerbate.
This research underscores the criticality of stringent data governance to safeguard personal health information in the face of growing digitalization. The analysis begins with an exploration of the data governance role in optimizing healthcare outcomes and preserving privacy, followed by an assessment of the breadth and depth of health data proliferation. The paper subsequently navigates the complex legal and ethical terrain, contrasting HIPAA and GDPR frameworks to underline the current regulatory challenges.
A comprehensive set of strategic recommendations is provided for reinforcing data governance and enhancing privacy protection in healthcare. The author advises on updating legal provisions to match the dynamic healthcare environment, widening the scope of privacy laws, and improving the transparency of data-sharing practices. The establishment of ethical guidelines for the collection and use of health data is also recommended, focusing on explicit consent, decision-making transparency, harm accountability, maintenance of data anonymity, and the mitigation of biases in datasets.
Moreover, the study advocates for stronger transparency in data sharing with clear communication on data use, rigorous internal and external audit mechanisms, and informed consent processes. The conclusion calls for increased collaboration between healthcare providers, patients, administrative staff, ethicists, regulators, and technology companies to create governance models that reconcile patient rights with the expansive use of health data. The paper culminates in a call to action for a balanced approach to privacy and innovation in the data-driven era of healthcare…(More)”.

The AI regulations that aren’t being talked about


Article by Deloitte: “…But our research shows that this focus may be overlooking some of the most important tools already on the books. Of the 1,600+ policies we analyzed, only 11% were focused on regulating AI-adjacent issues like data privacy, cybersecurity, intellectual property, and so on (Figure 5). Even when limiting the search to only regulations, 60% were focused directly on AI and only 40% on AI-adjacent issues (Figure 5). For example, several countries have data protection agencies with regulatory powers to help protect citizens’ data privacy. But while these agencies may not have AI or machine learning named specifically in their charters, the importance of data in training and using AI models makes them an important AI-adjacent tool.

This can be problematic because directly regulating a fast-moving technology like AI can be difficult. Take the hypothetical example of removing bias from home loan decisions. Regulators could accomplish this goal by mandating that AI should have certain types of training data to ensure that the models are representative and will not produce biased results, but such an approach can become outdated when new methods of training AI models emerge. Given the diversity of different types of AI models already in use, from recurrent neural networks to generative pretrained transformers to generative adversarial networks and more, finding a single set of rules that can deliver what the public desires both now, and in the future, may be a challenge…(More)”.
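
The article’s worry that architecture-specific rules age badly can be made concrete with a small sketch. The Python snippet below is illustrative only and not drawn from the Deloitte analysis; the data and group labels are hypothetical. It audits the outcomes of a loan-approval model for disparities between groups, a check that works the same way regardless of which kind of model produced the decisions:

```python
# Illustrative sketch (hypothetical data): auditing loan-approval outcomes for
# group-level disparity without relying on any particular model architecture.
from collections import defaultdict

def approval_rates_by_group(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def max_disparity(rates):
    """Largest gap in approval rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical audit sample: (group, model decision)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]

rates = approval_rates_by_group(sample)
print(rates)                 # per-group approval rates (here roughly 0.67 vs 0.33)
print(max_disparity(rates))  # the gap a rule might cap, whatever the model type
```

Because this kind of check looks only at decisions, it would not need to be rewritten each time a new training method or model family emerges, which is precisely the difficulty the article raises for rules tied to training data.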

The battle over right to repair is a fight over your car’s data


Article by Ofer Tur-Sinai: “Cars are no longer just a means of transportation. They have become rolling hubs of data communication. Modern vehicles regularly transmit information wirelessly to their manufacturers.

However, as cars grow “smarter,” the right to repair them is under siege.

As legal scholars, we find that the question of whether you and your local mechanic can tap into your car’s data to diagnose and repair it spans issues of property rights, trade secrets, cybersecurity, data privacy and consumer rights. Policymakers are forced to navigate this complex legal landscape and ideally are aiming for a balanced approach that upholds the right to repair, while also ensuring the safety and privacy of consumers…

Until recently, repairing a car involved connecting to its standard on-board diagnostics port to retrieve diagnostic data. The ability for independent repair shops – not just those authorized by the manufacturer – to access this information was protected by a state law in Massachusetts, approved by voters on Nov. 6, 2012, and by a nationwide memorandum of understanding between major car manufacturers and the repair industry signed on Jan. 15, 2014.

However, with the rise of telematics systems, which combine computing with telecommunications, these dynamics are shifting. Unlike the standardized onboard diagnostics ports, telematics systems vary across car manufacturers. These systems are often protected by digital locks, and circumventing these locks could be considered a violation of copyright law. The telematics systems also encrypt the diagnostic data before transmitting it to the manufacturer.
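
A minimal sketch of that last step, assuming a symmetric key held by the manufacturer (real telematics stacks differ and are not described in the article; the diagnostic record here is hypothetical), shows why encrypted payloads are opaque to anyone else who intercepts them:

```python
# Illustrative only, not an actual manufacturer protocol: once diagnostic data
# is encrypted with a manufacturer-held key, an independent repair shop that
# intercepts the payload sees only ciphertext.
import json
from cryptography.fernet import Fernet  # pip install cryptography

manufacturer_key = Fernet.generate_key()    # key held on the manufacturer's servers
vehicle_cipher = Fernet(manufacturer_key)   # and provisioned to the vehicle's telematics unit

diagnostic_record = {"dtc": "P0420", "odometer_km": 81234, "coolant_temp_c": 96}
payload = vehicle_cipher.encrypt(json.dumps(diagnostic_record).encode())

print(payload[:32], b"...")  # opaque bytes to anyone without the key
# Only the key holder can recover the readable record:
print(json.loads(Fernet(manufacturer_key).decrypt(payload)))
```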

This reduces the accessibility of telematics information, potentially locking out independent repair shops and jeopardizing consumer choice – a lack of choice that can lead to increased costs for consumers….

One issue left unresolved by the legislation is the ownership of vehicle data. A vehicle generates all sorts of data as it operates, including location, diagnostic, driving behavior, and even usage patterns of in-car systems – for example, which apps you use and for how long.

In recent years, the question of data ownership has gained prominence. In 2015, Congress legislated that the data stored in event data recorders belongs to the vehicle owner. This was a significant step in acknowledging the vehicle owner’s right over specific datasets. However, the broader issue of data ownership in today’s connected cars remains unresolved…(More)”.

Private UK health data donated for medical research shared with insurance companies


Article by Shanti Das: “Sensitive health information donated for medical research by half a million UK citizens has been shared with insurance companies despite a pledge that it would not be.

An Observer investigation has found that UK Biobank opened up its vast biomedical database to insurance sector firms several times between 2020 and 2023. The data was provided to insurance consultancy and tech firms for projects to create digital tools that help insurers predict a person’s risk of getting a chronic disease. The findings have raised concerns among geneticists, data privacy experts and campaigners over vetting and ethical checks at Biobank.

Set up in 2006 to help researchers investigating diseases, the database contains millions of blood, saliva and urine samples, collected regularly from about 500,000 adult volunteers – along with medical records, scans, wearable device data and lifestyle information.

Approved researchers around the world can pay £3,000 to £9,000 to access records ranging from medical history and lifestyle information to whole genome sequencing data. The resulting research has yielded major medical discoveries and led to Biobank being considered a “jewel in the crown” of British science.

Biobank said it strictly guarded access to its data, only allowing access by bona fide researchers for health-related projects in the public interest. It said this included researchers of all stripes, whether employed by academic, charitable or commercial organisations – including insurance companies – and that “information about data sharing was clearly set out to participants at the point of recruitment and the initial assessment”.

But evidence gathered by the Observer suggests Biobank did not explicitly tell participants it would share data with insurance companies – and made several public commitments not to do so.

When the project was announced, in 2002, Biobank promised that data would not be given to insurance companies after concerns were raised that it could be used in a discriminatory way, such as by the exclusion of people with a particular genetic makeup from insurance.

In an FAQ section on the Biobank website, participants were told: “Insurance companies will not be allowed access to any individual results nor will they be allowed access to anonymised data.” The statement remained online until February 2006, during which time the Biobank project was subject to public scrutiny and discussed in parliament.

The promise was also reiterated in several public statements by backers of Biobank, who said safeguards would be built in to ensure that “no insurance company or police force or employer will have access”.

This weekend, Biobank said the pledge – made repeatedly over four years – no longer applied. It said the commitment had been made before recruitment formally began in 2007 and that when Biobank volunteers enrolled they were given revised information.

This included leaflets and consent forms that contained a provision that anonymised Biobank data could be shared with private firms for “health-related” research, but did not explicitly mention insurance firms or correct the previous assurances…(More)”.

Commission welcomes final agreement on EU Digital Identity Wallet


Press Release: “The Commission welcomes the final agreement reached today by the European Parliament and the Council of the EU at the final trilogue on the Regulation introducing European Digital Identity Wallets. This concludes the co-legislators’ work implementing the results of the provisional political agreement reached on 29 June 2023 on a legal framework for an EU Digital Identity, the first trusted and secure digital identity framework for all Europeans.

This marks an important step towards the Digital Decade 2030 targets on the digitalisation of public services. All EU citizens will be offered the possibility to have an EU Digital Identity Wallet to access public and private online services in full security and protection of personal data all over Europe.

In addition to public services, Very Large Online Platforms designated under the Digital Services Act (including services such as Amazon, Booking.com or Facebook) and private services that are legally required to authenticate their users will have to accept the EU Digital Identity Wallet for logging into their online services. In addition, the wallets’ features and common specifications will make it attractive for all private service providers to accept them for their services, thus creating new business opportunities. The Wallet will also facilitate service providers’ compliance with various regulatory requirements.

In addition to securely storing their digital identity, the Wallet will allow users to open bank accounts, make payments and hold digital documents, such as a mobile Driving Licence, a medical prescription, a professional certificate or a travel ticket. The Wallet will offer a user-friendly and practical alternative to online identification guaranteed by EU law. The Wallet will fully respect the user’s choice whether or not to share personal data, it will offer the highest degree of security certified independently to the same standards, and relevant parts of its code will be published open source to exclude any possibility of misuse, illegal tracking, tracing or government interception.
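
The press release does not describe the underlying technical mechanism, but the user-controlled sharing it promises can be illustrated with a short, purely hypothetical sketch in which the wallet releases only the attributes that a service requests and the holder approves:

```python
# Illustrative sketch only (not the EU specification): the wallet holder decides
# which attributes of a stored credential are disclosed to a service provider.
credential = {  # hypothetical attributes held in the user's wallet
    "given_name": "Maria",
    "family_name": "Rossi",
    "date_of_birth": "1990-04-12",
    "driving_licence_category": "B",
    "address": "Via Roma 1, Milano",
}

def present(credential, requested, user_approved):
    """Release only attributes that the service requested AND the user approved."""
    return {k: v for k, v in credential.items()
            if k in requested and k in user_approved}

# A car-rental service asks for three attributes; the user approves only two.
requested = {"given_name", "driving_licence_category", "address"}
approved = {"given_name", "driving_licence_category"}
print(present(credential, requested, approved))
# {'given_name': 'Maria', 'driving_licence_category': 'B'}
```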

The legislative discussions have strengthened the ambition of the regulation in a number of areas important for citizens. The Wallet will contain a dashboard of all transactions accessible to its holder, offer the possibility to report alleged violations of data protection, and allow interaction between wallets. Moreover, citizens will be able to onboard the wallet with existing national eID schemes and benefit from free eSignatures for non-professional use…(More)”.

Managing smart city governance – A playbook for local and regional governments


Report by UN Habitat: “This playbook and its recommendations are primarily aimed at municipal governments and their political leaders, local administrators, and public officials who are involved in smart city initiatives. The recommendations, which are delineated in the subsequent sections of this playbook, are intended to help develop more effective, inclusive, and sustainable governance practices for urban digital transformations. The guidance offered on these pages could also be useful for national agencies, private companies, non-governmental organizations, and all stakeholders committed to promoting the sustainable development of urban communities through the implementation of smart city initiatives…(More)”.

Despite Its Problems, Network Technology Can Help Renew Democracy


Essay by Daniel Araya: “The impact of digital technologies on contemporary economic and social development has been nothing short of revolutionary. The rise of the internet has transformed the way we share content, buy and sell goods, and manage our institutions. But while the hope of the internet has been its capacity to expand human connection and bring people together, the reality has often been something else entirely.

When social media networks first emerged about a decade ago, they were hailed as “technologies of liberation” with the capacity to spread democracy. While these social networks have undeniably democratized access to information, they have also helped to stimulate social and political fragmentation, eroding the discursive fibres that hold democracies together.

Prior to the internet, news and media were the domain of professional journalists, overseen by powerful experts, and shaped by gatekeepers. However, in the age of the internet, platforms circumvent the need for gatekeepers altogether. Bypassing the centralized distribution channels that have served as a foundation to mass industrial societies, social networks have begun reshaping the way democratic societies build consensus. Given the importance of discourse to democratic self-government, concern is growing that democracy is failing…(More)”.

Parliament Buildings: The Architecture of Politics in Europe


Book edited by Sophia Psarra, Uta Staiger, and Claudia Sternberg: “As political polarisation undermines confidence in the shared values and established constitutional orders of many nations, it is imperative that we explore how parliaments are to stay relevant and accessible to the citizens whom they serve. The rise of modern democracies is thought to have found physical expression in the staged unity of the parliamentary seating plan. However, the built forms alone cannot give sufficient testimony to the exercise of power in political life.

Parliament Buildings brings together architecture, history, art history, history of political thought, sociology, behavioural psychology, anthropology and political science to raise a host of challenging questions. How do parliament buildings give physical form to norms and practices, to behaviours, rituals, identities and imaginaries? How are their spatial forms influenced by the political cultures they accommodate? What kinds of histories, politics and morphologies do the diverse European parliaments share, and how do their political trajectories intersect?

This volume offers an eclectic exploration of the complex nexus between architecture and politics in Europe. Including contributions from architects who have designed or remodelled four parliament buildings in Europe, it provides the first comparative, multi-disciplinary study of parliament buildings across Europe and across history…(More)”.

Cities are ramping up to make the most of generative AI


Blog by Citylab: “Generative artificial intelligence promises to transform the way we work, and city leaders are taking note. According to a recent survey by Bloomberg Philanthropies in partnership with the Centre for Public Impact, the vast majority of mayors (96 percent) are interested in how they can use generative AI tools like ChatGPT—which rely on machine learning to identify patterns in data and create, or generate, new content after being fed prompts—to improve local government. Of those cities surveyed, 69 percent report that they are already exploring or testing the technology. Specifically, they’re interested in how it can help them more quickly and successfully address emerging challenges with traffic and transportation, infrastructure, public safety, climate, education, and more.  

Yet even as a majority of city leaders surveyed are exploring generative AI’s potential, only a small fraction of them (2 percent) are actively deploying the technology. They indicated there are a number of issues getting in the way of broader implementation, including a lack of technical expertise, budgetary constraints, and ethical considerations like security, privacy, and transparency…(More)”.