Stefaan Verhulst
Lily Hay Newman at Wired: “…Given that IoT security seems unlikely to magically improve anytime soon, researchers and regulators are rallying behind a new approach to managing IoT risk. Think of it as nutrition labels for embedded devices.
At the IEEE Symposium on Security & Privacy last month, researchers from Carnegie Mellon University presented a prototype security and privacy label they created based on interviews and surveys of people who own IoT devices, as well as privacy and security experts. They also published a tool for generating their labels. The idea is to shed light not only on a device’s security posture but also on how it manages user data and what privacy controls it has. For example, the labels highlight whether a device can get security updates and how long a company has pledged to support it, as well as the types of sensors present, the data they collect, and whether the company shares that data with third parties.
“In an IoT setting, the amount of sensors and information you have about users is potentially invasive and ubiquitous,” says Yuvraj Agarwal, a networking and embedded systems researcher who worked on the project. “It’s like trying to fix a leaky bucket. So transparency is the most important part. This work shows and enumerates all the choices and factors for consumers.”
Nutrition labels on packaged foods have a certain amount of standardization around the world, but they’re still more opaque than they could be. And security and privacy issues are even less intuitive to most people than soluble and insoluble fiber. So the CMU researchers focused much of their effort on making their IoT label as transparent and accessible as possible. To that end, they included both a primary and a secondary layer in the label. The primary label is what would be printed on device boxes. To access the secondary label, you could follow a URL or scan a QR code to see more granular information about a device….(More)”.
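The two-layer design described above can be sketched in code. This is a minimal, hypothetical illustration: the field names and rendering are assumptions based on the article's description, not the CMU label's actual schema or the published label-generation tool.

```python
# Hypothetical sketch of a two-layer IoT security/privacy label.
# All field names are illustrative guesses, not the CMU specification.

def make_primary_label(device):
    """Render the box-printed (primary) layer as plain text.

    The final line points to the secondary layer, which on a real box
    would also be encoded as a QR code.
    """
    lines = [
        f"Device: {device['name']}",
        f"Security updates: {'yes' if device['security_updates'] else 'no'}"
        f" (supported until {device['support_end']})",
        f"Sensors: {', '.join(device['sensors'])}",
        f"Data shared with third parties: "
        f"{'yes' if device['shares_data'] else 'no'}",
        f"Full details (secondary layer): {device['detail_url']}",
    ]
    return "\n".join(lines)

# Example device record (all values are made up for illustration).
camera = {
    "name": "Example Smart Camera",
    "security_updates": True,
    "support_end": "2025-12",
    "sensors": ["camera", "microphone"],
    "shares_data": True,
    "detail_url": "https://example.com/label/camera-x",
}

print(make_primary_label(camera))
```

The point of the split is that the primary layer stays small enough to print on a box, while the secondary layer (behind the URL or QR code) can hold the more granular disclosures, such as exactly what data each sensor collects and with whom it is shared.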
Case report by Giampiero Giacomello and Oltion Preka: “In an increasingly technology-dependent world, it is not surprising that STEM (Science, Technology, Engineering, and Mathematics) graduates are in high demand. This state of affairs, however, has led the public to overlook the fact that computing and artificial intelligence are not only naturally interdisciplinary, but that a huge portion of generated data comes from human–computer interactions and is therefore social in character and nature. Hence, social science practitioners should be in demand too, but this does not seem to be the case. One reason for this situation is that political and social science departments worldwide tend to remain in their “comfort zone” and view their disciplines quite traditionally, but by doing so they cut themselves off from many positions available today. The authors believed these conditions could and should be changed, and thus over a few years created a specifically tailored course for students in Political Science. This paper examines the experience of the last year of the program, which, after several tweaks and adjustments, is now fully operational. The results and students’ appreciation are quite remarkable. Hence the authors considered the experience worth sharing, so that colleagues in social and political science departments may feel encouraged to follow and replicate such an example….(More)”
Blog Post by Jennifer Park, Lauren Su, Lisa Fiedler, and Madeleine Weatherhead: “Since January of this year, the novel coronavirus has swept rapidly throughout the United States, leaving no city untouched. To contain the virus’ spread and protect residents’ health and livelihoods, local leaders have had to act swiftly and decisively. It is a challenge in scope and scale unlike any other in recent history — and it has underscored the power of data to guide life-and-death decisions and build trust.
Take, for example, Los Angeles. As cities across the country began issuing states of emergency and acting to promote public health, Mayor Eric Garcetti quickly identified the city’s response priorities: supporting families, small businesses, healthcare workers, and unhoused Angelenos, and increasing the healthcare equipment and testing kits available for the city. Mayor Garcetti tapped his Chief Information Officer and Innovation Team to collect and analyze data, inform decisions, and share real-time information publicly.
The Mayor was soon conducting daily briefings, updating the public on the latest virus-related data and informing city residents about various decisions made by the city — from pausing parking rules enforcement to opening thousands of temporary shelter beds. He used data to justify key decisions, linking stay-at-home orders to a decrease in COVID-19 cases from week to week.
Los Angeles’ swift response built on an existing culture of leveraging data to set goals, make decisions, and communicate with the public. Its leaders are now seeing the positive impact of having invested in foundational data capacity — regular tracking of cases, hospital capacity, and infection rates has proven vital to accelerating the city’s response to COVID-19.
Other cities, too, have leaned on established data practices and infrastructure in their response efforts, both to the benefit of their residents and to lay a stronger foundation to guide recovery….(More)”.
Reflection Document by The GovLab: “Racism is a systemic issue that pervades every aspect of life in the United States and around the world. In recent months, its corrosive influence has been made starkly visible, especially on Black people. Many people are hurting. Their rage and suffering stem from centuries of exclusion and from being subject to repeated bias and violence. Across the country, there have been protests decrying racial injustice. Activists have called upon the government to condemn bigotry and racism, to act against injustice, to address systemic and growing inequality.
Institutions need to take meaningful action to address such demands. Though racism is not experienced in the same way by all communities of color, policymakers must respond to the anxieties and apprehensions of Black people as well as those of communities of color more generally. This work will require institutions and individuals to reflect on how they may be complicit in perpetuating structural and systematic inequalities and harm and to ask better questions about the inequities that exist in society (laid bare in both recent acts of violence and in racial disadvantages in health outcomes during the ongoing COVID-19 crisis). This work is necessary but unlikely to be easy. As Rashida Richardson, Director of Policy Research at the AI Now Institute at NYU notes:
“Social and political stratifications also persist and worsen because they are embedded into our social and legal systems and structures. Thus, it is difficult for most people to see and understand how bias and inequalities have been automated or operationalized over time.”
We believe progress can be made, at least in part, through responsible data access and analysis, including increased availability of (disaggregated) data through data collaboration. Of course, data is only one part of the overall picture, and we make no claims that data alone can solve such deeply entrenched problems. Nonetheless, data can have an impact by making inequalities resulting from racism more quantifiable and inaction less excusable.
…Prioritizing any of these topics will also require increased community engagement and participatory agenda setting. Likewise, we are deeply conscious that data can have a negative as well as positive impact and that technology can perpetuate racism when designed and implemented without the input and participation of minority communities and organizations. While our report here focuses on the promise of data, we need to remain aware of the potential to weaponize data against vulnerable and already disenfranchised communities. In addition, (hidden) biases in data collected and used in AI algorithms, as well as in a host of other areas across the data life cycle, will only exacerbate racial inequalities if not addressed….(More)”
ALSO: The piece is supplemented by a crowdsourced listing of Data-Driven Efforts to Address Racial Inequality.
Paper by Sahar F. Aziz and Khaled A. Beydoun: “Virtual surveillance is the modern extension of established policing models that tie dissident Muslim advocacy to terror suspicion and Black activism to political subversion. Countering Violent Extremism (“CVE”) and Black Identity Extremism (“BIE”) programs that specifically target Muslim and Black populations are shifting from on the ground to online.
Law enforcement exploits social media platforms — where activism and advocacy are robust — to monitor and crack down on activists. In short, the new policing is the old policing, but it is stealthily morphing and moving onto virtual platforms where activism is fluidly unfolding in real time. This Article examines how the law’s failure to keep up with technological advancements in social media poses serious risks to the ability of minority communities to mobilize against racial and religious injustice….(More)”.
Paper by Jörg Hoffmann: “The expected economic and social benefits of data access and sharing are enormous. And yet, particularly in the B2B context, sharing of privately held data between companies has not taken off at efficient scale. This has already led to the adoption of sector-specific data governance and access regimes. Two of these regimes are enshrined in the PSD2, which introduced an access-to-account rule and a data portability rule for specific account information for third-party payment providers.
This paper analyses these sector-specific access and portability regimes and identifies regulatory shortcomings that should be addressed and that can serve as guidance for further data access regulation. It first develops regulatory guidelines built around the multiple regulatory dimensions of data and the potential adverse effects of overly broad data access regimes.
In this regard the paper assesses the role of factual data exclusivity in undertakings’ data-driven innovation incentives, the role of industrial-policy-driven market regulation within the principle of a free market economy, the impact of data sharing on consumer sovereignty and choice, and ultimately data-induced distortions of competition. It develops these findings by drawing on basic IP and information economics, the EU competition law case law pertaining to refusal-to-supply cases, the rise of ‘surveillance capitalism’, and current competition policy considerations regarding the envisioned preventive competition control regime tackling data-rich ‘undertakings of paramount importance for competition across markets’ in Germany. This is then followed by an analysis of the PSD2 access and portability regimes in light of the regulatory principles….(More)”.
Book by Dipayan Ghosh on “Designing a new digital social contract for our technological future…High technology presents a paradox. In just a few decades, it has transformed the world, making almost limitless quantities of information instantly available to billions of people and reshaping businesses, institutions, and even entire economies. But it also has come to rule our lives, addicting many of us to the march of megapixels across electronic screens both large and small.
Despite its undeniable value, technology is exacerbating deep social and political divisions in many societies. Elections influenced by fake news and unscrupulous hidden actors, the cyber-hacking of trusted national institutions, the vacuuming of private information by Silicon Valley behemoths, ongoing threats to vital infrastructure from terrorist groups and even foreign governments—all these concerns are now part of the daily news cycle and are certain to become increasingly serious into the future.
In this new world of endless technology, how can individuals, institutions, and governments harness its positive contributions while protecting each of us, no matter who or where we are?
In this book, a former Facebook public policy adviser who went on to assist President Obama in the White House offers practical ideas for using technology to create an open and accessible world that protects all consumers and civilians. As a computer scientist turned policymaker, Dipayan Ghosh answers the biggest questions about technology facing the world today. Providing clear and understandable explanations for complex issues, Terms of Disservice will guide industry leaders, policymakers, and the general public as we think about how to ensure that the Internet works for everyone, not just Silicon Valley….(More)”.
Els Torreele at StatNews: “…Imagine mobilizing the world’s brightest and most creative minds — from biotech and pharmaceutical industries, universities, government agencies, and more — to work together using all available knowledge, innovation, and infrastructure to develop an effective vaccine against Covid-19. A true “people’s vaccine” that would be made freely available to all people in all countries. That’s what an open letter by more than 140 world leaders and experts calls for.
Unfortunately, that is not how the race for a Covid-19 vaccine is being run. The rules of that game are oblivious to the goal of maximizing global health outcomes and access.
Despite a pipeline of more than 100 vaccine candidates reflecting massive public and private efforts, there exists no public-health-focused way to design or prioritize the development of the most promising candidates. Instead, the world is adopting a laissez-faire approach, letting individual groups and companies compete for marketing authorization, each with their proprietary vaccine candidate, and assuming that the winner of that race will be the best vaccine to tackle the pandemic.
Science thrives, and technological progress is made, when knowledge is exchanged and shared freely, generating collective intelligence by building on the successes and failures of others in real time instead of through secretive competition. Regrettably, market logic has come to overtake medicinal product innovation, including the unproven premise that competition is an efficient way to advance science and deliver the best solutions for public health….(More)”.
Report by Denise Linn Riedl: “Our cities are changing at an incredible pace. The technology being deployed on our sidewalks and streetlights has the potential to improve mobility, sustainability, connectivity, and city services.
Public value and public inclusion in this change, however, are not inevitable. Depending on how these technologies are deployed, they have the potential to increase inequities and distrust as much as they can create responsive government services.
Recognizing this tension, an initial coalition of local practitioners began collaborating in 2019 with the support of the Benton Institute for Broadband & Society. We combined knowledge of and personal experience with local governments to tackle a common question: What does procedural justice look like when cities deploy new technology?
This guide is meant for any local worker—inside or outside of government—who is helping to plan or implement technological change in their community. It’s a collection of experiences, cases, and best practices that we hope will be valuable and will make projects stronger, more sustainable, and more inclusive….(More)”.
AP Article by Matt O’Brien: “IBM is getting out of the facial recognition business, saying it’s concerned about how the technology can be used for mass surveillance and racial profiling.
Ongoing protests responding to the death of George Floyd have sparked a broader reckoning over racial injustice and a closer look at the use of police technology to track demonstrators and monitor American neighborhoods.
IBM is one of several big tech firms that had earlier sought to improve the accuracy of their face-scanning software after research found racial and gender disparities. But its new CEO is now questioning whether it should be used by police at all.
“We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies,” wrote CEO Arvind Krishna in a letter sent Monday to U.S. lawmakers.
IBM’s decision to stop building and selling facial recognition software is unlikely to affect its bottom line, since the tech giant is increasingly focused on cloud computing while an array of lesser-known firms have cornered the market for government facial recognition contracts.
“But the symbolic nature of this is important,” said Mutale Nkonde, a research fellow at Harvard and Stanford universities who directs the nonprofit AI For the People.
Nkonde said IBM shutting down a business “under the guise of advancing anti-racist business practices” shows that it can be done and makes it “socially unacceptable for companies who tweet Black Lives Matter to do so while contracting with the police.”…(More)”.