AI Can Predict Potential Nutrient Deficiencies from Space


Article by Rachel Berkowitz: “Micronutrient deficiencies afflict more than two billion people worldwide, including 340 million children. This lack of vitamins and minerals can have serious health consequences. But diagnosing deficiencies early enough for effective treatment requires expensive, time-consuming blood draws and laboratory tests.

New research provides a more efficient approach. Computer scientist Elizabeth Bondi and her colleagues at Harvard University used publicly available satellite data and artificial intelligence to reliably pinpoint geographical areas where populations are at high risk of micronutrient deficiencies. This analysis could pave the way for early public health interventions.

Existing AI systems can use satellite data to predict localized food security issues, but they typically rely on directly observable features. For example, agricultural productivity can be estimated from views of vegetation. Micronutrient availability is harder to calculate. After seeing research showing that areas near forests tend to have better dietary diversity, Bondi and her colleagues were inspired to identify lesser-known markers for potential malnourishment. Their work shows that combining data such as vegetation cover, weather and water presence can suggest where populations will lack iron, vitamin B12 or vitamin A.

The team examined raw satellite measurements and consulted with local public health officials, then used AI to sift through the data and pinpoint key features. For instance, a food market, inferred from visible roads and buildings, proved vital for predicting a community’s risk level. The researchers then linked these features to the specific nutrients lacking in the populations of four regions of Madagascar. They used real-world biomarker data (blood samples tested in labs) to train and test their AI program…(More)”.
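
The article describes the modeling only at a high level. As a rough illustration of the final step (training and testing a classifier on satellite-derived features against lab-confirmed biomarker labels), the Python sketch below shows one plausible setup; the feature names, file name, and choice of a random-forest model are assumptions for illustration, not details from the study.

```python
# Hypothetical sketch of the modeling step described above. The features
# (vegetation cover, rainfall, surface water, inferred market access) and the
# random-forest model are illustrative assumptions, not the study's actual setup.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# One row per surveyed community: satellite-derived features plus a binary
# label derived from lab-tested blood samples (e.g., iron deficiency).
df = pd.read_csv("community_observations.csv")  # hypothetical file
features = ["vegetation_cover", "rainfall_mm", "surface_water_pct", "market_access_score"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["iron_deficient"],
    test_size=0.2, stratify=df["iron_deficient"], random_state=0,
)

model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_train, y_train)

# Evaluate against held-out biomarker labels, mirroring the article's
# description of training and testing against real-world blood-sample data.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Held-out AUC: {auc:.2f}")
```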

“Co-construction” in deliberative democracy: lessons from the French Citizens’ Convention for Climate


Paper by Louis-Gaëtan Giraudet et al: “Launched in 2019, the French Citizens’ Convention for Climate (CCC) tasked 150 randomly chosen citizens with proposing fair and effective measures to fight climate change. This was to be fulfilled through an “innovative co-construction procedure”, involving unspecified external input alongside that of the citizens. Did inputs from the steering bodies undermine the citizens’ accountability for the output? Did co-construction help the output resonate with the general public, as is expected from a citizens’ assembly? To answer these questions, we build on our unique experience in observing the CCC proceedings and documenting them with qualitative and quantitative data. We find that the steering bodies’ input, albeit significant, did not impair the citizens’ agency, creativity, and freedom of choice. While it succeeded in creating consensus among the citizens involved, this co-constructive approach failed to generate significant support among the broader public. These results call for a strengthening of the commitment structure that determines how follow-up on the proposals from a citizens’ assembly should be conducted…(More)”.

10 learnings from considering AI Ethics through global perspectives


Blog by Sampriti Saxena and Stefaan G. Verhulst: “Artificial Intelligence (AI) technologies have the potential to solve the world’s biggest challenges. However, they also come with certain risks to individuals and groups. As these technologies become more prevalent around the world, we need to consider the ethical ramifications of AI use to identify and rectify potential harms. Equally, we need to consider the various associated issues from a global perspective, not assuming that a single approach will satisfy different cultural and societal expectations.

In February 2021, The Governance Lab (The GovLab), the NYU Tandon School of Engineering, the Global AI Ethics Consortium (GAIEC), the Center for Responsible AI @ NYU (R/AI), and the Technical University of Munich’s (TUM) Institute for Ethics in Artificial Intelligence (IEAI) launched AI Ethics: Global Perspectives. …A year and a half later, the course has grown to 38 modules, contributed by 40 faculty members representing over 20 countries. Our conversations with faculty members and our experiences with the course modules have yielded a wealth of knowledge about AI ethics. In keeping with the values of openness and transparency that underlie the course, we summarized these insights into ten learnings to share with a broader audience. In what follows, we outline our key lessons from experts around the world.

Our Ten Learnings:

  1. Broaden the Conversation
  2. The Public as a Stakeholder
  3. Centering Diversity and Inclusion in Ethics
  4. Building Effective Systems of Accountability
  5. Establishing Trust
  6. Ask the Right Questions
  7. The Role of Independent Research
  8. Humans at the Center
  9. Our Shared Responsibility
  10. The Challenge and Potential for a Global Framework…(More)”.

From Knowing to Doing: Operationalizing the 100 Questions for Air Quality Initiative


Report by Jessica Seddon, Stefaan G. Verhulst and Aimee Maron: “…summarizes the September 2021 capstone event that wrapped up the 100 Questions for Air Quality initiative, led by The GovLab and the World Resources Institute (WRI). This initiative brought together a group of 100 atmospheric scientists, policy experts, academics and data providers from around the world to identify the most important questions for setting a new, high-impact agenda for further investments in data and data science. After a thorough process of sourcing, clustering, and ranking the questions, the public was asked to vote. The results were surprising: the most important question was not about what new data or research is needed, but about how to do more with what we already know to generate political will and investments in air quality solutions.

Co-hosted by the Clean Air Fund, the Climate and Clean Air Coalition, and the Clean Air Catalyst, the 2021 roundtable discussion focused on answering that question. This conference proceedings summary reflects early findings from that session and offers a starting point for a much-needed conversation on data-to-action. The participating experts and practitioners, drawn from academia, business, foundations, government, multilateral organizations, nonprofits, and think tanks, were not identified so that they could speak freely…(More)”.

The Behavioral Economics Guide 2022


Editorial by Kathleen Vohs & Avni Shah: “This year’s Behavioral Economics Guide editorial reviews recent work in the areas of self-control and goals. To do so, we distilled the latest findings and advanced a set of guiding principles termed the FRESH framework: Fatigue, Reminders, Ease, Social influence, and Habits. Example findings reviewed include physicians writing more opioid prescriptions later in the workday than earlier (fatigue); the use of digital reminders to prompt people to re-engage with goals, such as personal savings, from which they may have turned away (reminders); visual displays that give people data on their behavioral patterns so as to enable feedback and active monitoring (ease); the importance of geographically local peers in changing behaviors such as residential water use (social influence); and digital and other tools that help people break the link between aspects of the environment and problematic behaviors (habits). We used the FRESH framework as a potential guide for thinking about the kinds of behaviors people can perform in achieving the goal of being environmental stewards of a more sustainable future…(More)”.

Technology is Not Neutral: A Short Guide to Technology Ethics


Book by Stephanie Hare: “It seems that just about every new technology that we bring to bear on improving our lives brings with it some downside, side effect or unintended consequence.

These issues can pose very real and growing ethical problems for all of us. For example, automated facial recognition can make life easier and safer for us – but it also poses huge issues with regard to privacy, ownership of data and even identity theft. How do we understand and frame these debates, and work out strategies at personal and governmental levels?

Technology Is Not Neutral: A Short Guide to Technology Ethics addresses one of today’s most pressing problems: how to create and use tools and technologies to maximize benefits and minimize harms. Drawing on the author’s experience as a technologist, political risk analyst and historian, the book offers a practical and cross-disciplinary approach that will inspire anyone creating, investing in or regulating technology, and it will empower all readers to better hold technology to account…(More)”.

New laws to strengthen Canadians’ privacy protection and trust in the digital economy


Press Release: “Canadians increasingly rely on digital technology to connect with loved ones, to work and to innovate. That’s why the Government of Canada is committed to making sure Canadians can benefit from the latest technologies, knowing that their personal information is safe and secure and that companies are acting responsibly.

Today, the Honourable François-Philippe Champagne, Minister of Innovation, Science and Industry, together with the Honourable David Lametti, Minister of Justice and Attorney General of Canada, introduced the Digital Charter Implementation Act, 2022, which will significantly strengthen Canada’s private sector privacy law, create new rules for the responsible development and use of artificial intelligence (AI), and continue advancing the implementation of Canada’s Digital Charter. As such, the Digital Charter Implementation Act, 2022 will include three proposed acts: the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence and Data Act.

The proposed Consumer Privacy Protection Act will address the needs of Canadians who rely on digital technology and respond to feedback received on previous proposed legislation. This law will ensure that the privacy of Canadians will be protected and that innovative businesses can benefit from clear rules as technology continues to evolve. This includes:

  • increasing control and transparency when Canadians’ personal information is handled by organizations;
  • giving Canadians the freedom to move their information from one organization to another in a secure manner;
  • ensuring that Canadians can request that their information be disposed of when it is no longer needed;
  • establishing stronger protections for minors, including by limiting organizations’ right to collect or use information on minors and holding organizations to a higher standard when handling minors’ information;
  • providing the Privacy Commissioner of Canada with broad order-making powers, including the ability to order a company to stop collecting data or using personal information; and
  • establishing significant fines for non-compliant organizations—with fines of up to 5% of global revenue or $25 million, whichever is greater, for the most serious offences.

The proposed Personal Information and Data Protection Tribunal Act will enable the creation of a new tribunal to facilitate the enforcement of the Consumer Privacy Protection Act. 

The proposed Artificial Intelligence and Data Act will introduce new rules to strengthen Canadians’ trust in the development and deployment of AI systems, including:

  • protecting Canadians by ensuring high-impact AI systems are developed and deployed in a way that identifies, assesses and mitigates the risks of harm and bias;
  • establishing an AI and Data Commissioner to support the Minister of Innovation, Science and Industry in fulfilling ministerial responsibilities under the Act, including by monitoring company compliance, ordering third-party audits, and sharing information with other regulators and enforcers as appropriate; and
  • outlining clear criminal prohibitions and penalties regarding the use of data obtained unlawfully for AI development or where the reckless deployment of AI poses serious harm and where there is fraudulent intent to cause substantial economic loss through its deployment…(More)”.

How Period-Tracker Apps Treat Your Data, and What That Means if Roe v. Wade Is Overturned


Article by Nicole Nguyen and Cordilia James: “You might not talk to your friends about your monthly cycle, but there’s a good chance you talk to an app about it. And why not? Period-tracking apps are more convenient than using a diary, and the insights are more interesting, too. 

But how much do you know about the ways apps and trackers collect, store—and sometimes share—your fertility and menstrual-cycle data?

The question has taken on new importance following the leak of a draft Supreme Court opinion that would overturn Roe v. Wade. Roe established a constitutional right to abortion, and should the court reverse its 1973 decision, about half the states in the U.S. are likely to restrict or outright ban the procedure.

Phone and app data have long been shared and sold without prominent disclosure, often for advertising purposes. HIPAA, aka the Health Insurance Portability and Accountability Act, might protect information shared between you and your healthcare provider, but it doesn’t typically apply to data you put into an app, even a health-related one. Flo Health Inc., maker of a popular period and ovulation tracker, settled with the Federal Trade Commission in 2021 for sharing sensitive health data with Facebook without making the practice clear to users.

The company completed an independent privacy audit earlier this year. “We remain committed to ensuring the utmost privacy for our users and want to make it clear that Flo does not share health data with any company,” a spokeswoman said.

In a scenario where Roe is overturned, your digital breadcrumbs—including the kind that come from period trackers—could be used against you in states where laws criminalize aiding in or undergoing abortion, say legal experts.

“The importance of menstrual data is not merely speculative. It has been relevant to the government before, in investigations and restrictions,” said Leah Fowler, research director at University of Houston’s Health Law and Policy Institute. She cited a 2019 hearing where Missouri’s state health department admitted to keeping a spreadsheet of Planned Parenthood abortion patients, which included the dates of their last menstrual period.

Prosecutors have also obtained other types of digital information, including text messages and search histories, as evidence for abortion-related cases…(More)”.

To Play Is the Thing: How Game Design Principles Can Make Online Deliberation Compelling


Paper by John Gastil: “This essay draws from game design to improve the prospects of democratic deliberation during government consultation with the public. The argument begins by reviewing the problem of low-quality deliberation in contemporary discourse, then explains how games can motivate participants to engage in demanding behaviors, such as deliberation. Key design features include: the origin, governance, and oversight of the game; the networked small groups at the center of the game; the objectives of these groups; the purpose of artificial intelligence and automated metrics for measuring deliberation; the roles played by public officials and nongovernmental organizations during the game; and the long-term payoff of playing the game for both its convenors and its participants. The essay concludes by considering this project’s wider theoretical significance for deliberative democracy, the first steps for governments and nonprofit organizations adopting this design, and the hazards of using advanced digital technology…(More)”.

Are blockchains decentralized?


Trail of Bits report: “Blockchains can help push the boundaries of current technology in useful ways. However, to make good risk decisions involving exciting and innovative technologies, people need demonstrable facts that are arrived at through reproducible methods and open data.

We believe the risks inherent in blockchains and cryptocurrencies have been poorly described and are often ignored—or even mocked—by those seeking to cash in on this decade’s gold rush.

In response to recent market turmoil and plummeting prices, proponents of cryptocurrency point to the technology’s fundamentals as sound. Are they?

Over the past year, Trail of Bits was engaged by the Defense Advanced Research Projects Agency (DARPA) to examine the fundamental properties of blockchains and the cybersecurity risks associated with them. DARPA wanted to understand those security assumptions and determine to what degree blockchains are actually decentralized.

To answer DARPA’s question, Trail of Bits researchers performed analyses and meta-analyses of prior academic work and of real-world findings that had never before been aggregated, updating prior research with new data in some cases. They also did novel work, building new tools and pursuing original research.

The resulting report is a 30-thousand-foot view of what’s currently known about blockchain technology. Whether these findings affect financial markets is out of the scope of the report: our work at Trail of Bits is entirely about understanding and mitigating security risk.

The report also contains links to the substantial supporting and analytical materials. Our findings are reproducible, and our research is open-source and freely distributable. So you can dig in for yourself.

Key findings

  • Blockchain immutability can be broken not by exploiting cryptographic vulnerabilities, but instead by subverting the properties of a blockchain’s implementations, networking, and consensus protocols. We show that a subset of participants can garner undue, centralized control over the entire system:
    • While the encryption used within cryptocurrencies is for all intents and purposes secure, it does not guarantee security, as touted by proponents.
    • Bitcoin traffic is unencrypted; any third party on the network route between nodes (e.g., internet service providers, Wi-Fi access point operators, or governments) can observe and choose to drop any messages they wish (see the sketch after this list).
    • Tor is now the largest network provider in Bitcoin; about 55% of Bitcoin nodes were addressable only via Tor (as of March 2022). A malicious Tor exit node can modify or drop traffic…(More)”.
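
To make the unencrypted-traffic finding concrete: every Bitcoin P2P message begins with a plaintext 24-byte header (network magic, command name, payload length, checksum), so an on-path observer can read message types straight off the TCP stream. The minimal Python sketch below is our illustration, not code from the report; it parses that standard header from captured bytes.

```python
# Minimal sketch (ours, not the report's) of why unencrypted Bitcoin P2P
# traffic is observable: every message begins with a plaintext 24-byte header
# that any on-path party can parse straight out of the TCP stream.
import struct

MAGIC_MAINNET = bytes.fromhex("f9beb4d9")  # mainnet network magic

def parse_header(raw: bytes):
    """Parse a Bitcoin P2P message header: magic, command, length, checksum."""
    if len(raw) < 24 or raw[:4] != MAGIC_MAINNET:
        return None
    command = raw[4:16].rstrip(b"\x00").decode("ascii")  # e.g. "verack", "tx"
    payload_len, checksum = struct.unpack_from("<I4s", raw, 16)
    return command, payload_len, checksum

# A passive observer holding captured bytes can read the message type in the
# clear. This is a valid "verack" message: empty payload, whose standard
# checksum (first four bytes of double SHA-256 of nothing) is 5df6e0e2.
captured = MAGIC_MAINNET + b"verack".ljust(12, b"\x00") \
         + struct.pack("<I", 0) + bytes.fromhex("5df6e0e2")
print(parse_header(captured))  # ('verack', 0, b']\xf6\xe0\xe2')
```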