Someone Put Facial Recognition Tech onto Meta’s Smart Glasses to Instantly Dox Strangers


Article by Joseph Cox: “A pair of students at Harvard have built what big tech companies refused to release publicly due to the overwhelming risks and danger involved: smart glasses with facial recognition technology that automatically looks up someone’s face and identifies them. The students have gone a step further too. Their customized glasses also pull other information about their subject from around the web, including their home address, phone number, and family members. 

The project is designed to raise awareness of what is possible with this technology, and the pair are not releasing their code, AnhPhu Nguyen, one of the creators, told 404 Media. But the experiment, tested in some cases on unsuspecting people in the real world according to a demo video, still shows the razor-thin line between a world in which people can move around with relative anonymity and one where your identity and personal information can be pulled up in an instant by strangers.

Nguyen and co-creator Caine Ardayfio call the project I-XRAY. It uses a pair of Meta’s commercially available Ray-Ban smart glasses, and allows a user to “just go from face to name,” Nguyen said…(More)”.
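The students are not releasing their code, but the “face to name” step Nguyen describes is, at its core, a nearest-neighbor lookup over face embeddings. The sketch below is purely illustrative (the toy vectors, names, and similarity threshold are invented; a real system would use a trained face-embedding model and an index over millions of identities):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def identify(face_embedding, database, threshold=0.8):
    """Return the best-matching name, or None if no match clears the threshold.

    `database` maps names to embedding vectors; here they are toy
    3-dimensional lists standing in for real model output.
    """
    best_name, best_score = None, threshold
    for name, ref in database.items():
        score = cosine(face_embedding, ref)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

db = {"alice": [1.0, 0.0, 0.0], "bob": [0.0, 1.0, 0.0]}
print(identify([0.9, 0.1, 0.0], db))  # prints: alice
```

In practice the hard parts are the embedding model and the scale of the identity index, which is precisely why a wearable, always-on version of this lookup raises the privacy concerns the project was built to demonstrate.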

From Bits to Biology: A New Era of Biological Renaissance powered by AI


Article by Milad Alucozai: “…A new wave of platforms is emerging to address these limitations. Designed with the modern scientist in mind, these platforms prioritize intuitive interfaces, enabling researchers with diverse computational backgrounds to easily navigate and analyze data. They emphasize collaboration, allowing teams to share data and insights seamlessly. And they increasingly incorporate artificial intelligence, offering powerful tools for accelerating analysis and discovery. This shift marks a move towards more user-centric, efficient, and collaborative computational biology, empowering researchers to tackle increasingly complex biological questions. 

Emerging Platforms: 

  • Seqera Labs: Spearheading a movement towards efficient and reproducible research, Seqera Labs provides a suite of tools, including the popular open-source workflow language Nextflow. Their platform empowers researchers to design scalable and reproducible data analysis pipelines, particularly for cloud environments. By emphasizing automation and flexibility, Seqera streamlines complex computational workflows across diverse biological disciplines, making data-intensive research scalable and collaborative. 
  • Form Bio: Aimed at democratizing access to computational biology, Form Bio provides a comprehensive tech suite built to enable accelerated cell and gene therapy development and computational biology at scale. Its emphasis on collaboration and intuitive design fosters a more inclusive research environment to help organizations streamline therapeutic development and reduce time-to-market.  
  • Code Ocean: Addressing the critical need for reproducibility in research, Code Ocean provides a unique platform for sharing and executing research code, data, and computational environments. By encapsulating these elements in a portable and reproducible format, Code Ocean promotes transparency and facilitates the reuse of research methods, ultimately accelerating scientific discovery. 
  • Pluto Biosciences: Championing a collaborative approach to biological discovery, Pluto Biosciences offers an interactive platform for visualizing and analyzing complex biological data. Its intuitive tools empower researchers to explore data, generate insights, and seamlessly share findings with collaborators. This fosters a more dynamic and interactive research process, facilitating knowledge sharing and accelerating breakthroughs. 

 Open Source Platforms: 

  • Galaxy: A widely used open-source platform for bioinformatics analysis. It provides a user-friendly web interface and a vast collection of tools for various tasks, from sequence analysis to data visualization. Its open-source nature fosters community development and customization, making it a versatile tool for diverse research needs. 
  • Bioconductor: A prominent open-source platform for bioinformatics analysis that shares Galaxy’s commitment to accessibility and community-driven development. It leverages the power of the R programming language, providing a wealth of packages for tasks ranging from genomic data analysis to statistical modeling. Its open-source nature fosters a collaborative environment where researchers can freely access, utilize, and contribute to a growing collection of tools…(More)”

Mapmatics: A Mathematician’s Guide to Navigating the World


Book by Paulina Rowińska: “Why are coastlines and borders so difficult to measure? How does a UPS driver deliver hundreds of packages in a single day? And where do elusive serial killers hide? The answers lie in the crucial connection between maps and math.

In Mapmatics, mathematician Paulina Rowińska leads us on a riveting journey around the globe to discover how maps and math are deeply entwined, and always have been. From a sixteenth-century map, an indispensable navigation tool that exaggerates the size of northern countries, to public transport maps that both guide and confound passengers, to congressional maps that can empower or silence whole communities, she reveals how maps and math have shaped not only our sense of space but our worldview. In her hands, we learn how to read maps like a mathematician—to extract richer information and, just as importantly, to question our conclusions by asking what we don’t see…(More)”.

Who Owns AI?


Paper by Amy Whitaker: “While artificial intelligence (AI) stands to transform artistic practice and creative industries, little has been theorized about who owns AI for creative work. Lawsuits brought against AI companies such as OpenAI and Meta under copyright law invite novel reconsideration of the value of creative work. This paper synthesizes across copyright, hybrid practice, and cooperative governance to work toward collective ownership and decision-making. This paper adds to research in arts entrepreneurship because copyright and shared value are so vital to the livelihood of working artists, including writers, filmmakers, and others in the creative industries. Sarah Silverman’s lawsuit against OpenAI is used as the main case study. The conceptual framework of material and machine, one and many, offers a lens onto value creation and shared ownership of AI. The framework includes a reinterpretation of the fourth factor of fair use under U.S. copyright law to refocus on the doctrinal language of value. AI uses the entirety of creative work in a way that is overlooked because of the small scale of one whole work relative to the overall size of the AI model. Yet a theory of value for creative work gives it dignity in its smallness, the way that one vote still has dignity in a national election of millions. As we navigate these frontiers of AI, experimental models pioneered by artists may be instructive far outside the arts…(More)”.

Scientists around the world call to protect research on one of humanity’s greatest short-term threats – Disinformation


Forum on Democracy and Information: “At a critical time for understanding digital communications’ impact on societies, research on disinformation is endangered. 

In August, researchers around the world bid farewell to CrowdTangle – the Meta-owned social media monitoring tool. The decision by Meta to close the number one platform used to track mis- and disinformation in a major election year, only to present its alternative tool, the Meta Content Library and API, has been met with a barrage of criticism.

If, as suggested by the World Economic Forum’s 2024 global risk report, disinformation is one of the biggest short-term threats to humanity, our collective ability to understand how it spreads and impacts our society is crucial. Just as we would not impede scientific research into the spread of viruses and disease, nor into natural ecosystems or other historical and social sciences, disinformation research must be permitted to be carried out unimpeded and with access to information needed to understand its complexity. Understanding the political economy of disinformation as well as its technological dimensions is also a matter of public health, democratic resilience, and national security.

By directly affecting the research community’s ability to open social media black boxes, this radical decision will also, in turn, hamper public understanding of how technology affects democracy. Public interest scrutiny is also essential for the next era of technology, notably for the world’s largest AI systems, which are similarly proprietary and opaque. The research community is already calling on AI companies to learn from the mistakes of social media and guarantee protections for good faith research. The solution falls on multiple shoulders and the global scientific community, civil society, public institutions and philanthropies must come together to meaningfully foster and protect public interest research on information and democracy…(More)”.

Unlocking AI for All: The Case for Public Data Banks


Article by Kevin Frazier: “The data relied on by OpenAI, Google, Meta, and other artificial intelligence (AI) developers is not readily available to other AI labs. Google and Meta relied, in part, on data gathered from their own products to train and fine-tune their models. OpenAI used tactics to acquire data that now would not work or may be more likely to be found in violation of the law (whether such tactics violated the law when originally used by OpenAI is being worked out in the courts). Upstart labs as well as research outfits find themselves with a dearth of data. Full realization of the positive benefits of AI, such as being deployed in costly but publicly useful ways (think tutoring kids or identifying common illnesses), as well as complete identification of the negative possibilities of AI (think perpetuating cultural biases) requires that labs other than the big players have access to quality, sufficient data.

The proper response is not to return to an exploitative status quo. Google, for example, may have relied on data from YouTube videos without meaningful consent from users. OpenAI may have hoovered up copyrighted data with little regard for the legal and social ramifications of that approach. In response to these questionable approaches, data has (rightfully) become harder to acquire. Cloudflare has equipped websites with the tools necessary to limit data scraping—the process of extracting data from another computer program. Regulators have developed new legal limits on data scraping or enforced old ones. Data owners have become more defensive over their content and, in some cases, more litigious. All of these largely positive developments from the perspective of data creators (which is to say, anyone and everyone who uses the internet) diminish the odds of newcomers entering the AI space. The creation of a public AI training data bank is necessary to ensure the availability of enough data for upstart labs and public research entities. Such banks would prevent those new entrants from having to go down the costly and legally questionable path of trying to hoover up as much data as possible…(More)”.
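The scraping limits the article mentions are commonly signaled through a site’s robots.txt file, which well-behaved crawlers are expected to honor. Python’s standard library can evaluate such rules; in the minimal sketch below the rules are parsed inline and the bot names are invented for illustration (in practice the file would be fetched from the site itself):

```python
from urllib.robotparser import RobotFileParser

# Rules parsed inline for illustration; a real crawler would fetch
# https://<site>/robots.txt. The user-agent names are hypothetical.
rp = RobotFileParser()
rp.parse([
    "User-agent: DataBot",
    "Disallow: /private/",
    "",
    "User-agent: *",
    "Allow: /",
])

print(rp.can_fetch("DataBot", "https://example.com/private/page"))  # False
print(rp.can_fetch("OtherBot", "https://example.com/public/page"))  # True
```

Checks like this are voluntary, which is why the article points to server-side tools (such as Cloudflare’s) and legal limits as the mechanisms that actually make data harder to acquire.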

The Deletion Remedy


Paper by Daniel Wilf-Townsend: “A new remedy has emerged in the world of technology governance. Where someone has wrongfully obtained or used data, this remedy requires them to delete not only that data, but also to delete tools such as machine learning models that they have created using the data. Model deletion, also called algorithmic disgorgement or algorithmic destruction, has been increasingly sought in both private litigation and public enforcement actions. As its proponents note, model deletion can improve the regulation of privacy, intellectual property, and artificial intelligence by providing more effective deterrence and better management of ongoing harms.

But, this article argues, model deletion has a serious flaw. In its current form, it has the possibility of being a grossly disproportionate penalty. Model deletion requires the destruction of models whose training included illicit data in any degree, with no consideration of how much (or even whether) that data contributed to any wrongful gains or ongoing harms. Model deletion could thereby cause unjust losses in litigation and chill useful technologies.

This article works toward a well-balanced doctrine of model deletion by building on the remedy’s equitable origins. It identifies how traditional considerations in equity—such as a defendant’s knowledge and culpability, the balance of the hardships, and the availability of more tailored alternatives—can be applied in model deletion cases to mitigate problems of disproportionality. By accounting for proportionality, courts and agencies can develop a doctrine of model deletion that takes advantage of its benefits while limiting its potential excesses…(More)”.

Zillow introduces First Street’s comprehensive climate risk data on for-sale listings across the US


Press Release: “Zillow® is introducing climate risk data, provided by First Street, the standard for climate risk financial modeling, on for-sale property listings across the U.S. Home shoppers will gain insights into five key risks—flood, wildfire, wind, heat and air quality—directly from listing pages, complete with risk scores, interactive maps and insurance requirements.

With more than 80% of buyers now considering climate risks when purchasing a home, this feature provides a clearer understanding of potential hazards, helping buyers to better assess long-term affordability and plan for the future. In assisting buyers to navigate the growing risk of climate change, Zillow is the only platform to feature tailored insurance recommendations alongside detailed historical insights, showing if or when a property has experienced past climate events, such as flooding or wildfires…

When using Zillow’s search map view, home shoppers can explore climate risk data through an interactive map highlighting five key risk categories: flood, wildfire, wind, heat and air quality. Each risk is color-coded and has its own color scale, helping consumers intuitively navigate their search. Informative labels give more context to climate data and link to First Street’s property-specific climate risk reports for full insights.

When viewing a for-sale property on Zillow, home shoppers will see a new climate risk section. This section includes a separate module for each risk category—flood, wildfire, wind, heat and air quality—giving detailed, property-specific data from First Street. This section not only shows how these risks might affect the home now and in the future, but also provides crucial information on wind, fire and flood insurance requirements.

Nationwide, more new listings came with major climate risk, compared to homes listed for sale five years ago, according to a Zillow analysis conducted in August. That trend holds true for all five of the climate risk categories Zillow analyzed. Across all new listings in August, 16.7% were at major risk of wildfire, while 12.8% came with a major risk of flooding…(More)”.

The paradox of climate data in West Africa: growing urgency coupled with diminishing accessibility


Cirad: “In 2022, a prolonged drought devastated maize crops in northern Burkina Faso, leaving two million people without sufficient food resources. This dramatic situation could have been better anticipated and its impacts could have been mitigated with the collection and equitable sharing of specific data: that of agrometeorology, the science that studies the effects of meteorological, climatological and hydrological factors on crops.

Although it is too late to prevent the 2022 drought, protecting people from future droughts remains an urgent priority, especially in Africa, a continent where climate change poses a serious threat to rainfed agriculture, its main agricultural and economic activity.

To anticipate these climate risks, it is essential to have access to reliable meteorological data, which is crucial for ensuring sustainable and resilient agricultural practices. Yet in West Africa, the accessibility and reliability of this data are increasingly threatened and face unprecedented diplomatic, economic and security challenges…(More)”.

Harnessing digital footprint data for population health: a discussion on collaboration, challenges and opportunities in the UK


Paper by Romana Burgess et al: “Digital footprint data are inspiring a new era in population health and well-being research. Linking these novel data with other datasets is critical for future research wishing to use these data for the public good. For such efforts to succeed, collaboration among industry, academics and policy-makers is vital. Therefore, we discuss the benefits and obstacles for these stakeholder groups in using digital footprint data for research in the UK. We advocate for policy-makers’ inclusion in research efforts, stress the exceptional potential of digital footprint research to impact policy-making and explore the role of industry as data providers, with a focus on shared value, commercial sensitivity, resource requirements and streamlined processes. We underscore the importance of multidisciplinary approaches, consumer trust and ethical considerations in navigating methodological challenges and further call for increased public engagement to enhance societal acceptability. Finally, we discuss how to overcome methodological challenges, such as reproducibility and sharing of learnings, in future collaborations. By adopting a multiperspective approach to outlining the challenges of working with digital footprint data, our contribution helps to ensure that future research can navigate these challenges effectively while remaining reproducible, ethical and impactful…(More)”