Seeking data sovereignty, a First Nation introduces its own licence


Article by Caitrin Pilkington: “The Łı́ı́dlı̨ı̨ Kų́ę́ First Nation, or LKFN, says it is partnering with the nearby Scotty Creek research facility, outside Fort Simpson, to introduce a new application process for researchers. 

The First Nation, which also plans to create a compendium of all research gathered on its land, says the approach will be the first of its kind in the Northwest Territories.

LKFN says the current NWT-wide licensing system will still stand, but that a separate system addressing specific concerns was urgently required.

In the wake of a recent review of post-secondary education in the North, changes like this are being positioned as part of a larger shift in perspective about southern research taking place in the territory. 

LKFN’s initiative was approved by its council on February 7. As of April 1, any researcher hoping to study at Scotty Creek and in LKFN territory has been required to fill out a new application form. 

“When we get permits now, we independently review them and make sure certain topics are addressed in the application, so that researchers and students understand not just Scotty Creek, but the people on the land they’re on,” said Dieter Cazon, LKFN’s manager of lands and resources….

Currently, all research licensing goes through the Aurora Research Institute. The ARI’s form covers many of the same areas as the new LKFN form, but the institute has slightly different requirements for researchers.

The ARI application form asks researchers to:

  • share how they plan to release data, to ensure confidentiality;
  • describe their methodology; and
  • indicate which communities they expect to be affected by their work.

The Łı́ı́dlı̨ı̨ Kų́ę́ First Nation form asks researchers to:

  • explicitly declare that all raw data will be co-owned by the Łı́ı́dlı̨ı̨ Kų́ę́ First Nation;
  • disclose the specific equipment and infrastructure they plan to install on the land, lay out their demobilization plan, and note how often they will be travelling through the land for data collection; and
  • explain the steps they’ve taken to educate themselves about Łı́ı́dlı̨ı̨ Kų́ę́ First Nation customs and codes of research practice that will apply to their work with the community.

Cazon says the new approach will work in tandem with ARI’s system…(More)”.

Tech Inclusion for Excluded Communities


Essay by Linda Jakob Sadeh & Smadar Nehab: “Companies often offer practical trainings to address the problem of diversity in high tech, acknowledging the disadvantages that members of excluded communities face and trying to level the playing field in terms of expertise and skills. But such trainings often fail to generate mass participation among excluded communities in tech professions. Beyond the professional knowledge and hands-on technical experience that these trainings provide, the fundamental social, ethnic, and economic barriers often remain unaddressed.

Thus, a paradoxical situation arises: On the one hand, certain communities are excluded from high tech and from the social mobility it affords. On the other hand, even when well-meaning companies wish to hire from these communities and implement diversity and inclusion measures that should make doing so possible, the pool of qualified and interested candidates often remains small. Members of the excluded communities remain discouraged from studying or training for these professions and from joining economic growth sectors, particularly high tech.

Tech Inclusion, the model we advance in this article, seeks to untangle this paradox. It takes a sincere look at the social and economic barriers that prevent excluded communities from participating in the tech industry. It suggests that the technology industry can be a driving force for inclusion if we turn the inclusion paradigm on its head, by bringing the industry to the excluded community, instead of trying to bring the excluded community to the industry, while cultivating a supportive environment for both potential candidates and firms…(More)”.

Magic Numbers


Essay by Alana Mohamed: “…The willingness to believe in the “algorithm” as though it were a kind of god is not entirely surprising. New technologies have long been incorporated into spiritual practices, especially during times of mass crisis. In the mid-to-late 19th century, emergent technologies from the lightbulb to the telephone called the limitations of the physical world into question. New spiritual leaders, beliefs, and full-blown religions cropped up, inspired by the invisible electric currents powering scientific developments. If we could summon light and sound by unseen forces, what other invisible specters lurked beneath the surface of everyday life?

The casualties of the U.S. Civil War gave birth to new spiritual practices, including contacting the dead through spirit photography and the telegraph dial. Practices like table rapping used fairly low-tech objects — walls, tables — as conduits to the spirit realm, where ghosts would tap out responses. The rapping noise was reminiscent of Morse code, leading to comparisons with the telegraph. In fact, in 1854, a U.S. senator campaigned for a scientific commission that would establish a “spiritual telegraph” between our world and the spiritual world. (He was unsuccessful.)

William Mumler’s practice of spirit photography is perhaps better known. Mumler claimed that he could photograph a dead relative or loved one when photographing a living subject. His most famous photograph depicts the widowed Mary Todd Lincoln with the shadowy image of her deceased husband holding her shoulder. Though widely debunked as a fraud, the practice itself continued on, even earning a book written in its defense by Sir Arthur Conan Doyle.

Similar investigations into otherworldly communication and esoteric knowledge would be mainstreamed after World War I, bolstered by the creation of the radio and wireless telegraphy. Amid a boom in table rapping, spirit photography, and the host of usual suspects, Thomas Edison spoke openly about his hopes to create a machine, based on early gramophones, to communicate with the dead, specifically referencing the work of mediums and spiritualists. Radio, in particular, provided a new way to think about the physical and spiritual worlds, with its language of tuning in, channels, frequencies, and wavelengths still employed today…(More)”.

Facebook-owner Meta to share more political ad targeting data


Article by Elizabeth Culliford: “Facebook owner Meta Platforms Inc (FB.O) will share more data on targeting choices made by advertisers running political and social-issue ads in its public ad database, it said on Monday.

Meta said it would also include detailed targeting information for these individual ads in its “Facebook Open Research and Transparency” database used by academic researchers, in an expansion of a pilot launched last year.

“Instead of analyzing how an ad was delivered by Facebook, it’s really going and looking at an advertiser strategy for what they were trying to do,” said Jeff King, Meta’s vice president of business integrity, in a phone interview.

The social media giant has faced pressure in recent years to provide transparency around targeted advertising on its platforms, particularly around elections. In 2018, it launched a public ad library, though some researchers criticized it for glitches and a lack of detailed targeting data. Meta said the ad library will soon show a summary of targeting information for social issue, electoral or political ads run by a page…. The company has run various programs with external researchers as part of its transparency efforts. Last year, it said a technical error meant flawed data had been provided to academics in its “Social Science One” project…(More)”.

The Era of Borderless Data Is Ending


David McCabe and Adam Satariano at the New York Times: “Every time we send an email, tap an Instagram ad or swipe our credit cards, we create a piece of digital data.

The information pings around the world at the speed of a click, becoming a kind of borderless currency that underpins the digital economy. Largely unregulated, the flow of bits and bytes helped fuel the rise of transnational megacompanies like Google and Amazon and reshaped global communications, commerce, entertainment and media.

Now the era of open borders for data is ending.

France, Austria, South Africa and more than 50 other countries are accelerating efforts to control the digital information produced by their citizens, government agencies and corporations. Driven by security and privacy concerns, as well as economic interests and authoritarian and nationalistic urges, governments are increasingly setting rules and standards about how data can and cannot move around the globe. The goal is to gain “digital sovereignty.”

Consider that:

  • In Washington, the Biden administration is circulating an early draft of an executive order meant to stop rivals like China from gaining access to American data.
  • In the European Union, judges and policymakers are pushing efforts to guard information generated within the 27-nation bloc, including tougher online privacy requirements and rules for artificial intelligence.
  • In India, lawmakers are moving to pass a law that would limit what data could leave the nation of almost 1.4 billion people.
  • The number of laws, regulations and government policies that require digital information to be stored in a specific country more than doubled to 144 from 2017 to 2021, according to the Information Technology and Innovation Foundation.

While countries like China have long cordoned off their digital ecosystems, the imposition of more national rules on information flows is a fundamental shift in the democratic world and alters how the internet has operated since it became widely commercialized in the 1990s.

The repercussions for business operations, privacy and how law enforcement and intelligence agencies investigate crimes and run surveillance programs are far-reaching. Microsoft, Amazon and Google are offering new services to let companies store records and information within a certain territory. And the movement of data has become part of geopolitical negotiations, including a new pact for sharing information across the Atlantic that was agreed to in principle in March…(More)”.

Digital Technology Demands A New Political Philosophy


Essay by Steven Hill: “…It’s not just that digital systems are growing more ubiquitous. They are becoming more capable. Allowing for skepticism of the hype around AI, it is unarguable that computers are increasingly able to do things that we would previously have seen as the sole province of human beings — and in some cases do them better than us. That trend is unlikely to reverse and appears to be speeding up.

The result is that increasingly capable technologies are going to be a fundamental part of 21st-century life. They mediate a growing number of our deeds, utterances and exchanges. Our access to basic social goods — credit, housing, welfare, educational opportunity, jobs — is increasingly determined by algorithms of hidden design and obscure provenance. Computer code has joined market forces, communal tradition and state coercion in the first rank of social forces. We’re in the early stages of the digital lifeworld: a delicate social system that links human beings, powerful machines and abundant data in a swirling web of great complexity.

The political implications are clear to anyone who wants to see them: those who own and control the most powerful digital technologies will increasingly write the rules of society itself. Software engineers are becoming social engineers. The digital is political….

For the last few decades, digital technology has not only been developed, but also regulated, within the same intellectual paradigm: that of market individualism. Within this paradigm, the market is seen not only as a productive source of innovation, but as a reliable regulator of market participants too: a self-correcting ecosystem which can be trusted to contain the worst excesses of its participants.

“The question is not whether Musk or Zuckerberg will make the ‘right’ decision with the power at their disposal — it’s why they are allowed that power at all.”

This way of thinking about technology emphasizes consumer choice (even when that choice is illusory), hostility to government power (but ambivalence about corporate power), and individual responsibility (even at the expense of collective wellbeing). In short, it treats digital technology as a chiefly economic phenomenon to be governed by the rules and norms of the marketplace, and not as a political phenomenon to be governed by the rules and norms of the forum.

The first step in becoming a digital republican is recognizing that this tension — between economics and politics, between capitalism and democracy — is likely to be among the foremost political battlegrounds of the digital age. The second step is to argue that the balance has swung too far to one side, and it is overdue for a correction….(More)”.

Artificial intelligence is breaking patent law


Article by Alexandra George & Toby Walsh: “In 2020, a machine-learning algorithm helped researchers to develop a potent antibiotic that works against many pathogens (see Nature https://doi.org/ggm2p4; 2020). Artificial intelligence (AI) is also being used to aid vaccine development, drug design, materials discovery, space technology and ship design. Within a few years, numerous inventions could involve AI. This is creating one of the biggest threats patent systems have faced.

Patent law is based on the assumption that inventors are human; it currently struggles to deal with an inventor that is a machine. Courts around the world are wrestling with this problem now as patent applications naming an AI system as the inventor have been lodged in more than 100 countries. Several groups are conducting public consultations on AI and intellectual property (IP) law, including in the United States, United Kingdom and Europe.

If courts and governments decide that AI-made inventions cannot be patented, the implications could be huge. Funders and businesses would be less incentivized to pursue useful research using AI inventors when a return on their investment could be limited. Society could miss out on the development of worthwhile and life-saving inventions.

Rather than forcing old patent laws to accommodate new technology, we propose that national governments design bespoke IP law — AI-IP — that protects AI-generated inventions. Nations should also create an international treaty to ensure that these laws follow standardized principles, and that any disputes can be resolved efficiently. Researchers need to inform both steps….(More)”.

We Need to Take Back Our Privacy


Zeynep Tufekci in The New York Times: “…Congress, and states, should restrict or ban the collection of many types of data, especially those used solely for tracking, and limit how long data can be retained for necessary functions — like getting directions on a phone.

Selling, trading and merging personal data should be restricted or outlawed. Law enforcement could obtain it subject to specific judicial oversight.

Researchers have been inventing privacy-preserving methods for analyzing data sets when merging them is in the public interest but the underlying data is sensitive — as when health officials are tracking a disease outbreak and want to merge data from multiple hospitals. These techniques allow computation but make it hard, if not impossible, to identify individual records. Companies are unlikely to invest in such methods, or use end-to-end encryption as appropriate to protect user data, if they could continue doing whatever they want. Regulation could make these advancements good business opportunities, and spur innovation.
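
The essay does not name a specific method, but a minimal sketch of one such privacy-preserving idea appears below. It is illustrative only: it releases aggregate counts computed over merged (hypothetical) hospital records with calibrated Laplace noise added (the core move of differential privacy), so analysts can track an outbreak's trend while individual rows stay hard to infer from the output. The record fields, values, and the epsilon parameter are assumptions made for the example.

```python
# Illustrative sketch of a privacy-preserving aggregate query (differential
# privacy via Laplace noise). Hospital record fields, values, and epsilon are
# hypothetical; real deployments use vetted libraries, not hand-rolled noise.
import math
import random


def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via the inverse-CDF method."""
    u = random.random() - 0.5  # uniform in [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def noisy_count(records, predicate, epsilon: float = 0.5) -> float:
    """Count records matching `predicate`, perturbed so the released number
    satisfies epsilon-differential privacy (a count query has sensitivity 1)."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)


# Merged records from two hypothetical hospitals.
hospital_a = [{"zip": "10001", "diagnosis": "covid"}, {"zip": "10002", "diagnosis": "flu"}]
hospital_b = [{"zip": "10001", "diagnosis": "covid"}, {"zip": "10003", "diagnosis": "covid"}]
merged = hospital_a + hospital_b

# Health officials see only the noisy aggregate, never the individual rows.
print(noisy_count(merged, lambda r: r["diagnosis"] == "covid"))
```

The design point is the one the essay makes: computation happens over the merged, sensitive data, but only a perturbed aggregate ever leaves the system.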

I don’t think people like things the way they are. When Apple changed a default option from “track me” to “do not track me” on its phones, few people chose to be tracked. And many who accept tracking probably don’t realize how much privacy they’re giving up, and what this kind of data can reveal. Many location collectors get their data from ordinary apps — could be weather, games, or anything else — that often bury that they will share the data with others in vague terms deep in their fine print.

Under these conditions, requiring people to click “I accept” to lengthy legalese for access to functions that have become integral to modern life is a masquerade, not informed consent.

Many politicians have been reluctant to act. The tech industry is generous, cozy with power, and politicians themselves use data analysis for their campaigns. This is all the more reason to press them to move forward…(More)”.

Behavioral Jurisprudence: Law Needs a Behavioral Revolution


Article by Benjamin van Rooij and Adam Fine: “Laws are supposed to protect us. At work, they should eliminate unsafe working conditions and harassment. On our streets, they should curb speeding, distracted driving, and driving under the influence. And throughout our countries, they should protect citizens against their own governments.

The law is the most important behavioral system we have. Yet it is designed and operated by behavioral novices. Lawyers draft legislation, interpret rules, and create policies, but legal training does not teach them how laws affect human and organizational behavior.

Law needs a behavioral revolution, like the one that rocked the field of economics. There is now a large body of empirical work that calls into question the traditional legal assumptions about how law shapes behavior. This empirical work also offers a path forward. It can help lawyers and others shaping the law understand the law’s behavioral impact and help align its intended influence on behavior to its actual effects.

For instance, the law has traditionally focused on punishment as a means to deal with harmful behavior. Yet there is no conclusive evidence that threats of incarceration or fines reduce misconduct. Most people do not understand or know the law, and thus never come to weigh the law’s incentives in deciding whether to comply with it.

The law also fails to account for the social and moral factors that affect how people interpret and follow it. For instance, social norms—what people see others do or think others hold they should do—can shape what we think the laws say. Research also shows that people are more likely to follow rules they deem legitimate, and that rules that are made and enforced in a procedurally just and fair manner enhance compliance.

And, traditionally, the law has focused on motivational aspects of wrongdoing. But behavioral responses to the law are highly situational. Here, work in criminology, particularly within environmental criminology, shows that criminal opportunities are a chief driver of criminal behavior. Relatedly, when people have their needs met, for instance when they have a livable wage or sufficient schooling, they are more likely to follow the law…(More)”.

How Secure Is Our Data, Really?


Essay by Michael Kende: “Stepping back, a 2019 study showed that 95 percent of such data breaches could have been prevented. There are two main causes of breaches that can be averted.

First, many breaches attack known vulnerabilities in online systems. We are all used to updating the operating system on our computer or phone. One of the reasons is to patch a defect that could allow a breach. But not all of us update each patch all of the time, and that leaves us exposed. Organizations operating hundreds or thousands of devices with different systems connecting them may not devote enough resources to security or may be worried about testing the compatibility of upgrades, and this leaves them exposed to hackers searching for systems that have not been updated. These challenges were exacerbated with employees working from home during pandemic restrictions, often on their own devices with less protected networks.
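
As a concrete, hypothetical illustration of that patching problem, the sketch below compares the software versions deployed across a small fleet against the minimum versions assumed to contain known security fixes and flags anything that lags behind. All package names, version numbers, and hostnames are invented for the example; they do not correspond to real advisories.

```python
# Hypothetical patch-hygiene check: flag hosts running software below the
# minimum version assumed to contain a security fix. Package names, versions,
# and the host inventory are illustrative, not real advisories.

# Lowest versions that include the relevant fixes (illustrative).
PATCHED_VERSIONS = {
    "webserver": (2, 4, 58),
    "vpn-client": (9, 1, 2),
}


def parse_version(version: str) -> tuple:
    """Turn a dotted version string like '2.4.51' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))


def find_exposed(hosts):
    """Return a finding for every package still below its patched version."""
    findings = []
    for host in hosts:
        for package, version in host["packages"].items():
            required = PATCHED_VERSIONS.get(package)
            if required and parse_version(version) < required:
                wanted = ".".join(str(n) for n in required)
                findings.append(f"{host['name']}: {package} {version} (needs >= {wanted})")
    return findings


fleet = [
    {"name": "office-laptop-01", "packages": {"webserver": "2.4.51", "vpn-client": "9.1.2"}},
    {"name": "home-laptop-07", "packages": {"vpn-client": "9.0.3"}},
]

for finding in find_exposed(fleet):
    print(finding)
```

The check itself is mechanical; as the essay notes, the hard part is dedicating the resources to run it, and to act on it, across thousands of heterogeneous devices, including employees' own machines on home networks.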

Second is the phenomenon known as social engineering in which an employee is tricked into providing their password. We have all received phishing emails asking us to log into a familiar site to address an urgent matter. Doing so allows the hacker to capture the user’s email address or user name and the associated password. The hacker can then use that information directly to enter the real version of the website or may find out where else the user may go and hope they use the same login details — which, human nature being what it is, is quite common. These phishing attacks highlight the asymmetric advantage held by the hackers. They can send out millions of emails and just need one person to click on the wrong link to start their attack.

Of course, if 95 percent of breaches are preventable, that means 5 percent are not. For instance, though many breaches result from known vulnerabilities in systems, a vulnerability is by definition unknown before it is discovered. Such a vulnerability, known as a zero-day vulnerability, is valuable for hackers because it cannot be defended against, and zero-days are often hoarded or sold, sometimes back to the company responsible so it can create a patch…(More)”.