Real-Time Incident Data Could Change Road Safety Forever


Skip Descant at GovTech: “Data collected from connected vehicles can offer near real-time insights into highway safety problem areas, identifying near-misses, troublesome intersections and other roadway dangers.

New research from Michigan State University and Ford Mobility, which tracked driving incidents on Ford vehicles outfitted with connected vehicle technology, points to a future of greatly expanded understanding of roadway events, far beyond simply reading crash data.

“Connected vehicle data allows us to know what’s happening now. And that’s a huge thing. And I think that’s where a lot of the potential is, to allow us to actively monitor the roadways,” said Meredith Nelson, connected and automated vehicles analyst with the Michigan Department of Transportation.

The research examined data collected from Ford vehicles in the Detroit metro region equipped with connected vehicle technology from January 2020 to June 2020, drawing on Ford’s Safety Insights platform in partnership with StreetLight Data. The data offers insights into near-miss events like hard braking, hard acceleration and hard cornering. In 2020 alone, Ford measured more than a half-billion events from tens of millions of trips.
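
To make the “near-miss event” idea concrete, here is a minimal sketch of how hard-braking events might be flagged in raw telemetry. It is our illustration, not Ford’s pipeline: the (timestamp, speed) sample format and the 0.45 g deceleration threshold are assumptions made for the example.

```python
# Minimal sketch, not Ford's actual pipeline: flag "hard braking" events
# in a stream of (timestamp_s, speed_m_per_s) telemetry samples.
# The 0.45 g deceleration threshold is an illustrative assumption.

G = 9.81  # m/s^2, to express deceleration in g-forces

def hard_braking_events(samples, threshold_g=0.45):
    """samples: time-ordered list of (t_seconds, speed_m_per_s) tuples."""
    events = []
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue  # skip bad or duplicate timestamps
        decel_g = (v0 - v1) / dt / G  # positive when the vehicle slows
        if decel_g >= threshold_g:
            events.append({"t": t1, "decel_g": round(decel_g, 2)})
    return events

# Example: a vehicle dropping from ~20 m/s to ~8 m/s over two seconds.
trip = [(0, 20.0), (1, 19.5), (2, 14.0), (3, 8.0), (4, 7.8)]
print(hard_braking_events(trip))  # flags two consecutive hard-braking samples
```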

Traditionally, researchers have relied on police-reported crash data, which has drawbacks, in part because of delays in reporting, said Peter Savolainen, an engineering professor in the Department of Civil and Environmental Engineering at Michigan State University whose research focuses on road user behavior….(More)”.

Why People Are So Awful Online


Roxane Gay at the New York Times: “When I joined Twitter 14 years ago, I was living in Michigan’s Upper Peninsula, attending graduate school. I lived in a town of around 4,000 people, with few Black people or other people of color, not many queer people and not many writers. Online is where I found a community beyond my graduate school peers. I followed and met other emerging writers, many of whom remain my truest friends. I got to share opinions, join in on memes, celebrate people’s personal joys, process the news with others and partake in the collective effervescence of watching awards shows with thousands of strangers.

Something fundamental has changed since then. I don’t enjoy most social media anymore. I’ve felt this way for a while, but I’m loath to admit it.

Increasingly, I’ve felt that online engagement is fueled by the hopelessness many people feel when we consider the state of the world and the challenges we deal with in our day-to-day lives. Online spaces offer the hopeful fiction of a tangible cause and effect — an injustice answered by an immediate consequence. On Twitter, we can wield a small measure of power, avenge wrongs, punish villains, exalt the pure of heart….

Lately, I’ve been thinking that what drives so much of the anger and antagonism online is our helplessness offline. Online we want to be good, to do good, but despite these lofty moral aspirations, there is little generosity or patience, let alone human kindness. There is a desperate yearning for emotional safety. There is a desperate hope that if we all become perfect enough and demand the same perfection from others, there will be no more harm or suffering.

It is infuriating. It is also entirely understandable. Some days, as I am reading the news, I feel as if I am drowning. I think most of us do. At least online, we can use our voices and know they can be heard by someone.

It’s no wonder that we seek control and justice online. It’s no wonder that the tenor of online engagement has devolved so precipitously. It’s no wonder that some of us have grown weary of it….(More)”

Identity Tethering in an Age of Symbolic Politics


Mark Dunbar at the Hedgehog Review: “Identities are dangerous and paradoxical things. They are the beginning and the end of the self. They are how we define ourselves and how we are defined by others. One is a “nerd” or a “jock” or a “know-it-all.” One is “liberal” or “conservative,” “religious” or “secular,” “white” or “black.” Identities are the means of escape and the ties that bind. They direct our thoughts. They are modes of being. They are an ingredient of the self—along with relationships, memories, and role models—and they can also destroy the self. Consume it. The Jungians are right when they say people don’t have identities, identities have people. And the Lacanians are righter still when they say that our very selves—our wishes, desires, thoughts—are constituted by other people’s wishes, desires, and thoughts. Yes, identities are dangerous and paradoxical things. They are expressions of inner selves, and a way the outside gets in.

Our contemporary politics is diseased—that much is widely acknowledged—and the problem of identity is often implicated in its pathology, mostly for the wrong reasons. When it comes to its role in our politics, identity is the chief means by which we substitute behavior for action, disposition for conviction. Everything is rendered political—from the cars we drive to the beer we drink—and this rendering lays bare a political order lacking in democratic vitality. There is an inverse relationship between the rise of identity signaling and the decline of democracy. The less power people have to influence political outcomes, the more emphasis they will put on signifying their political desires. The less politics effects change, the more politics will affect mood.

Dozens of books (and hundreds of articles and essays) have been written about the rising threat of tribalism and group thinking, identity politics, and the politics of resentment….(More)”.

We need to regulate mind-reading tech before it exists


Abel Wajnerman Paz at Rest of the World: “Neurotechnology” is an umbrella term for any technology that can read and transcribe mental states by decoding and modulating neural activity. This includes technologies like closed-loop deep brain stimulation that can both detect neural activity related to people’s moods and can suppress undesirable symptoms, like depression, through electrical stimulation.
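
As a rough intuition for what “closed loop” means here, the sketch below models the sense, decode, stimulate feedback cycle in a few lines of Python. Every name and number in it (the biomarker decoder, the threshold, the amplitude steps) is a hypothetical stand-in for illustration, not any real device’s algorithm.

```python
# Toy illustration only: "closed loop" means the device senses neural
# activity, decodes a symptom biomarker, and adjusts stimulation in a
# feedback cycle. The decoder, threshold, and step sizes below are
# hypothetical stand-ins, not any real medical-device algorithm.

def decode_biomarker(signal_window):
    """Hypothetical decoder: mean absolute amplitude as a stand-in biomarker."""
    return sum(abs(x) for x in signal_window) / len(signal_window)

def closed_loop_step(signal_window, stim_amplitude, threshold=0.8):
    """One sense -> decode -> stimulate iteration."""
    if decode_biomarker(signal_window) > threshold:
        return min(stim_amplitude + 0.1, 1.0)   # symptom detected: ramp up
    return max(stim_amplitude - 0.05, 0.0)      # otherwise ease off

stim = 0.0
for window in ([0.2, 0.3, 0.1], [1.2, 1.5, 0.9], [1.1, 1.4, 1.0]):
    stim = closed_loop_step(window, stim)
    print(f"stimulation amplitude: {stim:.2f}")
```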

Despite their evident usefulness in education, entertainment, work, and the military, neurotechnologies are largely unregulated. Now, as Chile redrafts its constitution — disassociating it from the Pinochet surveillance regime — legislators are using the opportunity to address the need for closer protection of people’s rights from the unknown threats posed by neurotechnology. 

Although the technology is new, the challenge isn’t. Decades ago, similar international legislation was passed following the development of genetic technologies that made possible the collection and application of genetic data and the manipulation of the human genome. These included the Universal Declaration on the Human Genome and Human Rights in 1997 and the International Declaration on Human Genetic Data in 2003. The difference is that, this time, Chile is a leading light in the drafting of neuro-rights legislation.

In Chile, two bills — a constitutional reform bill, which is awaiting approval by the Chamber of Deputies, and a bill on neuro-protection — will establish neuro-rights for Chileans. These include the rights to personal identity, free will, mental privacy, equal access to cognitive enhancement technologies, and protection against algorithmic bias….(More)”.

COVID data is complex and changeable – expecting the public to heed it as restrictions ease is optimistic


Manuel León Urrutia at The Conversation: “I find it tempting to celebrate the public’s expanding access to data and familiarity with terms like “flattening the curve”. After all, a better informed society is a successful society, and the provision of data-driven information to the public seems to contribute to the notion that together we can beat COVID.

But increased data visibility shouldn’t necessarily be interpreted as increased data literacy. For example, at the start of the pandemic it was found that the portrayal of COVID deaths in logarithmic graphs confused the public. Logarithmic graphs control for data that’s growing exponentially by using a scale that increases by a factor of ten at each step on the y, or vertical, axis. Because this flattens exponential curves, it led some people to radically underestimate the dramatic rise in COVID cases.

[Figure: two graphs comparing a linear curve (left) with a logarithmic one (right). A logarithmic graph flattens exponential curves, which can confuse the public. Source: LSE]
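
The effect is easy to reproduce. The following is a minimal matplotlib sketch (our illustration, not from the article) that plots the same exponential series on a linear and a logarithmic y-axis; the log panel renders the curve almost as a straight line, which is easy to misread as slow growth.

```python
import matplotlib.pyplot as plt

days = range(60)
cases = [100 * 1.1 ** d for d in days]  # illustrative 10%-per-day growth

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(9, 4))
for ax, title in ((ax_lin, "Linear scale"), (ax_log, "Logarithmic scale")):
    ax.plot(list(days), cases)
    ax.set_title(title)
    ax.set_xlabel("Days")
    ax.set_ylabel("Cases")
ax_log.set_yscale("log")  # each step up the y-axis is a factor of ten
plt.tight_layout()
plt.show()
```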

The vast amount of data we now have available doesn’t even guarantee consensus. In fact, instead of solving the problem, this data deluge can contribute to the polarisation of public discourse. One study recently found that COVID sceptics use orthodox data presentation techniques to spread their controversial views, revealing how more data doesn’t necessarily result in better understanding. Though data is supposed to be objective and empirical, it has assumed a political, subjective hue during the pandemic….

This is where educators come in. The pandemic has only strengthened the case presented by academics for data literacy to be included in the curriculum at all educational levels, including primary. This could help citizens navigate our data-driven world, protecting them from harmful misinformation and journalistic malpractice.

Data literacy does in fact already feature in many higher education roadmaps in the UK, though I’d argue it’s a skill the entire population should be equipped with from an early age. Misconceptions about vaccine efficacy and the severity of the coronavirus are often based on poorly presented, false or misinterpreted data. The “fake news” these misconceptions generate would spread less ferociously in a world of data literate citizens.

To tackle misinformation derived from the current data deluge, the European Commission has funded projects such as MediaFutures and YourDataStories….(More)”.

Political Science Has Its Own Lab Leaks


Paul Musgrave at Foreign Policy: “The idea of a lab leak has gone, well, viral. As a political scientist, I cannot assess whether the evidence shows that COVID-19 emerged naturally or from laboratory procedures (although many experts strenuously disagree). Yet as a political scientist, I do think that my discipline can learn something from thinking seriously about our own “lab leaks” and the damage they could cause.

A political science lab leak might seem as much of a punchline as the concept of a mad social scientist. Nevertheless, the notion that scholarly ideas and findings can escape the nuanced, cautious world of the academic seminar and transform into new forms, even becoming threats, becomes a more compelling metaphor if you think of academics as professional crafters of ideas intended to survive in a hostile environment. Given the importance of what we study, from nuclear war to international economics to democratization and genocide, the escape of a faulty idea could have—and has had—dangerous consequences for the world.

Academic settings provide an evolutionarily challenging environment in which ideas adapt to survive. The process of developing and testing academic theories provides metaphorical gain-of-function accelerations of these dynamics. To survive peer review, an idea has to be extremely lucky or, more likely, crafted to evade the antibodies of academia (reviewers’ objections). By that point, an idea is either so clunky it cannot survive on its own—or it is optimized to thrive in a less hostile environment.

Think tanks and magazines like the Atlantic (or Foreign Policy) serve as metaphorical wet markets where wild ideas are introduced into new and vulnerable populations. Although some authors lament a putative decline of social science’s influence, the spread of formerly academic ideas like intersectionality and the use of quantitative social science to reshape electioneering suggest that ideas not only move from the academy but can flourish once transplanted. This is hardly new: Terms from disciplines including psychoanalysis (“ego”), evolution (“survival of the fittest”), and economics (the “free market” and Marxism both) have escaped from the confines of academic work before…(More)”.

Why You Should Care About Your Right to Repair Gadgets


Brian X. Chen at The New York Times: “When your car has problems, your instinct is probably to take it to a mechanic. But when something goes wrong with your smartphone — say a shattered screen or a depleted battery — you may wonder: “Is it time to buy a new one?”

That’s because even as our consumer electronics have become as vital as our cars, the idea of tech repair still hasn’t been sown into our collective consciousness. Studies have shown that when tech products begin to fail, most people are inclined to buy new things rather than fix their old ones.

“Repair is inconvenient and difficult, so people don’t seek it,” said Nathan Proctor, a director for the U.S. Public Interest Research Group, a consumer advocacy organization, who is working on legislation to make tech repair more accessible. “Because people don’t expect to repair things, they replace things when by far the most logical thing to do is to repair it.”

It doesn’t have to be this way. More of us could maintain our tech products, as we do with cars, if it were more practical to do so. If we all had more access to the parts, instructions and tools to revive products, repairs would become simpler and less expensive.

This premise is at the heart of the “right to repair” act, a proposed piece of legislation that activists and tech companies have fought over for nearly a decade. Recently, right-to-repair supporters scored two major wins. In May, the Federal Trade Commission published a report explaining how tech companies were harming competition by restricting repairs. And last Friday, President Biden issued an executive order that included a directive for the F.T.C. to place limits on how tech manufacturers could restrict repairs.

The F.T.C. is set to meet next week to discuss new policies about electronics repair. Here’s what you need to know about the fight over your right to fix gadgets…(More)”.

Concern trolls and power grabs: Inside Big Tech’s angry, geeky, often petty war for your privacy


Article by Issie Lapowsky: “Inside the World Wide Web Consortium, where the world’s top engineers battle over the future of your data….

The W3C’s members do it all by consensus in public GitHub forums and open Zoom meetings with meticulously documented meeting minutes, creating a rare archive on the internet of conversations between some of the world’s most secretive companies as they collaborate on new rules for the web in plain sight.

But lately, that spirit of collaboration has been under intense strain as the W3C has become a key battleground in the war over web privacy. Over the last year, far from the notice of the average consumer or lawmaker, the people who actually make the web run have converged on this niche community of engineers to wrangle over what privacy really means, how the web can be more private in practice and how much power tech giants should have to unilaterally enact this change.

On one side are engineers who build browsers at Apple, Google, Mozilla, Brave and Microsoft. These companies are frequent competitors that have come to embrace web privacy on drastically different timelines. But they’ve all heard the call of both global regulators and their own users, and are turning to the W3C to develop new privacy-protective standards to replace the tracking techniques businesses have long relied on.

On the other side are companies that use cross-site tracking for things like website optimization and advertising, and are fighting for their industry’s very survival. That includes small firms like Rosewell’s, but also giants of the industry, like Facebook.

Rosewell has become one of this side’s most committed foot soldiers since he joined the W3C last April. Where Facebook’s developers can only offer cautious edits to Apple and Google’s privacy proposals, knowing full well that every exchange within the W3C is part of the public record, Rosewell is decidedly less constrained. On any given day, you can find him in groups dedicated to privacy or web advertising, diving into conversations about new standards browsers are considering.

Rather than asking technical questions about how to make browsers’ privacy specifications work better, he often asks philosophical ones, like whether anyone really wants their browser making certain privacy decisions for them at all. He’s filled the W3C’s forums with concerns about its underlying procedures, sometimes a dozen at a time, and has called upon the W3C’s leadership to more clearly articulate the values for which the organization stands….(More)”.

Government algorithms are out of control and ruin lives


Nani Jansen Reventlow at Open Democracy: “Government services are increasingly being automated, and technology is relied on more and more to make crucial decisions about our lives and livelihoods. This includes decisions about what type of support we can access in times of need: welfare, benefits, and other government services.

Technology has the potential to not only reproduce but amplify structural inequalities in our societies. If you combine this drive for automation with a broader context of criminalising poverty and systemic racism, this can have disastrous effects.

A recent example is the ‘child benefits scandal’ that brought down the Dutch government at the start of 2021. In the Netherlands, working parents are eligible for a government contribution toward the costs of daycare. This can run up to 90% of the actual costs for those with a low income. While contributions are often directly paid to childcare providers, parents are responsible for them. This means that, if the tax authorities determine that any allowance was wrongfully paid out, parents are liable for repaying them.

To detect cases of fraud, the Dutch tax authorities used a system that was outright discriminatory. An investigation by the Dutch Data Protection Authority last year showed that parents were singled out for special scrutiny because of their ethnic origin or dual nationality.  “The whole system was organised in a discriminatory manner and was also used as such,” it stated.

The fallout of these ‘fraud detection’ efforts was enormous. It is currently estimated that 46,000 parents were wrongly accused of having fraudulently claimed child care allowances. Families were forced to repay tens of thousands of euros, leading to financial hardship, loss of livelihood, homes, and in one case, even loss of life – one parent died by suicide. While we can still hope that justice for these families won’t be denied, it will certainly be delayed: this weekend, it became clear that it could take up to ten years to handle all claims. An unacceptable timeline, given how precarious the situation will be for many of those affected….(More)”.

Luxury Surveillance


Essay by Chris Gilliard and David Golumbia: “One of the most troubling features of the digital revolution is that some people pay to subject themselves to surveillance that others are forced to endure and would, if anything, pay to be free of.

Consider a GPS tracker you can wear around one of your arms or legs. Make it sleek and cool — think the Apple Watch or FitBit — and some will pay hundreds or even thousands of dollars for the privilege of wearing it. Make it bulky and obtrusive, and others, as a condition of release from jail or prison, being on probation, or awaiting an immigration hearing, will be forced to wear one — and forced to pay for it too.

In each case, the device collects intimate and detailed biometric information about its wearer and uploads that data to servers, communities, and repositories. To the providers of the devices, this data and the subsequent processing of it are the main reasons the devices exist. They are means of extraction: That data enables further study, prediction, and control of human beings and populations. While some providers certainly profit from the sale of devices, this secondary market for behavioral control and prediction is where the real money is — the heart of what Shoshana Zuboff rightly calls surveillance capitalism.

The formerly incarcerated person knows that their ankle monitor exists for that purpose: to predict and control their behavior. But the Apple Watch wearer likely thinks about it little, if at all — despite the fact that the watch has the potential to collect and analyze much more data about its user (e.g. health metrics like blood pressure, blood glucose levels, ECG data) than parole or probation officers are even allowed to gather about their “clients” without specific warrant. Fitness-tracker wearers are effectively putting themselves on parole and paying for the privilege.

Both the Apple Watch and the FitBit can be understood as examples of luxury surveillance: surveillance that people pay for and whose tracking, monitoring, and quantification features are understood by the user as benefits they are likely to celebrate. Google, which has recently acquired FitBit, is seemingly leaning into the category, launching a more expensive version of the device named the “Luxe.” Only certain people can afford luxury surveillance, but that is not necessarily a matter of money: In general terms, consumers of luxury surveillance see themselves as powerful and sovereign, and perhaps even immune from unwelcome monitoring and control. They see self-quantification and tracking not as disciplinary or coercive, but as a kind of care or empowerment. They understand it as something extra, something “smart.”…(More)”.