Crime Prediction Keeps Society Stuck in the Past


Article by Chris Gilliard: “…All of these policing systems operate on the assumption that the past determines the future. In Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition, digital media scholar Wendy Hui Kyong Chun argues that the most common methods used by technologies such as PredPol and Chicago’s heat list to make predictions do nothing of the sort. Rather than anticipating what might happen out of the myriad and unknowable possibilities on which the very idea of a future depends, machine learning and other AI-based methods of statistical correlation “restrict the future to the past.” In other words, these systems prevent the future in order to “predict” it—they ensure that the future will be just the same as the past was.

“If the captured and curated past is racist and sexist,” Chun writes, “these algorithms and models will only be verified as correct if they make sexist and racist predictions.” This is partly a description of the familiar garbage-in/garbage-out problem with all data analytics, but it’s something more: Ironically, the putatively “unbiased” technology sold to us by promoters is said to “work” precisely when it tells us that what is contingent in history is in fact inevitable and immutable. Rather than helping us to manage social problems like racism as we move forward, as the McDaniel case shows in microcosm, these systems demand that society not change, that things that we should try to fix instead must stay exactly as they are.

It is a glaring observation that predictive policing tools are rarely if ever (with the possible exception of the parody “White Collar Crime Risk Zones” project) focused on wage theft or other white-collar crimes, even though the dollar value of those offenses outstrips that of property crimes by several orders of magnitude. This gap exists because of how crime exists in the popular imagination. For instance, news outlets in recent weeks bludgeoned readers with reports of a so-called “crime wave” of shoplifting at high-end stores. Yet just this past February, Amazon agreed to pay regulators a whopping $61.7 million, the amount the FTC says the company shorted drivers over a two-and-a-half-year period. That story received a fraction of the coverage, and aside from the fine, there will be no additional charges.

The algorithmic crystal ball that promises to predict and forestall future crimes works from a fixed notion of what a criminal is, where crimes occur, and how they are prosecuted (if at all). Those parameters depend entirely on the power structure empowered to formulate them—and very often the explicit goal of those structures is to maintain existing racial and wealth hierarchies. This is the same set of carceral logics that allow the placement of children into gang databases, or the development of a computational tool to forecast which children will become criminals. The process of predicting the lives of children is about cementing existing realities rather than changing them. Entering children into a carceral ranking system is in itself an act of violence, but as in the case of McDaniel, it also nearly guarantees that the system that sees them as potential criminals will continue to enact violence on them throughout their lifetimes…(More)”.

Roe’s overturn is tech’s privacy apocalypse


Scott Rosenberg at Axios: “America’s new abortion reality is turning tech firms’ data practices into an active field of conflict — a fight that privacy advocates have long predicted and company leaders have long feared.

Why it matters: A long legal siege in which abortion-banning states battle tech companies, abortion-friendly states and their own citizens to gather criminal evidence is now a near certainty.

  • The once-abstract privacy argument among policy experts has transformed overnight into a concrete real-world problem, superheated by partisan anger, affecting vast swaths of the U.S. population, with tangible and easily understood consequences.

Driving the news: Google announced Friday a new program to automatically delete the location data of users who visit “particularly personal” locations like “counseling centers, domestic violence shelters, abortion clinics, fertility centers, addiction treatment facilities, weight loss clinics, cosmetic surgery clinics, and others.”

  • Google tracks the location of any user who turns on its “location services” — a choice that’s required to make many of its programs, like Google Search and Maps, more useful.
  • That tracking happens even when you’re logged into non-location-related Google services like YouTube, since Google long ago unified all its accounts.
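In effect, the announcement amounts to a retention rule: visits whose inferred place category falls in a sensitive set get purged from location history. Here is a minimal, hypothetical sketch of such a rule; the record format, category names, and `scrub` function are invented for illustration and are not Google's actual API or implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch of an auto-deletion rule like the one Google
# announced: purge location-history entries whose inferred place
# category falls in a sensitive set. Record format and category
# names are invented for illustration.

SENSITIVE_CATEGORIES = {
    "counseling_center", "domestic_violence_shelter", "abortion_clinic",
    "fertility_center", "addiction_treatment", "weight_loss_clinic",
    "cosmetic_surgery_clinic",
}

@dataclass
class LocationVisit:
    timestamp: int       # Unix epoch seconds
    place_category: str  # category inferred for the visited place

def scrub(history: list[LocationVisit]) -> list[LocationVisit]:
    """Drop visits whose inferred category is sensitive; keep the rest."""
    return [v for v in history if v.place_category not in SENSITIVE_CATEGORIES]

history = [LocationVisit(1656633600, "coffee_shop"),
           LocationVisit(1656637200, "abortion_clinic")]
print(len(scrub(history)))  # -> 1: the clinic visit is deleted
```

Note that everything in such a scheme hinges on how accurately the place category is inferred, which is exactly the first open concern flagged below.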

Between the lines: Google’s move won cautious applause but left plenty of open concerns.

  • It’s not clear how, and how reliably, Google will identify the locations that trigger automatic data deletion.
  • The company will not delete search requests automatically — users who want to protect themselves will have to delete those requests manually.
  • A sudden gap in location data could itself be used as evidence in court…(More)”.

Algorithm Claims to Predict Crime in US Cities Before It Happens


Article by Carrington York: “A new computer algorithm can now forecast crime in a big city near you — apparently. 

The algorithm, which was formulated by social scientists at the University of Chicago and is touted as 90% accurate, divides cities into tiles roughly 1,000 feet across, according to a study published in Nature Human Behaviour. Researchers used historical data on violent crimes and property crimes from Chicago to test the model, which detects patterns over time in these tiled areas and uses them to predict future events. It performed just as well using data from other big cities, including Atlanta, Los Angeles, and Philadelphia, the study showed.
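To make the tile-and-predict idea concrete, here is a minimal sketch: bin point events into a spatial grid, build lagged per-tile weekly counts, and fit a simple classifier estimating whether a tile will see an event the following week. This is emphatically not the model from the Nature Human Behaviour study, which is far more sophisticated; the tile size, lag depth, and classifier here are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

TILE_M = 300   # roughly 1,000 feet per tile side (illustrative)
N_LAGS = 8     # weeks of history used as features (illustrative)

def weekly_counts(events, n_weeks, grid=(10, 10)):
    """events: iterable of (x_m, y_m, week). Returns counts[week, tx, ty]."""
    counts = np.zeros((n_weeks, *grid), dtype=int)
    for x, y, w in events:
        counts[w, int(x // TILE_M), int(y // TILE_M)] += 1
    return counts

def lagged_dataset(counts):
    """Features: a tile's previous N_LAGS weekly counts.
    Label: whether that tile sees any event in the current week."""
    X, y = [], []
    for w in range(N_LAGS, counts.shape[0]):
        for tx in range(counts.shape[1]):
            for ty in range(counts.shape[2]):
                X.append(counts[w - N_LAGS:w, tx, ty])
                y.append(int(counts[w, tx, ty] > 0))
    return np.array(X), np.array(y)

# Usage with synthetic events on a 3 km x 3 km area over one year.
rng = np.random.default_rng(0)
events = [(rng.uniform(0, 3000), rng.uniform(0, 3000), int(rng.integers(0, 52)))
          for _ in range(5000)]
counts = weekly_counts(events, n_weeks=52)
X, y = lagged_dataset(counts)
model = LogisticRegression(max_iter=1000).fit(X, y)
print("P(event next week) for one tile:", round(model.predict_proba(X[-1:])[0, 1], 3))
```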

The new tool contrasts with previous models for prediction, which depict crime as emerging from “hotspots” that spread to surrounding areas. Such an approach tends to miss the complex social environment of cities, as well as the nuanced relationship between crime and the effects of police enforcement, thus leaving room for bias, according to the report.

“It is hard to argue that bias isn’t there when people sit down and determine which patterns they will look at to predict crime because these patterns, by themselves, don’t mean anything,” said Ishanu Chattopadhyay, Assistant Professor of Medicine at the University of Chicago and senior author of the study. “But now, you can ask the algorithm complex questions like: ‘What happens to the rate of violent crime if property crimes go up?’”

But Emily M. Bender, professor of linguistics at the University of Washington, said in a series of tweets that the focus should be on targeting underlying inequities rather than on predictive policing, while also noting that the research appears to ignore securities fraud or environmental crimes…(More)”

What AI Can Tell Us About Intelligence


Essay by Yann LeCun and Jacob Browning: “If there is one constant in the field of artificial intelligence, it is exaggeration: there is always breathless hype and scornful naysaying. It is helpful to occasionally take stock of where we stand.

The dominant technique in contemporary AI is deep learning (DL) neural networks, massive self-learning algorithms that excel at discerning and utilizing patterns in data. Since their inception, critics have prematurely argued that neural networks had run into an insurmountable wall — and every time, it proved a temporary hurdle. In the 1960s, they could not solve non-linear functions. That changed in the 1980s with backpropagation, but the new wall was how difficult it was to train the systems. The 1990s saw the rise of simplifying programs and standardized architectures which made training more reliable, but the new problem was the lack of training data and computing power.

In 2012, when contemporary graphics cards made it possible to train on the massive ImageNet dataset, DL went mainstream, handily besting all competitors. But then critics spied a new problem: DL required too much hand-labeled data for training. The last few years have rendered this criticism moot, as self-supervised learning has resulted in incredibly impressive systems, such as GPT-3, which do not require labeled data.

Today’s seemingly insurmountable wall is symbolic reasoning, the capacity to manipulate symbols in the ways familiar from algebra or logic. As we learned as children, solving math problems involves a step-by-step manipulation of symbols according to strict rules (e.g., multiply the furthest right column, carry the extra value to the column to the left, etc.). Gary Marcus, author of “The Algebraic Mind” and co-author (with Ernie Davis) of “Rebooting AI,” recently argued that DL is incapable of further progress because neural networks struggle with this kind of symbol manipulation. By contrast, many DL researchers are convinced that DL is already engaging in symbolic reasoning and will continue to improve at it.
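As a toy illustration of what "symbol manipulation" means here (our example, not the essay's): grade-school addition, implemented as explicit, rule-bound manipulation of digit symbols with a carry. This is exactly the kind of step-by-step procedure at issue in the debate.

```python
def add_symbolically(a: str, b: str) -> str:
    """Add two non-negative integers given as digit strings,
    column by column with a carry, the way children are taught."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    digits, carry = [], 0
    # Work right-to-left, one column at a time, carrying overflow left.
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry
        digits.append(str(total % 10))
        carry = total // 10
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

assert add_symbolically("478", "964") == "1442"
```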

At the heart of this debate are two different visions of the role of symbols in intelligence, both biological and mechanical: one holds that symbolic reasoning must be hard-coded from the outset and the other holds it can be learned through experience, by machines and humans alike. As such, the stakes are not just about the most practical way forward, but also how we should understand human intelligence — and, thus, how we should pursue human-level artificial intelligence…(More)”.

We need smarter cities, not “smart cities”


Article by Riad Meddeb and Calum Handforth: “This more expansive concept of what a smart city is encompasses a wide range of urban innovations. Singapore, which is exploring high-tech approaches such as drone deliveries and virtual-reality modeling, is one type of smart city. Curitiba, Brazil—a pioneer of the bus rapid transit system—is another. Harare, the capital of Zimbabwe, with its passively cooled shopping center designed in 1996, is a smart city, as are the “sponge cities” across China that use nature-based solutions to manage rainfall and floodwater.

Where technology can play a role, it must be applied thoughtfully and holistically—taking into account the needs, realities, and aspirations of city residents. Guatemala City, in collaboration with our country office team at the UN Development Programme, is using this approach to improve how city infrastructure—including parks and lighting—is managed. The city is standardizing materials and designs to reduce costs and labor, and streamlining approval and allocation processes to increase the speed and quality of repairs and maintenance. Everything is driven by the needs of its citizens. Elsewhere in Latin America, cities are going beyond quantitative variables to take into account well-being and other nuanced outcomes.

In her 1961 book The Death and Life of Great American Cities, Jane Jacobs, the pioneering American urbanist, discussed the importance of sidewalks. In the context of the city, they are conduits for adventure, social interaction, and unexpected encounters—what Jacobs termed the “sidewalk ballet.” Just as literal sidewalks are crucial to the urban experience, so is the larger idea of connection between elements.

Truly smart cities recognize the ambiguity of lives and livelihoods, and they are driven by outcomes beyond the implementation of “solutions.”

However, too often we see “smart cities” focus on discrete deployments of technology rather than this connective tissue. We end up with cities defined by “use cases” or “platforms.” Practically speaking, the vision of a tech-centric city is conceptually, financially, and logistically out of reach for many places. This can lead officials and innovators to dismiss the city’s real and substantial potential to reduce poverty while enhancing inclusion and sustainability.

In our work at the UN Development Programme, we focus on the interplay between different components of a truly smart city—the community, the local government, and the private sector. We also explore the different assets made available by this broader definition: high-tech innovations, yes, but also low-cost, low-tech innovations and nature-based solutions. Big data, but also the qualitative, richer detail behind the data points. The connections and “sidewalks”—not just the use cases or pilot programs. We see our work as an attempt to start redefining smart cities and increasing the size, scope, and usefulness of our urban development tool kit…(More)”.

How football shirts chart the rise and fall of tech giants


Article by Ravi Hiranand and Leo Schwartz: “It’s the ultimate status symbol, a level of exposure achieved by few companies — but one available to any company that’s willing and able to pay a hefty price. It’s an honor that costs millions of dollars, and in return, your company’s logo is on the TV screens of millions of people every week.

Sponsoring a football club — proper football, that is — is more than just a business transaction. It’s about using the world’s most watched sport to promote your brand. Getting your company’s logo on the shirt of a team like Liverpool or Real Madrid means tying your brand to a global icon. And for decades, it’s been a route taken by emerging tech companies, flush with cash to burn and a name to earn.

But these sponsorships actually reveal something about the tech industry as a whole: when you trace the history of these commercial deals across the decades, patterns emerge. Rather than individual companies, entire sectors of the industry — from cars to consumer tech to gambling websites — seem to jump into the sport at once, signaling their rise to dominance in, or their desire to dominate, global markets where football is also part of everyday life. It’s no coincidence, for example, that mobile phone companies turned to sponsoring football clubs at the beginning of the new millennium: with handsets becoming increasingly common and 3G just around the corner, companies like Samsung and Vodafone wasted no time in paying record amounts to some of the most successful clubs in England.

Rest of World took a look at some of the more memorable shirt sponsorship deals in football — from Sony’s affiliation with Italy’s champions to Rakuten’s deal with a Spanish giant — and what they say about the rise and fall of the tech sectors those companies represented…(More)”.

What Happened to Consensus Reality?


Essay by Jon Askonas: “Do you feel that people you love and respect are going insane? That formerly serious thinkers or commentators are increasingly unhinged, willing to subscribe to wild speculations or even conspiracy theories? Do you feel that, even if there’s some blame to go around, it’s the people on the other side of the aisle who have truly lost their minds? Do you wonder how they can possibly be so blind? Do you feel bewildered by how absurd everything has gotten? Do many of your compatriots seem in some sense unintelligible to you? Do you still consider them your compatriots?

If you feel this way, you are not alone.

We have come a long way from the optimism of the 1990s and 2000s about how the Internet would usher in a new golden era, expanding the domain of the information society to the whole world, with democracy sure to follow. Now we hear that the Internet foments misinformation and erodes democracy. Yet as dire as these warnings are, they are usually followed by suggestions that with more scrutiny on tech CEOs, more aggressive content moderation, and more fact-checking, Americans might yet return to accepting the same model of reality. Last year, a New York Times article titled “How the Biden Administration Can Help Solve Our Reality Crisis” suggested creating a federal “reality czar.”

This is a fantasy. The breakup of consensus reality — a shared sense of facts, expectations, and concepts about the world — predates the rise of social media and is driven by much deeper economic and technological currents.

Postwar Americans enjoyed a world where the existence of an objective, knowable reality just seemed like common sense, where alternate facts belonged only to fringe realms of the deluded or deluding. But a shared sense of reality is not natural. It is the product of social institutions that were once so powerful they could hold together a shared picture of the world, but are now well along a path of decline. In the hope of maintaining their power, some have even begun to abandon the project of objectivity altogether.

Attempts to restore consensus reality by force — the current implicit project of the establishment — are doomed to failure. The only question now is how we will adapt our institutions to a life together where a shared picture of the world has been shattered.

This series aims to trace the forces that broke consensus reality. More than a history of the rise and fall of facts, these essays attempt to show a technological reordering of social reality unlike any before encountered, and an accompanying civilizational shift not seen in five hundred years…(More)”.

Unleashing the power of big data to guide precision medicine in China


Article by Yvaine Ye in Nature: “Precision medicine in China was given a boost in 2016 when the government included the field in its 13th five-year economic plan. The policy blueprint, which defined the country’s spending priorities until 2020, pledged to “spur innovation and industrial application” in precision medicine alongside other areas such as smart vehicles and new materials.

Precision medicine is part of the Healthy China 2030 plan, also launched in 2016. The idea is to use the approach to tackle some major health-care challenges the country faces, such as rising cancer rates and issues related to an ageing population. Current projections suggest that, by 2040, 28% of China’s population will be over 60 years old.

Following the announcement of the five-year plan, China’s Ministry of Science and Technology (MOST) launched a precision-medicine project as part of its National Key Research and Development Program. MOST has invested about 1.3 billion yuan (US$200.4 million) in more than 100 projects from 2016 to 2018. These range from finding new drug targets for chronic diseases such as diabetes to developing better sequencing technologies and building a dozen large population cohorts comprising hundreds of thousands of people from across China.

China’s population of 1.4 billion people means the country has great potential for using big data to study health issues, says Zhengming Chen, an epidemiologist and chronic-disease researcher at the University of Oxford, UK. “The advantage is especially prominent in the research of rare diseases, where you might not be able to have a data set in smaller countries like the United Kingdom, where only a handful of cases exist,” says Chen, who leads the China Kadoorie Biobank, a chronic-disease initiative that launched in 2004. It recruited more than 510,000 adults from 10 regions across China in its first 4 years, collecting data through questionnaires and by recording physical measurements and storing participants’ blood samples for future study. So far, the team has investigated whether some disease-related lifestyle factors that have been identified in the West apply to the Chinese population. They have just begun to dig into participants’ genetic data, says Chen.

Another big-data precision-medicine project launched in 2021, after Huijun Yuan, a physician who has been researching hereditary hearing loss for more than two decades, founded the Institute of Rare Diseases at West China Hospital in Chengdu, Sichuan province, in 2020. By 2025, the institute plans to set up a database of 100,000 people from China who have rare conditions, including spinal muscular atrophy and albinism. It will contain basic health information and data relating to biological samples, such as blood for gene sequencing. Rare diseases are hard to diagnose because their incidence is low. But the development of technologies such as genetic testing and artificial intelligence driven by big data is providing a fresh approach to diagnosing these rare conditions, and could pave the way for therapies…(More)”.

A New Model for Saving Lives on Roads Around the World


Article by Krishen Mehta & Piyush Tewari: “…In 2016, SaveLIFE Foundation (SLF), an Indian non-profit organization, introduced the Zero Fatality Corridor (ZFC) solution, which has, since its inception, delivered an unprecedented reduction in road crash fatalities on the stretches of road where it has been deployed. The ZFC solution has adapted and added to the Safe System Approach, traditionally a western concept, to make it suitable for Indian conditions and requirements.

The Safe System Approach recognizes that people are fallible and can make mistakes that may be fatal for them or their fellow road-users—irrespective of how well they are trained.

The ZFC model, in turn, is an innovation designed specifically to accommodate the realities, resources, and existing infrastructure of low- and middle-income countries, which are vastly different from those of their developed counterparts. For example, unlike in developed nations, people in low- and middle-income countries often live closer to the highways and use them daily on foot or via traditional, slower modes of transportation. This gives rise to areas of high crash conflict.

Some of the practices that are a part of the ZFC solution include optimized placement of ambulances at high-fatality locations, the use of drones to identify parked vehicles and preemptively prevent rear-end collisions, and road-engineering solutions unique to the realities of countries like India. The ZFC model has helped create a secure environment specific to such countries with safer roads, safer vehicles, safer speeds, safer drivers, and rapid post-crash response.
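One way to read "optimized placement of ambulances at high-fatality locations" is as a maximum-coverage problem: pick station sites so that as many past fatalities as possible fall within a response radius. The greedy sketch below is an illustrative framing, not SLF's actual method; all coordinates, weights, and the radius are hypothetical.

```python
from math import hypot

def greedy_placement(crashes, sites, k, radius_km):
    """crashes: list of (x_km, y_km, fatalities). Greedily choose k of
    the candidate sites, each time taking the site that covers the most
    not-yet-covered fatalities within radius_km."""
    chosen, uncovered, candidates = [], list(crashes), list(sites)
    for _ in range(k):
        def weight(site):
            return sum(f for x, y, f in uncovered
                       if hypot(x - site[0], y - site[1]) <= radius_km)
        best = max(candidates, key=weight)
        chosen.append(best)
        candidates.remove(best)
        # Remove crashes now covered by the newly placed ambulance.
        uncovered = [c for c in uncovered
                     if hypot(c[0] - best[0], c[1] - best[1]) > radius_km]
    return chosen

# Usage: place 3 ambulances along a 95 km expressway, 10 km radius.
crashes = [(5, 0, 3), (12, 1, 1), (40, 0, 5), (44, 1, 2), (80, 0, 4)]
sites = [(x, 0) for x in range(0, 96, 5)]
print(greedy_placement(crashes, sites, k=3, radius_km=10))
```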

The ZFC model was first deployed in 2016 on the Mumbai-Pune Expressway (MPEW) in Maharashtra, through a collaboration between SLF, Maharashtra State Road Development Corporation (MSRDC), and automaker Mahindra & Mahindra. From 2010 to 2016, the 95-kilometer stretch witnessed 2,579 crashes and 887 fatalities, making it one of India’s deadliest roads…(More)”.

How Covid Tracking Apps Are Pivoting for Commercial Profit


Article by Matt Reynolds and Morgan Meaker: “…At its peak, 2.4 million people tracked their symptoms using the Covid Symptom Tracker. It was one of three surveillance studies the UK government used to track and respond to new outbreaks. Data from the tracker led to the UK government adding loss of smell and taste to the official list of Covid-19 symptoms. Between August 2020 and March 2022, the app was funded with £5.1 million ($6.2 million) from the Department of Health and Social Care.

But in early May 2022, Zoe announced in an email to users that its Covid tracking app would no longer be just a place for people to report their Covid symptoms. The Covid Symptom Tracker was becoming the Zoe Health Study, which asks people to take 10 seconds a day to log their mental and physical health beyond Covid. People who agree to take part in this wider study are asked to establish their baseline health—reporting everything from hair loss to mouth ulcers—as well as providing daily health updates. The company says this data will be used to “fight the most important health issues of our time,” but that it might also be used to develop commercial health, nutrition, and lifestyle products. (Zoe also sells nutrition tests and subscriptions to a personalized nutrition platform.)

Zoe isn’t the only Covid app developer pivoting away from the pandemic. In Berlin, a contact-tracing app called Luca is reinventing itself as a payment system, while in northern Italy an app set up to track coronavirus cases now warns citizens about natural disasters. With the most urgent phase of the pandemic now over, developers are looking for ways to squeeze more value out of the users who have downloaded their apps. The great Covid-19 data pivot is well and truly underway…(More)”.