Reach is crowdsourcing street criminal incidents to reduce crime in Lima


Michael Krumholtz at LATAM Tech: “Unfortunately, in Latin America and many other places around the world, robberies are a part of urban life. Moisés Salazar of Lima has been a victim of petty crime in the streets, which is what led him to create Reach.

The application, which markets itself as a kind of Waze for street crime, alerts users through a map of reported incidents and crimes that displays visibly on your phone….

Salazar said that Reach helps users before, during and after incidents that could victimize them. That’s because the map allows users to avoid certain areas where a crime may have just happened or is being carried out.

In addition, there is a panic button that users can push if they find themselves in danger or in need of authorities. After the fact, that data then gets made public and can be analyzed by expert users or authorities wanting to see which incidents occur most commonly and where they occur.

Reach is very similar to the U.S. application Citizen, which is a crime avoidance tool used in major metropolitan areas in the U.S. like New York. That application alerts users to crime reports in their neighborhoods and gives them a forum to either record anything they witness or talk about it with other users….(More)”.

Odd Numbers: Algorithms alone can’t meaningfully hold other algorithms accountable


Frank Pasquale at Real Life Magazine: “Algorithms increasingly govern our social world, transforming data into scores or rankings that decide who gets credit, jobs, dates, policing, and much more. The field of “algorithmic accountability” has arisen to highlight the problems with such methods of classifying people, and it has great promise: Cutting-edge work in critical algorithm studies applies social theory to current events; law and policy experts seem to publish new articles daily on how artificial intelligence shapes our lives, and a growing community of researchers has developed a field known as “Fairness, Accountability, and Transparency in Machine Learning.”

The social scientists, attorneys, and computer scientists promoting algorithmic accountability aspire to advance knowledge and promote justice. But what should such “accountability” more specifically consist of? Who will define it? At a two-day, interdisciplinary roundtable on AI ethics I recently attended, such questions featured prominently, and humanists, policy experts, and lawyers engaged in a free-wheeling discussion about topics ranging from robot arms races to computationally planned economies. But at the end of the event, an emissary from a group funded by Elon Musk and Peter Thiel among others pronounced our work useless. “You have no common methodology,” he informed us (apparently unaware that that’s the point of an interdisciplinary meeting). “We have a great deal of money to fund real research on AI ethics and policy”— which he thought of as dry, economistic modeling of competition and cooperation via technology — “but this is not the right group.” He then gratuitously lashed out at academics in attendance as “rent seekers,” largely because we had the temerity to advance distinctive disciplinary perspectives rather than fall in line with his research agenda.

Most corporate contacts and philanthrocapitalists are more polite, but their sense of what is realistic and what is utopian, what is worth studying and what is mere ideology, is strongly shaping algorithmic accountability research in both social science and computer science. This influence in the realm of ideas has powerful effects beyond it. Energy that could be put into better public transit systems is instead diverted to perfect the coding of self-driving cars. Anti-surveillance activism transmogrifies into proposals to improve facial recognition systems to better recognize all faces. To help payday-loan seekers, developers might design data-segmentation protocols to show them what personal information they should reveal to get a lower interest rate. But the idea that such self-monitoring and data curation can be a trap, disciplining the user in ever finer-grained ways, remains less explored. Trying to make these games fairer, the research elides the possibility of rejecting them altogether….(More)”.

From Code to Cure


David J. Craig at Columbia Magazine: “Armed with enormous amounts of clinical data, teams of computer scientists, statisticians, and physicians are rewriting the rules of medical research….

The deluge is upon us.

We are living in the age of big data, and with every link we click, every message we send, and every movement we make, we generate torrents of information.

In the past two years, the world has produced more than 90 percent of all the digital data that has ever been created. New technologies churn out an estimated 2.5 quintillion bytes per day. Data pours in from social media and cell phones, weather satellites and space telescopes, digital cameras and video feeds, medical records and library collections. Technologies monitor the number of steps we walk each day, the structural integrity of dams and bridges, and the barely perceptible tremors that indicate a person is developing Parkinson’s disease. These are the building blocks of our knowledge economy.

This tsunami of information is also providing opportunities to study the world in entirely new ways. Nowhere is this more evident than in medicine. Today, breakthroughs are being made not just in labs but on laptops, as biomedical researchers trained in mathematics, computer science, and statistics use powerful new analytic tools to glean insights from enormous data sets and help doctors prevent, treat, and cure disease.

“The medical field is going through a major period of transformation, and many of the changes are driven by information technology,” says George Hripcsak ’85PS, ’00PH, a physician who chairs the Department of Biomedical Informatics at Columbia University Irving Medical Center (CUIMC). “Diagnostic techniques like genomic screening and high-resolution imaging are generating more raw data than we’ve ever handled before. At the same time, researchers are increasingly looking outside the confines of their own laboratories and clinics for data, because they recognize that by analyzing the huge streams of digital information now available online they can make discoveries that were never possible before.” …

Consider, for example, what the young computer scientist Nicholas Tatonetti has been able to accomplish in recent years by mining an FDA database of prescription-drug side effects. The archive, which contains millions of reports of adverse drug reactions that physicians have observed in their patients, is continuously monitored by government scientists whose job it is to spot problems and pull drugs off the market if necessary. And yet by drilling down into the database with his own analytic tools, Tatonetti has found evidence that dozens of commonly prescribed drugs may interact in dangerous ways that have previously gone unnoticed. Among his most alarming findings: the antibiotic ceftriaxone, when taken with the heartburn medication lansoprazole, can trigger a type of heart arrhythmia called QT prolongation, which is known to cause otherwise healthy people to suddenly drop dead…(More)”
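Signal detection in adverse-event archives of this kind is commonly done with disproportionality measures such as the proportional reporting ratio (PRR). A minimal sketch, using made-up counts rather than real FAERS data (the article does not specify Tatonetti's actual method):

```python
# Proportional reporting ratio (PRR): a standard disproportionality
# measure for flagging drug-event signals in spontaneous reporting
# databases such as the FDA's FAERS.

def prr(a, b, c, d):
    """a: reports with the drug and the event
       b: reports with the drug, without the event
       c: reports without the drug, with the event
       d: reports without the drug or the event"""
    rate_drug = a / (a + b)    # event rate among reports mentioning the drug
    rate_other = c / (c + d)   # event rate among all other reports
    return rate_drug / rate_other

# Illustrative counts, not real FAERS data:
signal = prr(a=30, b=970, c=300, d=99700)
print(round(signal, 1))  # 10.0 -> event reported ~10x more often with the drug
```

A PRR well above 1 is the kind of statistical red flag that prompts deeper review of a drug pair; it indicates association in the reports, not proven causation.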

The Slippery Math of Causation


Pradeep Mutalik for Quanta Magazine: “You often hear the admonition “correlation does not imply causation.” But what exactly is causation? Unlike correlation, which has a specific mathematical meaning, causation is a slippery concept that has been debated by philosophers for millennia. It seems to get conflated with our intuitions or preconceived notions about what it means to cause something to happen. One common-sense definition might be to say that causation is what connects one prior process or agent — the cause — with another process or state — the effect. This seems reasonable, except that it is useful only when the cause is a single factor, and the connection is clear. But reality is rarely so simple.

Although we tend to credit or blame things on a single major cause, in nature and in science there are almost always multiple factors that have to be exactly right for an event to take place. For example, we might attribute a forest fire to the carelessly thrown cigarette butt, but what about the grassy tract leading to the forest, the dryness of the vegetation, the direction of the wind and so on? All of these factors had to be exactly right for the fire to start. Even though many tossed cigarette butts don’t start fires, we zero in on human actions as causes, ignoring other possibilities, such as sparks from branches rubbing together or lightning strikes, or acts of omission, such as failing to trim the grassy path short of the forest. And we tend to focus on things that can be manipulated: We overlook the direction of the wind because it is not something we can control. Our scientifically incomplete intuitive model of causality is nevertheless very useful in practice, and helps us execute remedial actions when causes are clearly defined. In fact, artificial intelligence pioneer Judea Pearl has published a new book about why it is necessary to teach cause and effect to intelligent machines.

However, clearly defined causes may not always exist. Complex, interdependent multifactorial causes arise often in nature and therefore in science. Most scientific disciplines focus on different aspects of causality in a simplified manner. Physicists may talk about causal influences being unable to propagate faster than the speed of light, while evolutionary biologists may discuss proximate and ultimate causes as mentioned in our previous puzzle on triangulation and motion sickness. But such simple situations are rare, especially in biology and the so-called “softer” sciences. In the world of genetics, the complex multifactorial nature of causality was highlighted in a recent Quanta article by Veronique Greenwood that described the intertwined effects of genes.

One well-known approach to understanding causality is to separate it into two types: necessary and sufficient….(More)”
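The necessary/sufficient distinction can be made concrete with a toy Boolean model of the forest-fire example above (the model and factor names are illustrative):

```python
from itertools import product

# Toy causal model of the forest-fire example: a fire occurs when some
# ignition source is present AND the vegetation is dry.
def fire(cigarette, lightning, dry_vegetation):
    return (cigarette or lightning) and dry_vegetation

# Enumerate every combination of the three factors.
scenarios = list(product([False, True], repeat=3))

# A factor is NECESSARY if the effect never occurs without it,
# and SUFFICIENT if the effect always occurs when it is present.
necessary_dry = all(not fire(c, l, d) for c, l, d in scenarios if not d)
sufficient_cig = all(fire(c, l, d) for c, l, d in scenarios if c)

print(necessary_dry)   # True: no dry vegetation, no fire
print(sufficient_cig)  # False: a cigarette alone cannot ignite wet ground
```

Even in this tiny model, the cigarette is neither necessary (lightning could start the fire) nor sufficient (dry vegetation is also required), which is exactly why single-cause intuitions mislead.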

Improving refugee integration through data-driven algorithmic assignment


Kirk Bansak et al. in Science Magazine: “Developed democracies are settling an increased number of refugees, many of whom face challenges integrating into host societies. We developed a flexible data-driven algorithm that assigns refugees across resettlement locations to improve integration outcomes. The algorithm uses a combination of supervised machine learning and optimal matching to discover and leverage synergies between refugee characteristics and resettlement sites.

The algorithm was tested on historical registry data from two countries with different assignment regimes and refugee populations, the United States and Switzerland. Our approach led to gains of roughly 40 to 70%, on average, in refugees’ employment outcomes relative to current assignment practices. This approach can provide governments with a practical and cost-efficient policy tool that can be immediately implemented within existing institutional structures….(More)”.
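The two-stage design described above (predict an employment outcome for each refugee-site pair, then compute an optimal one-to-one assignment) can be sketched as follows. The scores and the brute-force matcher are purely illustrative; the paper's actual models and constraints are far richer:

```python
from itertools import permutations

# Stage 1 (assumed): a supervised model predicts, for each refugee i and
# resettlement site j, the probability of finding employment there.
# These scores are made up for illustration.
predicted_employment = [
    [0.30, 0.55, 0.10],   # refugee 0 at sites A, B, C
    [0.60, 0.20, 0.40],   # refugee 1
    [0.25, 0.35, 0.70],   # refugee 2
]

# Stage 2: optimal matching. Choose a one-to-one assignment of refugees
# to sites that maximizes total predicted employment. Brute force works
# at this toy scale; at real scale one would use the Hungarian algorithm
# (e.g., scipy.optimize.linear_sum_assignment).
n = len(predicted_employment)
best = max(permutations(range(n)),
           key=lambda p: sum(predicted_employment[i][p[i]] for i in range(n)))
total = sum(predicted_employment[i][best[i]] for i in range(n))

print(best)             # (1, 0, 2): refugee 0 -> site B, 1 -> site A, 2 -> site C
print(round(total, 2))  # 1.85
```

Note that the optimal assignment sends no refugee to their individually worst site, yet no refugee necessarily gets their individually best site either: the objective is the aggregate outcome across the whole cohort.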

How Tenants Use Digital Mapping to Track Bad Landlords and Gentrification


Hannah Norman at Yes! Magazine: “When Teresa Salazar first encountered the notice posted to her front door—which offered tenants $10,000 to move out of their East Oakland, California, apartment building—she knew the place she called home was in jeopardy.

“All of us were surprised and afraid because it is not easy to move to some other place when the rents are so high,” Salazar said in a video produced by the Anti-Eviction Mapping Project (AEMP). The project uses mapping as well as data analysis and digital storytelling as organizing tools for low-income tenants to combat eviction and displacement amid the Bay Area’s raging housing crisis.

The jarring move-out offer was left by the Bay Area Property Group, founded by landlord attorney Daniel Bornstein—known for holding landlord workshops on how to evict tenants. The property management firm buys and flips apartment buildings, Salazar said, driving gentrification in neighborhoods like hers. In fear of being displaced, Salazar and other tenants from her building met with counselors from Causa Justa :: Just Cause, a community legal services group. There, they learned about their rights under Oakland’s Just Cause of Eviction Ordinance. With this information, they successfully stood their ground and remained in their homes.

But not all Bay Area tenants are as fortunate as Salazar. Between 2005 and 2015, Oakland witnessed more than 32,402 unlawful detainers, or eviction proceedings, according to data obtained by AEMP through record requests. But AEMP hopes to change these statistics by arming tenants and housing advocates with map-based data to fight evictions and displacements and, ultimately, drive local and state policies on the issue. In addition to mapping, AEMP uses videos of tenants like Salazar to raise awareness of the human experience behind jaw-dropping statistics.

The project is part of a rising tide of social justice cartography, where maps are being harnessed for activism as the technology becomes more accessible….(More)”.

GovEx Launches First International Open Data Standards Directory


GT Magazine: “…A nonprofit gov tech group has created an international open data standards directory, aspiring to give cities a singular resource for guidance on formatting data they release to the public…The nature of municipal data is nuanced and diverse, and the format in which it is released often varies depending on subject matter. In other words, a format that works well for public safety data is not necessarily the same that works for info about building permits, transit or budgets. Not having a coordinated and agreed-upon resource to identify the best standards for these different types of info, Nicklin said, creates problems.

One such problem is that it can be time-consuming and challenging for city government data workers to research and identify ideal formats for data. Another is that the lack of info leads to discord between different jurisdictions, meaning one city might format a data set about economic development in an entirely different way than another, making collaboration and comparisons problematic.

What the directory does is provide a list of standards that are in use within municipal governments, as well as an evaluation based on how frequent that use is, whether the format is machine-readable, and whether users have to pay to license it, among other factors.
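A hypothetical directory entry might carry exactly those evaluation fields. The schema below is illustrative, not GovEx's actual format (though GTFS and BLDS are real open data standards):

```python
# Hypothetical shape of a standards-directory entry, built from the
# evaluation criteria described above; field names are illustrative.
standards = [
    {"name": "GTFS", "domain": "transit", "adoption": "widespread",
     "machine_readable": True, "license_fee": False},
    {"name": "BLDS", "domain": "building permits", "adoption": "moderate",
     "machine_readable": True, "license_fee": False},
    {"name": "Example Paid Spec", "domain": "budgets", "adoption": "rare",
     "machine_readable": False, "license_fee": True},
]

# Filter the way a city data worker might: free, machine-readable standards.
usable = [s["name"] for s in standards
          if s["machine_readable"] and not s["license_fee"]]
print(usable)  # ['GTFS', 'BLDS']
```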

The directory currently contains 60 standards, some of which are in Spanish, and those involved with the project say they hope to expand their efforts to include more languages. There is also a crowdsourcing component to the directory, in that users are encouraged to make additions and updates….(More)”

Science’s Next Frontier? It’s Civic Engagement


Louise Lief at Discover Magazine: “…As a lay observer who has explored scientists’ relationship to the public, I have often wondered why many scientists and scientific institutions continue to rely on what is known as the “deficit model” of science communication, despite its well-documented shortcomings and even a backfire effect. This approach views the public as “empty vessels” or “warped minds” ready to be set straight with facts. Perhaps many scientists continue to use it because it’s familiar and mimics classroom instruction. But it’s not doing the job.

Scientists spend much of their time with the public defending science, and little time building trust.

Many scientists also give low priority to trust building. At the 2016 American Association for the Advancement of Science conference, Michigan State University professor John C. Besley showed the results of a survey of scientists’ priorities for engaging with the public online.

Scientists are focusing on the frustrating, reactive task of defending science, spending little time establishing bonds of trust with the public, which comes in last as a professional priority. How much more productive their interactions with the public – and through them, policymakers — would be if establishing trust was a top priority!

There is evidence that the public is hungry for such exchanges. When Research!America asked the public in 2016 how important it is for scientists to inform elected officials and the public about their research and its impact on society, 84 percent said it was very or somewhat important — a number that ironically mirrors the percentage of Americans who cannot name a scientist….

This means scientists need to go even further, venturing into unfamiliar local venues where science may not be mentioned but where communities gather to discuss their problems. Interesting new opportunities to do this are emerging nationwide. In 2014 the Chicago Community Trust, one of the nation’s largest community foundations, launched a series of dinners across the city through a program called On the Table, to discuss community problems and brainstorm possible solutions. In 2014, the first year, almost 10,000 city residents participated. In 2017, almost 100,000 Chicago residents took part. Recently the Trust added a grants component to the program, awarding more than $135,000 in small grants to help participants translate their ideas into action….(More)”.

How We Can Stop Earthquakes From Killing People Before They Even Hit


Justin Worland in Time Magazine: “…Out of that realization came a plan to reshape disaster management using big data. Just a few months later, Wani worked with two fellow Stanford students to create a platform to predict the toll of natural disasters. The concept is simple but also revolutionary. The One Concern software pulls geological and structural data from a variety of public and private sources and uses machine learning to predict the impact of an earthquake down to individual city blocks and buildings. Real-time information input during an earthquake improves how the system responds. And earthquakes represent just the start for the company, which plans to launch a similar program for floods and eventually other natural disasters….

Previous software might identify a general area where responders could expect damage, but it would appear as a “big red blob” that wasn’t helpful when deciding exactly where to send resources, Dayton says. The technology also integrates information from many sources and makes it easy to parse in an emergency situation when every moment matters. The instant damage evaluations mean fast and actionable information, so first responders can prioritize search and rescue in areas most likely to be worst-hit, rather than responding to 911 calls in the order they are received.

One Concern is not the only company that sees an opportunity to use data to rethink disaster response. The mapping company Esri has built rapid-response software that shows expected damage from disasters like earthquakes, wildfires and hurricanes. And the U.S. government has invested in programs to use data to shape disaster response at agencies like the National Oceanic and Atmospheric Administration (NOAA)….(More)”.

Software used to predict crime can now be scoured for bias


Dave Gershgorn in Quartz: “Predictive policing, or the idea that software can foresee where crime will take place, is being adopted across the country—despite being riddled with issues. These algorithms have been shown to disproportionately target minorities, and private companies won’t reveal how their software reached those conclusions.

In an attempt to stand out from the pack, predictive-policing startup CivicScape has released its algorithm and data online for experts to scour, according to Government Technology magazine. The company’s GitHub page is already populated with its code, as well as a variety of documents detailing how its algorithm interprets police data and what variables are included when predicting crime.

“By making our code and data open-source, we are inviting feedback and conversation about CivicScape in the belief that many eyes make our tools better for all,” the company writes on GitHub. “We must understand and measure bias in crime data that can result in disparate public safety outcomes within a community.”…

CivicScape claims to not use race or ethnic data to make predictions, although it is aware of other indirect indicators of race that could bias its software. The software also filters out low-level drug crimes, which have been found to be heavily biased against African Americans.
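The pre-filtering step described here (dropping low-level drug offenses before training) can be sketched in a few lines; the category labels and record shape are illustrative, not CivicScape's actual schema:

```python
# Sketch of bias-motivated pre-filtering: exclude low-level drug offenses
# from incident data before it is used to train a crime-prediction model.
# Category names and fields are illustrative.
EXCLUDED_CATEGORIES = {"drug possession", "drug paraphernalia"}

incidents = [
    {"category": "burglary", "block": "100 Main St"},
    {"category": "drug possession", "block": "200 Oak Ave"},
    {"category": "assault", "block": "300 Pine Rd"},
]

training_data = [i for i in incidents
                 if i["category"] not in EXCLUDED_CATEGORIES]
print([i["category"] for i in training_data])  # ['burglary', 'assault']
```

Filtering the most skewed categories reduces one known source of bias, but, as the article notes, indirect proxies for race can remain in the variables that are kept.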

While this startup might be the first to publicly reveal the inner machinations of its algorithm and data practices, it’s not an assurance that predictive policing can be made fair and transparent across the board.

“Lots of research is going on about how algorithms can be transparent, accountable, and fair,” the company writes. “We look forward to being involved in this important conversation.”…(More)”.