How Twitter gives scientists a window into human happiness and health


From the Conversation: “Since its public launch 10 years ago, Twitter has been used as a social networking platform among friends, an instant messaging service for smartphone users and a promotional tool for corporations and politicians.

But it’s also been an invaluable source of data for researchers and scientists – like myself – who want to study how humans feel and function within complex social systems.

By analyzing tweets, we’ve been able to observe and collect data on the social interactions of millions of people “in the wild,” outside of controlled laboratory experiments.

It’s enabled us to develop tools for monitoring the collective emotions of large populations, find the happiest places in the United States and much more.

So how, exactly, did Twitter become such a unique resource for computational social scientists? And what has it allowed us to discover?

Twitter’s biggest gift to researchers

On July 15, 2006, Twttr (as it was then known) publicly launched as a “mobile service that helps groups of friends bounce random thoughts around with SMS.” The ability to send free 140-character group texts drove many early adopters (myself included) to use the platform.

With time, the number of users exploded: from 20 million in 2009 to 200 million in 2012 and 310 million today. Rather than communicating directly with friends, users would simply tell their followers how they felt, respond to news positively or negatively, or crack jokes.

For researchers, Twitter’s biggest gift has been the provision of large quantities of open data. Twitter was one of the first major social networks to provide data samples through something called Application Programming Interfaces (APIs), which enable researchers to query Twitter for specific types of tweets (e.g., tweets that contain certain words), as well as information on users.
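To make that concrete, here is a minimal sketch of such a query using the open-source tweepy library, a common Python client for these APIs; the credentials are placeholders and the search term is arbitrary, so treat it as an illustration rather than a recipe.

```python
import tweepy

# Placeholder credentials, issued when you register an application with Twitter
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

# Query the search API for recent English-language tweets containing a word
for tweet in api.search(q="happy", lang="en", count=100):
    print(tweet.created_at, tweet.text)
```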

This led to an explosion of research projects exploiting this data. Today, a Google Scholar search for “Twitter” produces six million hits, compared with five million for “Facebook.” The difference is especially striking given that Facebook has roughly five times as many users as Twitter (and is two years older).

Twitter’s generous data policy undoubtedly led to some excellent free publicity for the company, as interesting scientific studies got picked up by the mainstream media.

Studying happiness and health

With traditional census data slow and expensive to collect, open data feeds like Twitter have the potential to provide a real-time window onto changes in large populations.

The University of Vermont’s Computational Story Lab was founded in 2006 and studies problems across applied mathematics, sociology and physics. Since 2008, the Story Lab has collected billions of tweets through Twitter’s “Gardenhose” feed, an API that streams a random sample of 10 percent of all public tweets in real time.
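Collecting a stream like this is straightforward in code. Below is a sketch in the style of tweepy 3.x; note that the freely available sample endpoint returns roughly 1 percent of tweets, while the 10 percent Gardenhose required a special agreement with Twitter.

```python
import tweepy

class TweetCollector(tweepy.StreamListener):
    def on_status(self, status):
        # In a real collector, append each sampled tweet to an archive
        print(status.created_at, status.text)

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

# The public sample endpoint streams about 1 percent of all tweets;
# the 10 percent Gardenhose required elevated access
tweepy.Stream(auth=auth, listener=TweetCollector()).sample()
```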

I spent three years at the Computational Story Lab and was lucky to be a part of many interesting studies using this data. For example, we developed a hedonometer that measures the happiness of the Twittersphere in real time. By focusing on geolocated tweets sent from smartphones, we were able to map the happiest places in the United States. Perhaps unsurprisingly, we found Hawaii to be the happiest state and wine-growing Napa the happiest city for 2013.
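The hedonometer's core calculation is simple: each of roughly 10,000 common words carries a crowd-sourced happiness score on a 1-to-9 scale (the labMT lexicon), and a body of tweets is scored by the frequency-weighted average of those values, ignoring a neutral band in the middle. A minimal sketch, with a handful of illustrative scores standing in for the full lexicon:

```python
# Illustrative 1-to-9 happiness scores; the real labMT lexicon
# contains roughly 10,000 crowd-rated words
happiness = {"love": 8.4, "happy": 8.3, "beach": 7.0,
             "rain": 3.8, "sad": 2.4, "hate": 2.2}

def hedonometer_score(tweets, lens=(4.0, 6.0)):
    """Frequency-weighted average word happiness, skipping the
    neutral 'lens' band as the published instrument does."""
    total, count = 0.0, 0
    for tweet in tweets:
        for word in tweet.lower().split():
            score = happiness.get(word)
            if score is not None and not (lens[0] <= score <= lens[1]):
                total += score
                count += 1
    return total / count if count else None

print(hedonometer_score(["such a happy day at the beach", "i hate the rain"]))
```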

A map of 13 million geolocated U.S. tweets from 2013, colored by happiness, with red indicating happiness and blue indicating sadness. PLOS ONE, Author provided

These studies had deeper applications: Correlating Twitter word usage with demographics helped us understand underlying socioeconomic patterns in cities. For example, we could link word usage with health factors like obesity, so we built a lexicocalorimeter to measure the “caloric content” of social media posts. Tweets from a particular region that mentioned high-calorie foods increased the “caloric content” of that region, while tweets that mentioned exercise activities decreased our metric. We found that this simple measure correlates with other health and well-being metrics. In other words, tweets were able to give us a snapshot, at a specific moment in time, of the overall health of a city or region.
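In skeletal form, the lexicocalorimeter is a pair of lookup tables: food phrases add estimated calories to a region's tally, while activity phrases subtract them. The phrase lists and calorie values below are invented for illustration; the published instrument uses curated phrase lists and reports a more careful output-to-input ratio.

```python
# Hypothetical calorie values; the real instrument uses curated phrase lists
calories_in = {"pizza": 285, "donut": 250, "salad": 100}
calories_out = {"running": 300, "yoga": 180, "walking": 150}

def caloric_balance(tweets):
    """Net 'caloric content' of a set of tweets from one region."""
    balance = 0
    for tweet in tweets:
        for word in tweet.lower().split():
            balance += calories_in.get(word, 0)
            balance -= calories_out.get(word, 0)
    return balance

print(caloric_balance(["pizza and a donut for lunch", "running by the river"]))
# 285 + 250 - 300 = 235
```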

Using the richness of Twitter data, we’ve also been able to see people’s daily movement patterns in unprecedented detail. Understanding human mobility patterns, in turn, has the capacity to transform disease modeling, opening up the new field of digital epidemiology….(More)”

Designing an Active, Healthier City


Meera Senthilingam in the New York Times: “Despite a firm reputation for being walkers, New Yorkers have an obesity epidemic on their hands. Lee Altman, a former employee of New York City’s Department of Design and Construction, explains it this way: “We did a very good job at designing physical activity out of our daily lives.”

According to the city’s health department, more than half of the city’s adult population is either overweight (34 percent) or obese (22 percent), and the convenience of their environment has contributed to this. “Everything is dependent on a car, elevator; you sit in front of a computer,” said Altman, “not moving around a lot.”

This is not just a New York phenomenon. Mass urbanization has caused populations the world over to reduce the amount of time they spend moving their bodies. But the root of the problem runs deep in a city’s infrastructure.

Safety, graffiti, proximity to a park, and even the appeal of stairwells all play roles in whether someone chooses to be active or not. But only recently have urban developers begun giving enough priority to these factors.

Planners in New York have now begun employing a method known as “active design” to solve the problem. The approach is part of a global movement to get urbanites onto their streets and enjoying their surroundings on foot, bike or public transport.

“We can impact public health and improve health outcomes through the way that we design,” said Altman, a former active design coordinator for New York City. She now lectures as an adjunct assistant professor in Columbia University’s urban design program.

“The communities that have the least access to well-maintained sidewalks and parks have the highest risk of obesity and chronic disease,” said Joanna Frank, executive director of the nonprofit Center for Active Design. Her work focuses on creating guidelines and reports so that developers and planners are aware, for example, that people have been “less likely to walk down streets, less likely to bike, if they didn’t feel safe, or if the infrastructure wasn’t complete, so you couldn’t get to your destination.”

Even adding items as straightforward as benches and lighting to a streetscape can greatly increase the likelihood of someone’s choosing to walk, she said.

This may seem obvious, but without evidence its importance could be overlooked. “We’ve now established that’s actually the case,” said Frank.

How can things change? According to Frank, four areas are critical: transportation, recreation, buildings and access to food….(More)”

Kids learn about anti-discrimination via online soccer game


Springwise: “As Euro 2016 captures the attention of soccer fanatics around the world, a new app is tapping into the popularity of the event and using it to bring about positive education. EduKicks is a new game for kids that teaches anti-discrimination through gaming and soccer.

Launched earlier this week, the multiplayer game focuses on personal, social, and health education for children aged 9 to 13. After downloading the app on their smartphone or tablet, users take turns spinning a wheel, and face either a movement card or an education card. The movement cards ask players to complete a soccer-related activity, such as tick-tocking with the insides of their feet. Education cards require them to answer a question. For example, the app might ask “How many women working in the football industry have experienced sexism?” and users choose between 22 percent, 66 percent, or 51 percent. Topics cover racism, religious discrimination, sexism, homophobia, disability, and more. The aim is to use the momentum and popularity of football to make learning more engaging and enjoyable….(More)”

Bridging data gaps for policymaking: crowdsourcing and big data for development


From the DevPolicyBlog: “…By far the biggest innovation in data collection is the ability to access and analyse (in a meaningful way) user-generated data. This is data that is generated from forums, blogs, and social networking sites, where users purposefully contribute information and content in a public way, but also from everyday activities that inadvertently or passively provide data to those that are able to collect it.

User-generated data can help identify user views and behaviour to inform policy in a timely way rather than just relying on traditional data collection techniques (census, household surveys, stakeholder forums, focus groups, etc.), which are often cumbersome, very costly, untimely, and in many cases require some form of approval or support by government.

It might seem at first that user-generated data has limited usefulness in a development context due to the importance of the internet in generating this data combined with limited internet availability in many places. However, U-Report is one example of being able to access user-generated data independent of the internet.

U-Report was initiated by UNICEF Uganda in 2011 and is a free SMS-based platform where Ugandans are able to register as “U-Reporters” and on a weekly basis give their views on topical issues (mostly related to health, education, and access to social services) or participate in opinion polls. As an example, Figure 1 shows the result from a U-Report poll on whether polio vaccinators came to U-Reporter houses to immunise all children under 5 in Uganda, broken down by districts. Presently, there are more than 300,000 U-Reporters in Uganda and more than one million U-Reporters across 24 countries that now have U-Report. As an indication of its potential impact on policymaking, UNICEF claims that every Member of Parliament in Uganda is signed up to receive U-Report statistics.

Figure 1: U-Report Uganda poll results

U-Report and other platforms such as Ushahidi (which supports, for example, I PAID A BRIBE, Watertracker, election monitoring, and crowdmapping) facilitate crowdsourcing of data where users contribute data for a specific purpose. In contrast, “big data” is a broader concept because the purpose of using the data is generally independent of the reasons why the data was generated in the first place.

Big data for development is a new phrase that we will probably hear a lot more (see here [pdf] and here). The United Nations Global Pulse, for example, supports a number of innovation labs which work on projects that aim to discover new ways in which data can help better decision-making. Many forms of “big data” are unstructured (free-form and text-based rather than table- or spreadsheet-based) and so a number of analytical techniques are required to make sense of the data before it can be used.

Measures of Twitter activity, for example, can be a real-time indicator of food price crises in Indonesia [pdf] (see Figure 2 below which shows the relationship between food-related tweet volume and food inflation: note that the large volume of tweets in the grey highlighted area is associated with policy debate on cutting the fuel subsidy rate) or provide a better understanding of the drivers of immunisation awareness. In these examples, researchers “text-mine” Twitter feeds by extracting tweets related to topics of interest and categorising text based on measures of sentiment (positive, negative, anger, joy, confusion, etc.) to better understand opinions and how they relate to the topic of interest. For example, Figure 3 shows the sentiment of tweets related to vaccination in Kenya over time and the dates of important vaccination related events.
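A stripped-down version of that pipeline, filtering a stream to on-topic tweets and tallying matches against sentiment word lists, might look like the sketch below; the topic terms and lexicon are invented for illustration, and real studies use far larger curated or crowd-scored word lists.

```python
# Toy sentiment lexicon; real studies use far larger curated lists
sentiment_words = {
    "positive": {"safe", "effective", "great", "protected"},
    "negative": {"worried", "shortage", "unsafe", "rumour"},
}
topic_terms = {"vaccine", "vaccination", "immunisation"}

def categorise(tweet):
    """Return the dominant sentiment of an on-topic tweet, else None."""
    words = set(tweet.lower().split())
    if not words & topic_terms:
        return None  # off-topic, discard
    counts = {cat: len(words & vocab) for cat, vocab in sentiment_words.items()}
    return max(counts, key=counts.get) if any(counts.values()) else "neutral"

print(categorise("worried about the vaccine shortage"))  # negative
```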

Figure 2: Plot of monthly food-related tweet volume and official food price statistics

Figure 3: Sentiment of vaccine-related tweets in Kenya

Another big data example is the use of mobile phone data to monitor the movement of populations in Senegal in 2013. The data can help to identify changes in the mobility patterns of vulnerable population groups and thereby provide an early warning system to inform humanitarian response efforts.
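The underlying aggregation can be sketched simply. Assuming hypothetical call-detail records of the form (user, day, cell tower), counting distinct users per tower per day yields a time series whose sudden shifts can flag unusual population movements:

```python
from collections import Counter

# Hypothetical call-detail records: (user_id, day, cell_tower_id)
cdrs = [("u1", "2013-01-01", "towerA"), ("u1", "2013-01-02", "towerB"),
        ("u2", "2013-01-01", "towerA"), ("u2", "2013-01-02", "towerA"),
        ("u2", "2013-01-02", "towerA")]  # repeat calls count once per day

def daily_tower_counts(records):
    """Distinct users observed at each tower on each day."""
    seen = set(records)  # deduplicate repeat calls by the same user
    return Counter((day, tower) for _, day, tower in seen)

print(daily_tower_counts(cdrs))
```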

The development of mobile banking, too, offers the potential to generate a staggering amount of data relevant for development research and for informing policy decisions. However, it also highlights the public-good nature of data collected by public and private sector institutions and the reliance that researchers have on those institutions for access to the data. Building trust and a reputation for being able to manage privacy and commercial issues will be a major challenge for researchers in this regard….(More)”

Priorities for the National Privacy Research Strategy


James Kurose and Keith Marzullo at the White House: “Vast improvements in computing and communications are creating new opportunities for improving life and health, eliminating barriers to education and employment, and enabling advances in many sectors of the economy. The promise of these new applications frequently comes from their ability to create, collect, process, and archive information on a massive scale.

However, the rapid increase in the quantity of personal information that is being collected and retained, combined with our increased ability to analyze and combine it with other information, is creating concerns about privacy. When information about people and their activities can be collected, analyzed, and repurposed in so many ways, it can create new opportunities for crime, discrimination, inadvertent disclosure, embarrassment, and harassment.

This Administration has been a strong champion of initiatives to improve the state of privacy, such as the “Consumer Privacy Bill of Rights” proposal and the creation of the Federal Privacy Council. Similarly, the White House report Big Data: Seizing Opportunities, Preserving Values highlights the need for large-scale privacy research, stating: “We should dramatically increase investment for research and development in privacy-enhancing technologies, encouraging cross-cutting research that involves not only computer science and mathematics, but also social science, communications and legal disciplines.”

Today, we are pleased to release the National Privacy Research Strategy. Research agencies across government participated in the development of the strategy, reviewing existing Federal research activities in privacy-enhancing technologies, soliciting inputs from the private sector, and identifying priorities for privacy research funded by the Federal Government. The National Privacy Research Strategy calls for research along a continuum of challenges, from how people understand privacy in different situations and how their privacy needs can be formally specified, to how these needs can be addressed, to how to mitigate and remediate the effects when privacy expectations are violated. This strategy proposes the following priorities for privacy research:

  • Foster a multidisciplinary approach to privacy research and solutions;
  • Understand and measure privacy desires and impacts;
  • Develop system design methods that incorporate privacy desires, requirements, and controls;
  • Increase transparency of data collection, sharing, use, and retention;
  • Assure that information flows and use are consistent with privacy rules;
  • Develop approaches for remediation and recovery; and
  • Reduce privacy risks of analytical algorithms.

With this strategy, our goal is to produce knowledge and technology that will enable individuals, commercial entities, and the Federal Government to benefit from technological advancements and data use while proactively identifying and mitigating privacy risks. Following the release of this strategy, we are also launching a Federal Privacy R&D Interagency Working Group, which will lead the coordination of the Federal Government’s privacy research efforts. Among the group’s first public activities will be to host a workshop to discuss the strategic plan and explore directions of follow-on research. It is our hope that this strategy will also inspire parallel efforts in the private sector….(More)”

Why we no longer trust the experts


Gillian Tett in the Financial Times: “Last week, I decided to take a gaggle of kids for an end-of-school-year lunch in a New York neighbourhood that I did not know well. I duly began looking for a suitable restaurant. A decade ago, I would have done that by turning to a restaurant guide. In the world I grew up in, it was normal to seek advice from the “experts”.

But in Manhattan last week, it did not occur to me to consult Fodor’s. Instead, I typed what I needed into my cellphone, scrolled through a long list of online restaurant recommendations, including comments from people who had eaten in them — and picked one.

Yes, it was a leap of faith; those restaurant reviews might have been fake. But there were enough voices for me to feel able to trust the wisdom of the cyber crowds — and, as it happened, our lunch choice was very good.

This is a trivial example of a much bigger change that is under way, and one that has some thought-provoking implications in the wake of the Brexit vote. Before the referendum, British citizens were subjected to a blitz of advice about the potential costs of Brexit from “experts”: economists, central bankers, the International Monetary Fund and world leaders, among others. Indeed, the central strategy of the government (and other “Remainers”) appeared to revolve around wheeling out these experts, with their solemn speeches and statistics….

I suspect that it indicates something else: that citizens of the cyber world no longer have much faith in anything that experts say, not just in the political sphere but in numerous others too. At a time when we increasingly rely on crowd-sourced advice rather than official experts to choose a restaurant, healthcare and holidays, it seems strange to expect voters to listen to official experts when it comes to politics.

In our everyday lives, we are moving from a system based around vertical axes of trust, where we trust people who seem to have more authority than we do, to one predicated on horizontal axes of trust: we take advice from our peer group.

You can see this clearly if you look at the surveys conducted by groups such as the Pew Research Center. These show that faith in institutions such as the government, big business and the media has crumbled in recent years; indeed, almost the only institution in the US that has bucked the trend is the military.

What is even more interesting to look at, however, are the areas where trust remains high. In an annual survey conducted by the Edelman public relations firm, people in 20 countries are asked who they trust. The results show rising confidence in the “a person like me” category, and surprisingly high trust in digital technology. We live in a world where we increasingly trust our Facebook friends and the Twitter crowd more than we do the IMF or the prime minister.

In some senses, this is good news. Relying on horizontal axes of trust should mean more democracy and empowerment for ordinary citizens. But the problem of this new world is that people can fall prey to social fads and tribalism — or groupthink…..

Either way, nobody is going to put this genie back into the bottle. So we all need to think about what creates the bonds of “trust” in today’s world. And recognise that the 20th-century model of politics, with its reverence for experts and fixed parties, may eventually seem as outdated as restaurant guides. We live in volatile times…(More)”

Data-Driven Justice Initiative, Disrupting Cycle of Incarceration


The White House: “Every year, more than 11 million people move through America’s 3,100 local jails, many on low-level, non-violent misdemeanors, costing local governments approximately $22 billion a year. In local jails, 64 percent of people suffer from mental illness, 68 percent have a substance abuse disorder, and 44 percent suffer from chronic health problems. Communities across the country have recognized that a relatively small number of these highly vulnerable people cycle repeatedly not just through local jails, but also hospital emergency rooms, shelters, and other public systems, receiving fragmented and uncoordinated care at great cost to American taxpayers, with poor outcomes.

For example, Miami-Dade County, Florida, found that 97 people with serious mental illness accounted for $13.7 million in services over four years, spending more than 39,000 days in either jail, emergency rooms, state hospitals or psychiatric facilities in their county. In response, the county provided key mental health de-escalation training to their police officers and 911 dispatchers and, over the past five years, Miami-Dade police have responded to nearly 50,000 calls for service for people in mental health crisis, but have made only 109 arrests, diverting more than 10,000 people to services or safely stabilizing situations without arrest. The jail population fell from over 7,000 to just over 4,700, and the county was able to close an entire jail facility, saving nearly $12 million a year.

In addition, on any given day, more than 450,000 people are held in jail before trial, nearly 63 percent of the local jail population, even though they have not been convicted of a crime. A 2014 study of New York’s Rikers Island jail found more than 86 percent of detained individuals were held on a bond of $500 or less. To tackle the challenges of bail, in 2014 Charlotte-Mecklenburg, N.C., began using a data-based risk assessment tool to identify low-risk people in jail and find ways to release them safely. Since they began using the tool, the jail population has gone down 20 percent, significantly more low-risk individuals have been released from jail, and there has been no increase in reported crime.

To break this cycle of incarceration, the Administration has launched the Data-Driven Justice Initiative with a bipartisan coalition of city, county, and state governments who have committed to using data-driven strategies to divert low-level offenders with mental illness out of the criminal system and to change approaches to pre-trial incarceration so that low-risk offenders no longer stay in jail simply because they cannot afford a bond. These innovative strategies, which have measurably reduced jail populations in several communities, help stabilize individuals and families, better serve communities and, often, save money in the process. DDJ communities commit to:

  1. combining data from across criminal justice and health systems to identify the individuals with the highest number of contacts with police, ambulance, emergency departments, and other services, and leveraging existing resources to link them to health, behavioral health, and social services in the community (a minimal sketch of this kind of data linkage follows this list);
  2. equipping law enforcement and first responders to enable more rapid deployment of tools, approaches, and other innovations they need to safely and more effectively respond to people in mental health crisis and divert people with high needs to identified service providers instead of arrest; and
  3. working towards using objective, data-driven, validated risk assessment tools to inform the safe release of low-risk defendants from jails in order to reduce the jail population held pretrial….(More: FactSheet)”
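As a minimal sketch of the first commitment, imagine contact logs from three separate systems keyed by a shared, anonymized person identifier; summing contacts across systems surfaces the small group of high utilizers who are candidates for coordinated services. The identifiers and counts below are, of course, invented.

```python
from collections import Counter

# Hypothetical contact logs, each entry one touch with that system
police_contacts = ["p1", "p2", "p1", "p3"]
er_visits = ["p1", "p1", "p4"]
shelter_stays = ["p1", "p2"]

def high_utilizers(*systems, top=2):
    """Combine contact counts across systems; return the most frequent."""
    totals = Counter()
    for system in systems:
        totals.update(system)
    return totals.most_common(top)

print(high_utilizers(police_contacts, er_visits, shelter_stays))
# [('p1', 5), ('p2', 2)]
```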

The Surprising History of the Infographic


Clive Thompson at the Smithsonian magazine: “As the 2016 election approaches, we’re hearing a lot about “red states” and “blue states.” That idiom has become so ingrained that we’ve almost forgotten where it originally came from: a data visualization.

In the 2000 presidential election, the race between Al Gore and George W. Bush was so razor close that broadcasters pored over electoral college maps—which they typically colored red and blue. What’s more, they talked about those shadings. NBC’s Tim Russert wondered aloud how George Bush would “get those remaining 61 electoral red states, if you will,” and that language became lodged in the popular imagination. America became divided into two colors—data spun into pure metaphor. Now Americans even talk routinely about “purple” states, a mental visualization of political information.

We live in an age of data visualization. Go to any news website and you’ll see graphics charting support for the presidential candidates; open your iPhone and the Health app will generate personalized graphs showing how active you’ve been this week, month or year. Sites publish charts showing how the climate is changing, how schools are segregating, how much housework mothers do versus fathers. And newspapers are increasingly finding that readers love “dataviz”: In 2013, the New York Times’ most-read story for the entire year was a visualization of regional accents across the United States. It makes sense. We live in an age of Big Data. If we’re going to understand our complex world, one powerful way is to graph it.

But this isn’t the first time we’ve discovered the pleasures of making information into pictures. Over a hundred years ago, scientists and thinkers found themselves drowning in their own flood of data—and to help understand it, they invented the very idea of infographics.

**********

The idea of visualizing data is old: After all, that’s what a map is—a representation of geographic information—and we’ve had maps for about 8,000 years. But it was rare to graph anything other than geography. Only a few examples exist: Around the 11th century, a now-anonymous scribe created a chart of how the planets moved through the sky. By the 18th century, scientists were warming to the idea of arranging knowledge visually. The British polymath Joseph Priestley produced a “Chart of Biography,” plotting the lives of about 2,000 historical figures on a timeline. A picture, he argued, conveyed the information “with more exactness, and in much less time, than it [would take] by reading.”

Still, data visualization was rare because data was rare. That began to change rapidly in the early 19th century, because countries began to collect—and publish—reams of information about their weather, economic activity and population. “For the first time, you could deal with important social issues with hard facts, if you could find a way to analyze it,” says Michael Friendly, a professor of psychology at York University who studies the history of data visualization. “The age of data really began.”

An early innovator was the Scottish inventor and economist William Playfair. As a teenager he apprenticed to James Watt, the Scottish inventor who perfected the steam engine. Playfair was tasked with drawing up patents, which required him to develop excellent drafting and picture-drawing skills. After he left Watt’s lab, Playfair became interested in economics and convinced that he could use his facility for illustration to make data come alive.

“An average political economist would have certainly been able to produce a table for publication, but not necessarily a graph,” notes Ian Spence, a psychologist at the University of Toronto who’s writing a biography of Playfair. Playfair, who understood both data and art, was perfectly positioned to create this new discipline.

In one famous chart, he plotted the price of wheat in the United Kingdom against the cost of labor. People often complained about the high cost of wheat and thought wages were driving the price up. Playfair’s chart showed this wasn’t true: Wages were rising much more slowly than the cost of the product.

Playfair’s trade-balance time-series chart, published in his Commercial and Political Atlas, 1786 (Wikipedia)

“He wanted to discover,” Spence notes. “He wanted to find regularities or points of change.” Playfair’s illustrations often look amazingly modern: In one, he drew pie charts—his invention, too—and lines that compared the size of various countries’ populations against their tax revenues. Once again, the chart produced a new, crisp analysis: The British paid far higher taxes than citizens of other nations.

Neurology was not yet a robust science, but Playfair seemed to intuit some of its principles. He suspected the brain processed images more readily than words: A picture really was worth a thousand words. “He said things that sound almost like a 20th-century vision researcher,” Spence adds. Data, Playfair wrote, should “speak to the eyes”—because they were “the best judge of proportion, being able to estimate it with more quickness and accuracy than any other of our organs.” A really good data visualization, he argued, “produces form and shape to a number of separate ideas, which are otherwise abstract and unconnected.”

Soon, intellectuals across Europe were using data visualization to grapple with the travails of urbanization, such as crime and disease….(More)”

This text-message hotline can predict your risk of depression or stress


Clinton Nguyen for TechInsider: “When counselors are helping someone in the midst of an emotional crisis, they must not only know how to talk – they also must be willing to text.

Crisis Text Line, a non-profit text-message-based counseling service, operates a hotline for people who find it safer or easier to text about their problems than make a phone call or send an instant message. Over 1,500 volunteers are on hand 24/7 to lend support about problems including bullying, isolation, suicidal thoughts, bereavement, self-harm, or even just stress.

But in addition to providing a new outlet for those who prefer to communicate by text, the service is gathering a wellspring of anonymized data.

“We look for patterns in historical conversations that end up being higher risk for self harm and suicide attempts,” Liz Eddy, a Crisis Text Line spokesperson, tells Tech Insider. “By grounding in historical data, we can predict the risk of new texters coming in.”

According to Fortune, the organization is using machine learning to prioritize higher-risk individuals for quicker and more effective responses. But Crisis Text Line is also wielding the data it gathers in other ways – the company has published a page of trends that tells the public which hours or days people are more likely to be affected by certain issues, as well as which US states are most affected by specific crises or psychological states.
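Crisis Text Line has not published the details of its model, but a minimal sketch of that kind of text-based triage, using scikit-learn with invented training snippets, could look like this:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented examples standing in for historical conversations,
# labelled 1 if the conversation turned out to be high-risk
texts = ["i cant take this anymore", "stressed about my exams",
         "i want to hurt myself", "had a fight with my parents"]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(texts, labels)

# Score an incoming texter so higher-risk messages are answered first
print(model.predict_proba(["i really want to hurt myself"])[0][1])
```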

According to the data, residents of Alaska reach out to the Text Line for LGBTQ issues more than those in other states, and Maine is one of the most stressed out states. Physical abuse is most commonly reported in North Dakota and Wyoming, while depression is more prevalent in texters from Kentucky and West Virginia.

The research comes at an especially critical time. According to studies from the National Center for Health Statistics, US suicide rates have surged to a 30-year high. The study noted a rise in suicide rates for all demographics except black men over the age of 75. Alarmingly, the suicide rate among 10- to 14-year-old girls has tripled since 1999….(More)”

The Billions We’re Wasting in Our Jails


Stephen Goldsmith and Jane Wiseman in Governing: “By using data analytics to make decisions about pretrial detention, local governments could find substantial savings while making their communities safer….

Few areas of local government spending present better opportunities for dramatic savings than those that surround pretrial detention. Cities and counties are wasting more than $3 billion a year, and often inducing crime and job loss, by holding the wrong people while they await trial. The problem: Only 10 percent of jurisdictions use risk data analytics when deciding which defendants should be detained.

As a result, dangerous people are out in our communities, while many who could be safely in the community are behind bars. Vast numbers of people accused of petty offenses spend their pretrial detention time jailed alongside hardened convicts, learning from them how to be better criminals….

In this era of big data, analytics not only can predict and prevent crime but also can discern who should be diverted from jail to treatment for underlying mental health or substance abuse issues. Avoided costs aggregating in the billions could be better spent on detaining high-risk individuals, more mental health and substance abuse treatment, more police officers and other public safety services.

Jurisdictions that do use data to make pretrial decisions have achieved not only lower costs but also greater fairness and lower crime rates. Washington, D.C., releases 85 percent of defendants awaiting trial. Compared to the national average, those released in D.C. are two and a half times more likely to remain arrest-free and one and a half times as likely to show up for court.

Louisville, Ky., implemented risk-based decision-making using a tool developed by the Laura and John Arnold Foundation and now releases 70 percent of defendants before trial. Those released have turned out to be twice as likely to return to court and to stay arrest-free as those in other jurisdictions. Mesa County, Colo., and Allegheny County, Pa., both have achieved significant savings from reduced jail populations due to data-driven release of low-risk defendants.
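The validated instruments themselves are built from large historical datasets, but the general shape of a points-based score is easy to illustrate. The factors and weights below are invented, not those of the Arnold Foundation tool or any real instrument:

```python
def pretrial_risk_points(age, prior_convictions, prior_failures_to_appear,
                         violent_charge):
    """Toy points-based pretrial risk score; illustrative only."""
    points = 0
    points += 2 if age < 23 else 0
    points += min(prior_convictions, 3)          # cap the contribution
    points += 2 * min(prior_failures_to_appear, 2)
    points += 2 if violent_charge else 0
    return points  # e.g. recommend release when points <= 2

print(pretrial_risk_points(age=30, prior_convictions=0,
                           prior_failures_to_appear=0, violent_charge=False))
```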

Data-driven approaches are beginning to produce benefits not only in the area of pretrial detention but throughout the criminal justice process. Dashboards now in use in a handful of jurisdictions allow not only administrators but also the public to see court waiting times by offender type and to identify and address processing bottlenecks….(More)”