Manju Bansal in MIT Technology Review: “The undocumented guys hanging out in the home-improvement-store parking lot looking for day labor, the neighborhood kids running a lemonade stand, and Al Qaeda terrorists plotting to do harm all have one thing in common: They operate in the underground economy, a shadowy zone where businesses, both legitimate and less so, transact in the currency of opportunity, away from traditional institutions and their watchful eyes.
One might think that this alternative economy is limited to markets that score low on the Transparency International rankings (such as sub-Saharan Africa and South Asia). However, a recent University of Wisconsin report estimates the value of the underground economy in the United States at about $2 trillion, about 15% of the total U.S. GDP. And a 2013 study coauthored by Friedrich Schneider, a noted authority on global shadow economies, estimated the European Union’s underground economy at more than 18% of GDP, or a whopping 2.1 trillion euros. More than two-thirds of the underground activity came from the most developed countries, including Germany, France, Italy, Spain, and the United Kingdom.
Underground economic activity is a multifaceted phenomenon, with implications across the board for national security, tax collections, public-sector services, and more. It includes the activity of any business that relies primarily on old-fashioned cash for most transactions — ranging from legitimate businesses (including lemonade stands) to drug cartels and organized crime.
Though it’s often soiled, heavy to lug around, and easy to lose to theft, cash is still king simply because it is so easy to hide from the authorities. With the help of the right bank or financial institution, “dirty” money can easily be laundered and come out looking fresh and clean, or at least legitimate. A case in point is the global bank HSBC, which agreed to pay U.S. regulators $1.9 billion in fines to settle charges of money laundering on behalf of Mexican drug cartels. According to a U.S. Senate subcommittee report, that process involved transferring $7 billion in cash from the bank’s branches in Mexico to those in the United States. Just for reference, each $100 bill weighs one gram, so to transfer $7 billion, HSBC had to physically transport 70 metric tons of cash across the U.S.-Mexican border.
The Financial Action Task Force, an intergovernmental body established in 1989, has estimated the total amount of money laundered worldwide to be around 2% to 5% of global GDP. Many of these transactions seem, at first glance, to be perfectly legitimate. Therein lies the conundrum for a banker or a government official: How do you identify, track, control, and, one hopes, prosecute money launderers, when they are hiding in plain sight and their business is couched in networked layers of perfectly defensible legitimacy?
Enter big-data tools, such as those provided by SynerScope, a Holland-based startup that is a member of the SAP Startup Focus program. This company’s solutions help unravel the complex networks hidden behind the layers of transactions and interactions.
Networks, good or bad, are nearly omnipresent in any form of organized human activity, particularly in banking and insurance. SynerScope takes both structured and unstructured data fields and transforms them into interactive visuals that display graphic patterns humans can use to make sense of information quickly. Spotting deviations in complex networked processes can easily be put to use in fraud detection for insurance, banking, e-commerce, and forensic accounting.
SynerScope’s approach to big-data business intelligence centers on data-intensive computation and visualization that extend the human “sense-making” capacity in much the same way that a telescope or microscope extends human vision.
To understand how SynerScope helps authorities track and halt money laundering, it’s important to understand how the networked laundering process works. It typically involves three stages.
1. In the initial, or placement, stage, launderers introduce their illegal profits into the financial system. This might be done by breaking up large amounts of cash into less-conspicuous smaller sums that are then deposited directly into a bank account, or by purchasing a series of monetary instruments (checks, money orders) that are then collected and deposited into accounts at other locations.
2. After the funds have entered the financial system, the launderer commences the second stage, called layering, which uses a series of conversions or transfers to distance the funds from their sources. The funds might be channeled through the purchase and sales of investment instruments, or the launderer might simply wire the funds through a series of accounts at various banks worldwide.
Such use of widely scattered accounts for laundering is especially prevalent in those jurisdictions that do not cooperate in anti-money-laundering investigations. Sometimes the launderer disguises the transfers as payments for goods or services.
3. Having successfully processed the criminal profits through the first two phases, the launderer then proceeds to the third stage, integration, in which the funds re-enter the legitimate economy. The launderer might invest the funds in real estate, luxury assets, or business ventures.
Current detection tools compare individual transactions against preset profiles and rules. Sophisticated criminals quickly learn how to make their illicit transactions look normal for such systems. As a result, rules and profiles need constant and costly updating.
But SynerScope’s flexible visual analysis uses a network angle to detect money laundering. It shows the structure of the entire network with data coming in from millions of transactions, a structure that launderers cannot control. With just a few mouse clicks, SynerScope’s relation and sequence views reveal structural interrelationships and interdependencies. When those patterns are mapped on a time scale, it becomes virtually impossible to hide abnormal flows.
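To make that network-level idea concrete, here is a minimal sketch of structural screening on a transaction graph. It is not SynerScope’s actual method: the tiny transaction log, the choice of the networkx library, and the 0.9 pass-through threshold are all illustrative assumptions.

```python
# A minimal sketch of network-based screening for "pass-through" accounts,
# the structural fingerprint that a layering scheme tends to leave behind.
# Transactions, library choice, and threshold are illustrative assumptions.
import networkx as nx

# (sender, receiver, amount) -- a hypothetical transaction log
transactions = [
    ("A", "B", 9500), ("A", "C", 9400),
    ("B", "D", 9400), ("C", "D", 9300),
    ("D", "E", 18500),
    ("F", "G", 120),  # ordinary, unremarkable activity
]

G = nx.DiGraph()
for src, dst, amt in transactions:
    if G.has_edge(src, dst):
        G[src][dst]["amount"] += amt
    else:
        G.add_edge(src, dst, amount=amt)

def pass_through_score(g, node):
    """Ratio of funds forwarded to funds received; near 1.0 suggests a conduit."""
    inflow = sum(d["amount"] for _, _, d in g.in_edges(node, data=True))
    outflow = sum(d["amount"] for _, _, d in g.out_edges(node, data=True))
    if inflow == 0 or outflow == 0:
        return 0.0
    return min(inflow, outflow) / max(inflow, outflow)

# Flag accounts that forward nearly everything they receive.
for node in G.nodes:
    score = pass_through_score(G, node)
    if score > 0.9:
        print(f"account {node}: possible conduit (pass-through score {score:.2f})")
```

The conduit pattern, an account that forwards nearly everything it receives, is a property of the network’s structure rather than of any single transaction, which is exactly the layer the article says launderers cannot control.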
Cyberlibertarians’ Digital Deletion of the Left
David Golumbia in Jacobin: “The digital revolution, we are told everywhere today, produces democracy. It gives “power to the people” and dethrones authoritarians; it levels the playing field for distribution of information critical to political engagement; it destabilizes hierarchies, decentralizes what had been centralized, democratizes what was the domain of elites.
Most on the Left would endorse these ends. The widespread availability of tools whose uses are harmonious with leftist goals would, one might think, accompany broad advancement of those goals in some form. Yet the Left today is scattered, nearly toothless in most advanced democracies. If digital communication technology promotes leftist values, why has its spread coincided with such a stark decline in the Left’s political fortunes?
Part of this disconnect between advancing technology and a retreating left can be explained by the advent of cyberlibertarianism, a view that widespread computerization naturally produces democracy and freedom.
In the 1990s, UK media theorists Richard Barbrook and Andy Cameron, US journalist Paulina Borsook, and US philosopher of technology Langdon Winner introduced the term to describe a prominent worldview in Silicon Valley and digital culture generally; a related analysis can be found more recently in Stanford communication scholar Fred Turner’s work. While cyberlibertarianism can be defined as a general digital utopianism, summed up by a simple slogan like “computerization will set us free” or “computers provide the solution to any and all problems,” these writers note a specific political formation — one Winner describes as combining “ecstatic enthusiasm for electronically mediated forms of living” with “radical, right-wing libertarian ideas about the proper definition of freedom, social life, economics, and politics.”
There are overt libertarians who are also digital utopians — figures like Jimmy Wales, Eric Raymond, John Perry Barlow, Kevin Kelly, Peter Thiel, Elon Musk, Julian Assange, Dread Pirate Roberts, and Sergey Brin, as well as the members of the Technology Liberation Front who explicitly describe themselves as cyberlibertarians. But the term also describes a wider ideological formation in which people embrace digital utopianism as compatible or even identical with leftist politics opposed to neoliberalism.
In perhaps the most pointed form of cyberlibertarianism, computer expertise is seen as directly applicable to social questions. In The Cultural Logic of Computation, I argue that computational practices are intrinsically hierarchical and shaped by identification with power. To the extent that algorithmic forms of reason and social organization can be said to have an inherent politics, these have long been understood as compatible with political formations on the Right rather than the Left.
Yet today, “hacktivists” and other promoters of the liberatory nature of mass computerization are prominent political voices, despite their overall political commitments remaining quite unclear. They are championed by partisans of both the Right and the Left as if they obviously serve the political ends of each. One need only reflect on the leftist support for a project like Open Source software to notice the strange and under-examined convergence of the Right and Left around specifically digital practices whose underlying motivations are often explicitly libertarian. Open Source is a deliberate commercialization of Richard Stallman’s largely noncommercial notion of Free Software (see Stallman himself on the distinction). Open Source is widely celebrated by libertarians and corporations, and was started by libertarian Eric Raymond and programmer Bruce Perens, with support from businessman and corporate sympathizer Tim O’Reilly. Today the term Open Source has wide currency as a political imperative outside the software development community, despite its place on the Right-Left spectrum being at best ambiguous, and at worst explicitly libertarian and pro-corporate.
When computers are involved, otherwise brilliant leftists who carefully examine the political commitments of most everyone they side with suddenly throw their lot in with libertarians — even when those libertarians explicitly disavow Left principles in their work…”
Twitter Can Now Predict Crime, and This Raises Serious Questions
The system Gerber has devised is an amalgam of old and new techniques. Currently, many police departments target hot spots for criminal activity based on actual occurrences of crime. This approach, called kernel density estimation (KDE), involves pairing a historical crime record with a geographic location and using a probability function to estimate the likelihood of future crimes occurring in that area. While KDE is a serviceable approach to anticipating crime, it pales in comparison to the dynamism of Twitter’s real-time data stream, according to Dr. Gerber’s research paper “Predicting Crime Using Twitter and Kernel Density Estimation”.
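As a rough illustration of that traditional KDE baseline (not Dr. Gerber’s own code), the sketch below fits a two-dimensional kernel density estimate to a handful of synthetic incident coordinates and scores candidate locations. The scikit-learn KernelDensity class and the bandwidth value are assumptions made for the example.

```python
# An illustrative kernel density estimate over historical crime locations:
# the "traditional KDE" baseline described above. All coordinates are
# synthetic stand-ins for a police incident log, not real data.
import numpy as np
from sklearn.neighbors import KernelDensity

# (latitude, longitude) of past incidents -- hypothetical
incidents = np.array([
    [41.885, -87.625], [41.887, -87.622], [41.884, -87.628],
    [41.900, -87.650], [41.902, -87.648],
])

# Fit a 2-D Gaussian KDE. The bandwidth sets how far each incident's
# influence spreads; in practice it would be tuned on held-out data.
kde = KernelDensity(kernel="gaussian", bandwidth=0.005).fit(incidents)

# Score candidate grid cells: higher log-density means a stronger hot spot.
grid = np.array([[41.886, -87.624], [41.950, -87.700]])
print(kde.score_samples(grid))  # the first cell, near past incidents, scores higher
```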
Dr. Gerber’s approach is similar to KDE, but deals in the ethereal realm of data and language, not paperwork. The system involves mapping the Twitter environment, much like how police currently map the physical environment with KDE. The big difference is that Gerber is looking at what people are talking about in real time, as well as what they do after the fact, and seeing how well the two match up. The algorithms look for certain language that is likely to indicate the imminent occurrence of a crime in the area, Gerber says. “We might observe people talking about going out, getting drunk, going to bars, sporting events, and so on—we know that these sorts of events correlate with crime, and that’s what the models are picking up on.”
Once this data is collected, the GPS tags in tweets allow Gerber and his team to pin them to a virtual map and outline hot spots for potential crime. However, not everyone who tweets about hitting the club later is going to commit a crime. Gerber tests the accuracy of his approach by comparing Twitter-based KDE predictions with traditional KDE predictions based on police data alone. The big question is, does it work? For Gerber, the answer is a firm “sometimes.” “It helps for some, and it hurts for others,” he says.
According to the study’s results, Twitter-based KDE analysis yielded improvements in predictive accuracy over traditional KDE for stalking, criminal damage, and gambling. Arson, kidnapping, and intimidation, on the other hand, showed a decrease in accuracy relative to traditional KDE analysis. It’s not clear why these crimes are harder to predict using Twitter, but the study notes that the issue may lie with the kind of language used on Twitter, which is characterized by shorthand and informal usage that can be difficult for algorithms to parse.
This kind of approach to high-tech crime prevention brings up the familiar debate over privacy and the use of users’ data for purposes they didn’t explicitly agree to. The case becomes especially sensitive when data will be used by police to track down criminals. On this point, though he acknowledges post-Snowden societal skepticism regarding data harvesting for state purposes, Gerber is unfazed. “People sign up to have their tweets GPS tagged. It’s an opt-in thing, and if you don’t do it, your tweets won’t be collected in this way,” he says. “Twitter is a public service, and I think people are pretty aware of that.”…
Is Participatory Budgeting Real Democracy?
Anna Clark in NextCity: “Drawing from a practice pioneered 25 years ago in Porto Alegre, Brazil and imported to North America via progressive leaders in Toronto and Quebec, participatory budgeting cracks open the closed-door process of fiscal decision-making in cities, letting citizens vote on exactly how government money is spent in their community. It’s an auspicious departure from traditional ways of allocating tax dollars, not least in Chicago, which has long been known for deeply entrenched machine politics. As Alderman Joe Moore puts it, in Chicago, “so many decisions are made from the top down.”
Participatory budgeting works pretty simply in the 49th Ward. Instead of Moore deciding how to spend $1.3 million in “menu money” that is allotted annually to each of Chicago’s 50 council members for capital improvements, the councilman opens up a public process to determine how to spend $1 million of the allotment. The remaining $300,000 is socked away in the bank for emergencies and cost overruns.
And the unusual vote on $1 million in menu money is open to a wider swath of the community than your standard Election Day: you don’t have to be a citizen to cast a ballot, and the voting age is sixteen.
Thanks to the process, Rogers Park can now boast of a new community garden, dozens of underpass murals, heating shelters at three transit stations, hundreds of tree plantings, an outdoor shower at Loyola Park, a $110,000 dog park, and eye-catching “You Are Here” neighborhood information boards at transit station entrances.
…
Another prominent supporter of participatory budgeting? The White House. In December—about eight months after Joe Moore met with President Barack Obama about bringing participatory budgeting to the federal level—PB became an option for determining how to spend community development block-grant money from the Department of Housing and Urban Development. The Obama administration also declared that, in a yet-to-be-detailed partnership, it will help create tools that can be used for participatory budgeting on a local level.
All this activity has so far added up to $45 million in tax dollars allocated to 203 voter-approved projects across the country. Some 46,000 people and 500 organizations nationwide have been part of the decision-making, according to the nonprofit Participatory Budgeting Project.
….
But to fulfill this vision, the process needs resources behind it—enough funds for projects to demonstrate a visible community benefit, and ample capacity from the facilitators of the process (whether it’s district officials or city hall) to truly reach out to the community. Without intention and capacity, PB risks duplicating the process of elections for ordinary representative democracy, where white middle- and upper-class voters are far more likely to vote and therefore enjoy an outsized influence on their neighborhood.
…
Participatory budgeting works differently for every city. In Porto Alegre, Brazil, where the process was created a generation ago by the Workers’ Party to give disadvantaged people a stronger voice in government, as many as 50,000 people vote on how to spend public money each year. More than $700 million has been funneled through the process since its inception. Vallejo, Calif., embraced participatory budgeting in 2012 after emerging from bankruptcy, as part of its citywide reinvention. In its first PB vote in May 2013, 3,917 residents voted over the course of a week at 13 polling locations. That translated into four percent of the city’s eligible voters—a tiny number, but a much higher percentage than previous PB processes in Chicago and New York.
But the 5th Ward in Hyde Park, a South Side neighborhood that’s home to the University of Chicago, dropped PB in December, citing low turnout in neighborhood assemblies and residents who felt the process was too much work to be worthwhile. “They said it was very time consuming, a lot of meetings, and that they thought the neighborhood groups that they had were active enough to do it without having all of the expenses that were associated with it,” Alderman Leslie Hairston told the Hyde Park Herald. In 2013, its first year with participatory budgeting, the 5th Ward held a PB vote that saw only 100 ballots cast.
Josh Lerner of the Participatory Budgeting Project says low turnout is a problem that can be solved through outreach and promotion. “It is challenging to do this without capacity,” he said. Internationally, according to Lerner, PB is part of a city administration, with a whole office coordinating the process. Without the backing from City Hall in Porto Alegre, participatory budgeting would have a hard time attracting the tens of thousands who now count themselves as part of the process. And even with that support, the 50,000 participants represent under 4 percent of the city’s population of 1.4 million.
…
So what’s next for participatory budgeting in Rogers Park and beyond?
Well, first off, Rahm Emanuel’s new Manager of Participatory Budgeting will be responsible for supporting council districts if and when they opt to go participatory. There won’t be a requirement to do so, but if a district wishes to follow the 49th, they will have high-level backup from City Hall.
But this new manager—as well as Chicago’s aldermen and engaged citizens—must understand that there is no one-size-fits-all formula for participatory budgeting. The process must be adapted to the unique needs and culture of each district if it is to resonate with locals. And timing is key for rolling out the process.
While still in its hazy early days, federal support through the new White House initiative may also prove crucial in streamlining the participatory budgeting process, easing the burden on local leaders and citizens, and ultimately generating better participation—and, therefore, better on-the-ground results in communities around the country.
One of the key lessons of participatory budgeting—as with democracy more broadly—is that efficiency is not the highest value in the public sphere. It would be much easier and more cost-effective for an alderman to return to the old days and simply check off the boxes for where he or she thinks menu money should be spent. “We could sign off on menu money in a couple hours, a couple days,” Vandercook said. By choosing the participatory path, aldermen effectively create more work for themselves. They risk low rates of participation and the possibility that winning projects may not be the most worthy. Scalability, too, is a problem — the larger the community served by the process, the more difficult it is to ensure that both the process and the resulting projects reflect the needs of the entire community.
Nonetheless, participatory budgeting serves a harder-to-measure purpose that may well be, in the final accounting, more important. It is a profound civic education for citizens, who dig into both the limits and possibilities of public money. They experience what their elected leaders must navigate every day. But it’s also a civic education for council members and city staff, who may find that they are engaging with those they represent more than they ever have before, learning about what they value most. Owen Burgh, chief of staff for Alderman John Arena in Chicago’s 45th Ward, told the Participatory Budgeting Project, “I was really surprised by the amazing knowledge base we have among our volunteers. So many of our volunteers came to the process with a background where they understood some principles of traffic management, community development and urban planning. It was very refreshing. Usually, in an alderman’s office, people contact us to fix an isolated problem. Through this process, we discussed not just what needed to be fixed but what we wanted our community to be.”
The participatory budgeting process expands the scope and depth of civic spaces in the community, where elected leaders work with—not for—residents. Even for those who do not show up to vote, there is an empowerment that comes simply in knowing that they could; the sincere invitation to participate matters, whether or not it is accepted…”
The California Report Card
“The California Report Card (CRC) is an online platform developed by the CITRIS Data and Democracy Initiative at UC Berkeley and Lt. Governor Gavin Newsom that explores how smartphones and networks can enhance communication between the public and government leaders. The California Report Card allows visitors to grade issues facing California and to suggest issues for future report cards.
The CRC is a mobile-optimized web application that allows participants to advise the state government on timely policy issues. We are exploring how technology can streamline and structure input from the public to elected officials, to provide them with timely feedback on the changing opinions and priorities of their constituents.
Version 1.0 of the CRC was launched in California on 28 January 2014. Since then, over 7,000 people from almost every county have assigned over 20,000 grades to the State of California and suggested issues for the next report card.
Lt. Governor Gavin Newsom: “The California Report Card is a new way for me to keep an ear to the ground. This new app/website makes it easy for Californians to assign grades and suggest pressing issues that merit our attention. In the first few weeks, participants conveyed that they approve of our rollout of Obamacare but are very concerned about the future of California schools and universities. I’m also gaining insights on issues ranging from speed limits to fracking to disaster preparedness.”
“This platform allows us to have our voices heard. The ability to review and grade what others suggest is important. It enables us and elected officials to hear directly how Californians feel.” – Matt Harris, Truck Driver, Ione, CA
“This is the first system that lets us directly express our feelings to government leaders. I also really enjoy reading and grading the suggestions from other participants.” – Patricia Ellis Pasko, Senior Care Giver, Apple Valley, CA
“Everyone knows that report cards can motivate learning by providing quantitative feedback on strengths and weaknesses. Similarly, the California Report Card has potential to motivate Californians and their leaders to learn from each other about timely issues. As researchers, the patterns of participation and how they vary over time and across geography will help us learn how to design future platforms.” – Prof. Ken Goldberg, UC Berkeley.
It takes only two minutes and works on all screens (best on mobile phones held vertically); just click “Participate.”
Anyone can participate by taking a few minutes to assign grades to the State of California on issues such as: Healthcare, Education, Marriage Equality, Immigrant Rights, and Marijuana Decriminalization. Participants are also invited to enter an online “cafe” to propose issues that they’d like to see included in the next report card (version 2.0 will come out later this spring).
Lt. Gov. Gavin Newsom and UC Berkeley Professor Ken Goldberg reviewed the data and lessons learned from version 1.0 in a public forum at UC Berkeley on 20 March 2014 that included participants who actively contributed to identifying the most important issues for version 2.0. The event can be viewed at http://bit.ly/1kv6523.
We offer community outreach programs/workshops to train local leaders on how to use the CRC and how to reach and engage under-represented groups (low-income, rural, persons with disabilities, etc.). If you are interested in participating in or hosting a workshop, please contact Brandie Nonnecke at nonnecke@citris-uc.org”
New York Police Twitter Strategy Has Unforeseen Consequences
J. David Goodman in The New York Times: “The New York Police Department has long seen its crime-fighting strategies emulated across the country and around the world.
So when a departmental Twitter campaign, meant to elicit smiling snapshots, instead attracted tens of thousands of less flattering images of officers, it did not take long for the hashtag #myNYPD to spread far beyond the five boroughs.
By Wednesday, the public relations situation in New York City had sparked imitators from Los Angeles (#myLAPD) to Mexico (#MiPolicíaMexicana) and over the ocean to Greece (#myELAS), Germany (#DankePolizei) and France (#maPolice).
The images, including circles of police officers in riot gear poised to strike a man on a bench or hosing down protesters, closely resembled those posted on Tuesday by critics of the Police Department in New York, in which many of the most infamous moments in recent police history had been dredged up by Twitter users….
The Right Colors Make Data Easier To Read
Sharon Lin and Jeffrey Heer at HBR Blog: “What is the color of money? Of love? Of the ocean? In the United States, most people respond that money is green, love is red and the ocean is blue. Many concepts evoke related colors — whether due to physical appearance, common metaphors, or cultural conventions. When colors are paired with the concepts that evoke them, we call these “semantically resonant color choices.”
Artists and designers regularly use semantically resonant colors in their work. And in the research we conducted with Julie Fortuna, Chinmay Kulkarni, and Maureen Stone, we found they can be remarkably important to data visualization.
Consider these charts of (fictional) fruit sales:
The only difference between the charts is the color assignment. The left-hand chart uses colors from a default palette. The right-hand chart has been assigned semantically resonant colors. (In this case, the assignment was computed automatically using an algorithm that analyzes the colors in relevant images retrieved from Google Image Search using queries for each data category name.)
Now, try answering some questions about the data in each of these charts. Which fruit had higher sales: blueberries or tangerines? How about peaches versus apples? Which chart do you find easier to read?…
To make effective visualization color choices, you need to take a number of factors into consideration. To name just two: all the colors need to be suitably different from one another so that readers can tell them apart – what’s called “discriminability.” You also need to consider what the colors look like to the color blind — roughly 8% of the U.S. male population! Could the colors be distinguished from one another if they were reprinted in black and white?
One easy way to assign semantically resonant colors is to use colors from an existing color palette that has been carefully designed for visualization applications (ColorBrewer offers some options) but assign the colors to data values in a way that best matches concept color associations. This is the basis of our own algorithm, which acquires images for each concept and then analyzes them to learn concept color associations. However, keep in mind that color associations may vary across cultures. For example, in the United States and many western cultures, luck is often associated with green (four-leaf clovers), while red can be considered a color of danger. However, in China, luck is traditionally symbolized with the color red.
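The final assignment step can be phrased as a small optimization problem. The sketch below assumes the category-to-color affinity scores are already in hand (the real algorithm learns them from image analysis; the numbers here are invented) and uses the Hungarian algorithm from SciPy to pick the one-to-one pairing with the highest total semantic resonance.

```python
# Assigning palette colors to data categories so that total semantic
# resonance is maximized. The affinity matrix is invented for illustration;
# in the actual algorithm it would be derived from image statistics.
import numpy as np
from scipy.optimize import linear_sum_assignment

categories = ["blueberry", "tangerine", "peach"]
palette = ["#4a6fb5", "#f28e2b", "#ffd7b1"]  # blue, orange, pale peach

# affinity[i][j]: how strongly category i evokes palette color j (0..1)
affinity = np.array([
    [0.9, 0.1, 0.1],   # blueberry strongly evokes blue
    [0.1, 0.8, 0.4],   # tangerine evokes orange
    [0.1, 0.3, 0.7],   # peach evokes pale peach
])

# The Hungarian algorithm minimizes cost, so negate to maximize affinity.
rows, cols = linear_sum_assignment(-affinity)
for i, j in zip(rows, cols):
    print(f"{categories[i]} -> {palette[j]}")
```

Solving the pairing globally, rather than greedily one category at a time, matters once two categories compete for the same color.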
…
Semantically resonant colors can reinforce perception of a wide range of data categories. We believe similar gains would likely be seen for other forms of visualizations like maps, scatterplots, and line charts. So when designing visualizations for presentation or analysis, consider color choice and ask yourself how well the colors resonate with the underlying data.”
Can Government Play Moneyball?
David Bornstein in the New York Times: “…For all the attention it’s getting inside the administration, evidence-based policy-making seems unlikely to become a headline grabber; it lacks emotional appeal. But it does have intellectual heft. And one group that has been doing creative work to give the message broader appeal is Results for America, which has produced useful teaching aids under the banner “Moneyball for Government,” building on the popularity of the book and movie about Billy Beane’s Oakland A’s, and the rise of data-driven decision making in major league baseball. (Watch their video explainers here and here.)
Results for America works closely with leaders across political parties and social sectors to build awareness about evidence-based policy making — drawing attention to key areas where government could dramatically improve people’s lives by augmenting well-tested models. They are also chronicling efforts by local governments around the country to show how an emerging group of “Geek Cities,” including Baltimore, Denver, Miami, New York, Providence and San Antonio, are using data and evidence to drive improvements in various areas of social policy like education, youth development and employment.
“It seems like common sense to use evidence about what works to get better results,” said Michele Jolin, Results for America’s managing partner. “How could anyone be against it? But the way our system is set up, there are so many loud voices pushing to have dollars spent and policy shaped in the way that works for them. There has been no organized constituency for things that work.”
“The debate in Washington is usually about the quantity of resources,” said David Medina, a partner in Results for America. “We’re trying to bring it back to talking about quality.”
Not everyone will find this change appealing. “When you have a longstanding social service policy, there’s going to be a network of [people and groups] who are organized to keep that money flowing regardless of whether evidence suggests it’s warranted,” said Daniel Stid. “People in social services don’t like to think they’re behaving like other organized interests — like dairy farmers or mortgage brokers — but it leads to tremendous inertia in public policy.”
Beyond the politics, there are practical obstacles to overcome, too. Federal agencies lack sufficient budgets for evaluation or a common definition for what constitutes rigorous evidence. (Any lobbyist can walk into a legislator’s office and claim to have solid data to support an argument.) Up-to-date evidence also needs to be packaged in accessible ways and made available on a timely basis, so it can be used to improve programs, rather than to threaten them. Governments need to build regular evaluations into everything they do — not just conduct big, expensive studies every 10 years or so.
That means developing new ways to conduct quick and inexpensive randomized studies using data that is readily available, said Haskins, who is investigating this approach. “We should be running 10,000 evaluations a year, like they do in medicine.” That’s the only way to produce the rapid trial-and-error learning needed to drive iterative program improvements, he added. (I reported on a similar effort being undertaken by the Coalition for Evidence-Based Policy.)
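For a sense of how small and cheap such an evaluation can be, here is a toy sketch of a randomized comparison: units are randomly assigned to a program or a control group, and mean outcomes are compared. The simulated numbers, sample sizes, and the simple t-test are illustrative assumptions, not a description of any particular federal study.

```python
# A toy randomized evaluation: compare outcomes for randomly assigned
# treatment and control groups. Outcome data are simulated; a real
# study would draw on administrative records agencies already collect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
treated = rng.normal(loc=0.55, scale=0.20, size=200)  # outcomes with the program
control = rng.normal(loc=0.50, scale=0.20, size=200)  # outcomes without it

effect = treated.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"estimated effect: {effect:.3f} (p = {p_value:.3f})")
```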
Results for America has developed a scorecard that ranks federal departments on how prepared they are to produce or incorporate evidence in their programs. It looks at whether a department has an office and a leader with the authority and budget to evaluate its programs. It asks: Does it make its data accessible to the public? Does it compile standards about what works and share them widely? Does it spend at least 1 percent of its budget evaluating its programs? And — most important — does it incorporate evidence in its big grant programs? For now, the Department of Education gets the top score.
The stakes are high. In 2011, for example, the Obama administration launched a process to reform Head Start, doing things like spreading best practices and forcing the worst programs to improve or lose their funding. This February, for the third time, the government released a list of Head Start providers (103 out of about 1,600) who will have to recompete for federal funding because of performance problems. That list represents tens of thousands of preschoolers, many of whom are missing out on the education they need to succeed in kindergarten — and life.
Improving flagship programs like Head Start, and others, is not just vital for the families they serve; it’s vital to restore trust in government. “I am a card-carrying member of the Republican Party and I want us to be governed well,” said Robert Shea, who pushed for better program evaluations as associate director of the Office of Management and Budget during the Bush administration, and continues to focus on this issue as chairman of the National Academy of Public Administration. “This is the most promising thing I know of to get us closer to that goal.”
“This idea has the prospect of uniting Democrats and Republicans,” said Haskins. “But it will involve a broad cultural change. It has to get down to the program administrators, board members and local staff throughout the country — so they know that evaluation is crucial to their operations.”
“There’s a deep mistrust of government and a belief that problems can’t be solved,” said Michele Jolin. “This movement will lead to better outcomes — and it will help people regain confidence in their public officials by creating a more effective, more credible way for policy choices to be made.”
Paying Farmers to Welcome Birds
Jim Robbins in The New York Times: “The Central Valley was once one of North America’s most productive wildlife habitats, a 450-mile-long expanse marbled with meandering streams and lush wetlands that provided an ideal stop for migratory shorebirds on their annual journeys from South America and Mexico to the Arctic and back.
Farmers and engineers have long since tamed the valley. Of the wetlands that existed before the valley was settled, about 95 percent are gone, and the number of migratory birds has declined drastically. But now an unusual alliance of conservationists, bird watchers and farmers has joined in an innovative plan to restore essential habitat for the migrating birds.
The program, called BirdReturns, starts with data from eBird, the pioneering citizen science project that asks birders to record sightings on a smartphone app and send the information to the Cornell Lab of Ornithology in upstate New York.
By crunching data from the Central Valley, eBird can generate maps showing where virtually every species congregates in the remaining wetlands. Then, by overlaying those maps on aerial views of existing surface water, it can determine where the birds’ need for habitat is greatest….
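Schematically, that overlay amounts to combining two gridded surfaces: where bird demand is high and existing water is scarce, habitat priority is greatest. The grids and scoring rule below are invented for illustration; the actual BirdReturns analysis works from eBird species models and aerial imagery.

```python
# A stylized version of the BirdReturns overlay: a bird-density surface
# (from sightings) combined with a surface-water map. All values invented.
import numpy as np

bird_density = np.array([      # relative shorebird congregation per cell
    [0.9, 0.2, 0.1],
    [0.7, 0.5, 0.0],
])
surface_water = np.array([     # fraction of each cell already flooded
    [0.1, 0.8, 0.9],
    [0.0, 0.5, 0.2],
])

# High bird demand combined with little existing water = highest priority.
priority = bird_density * (1.0 - surface_water)
hotspot = np.unravel_index(np.argmax(priority), priority.shape)
print(priority)
print("top-priority cell:", hotspot)
```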
BirdReturns is an example of the growing movement called reconciliation ecology, in which ecosystems dominated by humans are managed to increase biodiversity.
“It’s a new ‘Moneyball,’ ” said Eric Hallstein, an economist with the Nature Conservancy and a designer of the auctions, referring to the book and movie about the Oakland Athletics’ data-driven approach to baseball. “We’re disrupting the conservation industry by taking a new kind of data, crunching it differently and contracting differently.”
The Transformative Impact of Data and Communication on Governance
Steven Livingston at Brookings: “How do digital technologies affect governance in areas of limited statehood – places and circumstances characterized by the absence of state provisioning of public goods and the enforcement of binding rules with a monopoly of legitimate force? In the first post in this series I introduced the limited statehood concept and then described the tremendous growth in mobile telephony, GIS, and other technologies in the developing world. In the second post I offered examples of the use of ICT in initiatives intended to fill at least some of the governance vacuum created by limited statehood. With mobile phones, for example, farmers are informed of market conditions, have access to liquidity through M-Pesa and similar mobile money platforms….
This brings to mind another type of ICT governance initiative. Rather than fill in for or even displace the state, some ICT initiatives can strengthen governance capacity. Digital government, the use of digital technology by the state itself, is one important possibility. Other initiatives strengthen the state by exerting pressure. Countries with weak governance sometimes take the form of extractive states, which cater to the needs of an elite, leaving the majority of the population in poverty and without basic public services. This is what Daron Acemoglu and James A. Robinson call extractive political and economic institutions. Inclusive states, on the other hand, are pluralistic, bound by the rule of law, respectful of property rights, and, in general, accountable. Accountability mechanisms such as a free press and competitive multiparty elections are instrumental in discouraging extractive institutions. What ICT-based initiatives might lend a hand in strengthening accountability? We can point to three examples.
Example One: Using ICT to Protect Human Rights
Nonstate actors now use commercial, high-resolution remote sensing satellites to monitor weapons programs and human rights violations. Amnesty International’s Remote Sensing for Human Rights offers one example, and Satellite Sentinel offers another. Both use imagery from DigitalGlobe, an American remote sensing and geospatial content company. Other organizations have used commercially available remote sensing imagery to monitor weapons proliferation. The Institute for Science and International Security, a Washington-based NGO, revealed the Iranian nuclear weapons program in 2003 using commercial satellite imagery…
Example Two: Crowdsourcing Election Observation
Others have used mobile phones and GIS to crowdsource election observation. For the 2011 elections in Nigeria, the Community Life Project, a civil society organization, created ReclaimNaija, an election-monitoring system that relied on GIS and amateur observers with mobile phones. Each of the red dots on the map below represents an aggregation of geo-located incidents reported to the ReclaimNaija platform. In the live map, clicking on a dot disaggregates the reports, eventually taking the reader to individual reports. Rigorous statistical analysis of ReclaimNaija results and the elections suggests it contributed to the effectiveness of the election process.
ReclaimNaija: Election Incident Reporting System Map
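The aggregation behind those red dots can be sketched in a few lines: snap each geo-located report to a grid cell and count reports per cell. The coordinates and cell size below are invented, and ReclaimNaija’s actual pipeline is of course more elaborate.

```python
# Aggregating geo-located incident reports into map "dots" by grid cell.
# Report coordinates and the cell size are invented for illustration.
from collections import Counter

reports = [(6.524, 3.379), (6.526, 3.381), (6.524, 3.380), (9.058, 7.490)]
CELL = 0.01  # grid resolution in degrees

def cell_of(lat, lon):
    """Snap a coordinate to the lower-left corner of its grid cell."""
    return (int(lat / CELL), int(lon / CELL))

clusters = Counter(cell_of(lat, lon) for lat, lon in reports)
for cell, count in clusters.most_common():
    print(f"cell {cell}: {count} report(s)")
```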
Example Three: Using Genetic Analysis to Identify War Crimes
In recent years, more powerful computers have led to major breakthroughs in biomedical science. The reduction in the cost of analyzing the human genome has actually outpaced Moore’s Law. This has opened up new possibilities for the use of genetic analysis in forensic anthropology. In Guatemala, the Balkans, Argentina, Peru, and several other places where mass executions and genocides took place, forensic anthropologists are using genetic analysis to find evidence that is used to hold the killers – often state actors – accountable…”