New Paper by William Li, Pablo Azar, David Larochelle, Phil Hill & Andrew Lo: “The agglomeration of rules and regulations over time has produced a body of legal code that no single individual can fully comprehend. This complexity produces inefficiencies, makes the processes of understanding and changing the law difficult, and frustrates the fundamental principle that the law should provide fair notice to the governed. In this article, we take a quantitative, unbiased, and software-engineering approach to analyze the evolution of the United States Code from 1926 to today. Software engineers frequently face the challenge of understanding and managing large, structured collections of instructions, directives, and conditional statements, and we adapt and apply their techniques to the U.S. Code over time. Our work produces insights into the structure of the U.S. Code as a whole, its strengths and vulnerabilities, and new ways of thinking about individual laws. For example, we identify the first appearance and spread of important terms in the U.S. Code like “whistleblower” and “privacy.” We also analyze and visualize the network structure of certain substantial reforms, including the Patient Protection and Affordable Care Act (PPACA) and the Dodd-Frank Wall Street Reform and Consumer Protection Act, and show how the interconnections of references can increase complexity and create the potential for unintended consequences. Our work is a timely illustration of computational approaches to law as the legal profession embraces technology for scholarship, to increase efficiency, and to improve access to justice.”
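Two of the analyses the abstract mentions are straightforward to picture in code: finding the year a term like “whistleblower” first enters the Code, and mapping the network of cross-references between sections. Below is a minimal Python sketch of both ideas, not the authors’ pipeline; the yearly snapshot files, the “SEC. 1501.”-style section splitting, and the citation regex are all assumptions made for illustration.

```python
# Minimal sketch (not the authors' pipeline) of two analyses described above:
# (1) the first year a term appears across yearly snapshots of the U.S. Code,
# (2) a directed graph of section cross-references and its size.
# The "uscode/1926.txt" layout, the "SEC. 1501." splitting, and the citation
# regex are assumptions for illustration only.
import re
from pathlib import Path

import networkx as nx  # pip install networkx


def first_appearance(term: str, snapshot_dir: str = "uscode") -> int | None:
    """Return the earliest snapshot year whose text contains `term`, if any."""
    for path in sorted(Path(snapshot_dir).glob("*.txt")):
        if term.lower() in path.read_text(errors="ignore").lower():
            return int(path.stem)  # files assumed to be named "<year>.txt"
    return None


def reference_graph(text: str) -> nx.DiGraph:
    """Build a crude cross-reference graph: an edge from each section to every
    section it cites with a 'section 1234'-style phrase."""
    graph = nx.DiGraph()
    parts = re.split(r"(?m)^SEC\.\s+(\d+)\.", text)  # [preamble, id, body, id, body, ...]
    for sec_id, body in zip(parts[1::2], parts[2::2]):
        graph.add_node(sec_id)
        for cited in re.findall(r"section\s+(\d+)", body, flags=re.IGNORECASE):
            if cited != sec_id:
                graph.add_edge(sec_id, cited)
    return graph


if __name__ == "__main__":
    print("'whistleblower' first appears in:", first_appearance("whistleblower"))
    g = reference_graph(Path("ppaca.txt").read_text(errors="ignore"))
    print(f"{g.number_of_nodes()} sections, {g.number_of_edges()} cross-references")
```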
Research Handbook On Transparency
New book edited by Padideh Ala’i and Robert G. Vaughn: ‘“Transparency” has multiple, contested meanings. This broad-ranging volume accepts that complexity and thoughtfully contrasts alternative views through conceptual pieces, country cases, and assessments of policies, such as freedom of information laws, whistleblower protections, financial disclosure, and participatory policymaking procedures.’
– Susan Rose-Ackerman, Yale University Law School, US
In the last two decades transparency has become a ubiquitous and stubbornly ambiguous term. Typically understood to promote rule of law, democratic participation, anti-corruption initiatives, human rights, and economic efficiency, transparency can also legitimate bureaucratic power, advance undemocratic forms of governance, and aid in global centralization of power. This path-breaking volume, comprising original contributions on a range of countries and environments, exposes the many faces of transparency by allowing readers to see the uncertainties, inconsistencies and surprises contained within the current conceptions and applications of the term….
The expert contributors identify the goals, purposes and ramifications of transparency while presenting both its advantages and shortcomings. Through this framework, they explore transparency from a number of international and comparative perspectives. Some chapters emphasize cultural and national aspects of the issue, with country-specific examples from China, Mexico, the US and the UK, while others focus on transparency within global organizations such as the World Bank and the WTO. A number of relevant legal considerations are also discussed, including freedom of information laws, financial disclosure of public officials and whistleblower protection…”
Mapping the Age of Every Building in Manhattan
Kriston Capps at CityLab: “The Harlem Renaissance was the epicenter of new movements in dance, poetry, painting, and literature, and its impact still registers in all those art forms. If you want to trace the Harlem Renaissance, though, best look to Harlem itself.
Many if not most of the buildings in Harlem today rose between 1900 and 1940—and a new mapping tool called Urban Layers reveals exactly where and when. Harlem boasts very few of the oldest buildings in Manhattan today, but it does represent the island’s densest concentration of buildings constructed during the Great Migration.
Thanks to Morphocode’s Urban Layers, it’s possible to locate nearly every 19th-century building still standing in Manhattan today. That’s just one of the things that you can isolate with the map, which combines two New York City building datasets (PLUTO and Building Footprints) and Mapbox GL JS vector technology to generate an interactive architectural history.
So, looking specifically at Harlem again (with some of the Upper West Side thrown in for good measure), it’s easy to see that very few of the buildings that went up between 1765 and 1860 still stand today….”
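The core mechanic behind Urban Layers is simple to sketch: attach a construction year (PLUTO’s YearBuilt field) to each building footprint and filter by a year range. The snippet below is a rough Python approximation of that idea rather than the site’s Mapbox GL JS implementation; the file name and GeoJSON layout are assumptions.

```python
# Rough sketch of Urban Layers' core idea (not Morphocode's implementation):
# filter building footprints by construction year. "YearBuilt" follows the
# PLUTO dataset; the file name and GeoJSON layout here are assumptions.
import json


def buildings_in_range(geojson_path: str, start: int, end: int) -> list[dict]:
    """Return footprint features whose recorded construction year is in [start, end]."""
    with open(geojson_path) as f:
        collection = json.load(f)
    return [
        feature
        for feature in collection["features"]
        if start <= int(feature["properties"].get("YearBuilt") or 0) <= end
    ]


if __name__ == "__main__":
    oldest = buildings_in_range("manhattan_footprints.geojson", 1765, 1860)
    print(f"{len(oldest)} surviving buildings dated 1765-1860")
```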
Tell Everyone: Why We Share & Why It Matters
Alfred Hermida’s new book, Tell Everyone: Why We Share & Why It Matters, takes us through that research—and a pile more, from Pew Center data on the makeup of our friends lists to a Yahoo! study on the nature of social influencers. One of Hermida’s accomplishments is to have woven that research into a breezy narrative crammed with examples from recent headlines.
Not up on the concept of cognitive dissonance? Homophily? Pluralistic ignorance? Or situational awareness? Not a deal breaker. Just in time for Halloween, Tell Everyone (Doubleday Canada) is a social science literature review masquerading as light bedside reading from the business management section. Hermida has tucked the academic sourcing into 21 pages of endnotes and offered a highly readable 217-page tour of social movements, revolutions, journalistic gaffes and corporate PR disasters.
The UBC journalism professor moves easily from chronicling the activities of Boston Marathon Redditors to Tahrir Square YouTubers to Japanese earthquake tweeters. He dips frequently into the past for context, highlighting the roles of French Revolution-era salon “bloggers,” 18th-century Portuguese earthquake pamphleteers and First World War German pilots.
Indeed, this book is only marginally about journalism, made clear by the absence of a reference to “news” in its title. It is at least as much about sociology and marketing.
Mathew Ingram argued recently that journalism’s biggest competitors don’t look like journalism. Hermida would no doubt agree. The Daily Show’s blurring of comedy and journalism is now a familiar ingredient in people’s information diet, he writes. And with nearly every news event, “the reporting by journalists sits alongside the accounts, experiences, opinions and hopes of millions of others.” Journalistic accounts didn’t define Mitt Romney’s 2012 U.S. presidential campaign, he notes; thousands of users did, with their “binders full of women” meme.
Hermida devotes a chapter to chronicling the ways in which consumers are asserting themselves in the marketplace—and the ways in which brands are reacting. The communications team at Domino’s Pizza failed to engage YouTube users over a gross gag video made by two of its employees in 2009. But Lionsgate films effectively incorporated user-generated content into its promotions for the 2012 Hunger Games movie. Some of the examples are well known but their value lies in the considerable context Hermida provides.
Other chapters highlight the role of social media in the wake of natural disasters and how users—and researchers—are working to identify hoaxes.
Tell Everyone is the latest in a small but growing number of mass-market books aiming to distill social media research from the ivory tower. The most notable is Wharton School professor Jonah Berger’s 2013 book Contagious: Why Things Catch On. Hermida discusses the influential 2009 research conducted by Berger and his colleague Katherine Milkman into stories on the New York Times most-emailed list. Those conclusions now greatly influence the work of social media editors.
But, in this instance at least, the lively pacing of the book sacrifices some valuable detail.
Hermida explores the studies’ main conclusion: positive content is more viral than negative content, but the key is the presence of activating emotions in the user, such as joy or anger. However, the chapter gives only a cursory mention to a finding Berger discusses at length in Contagious—the surprisingly frequent presence of science stories in the list of most-emailed articles. The emotion at play is awe—what Berger characterizes as not quite joy, but a complex sense of surprise, unexpectedness or mystery. It’s an important aspect of our still-evolving understanding of how we use social media….”
Ebola and big data: Call for help
The Economist: “WITH at least 4,500 people dead, public-health authorities in west Africa and worldwide are struggling to contain Ebola. Borders have been closed, air passengers screened, schools suspended. But a promising tool for epidemiologists lies unused: mobile-phone data.
When people make mobile-phone calls, the network generates a call data record (CDR) containing such information as the phone numbers of the caller and receiver, the time of the call and the tower that handled it—which gives a rough indication of the device’s location. This information provides researchers with an insight into mobility patterns. Indeed phone companies use these data to decide where to build base stations and thus improve their networks, and city planners use them to identify places to extend public transport.
But perhaps the most exciting use of CDRs is in the field of epidemiology. Until recently the standard way to model the spread of a disease relied on extrapolating trends from census data and surveys. CDRs, by contrast, are empirical, immediate and updated in real time. You do not have to guess where people will flee to or move. Researchers have used them to map malaria outbreaks in Kenya and Namibia and to monitor the public response to government health warnings during Mexico’s swine-flu epidemic in 2009. Models of population movements during a cholera outbreak in Haiti following the earthquake in 2010 used CDRs and provided the best estimates of where aid was most needed.
Doing the same with Ebola would be hard: in west Africa most people do not own a phone. But CDRs are nevertheless better than simulations based on stale, unreliable statistics. If researchers could track population flows from an area where an outbreak had occurred, they could see where it would be likeliest to break out next—and therefore where they should deploy their limited resources. Yet despite months of talks, and the efforts of the mobile-network operators’ trade association and several smaller UN agencies, telecoms firms have not let researchers use the data (see article).
One excuse is privacy, which is certainly a legitimate worry, particularly in countries fresh from civil war, or where tribal tensions exist. But the phone data can be anonymised and aggregated in a way that alleviates these concerns. A bigger problem is institutional inertia. Big data is a new field. The people who grasp the benefits of examining mobile-phone usage tend to be young, and lack the clout to free the data for research use.”
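The mobility estimates discussed above come from turning anonymised, aggregated CDRs into origin-destination flows. Here is a simplified sketch of that aggregation step; the CSV columns and the tower-to-district lookup table are invented for illustration and are nothing like a real operator’s schema.

```python
# Simplified sketch of aggregating anonymised CDRs into origin-destination flows.
# The CSV columns (caller_hash, day, tower_id) and the tower-to-district table
# are invented for illustration; they do not reflect any operator's schema.
import csv
from collections import Counter

TOWER_TO_DISTRICT = {"T001": "Conakry", "T002": "Gueckedou", "T003": "Macenta"}


def origin_destination_counts(cdr_csv: str) -> Counter:
    """Count subscribers observed in one district on one day and another the next."""
    last_seen: dict[str, tuple[str, str]] = {}  # caller_hash -> (day, district)
    flows: Counter = Counter()
    with open(cdr_csv, newline="") as f:
        for row in csv.DictReader(f):
            district = TOWER_TO_DISTRICT.get(row["tower_id"])
            if district is None:
                continue
            prev = last_seen.get(row["caller_hash"])
            if prev and prev[0] != row["day"] and prev[1] != district:
                flows[(prev[1], district)] += 1
            last_seen[row["caller_hash"]] = (row["day"], district)
    return flows


if __name__ == "__main__":
    for (origin, destination), n in origin_destination_counts("cdrs.csv").most_common(5):
        print(f"{origin} -> {destination}: {n} subscribers")
```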
Ebola’s Information Paradox
It was a full seven days after Baby Lewis became ill, and four days after the Soho residents began dying in mass numbers, before the outbreak warranted the slightest mention in the London papers, a few short lines indicating that seven people had died in the neighborhood. (The report understated the growing death toll by an order of magnitude.) It took two entire weeks before the press began treating the outbreak as a major news event for the city.
Within Soho, the information channels were equally unreliable. Rumors spread throughout the neighborhood that the entire city had succumbed at the same casualty rate, and that London was facing a catastrophe on the scale of the Great Fire of 1666. But this proved to be nothing more than rumor. Because the Soho crisis had originated with a single-point source — the poisoned well — its range was limited compared with its intensity. If you lived near the Broad Street well, you were in grave danger. If you didn’t, you were likely to be unaffected.
Compare this pattern of information flow to the way news spreads now. On Thursday, Craig Spencer, a New York doctor, was given a diagnosis of Ebola after presenting with a high fever, and the entire world learned of the test result within hours of the patient himself learning it. News spread with similar velocity several weeks ago with the Dallas Ebola victim, Thomas Duncan. In a sense, it took news of the cholera outbreak a week to travel the 20 blocks from Soho to Fleet Street in 1854; today, the news travels at nearly the speed of light, as data traverses fiber-optic cables. Thanks to that technology, the news channels have been on permanent Ebola watch for weeks now, despite the fact that, as the joke went on Twitter, more Americans have been married to Kim Kardashian than have died in the United States from Ebola.
As societies and technologies evolve, so do the velocities with which disease and information spread. The tremendous population density of London in the 19th century enabled the cholera bacterium to spread through a neighborhood with terrifying speed, while the information about that terror moved more slowly. This was good news for the mental well-being of England’s wider population, which was spared the anxiety of following the death count as if it were a stock ticker. But it was terrible from a public health standpoint; the epidemic had largely faded before the official institutions of public health even realized the magnitude of the outbreak….
Information travels faster than viruses do now. This is why we are afraid. But this is also why we are safe.”
Privacy Identity Innovation: Innovator Spotlight
pii2014: “Every year, we invite a select group of startup CEOs to present their technologies on stage at Privacy Identity Innovation as part of the Innovator Spotlight program. This year’s conference (pii2014) is taking place November 12-14 in Silicon Valley, and we’re excited to announce that the following eight companies will be participating in the pii2014 Innovator Spotlight:
* BeehiveID – Led by CEO Mary Haskett, BeehiveID is a global identity validation service that enables trust by identifying bad actors online BEFORE they have a chance to commit fraud.
* Five – Led by CEO Nikita Bier, Five is a mobile chat app crafted around the experience of a house party. With Five, you can browse thousands of rooms and have conversations about any topic.
* Glimpse – Led by CEO Elissa Shevinsky, Glimpse is a private (disappearing) photo messaging app just for groups.
* Humin – Led by CEO Ankur Jain, Humin is a phone and contacts app designed to think about people the way you naturally do by remembering the context of your relationships and letting you search them the way you think.
* Kpass – Led by CEO Dan Nelson, Kpass is an identity platform that provides brands, apps and developers with an easy-to-implement technology solution to help manage the notice and consent requirements of the Children’s Online Privacy Protection Act (COPPA).
* Meeco – Led by CEO Katryna Dow, Meeco is a Life Management Platform that offers an all-in-one solution for you to transact online, collect your own personal data, and be more anonymous with greater control over your own privacy.
* TrustLayers – Led by CEO Adam Towvim, TrustLayers is privacy intelligence for big data. TrustLayers enables confident use of personal data, keeping companies secure in the knowledge that the organization is following the rules.
* Virtru – Led by CEO John Ackerly, Virtru is the first company to make email privacy accessible to everyone. With a single plug-in, Virtru empowers individuals and businesses to control who receives, reviews, and retains their digital information — wherever it travels, throughout its lifespan.
Learn more about the startups on the Innovator Spotlight page…”
Can Bottom-Up Institutional Reform Improve Service Delivery?
Working paper by Molina, Ezequiel: “This article makes three contributions to the literature. First, it provides new evidence of the impact of community monitoring interventions using a unique dataset from the Citizen Visible Audit (CVA) program in Colombia. In particular, this article studies the effect of social audits on citizens’ assessment of service delivery performance. The second contribution is the introduction of a theoretical framework to understand the pathway of change, the necessary building blocks that are needed for social audits to be effective. Using this framework, the third contribution of this article is answering the following questions: i) under what conditions do citizens decide to monitor government activity and ii) under what conditions do governments facilitate citizen engagement and become more accountable.”
Quantifying the Livable City
Brian Libby at City Lab: “By the time Constantine Kontokosta got involved with New York City’s Hudson Yards development, it was already on track to be historically big and ambitious.
Over the course of the next decade, developers from New York’s Related Companies and Canada-based Oxford Properties Group are building the largest real-estate development in United States history: a 28-acre neighborhood on Manhattan’s far West Side over a Long Island Rail Road yard, with some 17 million square feet of new commercial, residential, and retail space.
Hudson Yards is also being planned as an innovative model of efficiency. Its waste management systems, for example, will utilize a vast vacuum-tube system to collect garbage from each building into a central terminal, meaning no loud garbage trucks traversing the streets by night. Onsite power generation will prevent blackouts like those during Hurricane Sandy, and buildings will be connected through a micro-grid that allows them to share power with each other.
Yet it was Kontokosta, the deputy director of academics at New York University’s Center for Urban Science and Progress (CUSP), who conceived of Hudson Yards as what is now being called the nation’s first “quantified community.” This entails an unprecedentedly wide array of data being collected—not just on energy and water consumption, but real-time greenhouse gas emissions and airborne pollutants, measured with tools like hyper-spectral imagery.
New York has led the way in recent years with its urban data collection. In 2009, Mayor Michael Bloomberg signed Local Law 84, which requires privately owned buildings over 50,000 square feet in size to provide annual benchmark reports on their energy and water use. Unlike a LEED rating or similar certification, which declares a building green when it opens, the city’s benchmarking is a continuous assessment of a building’s operations…”
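That continuous assessment is easy to illustrate: Local Law 84 disclosures let you compute each building’s energy use intensity (site energy per square foot) year after year. The sketch below assumes simplified column names loosely modelled on the published disclosure files, not the city’s actual schema.

```python
# Sketch of the continuous assessment benchmarking enables: compute each
# building's energy use intensity (EUI, kBtu per square foot) per report year.
# Column names are simplified assumptions, not the city's actual LL84 schema.
import csv
from collections import defaultdict


def eui_by_year(benchmark_csv: str) -> dict[str, dict[int, float]]:
    """Map building id -> {report year: EUI}."""
    results: dict[str, dict[int, float]] = defaultdict(dict)
    with open(benchmark_csv, newline="") as f:
        for row in csv.DictReader(f):  # bbl, year, site_energy_kbtu, gross_floor_area_sqft
            area = float(row["gross_floor_area_sqft"])
            if area >= 50_000:  # the reporting threshold mentioned above
                results[row["bbl"]][int(row["year"])] = float(row["site_energy_kbtu"]) / area
    return dict(results)


if __name__ == "__main__":
    for bbl, series in list(eui_by_year("ll84_disclosures.csv").items())[:3]:
        print(bbl, {year: round(eui, 1) for year, eui in sorted(series.items())})
```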
The government wants to study ‘social pollution’ on Twitter
Washington Post: “If you take to Twitter to express your views on a hot-button issue, does the government have an interest in deciding whether you are spreading “misinformation’’? If you tweet your support for a candidate in the November elections, should taxpayer money be used to monitor your speech and evaluate your “partisanship’’?
My guess is that most Americans would answer those questions with a resounding no. But the federal government seems to disagree. The National Science Foundation, a federal agency whose mission is to “promote the progress of science; to advance the national health, prosperity and welfare; and to secure the national defense,” is funding a project to collect and analyze your Twitter data.
The project is being developed by researchers at Indiana University, and its purported aim is to detect what they deem “social pollution” and to study what they call “social epidemics,” including how memes — ideas that spread throughout pop culture — propagate. What types of social pollution are they targeting? “Political smears,” so-called “astroturfing” and other forms of “misinformation.”
Named “Truthy,” after a term coined by TV host Stephen Colbert, the project claims to use a “sophisticated combination of text and data mining, social network analysis, and complex network models” to distinguish between memes that arise in an “organic manner” and those that are manipulated into being.
But there’s much more to the story. Focusing in particular on political speech, Truthy keeps track of which Twitter accounts are using hashtags such as #teaparty and #dems. It estimates users’ “partisanship.” It invites feedback on whether specific Twitter users, such as the Drudge Report, are “truthy” or “spamming.” And it evaluates whether accounts are expressing “positive” or “negative” sentiments toward other users or memes…”
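As a rough illustration of the kind of signal the article describes (and nothing like Truthy’s actual models), one could score an account’s “partisanship” simply by comparing how often it uses hashtags from two hand-picked lists such as #teaparty and #dems. The tweet file format and the hashtag lists in the sketch below are assumptions.

```python
# Crude illustration of scoring an account's "partisanship" from hashtag use
# (nothing like Truthy's actual models). Tweet file format and hashtag lists
# are assumptions: one JSON object per line with "user" and "text" keys.
import json
from collections import defaultdict

LEFT_TAGS = {"#dems", "#p2"}
RIGHT_TAGS = {"#teaparty", "#tcot"}


def partisanship_scores(tweets_jsonl: str) -> dict[str, float]:
    """Return a score per account in [-1, 1]: -1 all left-tagged, +1 all right-tagged."""
    counts: dict[str, list[int]] = defaultdict(lambda: [0, 0])  # [left, right]
    with open(tweets_jsonl) as f:
        for line in f:
            tweet = json.loads(line)
            tags = {w.lower() for w in tweet["text"].split() if w.startswith("#")}
            counts[tweet["user"]][0] += len(tags & LEFT_TAGS)
            counts[tweet["user"]][1] += len(tags & RIGHT_TAGS)
    return {
        user: (right - left) / (right + left)
        for user, (left, right) in counts.items()
        if left + right > 0
    }


if __name__ == "__main__":
    for user, score in sorted(partisanship_scores("tweets.jsonl").items())[:10]:
        print(f"{user}: {score:+.2f}")
```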