When Tech Culture And Urbanism Collide


John Tolva: “…We can build upon the success of the work being done at the intersection of technology and urban design, right now.

For one, the whole realm of social enterprise — for-profit startups that seek to solve real social problems — has a huge overlap with urban issues. Impact Engine in Chicago, for instance, is an accelerator squarely focused on meaningful change and profitable businesses. One of their companies, Civic Artworks, has set as its goal rebalancing the community planning process.

The Code for America Accelerator and Tumml, both located in San Francisco, morph the concept of social innovation into civic/urban innovation. The companies nurtured by CfA and Tumml are filled with technologists and urbanists working together to create profitable businesses. Like WorkHands, a kind of LinkedIn for blue collar trades. Would something like this work outside a city? Maybe. Are its effects outsized and scale-ready in a city? Absolutely. That’s the opportunity in urban innovation.

Scale is what powers the sharing economy and it thrives because of the density and proximity of cities. In fact, shared resources at critical density is one of the only good definitions for what a city is. It’s natural that entrepreneurs have overlaid technology on this basic fact of urban life to amplify its effects. Would TaskRabbit, Hailo or LiquidSpace exist in suburbia? Probably, but their effects would be minuscule and investors would get restless. The city in this regard is the platform upon which sharing economy companies prosper. More importantly, companies like this change the way the city is used. It’s not urban planning, but it is urban (re)design and it makes a difference.

A twist that many in the tech sector who complain about cities often miss is that change in a city is not the same thing as change in city government. Obviously they are deeply intertwined; change is mighty hard when it is done at cross-purposes with government leadership. But it happens all the time. Non-government actors — foundations, non-profits, architecture and urban planning firms, real estate developers, construction companies — contribute massively to the shape and health of our cities.

Often this contribution is powered through policies of open data publication by municipal governments. Open data is the raw material of a city, the vital signs of what has happened there, what is happening right now, and the deep pool of patterns for what might happen next.

Tech entrepreneurs would do well to look at the organizations and companies capitalizing on this data as the real change agents, not government itself. Even the data in many cases is generated outside government. Citizens often do the most interesting data-gathering, with tools like LocalData. The most exciting thing happening at the intersection of technology and cities today — what really makes them “smart” — is what is happening at the periphery of city government. It’s easy to belly-ache about government, and certainly there are administrations that do not make data public (or shut it down), but tech companies who are truly interested in city change should know that there are plenty of examples of how to start up and do it.

And yet, the somewhat staid world of architecture and urban-scale design presents the most opportunity to a tech community interested in real urban change. While technology obviously plays a role in urban planning — 3D visual design tools like Revit and mapping services like ArcGIS are foundational for all modern firms — data analytics as a serious input to design has so far been used only in specialized (mostly energy-efficiency) scenarios. Where are the predictive analytics, the holistic models, the software-as-a-service providers for the brave new world of urban informatics and the Internet of Things? Technologists, it’s our move.

Something’s amiss when some city governments — rarely the vanguard in technological innovation — have more sophisticated tools for data-driven decision-making than the private sector firms who design the city. But some understand the opportunity. Vannevar Technology is working on it, as is Synthicity. There’s plenty of room for the most positive aspects of tech culture to remake the profession of urban planning itself. (Look to NYU’s Center for Urban Science and Progress and the University of Chicago’s Urban Center for Computation and Data for leadership in this space.)…”

Brainlike Computers, Learning From Experience


The New York Times: “Computers have entered the age when they are able to learn from their own mistakes, a development that is about to turn the digital world on its head.

The first commercial version of the new kind of computer chip is scheduled to be released in 2014. Not only can it automate tasks that now require painstaking programming — for example, moving a robot’s arm smoothly and efficiently — but it can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.

The new computing approach, already in use by some large technology companies, is based on the biological nervous system, specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.

In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That can hold enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.

Designers say the computing style can clear the way for robots that can safely walk and drive in the physical world, though a thinking or conscious computer, a staple of science fiction, is still far off on the digital horizon.

“We’re moving from engineering computing systems to something that has many of the characteristics of biological computing,” said Larry Smarr, an astrophysicist who directs the California Institute for Telecommunications and Information Technology, one of many research centers devoted to developing these new kinds of computer circuits.

Conventional computers are limited by what they have been programmed to do. Computer vision systems, for example, only “recognize” objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe, a set of step-by-step instructions to perform a calculation.
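The recipe analogy can be made concrete with a toy sketch (illustrative only; the function and its threshold are invented for this example, not drawn from the article). A conventional program follows fixed, hand-written steps and can do nothing beyond them:

```python
def classify_brightness(pixels, threshold=128):
    """A recipe-like algorithm: fixed steps, hand-chosen rules.

    Step 1: sum the pixel values.
    Step 2: divide by the count to get the average.
    Step 3: compare against a threshold a programmer picked.
    The program can only ever do what these steps spell out.
    """
    total = sum(pixels)                 # step 1
    average = total / len(pixels)       # step 2
    return "light" if average >= threshold else "dark"  # step 3

print(classify_brightness([200, 220, 180]))  # a hand-coded rule, not learning
```

If the rule is wrong for some input, the program stays wrong until a human rewrites the recipe; nothing in it adapts on its own.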

But last year, Google researchers were able to get a machine-learning algorithm, known as a neural network, to perform an identification task without supervision. The network scanned a database of 10 million images, and in doing so trained itself to recognize cats.
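The article does not detail Google’s method, which used a very large neural network; as a loose and far simpler illustration of the same idea — an algorithm discovering structure in data with no labels supplied — here is a tiny k-means clusterer (the data points are invented for the example):

```python
import numpy as np

def kmeans(points, k=2, iters=20, seed=0):
    """Minimal k-means: groups points into k clusters, unsupervised."""
    rng = np.random.default_rng(seed)
    # start from k distinct data points as initial cluster centers
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center
        dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = np.argmin(dists, axis=1)
        # move each center to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

# two obvious groups; the algorithm is never told which point belongs where
data = np.array([[0.1, 0.2], [0.0, 0.1], [0.2, 0.0],
                 [5.0, 5.1], [5.2, 4.9], [4.9, 5.0]])
labels, _ = kmeans(data, k=2)
```

Nothing in the code names the two groups; they emerge from the data, which is the sense in which the Google network “trained itself” (at vastly larger scale, and with a very different model).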

In June, the company said it had used those neural network techniques to develop a new search service to help customers find specific photos more accurately.

The new approach, used in both hardware and software, is being driven by the explosion of scientific knowledge about the brain. Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, said that is also its limitation, as scientists are far from fully understanding how brains function.”

Crowdsourcing drug discovery: Antitumour compound identified


David Bradley in Spectroscopy.now: “American researchers have used “crowdsourcing” – the cooperation of a large number of interested non-scientists via the internet – to help them identify a new fungus. The species produces unusual metabolites, which were isolated and characterized with the help of vibrational circular dichroism (VCD). One compound turns out to have potential antitumour activity.
So far, a mere 7 percent of the more than 1.5 million species of fungi thought to exist have been identified, and an even smaller fraction of these have been the subject of research seeking bioactive natural products. …Robert Cichewicz of the University of Oklahoma, USA, and his colleagues hoped to remedy this situation by working with a collection of several thousand fungal isolates from three regions: Arctic Alaska, tropical Hawaii, and subtropical to semiarid Oklahoma. Collaborator Susan Mooberry of the University of Texas at San Antonio carried out biological assays on many fungal isolates, looking for antitumour activity among the metabolites in Cichewicz’s collection. A number of interesting substances were identified…
However, the researchers realized quickly enough that the efforts of a single research team were inadequate if samples representing the immense diversity of the thousands of fungi they hoped to test were to be obtained and tested. They thus turned to the help of citizen scientists in a “crowdsourcing” initiative. In this approach, lay people with an interest in science, and even fellow scientists in other fields, were recruited to collect and submit soil from their gardens.
As the samples began to arrive, the team quickly found among them a previously unknown fungal strain – a Tolypocladium species – growing in a soil sample from Alaska. Colleague Andrew Miller of the University of Illinois identified this new fungus, which proved highly responsive to changes in its laboratory growth conditions, producing new compounds as those conditions were varied. Moreover, extraction of the active chemicals from the isolate revealed a unique metabolite which was shown to have significant antitumour activity in laboratory tests. The team suggests that this novel substance may represent a valuable new approach to cancer treatment because it sidesteps the biochemical mechanisms by which cancers develop resistance to conventional drugs…
The researchers point out the essential roles that citizen scientists can play. “Many of the groundbreaking discoveries, theories, and applied research during the last two centuries were made by scientists operating from their own homes,” Cichewicz says. “Although much has changed, the idea that citizen scientists can still participate in research is a powerful means for reinvigorating the public’s interest in science and making important discoveries,” he adds.”

A Bottom-Up Smart City?


Alicia Rouault at Data-Smart City Solutions: “America’s shrinking cities face a tide of disinvestment, abandonment, vacancy, and a shift toward deconstruction and demolition followed by strategic reinvestment, rightsizing, and a host of other strategies designed to renew once-great cities. Thriving megacity regions are experiencing rapid growth in population, offering a different challenge for city planners to redefine density, housing, and transportation infrastructure. As cities shrink and grow, policymakers are increasingly called to respond to these changes by making informed, data-driven decisions. What is the role of the citizen in this process of collecting and understanding civic data?
Writing for Forbes in “Open Sourcing the Neighborhood,” Professor of Sociology at Columbia University Saskia Sassen calls for “open source urbanism” as an antidote to the otherwise top-down smart city movement. This form of urbanism involves opening traditional verticals of information within civic and governmental institutions. Citizens can engage with and understand the logic behind decisions by exploring newly opened administrative data. Beyond opening these existing datasets, Sassen points out that citizen experts hold invaluable institutional memory that can serve as an alternate and legitimate resource for policymakers, economists, and urban planners alike.
In 2012, we created a digital platform called LocalData to address the production and use of community-generated data in a municipal context. LocalData is a digital mapping service used globally by universities, non-profits, and municipal governments to gather and understand data at a neighborhood scale. In contrast to traditional Census or administrative data, which is produced by a central agency and collected infrequently, our platform provides a simple method for both community-based organizations and municipal employees to gather real-time data on project-specific indicators: property conditions, building inspections, environmental issues or community assets. Our platform then visualizes data and exports it into formats integrated with existing systems in government to seamlessly provide accurate and detailed information for decision makers.
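LocalData’s actual export formats are not named in the passage; as a hedged sketch of the kind of transformation such a platform performs, point-based survey records (the schema here is hypothetical) can be serialized to GeoJSON, a common interchange format that mapping and government GIS systems consume:

```python
import json

def records_to_geojson(records):
    """Convert simple survey records (hypothetical schema) to GeoJSON.

    Each record is assumed to carry a lat/lon pair and a surveyed
    attribute; GeoJSON expects coordinates as [longitude, latitude].
    """
    features = [
        {
            "type": "Feature",
            "geometry": {"type": "Point",
                         "coordinates": [r["lon"], r["lat"]]},
            "properties": {"condition": r["condition"]},
        }
        for r in records
    ]
    return json.dumps({"type": "FeatureCollection", "features": features})

# one invented Detroit-area survey point, for illustration only
surveys = [{"lat": 42.3314, "lon": -83.0458, "condition": "vacant"}]
geojson = records_to_geojson(surveys)
```

Because GeoJSON is a plain-text standard, an export like this can be loaded directly into common desktop and web GIS tools without custom integration work.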
LocalData began as a project in Detroit, Michigan where the city was tackling a very real lack of standard, updated, and consistent condition information on the quality and status of vacant and abandoned properties. Many of these properties were owned by the city and county due to high foreclosure rates. One of Detroit’s strategies for combating crime and stabilizing neighborhoods is to demolish property in a targeted fashion. This strategy serves as a political win as much as providing an effective way to curb the secondary effects of vacancy: crime, drug use, and arson. Using LocalData, the city mapped critical corridors of emergent commercial property as an analysis tool for where to place investment, and documented thousands of vacant properties to understand where to target demolition.
Vacancy is not unique to the Midwest. Following our work with the Detroit Mayor’s office and planning department, LocalData has been used in dozens of other cities in the U.S. and abroad. Currently the Smart Chicago Collaborative is using LocalData to conduct a similar audit of vacant and abandoned property in southwest Chicago. Though an effective tool for capturing building-specific information, LocalData has also been used to capture behavior and movement of goods. The MIT Megacities Logistics Lab has used LocalData to map and understand the intensity of urban supply chains by interviewing shop owners and mapping delivery routes in global megacities in Mexico, Colombia, Brazil and the U.S. The resulting information has been used with analytical models to help both city officials and companies to design better city logistics policies and operations….”

Using Social Media in Rulemaking: Possibilities and Barriers


New paper by Michael Herz (Cardozo Legal Studies Research Paper No. 417): “Web 2.0” is characterized by interaction, collaboration, non-static web sites, use of social media, and creation of user-generated content. In theory, these Web 2.0 tools can be harnessed not only in the private sphere but as tools for an e-topia of citizen engagement and participatory democracy. Notice-and-comment rulemaking is the pre-digital government process that most approached (while still falling far short of) the e-topian vision of public participation in deliberative governance. The notice-and-comment process for federal agency rulemaking has now changed from a paper process to an electronic one. Expectations for this switch were high; many anticipated a revolution that would make rulemaking not just more efficient, but also more broadly participatory, democratic, and dialogic. In the event, the move online has not produced a fundamental shift in the nature of notice-and-comment rulemaking. At the same time, the online world in general has come to be increasingly characterized by participatory and dialogic activities, with a move from static, text-based websites to dynamic, multi-media platforms with large amounts of user-generated content. This shift has not left agencies untouched. To the contrary, agencies at all levels of government have embraced social media – by late 2013 there were over 1000 registered federal agency Twitter feeds and over 1000 registered federal agency Facebook pages, for example – but these have been used much more as tools for broadcasting the agency’s message than for dialogue or obtaining input. All of which invites the question whether agencies could or should directly rely on social media in the rulemaking process.
This study reviews how federal agencies have been using social media to date and considers the practical and legal barriers to using social media in rulemaking, not just to raise the visibility of rulemakings, which is certainly happening, but to gather relevant input and help formulate the content of rules.
The study was undertaken for the Administrative Conference of the United States and is the basis for a set of recommendations adopted by ACUS in December 2013. Those recommendations overlap with but are not identical to the recommendations set out herein.”

Open Data in Action


Nick Sinai at the White House: “Over the past few years, the Administration has launched a series of Open Data Initiatives, which have released troves of valuable data in areas such as health, energy, education, public safety, finance, and global development…
Today, in furtherance of this exciting economic dynamic, The Governance Lab (The GovLab) —a research institution at New York University—released the beta version of its Open Data 500 project—an initiative designed to identify, describe, and analyze companies that use open government data in order to study how these data can serve business needs more effectively. As part of this effort, the organization is compiling a list of 500+ companies that use open government data to generate new business and develop new products and services.
This working list of 500+ companies, from sectors ranging from real estate to agriculture to legal services, shines a spotlight on the surprising array of innovative and creative ways that open government data is being used to grow the economy – across different company sizes, different geographies, and different industries. The project includes information about the companies and which government datasets they have identified as critical resources for their business.
Some examples from the Open Data 500 Project include:
  • Brightscope, a San Diego-based company that leverages data from the Department of Labor, the Securities and Exchange Commission, and the Census Bureau to rate consumers’ 401k plans objectively on performance and fees, so companies can choose better plans and employees can make better decisions about their retirement options.
  • AllTuition, a Chicago-based startup that provides services—powered by data from the Department of Education on Federal student financial aid programs and student loans—to help students and parents manage the financial-aid process for college, in part by helping families keep track of deadlines, and walking them through the required forms.
  • Archimedes, a San Francisco healthcare modeling and analytics company that leverages Federal open data from the National Institutes of Health, the Centers for Disease Control and Prevention, and the Centers for Medicare and Medicaid Services to provide doctors more effective individualized treatment plans and to enable patients to make informed health decisions.
You can learn more here about the project and view the list of open data companies here.

See also:
Open Government Data: Companies Cash In

NYU project touts 500 top open-data firms”

Open data and transparency: a look back at 2013


Zoe Smith in the Guardian on open data and development in 2013: “The clarion call for a “data revolution” made in the post-2015 high level panel report is a sign of a growing commitment to see freely flowing data become a tool for social change.

Web-based technology continued to offer increasing numbers of people the ability to share standardised data and statistics to demand better governance and strengthen accountability. 2013 seemed to herald the moment that the open data/transparency movement entered the mainstream.
Yet for those who have long campaigned on the issue, the call was more than just a catchphrase; it was a unique opportunity. “If we do get a global drive towards open data in relation to development or anything else, that would be really transformative and it’s quite rare to see such bold statements at such an early stage of the process. I think it set the tone for a year in which transparency was front and centre of many people’s agendas,” says David Hall-Matthews, of Publish What You Fund.
This year saw high-level discussions translated into commitments at the policy level. David Cameron used the UK’s presidency of the G8 to trigger international action on the three Ts (tax, trade and transparency) through the IF campaign. The pledge at Lough Erne, in Northern Ireland, reaffirmed the commitment to the Busan open data standard as well as the specific undertaking that all G8 members would implement International Aid Transparency Initiative (IATI) standards by the end of 2015.
2013 was a particularly good year for the US Millennium Challenge Corporation (MCC), which topped the aid transparency index. While at the very top MCC and UK’s DfID were examples of best practice, there was still much room for improvement. “There is a really long tail of agencies who are not really taking transparency at all, yet. This includes important donors, the whole of France and the whole of Japan who are not doing anything credible,” says Hall-Matthews.
Yet given the increasing number of emerging and ‘frontier’ markets whose growth is driven in large part by wealth derived from natural resources, 2013 saw a growing sense of urgency for transparency to be applied to revenues from oil, gas and mineral resources that may far outstrip aid. In May, the new Extractive Industries Transparency Initiative standard (EITI) was adopted, which is said to be far broader and deeper than its previous incarnation.
Several countries have done much to ensure that transparency leads to accountability in their extractive industries. In Nigeria, for example, EITI reports are playing an important role in the debate about how resources should be managed in the country. “In countries such as Nigeria they’re taking their commitment to transparency and EITI seriously, and are going beyond disclosing information but also ensuring that those findings are acted upon and lead to accountability. For example, the tax collection agency has started to collect more of the revenues that were previously missing,” says Jonas Moberg, head of the EITI International Secretariat.
But just how far transparency and open data can actually deliver on their revolutionary potential has also been called into question. Governments and donor agencies can release data, but if the power structures within which this data is consumed and acted upon do not shift, is there really any chance of significant social change?
The complexity of the challenge is illustrated by the case of Mexico, which, in 2014, will succeed Indonesia as chair of the Open Government Partnership. At this year’s London summit, Mexico’s acting civil service minister spoke of the great strides his country has made in opening up the public procurement process, which accounts for around 10% of GDP and is a key area in which transparency and accountability can help tackle corruption.
There is, however, a certain paradox. As SOAS professor Leandro Vergara Camus, who has written extensively on peasant movements in Mexico, explains: “The NGO sector in Mexico has more of a positive view of these kinds of processes than the working class or peasant organisations. The process of transparency and accountability has gone further in urban areas than it has in rural areas.”…
With increasing numbers of organisations likely to jump on the transparency bandwagon in the coming year the greatest challenge is using it effectively and adequately addressing the underlying issues of power and politics.

Top 2013 transparency publications

Open data, transparency and international development, The North South Institute
Data for development: The new conflict resource?, Privacy International
The fix-rate: a key metric for transparency and accountability, Integrity Action
Making UK aid more open and transparent, DfID
Getting a seat at the table: Civil Society advocacy for budget transparency in “untransparent” countries, International Budget Partnership

The dates that mattered

23-24 May: New Extractive Industries Transparency Initiative standard adopted
30 May: Post-2015 high-level panel report calling for a ‘data revolution’ is published
17-18 June: UK premier, David Cameron, campaigns for tax, trade and transparency during the G8
24 October: US Millennium Challenge Corporation tops the aid transparency index
30 October – 1 November: Open Government Partnership summit in London gathers civil society, governments and data experts”

Open Budgets Portal


About: “The Open Budgets Portal is the first effort to create a one-stop shop for budget data worldwide with the hope of bringing visibility to countries’ efforts in this field, facilitating access and promoting use of spending data, and motivating other countries into action.

The portal offers the opportunity to showcase a subset of developing countries and subnational entities (identified by blue markers in the map) that have excelled in the exploration of new frontiers of fiscal transparency by choosing to disseminate their entire public spending datasets in accessible formats (i.e., soft copy), with the expectation that these efforts could motivate other countries into action. Users will be able to download the entire public expenditure landscape of the members of the portal in consolidated files, all of which were rigorously collected, cleaned and verified through the BOOST Initiative.

For each of these countries, the site also includes links to their original open data portals, which provide additional important information (i.e., frequencies other than annual, links to output data and other socio-economic indicators, etc.). While every effort has been made to certify the quality of these databases according to the BOOST approach and methodology, users are encouraged to refer back to the country-owned open data portals to ensure complete consistency of data with published official figures, as well as consult accompanying user manuals for potential caveats on uses of the data.

This portal represents a starting point to build momentum within the growing interest around fiscal transparency and the importance of data for enhanced decision-making processes and improved budget outcomes and accountability. While most initiatives on open budgets rightfully center on availability of key documents, little focus has been given to the quality of data dissemination and to the importance of its analytical use for incorporation into evidence-based decision-making processes.

This Open Budgets Portal aims to fill this gap by providing access to budget data worldwide, and particularly to the most disaggregated and comprehensive data collected through the BOOST Initiative. The portal combines this information with a variety of tools, manuals, reports and best practices aimed at stimulating use by intermediaries, as well as easier-to-interpret visualizations for non-experts. Our objective is to encourage all potential uses of this data and to unlock its analytical power.

The Open Budgets Portal was launched at the event “Boosting Fiscal Transparency for Better Policy Outcomes,” held on December 17, 2013 in Washington, DC. The following presentations were shown at the event:

Presentation of the Open Budgets Portal by Massimo Mastruzzi, Senior Economist, Open Government, World Bank.

Building a Citizen’s Budget Understanding – BudgetStories.md by Victoria Vlad, Economist of “Expert-Grup” from the Republic of Moldova.”

Selected Readings on Data Visualization


The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of data visualization was originally published in 2013.

Data visualization is a response to the ever-increasing amount of information in the world. With big data, informatics and predictive analytics, we have an unprecedented opportunity to revolutionize policy-making. Yet data by itself can be overwhelming. New tools and techniques for visualizing information can help policymakers clearly articulate insights drawn from data. Moreover, the rise of open data is enabling those outside of government to create informative and visually arresting representations of public information that can be used to support decision-making by those inside or outside governing institutions.
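The core idea can be illustrated in a few lines: the same numbers that are hard to compare in a table become legible at a glance when drawn as bars. This sketch uses a plain-text chart so it is fully self-contained; the permit figures are invented for the example:

```python
def bar_chart(data, width=40):
    """Render a labeled horizontal bar chart as text.

    Bars are scaled so the largest value spans `width` characters,
    which makes relative magnitudes visible without reading digits.
    """
    peak = max(data.values())
    lines = []
    for label, value in data.items():
        bar = "#" * round(width * value / peak)
        lines.append(f"{label:<12}{bar} {value}")
    return "\n".join(lines)

# hypothetical open-data figures (e.g., building permits per year)
permits = {"2010": 118, "2011": 210, "2012": 305, "2013": 472}
print(bar_chart(permits))
```

Real policy dashboards use richer tooling (interactive charts, maps, multiple variables), but the principle is the same: encode quantities visually so patterns and outliers surface immediately.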

Annotated Selected Reading List (in alphabetical order)

Duke, D.J., K.W. Brodlie, D.A. Duce and I. Herman. “Do You See What I Mean? [Data Visualization].” IEEE Computer Graphics and Applications 25, no. 3 (2005): 6–9. http://bit.ly/1aeU6yA.

  • In this paper, the authors argue that a more systematic ontology for data visualization is needed to ensure the successful communication of meaning. “Visualization begins when someone has data that they wish to explore and interpret; the data are encoded as input to a visualization system, which may in its turn interact with other systems to produce a representation. This is communicated back to the user(s), who have to assess this against their goals and knowledge, possibly leading to further cycles of activity. Each phase of this process involves communication between two parties. For this to succeed, those parties must share a common language with an agreed meaning.”
  • The authors “believe that now is the right time to consider an ontology for visualization,” and “as visualization move from just a private enterprise involving data and tools owned by a research team into a public activity using shared data repositories, computational grids, and distributed collaboration…[m]eaning becomes a shared responsibility and resource. Through the Semantic Web, there is both the means and motivation to develop a shared picture of what we see when we turn and look within our own field.”

Friendly, Michael. “A Brief History of Data Visualization.” In Handbook of Data Visualization, 15–56. Springer Handbooks Comp.Statistics. Springer Berlin Heidelberg, 2008. http://bit.ly/17fM1e9.

  • In this paper, Friendly explores the “deep roots” of modern data visualization. “These roots reach into the histories of the earliest map making and visual depiction, and later into thematic cartography, statistics and statistical graphics, medicine and other fields. Along the way, developments in technologies (printing, reproduction), mathematical theory and practice, and empirical observation and recording enabled the wider use of graphics and new advances in form and content.”
  • Just as the visualization of data in general is far from a new practice, Friendly shows that the graphical representation of government information has a similarly long history. “The collection, organization and dissemination of official government statistics on population, trade and commerce, social, moral and political issues became widespread in most of the countries of Europe from about 1825 to 1870. Reports containing data graphics were published with some regularity in France, Germany, Hungary and Finland, and with tabular displays in Sweden, Holland, Italy and elsewhere.”

Graves, Alvaro and James Hendler. “Visualization Tools for Open Government Data.” In Proceedings of the 14th Annual International Conference on Digital Government Research, 136–145. Dg.o ’13. New York, NY, USA: ACM, 2013. http://bit.ly/1eNSoXQ.

  • In this paper, the authors argue that, “there is a gap between current Open Data initiatives and an important part of the stakeholders of the Open Government Data Ecosystem.” As it stands, “there is an important portion of the population who could benefit from the use of OGD but who cannot do so because they cannot perform the essential operations needed to collect, process, merge, and make sense of the data. The reasons behind these problems are multiple, the most critical one being a fundamental lack of expertise and technical knowledge. We propose the use of visualizations to alleviate this situation. Visualizations provide a simple mechanism to understand and communicate large amounts of data.”
  • The authors also describe a prototype of a tool to create visualizations based on OGD with the following capabilities:
    • Facilitating visualization creation
    • Exploratory mechanisms
    • Viralization and sharing
    • Repurposing of visualizations

Hidalgo, César A. “Graphical Statistical Methods for the Representation of the Human Development Index and Its Components.” United Nations Development Programme Human Development Reports, September 2010. http://bit.ly/166TKur.

  • In this paper for the United Nations Development Programme, Hidalgo argues that “graphical statistical methods could be used to help communicate complex data and concepts through universal cognitive channels that are heretofore underused in the development literature.”
  • To support his argument, Hidalgo provides representations that “show how graphical methods can be used to (i) compare changes in the level of development experienced by countries (ii) make it easier to understand how these changes are tied to each one of the components of the Human Development Index (iii) understand the evolution of the distribution of countries according to HDI and its components and (iv) teach and create awareness about human development by using iconographic representations that can be used to graphically narrate the story of countries and regions.”

Stowers, Genie. “The Use of Data Visualization in Government.” IBM Center for The Business of Government, Using Technology Series, 2013. http://bit.ly/1aame9K.

  • This report seeks “to help public sector managers understand one of the more important areas of data analysis today — data visualization. Data visualizations are more sophisticated, fuller graphic designs than the traditional spreadsheet charts, usually with more than two variables and, typically, incorporating interactive features.”
  • Stowers also offers numerous examples of “visualizations that include geographical and health data, or population and time data, or financial data represented in both absolute and relative terms — and each communicates more than simply the data that underpin it. In addition to these many examples of visualizations, the report discusses the history of this technique, and describes tools that can be used to create visualizations from many different kinds of data sets.”

Lessons in the crowdsourced verification of news from Storyful and Reddit’s Syria forum

From GigaOm: “One of the most powerful trends in media over the past year is the crowdsourced verification of news, whether it’s the work of a blogger like Brown Moses or former NPR journalist Andy Carvin. Two other interesting efforts in this area are the “open newsroom” approach taken by Storyful — which specializes in verifying social-media reports for mainstream news entities — and a Reddit forum devoted to crowdsourcing news coverage of the civil war in Syria.
Storyful journalist Joe Galvin recently looked at some of the incidents that the company has helped either debunk or verify over the past year — including a fake tweet from the official account of the Associated Press about explosions at the White House (which sent the Dow Jones index plummeting before it was corrected), a claim from Russian authorities that a chemical attack in Syria had been premeditated, and a report from investigative journalist Seymour Hersh about the same attack that questioned whether the government had been involved….
Reddit, meanwhile, has been conducting some “open newsroom”-style experiments of its own around a number of news events, including the Syrian civil war. The site has come under fire in the past for some of those efforts — including the attempt to identify the bombers in the Boston bombings case, which went badly awry — but the Syrian thread in particular is a good example of how a smart aggregator can make sense of an ongoing news event. In a recent post at a site called Dissected News, one of the moderators behind the /r/SyrianCivilWar sub-Reddit — a 22-year-old law student named Christopher Kingdon (or “uptodatepronto” as he is known on the site) — wrote about his experiences with the forum, which is trying to be a broadly objective source for breaking news and information about the conflict….
Some of what the moderators do in the forum is similar to the kind of verification that Storyful or the BBC’s “user-generated content desk” do — checking photos and video for obvious signs of fakery and hoaxes. But Kingdon also describes how much effort his team of volunteers puts into ensuring that the sub-Reddit doesn’t degenerate into trolling or flame-wars. Strict rules are enforced “to prevent personal attacks, offensive and violent language and racism” and the moderators favor posts that “utilize sources, background information and a dash of common sense.”