Transforming Our Conversation of Information Architecture with Structure


Nathaniel Davis: “Information architecture has been characterized as both an art and a science. Because there’s more evidence of the former than the latter, the academic and research community is justified in hesitating to give the practice of information architecture more attention.
If you probe the history of information architecture for the web, its foundation appears to be rooted in library science. But you’ll also find a pattern of borrowing methods and models from many other disciplines like architecture and urban planning, linguistics and ethnography, cognition and psychology, to name a few. This history leads many to wonder if the practice of information architecture is anything other than an art of induction for solving problems of architecture and design for the web…
Certainly, there is one concept that has persisted under the radar for many years with limited exploration. It is littered throughout countless articles, books and papers and is present in the most cited IA practice definitions. It may be the single concept that truly bridges practitioner and academic interests around a central and worthwhile topic. That concept is structure.”

Accountability.Org: Online Disclosure by Nonprofits


Paper by Joannie Tremblay-Boire and Aseem Prakash: “Why do some nonprofits signal their accountability via unilateral website disclosures? We develop an Accountability Index to examine the websites of 200 U.S. nonprofits ranked by the Chronicle of Philanthropy. We expect nonprofits’ incentives for website disclosures will be shaped by their organizational and sectoral characteristics. Our analysis suggests that nonprofits appearing frequently in the media disclose more accountability information while nonprofits larger in size disclose less. Religion-related nonprofits tend to disclose less information, suggesting that religious bonding enhances trust and reduces incentives for self-disclosure. Health nonprofits disclose less information, arguably because government-mandated disclosures reduce marginal benefits from voluntary disclosures. Education nonprofits, on the other hand, tend to disclose more accountability information, perhaps because they supply credence goods. This research contributes to the emerging literature on websites as accountability mechanisms by developing a new index for scholars to use and proposing new hypotheses based on the corporate social responsibility literature.”

Understanding Smart Data Disclosure Policy Success: The Case of Green Button


New Paper by Djoko Sigit Sayogo and Theresa Pardo: “Open data policies are expected to promote innovations that stimulate social, political and economic change. In pursuit of innovation potential, open data has expanded to a wider environment involving government, business and citizens. The US government recently launched such a collaboration through a smart data policy supporting energy efficiency called Green Button. This paper explores the implementation of Green Button and identifies motivations and success factors facilitating successful collaboration between public and private organizations to support smart disclosure policy. Analyzing qualitative data from semi-structured interviews with experts involved in Green Button initiation and implementation, this paper presents some key findings. The success of Green Button can be attributed to the interaction between internal and external factors. The external factors consist of both market and non-market drivers: economic factors, technology-related factors, regulatory contexts and policy incentives, and some factors that stimulate imitative behavior among the adopters. The external factors create the necessary institutional environment for the Green Button implementation. On the other hand, the acceptance and adoption of Green Button itself is influenced by the fit of Green Button capability to the strategic mission of energy and utility companies in providing energy efficiency programs. We also identify the different roles of government during the different stages of Green Button implementation.”
[Recipient of Best Management/Policy Paper Award, dgo2013]

What Happens When Everyone Makes Maps?


Laura Mallonee in the Atlantic: “On a spring Sunday in a Soho penthouse, ten people have gathered for a digital mapping “Edit-A-Thon.” Potted plants grow to the ceiling and soft cork carpets the floor. At a long wooden table, an energetic woman named Liz Barry is showing me how to map my neighborhood. “This is what you’ll see when you look at OpenStreetMap,” she says.
Though visually similar to Google’s, the map on the screen gives users unfettered access to its underlying data — anyone can edit it. Barry lives in Williamsburg, and she’s added many of the neighborhood’s boutiques and restaurants herself. “Sometimes when I’m tired at the end of the day and can’t work anymore, I just edit OpenStreetMap,” she says. “Kind of a weird habit.” Barry then shows me the map’s “guts.” I naively assume it will be something technical and daunting, but it’s just an editable version of the same map, with tools that let you draw roads, identify landmarks, and even label your own house.”
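
The “unfettered access” Barry describes is not limited to the browser editor; OpenStreetMap’s raw data can also be read programmatically. As a minimal sketch, assuming the public Overpass API (the endpoint, coordinates, and “cafe” tag below are illustrative choices, not details from the article), a few lines of Python pull the underlying nodes for a neighborhood:

    # A minimal sketch of reading OpenStreetMap's underlying data directly.
    # The Overpass API endpoint, coordinates, and "cafe" tag below are
    # illustrative assumptions; the article only describes the browser editor.
    import requests

    OVERPASS_URL = "https://overpass-api.de/api/interpreter"

    # Overpass QL: cafe nodes within ~800 m of a point in Williamsburg, Brooklyn.
    query = """
    [out:json][timeout:25];
    node["amenity"="cafe"](around:800,40.7143,-73.9614);
    out body;
    """

    response = requests.post(OVERPASS_URL, data={"data": query})
    response.raise_for_status()

    for element in response.json().get("elements", []):
        name = element.get("tags", {}).get("name", "(unnamed)")
        print(f"{name}: {element['lat']:.5f}, {element['lon']:.5f}")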

BaltimoreCode.org


Press Release: “The City of Baltimore’s Chief Technology Officer Chris Tonjes and the non-partisan, non-profit OpenGov Foundation announced today the launch of BaltimoreCode.org, a free software platform that empowers all Baltimore residents to discover, access, and use local laws when they want, and how they want.

BaltimoreCode.org lifts and ‘liberates’ the Baltimore City Charter and Code from unalterable, often hard-to-find online files—such as PDFs—by inserting them into user-friendly, organized and modern website formats. This straightforward switch delivers significant results: more clarity, context, and public understanding of the laws’ impact on Baltimore citizens’ daily lives. For the first time, BaltimoreCode.org allows uninhibited reuse of City law data, which everyday Baltimore residents can use, share, and spread as they see fit. Simply, BaltimoreCode.org gives citizens the information they need, on their terms.”

Next.Data.gov


Nick Sinai at the White House Blog: “Today, we’re excited to share a sneak preview of a new design for Data.gov, called Next.Data.gov. The upgrade builds on the President’s May 2013 Open Data Executive Order that aims to fuse open-data practices into the Federal Government’s DNA. Next.Data.gov is far from complete (think of it as a very early beta), but we couldn’t wait to share our design approach and the technical details behind it – knowing that we need your help to make it even better. Here are some key features of the new design:

Leading with Data: The Data.gov team at General Services Administration (GSA), a handful of Presidential Innovation Fellows, and OSTP staff designed Next.Data.gov to put data first. The team studied usage patterns on Data.gov and found that visitors were hungry for examples of how data are used. The team also noticed many sources outside of Data.gov, such as tweets and articles, featuring Federal datasets in action. So Next.Data.gov includes a rich stream that enables each data community to communicate how its datasets are impacting companies and the public.

In this dynamic stream, you’ll find blog posts, tweets, quotes, and other features that more fully showcase the wide range of information assets that exist within the vaults of government.
Powerful Search: The backend of Next.Data.gov is CKAN, powered by Solr—a powerful search engine that will make it even easier to find relevant datasets online. Suggested search terms have been added to help users find (and type) things faster. Next.Data.gov will start to index datasets from agencies that publish their catalogs publicly, in line with the President’s Open Data Executive Order. The early preview launching today features datasets from the Department of Health and Human Services—one of the first Federal agencies to publish a machine-readable version of its data catalog.
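
Because the post names CKAN as the backend, the catalog should be reachable through CKAN’s standard action API once public. The sketch below queries the package_search action; the catalog.data.gov endpoint is an assumption used for illustration, as the post does not publish an API URL for Next.Data.gov:

    # A minimal sketch of querying a CKAN catalog through its standard action
    # API. CKAN/Solr is named in the post; the catalog.data.gov URL below is an
    # assumed example endpoint, since the post gives no API address.
    import requests

    CKAN_API = "https://catalog.data.gov/api/3/action/package_search"

    # Solr-backed full-text search over dataset metadata.
    payload = requests.get(CKAN_API, params={"q": "health", "rows": 5}).json()

    if payload.get("success"):
        result = payload["result"]
        print(f"{result['count']} matching datasets; showing {len(result['results'])}:")
        for dataset in result["results"]:
            print(f"- {dataset['title']} ({dataset['name']})")

The same call works against any CKAN instance, which is what makes agencies’ publicly published catalogs straightforward to index and search in one place.
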
Rotating Data Visualizations: Building on the theme of leading with data, even the masthead design for Next.Data.gov is an open-data-powered visualization—for now, it’s a cool U.S. Geological Survey earthquake plot showing the magnitude of earthquake measurements collected over the past week, around the globe.

This particular visualization was built using D3.js. The visualization will be updated periodically to spotlight different ways open data is used and illustrated….
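
For readers who want to rebuild a masthead-style plot, USGS publishes rolling GeoJSON summary feeds that match the description. The feed URL in this sketch is an assumption, since the post does not say which endpoint the visualization reads:

    # A minimal sketch of pulling a week of global earthquake data like the
    # masthead plot described above. The USGS GeoJSON feed URL is an assumption;
    # the post does not name the endpoint the visualization actually reads.
    import requests

    FEED = "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_week.geojson"

    features = requests.get(FEED).json()["features"]
    magnitudes = [f["properties"]["mag"] for f in features
                  if f["properties"]["mag"] is not None]

    print(f"{len(magnitudes)} earthquakes recorded in the past week")
    print(f"strongest: M{max(magnitudes):.1f}")
    # Each feature also carries geometry coordinates [lon, lat, depth], which is
    # all a D3.js plot needs to place magnitude-scaled circles on a world map.
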
We encourage you to collaborate in the design process by creating pull requests or providing feedback via Quora or Twitter.”

Metrics for Government Reform


Geoff Mulgan: “How do you measure a programme of government reform? What counts as evidence that it’s working or not? I’ve been asked this question many times, so this very brief note suggests some simple answers – mainly prompted by seeing a few writings on this question which I thought confused some basic points.
Any type of reform programme will combine elements at very different levels. These may include:

  • A new device – for example, adjusting the wording in an official letter or a call centre script to see what impact this has on such things as tax compliance.
  • A new kind of action – for example a new way of teaching maths in schools, treating patients with diabetes, handling prison leavers.
  • A new kind of policy – for example opening up planning processes to more local input; making welfare payments more conditional.
  • A new strategy – for example a scheme to cut carbon in cities, combining retrofitting of housing with promoting bicycle use; or a strategy for public health.
  • A new approach to strategy – for example making more use of foresight, scenarios or big data.
  • A new approach to governance – for example bringing hitherto excluded groups into political debate and decision-making.

This rough list hopefully shows just how different these levels are in their nature. Generally, as we go down the list, the following things rise:

  • The number of variables and the complexity of the processes involved
  • The timescales over which any judgements can be made
  • The difficulty involved in making judgements about causation
  • The importance of qualitative relative to quantitative assessment”

Crowdsourcing—Harnessing the Masses to Advance Health and Medicine


A Systematic Review of the literature in the Journal of General Internal Medicine: “Crowdsourcing research allows investigators to engage thousands of people to provide either data or data analysis. However, prior work has not documented the use of crowdsourcing in health and medical research. We sought to systematically review the literature to describe the scope of crowdsourcing in health research and to create a taxonomy to characterize past uses of this methodology for health and medical research.
Twenty-one health-related studies utilizing crowdsourcing met eligibility criteria. Four distinct types of crowdsourcing tasks were identified: problem solving, data processing, surveillance/monitoring, and surveying. …
Utilizing crowdsourcing can improve the quality, cost, and speed of a research project while engaging large segments of the public and creating novel science. Standardized guidelines are needed on crowdsourcing metrics that should be collected and reported to provide clarity and comparability in methods.”

Internet Association's New Website Lets Users Comment on Bills


Mashable: “The Internet Association, the lobbying conglomerate of big tech companies like Google, Amazon and Facebook, has launched a new website that allows users to comment on proposed bills.
The association unveiled its redesigned website on Monday, and it hopes its new, interactive features will give citizens a way to speak up…
In the “Take Action” section of the website, under “Leave Your Mark,” the association plans to upload bills, declarations and other context documents for netizens to peruse and, most importantly, interact with. After logging in, a user can comment on the bill in general, and even make line edits.”

The Durkheim Project


Co.Labs: “A new project, recently launched by DARPA and Dartmouth College, is trying something new: data-mining social networks to spot patterns indicating suicidal behavior.
Called The Durkheim Project, named for the sociologist who pioneered the statistical study of suicide, it is asking veterans to offer their Twitter and Facebook authorization keys for an ambitious effort to match social media behavior with indications of suicidal thought. Veterans’ online behavior is then fed into a real-time analytics dashboard which predicts suicide risks and psychological episodes… The Durkheim Project is led by New Hampshire-based Patterns and Predictions, a Dartmouth College spin-off with close ties to academics there…
The Durkheim Project is part of DARPA’s Detection and Computational Analysis of Psychological Signals (DCAPS) project. DCAPS is a larger effort designed to harness predictive analytics for veteran mental health–and not just from social media. According to DARPA’s Russell Shilling’s program introduction, DCAPS is also developing algorithms that can data mine voice communications, daily eating and sleeping patterns, in-person social interactions, facial expressions, and emotional states for signs of suicidal thought. While participants in Durkheim won’t receive mental health assistance directly from the project, their contributions will go a long way toward treating suicidal veterans in the future….
The project launched on July 1; the number of veterans participating is not currently known but the final number is expected to hover around 100,000.”
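
To make the predictive-analytics idea concrete: at its simplest, this class of system scores new text against a model trained on labeled examples. The toy sketch below, with invented data and off-the-shelf TF-IDF features plus logistic regression, shows only the general shape of such a pipeline, not the Durkheim Project’s actual models:

    # An illustrative toy of the class of technique described: scoring text for
    # risk signals with a supervised classifier. The posts, labels, and feature
    # choices are invented; the Durkheim Project's real models and data are not
    # described in this article.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labeled posts: 1 = flagged by a clinician, 0 = not flagged.
    posts = [
        "had a great time at the reunion this weekend",
        "can't see the point of anything anymore",
        "started a new job, feeling hopeful",
        "everyone would be better off without me",
    ]
    labels = [0, 1, 0, 1]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(posts, labels)

    # predict_proba returns [P(not flagged), P(flagged)] for each new post.
    risk = model.predict_proba(["nothing matters and i am so tired"])[0][1]
    print(f"estimated risk score: {risk:.2f}")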