USAID establishes its first open data policy


Billy Mitchell at FedScoop: “The U.S. Agency for International Development jumped on the open data wave last week, announcing its first-ever policy to share its data sets and tools with the public on a central repository.

Referred to as Automated Directives System 579, the open data policy is a hat tip to President Barack Obama’s directive on transparency and open government five years ago and comes after the agency’s Frontiers in Development Forum in September addressing pathways for innovation for its mission to provide support to impoverished countries. With the new policy, USAID will provide a framework to open its agency-funded data to the public and publish it in a central location, making it easy to consume and use.
“USAID has long been a data-driven and evidence-based Agency, but never has the need been greater to share our data with a diverse set of partners—including the general public—to improve development outcomes,” wrote Angelique Crumbly, USAID’s performance improvement officer, and Brandon Pustejovsky, chief data officer for USAID, in a blog post. “For the first time in history, we have the tools, technologies and approaches to end extreme poverty within two decades. And while many of these new innovations were featured at our recent Frontiers in Development Forum, we also recognize that they largely rely on an ongoing stream of data (and new insights generated by that data) to ensure their appropriate application.”…

USAID’s Development Data Library (DDL) and open data will be hosted on the USAID website, which already hosts a long list of databases. USAID has also started a GitHub page for feedback on the data”

From the smart city to the wise city: The role of universities in place-based leadership


Paper by Hambleton, R.: “For a variety of reasons the notion of the smart city has grown in popularity and some even claim that all cities now have to be ‘smart’. For example, some digital enthusiasts argue that advances in Information and Communication Technologies (ICT) are ushering in a new era in which pervasive electronic connections will inevitably lead to significant changes that make cities more liveable and more democratic. This paper will cast a critical eye over these claims. It will unpack the smart city rhetoric and show that, in fact, three competing perspectives are struggling for ascendancy within the smart cities discourse: 1) The digital city (emphasising a strong commitment to the use of ICT in governance), 2) The green city (reflecting the growing use of the US phrase smart growth, which is concerned with applying sound urban planning principles), and 3) The learning city (emphasising the way in which cities learn, network and innovate). Five digital danger zones will be identified and discussed. This analysis will suggest that scholars and policy makers who wish to improve the quality of life in cities should focus their attention on wisdom, not smartness. Civic leaders need to exercise judgement based on values if they are to create inclusive, sustainable cities. It is not enough to be clever, quick, or ingenious, nor will it help if Big Data is superseded by Even Bigger Data. Universities can play a much more active role in place-based leadership in the cities where they are located. To do this effectively they need to reconsider the nature of modern scholarship. The paper will show how a growing number of universities are doing precisely this. Two respected examples will be presented to show how urban universities, if they are committed to engaged scholarship, can make a significant contribution to the creation of the wise city.”

Privacy Identity Innovation: Innovator Spotlight


pii2014: “Every year, we invite a select group of startup CEOs to present their technologies on stage at Privacy Identity Innovation as part of the Innovator Spotlight program. This year’s conference (pii2014) is taking place November 12-14 in Silicon Valley, and we’re excited to announce that the following eight companies will be participating in the pii2014 Innovator Spotlight:
* BeehiveID – Led by CEO Mary Haskett, BeehiveID is a global identity validation service that enables trust by identifying bad actors online BEFORE they have a chance to commit fraud.
* Five – Led by CEO Nikita Bier, Five is a mobile chat app crafted around the experience of a house party. With Five, you can browse thousands of rooms and have conversations about any topic.
* Glimpse – Led by CEO Elissa Shevinsky, Glimpse is a private (disappearing) photo messaging app just for groups.
* Humin – Led by CEO Ankur Jain, Humin is a phone and contacts app designed to think about people the way you naturally do by remembering the context of your relationships and letting you search them the way you think.
* Kpass – Led by CEO Dan Nelson, Kpass is an identity platform that provides brands, apps and developers with an easy-to-implement technology solution to help manage the notice and consent requirements of the Children’s Online Privacy Protection Act (COPPA).
* Meeco – Led by CEO Katryna Dow, Meeco is a Life Management Platform that offers an all-in-one solution for you to transact online, collect your own personal data, and be more anonymous with greater control over your own privacy.
* TrustLayers – Led by CEO Adam Towvim, TrustLayers is privacy intelligence for big data. TrustLayers enables confident use of personal data, keeping companies secure in the knowledge that their teams are following the rules.
* Virtru – Led by CEO John Ackerly, Virtru is the first company to make email privacy accessible to everyone. With a single plug-in, Virtru empowers individuals and businesses to control who receives, reviews, and retains their digital information — wherever it travels, throughout its lifespan.
Learn more about the startups on the Innovator Spotlight page…”

European Union Open Data Portal


About: “The European Union Open Data Portal is the single point of access to a growing range of data from the institutions and other bodies of the European Union (EU). Data are free for you to use and reuse for commercial or non-commercial purposes.
By providing easy and free access to data, the portal aims to promote their innovative use and unleash their economic potential. It also aims to help foster the transparency and the accountability of the institutions and other bodies of the EU.
The EU Open Data Portal is managed by the Publications Office of the European Union. Implementation of the EU’s open data policy is the responsibility of the Directorate-General for Communications Networks, Content and Technology of the European Commission.
What can I find on the portal?
The portal provides a metadata catalogue giving access to data from the institutions and other bodies of the EU. To facilitate reuse, these metadata are based on common encoding rules and standardized vocabularies. To learn more, see Linked Data.
Data are available in both human- and machine-readable formats for immediate reuse. You will also find a selection of applications built around EU data. To learn more, see Applications.
How can I reuse these data?
As a general principle, you can reuse data free of charge, provided that the source is acknowledged (see legal notice). Specific conditions on reuse, related mostly to the protection of third-party intellectual property rights, apply to a small number of datasets. A link to these conditions is displayed on the relevant data pages.
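To make the Linked Data description concrete, the sketch below queries a DCAT-style metadata catalogue over SPARQL, the standard query protocol for Linked Data endpoints. The endpoint URL is a placeholder rather than the portal's documented address; DCAT and Dublin Core are the kind of standardized vocabularies such catalogues typically use.

```python
import requests

# Placeholder SPARQL endpoint; the real portal documents its own address.
ENDPOINT = "https://example.europa.eu/sparql"

# DCAT and Dublin Core are standardized vocabularies commonly used to
# describe datasets in open data catalogues.
QUERY = """
PREFIX dcat: <http://www.w3.org/ns/dcat#>
PREFIX dct:  <http://purl.org/dc/terms/>
SELECT ?dataset ?title WHERE {
  ?dataset a dcat:Dataset ;
           dct:title ?title .
  FILTER (lang(?title) = "en")
}
LIMIT 10
"""

resp = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
resp.raise_for_status()

# Print each dataset URI with its English title.
for row in resp.json()["results"]["bindings"]:
    print(row["dataset"]["value"], "-", row["title"]["value"])
```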
How can I participate in the portal?
Another important goal of the portal is to engage with the user community around EU open data. You can participate by:

  • suggesting datasets,
  • giving your feedback and suggestions, and
  • sharing your apps or the uses you have made of the data from the portal.

Get in touch with us!

From Information to Smart Society


New book edited by Lapo Mola, Ferdinando Pennarola, and Stefano Za: “This book presents a collection of research papers focusing on issues emerging from the interaction of information technologies and organizational systems. In particular, the individual contributions examine digital platforms and artifacts currently adopted in both the business world and society at large (people, communities, firms, governments, etc.). The topics covered include: virtual organizations, virtual communities, smart societies, smart cities, ecological sustainability, e-healthcare, e-government, and interactive policy-making (IPM)…”

Open Access Button


About the Open Access Button: “The key functions of the Open Access Button are finding free research, making more research available, and advocacy. Here’s how each works.

Finding free papers

Research published in journals that require you to pay to read can sometimes be accessed free in other places. These other copies are often very similar to the published version, but may lack nice formatting or be a version prior to peer review. These copies can be found in research repositories, on authors’ websites, and in many other places because they have been archived. To find these versions, we identify the paper a user needs and search Google Scholar and CORE for these copies, linking users to them.
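A minimal sketch of that lookup step, assuming a hypothetical CORE-style search endpoint, API key, and response shape (the real CORE API's URL and parameters may differ):

```python
import requests

API_KEY = "YOUR_API_KEY"  # hypothetical credential
SEARCH_URL = "https://api.example-core.org/search"  # placeholder, not the real CORE URL

def find_free_copies(title, limit=5):
    """Look up openly archived copies of a paywalled paper by title."""
    resp = requests.get(
        SEARCH_URL,
        params={"q": title, "limit": limit},
        headers={"Authorization": "Bearer " + API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    results = resp.json().get("results", [])
    # Keep only hits that expose a direct link to an archived full text.
    return [r for r in results if r.get("downloadUrl")]

for hit in find_free_copies("Open access and citation impact"):
    print(hit["title"], "->", hit["downloadUrl"])
```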

Making more research, or information about papers available

If a free copy isn’t available, we aim to make one. This is not a simple task, so we use a few different innovative strategies. First, we email the author of the research and ask them to make a copy of the research available – once they do this we’ll send it to everyone who needs it. Second, we create a page for each paper needed; if that page is shared, viewed, and linked to, the author may see it and provide their paper there. Third, we’re building ways to find associated information about a paper, such as the facts it contains, comments from people who’ve read it, related information, and lay summaries.

Advocacy

Unfortunately, the Open Access Button can only do so much and isn’t a perfect or long-term solution to this problem. The data and stories collected by the Button are used to help make the changes required to really solve this issue. We also support campaigns and grassroots advocates with this at openaccessbutton.org/action…”

The government wants to study ‘social pollution’ on Twitter


In the Washington Post: “If you take to Twitter to express your views on a hot-button issue, does the government have an interest in deciding whether you are spreading “misinformation”? If you tweet your support for a candidate in the November elections, should taxpayer money be used to monitor your speech and evaluate your “partisanship”?

My guess is that most Americans would answer those questions with a resounding no. But the federal government seems to disagree. The National Science Foundation, a federal agency whose mission is to “promote the progress of science; to advance the national health, prosperity and welfare; and to secure the national defense,” is funding a project to collect and analyze your Twitter data.
The project is being developed by researchers at Indiana University, and its purported aim is to detect what they deem “social pollution” and to study what they call “social epidemics,” including how memes — ideas that spread throughout pop culture — propagate. What types of social pollution are they targeting? “Political smears,” so-called “astroturfing” and other forms of “misinformation.”
Named “Truthy,” after a term coined by TV host Stephen Colbert, the project claims to use a “sophisticated combination of text and data mining, social network analysis, and complex network models” to distinguish between memes that arise in an “organic manner” and those that are manipulated into being.

But there’s much more to the story. Focusing in particular on political speech, Truthy keeps track of which Twitter accounts are using hashtags such as #teaparty and #dems. It estimates users’ “partisanship.” It invites feedback on whether specific Twitter users, such as the Drudge Report, are “truthy” or “spamming.” And it evaluates whether accounts are expressing “positive” or “negative” sentiments toward other users or memes…”
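The article does not publish Truthy's models, but the hashtag-tracking step it describes can be illustrated with a toy score: count an account's left- and right-leaning hashtags and map the balance onto [-1, 1]. The tag lists below are illustrative stand-ins, and this is a sketch of the general technique, not the project's actual method.

```python
from collections import Counter

# Illustrative partisan hashtag lists (stand-ins, not Truthy's own).
RIGHT_TAGS = {"#teaparty", "#tcot"}
LEFT_TAGS = {"#dems", "#p2"}

def partisanship(tweets):
    """Score in [-1, 1]: -1 means all left-leaning tags, +1 all right-leaning."""
    counts = Counter(
        word.lower()
        for tweet in tweets
        for word in tweet.split()
        if word.lower() in RIGHT_TAGS | LEFT_TAGS
    )
    right = sum(counts[t] for t in RIGHT_TAGS)
    left = sum(counts[t] for t in LEFT_TAGS)
    total = right + left
    return 0.0 if total == 0 else (right - left) / total

# Two right-leaning tags against one left-leaning tag scores +0.33.
print(partisanship(["Rally tonight #teaparty", "Vote! #dems", "#teaparty meetup"]))
```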

Open data for open lands


At Radar: “President Obama’s well-publicized national open data policy (pdf) makes it clear that government data is a valuable public resource for which the government should be making efforts to maximize access and use. This policy was based on lessons from previous government open data success stories, such as weather data and GPS, which form the basis for countless commercial services that we take for granted today and that deliver enormous value to society. (You can see an impressive list of companies reliant on open government data via GovLab’s Open Data 500 project.)
Based on this open data policy, I’ve been encouraging entrepreneurs to invest their time and ingenuity to explore entrepreneurial opportunities based on government data. I’ve even invested (through O’Reilly AlphaTech Ventures) in one such start-up, Hipcamp, which provides user-friendly interfaces to making reservations at national and state parks.
A better system is sorely needed. The current reservation system, managed by Active Network / Reserve America, is clunky and almost unusable. Hipcamp changes all that, making it a breeze to reserve camping spots.
But now this is under threat. Active Network / Reserve America’s 10-year contract is up for renewal, and the Department of the Interior had promised an RFP for a new contract that conformed with the open data mandate. Ideally, that RFP would require an API so that independent companies could provide alternate interfaces, just like travel sites provide booking interfaces for air travel, hotels, and more. That explosion of consumer convenience should be happening for customers of our nation’s parks as well, don’t you think?…”
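To make that argument concrete, here is what a third-party client against such an open reservations API might look like. Every endpoint, parameter, and field below is hypothetical; no such federal API exists yet, which is the point of the RFP debate.

```python
import requests

# Entirely hypothetical base URL for an open campsite-reservation API.
BASE = "https://api.example.gov/recreation/v1"

def available_campsites(park, arrive, depart):
    """List open campsites in a park for a date range (YYYY-MM-DD strings)."""
    resp = requests.get(
        BASE + "/campsites",
        params={"park": park, "arrive": arrive,
                "depart": depart, "status": "available"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["campsites"]

# Any independent site could build a friendlier interface on calls like this.
for site in available_campsites("yosemite", "2015-07-03", "2015-07-05"):
    print(site["name"], site["nightly_fee"])
```

With a documented JSON API of this kind, independent interfaces such as Hipcamp's could compete on usability the way travel sites do for flights and hotels.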

Chicago uses big data to save itself from urban ills


Aviva Rutkin in the New Scientist: “This year in Chicago, some kids will get lead poisoning from the paint or pipes in their homes. Some restaurants will cook food in unsanitary conditions and, here and there, a street corner will be suddenly overrun with rats. These kinds of dangers are hard to avoid in a city of more than 2.5 million people. The problem is, no one knows for certain where or when they will pop up.

The Chicago city government is hoping to change that by knitting powerful predictive models into its everyday city inspections. Its latest project, currently in pilot tests, analyses factors such as home inspection records and census data, and uses the results to guess which buildings are likely to cause lead poisoning in children – a problem that affects around 500,000 children in the US each year. The idea is to identify trouble spots before kids are exposed to dangerous lead levels.
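As a sketch of the general approach (not Chicago's actual model), a classifier can be trained on historical inspection and census features like those the article mentions, and its risk scores used to decide which buildings to inspect first. All features and labels below are synthetic stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic features per building: [age in years, prior code violations,
# median tract income in $k, nearby past lead complaints].
X = rng.random((1000, 4)) * [120, 10, 100, 5]
# Synthetic label: whether a child in the building later showed elevated
# blood lead. Real labels would come from public health records.
y = ((X[:, 0] > 70) & (X[:, 1] > 4)) | (rng.random(1000) < 0.05)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Rank unseen buildings by predicted risk so inspectors visit the
# likeliest trouble spots before children are exposed.
risk = model.predict_proba(X_test)[:, 1]
top = np.argsort(risk)[::-1][:10]
print("Inspect first:", top, risk[top].round(2))
```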

“We are able to prevent problems instead of just respond to them,” says Jay Bhatt, chief innovation officer at the Chicago Department of Public Health. “These models are just the beginning of the use of predictive analytics in public health and we are excited to be at the forefront of these efforts.”

Chicago’s projects are based on the thinking that cities already have what they need to raise their municipal IQ: piles and piles of data. In 2012, city officials built WindyGrid, a platform that collected data like historical facts about buildings and up-to-date streams such as bus locations, tweets and 911 calls. The project was designed as a proof of concept and was never released publicly, but it led to another, called Plenario, that allowed the public to access the data via an online portal.

The experience of building those tools has led to more practical applications. For example, one tool matches calls to the city’s municipal hotline complaining about rats with conditions that draw rats to a particular area, such as excessive moisture from a leaking pipe, or with an increase in complaints about garbage. This allows officials to proactively deploy sanitation crews to potential hotspots. It seems to be working: last year, resident requests for rodent control dropped by 15 per cent.
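A toy version of that matching step, using pandas on hypothetical 311 complaint extracts (Chicago's production tool is more sophisticated, and the column names here are invented):

```python
import pandas as pd

# Hypothetical 311 extracts; real data would come from the city's portal.
complaints = pd.DataFrame({
    "area": ["60614", "60614", "60622", "60622", "60622"],
    "week": [40, 40, 40, 41, 41],
    "type": ["rodent", "water_leak", "garbage", "rodent", "garbage"],
})

# Count complaint types per area-week.
pivot = (complaints
         .groupby(["area", "week", "type"]).size()
         .unstack(fill_value=0))

# Flag area-weeks where rodent calls coincide with rat-attracting
# conditions (leaking pipes, garbage complaints).
pivot["hotspot"] = (pivot.get("rodent", 0) > 0) & (
    (pivot.get("water_leak", 0) + pivot.get("garbage", 0)) > 0
)
print(pivot[pivot["hotspot"]])
```

Sanitation crews could then be dispatched to the flagged area-weeks before residents even call about rats.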

Some predictions are trickier to get right. Charlie Catlett, director of the Urban Center for Computation and Data in Chicago, is investigating an old axiom among city cops: that violent crime tends to spike when there’s a sudden jump in temperature. But he’s finding it difficult to test its validity in the absence of a plausible theory for why it might be the case. “For a lot of things about cities, we don’t have that underlying theory that tells us why cities work the way they do,” says Catlett.

Still, predictive modelling is maturing, as other cities succeed in using it to tackle urban ills….Such efforts can be a boon for cities, making them more productive, efficient and safe, says Rob Kitchin of Maynooth University in Ireland, who helped launch a real-time data site for Dublin last month called the Dublin Dashboard. But he cautions that there’s a limit to how far these systems can aid us. Knowing that a particular street corner is likely to be overrun with rats tomorrow doesn’t address what caused the infestation in the first place. “You might be able to create a sticking plaster or be able to manage it more efficiently, but you’re not going to be able to solve the deep structural problems….”

Traversing Digital Babel


New book by Alon Peled: “The computer systems of government agencies are notoriously complex. New technologies are piled on older technologies, creating layers that call to mind an archaeological dig. Obsolete programming languages and closed mainframe designs offer barriers to integration with other agency systems. Worldwide, these unwieldy systems waste billions of dollars, keep citizens from receiving services, and even—as seen in interoperability failures on 9/11 and during Hurricane Katrina—cost lives. In this book, Alon Peled offers a groundbreaking approach for enabling information sharing among public sector agencies: using selective incentives to “nudge” agencies to exchange information assets. Peled proposes the establishment of a Public Sector Information Exchange (PSIE), through which agencies would trade information.
After describing public sector information sharing failures and the advantages of incentivized sharing, Peled examines the U.S. Open Data program, and the gap between its rhetoric and results. He offers examples of creative public sector information sharing in the United States, Australia, Brazil, the Netherlands, and Iceland. Peled argues that information is a contested commodity, and draws lessons from the trade histories of other contested commodities—including cadavers for anatomical dissection in nineteenth-century Britain. He explains how agencies can exchange information as a contested commodity through a PSIE program tailored to an individual country’s needs, and he describes the legal, economic, and technical foundations of such a program. Touching on issues from data ownership to freedom of information, Peled offers pragmatic advice to politicians, bureaucrats, technologists, and citizens for revitalizing critical information flows.”