Civilized Discourse Construction Kit


Jeff Atwood at “Coding Horror”: “Forum software? Maybe. Let’s see, it’s 2013, has forum software advanced at all in the last ten years? I’m thinking no.
Forums are the dark matter of the web, the B-movies of the Internet. But they matter. To this day I regularly get excellent search results on forum pages for stuff I’m interested in. Rarely a day goes by that I don’t end up on some forum, somewhere, looking for some obscure bit of information. And more often than not, I find it there….

At Stack Exchange, one of the tricky things we learned about Q&A is that if your goal is to have an excellent signal to noise ratio, you must suppress discussion. Stack Exchange only supports the absolute minimum amount of discussion necessary to produce great questions and great answers. That’s why answers get constantly re-ordered by votes, that’s why comments have limited formatting and length and only a few display, and so forth….

Today we announce the launch of Discourse, a next-generation, 100% open source discussion platform built for the next decade of the Internet.


The goal of the company we formed, Civilized Discourse Construction Kit, Inc., is exactly that – to raise the standard of civilized discourse on the Internet through seeding it with better discussion software:

  • 100% open source and free to the world, now and forever.
  • Feels great to use. It’s fun.
  • Designed for hi-resolution tablets and advanced web browsers.
  • Built in moderation and governance systems that let discussion communities protect themselves from trolls, spammers, and bad actors – even without official moderators.”

6 Things You May Not Know About Open Data


GovTech: “On Friday, May 3, Palo Alto, Calif., CIO Jonathan Reichental …said that when it comes to making data more open, “The invisible becomes visible,” and he outlined six major points that identify and define what open data really is:

1.  It’s the liberation of people’s data

The public sector collects data that pertains to government, such as employee salaries, trees or street information, and government entities are therefore responsible for liberating that data so constituents can view it in an accessible format. Though this practice has become more commonplace in recent years, Reichental said government should have been doing this all along.

2.  Data has to be consumable by a machine

Piecing data together from a spreadsheet to a website, or containing it in a PDF, isn’t the easiest way to retrieve data. To make data more open, it needs to be in a machine-readable format so users don’t have to go through the additional trouble of finding or reading it.
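As a small illustration of the difference: a CSV release can be parsed directly by any program, while the same table locked inside a PDF cannot. A minimal sketch in Python (the dataset and column names are invented for illustration):

```python
import csv
import io

# A machine-readable release of, say, street-tree data: plain CSV that any
# program can parse directly, with no scraping or PDF extraction needed.
raw = """species,diameter_in,street
Ginkgo,14,Main St
Red Oak,22,Elm St
"""

# DictReader turns each row into a dict keyed by the header line.
trees = list(csv.DictReader(io.StringIO(raw)))

# Once parsed, the data is immediately queryable.
wide_trees = [t["species"] for t in trees if int(t["diameter_in"]) > 20]
```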

3.  Data has a derivative value

When data is made available to the public, people like app developers, architects and others are able to analyze the data. In some cases, data can be used in city planning to understand what’s happening at the city scale.

4.  It eliminates the middleman

For many states, public records laws require them to provide data when a public records request is made. But oftentimes, complying with such requests involves long and cumbersome processes. Lawyers and other government officials must process paperwork, and it can take weeks to complete a request. By having data readily available, these processes can be eliminated, thus also eliminating the middleman responsible for processing the requests. Direct access to the data saves time and resources.

5.  Data creates deeper accountability

Since government is expected to provide accessible data, it is being watched, making it more accountable for its actions — everything from emails to salaries to city council minutes can be viewed by the public.

6.  Open Data builds trust

When the community can see what’s going on in its government through access to data, Reichental said, individuals begin to build more trust in their government and feel less like the government is hiding information.”

Linking open data to augmented intelligence and the economy


The Open Data Institute’s Professor Nigel Shadbolt (@Nigel_Shadbolt), interviewed by @digiphile:  “…there are some clear learnings. One that I’ve been banging on about recently has been that yes, it really does matter to turn the dial so that governments have a presumption to publish non-personal public data. If you would publish it anyway, under a Freedom of Information request or whatever your local legislative equivalent is, why aren’t you publishing it anyway as open data? That, as a behavioral change, is a big one for many administrations where the existing workflow or culture is, “Okay, we collect it. We sit on it. We do some analysis on it, and we might give it away piecemeal if people ask for it.” We should construct the publication process from the outset to presume to publish openly. That’s still something that we are two or three years away from, working hard with the public sector to work out how to do it and how to do it properly.
We’ve also learned that in many jurisdictions, the amount of [open data] expertise within administrations and within departments is slight. There just isn’t really the skillset, in many cases, for people to know what it is to publish using technology platforms. So there’s a capability-building piece, too.
One of the most important things is it’s not enough to just put lots and lots of datasets out there. It would be great if the “presumption to publish” meant they were all out there anyway — but when you haven’t got any datasets out there and you’re thinking about where to start, the tough question is to say, “How can I publish data that matters to people?”
Which data matters is revealed by the download stats on these various UK, US and other [open data] sites: there’s a very, very distinctive power curve. Some datasets are very, very heavily utilized. You suspect they have high utility to many, many people. Many of the others, if they can be found at all, aren’t being used particularly much. That’s not to say that, under that long tail, there aren’t large amounts of use. A particularly arcane open dataset may have exquisite use to a small number of people.
The real truth is that it’s easy to republish your national statistics. It’s much harder to do a serious job on publishing your spending data in detail, publishing police and crime data, publishing educational data, publishing actual overall health performance indicators. These are tough datasets to release. As people are fond of saying, it holds politicians’ feet to the fire. It’s easy to build a site that’s full of stuff — but does the stuff actually matter? And does it have any economic utility?”

An API for "We the People"


The White House Blog: “We can’t talk about We the People without getting into the numbers — more than 8 million users, more than 200,000 petitions, more than 13 million signatures. The sheer volume of participation is, to us, a sign of success.
And there’s a lot we can learn from a set of data that rich and complex, but we shouldn’t be the only people drawing from its lessons.
So starting today, we’re making it easier for anyone to do their own analysis or build their own apps on top of the We the People platform. We’re introducing the first version of our API, and we’re inviting you to use it.
Get started here: petitions.whitehouse.gov/developers
This API provides read-only access to data on all petitions that passed the 150 signature threshold required to become publicly-available on the We the People site. For those who don’t need real-time data, we plan to add the option of a bulk data download in the near future. Until that’s ready, an incomplete sample data set is available for download here.”
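The developers page documents the actual endpoints; as a rough sketch of what a read-only client might look like, the following builds a query URL and filters a parsed response. The route, parameter names, and the `signatureCount` field here are assumptions for illustration, not details confirmed by the announcement:

```python
import json
from urllib.parse import urlencode

# Assumed base path for the v1 API; see petitions.whitehouse.gov/developers
# for the authoritative routes and parameters.
API_BASE = "https://petitions.whitehouse.gov/api/v1"

def petitions_url(limit=10, offset=0):
    """Build a read-only query URL for a hypothetical petitions endpoint."""
    return f"{API_BASE}/petitions.json?" + urlencode({"limit": limit, "offset": offset})

# Parsing a hypothetical response body: assume the API returns JSON with a
# "results" list of petition objects.
sample = '{"results": [{"title": "Example petition", "signatureCount": 152}]}'
petitions = json.loads(sample)["results"]

# Only petitions past the 150-signature threshold appear publicly at all,
# so every returned record should already satisfy this filter.
public = [p for p in petitions if p["signatureCount"] >= 150]
```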

Cities and Data


The Economist: “Many cities around the country find themselves in a similar position: they are accumulating data faster than they know what to do with. One approach is to give them to the public. For example, San Francisco, New York, Philadelphia, Boston and Chicago are or soon will be sharing the grades that health inspectors give to restaurants with an online restaurant directory.
Another way of doing it is simply to publish the raw data and hope that others will figure out how to use them. This has been particularly successful in Chicago, where computer nerds have used open data to create many entirely new services. Applications are now available that show which streets have been cleared after a snowfall, what time a bus or train will arrive and how requests to fix potholes are progressing.
New York and Chicago are bringing together data from departments across their respective cities in order to improve decision-making. When a city holds a parade it can combine data on street closures, bus routes, weather patterns, rubbish trucks and emergency calls in real time.”

Open Data and Civil Society


Nick Hurd, UK Minister for Civil Society, on the potential of open data for the third sector in The Guardian:

“Part of the value of civil society is holding power to account, and if this can be underpinned by good quality data, we will have a very powerful tool indeed….The UK is absolutely at the vanguard of the global open data movement, and NGOs have a great sense that this is something they want to play a part in. There is potential to help them do more of what they do, and to do it better, but they’re going to need a lot of help in terms of information and access to events where they can exchange ideas and best practice.”

Also in the article: “The competitive marketplace and bilateral nature of funding awards make this issue perhaps even more significant in the charity sector, and it is in changing attitudes and encouraging this warts-and-all approach that movement leadership bodies such as the Open Data Institute (ODI) will play their biggest role….Joining the ODI in driving and overseeing wider adoption of these practices is the Open Knowledge Foundation (OKFN). One of its first projects was a partnership with an organisation called Publish What You Fund, the aim of which was to release data on the breakdown of funding to sectors and departments in Uganda according to source – government or aid.
…Open data can often take the form of complex databases that need to be interrogated by a data specialist, and many charities simply do not have these technical resources sitting untapped. OKFN is foremost among a number of organisations looking to bridge this gap by training members of the public in data mining and analysis techniques….
“We’re all familiar with the phrase ‘knowledge is power’, and in this case knowledge means insight gained from this newly available data. But data doesn’t turn into insight or knowledge magically. It takes people, it takes skills, it takes tools to become knowledge, data and change.
“We set up the School of Data in partnership with Peer 2 Peer University just over a year and a half ago with the aim of enabling citizens to carry out this process, and what we really want to do is empower charities to use data in the same way”, said Pollock.”

The Value of Open Data – Don’t Measure Growth, Measure Destruction


David Eaves: “…And that is my main point. The real impact of open data will likely not be in the economic wealth it generates, but rather in its destructive power. I think the real impact of open data is going to be in the value it destroys and so in the capital it frees up to do other things. Much like Red Hat is fraction of the size of Microsoft, Open Data is going to enable new players to disrupt established data players.

What do I mean by this?
Take SeeClickFix. Here is a company that, leveraging the Open311 standard, is able to provide many cities with a 311 solution that works pretty much out of the box. 20 years ago, this was a $10 million+ problem for a major city to solve, and wasn’t even something a small city could consider adopting – it was just prohibitively expensive. Today, SeeClickFix takes what was a 7 or 8 digit problem, and makes it a 5 or 6 digit problem. Indeed, I suspect SeeClickFix almost works better in a small to mid-sized government that doesn’t have complex work order software and so can just use SeeClickFix as a general solution. For this part of the market, it has crushed the cost out of implementing a solution.
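The Open311 GeoReport v2 standard that SeeClickFix leverages defines a small HTTP interface for service requests: a city lists its service types at `GET /services.json`, and clients file reports by POSTing a form-encoded body to `/requests.json`. A minimal sketch of assembling such a body (the base URL and service code are hypothetical):

```python
from urllib.parse import urlencode

# Hypothetical endpoint; each Open311-compliant city hosts its own base URL.
BASE = "https://city.example.gov/open311/v2"

def new_request_body(service_code, lat, lon, description):
    """Form-encoded body for POST /requests.json (Open311 GeoReport v2).

    service_code would normally come from the city's GET /services.json list.
    """
    return urlencode({
        "service_code": service_code,
        "lat": f"{lat:.6f}",
        "long": f"{lon:.6f}",
        "description": description,
    })

body = new_request_body("POTHOLE", 41.878113, -87.629799, "Large pothole in curb lane")
```

Because the interface is standardized, one vendor implementation can serve many cities out of the box, which is exactly what drives the cost collapse described above.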
Another example, and the one I’m most excited about: look at CKAN and Socrata. Most people believe these are open data portal solutions. That is a mistake. These are data management companies that happen to have made “sharing” (or “openness”) a core design feature. You know who does data management? SAP. What Socrata and CKAN offer is a way to store, access, share and engage with data previously gathered and held by companies like SAP at a fraction of the cost. A SAP implementation is a 7 or 8 (or god forbid, 9) digit problem. And many city IT managers complain that doing anything with data stored in SAP takes time and money. CKAN and Socrata may have only a fraction of the features, but they are dead simple to use, and make it dead simple to extract and share data. More importantly, they make these costly 7 and 8 digit problems potentially become cheap 5 or 6 digit problems.
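“Dead simple to extract” is concrete in CKAN’s case: every CKAN portal exposes the same read-only Action API, so a single client works against any of them. A minimal sketch of building a dataset-search URL (catalog.data.gov is used here only as an example of a public CKAN instance):

```python
from urllib.parse import urlencode

def package_search_url(base, query, rows=5):
    """URL for CKAN's package_search action: a read-only dataset search.

    The same /api/3/action/... path works on any CKAN-powered portal.
    """
    return f"{base}/api/3/action/package_search?" + urlencode({"q": query, "rows": rows})

url = package_search_url("https://catalog.data.gov", "restaurant inspections")
# A successful response is JSON shaped like:
# {"success": true, "result": {"count": ..., "results": [...]}}
```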
On the analysis side, again, I do hope there will be big wins – but what I really think open data is going to do is lower the costs of creating lots of small wins – crazy numbers of tiny efficiencies….
Don’t look for the big bang, and don’t measure the growth in spending or new jobs. Rather let’s try to measure the destruction and cumulative impact of a thousand tiny wins. Cause that is where I think we’ll see it most.”

Open Data for Agriculture


USDA News Release: “Agriculture Secretary Tom Vilsack, along with Bill Gates and U.S. Chief Technology Officer Todd Park, today kicked off a two-day international open data conference, saying that data “is among the most important commodities in agriculture” and sharing it openly increases its value.
Secretary Vilsack, as head of the U.S. Government delegation to the conference, announced the launch of a new “virtual community” as part of a suite of actions, including the release of new data, that the United States is taking to give farmers and ranchers, scientists, policy makers and other members of the public easy access to publicly funded data to help increase food security and nutrition.
“The digital revolution fueled by open data is starting to do for the modern world of agriculture what the industrial revolution did for agricultural productivity over the past century,” said Vilsack. “Open access to data will help combat food insecurity today while laying the groundwork for a sustainable agricultural system to feed a population that is projected to be more than nine billion by 2050.”
The virtual Food, Agriculture, and Rural data community launched today on Data.gov, the U.S. Government’s data sharing website, to catalogue America’s publicly available agricultural data and increase the ability of the public to find, download, and use datasets that are generated and held by the Federal Government. The data community features a collection of more than 300 newly cataloged datasets, databases, and raw data sources related to food, agriculture, and rural issues from agencies across the U.S. Government. In addition to the data catalog, the virtual community shares a number of applications, maps and tools designed to help farmers, scientists and policymakers improve global food security and nutrition….
The conference and the U.S. actions supporting open agricultural data fulfill the Open Data for Agriculture commitment made as part of the New Alliance for Food Security and Nutrition, which was launched by President Obama and G-8 partners at the 2012 G-8 Leaders Summit last year at Camp David, Maryland.”


Better Cities Competition


Announcement: Do you want to make our cities of the future better? Want to help improve quality of life in your home, your work and your public life? Have an idea how? Capture it in a short video and be in with a chance to win one of our amazing prizes!
As part of the Open Innovation 2.0: Sustainable Economy & Society collaboration, Intel Labs Europe, Dublin City Council, Trinity College Dublin and the European Commission Open Innovation and Strategy Policy Group are delighted to announce that the 2013 Better Cities competition is now open.
The theme of the competition is how to make our cities more socially and economically sustainable through the use of open data and information technology. Particular focus should be given to how citizens can engage and contribute to the innovation process.

Open Data Research Announced


WWW Foundation Press Release:  “Speaking at an Open Government Partnership reception last night in London, Sir Tim Berners-Lee, founder of the World Wide Web Foundation (Web Foundation) and inventor of the Web, unveiled details of the first ever in-depth study into how the power of open data could be harnessed to tackle social challenges in the developing world. The 14-country study is funded by Canada’s International Development Research Centre (IDRC) and will be overseen by the Web Foundation’s world-leading open data experts. An interim progress update will be made at an October 2013 meeting of the Open Government Partnership, with in-depth results expected in 2014…

Sir Tim Berners-Lee, founder of the World Wide Web Foundation and inventor of the Web said:

“Open Data, accessed via a free and open Web, has the potential to create a better world. However, best practice in London or New York is not necessarily best practice in Lima or Nairobi.  The Web Foundation’s research will help to ensure that Open Data initiatives in the developing world will unlock real improvements in citizens’ day-to-day lives.”

José M. Alonso, program manager at the World Wide Web Foundation, added:

“Through this study, the Web Foundation hopes not only to contribute to global understanding of open data, but also to cultivate the ability of developing world researchers and development workers to understand and apply open data for themselves.”

Further details on the project, including case study outlines are available here: http://oddc.opendataresearch.org/