Mark Hall: “Chances are that when you think about the word government, it is with a negative connotation. Your less-than-stellar opinion of government may stem from anything from Washington’s dirty politics to the long lines at your local DMV. Regardless of the reason, local, state and national politics have frequently garnered a bad reputation. People feel like governments aren’t working for them. We have limited information, visibility and insight into what’s going on and why. Yes, the data is public information, but it’s difficult to access and sift through.
Good news. Things are changing fast.
Innovative startups are emerging and they are changing the way we access government information at all levels.
Here are three tech startups that are taking a unique approach to opening up government data:
1. OpenGov is a Mountain View-based software company that enables government officials and local residents to easily parse through the city’s financial data.
Founded by a team with extensive technology and finance experience, this startup has already signed up some of the largest cities to join the movement, including the City of Los Angeles. OpenGov’s approach pairs data with good design in a manner that makes it easy to use. Historically, information like expenditures of public funds sat in a silo within the mayor’s or city manager’s office, diminishing the accountability of public employees. Imagine you are a citizen interested in seeing how much your city spent on a particular matter.
Now you can find out within just a few clicks.
This data is always of great importance but could also become increasingly critical during events like local elections. This level of openness and accessibility to data will be game-changing.
2. FiscalNote is a one-year-old startup that uses analytical signals drawn from government data to map legislation and predict outcomes.
Headquartered in Washington, D.C., the company has developed a search layer and unique algorithm that make tracking legislative data extremely easy. If you are an organization with a vested interest in specific legislative bills, FiscalNote’s tools can give you insight into their progress and likelihood of being passed or held up. Want to know if your local representative favors a bill that could hurt your industry? Find out early and take the steps necessary to minimize the impact. Large corporations and special interest groups have traditionally held lobbying power with elected officials. This technology is important because small businesses, nonprofits and organizations now have an additional tool to see a changing legislative landscape in ways that were previously unimaginable.
3. Civic Insight is a San Francisco startup that allows citizens and local government officials to easily access data that previously required a drive down to city hall. Building permits, code enforcements, upcoming government projects and construction data are now openly available within a few clicks.
Civic Insight maps various projects in your community and enables you to see all the projects with the corresponding start and completion dates, along with department contacts.
Accountability for public planning is no longer confined to city workers in the back office. Responsibility is made clear. The startup also pushes underutilized city resources like empty storefronts and abandoned buildings to the forefront in an effort to drive action, whether by residents or government officials.
So What’s Next?
While these three startups are using data to push government transparency in the right direction, more work is needed…”
'Big Data' Will Change How You Play, See the Doctor, Even Eat
We’re entering an age of personal big data, and its impact on our lives will surpass that of the Internet. Data will answer questions we could never before answer with certainty—everyday questions like whether that dress actually makes you look fat, or profound questions about precisely how long you will live.
Every 20 years or so, a powerful technology moves from the realm of backroom expertise and into the hands of the masses. In the late-1970s, computing made that transition—from mainframes in glass-enclosed rooms to personal computers on desks. In the late 1990s, the first web browsers made networks, which had been for science labs and the military, accessible to any of us, giving birth to the modern Internet.
Each transition touched off an explosion of innovation and reshaped work and leisure. In 1975, 50,000 PCs were in use worldwide. Twenty years later: 225 million. The number of Internet users in 1995 hit 16 million. Today it’s more than 3 billion. In much of the world, it’s hard to imagine life without constant access to both computing and networks.
The 2010s will be the coming-out party for data. Gathering, accessing and gleaning insights from vast and deep data has been a capability locked inside enterprises long enough. Cloud computing and mobile devices now make it possible to stand in a bathroom line at a baseball game while tapping into massive computing power and databases. On the other end, connected devices such as the Nest thermostat or Fitbit health monitor and apps on smartphones increasingly collect new kinds of information about everyday personal actions and habits, turning it into data about ourselves.
More than 80 percent of data today is unstructured: tangles of YouTube videos, news stories, academic papers, social network comments. Unstructured data has been almost impossible to search for, analyze and mix with other data. A new generation of computers—cognitive computing systems that learn from data—will read tweets or e-books or watch video, and comprehend their content. Somewhat like brains, these systems can link diverse bits of data to come up with real answers, not just search results.
Such systems can work in natural language. The progenitor is the IBM Watson computer that won on Jeopardy in 2011. Next-generation Watsons will work like a super-powered Google. (Google today is a data-searching wimp compared with what’s coming.)
Sports offers a glimpse into the data age. Last season the NBA installed in every arena technology that can “watch” a game and record, in 48 minutes of action, more than 4 million data points about every movement and shot. That alone could yield new insights for NBA coaches, such as which group of five players most efficiently passes the ball around….
Think again about life before personal computing and the Internet. Even if someone told you that you’d eventually carry a computer in your pocket that was always connected to global networks, you would’ve had a hard time imagining what that meant—imagining WhatsApp, Siri, Pandora, Uber, Evernote, Tinder.
As data about everything becomes ubiquitous and democratized, layered on top of computing and networks, it will touch off the most spectacular technology explosion yet. We can see the early stages now. “Big data” doesn’t even begin to describe the enormity of what’s coming next.”
How Thousands Of Dutch Civil Servants Built A Virtual 'Government Square' For Online Collaboration
Federico Guerrini at Forbes: “Democracy needs a reboot, or as the founders of DemocracyOS, an open source platform for political debate, say, “a serious upgrade.” They are not alone in trying to change the way citizens and governments communicate with each other. Not long ago, I covered on this blog a Greek platform, VouliWatch, which aims at boosting civic engagement following the model of other similar initiatives in countries like Germany, France and Austria, all running thanks to a software called Parliament Watch.
Other decision-making tools used by activists and organizations trying to reduce the distance between the people and their representatives include Liquid Feedback and Airesis. But the quest for disintermediation isn’t only about the relationship between governments and citizens: it’s changing the way public organisations work internally as well. Civil servants are starting to develop and use their own internal “social networks” to exchange ideas, discuss issues and collaborate on projects.
One such experiment is underway in the Netherlands: thousands of civil servants from across government organizations have built their own “intranet” using Pleio (“government square” in Dutch), a platform that runs on the open source networking engine Elgg.
It all started in 2010, thanks to the work of four founders: Davied van Berlo, Harrie Custers, Wim Essers and Marcel Ziemerink. Growth has been steady, and Pleio can now count on some 75,000 users spread across about 800 subsites. The nice thing about the platform is that it is modular: subscribers can collaborate in a group and then start a subgroup to go into more depth with a smaller team. To learn a little more about this unique experience, I reached out to van Berlo, who kindly answered a few questions. Check the interview below.
Where did the Pleio idea come from? Were you inspired by other experiences?
The idea came mainly from the developments around us: the whole web 2.0 movement at the time. This has shown us the power of platforms to connect people, bring them together and let them cooperate. I noticed that civil servants were looking for ways of collaborating across organisational borders and many were using the new online tools. That’s why I started the Civil Servant 2.0 network, so they could exchange ideas and experiences in this new way of working.
However, these tools are not always the ideal solution. They’re commercial for one, which can get in the way of the public goals we work for. They’re often American, where other laws and practices apply. You can’t change them or add to them. Usually you have to get another tool (and login) for different functionalities. And they were outright forbidden by some government agencies. I noticed there was a need for a platform where different tools were integrated, where people from different organisations and outside government could work together and where all information would remain in the Netherlands and in the hands of the original owner. Since there was no such platform we started one of our own….”
Chief Executive of Nesta on the Future of Government Innovation
Interview between Rahim Kanani and Geoff Mulgan, CEO of NESTA and member of the MacArthur Research Network on Opening Governance: “Our aspiration is to become a global center of expertise on all kinds of innovation, from how to back creative business start-ups and how to shape innovation tools such as challenge prizes, to helping governments act as catalysts for new solutions,” explained Geoff Mulgan, chief executive of Nesta, the UK’s innovation foundation. In an interview with Mulgan, we discussed their new report, published in partnership with Bloomberg Philanthropies, which highlights 20 of the world’s top innovation teams in government. Mulgan and I also discussed the founding and evolution of Nesta over the past few years, and leadership lessons from his time inside and outside government.
Rahim Kanani: When we talk about ‘innovations in government’, isn’t that an oxymoron?
Geoff Mulgan: Governments have always innovated. The Internet and World Wide Web both originated in public organizations, and governments are constantly developing new ideas, from public health systems to carbon trading schemes, online tax filing to high speed rail networks. But they’re much less systematic at innovation than the best in business and science. There are very few job roles, especially at senior levels, few budgets, and few teams or units. So although there are plenty of creative individuals in the public sector, they succeed despite, not because of the systems around them. Risk-taking is punished not rewarded. Over the last century, by contrast, the best businesses have learned how to run R&D departments, product development teams, open innovation processes and reasonably sophisticated ways of tracking investments and returns.
Kanani: This new report, published in partnership with Bloomberg Philanthropies, highlights 20 of the world’s most effective innovation teams in government working to address a range of issues, from reducing murder rates to promoting economic growth. Before I get to the results, how did this project come about, and why is it so important?
Mulgan: If you fail to generate new ideas, test them and scale the ones that work, it’s inevitable that productivity will stagnate and governments will fail to keep up with public expectations, particularly when waves of new technology—from smart phones and the cloud to big data—are opening up dramatic new possibilities. Mayor Bloomberg has been a leading advocate for innovation in the public sector, and in New York he showed the virtues of energetic experiment, combined with rigorous measurement of results. In the UK, organizations like Nesta have approached innovation in a very similar way, so it seemed timely to collaborate on a study of the state of the field, particularly since we were regularly being approached by governments wanting to set up new teams and asking for guidance.
Kanani: Where are some of the most effective innovation teams working on these issues, and how did you find them?
Mulgan: In our own work at Nesta, we’ve regularly sought out the best innovation teams that we could learn from, and this study made it possible to do that more systematically, focusing in particular on the teams within national and city governments. They vary greatly, but all the best ones are achieving impact with relatively slim resources. Some are based in central governments, like Mindlab in Denmark, which has pioneered the use of design methods to reshape government services, from small business licensing to welfare. SITRA in Finland has been going for decades as a public technology agency, and more recently has switched its attention to innovation in public services, for example by providing mobile tools to help patients manage their own healthcare. In the city of Seoul, the Mayor set up an innovation team to accelerate the adoption of ‘sharing’ tools, so that people could share things like cars, freeing money for other things. In South Australia the government set up an innovation agency that has been pioneering radical ways of helping troubled families, mobilizing families to help other families.
Kanani: What surprised you the most about the outcomes of this research?
Mulgan: Perhaps the biggest surprise has been the speed with which this idea is spreading. Since we started the research, we’ve come across new teams being created in dozens of countries, from Canada and New Zealand to Cambodia and Chile. China has set up a mobile technology lab for city governments. Mexico City and many others have set up labs focused on creative uses of open data. A batch of cities across the US supported by Bloomberg Philanthropies—from Memphis and New Orleans to Boston and Philadelphia—are now showing impressive results and persuading others to copy them.
Selected Readings on Sentiment Analysis
The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of sentiment analysis was originally published in 2014.
Sentiment Analysis is a field of Computer Science that uses techniques from natural language processing, computational linguistics, and machine learning to predict subjective meaning from text. The term opinion mining is often used interchangeably with Sentiment Analysis, although it is technically a subfield focusing on the extraction of opinions (the umbrella under which sentiment, evaluation, appraisal, attitude, and emotion all lie).
The rise of Web 2.0 and increased information flow has led to an increase in interest towards Sentiment Analysis — especially as applied to social networks and media. Events causing large spikes in media — such as the 2012 Presidential Election Debates — are especially ripe for analysis. Such analyses raise a variety of implications for the future of crowd participation, elections, and governance.
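At its simplest, sentiment analysis can be reduced to counting positive and negative cue words. The sketch below is an invented, minimal illustration of that lexicon-based idea — the word lists and function are hypothetical and bear no relation to the systems discussed in the readings that follow:

```python
# Toy lexicon-based sentiment scorer -- a minimal, hypothetical illustration
# of the simplest end of the field; real systems use far richer features.
POSITIVE = {"good", "great", "support", "improve", "benefit"}
NEGATIVE = {"bad", "oppose", "harm", "fail", "worse"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: positive minus negative cue words,
    normalized by the total number of cue words found."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("The new policy is a great improvement"))  # 1.0
print(sentiment_score("I oppose this bad proposal"))             # -1.0
```

Approaches like those surveyed below improve on this baseline by handling negation, sarcasm, context, and domain-specific vocabulary.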
Selected Reading List (in alphabetical order)
- Choi, Tan, Lee, Danescu-Niculescu-Mizil, Spindel — Hedge Detection as a Lens on Framing in the GMO Debates: A Position Paper — a position paper suggesting hedge detection as a way to study whether adopting a “scientific” tone signals a position in the debate on GMOs.
- Christina Michael, Francesca Toni, and Krysia Broda — Sentiment Analysis for Debates — a paper looking at several techniques and applications of Sentiment Analysis on online debates.
- Akiko Murakami, Rudy Raymond — Support or Oppose? Classifying Positions in Online Debates from Reply Activities and Opinion Expressions — a paper seeking to identify the general positions of users in online debates by exploiting local information in their remarks within the debate, and using Sentiment Analysis on the text.
- Bo Pang, Lillian Lee — Opinion Mining & Sentiment Analysis — a general survey on Sentiment Analysis and approaches, with examples of applications.
- Ranade, Gupta, Varma, Mamidi — Online debate summarization using topic directed sentiment analysis — a paper aiming to summarize online debates by extracting highly topic relevant and sentiment rich sentences.
- Jodi Schneider — Automated argumentation mining to the rescue? Envisioning argumentation and decision-making support for debates in open online collaboration communities — a paper describing a new possible domain for argumentation mining: debates in open online collaboration communities.
Annotated Selected Reading List (in alphabetical order)
Choi, Eunsol et al. “Hedge detection as a lens on framing in the GMO debates: a position paper.” Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics 13 Jul. 2012: 70-79. http://bit.ly/1wweftP
- Understanding the ways in which participants in public discussions frame their arguments is important for understanding how public opinion is formed. This paper adopts the position that it is time for more computationally-oriented research on problems involving framing. In the interests of furthering that goal, the authors propose the following question: In the controversy regarding the use of genetically-modified organisms (GMOs) in agriculture, do pro- and anti-GMO articles differ in whether they choose to adopt a more “scientific” tone?
- Prior work on the rhetoric and sociology of science suggests that hedging may distinguish popular-science text from text written by professional scientists for their colleagues. The paper proposes a detailed approach to studying whether hedge detection can be used to understand scientific framing in the GMO debates, and provides corpora to facilitate this study. Some of the preliminary analyses suggest that hedges occur less frequently in scientific discourse than in popular text, a finding that contradicts prior assertions in the literature.
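The core intuition — that hedging words mark a “scientific” register — can be sketched with a naive cue-word matcher. This is a hypothetical illustration, not the authors’ method, and the cue list below is an assumption chosen for the example:

```python
# Naive hedge-cue density -- a hypothetical sketch of the intuition behind
# hedge detection, NOT the classifier from Choi et al.; the cue list is
# illustrative only.
HEDGE_CUES = {"may", "might", "could", "suggest", "appear", "possibly", "perhaps"}

def hedge_density(sentence: str) -> float:
    """Fraction of tokens in the sentence that are hedge cues."""
    tokens = [t.strip(".,").lower() for t in sentence.split()]
    if not tokens:
        return 0.0
    return sum(t in HEDGE_CUES for t in tokens) / len(tokens)

scientific = "These results suggest the effect may be smaller than reported."
popular = "GMOs are dangerous and everyone knows it."
print(hedge_density(scientific) > hedge_density(popular))  # True
```

Real hedge detectors must also disambiguate cue words that are not hedges in context (e.g. “may” as a month), which is part of what makes the task non-trivial.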
Michael, Christina, Francesca Toni, and Krysia Broda. “Sentiment analysis for debates.” (Unpublished MSc thesis). Department of Computing, Imperial College London (2013). http://bit.ly/Wi86Xv
- This project aims to expand on existing solutions used for automatic sentiment analysis on text in order to capture support/opposition and agreement/disagreement in debates. In addition, it looks at visualizing the classification results for enhancing the ease of understanding the debates and for showing underlying trends. Finally, it evaluates proposed techniques on an existing debate system for social networking.
Murakami, Akiko, and Rudy Raymond. “Support or oppose?: classifying positions in online debates from reply activities and opinion expressions.” Proceedings of the 23rd International Conference on Computational Linguistics: Posters 23 Aug. 2010: 869-875. https://bit.ly/2Eicfnm
- In this paper, the authors propose a method for the task of identifying the general positions of users in online debates, i.e., support or oppose the main topic of an online debate, by exploiting local information in their remarks within the debate. An online debate is a forum where each user posts an opinion on a particular topic while other users state their positions by posting their remarks within the debate. The supporting or opposing remarks are made by directly replying to the opinion, or indirectly to other remarks (to express local agreement or disagreement), which makes the task of identifying users’ general positions difficult.
- A prior study has shown that a link-based method, which completely ignores the content of the remarks, can achieve higher accuracy for the identification task than methods based solely on the contents of the remarks. In this paper, it is shown that utilizing the textual content of the remarks into the link-based method can yield higher accuracy in the identification task.
Pang, Bo, and Lillian Lee. “Opinion mining and sentiment analysis.” Foundations and trends in information retrieval 2.1-2 (2008): 1-135. http://bit.ly/UaCBwD
- This survey covers techniques and approaches that promise to directly enable opinion-oriented information-seeking systems. Its focus is on methods that seek to address the new challenges raised by sentiment-aware applications, as compared to those that are already present in more traditional fact-based analysis. It includes material on summarization of evaluative text and on broader issues regarding privacy, manipulation, and economic impact that the development of opinion-oriented information-access services gives rise to. To facilitate future work, a discussion of available resources, benchmark datasets, and evaluation campaigns is also provided.
Ranade, Sarvesh et al. “Online debate summarization using topic directed sentiment analysis.” Proceedings of the Second International Workshop on Issues of Sentiment Discovery and Opinion Mining 11 Aug. 2013: 7. http://bit.ly/1nbKtLn
- Social networking sites provide users a virtual community interaction platform to share their thoughts, life experiences and opinions. Online debate forums are one such platform, where people can take a stance and argue in support of or opposition to debate topics. An important feature of such forums is that they are dynamic and grow rapidly. In such situations, effective opinion summarization approaches are needed so that readers need not go through the entire debate.
- This paper aims to summarize online debates by extracting highly topic-relevant and sentiment-rich sentences. The proposed approach takes into account topic-relevant, document-relevant and sentiment-based features to capture topic-opinionated sentences. ROUGE (Recall-Oriented Understudy for Gisting Evaluation, a set of metrics and a software package for comparing an automatically produced summary or translation against human-produced ones) scores are used to evaluate the system. The system significantly outperforms several baseline systems and shows improvement over the state-of-the-art opinion summarization system. The results verify that topic-directed sentiment features are most important for generating effective debate summaries.
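ROUGE-1 recall, the simplest member of the ROUGE family, is just the fraction of reference-summary unigrams that the system summary recovers. A minimal sketch (the example sentences are made up):

```python
# Minimal ROUGE-1 recall -- the simplest ROUGE variant, shown for intuition;
# the full ROUGE package adds n-grams, longest common subsequence, stemming, etc.
from collections import Counter

def rouge1_recall(system: str, reference: str) -> float:
    """Overlapping unigram count divided by unigrams in the reference."""
    sys_counts = Counter(system.lower().split())
    ref_counts = Counter(reference.lower().split())
    overlap = sum(min(sys_counts[w], ref_counts[w]) for w in ref_counts)
    return overlap / sum(ref_counts.values())

reference = "the debate focused on rising energy costs"
system = "the debate was about energy costs"
print(round(rouge1_recall(system, reference), 3))  # 0.571 (4 of 7 words recovered)
```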
Schneider, Jodi. “Automated argumentation mining to the rescue? Envisioning argumentation and decision-making support for debates in open online collaboration communities.” http://bit.ly/1mi7ztx
- Argumentation mining, a relatively new area of discourse analysis, involves automatically identifying and structuring arguments. Following a basic introduction to argumentation, the authors describe a new possible domain for argumentation mining: debates in open online collaboration communities.
- Based on their experience with manual annotation of arguments in debates, the authors propose argumentation mining as the basis for three kinds of support tools: for authoring more persuasive arguments, finding weaknesses in others’ arguments, and summarizing a debate’s overall conclusions.
What ‘urban physics’ could tell us about how cities work
Ruth Graham at Boston Globe: “What does a city look like? If you’re walking down the street, perhaps it looks like people and storefronts. Viewed from higher up, patterns begin to emerge: A three-dimensional grid of buildings divided by alleys, streets, and sidewalks, nearly flat in some places and scraping the sky in others. Pull back far enough, and the city starts to look like something else entirely: a cluster of molecules.
At least, that’s what it looks like to Franz-Josef Ulm, an engineering professor at the Massachusetts Institute of Technology. Ulm has built a career as an expert on the properties, patterns, and environmental potential of concrete. Taking a coffee break at MIT’s Stata Center late one afternoon, he and a colleague were looking at a large aerial photograph of a city when they had a “eureka” moment: “Hey, doesn’t that look like a molecular structure?”
With colleagues, Ulm began analyzing cities the way you’d analyze a material, looking at factors such as the arrangement of buildings, each building’s center of mass, and how they’re ordered around each other. They concluded that cities could be grouped into categories: Boston’s structure, for example, looks a lot like an “amorphous liquid.” Seattle is another liquid, and so is Los Angeles. Chicago, which was designed on a grid, looks like glass, he says; New York resembles a highly ordered crystal.
So far Ulm and his fellow researchers have presented their work at conferences, but it has not yet been published in a scientific journal. If the analogy does hold up, Ulm hopes it will give planners a new tool to understand a city’s structure, its energy use, and possibly even its resilience to climate change.
Ulm calls his new work “urban physics,” and it places him among a number of scientists now using the tools of physics to analyze the practically infinite amount of data that cities produce in the 21st century, from population density to the number of patents produced to energy bill charges. Physicist Marta González, Ulm’s colleague at MIT, recently used cellphone data to analyze traffic patterns in Boston with unprecedented complexity, for example. In 2012, a theoretical physicist was named founding director of New York University’s Center for Urban Science and Progress, whose research is devoted to “urban informatics”; one of its first projects is helping to create the country’s first “quantified community” on the West Side of Manhattan.
In Ulm’s case, he and his colleagues have used freely available data, including street layouts and building coordinates, to plot the structures of 12 cities and analogize them to existing complex materials. In physics, an “order parameter” is a number between 0 and 1 that describes how atoms are arranged in relationship to other atoms nearby; Ulm applies this idea to city layouts. Boston, he says, has an “order parameter” of .52, equivalent to that of a liquid like water. This means its structure is notably disordered, which may have something to do with how it developed. “Boston has grown organically,” he said. “The city, in the way its buildings are organized today, carries that information from its historical evolution.”…
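The “order parameter” idea can be illustrated with a standard orientational order parameter from condensed-matter physics. The sketch below applies that generic construction to made-up building orientations; it is not Ulm’s (unpublished) metric, and the data is invented:

```python
# Generic nematic-style orientational order parameter -- a standard physics
# construction applied to toy data, NOT Ulm's unpublished city metric.
import cmath
import random

def orientational_order(angles_deg):
    """Order parameter in [0, 1]: 1.0 when all orientations are aligned
    (a 'crystal'-like grid), near 0 for a disordered, 'liquid'-like layout.
    Angles are taken modulo 180 degrees via the factor of 2 in the phase."""
    phases = [cmath.exp(2j * cmath.pi * a / 180.0) for a in angles_deg]
    return abs(sum(phases)) / len(phases)

# Hypothetical inputs: one aligned city block vs. randomly oriented buildings.
aligned = [30.0] * 100
random.seed(1)
disordered = [random.uniform(0, 180) for _ in range(100)]
print(orientational_order(aligned))     # ~1.0 ("crystal")
print(orientational_order(disordered))  # near 0 ("liquid")
```

On this scale, a value of 0.52 like Boston’s sits well below a grid city’s, consistent with the “amorphous liquid” description.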
When Technologies Combine, Amazing Innovation Happens
FastCoexist: “Innovation occurs both within fields, and in combinations of fields. It’s perhaps the latter that ends up being most groundbreaking. When people of disparate expertise, mindset and ideas work together, new possibilities pop up.
In a new report, the Institute for the Future argues that “technological change is increasingly driven by the combination and recombination of foundational elements.” So, when we think about the future, we need to consider not just fundamental advances (say, in computing, materials, bioscience) but also at the intersection of these technologies.
The report uses combination-analysis in the form of a map. IFTF selects 13 “territories”–what it calls “frontiers of innovation”–and then examines the linkages and overlaps. The result is 20 “combinational forecasts.” “These are the big stories, hot spots that will shape the landscape of technology in the coming decade,” the report explains. “Each combinatorial forecast emerges from the intersection of multiple territories.”…
Quantified Experiences
Advances in brain-imaging techniques will bring new transparency to our thoughts and feelings. “Assigning precise measurements to feelings like pain through neurofeedback and other techniques could allow for comparison, modulation, and manipulation of these feelings,” the report says. “Direct measurement of our once-private thoughts and feelings can help us understand other people’s experience but will also present challenges regarding privacy and definition of norms.”…
Code Is The Law
The law enforcement of the future may increasingly rely on sensors and programmable devices. “Governance is shifting from reliance on individual responsibility and human policing toward a system of embedded protocols and automatic rule enforcement,” the report says. That in turn means greater power for programmers who are effectively laying down the parameters of the new relationship between government and governed….”
Generative Emergence: A New Discipline of Organizational, Entrepreneurial, and Social Innovation
New book by Benyamin Lichtenstein: “Culminating more than 30 years of research into evolution, complexity science, organizing and entrepreneurship, this book provides insights to scholars who are increasingly using emergence to explain social phenomena. In addition to providing the first comprehensive definition and framework for understanding emergence, it is the first publication of data from a year-long experimental study of emergence in high-potential ventures—a week-by-week longitudinal analysis of their processes based on over 750 interviews and 1000 hours of on-site observation. These data, combined with reports from over a dozen other studies, confirm the dynamics of the five-phase model in multiple contexts…
- Findings which show a major difference between an aspiration that generates a purposive drive for generative emergence, versus a performance-driven crisis that sparks organizational change and transformation. This difference has important implications for studies of entrepreneurship, innovation, and social change.
- A definition of emergence based on 100+ years of work in philosophy and philosophy of science, evolutionary studies, sociology, and organization science.
- The most inclusive review of complexity science published, to help reinvigorate and legitimize those methods in the social sciences.
- The Dynamic States Model—a new approach for understanding the non-linear growth and development of new ventures.
- In-depth examinations of more than twenty well-known emergence studies, to reveal their shared dynamics and underlying drivers.
- Proposals for applying the five-phase model—as a logic of emergence—to social innovation, organizational leadership, and entrepreneurial development.”
Business Models That Take Advantage of Open Data Opportunities
In a session held on the first day of the event, Borlongan facilitated an interactive workshop to help would-be entrepreneurs understand how startups are building business models that take advantage of open data opportunities to create sustainable, employment-generating businesses.
Citing research from the McKinsey Global Institute that calculates the value of open data to be worth $3 trillion globally, Borlongan said: “So the understanding of the open data process is usually: We throw open data over the wall, then we hold a hackathon, and then people will start making products off it, and then we make the $3 trillion.”
Borlongan argued that being an “open data startup” is actually a blurry identity, and encouraged participants to unpack, with each of the presenting startups, exactly how income can be generated and a viable business built in this space.
Jeni Tennison, from the U.K.’s Open Data Institute (which supports 15 businesses in its Startup Programme), categorized two types of business models:
- Businesses that publish (but do not sell) open data.
- Businesses built on top of using open data.
Businesses That Publish but Do Not Sell Open Data
At the Open Data Institute, Tennison is investigating the possibility of an open address database that would provide street address data for every property in the U.K. She describes three types of business models that could be created by projects that generated and published such data:
Freemium: In this model, the bulk data of open addresses could be made available freely, “but if you want an API service, then you would pay for it.” Tennison also pointed to opportunities to degrade the freemium-level data—for example, making it available in bulk but not at a particularly granular level (unless you pay for it), or permitting reuse on a share-alike basis while charging for corporate use cases (similar to how OpenCorporates sells access to its data).
Cross-subsidy: In this approach, the data would be available, and the opportunities to generate income would come from providing extra services, like consultancy or white labeling data services alongside publishing the open data.
Network: In this business model, value is created by generating a network effect around the core business interest, which may not be the open data itself. As an example, Tennison suggested that if a post office or delivery company were to create the open address database, it might be interested in encouraging private citizens to collaboratively maintain or crowdsource the quality of the data. The value generated by this open data would then come from reduced delivery costs as the data’s accuracy improved.
Businesses Built on Top of Open Data
Six startups working in unique ways to make use of available open data also presented their business models to OKFestival attendees: Development Seed, Mapbox, OpenDataSoft, Enigma.io, Open Bank Project, and Snips.

Startup: Development Seed
What it does: Builds solutions for development, public health and citizen democracy challenges by creating open source tools and utilizing open data.
Open data API focus: Regularly uses open data APIs in its projects. For example, it worked with the World Bank to create a data visualization website built on top of the World Bank API.
Type of business model: Consultancy, but it has also created new businesses out of the products developed as part of its work, most notably Mapbox (see below).
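The World Bank API mentioned above is publicly documented, so it is easy to see what building on it involves. As a rough illustration of the kind of integration Development Seed does, here is a minimal Python sketch that constructs a query URL against the World Bank's v2 API (the indicator code `NY.GDP.MKTP.CD`, GDP in current US$, is a real indicator; the helper function name is our own):

```python
def worldbank_indicator_url(country: str, indicator: str, fmt: str = "json") -> str:
    """Build a World Bank v2 API query URL for one indicator and country.

    `country` is an ISO code (e.g. "BR"); `indicator` is a World Bank
    indicator code such as "NY.GDP.MKTP.CD" (GDP, current US$).
    """
    return (
        "https://api.worldbank.org/v2/"
        f"country/{country}/indicator/{indicator}?format={fmt}"
    )

# Example: GDP (current US$) for Brazil, returned as JSON.
url = worldbank_indicator_url("BR", "NY.GDP.MKTP.CD")
```

Fetching that URL (e.g. with `urllib.request` or `requests`) returns a JSON array of yearly observations, which is the raw material a visualization site layers charts on top of.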

Startup: Enigma.io
What it does: Open data platform with advanced discovery and search functions.
Open data API focus: Provides the Enigma API to allow programmatic access to all data sets and some analytics from the Enigma platform.
Type of business model: SaaS including a freemium plan with no degradation of data and with access to API calls; some venture funding; some contracting services to particular enterprises; creating new products in Enigma Labs for potential later sale.

Startup: Mapbox
What it does: Enables users to design and publish maps based on crowdsourced OpenStreetMap data.
Open data API focus: Uses OpenStreetMap APIs to draw data into its map-creation interface; provides the Mapbox API to allow programmatic creation of maps using Mapbox web services.
Type of business model: SaaS including freemium plan; some tailored contracts for big map users such as Foursquare and Evernote.

Startup: Open Bank Project
What it does: Creates an open source API for use by banks.
Open data API focus: Its core product is an open source API that gives banks a standard tool to use when creating applications and web services for their clients.
Type of business model: Contract license with tiered SLAs depending on the number of applications built using the API; IT consultancy projects.

Startup: OpenDataSoft
What it does: Provides an open data publishing platform so that cities, governments, utilities and companies can publish their own data portal for internal and public use.
Open data API focus: It’s able to route data sources into the portal from a publisher’s APIs; provides automatic API-creation tools so that any data set uploaded to the portal is then available as an API.
Type of business model: SaaS model with freemium plan, pricing by number of data sets published and number of API calls made against the data, with free access for academic and civic initiatives.
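To make the “any data set uploaded becomes an API” idea concrete, here is a minimal sketch of how a client might build a query against an OpenDataSoft portal's records search endpoint. The URL shape follows OpenDataSoft's v1.0 records API; the portal domain and dataset identifier below are illustrative placeholders, not guaranteed to exist:

```python
from urllib.parse import urlencode

def ods_search_url(portal: str, dataset: str, query: str = "", rows: int = 10) -> str:
    """Build a records-search URL for an OpenDataSoft-hosted portal.

    `portal` is the portal's domain; `dataset` is the dataset identifier
    assigned when the publisher uploaded it.
    """
    params = urlencode({"dataset": dataset, "q": query, "rows": rows})
    return f"https://{portal}/api/records/1.0/search/?{params}"

# Hypothetical example: first five records of a dataset on a public portal.
url = ods_search_url("data.opendatasoft.com", "world-heritage-unesco-list", rows=5)
```

Because the portal generates this endpoint automatically for every uploaded dataset, the publisher gets a queryable API without writing any server code, which is what the per-API-call pricing in the business model is metering.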

Startup: Snips
What it does: Predictive modeling for smart cities.
Open data API focus: Channels some open and client proprietary data into its modeling algorithm calculations via API; provides a predictive modeling API for clients’ use to programmatically generate solutions based on their data.
Type of business model: Creating one B2C app for sale as a revenue-generating product; individual contracts with cities and companies to solve particular pain points, such as using predictive modeling to help a post office company better manage staff rosters (matched to sales needs) and a consultancy project to create a visualization mapping tool that can predict the risk of car accidents for a city….”
Power to Create
From the RSA: “In his 2014 Chief Executive’s lecture, Matthew Taylor will explore new thinking around the RSA’s core mission: to empower people to be capable, active participants in creating the world we want to live in.
The 21st century presents us with challenges of increasing scale and complexity, and yet we are failing to harness the ingenuity and skills of millions of individuals who could make a unique contribution towards our collective goals. Just as creativity is in ever greater demand, a vast resource of creative potential is going untapped.
In his lecture, Matthew will argue that we need to work towards a world that gives people the freedom to make the most of their capabilities. This will involve tackling the many constraints that limit individuals, and lock them out of the creative process.
Matthew argues that this can be done by combining new leadership and institutions that give us hope and excitement about the future, with a championing of individual creative endeavour and a 21st century spirit of solidarity and collaboration.
Listen to the audio (full recording, including audience Q&A).
Read the transcript: Power to Create.”