Putting Crowdsourcing on the Map


MIT Technology Review: “Even in San Francisco, where Google’s roving Street View cars have mapped nearly every paved surface, there are still places that have remained untouched, such as the flights of stairs that serve as pathways between streets in some of the city’s hilliest neighborhoods.
It’s these places that a startup called Mapillary is focusing on. Cofounders Jan Erik Solem and Johan Gyllenspetz are attempting to build an open, crowdsourced, photographic map that lets smartphone users log all sorts of places, creating a richer view of the world than what is offered by Street View and other street-level mapping services. If contributors provide images often, that view could be more representative of how things look right now.
Google itself is no stranger to the benefits of crowdsourced map content: it paid $966 million last year for traffic and navigation app Waze, whose users contribute data. Google also lets people augment Street View content with their own images. But Solem and Gyllenspetz think there’s still plenty of room for Mapillary, which they say can be used for everything from tracking a nature hike to offering more up-to-date images to house hunters and Airbnb users.
Solem and Gyllenspetz have only been working on the project for four months; they released an iPhone app in November, and an Android app in January. So far, there are just a few hundred users who have shared about 100,000 photos on the service. While it’s free for anyone to use, the startup plans to eventually make money by licensing the data its users generate to companies.
With the app, a user can choose to collect images by walking, biking, or driving. Once you press a virtual shutter button within the app, it takes a photo every two seconds, until you press the button again. You can then upload the images to Mapillary’s service via Wi-Fi, where each photo’s location is noted through its GPS tag. Computer-vision software compares each photo with others that are within a radius of about 100 meters, searching for matching image features so it can find the geometric relationship between the photos. It then places those images properly on the map, and stitches them all together. When new images come in of an area that has already been mapped, Mapillary will add them to its database, too.
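To make that matching step concrete, here is a minimal sketch of the kind of pipeline described above, not Mapillary's actual code: candidate photos are first filtered by GPS distance (roughly the 100-meter radius mentioned), then compared for shared image features, with OpenCV's ORB detector used as a stand-in. The file paths and coordinates are hypothetical placeholders.

```python
# Hedged sketch of a geotagged photo-matching step (not Mapillary's code).
import math
import cv2

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in meters."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def match_score(path_a, path_b, ratio=0.75):
    """Count ORB feature matches between two images that pass Lowe's ratio test."""
    orb = cv2.ORB_create(nfeatures=1000)
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    _, des_a = orb.detectAndCompute(img_a, None)
    _, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = 0
    for pair in matcher.knnMatch(des_a, des_b, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good += 1
    # In a full pipeline, the matched features would next feed a geometric
    # estimation step (e.g., an essential matrix) to recover relative poses.
    return good

# Hypothetical new photo and a small catalogue of existing geotagged photos.
new_photo = ("new.jpg", 37.7749, -122.4194)
catalogue = [("a.jpg", 37.7752, -122.4190), ("b.jpg", 37.7800, -122.4300)]

path, lat, lon = new_photo
for other_path, other_lat, other_lon in catalogue:
    if haversine_m(lat, lon, other_lat, other_lon) <= 100:  # ~100 m radius
        print(other_path, match_score(path, other_path))
```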
It can take less than 30 seconds for the images to show up on the Web-based map, but several minutes for the images to be fully processed. As with Google’s Street View photos, image-recognition software blurs out faces and license plate numbers.
Users can edit Mapillary’s map by moving around the icons that correspond to images—to fix a misplaced image, for instance. Eventually, users will also be able to add comments and tags.
So far, Mapillary’s map is quite sparse. But the few hundred users trying out Mapillary include some map providers in Europe, and the 100,000 or so images uploaded to the service range from a bike path on Venice Beach in California to a snow-covered ski slope in Sweden.
Street-level images can be viewed on the Web or through Mapillary’s smartphone apps (though the apps just pull up the Web page within the app). Blue lines and colored tags indicate where users have added photos to the map; you can zoom in to see them at the street level.

Navigating through photos is still quite rudimentary; you can tap or click to move from one image to the next with onscreen arrows, depending on the direction you want to explore.
Beyond technical and design challenges, the biggest issue Mapillary faces is convincing a large enough number of users to build up its store of images so that others will start using it and contributing as well, and then ensuring that these users keep coming back.”

New Research Network to Study and Design Innovative Ways of Solving Public Problems



MacArthur Foundation Research Network on Opening Governance formed to gather evidence and develop new designs for governing 

Press Release: “NEW YORK, NY, March 4, 2014 – The Governance Lab (The GovLab) at New York University today announced the formation of a Research Network on Opening Governance, which will seek to develop blueprints for more effective and legitimate democratic institutions to help improve people’s lives.
Convened and organized by the GovLab, the MacArthur Foundation Research Network on Opening Governance is made possible by a three-year grant of $5 million from the John D. and Catherine T. MacArthur Foundation as well as a gift from Google.org, which will allow the Network to tap the latest technological advances to further its work.
Combining empirical research with real-world experiments, the Research Network will study what happens when governments and institutions open themselves to diverse participation, pursue collaborative problem-solving, and seek input and expertise from a range of people. Network members include twelve experts (see below) in computer science, political science, policy informatics, social psychology and philosophy, law, and communications. This core group is supported by an advisory network of academics, technologists, and current and former government officials. Together, they will assess existing innovations in governing and experiment with new practices and how institutions make decisions at the local, national, and international levels.
Support for the Network from Google.org will be used to build technology platforms to solve problems more openly and to run agile, real-world, empirical experiments with institutional partners such as governments and NGOs to discover what can enhance collaboration and decision-making in the public interest.
The Network’s research will be complemented by theoretical writing and compelling storytelling designed to articulate and demonstrate clearly and concretely how governing agencies might work better than they do today. “We want to arm policymakers and practitioners with evidence of what works and what does not,” says Professor Beth Simone Noveck, Network Chair and author of Wiki Government: How Technology Can Make Government Better, Democracy Stronger, and Citizens More Powerful, “which is vital to drive innovation, re-establish legitimacy and more effectively target scarce resources to solve today’s problems.”
“From prize-backed challenges to spur creative thinking to the use of expert networks to get the smartest people focused on a problem no matter where they work, this shift from top-down, closed, and professional government to decentralized, open, and smarter governance may be the major social innovation of the 21st century,” says Noveck. “The MacArthur Research Network on Opening Governance is the ideal crucible for helping the transition from closed and centralized to open and collaborative institutions of governance in a way that is scientifically sound and yields new insights to inform future efforts, always with an eye toward real-world impacts.”
MacArthur Foundation President Robert Gallucci added, “Recognizing that we cannot solve today’s challenges with yesterday’s tools, this interdisciplinary group will bring fresh thinking to questions about how our governing institutions operate, and how they can develop better ways to help address seemingly intractable social problems for the common good.”
Members
The MacArthur Research Network on Opening Governance comprises:
Chair: Beth Simone Noveck
Network Coordinator: Andrew Young
Chief of Research: Stefaan Verhulst
Faculty Members:

  • Sir Tim Berners-Lee (Massachusetts Institute of Technology (MIT)/University of Southampton, UK)
  • Deborah Estrin (Cornell Tech/Weill Cornell Medical College)
  • Erik Johnston (Arizona State University)
  • Henry Farrell (George Washington University)
  • Sheena S. Iyengar (Columbia Business School/Jerome A. Chazen Institute of International Business)
  • Karim Lakhani (Harvard Business School)
  • Anita McGahan (University of Toronto)
  • Cosma Shalizi (Carnegie Mellon/Santa Fe Institute)

Institutional Members:

  • Christian Bason and Jesper Christiansen (MindLab, Denmark)
  • Geoff Mulgan (National Endowment for Science, Technology and the Arts – NESTA, United Kingdom)
  • Lee Rainie (Pew Research Center)

The Network is eager to hear from and engage with the public as it undertakes its work. Please contact Stefaan Verhulst to share your ideas or identify opportunities to collaborate.”

The benefits—and limits—of decision models


Article by Phil Rosenzweig in McKinsey Quarterly: “The growing power of decision models has captured plenty of C-suite attention in recent years. Combining vast amounts of data and increasingly sophisticated algorithms, modeling has opened up new pathways for improving corporate performance. Models can be immensely useful, often making very accurate predictions or guiding knotty optimization choices and, in the process, can help companies to avoid some of the common biases that at times undermine leaders’ judgments.
Yet when organizations embrace decision models, they sometimes overlook the need to use them well. In this article, I’ll address an important distinction between outcomes leaders can influence and those they cannot. For things that executives cannot directly influence, accurate judgments are paramount and the new modeling tools can be valuable. However, when a senior manager can have a direct influence over the outcome of a decision, the challenge is quite different. In this case, the task isn’t to predict what will happen but to make it happen. Here, positive thinking—indeed, a healthy dose of management confidence—can make the difference between success and failure.

Where models work well

Examples of successful decision models are numerous and growing. Retailers gather real-time information about customer behavior by monitoring preferences and spending patterns. They can also run experiments to test the impact of changes in pricing or packaging and then rapidly observe the quantities sold. Banks approve loans and insurance companies extend coverage, basing their decisions on models that are continually updated, factoring in the most information to make the best decisions.
Some recent applications are truly dazzling. Certain companies analyze masses of financial transactions in real time to detect fraudulent credit-card use. A number of companies are gathering years of data about temperature and rainfall across the United States to run weather simulations and help farmers decide what to plant and when. Better risk management and improved crop yields are the result.
Other examples of decision models border on the humorous. Garth Sundem and John Tierney devised a model to shed light on what they described, tongues firmly in cheek, as one of the world’s great unsolved mysteries: how long will a celebrity marriage last? They came up with the Sundem/Tierney Unified Celebrity Theory, which predicted the length of a marriage based on the couple’s combined age (older was better), whether either had tied the knot before (failed marriages were not a good sign), and how long they had dated (the longer the better). The model also took into account fame (measured by hits on a Google search) and sex appeal (the share of those Google hits that came up with images of the wife scantily clad). With only a handful of variables, the model did a very good job of predicting the fate of celebrity marriages over the next few years.
Models have also shown remarkable power in fields that are usually considered the domain of experts. With data from France’s premier wine-producing regions, Bordeaux and Burgundy, Princeton economist Orley Ashenfelter devised a model that used just three variables to predict the quality of a vintage: winter rainfall, harvest rainfall, and average growing-season temperature. To the surprise of many, the model outperformed wine connoisseurs.
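To show just how compact such a model can be, here is a minimal sketch of an Ashenfelter-style regression fitted by ordinary least squares on the three weather variables the article names. The vintage figures and quality scores below are hypothetical placeholders, not Ashenfelter’s actual Bordeaux data.

```python
# Hedged sketch of a three-variable vintage-quality regression (made-up data).
import numpy as np

# Columns: winter rainfall (mm), harvest rainfall (mm),
# average growing-season temperature (deg C).
X = np.array([
    [600.0,  80.0, 17.1],
    [690.0, 120.0, 16.4],
    [502.0,  60.0, 17.8],
    [420.0, 160.0, 16.0],
    [580.0,  95.0, 17.3],
])
# Hypothetical vintage-quality scores (e.g., log relative price).
y = np.array([0.35, -0.10, 0.60, -0.45, 0.25])

# Add an intercept column and solve ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept, winter rain, harvest rain, temperature:", coef)

# Predict the quality of a new, equally hypothetical vintage.
new_vintage = np.array([1.0, 550.0, 70.0, 17.5])
print("predicted quality:", new_vintage @ coef)
```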
Why do decision models perform so well? In part because they can gather vast quantities of data, but also because they avoid common biases that undermine human judgment. People tend to be overly precise, believing that their estimates will be more accurate than they really are. They suffer from the recency bias, placing too much weight on the most immediate information. They are also unreliable: ask someone the same question on two different occasions and you may get two different answers. Decision models have none of these drawbacks; they weigh all data objectively and evenly. No wonder they do better than humans.

Can we control outcomes?

With so many impressive examples, we might conclude that decision models can improve just about anything. That would be a mistake. Executives need not only to appreciate the power of models but also to be cognizant of their limits.
Look back over the previous examples. In every case, the goal was to make a prediction about something that could not be influenced directly. Models can estimate whether a loan will be repaid but won’t actually change the likelihood that payments will arrive on time, give borrowers a greater capacity to pay, or make sure they don’t squander their money before payment is due. Models can predict the rainfall and days of sunshine on a given farm in central Iowa but can’t change the weather. They can estimate how long a celebrity marriage might last but won’t help it last longer or cause another to end sooner. They can predict the quality of a wine vintage but won’t make the wine any better, reduce its acidity, improve the balance, or change the undertones. For these sorts of estimates, finding ways to avoid bias and maintain accuracy is essential.
Executives, however, are not concerned only with predicting things they cannot influence. Their primary duty—as the word execution implies—is to get things done. The task of leadership is to mobilize people to achieve a desired end. For that, leaders need to inspire their followers to reach demanding goals, perhaps even to do more than they have done before or believe is possible. Here, positive thinking matters. Holding a somewhat exaggerated level of self-confidence isn’t a dangerous bias; it often helps to stimulate higher performance.
This distinction seems simple but it’s often overlooked. In our embrace of decision models, we sometimes forget that so much of life is about getting things done, not predicting things we cannot control.

Improving models over time

Part of the appeal of decision models lies in their ability to make predictions, to compare those predictions with what actually happens, and then to evolve so as to make more accurate predictions. In retailing, for example, companies can run experiments with different combinations of price and packaging, then rapidly obtain feedback and alter their marketing strategy. Netflix captures rapid feedback to learn what programs have the greatest appeal and then uses those insights to adjust its offerings. Models are not only useful at any particular moment but can also be updated over time to become more and more accurate.
Using feedback to improve models is a powerful technique but is more applicable in some settings than in others. Dynamic improvement depends on two features: first, the observation of results should not make any future occurrence either more or less likely and, second, the feedback cycle of observation and adjustment should happen rapidly. Both conditions hold in retailing, where customer behavior can be measured without directly altering it and results can be applied rapidly, with prices or other features changed almost in real time. They also hold in weather forecasting, since daily measurements can refine models and help to improve subsequent predictions. The steady improvement of models that predict weather—from an average error (in the maximum temperature) of 6 degrees Fahrenheit in the early 1970s to 5 degrees in the 1990s and just 4 by 2010—is testimony to the power of updated models.
For other events, however, these two conditions may not be present. As noted, executives not only estimate things they cannot affect but are also charged with bringing about outcomes. Some of the most consequential decisions of all—including the launch of a new product, entry into a new market, or the acquisition of a rival—are about mobilizing resources to get things done. Furthermore, the results are not immediately visible and may take months or years to unfold. The ability to gather and insert objective feedback into a model, to update it, and to make a better decision the next time just isn’t present.
None of these caveats call into question the considerable power of decision analysis and predictive models in so many domains. They help underscore the main point: an appreciation of decision analytics is important, but an understanding of when these techniques are useful and of their limitations is essential, too…”

Open Government: Opportunities and Challenges for Public Governance


New volume of the Public Administration and Information Technology series: “Given this global context, and taking into account the needs of both academicians and practitioners, it is the intention of this book to shed light on the open government concept and, in particular:
• To provide comprehensive knowledge of recent major developments of open government around the world.
• To analyze the importance of open government efforts for public governance.
• To provide insightful analysis about those factors that are critical when designing, implementing and evaluating open government initiatives.
• To discuss how contextual factors affect open government initiatives’ success or failure.
• To explore the existence of theoretical models of open government.
• To propose strategies to move forward and to address future challenges in an international context.”

Disinformation Visualization: How to lie with datavis


Mushon Zer-Aviv at School of Data: “Seeing is believing. When working with raw data we’re often encouraged to present it differently, to give it a form, to map it or visualize it. But all maps lie. In fact, maps have to lie, otherwise they wouldn’t be useful. Some are transparent and obvious lies, such as the way a tree icon on a map often represents more than one tree. Others are white lies – rounding numbers and prioritising details to create a more legible representation. And then there’s the third type of lie, those lies that convey a bias, be it deliberately or subconsciously. A bias that misrepresents the data and skews it towards a certain reading.

It all sounds very sinister, and indeed sometimes it is. It’s hard to see through a lie unless you stare it right in the face, and what better way to do that than to get our minds dirty and look at some examples of creative and mischievous visual manipulation.
Over the past year I’ve had a few opportunities to run Disinformation Visualization workshops, encouraging activists, designers, statisticians, analysts, researchers, technologists and artists to visualize lies. During these sessions I have used the DIKW pyramid (Data > Information > Knowledge > Wisdom), a framework for thinking about how data gains context and meaning and becomes information. This information needs to be consumed and understood to become knowledge. And finally when knowledge influences our insights and our decision making about the future it becomes wisdom. Data visualization is one of the ways to push data up the pyramid towards wisdom in order to affect our actions and decisions. It would be wise then to look at visualizations suspiciously.
Centuries before big data, computer graphics and social media collided and gave us the datavis explosion, visualization was mostly a scientific tool for inquiry and documentation. This history gave the art form its authority as an integral part of the scientific process. Being a product of human brains and hands, a certain degree of bias was always there, no matter how scientific the process was. The effects of these early off-white lies are still felt today, as even our most celebrated interactive maps still echo the biases of the Mercator map projection, grounding Europe and North America at the top of the world and overemphasizing their size and perceived importance over the Global South. Our contemporary practices of programmatic, data-driven visualization hide both the human eyes and hands that produce them behind data sets, algorithms and computer graphics, but the same biases are still there, only they’re harder to decipher…”
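To put a rough number on that Mercator bias: on a spherical Mercator projection the linear scale factor is sec(latitude), so drawn areas are inflated by about sec²(latitude) relative to the equator. A few lines of Python make the distortion explicit:

```python
# Area exaggeration of the (spherical) Mercator projection relative to the equator.
import math

for lat_deg in (0, 30, 45, 60, 70):
    area_scale = 1.0 / math.cos(math.radians(lat_deg)) ** 2
    print(f"{lat_deg:>2} deg latitude: areas drawn about {area_scale:.1f}x their equatorial scale")
```

At 60 degrees of latitude the exaggeration is already a factor of four, which is why high-latitude regions look so much larger than equal-area regions near the equator.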

The Power to Give


Press Release: “HTC, a global leader in mobile innovation and design, today unveiled HTC Power To Give™, an initiative that aims to create a supercomputer by harnessing the collective processing power of Android smartphones.
Currently in beta, HTC Power To Give aims to galvanize smartphone owners to unlock their unused processing power in order to help answer some of society’s biggest questions. Currently, the fight against cancer, AIDS and Alzheimer’s; the drive to ensure every child has clean water to drink and even the search for extra-terrestrial life are all being tackled by volunteer computing platforms.
Empowering people to use their Android smartphones to offer tangible support for vital fields of research, including medicine, science and ecology, HTC Power To Give has been developed in partnership with Dr. David Anderson of the University of California, Berkeley.  The project will support the world’s largest volunteer computing initiative and tap into the powerful processing capabilities of a global network of smartphones.
Strength in numbers
One million HTC One smartphones, working towards a project via HTC Power To Give, could provide similar processing power to that of one of the world’s 30 supercomputers (one PetaFLOP). This could drastically shorten the research cycles for organizations that would otherwise have to spend years analyzing the same volume of data, potentially bringing forward important discoveries in vital subjects by weeks, months, years or even decades. For example, one of the programs available at launch is IBM’s World Community Grid, which gives anyone an opportunity to advance science by donating their computer, smartphone or tablet’s unused computing power to humanitarian research. To date, the World Community Grid volunteers have contributed almost 900,000 years’ worth of processing time to cutting-edge research.
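As a rough back-of-the-envelope check of that claim, using only the figures quoted above rather than HTC’s own engineering numbers, one PetaFLOP spread across one million handsets works out to about one GFLOPS of sustained throughput per phone:

```python
# Back-of-the-envelope arithmetic for the one-PetaFLOP claim (quoted figures only).
total_flops = 1e15      # one PetaFLOP = 10^15 floating-point operations per second
phones = 1_000_000
per_phone_gflops = total_flops / phones / 1e9
print(f"required sustained throughput per phone: {per_phone_gflops:.1f} GFLOPS")
```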
Limitless future potential
Cher Wang, Chairwoman, HTC commented, “We’ve often used innovation to bring about change in the mobile industry, but this programme takes our vision one step further. With HTC Power To Give, we want to make it possible for anyone to dedicate their unused smartphone processing power to contribute to projects that have the potential to change the world.”
“HTC Power To Give will support the world’s largest volunteer computing initiative, and the impact that this project will have on the world over the years to come is huge. This changes everything,” noted Dr. David Anderson, Inventor of the Shared Computing Initiative BOINC, University of California, Berkeley.
Cher Wang added, “We’ve been discussing the impact that just one million HTC Power To Give-enabled smartphones could make, however analysts estimate that over 780 million Android phones were shipped in 2013 alone. Imagine the difference we could make to our children’s future if just a fraction of these Android users were able to divert some of their unused processing power to help find answers to the questions that concern us all.”
Opt-in with ease
After downloading the HTC Power To Give app from the Google Play™ store, smartphone owners can select the research programme to which they will divert a proportion of their phone’s processing power. HTC Power To Give will then run while the phone is charging and connected to a WiFi network, enabling people to change the world whilst sitting at their desk or relaxing at home.
The beta version of HTC Power To Give will be available to download from the Google Play store and will initially be compatible with the HTC One family, HTC Butterfly and HTC Butterfly s. HTC plans to make the app more widely available to other Android smartphone owners in the coming six months as the beta trial progresses.”

Four Threats to American Democracy


Jared Diamond in Governance: “The U.S. government has spent the last two years wrestling with a series of crises over the federal budget and debt ceiling. I do not deny that our national debt and the prospect of a government shutdown pose real problems. But they are not our fundamental problems, although they are symptoms of them. Instead, our fundamental problems are four interconnected issues combining to threaten a breakdown of effective democratic government in the United States.
Why should we care? Let’s remind ourselves of the oft-forgotten reasons why democracy is a superior form of government (provided that it works), and hence why its deterioration is very worrisome. (Of course, I acknowledge that there are many countries in which democracy does not work, because of the lack of a national identity, of an informed electorate, or of both). The advantages of democracy include the following:

  • In a democracy, one can propose and discuss virtually any idea, even if it is initially unpalatable to the government. Debate may reveal the idea to be the best solution, whereas in a dictatorship the idea would not have gotten debated, and its virtues would not have been discovered.
  • In a democracy, citizens and their ideas get heard. Hence, without democracy, people are more likely to feel unheard and frustrated and to resort to violence.
  • Compromise is essential to a democracy. It enables us to avoid tyranny by the majority or (conversely) paralysis of government through vetoes exercised by a frustrated minority.
  • In modern democracies, all citizens can vote. Hence, government is motivated to invest in all citizens, who thereby receive the opportunity to become productive, rather than just a small dictatorial elite receiving that opportunity.

Why should we Americans keep reminding ourselves of those fundamental advantages of democracies? I would answer: not only in order to motivate ourselves to defend our democratic processes, but also because increasing numbers of Americans today are falling into the trap of envying the supposed efficiency of China’s dictatorship. Yes, it is true that dictatorships, by closing debate, can sometimes implement good policies faster than can the United States, as has China in quickly converting to lead-free gasoline and building a high-speed rail network. But dictatorships suffer from a fatal disadvantage. No one, in the 5,400 years of history of centralized government on all the continents, has figured out how to ensure that a dictatorship will embrace only good policies. Dictatorships also prevent the public debate that helps to avert catastrophic policies unparalleled in any large modern First World democracy—such as China’s quickly abolishing its educational system, sending its teachers out into the fields, and creating the world’s worst air pollution.
That is why democracy, given the prerequisites of an informed electorate and a basic sense of common interest, is the best form of government—at least, better than all the alternatives that have been tried, as Winston Churchill quipped. Our form of government is a big part of the explanation why the United States has become the richest and most powerful country in the world. Hence, an undermining of democratic processes in the United States means throwing away one of our biggest advantages. Unfortunately, that is what we are now doing, in four ways.
First, political compromise has been deteriorating in recent decades, and especially in the last five years. That deterioration can be measured as the increase in Senate rejections of presidential nominees whose approvals used to be routine, the increasing use of filibusters by the minority party, the majority party’s response of abolishing filibusters for certain types of votes, and the decline in number of laws passed by Congress to the lowest level of recent history. The reasons for this breakdown in political compromise, which seems to parallel increasing levels of nastiness in other areas of American life, remain debated. Explanations offered include the growth of television and then of the Internet, replacing face-to-face communication, and the growth of many narrowly partisan TV channels at the expense of a few broad-public channels. Even if these reasons hold a germ of truth, they leave open the question why these same trends operating in Canada and in Europe have not led to similar deterioration of political compromise in those countries as well.
Second, there are increasing restrictions on the right to vote, weighing disproportionately on voters for one party and implemented at the state level by the other party. Those obstacles include making registration to vote difficult and demanding that registered voters show documentation of citizenship when they present themselves at the polls. Of course, the United States has had a long history of denying voting rights to blacks, women, and other groups. But access to voting had been increasing in the last 50 years, so the recent proliferation of restrictions reverses that long positive trend. In addition to those obstacles preventing voter registration, the United States has by far the lowest election turnout among large First World democracies: under 60% of registered voters in most presidential elections, 40% for congressional elections, and 20% for the recent election for mayor of my city of Los Angeles. (A source of numbers for this and other comparisons that I shall cite is an excellent recent book by Howard Steven Friedman, The Measure of a Nation). And, while we are talking about elections, let’s not forget the astronomical recent increase in costs and durations of election campaigns, their funding by wealthy interests, and the shift in campaign pitches to sound bites. Those trends, unparalleled in other large First World democracies, undermine the democratic prerequisite of a well-informed electorate.
A third contributor to the growing breakdown of democracy is our growing gap between rich and poor. Among our most cherished core values is our belief that the United States is a “land of opportunity,” and that we uniquely offer to our citizens the potential for rising from “rags to riches”—provided that citizens have the necessary ability and work hard. This is a myth. Income and wealth disparity (as measured by the Gini index of equality/inequality, and in other ways) is much higher in the United States than in any other large First World democracy. So is hereditary socioeconomic immobility, that is, the probability that a son’s relative income will just mirror his father’s relative income, and that sons of poor fathers will not become wealthy. Part of the reason for those depressing facts is inequality of educational opportunities. Children of rich Americans tend to receive much better educations than children of poor Americans.
That is bad for our economy, because it means that we are failing to develop a large fraction of our intellectual capital. It is also bad for our political stability, because poor parents who correctly perceive that their children are not being given the opportunity to succeed may express their resulting frustration in violence. Twice during my 47 years of residence in Los Angeles, in 1965 and 1992, frustration in poor areas of Los Angeles erupted into violence, lootings, and killings. In the 1992 riots, when police feared that rioters would spill into the wealthy suburb of Beverly Hills, all that the outnumbered police could do to protect Beverly Hills was to string yellow plastic police tape across major streets. As it turned out, the rioters did not try to invade Beverly Hills in 1992. But if present trends causing frustration continue, there will be more riots in Los Angeles and other American cities, and yellow plastic police tape will not suffice to contain the rioters.
The remaining contributor to the decline of American democracy is the decline of government investment in public purposes, such as education, infrastructure, and nonmilitary research and development. Large segments of the American populace deride government investment as “socialism.” But it is not socialism. On the contrary, it is one of the longest established functions of government. Ever since the rise of the first governments 5,400 years ago, governments have served two main functions: to maintain internal peace by monopolizing force, settling disputes, and forbidding citizens to resort to violence in order to settle disputes themselves; and to redistribute individual wealth for investing in larger aims—in the worst cases, enriching the elite; in the best cases, promoting the good of society as a whole. Of course, some investment is private, by wealthy individuals and companies expecting to profit from their investments. But many potential payoffs cannot attract private investment, either because the payoff is so far off in the future (such as the payoff from universal primary school education), or because the payoff is diffused over all of society rather than concentrated in areas profitable to the private investor (such as diffused benefits of municipal fire departments, roads, and broad education). Even the most passionate American supporters of small government do not decry as socialism the funding of fire departments, interstate highways, and public schools.

Canadian Organizations Join Forces to Launch Open Data Institute to Foster Open Government


Press Release: “The Canadian Digital Media Network, the University of Waterloo, Communitech, OpenText and Desire2Learn today announced the creation of the Open Data Institute.

The Open Data Institute, which received support from the Government of Canada in this week’s budget, will work with governments, academic institutions and the private sector to solve challenges facing “open government” efforts and realize the full potential of “open data.”
According to a statement, partners will work on development of common standards, the integration of data from different levels of government and the commercialization of data, “allowing Canadians to derive greater economic benefit from datasets that are made available by all levels of government.”
The Open Data Institute is a public-private partnership. Founding partners will contribute $3 million in cash and in-kind contributions over three years to establish the institute, a figure that has been matched by the Government of Canada.
“This is a strategic investment in Canada’s ability to lead the digital economy,” said Kevin Tuer, Managing Director of CDMN. “Similar to how a common system of telephone exchanges allowed world-wide communication, the Open Data Institute will help create a common platform to share and access datasets.”
“This will allow the development of new applications and products, creating new business opportunities and jobs across the country,” he added.
“The Institute will serve as a common forum for government, academia and the private sector to collaborate on Open Government initiatives with the goal of fueling Canadian tech innovation,” noted OpenText President and CEO Mark J. Barrenechea.
“The Open Data Institute has the potential to strengthen the regional economy and increase our innovative capacity,” added Feridun Hamdullahpur, president and vice-chancellor of the University of Waterloo.”

The newsonomics of measuring the real impact of news


Ken Doctor at Nieman Journalism Lab: “Hello there! It’s me, your friendly neighborhood Tweet Button. What if you could tap me and unlock a brand new source of funding for startup news sources of all kinds? What if, even better, you the reader could tap that money loose with a single click?
That’s the delightfully simple conceit behind a little widget, Impaq.me, you may have seen popping up as you traverse the news web. It’s social. It’s viral. It uses OPM (Other People’s Money) — and maybe a little bit of your own. It makes a new case to funders and maybe commercial sponsors. And it spits out metrics around the clock. It aims to be a convergence widget, acting on that now-aging idea that our attention is as important as our wallet. Consider it a new digital Swiss Army knife for the attention economy.
It’s impossible to tell how much of an impact Impaq.me may have. It’s still in its second round of testing at six of the U.S.’s most successful independent nonprofit startups — MinnPost, Center for Investigative Reporting, The Texas Tribune, Voice of San Diego, ProPublica, and the Center for Public Integrity — but as in all things digital, timing is everything. And that timing seems right.
First, let’s consider that spate of new news sites that have sprouted with the winter rains — Bill Keller’s and Neil Barsky’s Marshall Project being only the latest. It’s been quite a run — from Ezra Klein’s Project X to Pierre Omidyar’s First Look (and just launched The Intercept) to the reimagining of FiveThirtyEight. While they encompass a broad range of business models and goals (“The newsonomics of why everyone seems to be starting a news site”), they all need two things: money and engagement. Or, maybe better ordered, engagement and money. The dance between the two is still in the early stages of Internet choreography. Get the sequences right and you win.
Second, and related, is the big question of “social” and how our sharing of news is changing the old publishing dynamic of editors deciding what we’re going to read. Just this week, two pieces here at the Lab — one on Upworthy’s influence and one on the social/search tango — highlighted the still-being-understood role of social in our news-reading lives.
Third, funders of news sites, especially Knight and other lead foundations, are looking for harder evidence of the value generated by their early grants. Millions have been poured into creating new news sites. Now they’re asking: What has our funding really done? Within that big question, Impaq.me is only one of several new attempts to demonstrably measure real impact in new ways. We’ll take a brief look at those impact initiatives below….
If Impaq.me is all about impact and money, then it’s got good company. There are at least two other noteworthy impact-measuring projects going on.

  • The Center for Investigative Reporting’s Impact Tracker effort, an impact-tracking initiative, launched last fall. The big idea: getting beyond the traditional metrics like unique visitors and pageviews to track the value of investigative and enterprise work. To that end, CIR has hired Lindsay Green-Barber, a CUNY-trained social scientist, and given her a perhaps first-ever title: media impact analyst. We can see the fruits of the work around CIR’s impressive Returning Home to Battle veterans series. On that series, CIR is tracking such impacts as change and rise in the public discourse around veterans’ issues and related allocation of government resources. The notion of good journalism intended to shine a light in dark places has been embedded in the CIR DNA for a long time; this new effort is intended to provide data — and words — to describe progress toward solutions. CIR is working with The Seattle Times on the impact of that paper’s education reporting, and CIR may soon look at more partnerships as well. Related: CIR is holding two “Dissection” events in New York and Washington in April, bringing together journalists, funders, and social scientists to widen the media impact movement.
  • Chalkbeat, a growing national education news site, is also moving on impact analysis. It’s called MORI (Measures of our Reporting’s Influence), and it’s a WordPress plugin. Says Chalkbeat cofounder Elizabeth Green: “We built MORI to solve for a problem that I guess you could call ‘impact loss.’ We knew that our stories were having all kinds of impacts, but we had no way of keeping track of these impacts or making sense of them. That meant that we couldn’t easily compile what we had done in the last year to share with the outside world (board, donors, foundations, readers, our moms) but also — just as important — we couldn’t look back on what we’d done and learn from it.” Sound familiar?
    After much inquiry, Chalkbeat settled on technology. “Within each story’s back end,” Green said, “we can enter inputs — qualitative data about the type of story, topic, and target audience — as well as outcomes — impacts on policy and practice (what we call ‘informed action’) as well as impacts on what we call ‘civic deliberation.’”