Open Government – Opportunities and Challenges for Public Governance


New volume of Public Administration and Information Technology series: “Given this global context, and taking into account the needs of both academics and practitioners, it is the intention of this book to shed light on the open government concept and, in particular:
• To provide comprehensive knowledge of recent major developments of open government around the world.
• To analyze the importance of open government efforts for public governance.
• To provide insightful analysis about those factors that are critical when designing, implementing and evaluating open government initiatives.
• To discuss how contextual factors affect open government initiatives’ success or failure.
• To explore the existence of theoretical models of open government.
• To propose strategies to move forward and to address future challenges in an international context.”

This algorithm can predict a revolution


Russell Brandom at The Verge: “For students of international conflict, 2013 provided plenty to examine. There was civil war in Syria, ethnic violence in China, and riots to the point of revolution in Ukraine. For those working at Duke University’s Ward Lab, all specialists in predicting conflict, the year looks like a betting sheet, full of predictions that worked and others that didn’t pan out.

When the lab put out their semiannual predictions in July, they gave Paraguay a 97 percent chance of insurgency, largely based on reports of Marxist rebels. The next month, guerrilla campaigns intensified, proving out the prediction. In the case of China’s armed clashes between Uighurs and Hans, the models showed a 33 percent chance of violence, even as the cause of each individual flare-up was concealed by the country’s state-run media. On the other hand, the unrest in Ukraine didn’t start raising alarms until the action had already started, so the country was left off the report entirely.

According to Ward Lab’s staff, the purpose of the project isn’t to make predictions but to test theories. If a certain theory of geopolitics can predict an uprising in Ukraine, then maybe that theory is onto something. And even if these specialists could predict every conflict, it would only be half the battle. “It’s a success only if it doesn’t come at the cost of predicting a lot of incidents that don’t occur,” says Michael D. Ward, the lab’s founder and chief investigator, who also runs the blog Predictive Heuristics. “But it suggests that we might be on the right track.”

Forecasting the future of a country wasn’t always done this way. Traditionally, predicting revolution or war has been a secretive project, for the simple reason that any reliable prediction would be too valuable to share. But as predictions lean more on data, they’ve actually become harder to keep secret, ushering in a new generation of open-source prediction models that butt against the siloed status quo.

The story of automated conflict prediction starts at the Defense Advanced Research Projects Agency, known as the Pentagon’s R&D wing. In the 1990s, DARPA wanted to try out software-based approaches to anticipating which governments might collapse in the near future. The CIA was already on the case, with section chiefs from every region filing regular forecasts, but DARPA wanted to see if a computerized approach could do better. They looked at a simple question: will this country’s government face an acute existential threat in the next six months? When CIA analysts were put to the test, they averaged roughly 60 percent accuracy, so DARPA’s new system set the bar at 80 percent, looking at 29 different countries in Asia with populations over half a million. It was dubbed ICEWS, the Integrated Conflict Early Warning System, and it succeeded almost immediately, clearing 80 percent with algorithms built on simple regression analysis….
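
To make concrete what “algorithms built on simple regression analysis” can look like, here is a minimal sketch: a logistic regression over synthetic country-month indicators. The feature set, coefficients, and data are all invented for illustration; the article does not describe the actual ICEWS models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic country-month panel: the real ICEWS feature set is not public,
# so these three indicators are purely illustrative.
n = 500
X = np.column_stack([
    rng.normal(2.0, 3.0, n),   # GDP growth (%)
    rng.poisson(4.0, n),       # protest events reported this month
    rng.uniform(0, 50, n),     # years since last regime change
])
# Fake ground truth: more protests and weaker growth raise crisis odds.
logit = -3.0 - 0.3 * X[:, 0] + 0.5 * X[:, 1] - 0.02 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
# Probability that a country with -1% growth, 12 protest reports, and a
# 5-year-old regime faces an acute threat within the forecast window.
print(model.predict_proba([[-1.0, 12, 5]])[0, 1])
```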

On the data side, researchers at Georgetown University are cataloging every significant political event of the past century into a single database called GDELT, and leaving the whole thing open for public research. Already, projects have used it to map the Syrian civil war and diplomatic gestures between Japan and South Korea, looking at dynamics that had never been mapped before. And then, of course, there’s Ward Lab, releasing a new sheet of predictions every six months and tweaking its algorithms with every development. It’s a mirror of the same open-vs.-closed debate in software — only now, instead of fighting over source code and security audits, it’s a fight over who can see the future the best.”
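
GDELT’s event tables can be explored with ordinary data tools. Below is a hedged sketch that loads one day’s public event export and counts protest events involving Syrian actors; the URL pattern and column positions follow the GDELT 1.0 event codebook and should be verified against the current documentation before use.

```python
import pandas as pd

# Daily tab-separated event export (no header row). The URL pattern and
# the column indices below are assumptions based on the GDELT 1.0
# codebook; the layout has changed across GDELT versions.
url = "http://data.gdeltproject.org/events/20140218.export.CSV.zip"
events = pd.read_csv(url, sep="\t", header=None, compression="zip")

ACTOR1_COUNTRY, EVENT_BASE_CODE, NUM_MENTIONS = 7, 27, 31  # assumed indices

# CAMEO event codes beginning with "14" denote protest events.
protests = events[events[EVENT_BASE_CODE].astype(str).str.startswith("14")]
syria = protests[protests[ACTOR1_COUNTRY] == "SYR"]
print(len(syria), "protest events involving Syrian actors,",
      syria[NUM_MENTIONS].sum(), "media mentions")
```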

Disinformation Visualization: How to lie with datavis


Mushon Zer-Aviv at School of Data: “Seeing is believing. When working with raw data we’re often encouraged to present it differently, to give it a form, to map it or visualize it. But all maps lie. In fact, maps have to lie, otherwise they wouldn’t be useful. Some are transparent and obvious lies, such as a tree icon on a map representing more than one tree. Others are white lies – rounding numbers and prioritising details to create a more legible representation. And then there’s the third type of lie: those lies that convey a bias, be it deliberate or subconscious, a bias that misrepresents the data and skews it towards a certain reading.

It all sounds very sinister, and indeed sometimes it is. It’s hard to see through a lie unless you stare it right in the face, and what better way to do that than to get our minds dirty and look at some examples of creative and mischievous visual manipulation.
Over the past year I’ve had a few opportunities to run Disinformation Visualization workshops, encouraging activists, designers, statisticians, analysts, researchers, technologists and artists to visualize lies. During these sessions I have used the DIKW pyramid (Data > Information > Knowledge > Wisdom), a framework for thinking about how data gains context and meaning and becomes information. This information needs to be consumed and understood to become knowledge. And finally when knowledge influences our insights and our decision making about the future it becomes wisdom. Data visualization is one of the ways to push data up the pyramid towards wisdom in order to affect our actions and decisions. It would be wise then to look at visualizations suspiciously.
[Diagram: the DIKW pyramid (Data > Information > Knowledge > Wisdom)]
Centuries before big data, computer graphics and social media collided and gave us the datavis explosion, visualization was mostly a scientific tool for inquiry and documentation. This history gave the artform its authority as an integral part of the scientific process. Being a product of human brains and hands, a certain degree of bias was always there, no matter how scientific the process was. The effects of these early off-white lies are still felt today, as even our most celebrated interactive maps still echo the biases of the Mercator map projection, grounding Europe and North America at the top of the world and overemphasizing their size and perceived importance relative to the Global South. Our contemporary practice of programmatically data-driven visualization hides both the human eyes and hands that produce it behind data sets, algorithms and computer graphics, but the same biases are still there, only they’re harder to decipher…”
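
The Mercator claim is easy to quantify. The projection preserves angles at the cost of inflating areas by a factor of sec²(latitude), so land near the poles looks far larger than equally sized land near the equator. A small illustration:

```python
import math

def mercator_area_inflation(lat_deg: float) -> float:
    """Mercator is conformal with point scale sec(lat), so areas are
    inflated by sec^2(lat) relative to the equator."""
    return 1.0 / math.cos(math.radians(lat_deg)) ** 2

for place, lat in [("Singapore", 1.3), ("Cairo", 30.0),
                   ("London", 51.5), ("Reykjavik", 64.1)]:
    print(f"{place:10s} appears {mercator_area_inflation(lat):4.1f}x too large")
```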

Mapping Twitter Topic Networks: From Polarized Crowds to Community Clusters


Pew Internet: “Conversations on Twitter create networks with identifiable contours as people reply to and mention one another in their tweets. These conversational structures differ, depending on the subject and the people driving the conversation. Six structures are regularly observed: divided, unified, fragmented, clustered, and inward and outward hub and spoke structures. These are created as individuals choose whom to reply to or mention in their Twitter messages and the structures tell a story about the nature of the conversation.
If a topic is political, it is common to see two separate, polarized crowds take shape. They form two distinct discussion groups that mostly do not interact with each other. Frequently these are recognizably liberal or conservative groups. The participants within each separate group commonly mention very different collections of website URLs and use distinct hashtags and words. The split is clearly evident in many highly controversial discussions: people in clusters that we identified as liberal used URLs for mainstream news websites, while groups we identified as conservative used links to conservative news websites and commentary sources. At the center of each group are discussion leaders, the prominent people who are widely replied to or mentioned in the discussion. In polarized discussions, each group links to a different set of influential people or organizations that can be found at the center of each conversation cluster.
While these polarized crowds are common in political conversations on Twitter, it is important to remember that the people who take the time to post and talk about political issues on Twitter are a special group. Unlike many other Twitter members, they pay attention to issues, politicians, and political news, so their conversations are not representative of the views of the full Twitterverse. Moreover, Twitter users are only 18% of internet users and 14% of the overall adult population. Their demographic profile is not reflective of the full population. Additionally, other work by the Pew Research Center has shown that tweeters’ reactions to events are often at odds with overall public opinion— sometimes being more liberal, but not always. Finally, forthcoming survey findings from Pew Research will explore the relatively modest size of the social networking population who exchange political content in their network.
Still, the structure of these Twitter conversations says something meaningful about political discourse these days and the tendency of politically active citizens to sort themselves into distinct partisan camps. Social networking maps of these conversations provide new insights because they combine analysis of the opinions people express on Twitter, the information sources they cite in their tweets, analysis of who is in the networks of the tweeters, and how big those networks are. And to the extent that these online conversations are followed by a broader audience, their impact may reach well beyond the participants themselves.
Our approach combines analysis of the size and structure of the network and its sub-groups with analysis of the words, hashtags and URLs people use. Each person who contributes to a Twitter conversation is located in a specific position in the web of relationships among all participants in the conversation. Some people occupy rare positions in the network that suggest that they have special importance and power in the conversation.
Social network maps of Twitter crowds and other collections of social media can be created with innovative data analysis tools that provide new insight into the landscape of social media. These maps highlight the people and topics that drive conversations and group behavior – insights that add to what can be learned from surveys or focus groups or even sentiment analysis of tweets. Maps of previously hidden landscapes of social media highlight the key people, groups, and topics being discussed.
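
Pew built its maps with NodeXL, but the underlying measurements are standard network statistics. As a rough sketch of the idea (the edge list and interpretation thresholds here are invented), one can build a mention graph and inspect its density and community structure:

```python
import networkx as nx
from networkx.algorithms import community as nxcom

# Hypothetical "@-mention" edges harvested from one topic's tweets:
# (author, mentioned_user). This is only a minimal re-creation of the idea.
mentions = [("ana", "bob"), ("bob", "ana"), ("cam", "ana"), ("cam", "bob"),
            ("dee", "eli"), ("eli", "dee"), ("fox", "dee"), ("fox", "eli")]
G = nx.Graph(mentions)

groups = list(nxcom.greedy_modularity_communities(G))
print("density:", round(nx.density(G), 2))
print("groups:", [sorted(g) for g in groups])
print("modularity:", round(nxcom.modularity(G, groups), 2))
# Two large, dense groups with few edges between them (high modularity)
# is the signature of a Polarized Crowd; one dense group suggests a
# Tight Crowd; many isolates suggest Brand Clusters.
```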

Conversational archetypes on Twitter

The Polarized Crowd network structure is only one of several different ways that crowds and conversations can take shape on Twitter. There are at least six distinctive structures of social media crowds which form depending on the subject being discussed, the information sources being cited, the social networks of the people talking about the subject, and the leaders of the conversation. Each has a different social structure and shape: divided, unified, fragmented, clustered, and inward and outward hub and spokes.
After an analysis of many thousands of Twitter maps, we found six different kinds of network crowds.

Polarized Crowds in Twitter Conversations

Polarized Crowd: Polarized discussions feature two big and dense groups that have little connection between them. The topics being discussed are often highly divisive and heated political subjects. In fact, there is usually little conversation between these groups despite the fact that they are focused on the same topic. Polarized Crowds on Twitter are not arguing. They are ignoring one another while pointing to different web resources and using different hashtags.
Why this matters: It shows that partisan Twitter users rely on different information sources. While liberals link to many mainstream news sources, conservatives link to a different set of websites.

Tight Crowds in Twitter Conversations

Tight Crowd: These discussions are characterized by highly interconnected people with few isolated participants. Many conferences, professional topics, hobby groups, and other subjects that attract communities take this Tight Crowd form.
Why this matters: These structures show how networked learning communities function and how sharing and mutual support can be facilitated by social media.

Brand Clusters in Twitter Conversations

Brand Clusters: When well-known products, services, or popular subjects like celebrities are discussed on Twitter, there is often commentary from many disconnected participants. These “isolates” appear in the network map as scattered points unattached to any cluster. Well-known brands and other popular subjects can attract large, fragmented Twitter populations who tweet about the subject but not to each other. The larger the population talking about a brand, the less likely it is that participants are connected to one another. Brand-mentioning participants focus on a topic, but tend not to connect to each other.
Why this matters: There are still institutions and topics that command mass interest. Oftentimes, the Twitter chatter about these institutions and their messages is not among people connecting with each other. Rather, they are relaying or passing along the message of the institution or person, and there is no extra exchange of ideas.

Community Clusters in Twitter Conversations

Community Clusters: Some popular topics may develop multiple smaller groups, which often form around a few hubs, each with its own audience, influencers, and sources of information. These Community Clusters conversations look like bazaars with multiple centers of activity. Global news stories often attract coverage from many news outlets, each with its own following. That creates a collection of medium-sized groups—and a fair number of isolates.
Why this matters: Some information sources and subjects ignite multiple conversations, each cultivating its own audience and community. These can illustrate diverse angles on a subject based on its relevance to different audiences, revealing a diversity of opinion and perspective on a social media topic.

Broadcast Networks in Twitter Conversations

Broadcast Network: Twitter commentary around breaking news stories and the output of well-known media outlets and pundits has a distinctive hub and spoke structure in which many people repeat what prominent news and media organizations tweet. The members of the Broadcast Network audience are often connected only to the hub news source, without connecting to one another. In some cases there are smaller subgroups of densely connected people— think of them as subject groupies—who do discuss the news with one another.
Why this matters: There are still powerful agenda setters and conversation starters in the new social media world. Enterprises and personalities with loyal followings can still have a large impact on the conversation.

Support Networks in Twitter Conversations

Support Network: Customer complaints for a major business are often handled by a Twitter service account that attempts to resolve and manage customer issues around their products and services. This produces a hub and spoke structure that is different from the Broadcast Network pattern. In the Support Network structure, the hub account replies to many otherwise disconnected users, creating outward spokes. In contrast, in the Broadcast pattern, the hub gets replied to or retweeted by many disconnected people, creating inward spokes.
Why this matters: As government, businesses, and groups increasingly provide services and support via social media, support network structures become an important benchmark for evaluating the performance of these institutions. Customer support streams of advice and feedback can be measured in terms of efficiency and reach using social media network maps.
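
The two hub-and-spoke archetypes differ only in edge direction, which makes them easy to tell apart programmatically. A minimal sketch, assuming edges point from the replier to the account being replied to:

```python
import networkx as nx

# Direction convention: an edge (a, b) means "a replied to b".
# In a Broadcast Network the hub is replied to (high in-degree);
# in a Support Network the hub does the replying (high out-degree).
broadcast = nx.DiGraph([(u, "newsdesk") for u in ("a", "b", "c", "d")])
support = nx.DiGraph([("helpdesk", u) for u in ("a", "b", "c", "d")])

def spoke_direction(G: nx.DiGraph, hub: str) -> str:
    if G.in_degree(hub) > G.out_degree(hub):
        return "inward spokes (broadcast pattern)"
    return "outward spokes (support pattern)"

print(spoke_direction(broadcast, "newsdesk"))  # inward spokes
print(spoke_direction(support, "helpdesk"))    # outward spokes
```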

Why is it useful to map the social landscape this way?

Social media is increasingly home to civil society, the place where knowledge sharing, public discussions, debates, and disputes are carried out. As the new public square, social media conversations are as important to document as any other large public gathering. Network maps of public social media discussions in services like Twitter can provide insights into the role social media plays in our society. These maps are like aerial photographs of a crowd, showing the rough size and composition of a population. These maps can be augmented with on-the-ground interviews with crowd participants, collecting their words and interests. Insights from network analysis and visualization can complement survey or focus group research methods and can enhance sentiment analysis of the text of messages like tweets.
Like topographic maps of mountain ranges, network maps can also illustrate the points on the landscape that have the highest elevation. Some people occupy locations in networks that are analogous to positions of strategic importance on the physical landscape. Network measures of “centrality” can identify key people in influential locations in the discussion network, highlighting the people leading the conversation. The content these people create is often the most popular and widely repeated in these networks, reflecting the significant role these people play in social media discussions.
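
For instance, betweenness centrality (one of several standard centrality measures; the report does not say which ones Pew used) flags participants who bridge otherwise separate parts of a conversation:

```python
import networkx as nx

# Toy conversation graph: edges are replies/mentions between participants.
G = nx.Graph([("ana", "bob"), ("bob", "cam"), ("bob", "dee"),
              ("dee", "eli"), ("eli", "fox"), ("dee", "fox")])

# High betweenness marks people sitting on many shortest paths between
# others, i.e. the brokers holding the conversation together.
for name, score in sorted(nx.betweenness_centrality(G).items(),
                          key=lambda kv: -kv[1])[:3]:
    print(f"{name}: {score:.2f}")
```
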
While the physical world has been mapped in great detail, the social media landscape remains mostly unknown. However, the tools and techniques for social media mapping are improving, allowing more analysts to get social media data, analyze it, and contribute to the collective construction of a more complete map of the social media world. A more complete map and understanding of the social media landscape will help interpret the trends, topics, and implications of these new communication technologies.”

Can We Balance Data Protection With Value Creation?


A “privacy perspective” by Sara Degli Esposti: “In the last few years there has been a dramatic change in the opportunities organizations have to generate value from the data they collect about customers or service users. Customers and users are rapidly becoming collections of “data points” and organizations can learn an awful lot from the analysis of this huge accumulation of data points, also known as “Big Data.”

Organizations are perhaps thrilled, dreaming about new potential applications of digital data but also a bit concerned about hidden risks and unintended consequences. Take, for example, the human rights protections placed on personal data by the EU. Regulators are watching closely, intending to preserve the eight basic privacy principles without compromising the free flow of information.
Some may ask whether it’s even possible to balance the two.
Enter the Big Data Protection Project (BDPP): an Open University study on organizations’ ability to leverage Big Data while complying with EU data protection principles. The study represents a chance for you to contribute to, and learn about, the debate on the reform of the EU Data Protection Directive. It is open to staff with an interest in data management or use at all types of organizations, both for-profit and nonprofit, with interests in Europe.
Join us by visiting the study’s page on the Open University website. Participants will receive a report with all the results. The BDPP is a scientific project—no commercial organization is involved—with implications relevant to both policy-makers and industry representatives.
What kind of legislation do we need to create that positive system of incentive for organizations to innovate in the privacy field?
There is no easy answer.
That’s why we need to undertake empirical research into actual information management practices to understand the effects of regulation on people and organizations. Legal instruments conceived with the best intentions can be ineffective or detrimental in practice. However, other factors can also intervene and motivate business players to develop procedures and solutions which go far beyond compliance. Good legislation should complement market forces in bringing values and welfare to both consumers and organizations.
Is European data protection law keeping its promise of protecting users’ information privacy while contributing to the flourishing of the digital economy or not? Will the proposed General Data Protection Regulation (GDPR) be able to achieve this goal? What would you suggest to do to motivate organizations to invest in information security and take information privacy seriously?
Let’s consider for a second some basic ideas such as the eight fundamental data protection principles: notice, consent, purpose specification and limitation, data quality, respect of data subjects’ rights, information security and accountability. Many of these ideas are present in the EU 1995 Data Protection Directive, the U.S. Fair Information Practice Principles (FIPPs) and the 1980 OECD Guidelines. The fundamental question now is, should all these ideas be brought into the future, as suggested in the proposed new GDPR, or should we reconsider our approach and revise some of them, as recommended in the 21st century version of the 1980 OECD Guidelines?
As you may know, notice and consent are often taken as examples of how very good intentions can be transformed into actions of limited importance. Rather than increase people’s awareness of the growing data economy, notice and consent have produced a tick-box tendency accompanied by long and unintelligible privacy policies. Besides, consent is rarely freely granted. Individuals give their consent in exchange for some product or service or as part of a job relationship. The imbalance between the two goods traded—think about how youngsters perceive not having access to some social media as a form of social exclusion—and the lack of feasible alternatives often make an instrument, such as the current use made of consent, meaningless.
On the other hand, a principle such as data quality, which has received very limited attention, could offer opportunities to policy-makers and businesses to reopen the debate on users’ control of their personal data. Having updated, accurate data is something very valuable for organizations. Data quality is also key to the success of many business models. New partnerships between users and organizations could be envisioned under this principle.
Finally, data collection limitation and purpose specification could be other examples of the divide between theory and practice: The tendency we see is that people and businesses want to share, merge and reuse data over time and to do new and unexpected things. Of course, we all want to avoid function creep and prevent any detrimental use of our personal data. We probably need new, stronger mechanisms to ensure data are used for good purposes.
Digital data have become economic assets these days. We need good legislation to stop the black market for personal data and open the debate on how each of us wants to contribute to, and benefit from, the data economy.”

Selfiecity


A new project investigating the style of self-portraits (selfies) in five cities across the world: “Selfiecity investigates selfies using a mix of theoretic, artistic and quantitative methods:

  • We present our findings about the demographics of people taking selfies, their poses and expressions.
  • Rich media visualizations (imageplots) assemble thousands of photos to reveal interesting patterns (a toy version is sketched after this list).
  • The interactive selfiexploratory allows you to navigate the whole set of 3200 photos.
  • Finally, theoretical essays discuss selfies in the history of photography, the functions of images in social media, and methods and dataset.”
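
For a sense of how an imageplot works, here is a minimal sketch that tiles photos from a hypothetical selfies/ folder into one sheet, ordered by brightness. Selfiecity’s actual plots were produced with dedicated visualization tools; this only illustrates the principle.

```python
from pathlib import Path
from PIL import Image, ImageStat

# Toy imageplot: tile thumbnails into a grid, sorted by mean brightness.
# The "selfies" folder and file pattern are assumptions for illustration.
TILE, COLS = 40, 20
thumbs = []
for path in sorted(Path("selfies").glob("*.jpg"))[:400]:
    img = Image.open(path).convert("RGB").resize((TILE, TILE))
    brightness = ImageStat.Stat(img.convert("L")).mean[0]
    thumbs.append((brightness, img))
thumbs.sort(key=lambda t: t[0])  # dark selfies left, bright ones right

rows = -(-len(thumbs) // COLS)  # ceiling division
sheet = Image.new("RGB", (COLS * TILE, max(rows, 1) * TILE))
for i, (_, img) in enumerate(thumbs):
    sheet.paste(img, ((i % COLS) * TILE, (i // COLS) * TILE))
sheet.save("imageplot.jpg")
```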

Are bots taking over Wikipedia?


Kurzweil News: “As crowdsourced Wikipedia has grown too large — with more than 30 million articles in 287 languages — to be entirely edited and managed by volunteers, 12 Wikipedia bots have emerged to pick up the slack.

The bots use Wikidata — a free knowledge base that can be read and edited by both humans and bots — to exchange information between entries and between the 287 languages.

Which raises an interesting question: what portion of Wikipedia edits are generated by humans versus bots?

To find out (and keep track of other bot activity), Thomas Steiner of Google Germany has created an open-source application (and API): Wikipedia and Wikidata Realtime Edit Stats, described in an arXiv paper.
The percentages of bot vs. human edits shown in the application are constantly changing. A KurzweilAI snapshot on Feb. 20 at 5:19 AM EST showed an astonishing 42% of Wikipedia being edited by bots. (The application lists the 12 bots.)


[Chart: Anonymous vs. logged-in humans (credit: Thomas Steiner)]
The percentages also vary by language. Only 5% of English edits were by bots, but for Serbian pages, where few Wikipedians apparently participate, 96% of edits were by bots.

The application also tracks what percentage of edits are by anonymous users. Globally, it was 25 percent in our snapshot and a surprising 34 percent for English — raising interesting questions about corporate and other interests covertly manipulating Wikipedia information.”
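
Anyone can reproduce a snapshot like this today. The sketch below samples Wikimedia’s public EventStreams feed (a documented successor to the realtime feeds such tools consume) and tallies bot, anonymous, and logged-in edits. The IP-shaped-username test for anonymity is a heuristic of ours, not part of Steiner’s application.

```python
import json
import re
import requests

# Wikimedia's public server-sent-events feed of recent changes.
URL = "https://stream.wikimedia.org/v2/stream/recentchange"
IP = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")  # anon edits are credited to IPs
counts = {"bot": 0, "anon": 0, "human": 0}
seen = 0

with requests.get(URL, stream=True,
                  headers={"Accept": "text/event-stream"}) as resp:
    for raw in resp.iter_lines():
        if not raw or not raw.startswith(b"data: "):
            continue  # skip SSE keepalives and non-data lines
        event = json.loads(raw[6:])
        if event.get("bot"):
            counts["bot"] += 1
        elif IP.match(event.get("user", "")):
            counts["anon"] += 1
        else:
            counts["human"] += 1
        seen += 1
        if seen >= 2000:  # stop after a fixed sample
            break

for kind, n in counts.items():
    print(f"{kind}: {100 * n / seen:.0f}%")
```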

Can Twitter Predict Major Events Such As Mass Protests?


Emerging Technology from the arXiv: “The idea that social media sites such as Twitter can predict the future has a controversial history. In the last few years, various groups have claimed to be able to predict everything from the outcome of elections to the box office takings for new movies.
It’s fair to say that these claims have generated their fair share of criticism. So it’s interesting to see a new claim come to light.
Today, Nathan Kallus at the Massachusetts Institute of Technology in Cambridge says he has developed a way to predict crowd behaviour using statements made on Twitter. In particular, he has analysed the tweets associated with the 2013 coup d’état in Egypt and says that the civil unrest associated with this event was clearly predictable days in advance.
It’s not hard to imagine how the future behaviour of crowds might be embedded in the Twitter stream. People often signal their intent to meet in advance and even coordinate their behaviour using social media. So this social media activity is a leading indicator of future crowd behaviour.
That makes it seem clear that predicting future crowd behaviour is simply a matter of picking this leading indicator out of the noise.
Kallus says this is possible by mining tweets for any mention of future events and then analysing trends associated with them. “The gathering of crowds into a single action can often be seen through trends appearing in this data far in advance,” he says.
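
A crude version of such trend detection is simple to sketch: given daily counts of tweets mentioning an announced event, flag days that are statistical outliers against a trailing baseline. This is illustrative only and is not Kallus’s method, which the paper describes in detail.

```python
import statistics

def burst_days(daily_counts, window=7, z_threshold=3.0):
    """Flag days whose count is a z_threshold-sigma outlier versus the
    trailing window: a crude leading-indicator detector."""
    alerts = []
    for i in range(window, len(daily_counts)):
        base = daily_counts[i - window:i]
        mu = statistics.mean(base)
        sd = statistics.pstdev(base) or 1.0  # avoid division by zero
        if (daily_counts[i] - mu) / sd >= z_threshold:
            alerts.append(i)
    return alerts

# Hypothetical daily counts of tweets naming one announced protest date.
counts = [12, 9, 14, 11, 10, 13, 12, 15, 11, 13, 48, 130, 310]
print(burst_days(counts))  # flags the surge in the days before the event
```
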
It turns out that exactly this kind of analysis is available from a company called Recorded Future based in Cambridge, which scans 300,000 different web sources in seven different languages from all over the world. It then extracts mentions of future events for later analysis….
The bigger question is whether it’s possible to pick out this evidence in advance. In other words, is it possible to make predictions before the events actually occur?
That’s not so clear but there are good reasons to be cautious. First of all, while it’s possible to correlate Twitter activity to real protests, it’s also necessary to rule out false positives. There may be significant Twitter trends that do not lead to significant protests in the streets. Kallus does not adequately address the question of how to tell these things apart.
Then there is the question of whether tweets are trustworthy. It’s not hard to imagine that when it comes to issues of great national consequence, propaganda, rumor and irony may play a significant role. So how to deal with this?
There is also the question of demographics and whether tweets truly represent the intentions and activity of the population as a whole. People who tweet are overwhelmingly likely to be young, but there is another silent majority that plays a hugely important role. So can the Twitter firehose really represent the intentions of this part of the population too?
The final challenge is in the nature of prediction. If the Twitter feed is predictive, then what’s needed is evidence that it can be used to make real predictions about the future and not just historical predictions about the past.
We’ve looked at some of these problems with the predictive power of social media before and the challenge is clear: if there is a claim to be able to predict the future, then this claim must be accompanied by convincing evidence of an actual prediction about an event before it happens.
Until then, it would surely be wise to be circumspect about the predictive powers of Twitter and other forms of social media.
Ref: arxiv.org/abs/1402.2308: Predicting Crowd Behavior with Big Public Data”

Canadian Organizations Join Forces to Launch Open Data Institute to Foster Open Government


Press Release: “The Canadian Digital Media Network, the University of Waterloo, Communitech, OpenText and Desire2Learn today announced the creation of the Open Data Institute.

The Open Data Institute, which received support from the Government of Canada in this week’s budget, will work with governments, academic institutions and the private sector to solve challenges facing “open government” efforts and realize the full potential of “open data.”
According to a statement, partners will work on development of common standards, the integration of data from different levels of government and the commercialization of data, “allowing Canadians to derive greater economic benefit from datasets that are made available by all levels of government.”
The Open Data Institute is a public-private partnership. Founding partners will contribute $3 million in cash and in-kind contributions over three years to establish the institute, a figure that has been matched by the Government of Canada.
“This is a strategic investment in Canada’s ability to lead the digital economy,” said Kevin Tuer, Managing Director of CDMN. “Similar to how a common system of telephone exchanges allowed world-wide communication, the Open Data Institute will help create a common platform to share and access datasets.”
“This will allow the development of new applications and products, creating new business opportunities and jobs across the country,” he added.
“The Institute will serve as a common forum for government, academia and the private sector to collaborate on Open Government initiatives with the goal of fueling Canadian tech innovation,” noted OpenText President and CEO Mark J. Barrenechea.
“The Open Data Institute has the potential to strengthen the regional economy and increase our innovative capacity,” added Feridun Hamdullahpur, president and vice-chancellor of the University of Waterloo.”

The newsonomics of measuring the real impact of news


Ken Doctor at Nieman Journalism Lab: “Hello there! It’s me, your friendly neighborhood Tweet Button. What if you could tap me and unlock a brand new source of funding for startup news sources of all kinds? What if, even better, you the reader could tap that money loose with a single click?
That’s the delightfully simple conceit behind a little widget, Impaq.me, you may have seen popping up as you traverse the news web. It’s social. It’s viral. It uses OPM (Other People’s Money) — and maybe a little bit of your own. It makes a new case to funders and maybe commercial sponsors. And it spits out metrics around the clock. It aims to be a convergence widget, acting on that now-aging idea that our attention is as important as our wallet. Consider it a new digital Swiss Army knife for the attention economy. TWEET
It’s impossible to tell how much of an impact Impaq.me may have. It’s still in its second round of testing at six of the U.S.’s most successful independent nonprofit startups — MinnPost, Center for Investigative Reporting, The Texas Tribune, Voice of San Diego, ProPublica, and the Center for Public Integrity — but as in all things digital, timing is everything. And that timing seems right.
First, let’s consider that spate of new news sites that have sprouted with the winter rains — Bill Keller’s and Neil Barsky’s Marshall Project being only the latest. It’s been quite a run — from Ezra Klein’s Project X to Pierre Omidyar’s First Look (and its just-launched The Intercept) to the reimagining of FiveThirtyEight. While they encompass a broad range of business models and goals (“The newsonomics of why everyone seems to be starting a news site”), they all need two things: money and engagement. Or, maybe better ordered, engagement and money. The dance between the two is still in the early stages of Internet choreography. Get the sequences right and you win.
Second, and related, is the big question of “social” and how our sharing of news is changing the old publishing dynamic of editors deciding what we’re going to read. Just this week, two pieces here at the Lab — one on Upworthy’s influence and one on the social/search tango — highlighted the still-being-understood role of social in our news-reading lives.
Third, funders of news sites, especially Knight and other lead foundations, are looking for harder evidence of the value generated by their early grants. Millions have been poured into creating new news sites. Now they’re asking: What has our funding really done? Within that big question, Impaq.me is only one of several new attempts to demonstrably measure real impact in new ways. We’ll take a brief look at those impact initiatives below….
If Impaq.me is all about impact and money, then it’s got good company. There are at least two other noteworthy impact-measuring projects going on.

  • The Center for Investigative Reporting’s impact-tracking initiative, Impact Tracker, launched last fall. The big idea: getting beyond traditional metrics like unique visitors and pageviews to track the value of investigative and enterprise work. To that end, CIR has hired Lindsay Green-Barber, a CUNY-trained social scientist, and given her a perhaps first-ever title: media impact analyst. We can see the fruits of the work around CIR’s impressive Returning Home to Battle veterans series. On that series, CIR is tracking such impacts as change and rise in the public discourse around veterans’ issues and related allocation of government resources. The notion of good journalism intended to shine a light in dark places has been embedded in the CIR DNA for a long time; this new effort is intended to provide data — and words — to describe progress toward solutions. CIR is working with The Seattle Times on the impact of that paper’s education reporting, and CIR may soon look at more partnerships as well. Related: CIR is holding two “Dissection” events in New York and Washington in April, bringing together journalists, funders, and social scientists to widen the media impact movement.
  • Chalkbeat, a growing national education news site, is also moving on impact analysis. It’s called MORI (Measures of our Reporting’s Influence), and it’s a WordPress plugin. Says Chalkbeat cofounder Elizabeth Green: “We built MORI to solve for a problem that I guess you could call ‘impact loss.’ We knew that our stories were having all kinds of impacts, but we had no way of keeping track of these impacts or making sense of them. That meant that we couldn’t easily compile what we had done in the last year to share with the outside world (board, donors, foundations, readers, our moms) but also — just as important — we couldn’t look back on what we’d done and learn from it.” Sound familiar?
    After much inquiry, Chalkbeat settled on technology. “Within each story’s back end,” Green said, “we can enter inputs — qualitative data about the type of story, topic, and target audience — as well as outcomes — impacts on policy and practice (what we call ‘informed action’) as well as impacts on what we call ‘civic deliberation.’”
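
To picture what a plugin like MORI records, here is a hedged sketch of a per-story impact record. The field names paraphrase Green’s description of inputs and outcomes and are hypothetical; MORI itself is a WordPress plugin, not a Python library.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class StoryImpact:
    # Inputs: qualitative metadata entered in each story's back end.
    story_type: str
    topic: str
    target_audience: str
    # Outcomes: logged as they surface, long after publication.
    informed_actions: List[str] = field(default_factory=list)   # policy/practice
    civic_deliberation: List[str] = field(default_factory=list)

# Example use: record an impact months after the story ran.
story = StoryImpact("investigation", "school funding", "state legislators")
story.informed_actions.append("cited in committee hearing, 2014-03-02")
```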