Social Media as Government Watchdog


Gordon Crovitz in the Wall Street Journal: “Two new data points for the debate on whether greater access to the Internet leads to more freedom and fewer authoritarian regimes:

According to reports last week, Facebook plans to buy a company that makes solar-powered drones that can hover for years at high altitudes without refueling, which it would use to bring the Internet to parts of the world not yet on the grid. In contrast to this futuristic vision, Russia evoked land grabs of the analog Soviet era by invading Crimea after Ukrainians forced out Vladimir Putin’s ally as president.
Internet idealists can point to another triumph in helping bring down Ukraine’s authoritarian government. Ukrainian citizens ignored intimidation including officious text messages: “Dear subscriber, you are registered as a participant in a mass disturbance.” Protesters made the most of social media to plan demonstrations and avoid attacks by security forces.
But Mr. Putin quickly delivered the message that social media only goes so far against a fully committed authoritarian. His claim that he had to invade to protect ethnic Russians in Crimea was especially brazen because there had been no loud outcry, on social media or otherwise, among Russian speakers in the region.
A new book reports the state of play on the Internet as a force for freedom. For a decade, Emily Parker, a former Wall Street Journal editorial-page writer and State Department staffer, has researched the role of the Internet in China, Cuba and Russia. The title of her book, “Now I Know Who My Comrades Are,” comes from a blogger in China who explained to Ms. Parker how the Internet helps people discover they are not alone in their views and aspirations for liberty.
Officials in these countries work hard to keep critics isolated and in fear. In Russia, Ms. Parker notes, there is also apathy because the Putin regime seems so entrenched. “Revolutions need a spark, often in the form of a political or economic crisis,” she observes. “Social media alone will not light that spark. What the Internet does create is a new kind of citizen: networked, unafraid, and ready for action.”
Asked about lessons from the invasion of Crimea, Ms. Parker noted that the Internet “chips away at Russia’s control over information.” She added: “Even as Russian state media tries to shape the narrative about Ukraine, ordinary Russians can go online to seek the truth.”
But this same shared awareness may also be accelerating a decline in U.S. influence. In the digital era, U.S. failure to make good on its promises reduces the stature of Washington faster than similar inaction did in the past.
Consider the Hungarian uprising of 1956, the first significant rebellion against Soviet control. The U.S. secretary of state, John Foster Dulles, said: “To all those suffering under communist slavery, let us say you can count on us.” Yet no help came as Soviet tanks rolled into Budapest, tens of thousands were killed, and the leader who tried to secede from the Warsaw Pact, Imre Nagy, was executed.
There were no Facebook posts or YouTube videos instantly showing the result of U.S. fecklessness. In the digital era, scenes of Russian occupation of Crimea are available 24/7. People can watch Mr. Putin’s brazen press conferences and see for themselves what he gets away with.
The U.S. stood by as Syrian civilians were massacred and gassed. There was instant global awareness when President Obama last year backed down from enforcing his “red line” when the Syrian regime used chemical weapons. American inaction in Syria was a green light for Mr. Putin and others around the world to act with impunity.
Just in recent weeks, Iran tried to ship Syrian rockets to Gaza to attack Israel; Moscow announced it would use bases in Cuba, Venezuela and Nicaragua for its navy and bombers; and China budgeted a double-digit increase in military spending as President Obama cut back the U.S. military.
All institutions are more at risk in this era of instant communication and awareness. Reputations get lost quickly, whether it’s a misstep by a company, a gaffe by a politician, or a lack of resolve by an American president.
Over time, the power of the Internet to bring people together will help undermine authoritarian governments. But as Mr. Putin reminds us, in the short term a peaceful world depends more on a U.S. resolute in using its power and influence to deter aggression.”

The disruptive power of collaboration: An interview with Clay Shirky


McKinsey: “From the invention of the printing press to the telephone, the radio, and the Internet, the ways people collaborate change frequently, and the effects of those changes often reverberate through generations. In this video interview, Clay Shirky, author, New York University professor, and leading thinker on the impact of social media, explains the disruptive impact of technology on how people live and work—and on the economics of what we make and consume. This interview was conducted by McKinsey Global Institute partner Michael Chui, and an edited transcript of Shirky’s remarks follows….
Shirky:…The thing I’ve always looked at, because it is long-term disruptive, is changes in the way people collaborate. Because in the history of particularly the Western world, when communications tools come along and they change how people can contact each other, how they can share information, how they can find each other—we’re talking about the printing press, or the telephone, or the radio, or what have you—the changes that are left in the wake of those new technologies often span generations.
The printing press was a sustaining technology for the scientific revolution, the spread of newspapers, the spread of democracy, just on down the list. So the thing I always watch out for, when any source of disruption comes along, when anything that’s going to upset the old order comes along, is I look for what the collaborative penumbra is.”

Open Government - Opportunities and Challenges for Public Governance


New volume of the Public Administration and Information Technology series: “Given this global context, and taking into account the needs of both academics and practitioners, it is the intention of this book to shed light on the open government concept and, in particular:
• To provide comprehensive knowledge of recent major developments of open government around the world.
• To analyze the importance of open government efforts for public governance.
• To provide insightful analysis about those factors that are critical when designing, implementing and evaluating open government initiatives.
• To discuss how contextual factors affect open government initiatives’ success or failure.
• To explore the existence of theoretical models of open government.
• To propose strategies to move forward and to address future challenges in an international context.”

Big Data, Big New Businesses


Nigel Shadbolt and Michael Chui: “Many people have long believed that if government and the private sector agreed to share their data more freely, and allow it to be processed using the right analytics, previously unimaginable solutions to countless social, economic, and commercial problems would emerge. They may have no idea how right they are.

Even the most vocal proponents of open data appear to have underestimated how many profitable ideas and businesses stand to be created. More than 40 governments worldwide have committed to opening up their electronic data – including weather records, crime statistics, transport information, and much more – to businesses, consumers, and the general public. The McKinsey Global Institute estimates that the annual value of open data in education, transportation, consumer products, electricity, oil and gas, health care, and consumer finance could reach $3 trillion.

These benefits come in the form of new and better goods and services, as well as efficiency savings for businesses, consumers, and citizens. The range is vast. For example, drawing on data from various government agencies, the Climate Corporation (recently bought for $1 billion) has taken 30 years of weather data, 60 years of data on crop yields, and 14 terabytes of information on soil types to create customized insurance products.

Similarly, real-time traffic and transit information can be accessed on smartphone apps to inform users when the next bus is coming or how to avoid traffic congestion. And, by analyzing online comments about their products, manufacturers can identify which features consumers are most willing to pay for, and develop their business and investment strategies accordingly.

Opportunities are everywhere. A raft of open-data start-ups are now being incubated at the London-based Open Data Institute (ODI), which focuses on improving our understanding of corporate ownership, health-care delivery, energy, finance, transport, and many other areas of public interest.

Consumers are the main beneficiaries, especially in the household-goods market. It is estimated that consumers making better-informed buying decisions across sectors could capture $1.1 trillion in value annually. Third-party data aggregators are already allowing customers to compare prices across online and brick-and-mortar shops. Many also permit customers to compare quality ratings, safety data (drawn, for example, from official injury reports), information about the provenance of food, and producers’ environmental and labor practices.

Consider the book industry. Bookstores once regarded their inventory as a trade secret. Customers, competitors, and even suppliers seldom knew what stock bookstores held. Nowadays, by contrast, bookstores not only report what stock they carry but also when customers’ orders will arrive. If they did not, they would be excluded from the product-aggregation sites that have come to determine so many buying decisions.

The health-care sector is a prime target for achieving new efficiencies. By sharing the treatment data of a large patient population, for example, care providers can better identify practices that could save $180 billion annually.

The Open Data Institute-backed start-up Mastodon C uses open data on doctors’ prescriptions to differentiate among expensive patent medicines and cheaper “off-patent” varieties; when applied to just one class of drug, that could save around $400 million in one year for the British National Health Service. Meanwhile, open data on acquired infections in British hospitals has led to the publication of hospital-performance tables, a major factor in the 85% drop in reported infections.
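To make the mechanics concrete, here is a minimal, hypothetical sketch of the kind of analysis involved: price branded prescribing at the unit cost of its generic equivalent and sum the difference. The file name and column names below are invented stand-ins, not Mastodon C's actual pipeline or the NHS prescribing schema.

```python
# Hypothetical sketch: estimate savings from switching branded prescriptions
# to generic equivalents. The CSV and its columns (drug_class, is_generic,
# items, total_cost) are illustrative assumptions, not the real NHS schema.
import pandas as pd

df = pd.read_csv("prescriptions.csv")  # is_generic is expected to be boolean

# Average cost per item for generic vs. branded prescribing, per drug class
cost_per_item = (
    df.groupby(["drug_class", "is_generic"])
      .agg(items=("items", "sum"), cost=("total_cost", "sum"))
      .assign(unit_cost=lambda g: g["cost"] / g["items"])
      .reset_index()
)

generic = cost_per_item[cost_per_item["is_generic"]].set_index("drug_class")["unit_cost"]
branded = cost_per_item[~cost_per_item["is_generic"]].set_index("drug_class")

# Saving if every branded item were repriced at the generic unit cost
branded = branded.join(generic.rename("generic_unit_cost"), how="inner")
branded["potential_saving"] = (
    (branded["unit_cost"] - branded["generic_unit_cost"]) * branded["items"]
)

print(branded["potential_saving"].clip(lower=0).sum())
```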

There are also opportunities to prevent lifestyle-related diseases and improve treatment by enabling patients to compare their own data with aggregated data on similar patients. This has been shown to motivate patients to improve their diet, exercise more often, and take their medicines regularly. Similarly, letting people compare their energy use with that of their peers could prompt them to save hundreds of billions of dollars in electricity costs each year, to say nothing of reducing carbon emissions.

Such benchmarking is even more valuable for businesses seeking to improve their operational efficiency. The oil and gas industry, for example, could save $450 billion annually by sharing anonymized and aggregated data on the management of upstream and downstream facilities.

Finally, the move toward open data serves a variety of socially desirable ends, ranging from the reuse of publicly funded research to support work on poverty, inclusion, or discrimination, to the disclosure by corporations such as Nike of their supply-chain data and environmental impact.

There are, of course, challenges arising from the proliferation and systematic use of open data. Companies fear for their intellectual property; ordinary citizens worry about how their private information might be used and abused. Last year, Telefónica, the world’s fifth-largest mobile-network provider, tried to allay such fears by launching a digital confidence program to reassure customers that innovations in transparency would be implemented responsibly and without compromising users’ personal information.

The sensitive handling of these issues will be essential if we are to reap the potential $3 trillion in value that usage of open data could deliver each year. Consumers, policymakers, and companies must work together, not just to agree on common standards of analysis, but also to set the ground rules for the protection of privacy and property.”

Disinformation Visualization: How to lie with datavis


Mushon Zer-Aviv at School of Data: “Seeing is believing. When working with raw data we’re often encouraged to present it differently, to give it a form, to map it or visualize it. But all maps lie. In fact, maps have to lie, otherwise they wouldn’t be useful. Some lies are transparent and obvious, such as when a tree icon on a map represents more than one tree. Others are white lies – rounding numbers and prioritising details to create a more legible representation. And then there’s the third type of lie, those lies that convey a bias, be it deliberately or subconsciously. A bias that misrepresents the data and skews it towards a certain reading.

It all sounds very sinister, and indeed sometimes it is. It’s hard to see through a lie unless you stare it right in the face, and what better way to do that than to get our minds dirty and look at some examples of creative and mischievous visual manipulation.
Over the past year I’ve had a few opportunities to run Disinformation Visualization workshops, encouraging activists, designers, statisticians, analysts, researchers, technologists and artists to visualize lies. During these sessions I have used the DIKW pyramid (Data > Information > Knowledge > Wisdom), a framework for thinking about how data gains context and meaning and becomes information. This information needs to be consumed and understood to become knowledge. And finally when knowledge influences our insights and our decision making about the future it becomes wisdom. Data visualization is one of the ways to push data up the pyramid towards wisdom in order to affect our actions and decisions. It would be wise then to look at visualizations suspiciously.
[Figure: the DIKW pyramid]
Centuries before big data, computer graphics and social media collided and gave us the datavis explosion, visualization was mostly a scientific tool for inquiry and documentation. This history gave the artform its authority as an integral part of the scientific process. Being a product of human brains and hands, a certain degree of bias was always there, no matter how scientific the process was. The effects of these early off-white lies are still felt today, as even our most celebrated interactive maps still echo the biases of the Mercator map projection, grounding Europe and North America at the top of the world and overemphasizing their size and perceived importance relative to the Global South. Our contemporary practice of programmatic, data-driven visualization hides both the human eyes and hands that produce it behind data sets, algorithms and computer graphics, but the same biases are still there, only they’re harder to decipher…”
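As a concrete illustration of that third kind of lie, the short sketch below plots the same invented numbers twice: once with the y-axis starting at zero and once with a truncated axis that visually inflates a two-percent difference into an apparent landslide. The data are made up purely for demonstration.

```python
# Hypothetical example of a biased visualization: identical data, two y-axes.
# The numbers are invented for illustration only.
import matplotlib.pyplot as plt

labels = ["Option A", "Option B"]
values = [50.0, 51.0]  # a 2% relative difference

fig, (honest, skewed) = plt.subplots(1, 2, figsize=(8, 3))

honest.bar(labels, values)
honest.set_ylim(0, 60)
honest.set_title("Axis starts at zero")

skewed.bar(labels, values)
skewed.set_ylim(49.5, 51.5)  # truncated axis exaggerates the gap
skewed.set_title("Truncated axis")

plt.tight_layout()
plt.show()
```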

The Power to Give


Press Release: “HTC, a global leader in mobile innovation and design, today unveiled HTC Power To Give™, an initiative that aims to create a supercomputer by harnessing the collective processing power of Android smartphones.
Currently in beta, HTC Power To Give aims to galvanize smartphone owners to unlock their unused processing power in order to help answer some of society’s biggest questions. Today, the fight against cancer, AIDS and Alzheimer’s, the drive to ensure every child has clean water to drink, and even the search for extra-terrestrial life are all being tackled by volunteer computing platforms.
Empowering people to use their Android smartphones to offer tangible support for vital fields of research, including medicine, science and ecology, HTC Power To Give has been developed in partnership with Dr. David Anderson of the University of California, Berkeley.  The project will support the world’s largest volunteer computing initiative and tap into the powerful processing capabilities of a global network of smartphones.
Strength in numbers
One million HTC One smartphones, working towards a project via HTC Power To Give, could provide processing power similar to that of one of the world’s top 30 supercomputers (one petaFLOP). This could drastically shorten the research cycles for organizations that would otherwise have to spend years analyzing the same volume of data, potentially bringing forward important discoveries in vital subjects by weeks, months, years or even decades. For example, one of the programs available at launch is IBM’s World Community Grid, which gives anyone an opportunity to advance science by donating their computer, smartphone or tablet’s unused computing power to humanitarian research. To date, World Community Grid volunteers have contributed almost 900,000 years’ worth of processing time to cutting-edge research.
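The implied per-device figure is easy to sanity-check: one petaFLOP spread across one million phones works out to roughly one gigaFLOP per second per handset, a plausible order of magnitude for a 2013-era smartphone. The back-of-the-envelope check below simply restates the press release's own numbers.

```python
# Back-of-the-envelope check of the press release's claim.
phones = 1_000_000
total_flops = 1e15          # one petaFLOP/s claimed for one million phones
per_phone = total_flops / phones
print(f"{per_phone:.0e} FLOP/s per phone")  # ~1e9, i.e. about 1 GFLOP/s
```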
Limitless future potential
Cher Wang, Chairwoman, HTC commented, “We’ve often used innovation to bring about change in the mobile industry, but this programme takes our vision one step further. With HTC Power To Give, we want to make it possible for anyone to dedicate their unused smartphone processing power to contribute to projects that have the potential to change the world.”
“HTC Power To Give will support the world’s largest volunteer computing initiative, and the impact that this project will have on the world over the years to come is huge. This changes everything,” noted Dr. David Anderson, Inventor of the Shared Computing Initiative BOINC, University of California, Berkeley.
Cher Wang added, “We’ve been discussing the impact that just one million HTC Power To Give-enabled smartphones could make; however, analysts estimate that over 780 million Android phones were shipped in 2013 alone. Imagine the difference we could make to our children’s future if just a fraction of these Android users were able to divert some of their unused processing power to help find answers to the questions that concern us all.”
Opt-in with ease
After downloading the HTC Power To Give app from the Google Play™ store, smartphone owners can select the research programme to which they will divert a proportion of their phone’s processing power. HTC Power To Give will then run while the phone is charging and connected to a WiFi network, enabling people to change the world whilst sitting at their desk or relaxing at home.
The beta version of HTC Power To Give will be available to download from the Google Play store and will initially be compatible with the HTC One family, HTC Butterfly and HTC Butterfly s. HTC plans to make the app more widely available to other Android smartphone owners in the coming six months as the beta trial progresses.”

Mapping Twitter Topic Networks: From Polarized Crowds to Community Clusters


Pew Internet: “Conversations on Twitter create networks with identifiable contours as people reply to and mention one another in their tweets. These conversational structures differ, depending on the subject and the people driving the conversation. Six structures are regularly observed: divided, unified, fragmented, clustered, and inward and outward hub and spoke structures. These are created as individuals choose whom to reply to or mention in their Twitter messages and the structures tell a story about the nature of the conversation.
If a topic is political, it is common to see two separate, polarized crowds take shape. They form two distinct discussion groups that mostly do not interact with each other. Frequently these are recognizably liberal or conservative groups. The participants within each separate group commonly mention very different collections of website URLs and use distinct hashtags and words. The split is clearly evident in many highly controversial discussions: people in clusters that we identified as liberal used URLs for mainstream news websites, while groups we identified as conservative used links to conservative news websites and commentary sources. At the center of each group are discussion leaders, the prominent people who are widely replied to or mentioned in the discussion. In polarized discussions, each group links to a different set of influential people or organizations that can be found at the center of each conversation cluster.
While these polarized crowds are common in political conversations on Twitter, it is important to remember that the people who take the time to post and talk about political issues on Twitter are a special group. Unlike many other Twitter members, they pay attention to issues, politicians, and political news, so their conversations are not representative of the views of the full Twitterverse. Moreover, Twitter users are only 18% of internet users and 14% of the overall adult population. Their demographic profile is not reflective of the full population. Additionally, other work by the Pew Research Center has shown that tweeters’ reactions to events are often at odds with overall public opinion— sometimes being more liberal, but not always. Finally, forthcoming survey findings from Pew Research will explore the relatively modest size of the social networking population who exchange political content in their network.
Still, the structure of these Twitter conversations says something meaningful about political discourse these days and the tendency of politically active citizens to sort themselves into distinct partisan camps. Social networking maps of these conversations provide new insights because they combine analysis of the opinions people express on Twitter, the information sources they cite in their tweets, analysis of who is in the networks of the tweeters, and how big those networks are. And to the extent that these online conversations are followed by a broader audience, their impact may reach well beyond the participants themselves.
Our approach combines analysis of the size and structure of the network and its sub-groups with analysis of the words, hashtags and URLs people use. Each person who contributes to a Twitter conversation is located in a specific position in the web of relationships among all participants in the conversation. Some people occupy rare positions in the network that suggest that they have special importance and power in the conversation.
Social network maps of Twitter crowds and other collections of social media can be created with innovative data analysis tools that provide new insight into the landscape of social media. These maps highlight the people and topics that drive conversations and group behavior – insights that add to what can be learned from surveys or focus groups or even sentiment analysis of tweets. Maps of previously hidden landscapes of social media highlight the key people, groups, and topics being discussed.
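Pew's own maps were built with dedicated network-analysis software; as a purely illustrative sketch of the underlying idea (invented edge list, and our own choice of community-detection method, not their pipeline), one can build a directed reply/mention graph, split it into communities, and check how few edges cross between the two largest groups, which is the signature of a Polarized Crowd.

```python
# Illustrative sketch, not Pew's actual method: build a directed reply/mention
# graph from a toy edge list and look for a polarized two-group split.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical edge list: (author, account replied to or mentioned)
edges = [
    ("alice", "lib_news"), ("bob", "lib_news"), ("carol", "lib_news"),
    ("dave", "con_news"), ("erin", "con_news"), ("frank", "con_news"),
    ("alice", "bob"), ("dave", "erin"),
]
G = nx.DiGraph(edges)

# Community detection on the undirected projection
communities = list(greedy_modularity_communities(G.to_undirected()))
big0, big1 = sorted(communities, key=len, reverse=True)[:2]

# A Polarized Crowd has very few edges crossing the two largest groups
cross = sum(1 for u, v in G.edges()
            if (u in big0 and v in big1) or (u in big1 and v in big0))
print(f"{len(communities)} communities; {cross} edges cross the two largest groups")

# The most replied-to / mentioned accounts are candidate discussion leaders
print(sorted(G.in_degree(), key=lambda kv: kv[1], reverse=True)[:3])
```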

Conversational archetypes on Twitter

The Polarized Crowd network structure is only one of several different ways that crowds and conversations can take shape on Twitter. There are at least six distinctive structures of social media crowds which form depending on the subject being discussed, the information sources being cited, the social networks of the people talking about the subject, and the leaders of the conversation. Each has a different social structure and shape: divided, unified, fragmented, clustered, and inward and outward hub and spokes.
After an analysis of many thousands of Twitter maps, we found six different kinds of network crowds.

Polarized Crowds in Twitter Conversations

Polarized Crowd: Polarized discussions feature two big and dense groups that have little connection between them. The topics being discussed are often highly divisive and heated political subjects. In fact, there is usually little conversation between these groups despite the fact that they are focused on the same topic. Polarized Crowds on Twitter are not arguing. They are ignoring one another while pointing to different web resources and using different hashtags.
Why this matters: It shows that partisan Twitter users rely on different information sources. While liberals link to many mainstream news sources, conservatives link to a different set of websites.

Tight Crowds in Twitter Conversations

Tight Crowd: These discussions are characterized by highly interconnected people with few isolated participants. Many conferences, professional topics, hobby groups, and other subjects that attract communities take this Tight Crowd form.
Why this matters: These structures show how networked learning communities function and how sharing and mutual support can be facilitated by social media.

Brand Clusters in Twitter Conversations

Brand Clusters: When well-known products or services or popular subjects like celebrities are discussed on Twitter, there is often commentary from many disconnected participants; these “isolates” appear on the left side of the cluster map. Well-known brands and other popular subjects can attract large, fragmented Twitter populations who tweet about the subject but not to each other. The larger the population talking about a brand, the less likely it is that participants are connected to one another. Brand-mentioning participants focus on a topic, but tend not to connect to each other.
Why this matters: There are still institutions and topics that command mass interest. Oftentimes, the Twitter chatter about these institutions and their messages is not among people connecting with each other. Rather, they are relaying or passing along the message of the institution or person, and there is no extra exchange of ideas.

Community Clusters in Twitter Conversations

Community Clusters: Some popular topics may develop multiple smaller groups, which often form around a few hubs each with its own audience, influencers, and sources of information. These Community Clusters conversations look like bazaars with multiple centers of activity. Global news stories often attract coverage from many news outlets, each with its own following. That creates a collection of medium-sized groups—and a fair number of isolates (the left side of the picture above).
Why this matters: Some information sources and subjects ignite multiple conversations, each cultivating its own audience and community. These can illustrate diverse angles on a subject based on its relevance to different audiences, revealing a diversity of opinion and perspective on a social media topic.

Broadcast Networks in Twitter Conversations

Broadcast Network: Twitter commentary around breaking news stories and the output of well-known media outlets and pundits has a distinctive hub and spoke structure in which many people repeat what prominent news and media organizations tweet. The members of the Broadcast Network audience are often connected only to the hub news source, without connecting to one another. In some cases there are smaller subgroups of densely connected people (think of them as subject groupies) who do discuss the news with one another.
Why this matters: There are still powerful agenda setters and conversation starters in the new social media world. Enterprises and personalities with loyal followings can still have a large impact on the conversation.

Support Networks in Twitter Conversations

Support Network: Customer complaints for a major business are often handled by a Twitter service account that attempts to resolve and manage customer issues around their products and services. This produces a hub and spoke structure that is different from the Broadcast Network pattern. In the Support Network structure, the hub account replies to many otherwise disconnected users, creating outward spokes. In contrast, in the Broadcast pattern, the hub gets replied to or retweeted by many disconnected people, creating inward spokes.
Why this matters: As government, businesses, and groups increasingly provide services and support via social media, support network structures become an important benchmark for evaluating the performance of these institutions. Customer support streams of advice and feedback can be measured in terms of efficiency and reach using social media network maps.
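In graph terms, the inward/outward distinction is simply whether the hub's in-degree or its out-degree dominates. A small hypothetical sketch (the account names are invented, and the rule of thumb is ours, not a metric from the report):

```python
# Hypothetical sketch: classify a hub-and-spoke crowd as a Broadcast Network
# (inward spokes: many people reply to / retweet the hub) or a Support Network
# (outward spokes: the hub replies to many otherwise disconnected users).
import networkx as nx

def classify_hub(G: nx.DiGraph, hub: str) -> str:
    inward = G.in_degree(hub)    # edges pointing at the hub
    outward = G.out_degree(hub)  # edges from the hub to others
    return "Broadcast Network" if inward > outward else "Support Network"

support = nx.DiGraph([("helpdesk", f"user{i}") for i in range(20)])
broadcast = nx.DiGraph([(f"reader{i}", "newsdesk") for i in range(20)])

print(classify_hub(support, "helpdesk"))    # Support Network
print(classify_hub(broadcast, "newsdesk"))  # Broadcast Network
```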

Why is it useful to map the social landscape this way?

Social media is increasingly home to civil society, the place where knowledge sharing, public discussions, debates, and disputes are carried out. As the new public square, social media conversations are as important to document as any other large public gathering. Network maps of public social media discussions in services like Twitter can provide insights into the role social media plays in our society. These maps are like aerial photographs of a crowd, showing the rough size and composition of a population. They can be augmented with on-the-ground interviews with crowd participants, collecting their words and interests. Insights from network analysis and visualization can complement survey or focus group research methods and can enhance sentiment analysis of the text of messages like tweets.
Like topographic maps of mountain ranges, network maps can also illustrate the points on the landscape that have the highest elevation. Some people occupy locations in networks that are analogous to positions of strategic importance on the physical landscape. Network measures of “centrality” can identify key people in influential locations in the discussion network, highlighting the people leading the conversation. The content these people create is often the most popular and widely repeated in these networks, reflecting the significant role these people play in social media discussions.
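To make the "elevation" metaphor concrete, here is a minimal, hypothetical sketch that ranks participants of a toy conversation graph by two common centrality measures; the edge list and the choice of measures are illustrative assumptions, not necessarily the metrics used in the report.

```python
# Illustrative only: rank participants of an invented conversation graph by
# two common centrality measures.
import networkx as nx

G = nx.Graph([
    ("anchor", "a1"), ("anchor", "a2"), ("anchor", "a3"),
    ("bridge", "anchor"), ("bridge", "b_hub"),
    ("b_hub", "b1"), ("b_hub", "b2"),
])

degree = nx.degree_centrality(G)            # how many people someone talks with
betweenness = nx.betweenness_centrality(G)  # how often someone sits on paths between others

for name, scores in [("degree", degree), ("betweenness", betweenness)]:
    top = max(scores, key=scores.get)
    print(f"highest {name} centrality: {top} ({scores[top]:.2f})")
```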
While the physical world has been mapped in great detail, the social media landscape remains mostly unknown. However, the tools and techniques for social media mapping are improving, allowing more analysts to get social media data, analyze it, and contribute to the collective construction of a more complete map of the social media world. A more complete map and understanding of the social media landscape will help interpret the trends, topics, and implications of these new communication technologies.”

Are bots taking over Wikipedia?


Kurzweil News: “As crowdsourced Wikipedia has grown too large — with more than 30 million articles in 287 languages — to be entirely edited and managed by volunteers, 12 Wikipedia bots have emerged to pick up the slack.

The bots use Wikidata — a free knowledge base that can be read and edited by both humans and bots — to exchange information between entries and between the 287 languages.

Which raises an interesting question: what portion of Wikipedia edits are generated by humans versus bots?

To find out (and keep track of other bot activity), Thomas Steiner of Google Germany has created an open-source application (and API): Wikipedia and Wikidata Realtime Edit Stats, described in an arXiv paper.
The percentages of bot vs. human edits shown in the application are constantly changing. A KurzweilAI snapshot on Feb. 20 at 5:19 AM EST showed an astonishing 42% of Wikipedia edits being made by bots. (The application lists the 12 bots.)


[Figure: Anonymous vs. logged-in humans (credit: Thomas Steiner)]
The percentages also vary by language. Only 5% of English edits were by bots; but for Serbian pages, in which few Wikipedians apparently participate, 96% of edits were by bots.

The application also tracks what percentage of edits are by anonymous users. Globally, it was 25 percent in our snapshot and a surprising 34 percent for English — raising interesting questions about corporate and other interests covertly manipulating Wikipedia information.”
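For readers who want to reproduce a rough version of these percentages today, the sketch below tallies bot versus human edits from Wikimedia's public recent-changes stream. Note the assumptions: Steiner's original app predates the current EventStreams service, so this endpoint and its "bot", "user" and "wiki" fields are a present-day stand-in rather than his implementation, and anonymous editors are approximated heuristically as usernames that parse as IP addresses.

```python
# Rough, present-day approximation of bot-vs-human edit percentages.
# Assumptions: the Wikimedia EventStreams "recentchange" feed (which postdates
# the 2014 app described above) and its "bot" and "user" fields; anonymous
# users are detected heuristically as IP-address usernames.
import json
import ipaddress
from collections import Counter

import requests

STREAM = "https://stream.wikimedia.org/v2/stream/recentchange"
counts = Counter()

with requests.get(STREAM, stream=True, timeout=60) as resp:
    for raw in resp.iter_lines():
        if not raw.startswith(b"data: "):
            continue  # skip SSE comments, ids, and blank keep-alive lines
        event = json.loads(raw[len(b"data: "):])
        counts["bot" if event.get("bot") else "human"] += 1
        try:
            ipaddress.ip_address(event.get("user", ""))
            counts["anonymous"] += 1
        except ValueError:
            pass
        if counts["bot"] + counts["human"] >= 1000:  # stop after a small sample
            break

total = counts["bot"] + counts["human"]
print(f"bots: {100 * counts['bot'] / total:.1f}%  "
      f"anonymous: {100 * counts['anonymous'] / total:.1f}%")
```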

11 ways to rethink open data and make it relevant to the public


Miguel Paz at IJNET: “It’s time to transform open data from a trendy concept among policy wonks and news nerds into something tangible to everyday life for citizens, businesses and grassroots organizations. Here are some ideas to help us get there:
1. Improve access to data
Craig Hammer from the World Bank has tackled this issue, stating that “Open Data could be the game changer when it comes to eradicating global poverty”, but only if governments make available online data that become actionable intelligence: a launch pad for investigation, analysis, triangulation, and improved decision making at all levels.
2. Create open data for the end user
As Hammer wrote in a blog post for the Harvard Business Review, while the “opening” has generated excitement from development experts, donors, several government champions, and the increasingly mighty geek community, the hard reality is that much of the public has been left behind, or tacked on as an afterthought. Let’s get out of the building and start working for the end user.
3. Show, don’t tell
Regular folks don’t know what “open data” means. Actually, they probably don’t care what we call it and don’t know if they need it. Apple’s Steve Jobs said that a lot of times, people don’t know what they want until you show it to them. We need to stop telling them they need it and start showing them why they need it, through actionable user experience.
4. Make it relevant to people’s daily lives, not just to NGOs and policymakers’ priorities
A study of the use of open data and transparency in Chile showed that the top 10 uses were for things that affect people’s lives directly, for better or for worse: data on government subsidies and support, legal certificates, information services, paperwork. If the data doesn’t speak to priorities at the household or individual level, we’ve lost the value of both the “opening” of data, and the data itself.
5. Invite the public into the sandbox
We need to give people “better tools to not only consume, but to create and manipulate data,” says my colleague Alvaro Graves, Poderopedia’s semantic web developer and researcher. This is what Code for America does, and it’s also what happened with the advent of Web 2.0, when the availability of better tools, such as blogging platforms, helped people create and share content.
6. Realize that open data are like QR codes
Everyone talks about open data the way they used to talk about QR codes – as something groundbreaking. But as with QR codes, open data only succeeds when it comes with the proper context to satisfy the needs of citizens. Context is the most important factor in driving the use and success of open data as a tool for global change.
7. Make open data sexy and pop, like Jess3.com
Geeks became popular because they made useful and cool things that could be embraced by end users. Open data geeks need to stick with that program.
8. Help journalists embrace open data
Jorge Lanata, a famous Argentinian journalist who is now being targeted by the Cristina Fernández administration for uncovering government corruption scandals, once said that 50 percent of the success of a story or newspaper is assured if journalists like it.
That’s true of open data as well. If journalists understand its value for the public interest and learn how to use it, so will the public. And if they do, the winds of change will blow. Governments and the private sector will be forced to provide better, more up-to-date and standardized data. Open data will be understood not as a concept but as a public information source as relevant as any other. We need to teach Latin American journalists to be part of this.
9. News nerds can help you put your open data to good use
In order to boost the use of open data by journalists, we need news nerds: teams of lightweight but heavily tech-armored journalist-programmers who can teach colleagues how open data brings us high-impact storytelling that can change public policies and hold authorities accountable.
News nerds can also help us with “institutionalizing data literacy across societies” as Hammer puts it. ICFJ Knight International Journalism Fellow and digital strategist Justin Arenstein calls these folks “mass mobilizers” of information. Alex Howard “points to these groups because they can help demystify data, to make it understandable by populations and not just statisticians.”
I call them News Ninja Nerds, accelerator taskforces that can foster innovations in news, data and transparency in a speedy way, saving governments and organizations time and a lot of money. Projects like ProPublica’s Dollars For Docs are great examples of what can be achieved if you mix FOIA, open data and the will to provide news in the public interest.
10. Rename open data
Part of the reason people don’t embrace concepts such as open data is that they are part of a lingo that has nothing to do with them. No empathy involved. Let’s start talking about people’s right to know and use the data generated by governments. As Tim O’Reilly puts it: “Government as a Platform for Greatness,” with examples we can relate to, instead of dead PDFs and dirty databases.
11. Don’t expect open data to substitute for thinking or reporting
Investigative reporting can benefit from it. But “there is no substitute for the kind of street-level digging, personal interviews, and detective work” that great journalism projects entail, says David Kaplan in a great post entitled Why Open Data is Not Enough.”

Innovating for the Global South: New book offers practical insights


Press Release: “Despite the vast wealth generated in the last half century, in today’s world inequality is worsening and poverty is becoming increasingly chronic. Hundreds of millions of people continue to live on less than $2 per day and lack basic human necessities such as nutritious food, shelter, clean water, primary health care, and education.
Innovating for the Global South: Towards an Inclusive Innovation Agenda, the latest book from Rotman-UTP Publishing and the first volume in the Munk Series on Global Affairs, offers fresh solutions for reducing poverty in the developing world. Highlighting the multidisciplinary expertise of the University of Toronto’s Global Innovation Group, leading experts from the fields of engineering, public health, medicine, management, and public policy examine the causes and consequences of endemic poverty and the challenges of mitigating its effects from the perspective of the world’s poorest of the poor.
Can we imagine ways to generate solar energy to run essential medical equipment in the countryside? Can we adapt information and communication technologies to provide up-to-the-minute agricultural market prices for remote farming villages? How do we create more inclusive innovation processes to hear the voices of those living in urban slums? Is it possible to reinvent a low-cost toilet that operates beyond the water and electricity grids?
Motivated by the imperatives of developing, delivering, and harnessing innovation in the developing world, Innovating for the Global South is essential reading for managers, practitioners, and scholars of development, business, and policy.
“As we see it, Innovating for the Global South is fundamentally about innovating scalable solutions that mitigate the effects of poverty and underdevelopment in the Global South. It is not about inventing some new gizmo for some untapped market in the developing world,” say Profs. Dilip Soman and Joseph Wong of the UofT, who are two of the editors of the volume.
The book is edited by, and features contributions from, three leading UofT thinkers who are tackling innovation in the Global South from three different academic perspectives.

  • Dilip Soman is Corus Chair in Communication Strategy and a professor of Marketing at the Rotman School of Management.
  • Janice Gross Stein is the Belzberg Professor of Conflict Management in the Department of Political Science and Director of the Munk School of Global Affairs.
  • Joseph Wong is Ralph and Roz Halbert Professor of Innovation at the Munk School of Global Affairs and Canada Research Chair in Democratization, Health, and Development in the Department of Political Science.

The chapters in the book address the process of innovation from a number of vantage points.
Introduction: Rethinking Innovation – Joseph Wong and Dilip Soman
Chapter 1: Poverty, Invisibility, and Innovation – Joseph Wong
Chapter 2: Behaviourally Informed Innovation – Dilip Soman
Chapter 3: Appropriate Technologies for the Global South – Yu-Ling Cheng (University of Toronto, Chemical Engineering and Applied Chemistry) and Beverly Bradley (University of Toronto, Centre for Global Engineering)
Chapter 4: Globalization of Biopharmaceutical Innovation: Implications for Poor-Market Diseases – Rahim Rezaie (University of Toronto, Munk School of Global Affairs, Research Fellow)
Chapter 5: Embedded Innovation in Health – Anita M. McGahan (University of Toronto, Rotman School of Management, Associate Dean of Research), Rahim Rezaie and Donald C. Cole (University of Toronto, Dalla Lana School of Public Health)
Chapter 6: Scaling Up: The Case of Nutritional Interventions in the Global South – Ashley Aimone Phillips (Registered Dietitian), Nandita Perumal (University of Toronto, Doctoral Fellow, Epidemiology), Carmen Ho (University of Toronto, Doctoral Fellow, Political Science), and Stanley Zlotkin (University of Toronto and the Hospital for Sick Children, Paediatrics, Public Health Sciences and Nutritional Sciences)
Chapter 7: New Models for Financing Innovative Technologies and Entrepreneurial Organizations in the Global South – Murray R. Metcalfe (University of Toronto, Centre for Global Engineering, Globalization)
Chapter 8: Innovation and Foreign Policy – Janice Gross Stein
Conclusion: Inclusive Innovation – Will Mitchell (University of Toronto, Rotman School of Management, Strategic Management), Anita M. McGahan”