“This Prezi by César Nicandro Cruz-Rubio is designed for educational purposes. It presents the open government concept and uses some YouTube source videos to give some examples.”
This algorithm can predict a revolution
Russell Brandom at the Verge: “For students of international conflict, 2013 provided plenty to examine. There was civil war in Syria, ethnic violence in China, and riots to the point of revolution in Ukraine. For those working at Duke University’s Ward Lab, all specialists in predicting conflict, the year looks like a betting sheet, full of predictions that worked and others that didn’t pan out.
When the lab put out their semiannual predictions in July, they gave Paraguay a 97 percent chance of insurgency, largely based on reports of Marxist rebels. The next month, guerrilla campaigns intensified, proving out the prediction. In the case of China’s armed clashes between Uighurs and Hans, the models showed a 33 percent chance of violence, even as the cause of each individual flare-up was concealed by the country’s state-run media. On the other hand, the unrest in Ukraine didn’t start raising alarms until the action had already started, so the country was left off the report entirely.
According to Ward Lab’s staff, the purpose of the project isn’t to make predictions but to test theories. If a certain theory of geopolitics can predict an uprising in Ukraine, then maybe that theory is onto something. And even if these specialists could predict every conflict, it would only be half the battle. “It’s a success only if it doesn’t come at the cost of predicting a lot of incidents that don’t occur,” says Michael D. Ward, the lab’s founder and chief investigator, who also runs the blog Predictive Heuristics. “But it suggests that we might be on the right track.”
Forecasting the future of a country wasn’t always done this way. Traditionally, predicting revolution or war has been a secretive project, for the simple reason that any reliable prediction would be too valuable to share. But as predictions lean more on data, they’ve actually become harder to keep secret, ushering in a new generation of open-source prediction models that butt against the siloed status quo.
The story of automated conflict prediction starts at the Defense Advanced Research Projects Agency, known as the Pentagon’s R&D wing. In the 1990s, DARPA wanted to try out software-based approaches to anticipating which governments might collapse in the near future. The CIA was already on the case, with section chiefs from every region filing regular forecasts, but DARPA wanted to see if a computerized approach could do better. They looked at a simple question: will this country’s government face an acute existential threat in the next six months? When CIA analysts were put to the test, they averaged roughly 60 percent accuracy, so DARPA’s new system set the bar at 80 percent, looking at 29 different countries in Asia with populations over half a million. It was dubbed ICEWS, the Integrated Crisis Early Warning System, and it succeeded almost immediately, clearing 80 percent with algorithms built on simple regression analysis….
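To make “simple regression analysis” concrete: at its core, a forecaster like this fits a regression over country-level indicators and outputs a probability that a government faces an acute threat within the next six months. The sketch below is purely illustrative, with synthetic data and invented features; it is not the ICEWS or Ward Lab model.

```python
# Illustrative sketch only: a logistic-regression conflict forecaster over
# synthetic country-month data. Features and labels are invented placeholders,
# not the actual ICEWS or Ward Lab inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features per country-month: protest events, GDP growth,
# years since the last armed conflict.
X = rng.normal(size=(500, 3))
# Synthetic label: 1 if an "acute existential threat" occurred within six months.
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)

# Risk estimate for a new country-month observation.
new_obs = np.array([[2.1, -0.3, 0.4]])
print(f"Predicted six-month risk: {model.predict_proba(new_obs)[0, 1]:.0%}")
```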
On the data side, researchers at Georgetown University are cataloging every significant political event of the past century into a single database called GDELT, and leaving the whole thing open for public research. Already, projects have used it to map the Syrian civil war and diplomatic gestures between Japan and South Korea, looking at dynamics that had never been mapped before. And then, of course, there’s Ward Lab, releasing a new sheet of predictions every six months and tweaking its algorithms with every development. It’s a mirror of the same open-vs.-closed debate in software — only now, instead of fighting over source code and security audits, it’s a fight over who can see the future the best.”
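As a sense of what “open for public research” means in practice, GDELT’s raw event exports can be loaded and aggregated with ordinary data tools. The sketch below assumes a pre-downloaded extract labelled with column names from the GDELT codebook (SQLDATE, Actor1CountryCode, EventRootCode); the file name and exact schema are assumptions to verify against the documentation.

```python
# Sketch: counting conflict-related GDELT events per country and month.
# The file name is hypothetical; column names follow the GDELT codebook but
# should be checked against the current documentation.
import pandas as pd

events = pd.read_csv("gdelt_events_extract.csv")  # hypothetical pre-labelled extract

# CAMEO root codes 18-20 cover assaults, fighting, and mass violence.
conflict = events[events["EventRootCode"].astype(int).isin([18, 19, 20])].copy()
conflict["month"] = pd.to_datetime(conflict["SQLDATE"].astype(str),
                                   format="%Y%m%d").dt.to_period("M")

monthly = (conflict.groupby(["Actor1CountryCode", "month"])
                   .size()
                   .rename("conflict_events"))
print(monthly.sort_values(ascending=False).head())
```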
Big Data, Big New Businesses
Nigel Shadbolt and Michael Chui: “Many people have long believed that if government and the private sector agreed to share their data more freely, and allow it to be processed using the right analytics, previously unimaginable solutions to countless social, economic, and commercial problems would emerge. They may have no idea how right they are.
Even the most vocal proponents of open data appear to have underestimated how many profitable ideas and businesses stand to be created. More than 40 governments worldwide have committed to opening up their electronic data – including weather records, crime statistics, transport information, and much more – to businesses, consumers, and the general public. The McKinsey Global Institute estimates that the annual value of open data in education, transportation, consumer products, electricity, oil and gas, health care, and consumer finance could reach $3 trillion.
These benefits come in the form of new and better goods and services, as well as efficiency savings for businesses, consumers, and citizens. The range is vast. For example, drawing on data from various government agencies, the Climate Corporation (recently bought for $1 billion) has taken 30 years of weather data, 60 years of data on crop yields, and 14 terabytes of information on soil types to create customized insurance products.
Similarly, real-time traffic and transit information can be accessed on smartphone apps to inform users when the next bus is coming or how to avoid traffic congestion. And, by analyzing online comments about their products, manufacturers can identify which features consumers are most willing to pay for, and develop their business and investment strategies accordingly.
Opportunities are everywhere. A raft of open-data start-ups are now being incubated at the London-based Open Data Institute (ODI), which focuses on improving our understanding of corporate ownership, health-care delivery, energy, finance, transport, and many other areas of public interest.
Consumers are the main beneficiaries, especially in the household-goods market. Consumers making better-informed buying decisions across sectors could capture an estimated $1.1 trillion in value annually. Third-party data aggregators are already allowing customers to compare prices across online and brick-and-mortar shops. Many also permit customers to compare quality ratings, safety data (drawn, for example, from official injury reports), information about the provenance of food, and producers’ environmental and labor practices.
Consider the book industry. Bookstores once regarded their inventory as a trade secret. Customers, competitors, and even suppliers seldom knew what stock bookstores held. Nowadays, by contrast, bookstores not only report what stock they carry but also when customers’ orders will arrive. If they did not, they would be excluded from the product-aggregation sites that have come to determine so many buying decisions.
The health-care sector is a prime target for achieving new efficiencies. By sharing the treatment data of a large patient population, for example, care providers can better identify practices that could save $180 billion annually.
The Open Data Institute-backed start-up Mastodon C uses open data on doctors’ prescriptions to differentiate among expensive patent medicines and cheaper “off-patent” varieties; when applied to just one class of drug, that could save around $400 million in one year for the British National Health Service. Meanwhile, open data on acquired infections in British hospitals has led to the publication of hospital-performance tables, a major factor in the 85% drop in reported infections.
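The mechanics behind a savings estimate like that are straightforward once prescribing data is open: compare what was actually spent on branded items against what the same volume would have cost as generics. Below is a rough sketch of the idea with entirely hypothetical figures and column names; it is not Mastodon C’s code or the real NHS dataset.

```python
# Toy illustration of a branded-vs-generic savings estimate for one drug class.
# All numbers and column names are invented for the example.
import pandas as pd

rx = pd.DataFrame({
    "practice": ["A", "A", "B", "B"],
    "drug":     ["branded_statin", "generic_statin", "branded_statin", "generic_statin"],
    "items":    [120, 900, 400, 300],
    "cost_gbp": [4800.0, 1350.0, 16000.0, 450.0],
})

totals = rx.groupby("drug")[["cost_gbp", "items"]].sum()
generic_cost_per_item = (totals.loc["generic_statin", "cost_gbp"]
                         / totals.loc["generic_statin", "items"])

# Saving if every branded item had been dispensed as the generic equivalent.
branded = rx[rx["drug"] == "branded_statin"]
saving = (branded["cost_gbp"] - branded["items"] * generic_cost_per_item).sum()
print(f"Estimated saving: £{saving:,.0f}")
```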
There are also opportunities to prevent lifestyle-related diseases and improve treatment by enabling patients to compare their own data with aggregated data on similar patients. This has been shown to motivate patients to improve their diet, exercise more often, and take their medicines regularly. Similarly, letting people compare their energy use with that of their peers could prompt them to save hundreds of billions of dollars in electricity costs each year, to say nothing of reducing carbon emissions.
Such benchmarking is even more valuable for businesses seeking to improve their operational efficiency. The oil and gas industry, for example, could save $450 billion annually by sharing anonymized and aggregated data on the management of upstream and downstream facilities.
Finally, the move toward open data serves a variety of socially desirable ends, ranging from the reuse of publicly funded research to support work on poverty, inclusion, or discrimination, to the disclosure by corporations such as Nike of their supply-chain data and environmental impact.
There are, of course, challenges arising from the proliferation and systematic use of open data. Companies fear for their intellectual property; ordinary citizens worry about how their private information might be used and abused. Last year, Telefónica, the world’s fifth-largest mobile-network provider, tried to allay such fears by launching a digital confidence program to reassure customers that innovations in transparency would be implemented responsibly and without compromising users’ personal information.
The sensitive handling of these issues will be essential if we are to reap the potential $3 trillion in value that usage of open data could deliver each year. Consumers, policymakers, and companies must work together, not just to agree on common standards of analysis, but also to set the ground rules for the protection of privacy and property.”
Visualising Information for Advocacy
New book: “Visualising Information for Advocacy is a book about how advocates and activists use visual elements in their campaigns. This 170-page guide features over 60 case studies from around the world to provide an introduction to understanding visual information and a framework for using images for influence.
At Tactical Tech we have been analysing how different kinds of visual techniques serve the work of advocacy, and have been testing out our ideas. We have developed three ways to classify how the visual works in advocacy campaigns:
- Get the idea is about making simple, eye-catching products that convey one concise point, provoking and inviting audiences to find out more about the issue.
- Get the picture is about creating a visual summary of an argument by crafting a narrative with visuals and data.
- Get the detail is about presenting data through interactive digital formats in a way that allows the audience to dig deeper and explore the issue for themselves.
Flick through Visualising Information for Advocacy to get inspiration for your project, try out some of the visual techniques showcased, or find advice on how we produce visuals for advocates.”
‘The Power of Knowledge,’ by Jeremy Black
The technological side of that syllogism, Mr. Black notes, didn’t really come to the fore until the 17th century. “Knowledge is Power,” wrote Francis Bacon, enunciating a principle that seems obvious only because we have been the beneficiaries of the processes he helped to define. Writing a few years later, René Descartes promised that his method of inquiry would make man “the master and possessor of nature.” Descartes looked forward to all manner of material benefits, not least in the realm of medicine. He was prescient.
Mr. Black touches on Bacon and Descartes in his chapter on the Scientific Revolution, but his focus is much more encompassing. He begins his story with the formation of the Mongol Empire in the 13th century and describes how the Great Khan’s superior deployment of information, prominently including his command of trade along the Silk Road, stood behind the creation of the largest contiguous empire in history.

The guiding theme of this book is the complex linkage (what Mr. Black calls “synergies”) between information and power—political power, above all, but also military, economic and technological power. The increasingly sophisticated acquisition and manipulation of information, Mr. Black argues, is “a defining characteristic of modernity,” and it fueled the rise and shaped the distinctiveness of the West from the 18th century through the early 21st.
Part of what makes modernity modern—and a large part of what has made the West its crucible—is the interpenetration of information and technique under the guidance of an increasingly secularized idea of human flourishing. Other cultures made local contributions to this drama, but Mr. Black is right: The breathtaking spectacle of modern technological achievement has been overwhelmingly a Western achievement. This fact has been a source of great sadness for politically correct exponents of multiculturalism. Mr. Black knows this, but he early on hedges his account to avoid “triumphalism.”
As Mr. Black concedes, many of the master terms of his narrative are “porous.” What, after all, is information? It is data, yes, but also statistics, rumor, propaganda and practical know-how. All enter into the story he tells, but their aggregation makes for an argument that is more kaleidoscopic than discursive.
The very breadth of Mr. Black’s subject makes reading “The Power of Knowledge” partly thrilling, partly vertiginous. It is a long journey that Mr. Black engages here, but all of the stops are express stops. There is no lingering. In a section on advances in maritime charting, for example, he mentions that an equivalent was the development during the Renaissance of one-point perspective using mathematical rules. True enough, but that rich topic is confined to a few sentences. Still, the book bristles with interesting tidbits. I took some smuggish satisfaction in knowing that the term “bureaucracy” was coined by a Frenchman (de Gournay) but that we owe “scientist” to the British geologist William Whewell.
Although deeply grounded in history from the Middle Ages on down, this book also conjures with contemporary issues. Some are technical, like the vistas of information unraveled in the human genome project. Some are political. Until fairly recently, Mr. Black notes, central governments lacked the mechanisms to intervene consistently in everyday life. That, as anyone who can pronounce the acronym “NSA” knows, has changed dramatically. Mr. Black devotes an entire chapter to what he calls “the scrutinized society.” The effort to control public opinion and the flow of information has been most flagrant in totalitarian regimes, but he shows that in democracies, too, information is “filtered and deployed as part of the battle for public opinion.”…
Disinformation Visualization: How to lie with datavis
It all sounds very sinister, and indeed sometimes it is. It’s hard to see through a lie unless you stare it right in the face, and what better way to do that than to get our minds dirty and look at some examples of creative and mischievous visual manipulation.
Over the past year I’ve had a few opportunities to run Disinformation Visualization workshops, encouraging activists, designers, statisticians, analysts, researchers, technologists and artists to visualize lies. During these sessions I have used the DIKW pyramid (Data > Information > Knowledge > Wisdom), a framework for thinking about how data gains context and meaning and becomes information. This information needs to be consumed and understood to become knowledge. And finally when knowledge influences our insights and our decision making about the future it becomes wisdom. Data visualization is one of the ways to push data up the pyramid towards wisdom in order to affect our actions and decisions. It would be wise then to look at visualizations suspiciously.
Centuries before big data, computer graphics and social media collided and gave us the datavis explosion, visualization was mostly a scientific tool for inquiry and documentation. This history gave the artform its authority as an integral part of the scientific process. Being a product of human brains and hands, a certain degree of bias was always there, no matter how scientific the process was. The effects of these early off-white lies are still felt today, as even our most celebrated interactive maps still echo the biases of the Mercator map projection, grounding Europe and North America at the top of the world and overemphasizing their size and perceived importance relative to the Global South. Our contemporary practice of programmatic, data-driven visualization hides both the human eyes and hands that produce it behind data sets, algorithms and computer graphics, but the same biases are still there, only they’re harder to decipher…”
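That Mercator bias is easy to put numbers on: the projection stretches the map by a factor of 1/cos(latitude), so apparent area grows roughly with the square of that factor. A quick check of the exaggeration at a few latitudes:

```python
# Mercator stretches distances by sec(latitude); apparent area grows as sec(latitude)^2.
import math

for place, lat in [("Equator", 0.0), ("Cairo", 30.0),
                   ("London", 51.5), ("Central Greenland", 72.0)]:
    stretch = 1 / math.cos(math.radians(lat))
    print(f"{place:>17}: linear x{stretch:.2f}, apparent area x{stretch ** 2:.2f}")
```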
The Problem With Serious Games–Solved

Here’s an imaginary scenario: you’re a law enforcement officer confronted with John, a 21-year-old male suspect who is accused of breaking into a private house on Sunday evening and stealing a laptop, jewellery and some cash. Your job is to find out whether John has an alibi and if so whether it is coherent and believable.
That’s exactly the kind of scenario that police officers the world over face on a regular basis. But how do you train for such a situation? How do you learn the skills necessary to gather the right kind of information?
An increasingly common way of doing this is with serious games, those designed primarily for purposes other than entertainment. In the last 10 years or so, medical, military and commercial organisations all over the world have begun to experiment with game-based scenarios that are designed to teach people how to perform their jobs and tasks in realistic situations.
But there is a problem with serious games that require realistic interaction with another person. It’s relatively straightforward to design one or two scenarios that are coherent, lifelike and believable, but it’s much harder to generate them continually, on an ongoing basis.
Imagine, in the example above, that John is a computer-generated character. What kind of activities could he describe that would serve as a believable, coherent alibi for Sunday evening? And how could he do it a thousand times, each time describing a different realistic alibi? Therein lies the problem.
Today, Sigal Sina at Bar-Ilan University in Israel, and a couple of pals, say they’ve solved this problem. These guys have come up with a novel way of generating ordinary, realistic scenarios that can be cut and pasted into a serious game to serve exactly this purpose. The secret sauce in their new approach is to crowdsource the new scenarios from real people using Amazon’s Mechanical Turk service.
The approach is straightforward. Sina and co simply ask Turkers to answer a set of questions asking what they did during each one-hour period throughout various days, offering bonuses to those who provide the most varied detail.
They then analyse the answers, categorising activities by factors such as the times they are performed, the age and sex of the person doing it, the number of people involved and so on.
This then allows a computer game to cut and paste activities into the action at appropriate times. So, for example, the computer can select an appropriate alibi for John on a Sunday evening by choosing an activity described by a male Turker for the same time, while avoiding activities that a woman might describe for a Friday morning, which might otherwise seem unbelievable. The computer also changes certain details in the narrative, such as names, locations and so on, to make the narrative coherent with John’s profile….
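The “cut and paste” step amounts to filtering the crowdsourced activity bank by demographic profile and time slot, then swapping in character-specific details. A minimal sketch of that selection logic, with made-up activity records rather than the authors’ Mechanical Turk data:

```python
# Sketch of selecting a crowdsourced activity as an alibi, matched on sex,
# rough age, and time slot. Records and fields are illustrative placeholders,
# not the authors' Mechanical Turk dataset.
import random

activities = [
    {"sex": "male",   "age": 23, "day": "Sunday", "hour": 20,
     "text": "watched the football at a friend's flat"},
    {"sex": "male",   "age": 25, "day": "Sunday", "hour": 20,
     "text": "worked a late shift at the petrol station"},
    {"sex": "female", "age": 40, "day": "Friday", "hour": 9,
     "text": "took the kids to school, then went to a yoga class"},
]

def pick_alibi(profile, day, hour, pool):
    """Return an activity matching the character's profile and time slot, if any."""
    candidates = [a for a in pool
                  if a["sex"] == profile["sex"]
                  and abs(a["age"] - profile["age"]) <= 5
                  and a["day"] == day and a["hour"] == hour]
    return random.choice(candidates)["text"] if candidates else None

john = {"name": "John", "sex": "male", "age": 21}
print(f"John's alibi for Sunday evening: I {pick_alibi(john, 'Sunday', 20, activities)}.")
```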
That solves a significant problem with serious games. Until now, developers have had to spend an awful lot of time producing realistic content by hand. Automated approaches, known as procedural content generation, have always been straightforward for things like textures, models and terrain in game settings. Now, thanks to this new crowdsourcing technique, generating content can be just as easy for human interactions in serious games too.
Ref: arxiv.org/abs/1402.5034 : Using the Crowd to Generate Content for Scenario-Based Serious-Games”
The Power to Give
Press Release: “HTC, a global leader in mobile innovation and design, today unveiled HTC Power To Give™, an initiative that aims to create a supercomputer by harnessing the collective processing power of Android smartphones.
Currently in beta, HTC Power To Give aims to galvanize smartphone owners to unlock their unused processing power in order to help answer some of society’s biggest questions. Currently, the fight against cancer, AIDS and Alzheimer’s; the drive to ensure every child has clean water to drink and even the search for extra-terrestrial life are all being tackled by volunteer computing platforms.
Empowering people to use their Android smartphones to offer tangible support for vital fields of research, including medicine, science and ecology, HTC Power To Give has been developed in partnership with Dr. David Anderson of the University of California, Berkeley. The project will support the world’s largest volunteer computing initiative and tap into the powerful processing capabilities of a global network of smartphones.
Strength in numbers
One million HTC One smartphones, working towards a project via HTC Power To Give, could provide similar processing power to that of one of the world’s 30 fastest supercomputers (one PetaFLOP). This could drastically shorten the research cycles for organizations that would otherwise have to spend years analyzing the same volume of data, potentially bringing forward important discoveries in vital subjects by weeks, months, years or even decades. For example, one of the programs available at launch is IBM’s World Community Grid, which gives anyone an opportunity to advance science by donating their computer, smartphone or tablet’s unused computing power to humanitarian research. To date, the World Community Grid volunteers have contributed almost 900,000 years’ worth of processing time to cutting-edge research.
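The headline figure is simple arithmetic: one petaFLOP shared across one million handsets implies roughly one gigaFLOP of usable throughput per phone, a plausible order of magnitude for a 2013-era smartphone (the per-phone figure is inferred here, not stated by HTC).

```python
# Back-of-the-envelope check of the "one million phones = one PetaFLOP" claim.
phones = 1_000_000
fleet_flops = 1e15  # one petaFLOP, as claimed for the whole fleet
print(f"Implied throughput per phone: {fleet_flops / phones / 1e9:.1f} GFLOPS")  # 1.0 GFLOPS
```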
Limitless future potential
Cher Wang, Chairwoman, HTC commented, “We’ve often used innovation to bring about change in the mobile industry, but this programme takes our vision one step further. With HTC Power To Give, we want to make it possible for anyone to dedicate their unused smartphone processing power to contribute to projects that have the potential to change the world.”
“HTC Power To Give will support the world’s largest volunteer computing initiative, and the impact that this project will have on the world over the years to come is huge. This changes everything,” noted Dr. David Anderson, Inventor of the Shared Computing Initiative BOINC, University of California, Berkeley.
Cher Wang added, “We’ve been discussing the impact that just one million HTC Power To Give-enabled smartphones could make, however analysts estimate that over 780 million Android phones were shipped in 2013 alone. Imagine the difference we could make to our children’s future if just a fraction of these Android users were able to divert some of their unused processing power to help find answers to the questions that concern us all.”
Opt-in with ease
After downloading the HTC Power To Give app from the Google Play™ store, smartphone owners can select the research programme to which they will divert a proportion of their phone’s processing power. HTC Power To Give will then run while the phone is charging and connected to a WiFi network, enabling people to change the world whilst sitting at their desk or relaxing at home.
The beta version of HTC Power To Give will be available to download from the Google Play store and will initially be compatible with the HTC One family, HTC Butterfly and HTC Butterfly s. HTC plans to make the app more widely available to other Android smartphone owners in the coming six months as the beta trial progresses.”
NOAA announces RFI to unleash power of 'big data'
Mapping Twitter Topic Networks: From Polarized Crowds to Community Clusters
Pew Internet: “Conversations on Twitter create networks with identifiable contours as people reply to and mention one another in their tweets. These conversational structures differ, depending on the subject and the people driving the conversation. Six structures are regularly observed: divided, unified, fragmented, clustered, and inward and outward hub and spoke structures. These are created as individuals choose whom to reply to or mention in their Twitter messages and the structures tell a story about the nature of the conversation.
If a topic is political, it is common to see two separate, polarized crowds take shape. They form two distinct discussion groups that mostly do not interact with each other. Frequently these are recognizably liberal or conservative groups. The participants within each separate group commonly mention very different collections of website URLs and use distinct hashtags and words. The split is clearly evident in many highly controversial discussions: people in clusters that we identified as liberal used URLs for mainstream news websites, while groups we identified as conservative used links to conservative news websites and commentary sources. At the center of each group are discussion leaders, the prominent people who are widely replied to or mentioned in the discussion. In polarized discussions, each group links to a different set of influential people or organizations that can be found at the center of each conversation cluster.
While these polarized crowds are common in political conversations on Twitter, it is important to remember that the people who take the time to post and talk about political issues on Twitter are a special group. Unlike many other Twitter members, they pay attention to issues, politicians, and political news, so their conversations are not representative of the views of the full Twitterverse. Moreover, Twitter users are only 18% of internet users and 14% of the overall adult population. Their demographic profile is not reflective of the full population. Additionally, other work by the Pew Research Center has shown that tweeters’ reactions to events are often at odds with overall public opinion— sometimes being more liberal, but not always. Finally, forthcoming survey findings from Pew Research will explore the relatively modest size of the social networking population who exchange political content in their network.
Still, the structure of these Twitter conversations says something meaningful about political discourse these days and the tendency of politically active citizens to sort themselves into distinct partisan camps. Social networking maps of these conversations provide new insights because they combine analysis of the opinions people express on Twitter, the information sources they cite in their tweets, analysis of who is in the networks of the tweeters, and how big those networks are. And to the extent that these online conversations are followed by a broader audience, their impact may reach well beyond the participants themselves.
Our approach combines analysis of the size and structure of the network and its sub-groups with analysis of the words, hashtags and URLs people use. Each person who contributes to a Twitter conversation is located in a specific position in the web of relationships among all participants in the conversation. Some people occupy rare positions in the network that suggest that they have special importance and power in the conversation.
Social network maps of Twitter crowds and other collections of social media can be created with innovative data analysis tools that provide new insight into the landscape of social media. These maps highlight the people and topics that drive conversations and group behavior – insights that add to what can be learned from surveys or focus groups or even sentiment analysis of tweets. Maps of previously hidden landscapes of social media highlight the key people, groups, and topics being discussed.
Conversational archetypes on Twitter
The Polarized Crowd network structure is only one of several different ways that crowds and conversations can take shape on Twitter. There are at least six distinctive structures of social media crowds which form depending on the subject being discussed, the information sources being cited, the social networks of the people talking about the subject, and the leaders of the conversation. Each has a different social structure and shape: divided, unified, fragmented, clustered, and inward and outward hub and spokes.
After an analysis of many thousands of Twitter maps, we found six different kinds of network crowds.

Polarized Crowd: Polarized discussions feature two big and dense groups that have little connection between them. The topics being discussed are often highly divisive and heated political subjects. In fact, there is usually little conversation between these groups despite the fact that they are focused on the same topic. Polarized Crowds on Twitter are not arguing. They are ignoring one another while pointing to different web resources and using different hashtags.
Why this matters: It shows that partisan Twitter users rely on different information sources. While liberals link to many mainstream news sources, conservatives link to a different set of websites.

Tight Crowd: These discussions are characterized by highly interconnected people with few isolated participants. Many conferences, professional topics, hobby groups, and other subjects that attract communities take this Tight Crowd form.
Why this matters: These structures show how networked learning communities function and how sharing and mutual support can be facilitated by social media.

Brand Clusters: When well-known products or services or popular subjects like celebrities are discussed on Twitter, there is often commentary from many disconnected participants, or “isolates,” who take part in the conversation without linking to one another. Well-known brands and other popular subjects can attract large fragmented Twitter populations who tweet about the subject but not to each other. The larger the population talking about a brand, the less likely it is that participants are connected to one another. Brand-mentioning participants focus on a topic, but tend not to connect to each other.
Why this matters: There are still institutions and topics that command mass interest. Often times, the Twitter chatter about these institutions and their messages is not among people connecting with each other. Rather, they are relaying or passing along the message of the institution or person and there is no extra exchange of ideas.

Community Clusters: Some popular topics may develop multiple smaller groups, which often form around a few hubs, each with its own audience, influencers, and sources of information. These Community Cluster conversations look like bazaars with multiple centers of activity. Global news stories often attract coverage from many news outlets, each with its own following. That creates a collection of medium-sized groups, along with a fair number of isolates.
Why this matters: Some information sources and subjects ignite multiple conversations, each cultivating its own audience and community. These can illustrate diverse angles on a subject based on its relevance to different audiences, revealing a diversity of opinion and perspective on a social media topic.

Broadcast Network: Twitter commentary around breaking news stories and the output of well-known media outlets and pundits has a distinctive hub and spoke structure in which many people repeat what prominent news and media organizations tweet. The members of the Broadcast Network audience are often connected only to the hub news source, without connecting to one another. In some cases there are smaller subgroups of densely connected people— think of them as subject groupies—who do discuss the news with one another.
Why this matters: There are still powerful agenda setters and conversation starters in the new social media world. Enterprises and personalities with loyal followings can still have a large impact on the conversation.

Support Network: Customer complaints for a major business are often handled by a Twitter service account that attempts to resolve and manage customer issues around their products and services. This produces a hub and spoke structure that is different from the Broadcast Network pattern. In the Support Network structure, the hub account replies to many otherwise disconnected users, creating outward spokes. In contrast, in the Broadcast pattern, the hub gets replied to or retweeted by many disconnected people, creating inward spokes.
Why this matters: As government, businesses, and groups increasingly provide services and support via social media, support network structures become an important benchmark for evaluating the performance of these institutions. Customer support streams of advice and feedback can be measured in terms of efficiency and reach using social media network maps.
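The archetypes above differ in ways that standard network metrics capture directly; a Polarized Crowd, for instance, shows up as two dense communities with almost no edges between them, while Broadcast and Support Networks show up as hub-and-spoke patterns. A toy sketch of the first measurement (an illustration only, not Pew’s actual analysis pipeline):

```python
# Toy illustration: find the largest community in a reply/mention graph and
# count the edges that cross its boundary. Few crossing edges suggests a
# Polarized Crowd; many suggests a more unified conversation.
import networkx as nx
from networkx.algorithms import community

G = nx.Graph()
G.add_edges_from([("a1", "a2"), ("a2", "a3"), ("a1", "a3"),   # cluster A
                  ("b1", "b2"), ("b2", "b3"), ("b1", "b3"),   # cluster B
                  ("a3", "b1")])                              # a single bridge

groups = community.greedy_modularity_communities(G)
A = groups[0]
cross = [(u, v) for u, v in G.edges() if (u in A) != (v in A)]
print(f"{len(cross)} of {G.number_of_edges()} edges cross between the main groups")
```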
Why is it useful to map the social landscape this way?
Social media is increasingly home to civil society, the place where knowledge sharing, public discussions, debates, and disputes are carried out. As the new public square, social media conversations are as important to document as any other large public gathering. Network maps of public social media discussions in services like Twitter can provide insights into the role social media plays in our society. These maps are like aerial photographs of a crowd, showing the rough size and composition of a population. They can be augmented with on-the-ground interviews with crowd participants, collecting their words and interests. Insights from network analysis and visualization can complement survey or focus group research methods and can enhance sentiment analysis of the text of messages like tweets.
Like topographic maps of mountain ranges, network maps can also illustrate the points on the landscape that have the highest elevation. Some people occupy locations in networks that are analogous to positions of strategic importance on the physical landscape. Network measures of “centrality” can identify key people in influential locations in the discussion network, highlighting the people leading the conversation. The content these people create is often the most popular and widely repeated in these networks, reflecting the significant role these people play in social media discussions.
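As a concrete example of what a centrality measure does, betweenness centrality scores each account by how often it sits on the shortest paths between other accounts, flagging the brokers who bridge otherwise separate parts of a conversation. A minimal sketch on a toy reply graph (illustrative only, not the report’s data or tooling):

```python
# Toy reply/mention graph: an edge points from the account replying
# to the account being replied to or mentioned.
import networkx as nx

G = nx.DiGraph([("fan1", "pundit"), ("fan2", "pundit"), ("fan3", "pundit"),
                ("pundit", "reporter"), ("reporter", "activist"),
                ("friend1", "activist"), ("friend2", "activist")])

# Rank accounts by betweenness centrality: high scores mark conversation brokers.
ranking = sorted(nx.betweenness_centrality(G).items(), key=lambda kv: -kv[1])
for account, score in ranking[:3]:
    print(f"{account}: {score:.3f}")
```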
While the physical world has been mapped in great detail, the social media landscape remains mostly unknown. However, the tools and techniques for social media mapping are improving, allowing more analysts to get social media data, analyze it, and contribute to the collective construction of a more complete map of the social media world. A more complete map and understanding of the social media landscape will help interpret the trends, topics, and implications of these new communication technologies.”