Social Media and the ‘Spiral of Silence’


Report by the Pew Research Center: “A major insight into human behavior from pre-internet era studies of communication is the tendency of people not to speak up about policy issues in public—or among their family, friends, and work colleagues—when they believe their own point of view is not widely shared. This tendency is called the “spiral of silence.”
Some social media creators and supporters have hoped that social media platforms like Facebook and Twitter might produce different enough discussion venues that those with minority views might feel freer to express their opinions, thus broadening public discourse and adding new perspectives to everyday discussion of political issues.
We set out to study this by conducting a survey of 1,801 adults. It focused on one important public issue: Edward Snowden’s 2013 revelations of widespread government surveillance of Americans’ phone and email records. We selected this issue because other surveys by the Pew Research Center at the time we were fielding this poll showed that Americans were divided over whether the NSA contractor’s leaks about surveillance were justified and whether the surveillance policy itself was a good or bad idea. For instance, Pew Research found in one survey that 44% said the release of classified information harms the public interest while 49% said it serves the public interest.
The survey sought people’s opinions about the Snowden leaks, their willingness to talk about the revelations in various in-person and online settings, and their perceptions of the views of those around them in a variety of online and offline contexts.
The survey’s findings produced several major insights…”

Open Intellectual Property Casebook


New book by James Boyle & Jennifer Jenkins: “…This book, the first in a series of Duke Open Coursebooks, is available for free download under a Creative Commons license. It can also be purchased in a glossy paperback print edition for $29.99, $130 cheaper than other intellectual property casebooks.
This book is an introduction to intellectual property law, the set of private legal rights that allows individuals and corporations to control intangible creations and marks—from logos to novels to drug formulae—and the exceptions and limitations that define those rights. It focuses on the three main forms of US federal intellectual property—trademark, copyright and patent—but many of the ideas discussed here apply far beyond those legal areas and far beyond the law of the United States.
The book is intended to be a textbook for the basic Intellectual Property class, but because it is an open coursebook, which can be freely edited and customized, it is also suitable for an undergraduate class, or for a business, library studies, communications or other graduate school class. Each chapter contains cases and secondary readings and a set of problems or role-playing exercises involving the material. The problems range from a video of the Napster oral argument to counseling clients about search engines and trademarks, applying the First Amendment to digital rights management and copyright or commenting on the Supreme Court’s new rulings on gene patents.
Intellectual Property: Law & the Information Society is current as of August 2014. It includes discussions of such issues as the Redskins trademark cancelations, the Google Books case and the America Invents Act. Its illustrations range from graphs showing the growth in patent litigation to comic book images about copyright. The best way to get some sense of its coverage is to download it. In coming weeks, we will provide a separate fuller webpage with a table of contents and individual downloadable chapters.
The Center has also published an accompanying supplement of statutory and treaty materials that is available for free download and low cost print purchase.”

Assessing Social Value in Open Data Initiatives: A Framework


Paper by Gianluigi Viscusi, Marco Castelli and Carlo Batini in Future Internet Journal: “Open data initiatives are characterized, in several countries, by a great extension of the number of data sets made available for access by public administrations, constituencies, businesses and other actors, such as journalists, international institutions and academics, to mention a few. However, most of the open data sets rely on selection criteria, based on a technology-driven perspective, rather than a focus on the potential public and social value of data to be published. Several experiences and reports confirm this issue, such as those of the Open Data Census. However, there are also relevant best practices. The goal of this paper is to investigate the different dimensions of a framework suitable to support public administrations, as well as constituencies, in assessing and benchmarking the social value of open data initiatives. The framework is tested on three initiatives, referring to three different countries, Italy, the United Kingdom and Tunisia. The countries have been selected to provide a focus on European and Mediterranean countries, considering also the difference in legal frameworks (civil law vs. common law countries).”

Google's fact-checking bots build vast knowledge bank


Hal Hodson in the New Scientist: “The search giant is automatically building Knowledge Vault, a massive database that could give us unprecedented access to the world’s facts

GOOGLE is building the largest store of knowledge in human history – and it’s doing so without any human help. Instead, Knowledge Vault autonomously gathers and merges information from across the web into a single base of facts about the world, and the people and objects in it.

The breadth and accuracy of this gathered knowledge is already becoming the foundation of systems that allow robots and smartphones to understand what people ask them. It promises to let Google answer questions like an oracle rather than a search engine, and even to turn a new lens on human history.

Knowledge Vault is a type of “knowledge base” – a system that stores information so that machines as well as people can read it. Where a database deals with numbers, a knowledge base deals with facts. When you type “Where was Madonna born” into Google, for example, the place given is pulled from Google’s existing knowledge base.
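
To make the database/knowledge-base distinction concrete, here is a minimal sketch of a fact store built from subject-predicate-object entries, a representation commonly used for knowledge bases. The schema and example facts are illustrative assumptions, not Google’s internal format.

```python
# A minimal sketch of a knowledge base as subject-predicate-object facts.
# The schema and the example entries are illustrative, not Google's internal format.
facts = {
    ("Madonna", "born_in"): "Bay City, Michigan",
    ("Madonna", "occupation"): "singer",
}

def answer(subject: str, predicate: str) -> str:
    """Return the stored fact for a (subject, predicate) pair, if any."""
    return facts.get((subject, predicate), "unknown")

print(answer("Madonna", "born_in"))  # -> Bay City, Michigan
```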

This existing base, called Knowledge Graph, relies on crowdsourcing to expand its information. But the firm noticed that growth was stalling; humans could only take it so far. So Google decided it needed to automate the process. It started building the Vault by using an algorithm to automatically pull in information from all over the web, using machine learning to turn the raw data into usable pieces of knowledge.

Knowledge Vault has pulled in 1.6 billion facts to date. Of these, 271 million are rated as “confident facts”, to which Google’s model ascribes a more than 90 per cent chance of being true. It does this by cross-referencing new facts with what it already knows.
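
The article only says that new facts are scored by cross-referencing against what the Vault already knows, and that anything above a 90 per cent probability counts as a “confident fact.” The sketch below illustrates that thresholding idea with a toy scoring rule; the scoring function itself is an assumption, not Google’s published model.

```python
# Illustrative only: a toy confidence score that grows with the number of
# independent sources agreeing with a candidate fact. The formula is an
# assumption; only the >90% "confident fact" threshold comes from the article.
def confidence(n_agreeing_sources: int, prior: float = 0.5) -> float:
    score = prior
    for _ in range(n_agreeing_sources):
        score += (1.0 - score) * 0.3  # each corroborating source nudges the score up
    return score

CONFIDENT_THRESHOLD = 0.9

candidates = {"Madonna born_in Bay City": 5, "Madonna born_in Detroit": 1}
confident_facts = [f for f, n in candidates.items() if confidence(n) > CONFIDENT_THRESHOLD]
print(confident_facts)  # -> ['Madonna born_in Bay City']
```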

“It’s a hugely impressive thing that they are pulling off,” says Fabian Suchanek, a data scientist at Télécom ParisTech in France.

Google’s Knowledge Graph is currently bigger than the Knowledge Vault, but it only includes manually integrated sources such as the CIA Factbook.

Knowledge Vault offers Google fast, automatic expansion of its knowledge – and it’s only going to get bigger. As well as the ability to analyse text on a webpage for facts to feed its knowledge base, Google can also peer under the surface of the web, hunting for hidden sources of data such as the figures that feed Amazon product pages.
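
One public, widely used example of data sitting “under the surface” of a page is structured markup embedded in the HTML itself. The sketch below pulls JSON-LD blocks out of raw HTML; it is offered as an illustration of machine-readable data hidden behind a page, not as a description of Google’s actual extraction pipeline.

```python
import json
import re

# Many pages embed machine-readable facts as JSON-LD inside
# <script type="application/ld+json"> tags; pulling those out is one public
# example of data "under the surface" of a page (not Google's actual pipeline).
def extract_jsonld(html: str) -> list:
    blocks = re.findall(
        r'<script type="application/ld\+json">(.*?)</script>', html, flags=re.DOTALL
    )
    return [json.loads(b) for b in blocks]

sample = '<script type="application/ld+json">{"@type": "Product", "name": "Kettle"}</script>'
print(extract_jsonld(sample))  # -> [{'@type': 'Product', 'name': 'Kettle'}]
```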

Tom Austin, a technology analyst at Gartner in Boston, says that the world’s biggest technology companies are racing to build similar vaults. “Google, Microsoft, Facebook, Amazon and IBM are all building them, and they’re tackling these enormous problems that we would never even have thought of trying 10 years ago,” he says.

The potential of a machine system that has the whole of human knowledge at its fingertips is huge. One of the first applications will be virtual personal assistants that go way beyond what Siri and Google Now are capable of, says Austin…”

Cell-Phone Data Might Help Predict Ebola’s Spread


David Talbot at MIT Technology Review: “A West African mobile carrier has given researchers access to data gleaned from cell phones in Senegal, providing a window into regional population movements that could help predict the spread of Ebola. The current outbreak is so far known to have killed at least 1,350 people, mainly in Liberia, Guinea, and Sierra Leone.
The model created using the data is not meant to lead to travel restrictions, but rather to offer clues about where to focus preventive measures and health care. Indeed, efforts to restrict people’s movements, such as Senegal’s decision to close its border with Guinea this week, remain extremely controversial.
Orange Telecom made “an exceptional authorization in support of Ebola control efforts,” according to Flowminder, the Swedish nonprofit that analyzed the data. “If there are outbreaks in other countries, this might tell what places connected to the outbreak location might be at increased risk of new outbreaks,” says Linus Bengtsson, a medical doctor and cofounder of Flowminder, which builds models of population movements using cell-phone data and other sources.
The data from Senegal was gathered in 2013 from 150,000 phones before being anonymized and aggregated. This information had already been given to a number of researchers as part of a data analysis challenge planned for 2015, and the carrier chose to authorize its release to Flowminder as well to help address the Ebola crisis.
The new model helped Flowminder build a picture of the overall travel patterns of people across West Africa. In addition to using data from Senegal, researchers used an earlier data set from Ivory Coast, which Orange had released two years ago as part of a similar conference (see “Released: A Trove of Data-Mining Research from Phones” and “African Bus Routes Redrawn Using Cell-Phone Data”). The model also includes data about population movements from more conventional sources, including surveys.
Separately, Flowminder has produced an animation of the epidemic’s spread since March, based on records of when and where people died of the disease….”
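
Flowminder describes its inputs only as anonymized, aggregated phone records. As a rough illustration of how such records can feed a spread model, the sketch below counts movements between regions into an origin-destination matrix and ranks the regions most connected to an outbreak location; the region names and trips are hypothetical, and the method is not Flowminder’s actual pipeline.

```python
from collections import Counter

# Hypothetical, anonymized region-to-region movements; not Flowminder's real data.
trips = [
    ("Dakar", "Touba"), ("Dakar", "Touba"), ("Touba", "Dakar"),
    ("Dakar", "Saint-Louis"), ("Saint-Louis", "Dakar"),
]

# Aggregate into an origin-destination matrix: how many movements between each pair.
od_matrix = Counter(trips)

# Regions most strongly connected to an outbreak location are the candidates
# for focused preventive measures and health care.
outbreak_region = "Dakar"
at_risk = sorted(
    ((dest, n) for (orig, dest), n in od_matrix.items() if orig == outbreak_region),
    key=lambda pair: -pair[1],
)
print(at_risk)  # -> [('Touba', 2), ('Saint-Louis', 1)]
```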

Our future government will work more like Amazon


Michael Case in The Verge: “There is a lot of government in the United States. Several hundred federal agencies, 535 voting members in two houses of Congress, more than 90,000 state and local governments, and over 20 million Americans involved in public service.

We say we have a government for and by the people. But the way American government conducts its day-to-day business does not feel like anything we, the people weaned on the internet, would design in 2014. Most interactions with the US government don’t resemble anything else we’re used to in our daily lives….

But if the government is ever going to completely retool itself to provide sensible services to a growing, aging, diversifying American population, it will have to do more than bring in a couple innovators and throw data at the public. At the federal level, these kinds of adjustments will require new laws to change the way money is allocated to executive branch agencies so they can coordinate the purchase and development of a standard set of tools. State and local governments will have to agree on standard tools and data formats as well so that the mayor of Anchorage can collaborate with the governor of Delaware.

Technology is the answer to a lot of American government’s current operational shortcomings. Not only are the tools and systems most public servants use outdated and suboptimal, but the organizations and processes themselves have also calcified around similarly out-of-date thinking. So the real challenge won’t be designing cutting edge software or high tech government facilities — it’s going to be conjuring the will to overcome decades of old thinking. It’s going to be convincing over 90,000 employees to learn new skills, coaxing a bitterly divided Congress to collaborate on something scary, and finding a way to convince a timid and distracted White House to put its name on risky investments that won’t show benefits for many years.

But! If we can figure out a way for governments across the country to perform their basic functions and provide often life-saving services, maybe we can move on to chase even more elusive government tech unicorns. Imagine voting from your smartphone, having your taxes calculated and filed automatically with a few online confirmations, or filing for your retirement at a friendly tablet kiosk at your local government outpost. Government could — feasibly — be not only more effective, but also a pleasure to interact with someday. Someday.”

Big Data: Google Searches Predict Unemployment in Finland


Paper by Joonas Tuhkuri: “There are over 3 billion searches globally on Google every day. This report examines whether Google search queries can be used to predict the present and the near-future unemployment rate in Finland. Predicting the present and the near future is of interest, as the official records of the state of the economy are published with a delay. To assess the information contained in Google search queries, the report compares a simple predictive model of unemployment to a model that contains a variable, the Google Index, formed from Google data. In addition, cross-correlation analysis and Granger-causality tests are performed. Compared to a simple benchmark, Google search queries improve the prediction of the present by 10%, measured by mean absolute error. Moreover, predictions using search terms perform 39% better than the benchmark for near-future unemployment three months ahead. Google search queries also tend to improve prediction accuracy around turning points. The results suggest that Google searches contain useful information about the present and near-future unemployment rate in Finland.”
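
The comparison the abstract describes, a benchmark predictor versus one augmented with a Google Index and scored by mean absolute error, can be illustrated with a short sketch. The data below are synthetic and the ordinary-least-squares setup is a stand-in, not the paper’s actual specification.

```python
import numpy as np

# Synthetic illustration: compare a naive benchmark with a model that adds a
# "Google Index" regressor, scoring both by mean absolute error (MAE).
# The data and model are stand-ins, not the paper's actual specification.
rng = np.random.default_rng(0)
n = 120
google_index = rng.normal(size=n)
# Unemployment responds to last month's search activity plus noise.
unemployment = 8 + 0.5 * np.roll(google_index, 1) + rng.normal(scale=0.2, size=n)

y = unemployment[1:]
y_lag = unemployment[:-1]          # benchmark predictor: last month's rate
x_google = google_index[:-1]       # search data are available without delay

# Benchmark: predict this month with last month's unemployment.
mae_benchmark = np.mean(np.abs(y - y_lag))

# Augmented model: least squares on lagged unemployment plus the Google Index.
X = np.column_stack([np.ones_like(y_lag), y_lag, x_google])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
mae_google = np.mean(np.abs(y - X @ beta))

print(f"benchmark MAE: {mae_benchmark:.3f}, with Google Index: {mae_google:.3f}")
```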

Crowd-Sourced, Gamified Solutions to Geopolitical Issues


Gamification Corp: “Daniel Green, co-founder and CTO of Wikistrat, spoke at GSummit 2014 on an intriguing topic: How Gamification Motivates All Age Groups: Or How to Get Retired Generals to Play Games Alongside Students and Interns.

Wikistrat, a crowdsourced consulting company, leverages a worldwide network of experts from various industries to solve some of the world’s geopolitical problems through the power of gamification. Wikistrat also leverages fun, training, mentorship, and networking as core concepts in their company.

Dan (@wsdan) spoke with TechnologyAdvice host Clark Buckner about Wikistrat’s work, origins, what clients can expect from working with Wikistrat, and how gamification correlates with big data and business intelligence. Listen to the podcast and read the summary below:

Wikistrat aims to solve a common problem faced by most governments and organizations when generating strategies: “groupthink.” Such entities can devise a diverse set of strategies, but they always seem to find their resolution in the most popular answer.

To break groupthink, Wikistrat carries out geopolitical simulations built around “collaborative competition.” The process involves:

  • Securing analysts: Wikistrat recruits a diverse group of analysts who are experts in certain fields and located in different strategic places.

  • Competing with ideas: These analysts are placed in an online environment where, instead of competing with each other, one analyst contributes an idea and other analysts then create 2-3 more ideas based on it.

  • Breaking groupthink: Now the competition is only about ideas. People champion the ideas they care about rather than arguing with other analysts. That’s how Wikistrat breaks groupthink and helps its clients discover ideas they may never have considered before.

Gamification occurs when analysts create different scenarios for a specific angle or question the client raises. Plus, Wikistrat’s global analyst coverage is so good that they tout having at least one expert in every country. They accomplished this by allowing anyone—not just four-star generals—to register as an analyst. However, applicants must submit a resume and a writing sample, as well as pass a face-to-face interview….”

Beyond just politics: A systematic literature review of online participation


Paper by Christoph Lutz, Christian Pieter Hoffmann, and Miriam Meckel in First Monday: “This paper presents a systematic literature review of the current state of research on online participation. The review draws on four databases and is guided by the application of six topical search terms. The analysis strives to differentiate distinct forms of online participation and to identify salient discourses within each research field. We find that research on online participation is highly segregated into specific sub–discourses that reflect disciplinary boundaries. Research on online political participation and civic engagement is identified as the most prominent and extensive research field. Yet research on other forms of participation, such as cultural, business, education and health participation, provides distinct perspectives and valuable insights. We outline both field–specific and common findings and derive propositions for future research.”