Power to the People: Addressing Big Data Challenges in Neuroscience by Creating a New Cadre of Citizen Neuroscientists


Jane Roskams and Zoran Popović in Neuron: “Global neuroscience projects are producing big data at an unprecedented rate that informatics and artificial intelligence (AI) analytics simply cannot handle. Online games, like Foldit, Eterna, and Eyewire—and now a new neuroscience game, Mozak—are fueling a people-powered research science (PPRS) revolution, creating a global community of “new experts” that over time synergize with computational efforts to accelerate scientific progress, empowering us to use our collective cerebral talents to drive our understanding of our brain….(More)”

Portugal has announced the world’s first nationwide participatory budget


Graça Fonseca at apolitical: “Portugal has announced the world’s first participatory budget on a national scale. The project will let people submit ideas for what the government should spend its money on, and then vote on which ideas are adopted.

Although participatory budgeting has become increasingly popular around the world in the past few years, it has so far been confined to cities and regions, and no country that we know of has attempted it nationwide. To reach as many people as possible, Portugal is also examining another innovation: letting people cast their votes via ATMs.

‘It’s about quality of life, it’s about the quality of public space, it’s about the quality of life for your children, it’s about your life, OK?’ Graça Fonseca, the minister responsible, told Apolitical. ‘And you have a huge deficit of trust between people and the institutions of democracy. That’s the point we’re starting from and, if you look around, Portugal is not an exception in that among Western societies. We need to build that trust and, in my opinion, it’s urgent. If you don’t do anything, in ten, twenty years you’ll have serious problems.’

Although the official window for proposals begins in January, some have already been submitted to the project’s website. One suggests equipping kindergartens with technology to teach children about robotics. Using the open-source platform Arduino, the plan is to let children play with the tech and so foster scientific understanding from the earliest age.

Proposals can be made in the areas of science, culture, agriculture and lifelong learning, and there will be more than forty events in the new year for people to present and discuss their ideas.

The organisers hope that it will go some way to restoring closer contact between government and its citizens. Previous projects have shown that people who don’t vote in general elections often do cast their ballot on the specific proposals that participatory budgeting entails. Moreover, those who make the proposals often become passionate about them, campaigning for votes, flyering, making YouTube videos, going door-to-door and so fuelling a public discussion that involves ever more people in the process.

On the other side, it can bring public servants nearer to their fellow citizens by sharpening their understanding of what people want and what their priorities are. It can also raise the quality of public services by directing them more precisely to where they’re needed as well as by tapping the collective intelligence and imagination of thousands of participants….

Although it will not be used this year, because the project is still very much in the trial phase, the use of ATMs is potentially revolutionary. As Fonseca puts it, ‘In every remote part of the country, you might have nothing else, but you have an ATM.’ Moreover, an ATM could display proposals and allow people to vote directly, not least because it already contains a secure way of verifying their identity. At the moment, for comparison, people can vote by text or online, sending in the number from their ID card, which is checked against a database….(More)”.

Wikipedia’s not as biased as you might think


Ananya Bhattacharya in Quartz: “The internet is as open as people make it. Often, people limit their Facebook and Twitter circles to like-minded people and only follow certain subreddits, blogs, and news sites, creating an echo chamber of sorts. In a sea of biased content, Wikipedia is one of the few online outlets that strives for neutrality. After 15 years in operation, it’s starting to see results.

Researchers at Harvard Business School evaluated almost 4,000 articles in Wikipedia’s online database against the same entries in Encyclopedia Britannica to compare their biases. They focused on English-language articles about US politics, especially controversial topics, that appeared in both outlets in 2012.

“That is just not a recipe for coming to a conclusion,” Shane Greenstein, one of the study’s authors, said in an interview. “We were surprised that Wikipedia had not failed, had not fallen apart in the last several years.”

Greenstein and his co-author Feng Zhu categorized each article as “blue” or “red.” Drawing from research in political science, they identified terms that are idiosyncratic to each party. For instance, political scientists have identified that Democrats were more likely to use phrases such as “war in Iraq,” “civil rights,” and “trade deficit,” while Republicans used phrases such as “economic growth,” “illegal immigration,” and “border security.”…

“In comparison to expert-based knowledge, collective intelligence does not aggravate the bias of online content when articles are substantially revised,” the authors wrote in the paper. “This is consistent with a best-case scenario in which contributors with different ideologies appear to engage in fruitful online conversations with each other, in contrast to findings from offline settings.”

More surprisingly, the authors found that the 2.8 million registered volunteer editors who were reviewing the articles also became less biased over time. “You can ask questions like ‘do editors with red tendencies tend to go to red articles or blue articles?’” Greenstein said. “You find a prevalence of opposites attract, and that was striking.” The researchers even identified the political stance for a number of anonymous editors based on their IP locations, and the trend held steadfast….(More)”
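The “red”/“blue” phrase-counting idea described above can be sketched in a few lines of Python. This is only an illustration: the scoring rule is an assumption, and the phrase lists are just the examples quoted in the article; Greenstein and Zhu’s actual slant measure, drawn from the political-science literature, is more sophisticated.

```python
# Toy slant scorer: count party-idiosyncratic phrases and compare totals.
# Phrase lists are the examples cited in the article; the >/< decision
# rule is a hypothetical simplification, not the authors' method.
DEMOCRAT_PHRASES = ["war in iraq", "civil rights", "trade deficit"]
REPUBLICAN_PHRASES = ["economic growth", "illegal immigration", "border security"]

def slant(article_text: str) -> str:
    text = article_text.lower()
    blue = sum(text.count(p) for p in DEMOCRAT_PHRASES)
    red = sum(text.count(p) for p in REPUBLICAN_PHRASES)
    if blue > red:
        return "blue"
    if red > blue:
        return "red"
    return "neutral"

print(slant("Debates over the war in Iraq and civil rights dominated..."))  # → blue
```

A real implementation would weight phrases by how strongly they discriminate between parties rather than counting them equally.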

Big Data Is Not a Monolith


Book edited by Cassidy R. Sugimoto, Hamid R. Ekbia and Michael Mattioli: “Big data is ubiquitous but heterogeneous. Big data can be used to tally clicks and traffic on web pages, find patterns in stock trades, track consumer preferences, or identify linguistic correlations in large corpora of texts. This book examines big data not as an undifferentiated whole but contextually, investigating the varied challenges posed by big data for health, science, law, commerce, and politics. Taken together, the chapters reveal a complex set of problems, practices, and policies.

The advent of big data methodologies has challenged the theory-driven approach to scientific knowledge in favor of a data-driven one. Social media platforms and self-tracking tools change the way we see ourselves and others. The collection of data by corporations and government threatens privacy while promoting transparency. Meanwhile, politicians, policy makers, and ethicists are ill-prepared to deal with big data’s ramifications. The contributors look at big data’s effect on individuals as it exerts social control through monitoring, mining, and manipulation; big data and society, examining both its empowering and its constraining effects; big data and science, considering issues of data governance, provenance, reuse, and trust; and big data and organizations, discussing data responsibility, “data harm,” and decision making….(More)”

Predicting judicial decisions of the European Court of Human Rights: a Natural Language Processing perspective


Nikolaos Aletras et al. in PeerJ Computer Science: “Recent advances in Natural Language Processing and Machine Learning provide us with the tools to build predictive models that can be used to unveil patterns driving judicial decisions. This can be useful, for both lawyers and judges, as an assisting tool to rapidly identify cases and extract patterns which lead to certain decisions. This paper presents the first systematic study on predicting the outcome of cases tried by the European Court of Human Rights based solely on textual content. We formulate a binary classification task where the input of our classifiers is the textual content extracted from a case and the target output is the actual judgment as to whether there has been a violation of an article of the convention of human rights. Textual information is represented using contiguous word sequences, i.e., N-grams, and topics. Our models can predict the court’s decisions with strong accuracy (79% on average). Our empirical analysis indicates that the formal facts of a case are the most important predictive factor. This is consistent with the theory of legal realism suggesting that judicial decision-making is significantly affected by the stimulus of the facts. We also observe that the topical content of a case is another important feature in this classification task and explore this relationship further by conducting a qualitative analysis….(More)”
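The paper’s formulation — represent each case as a bag of contiguous word sequences (N-grams) and learn a binary violation/no-violation classifier — can be sketched without any ML library. Everything below is a toy stand-in: the training texts are invented, and the overlap-scoring “classifier” is a deliberately simple substitute for the SVM over thousands of N-gram and topic features that the authors actually trained on real ECtHR judgments.

```python
# Minimal N-gram classifier sketch (hypothetical data, simplified model).
from collections import Counter

def ngram_features(text: str) -> Counter:
    """Bag of unigrams and bigrams (contiguous word sequences)."""
    toks = text.lower().split()
    feats = Counter(toks)
    feats.update(" ".join(toks[i:i + 2]) for i in range(len(toks) - 1))
    return feats

def train_profile(texts) -> Counter:
    """Aggregate N-gram counts over all training texts of one class."""
    profile = Counter()
    for t in texts:
        profile.update(ngram_features(t))
    return profile

def predict(text, violation_profile, no_violation_profile) -> int:
    """Score a case by N-gram overlap with each class; 1 = violation found."""
    feats = ngram_features(text)
    v = sum(violation_profile[f] for f in feats)
    nv = sum(no_violation_profile[f] for f in feats)
    return 1 if v >= nv else 0

violation = train_profile(["applicant detained without judicial review",
                           "detained for years without trial"])
no_violation = train_profile(["domestic remedies were available and effective",
                              "the interference was proportionate and lawful"])
print(predict("the applicant was detained without trial", violation, no_violation))  # → 1
```

The interesting empirical finding survives the simplification: because the classifier sees only surface text, its 79% accuracy implies that the wording of the “facts” sections carries most of the predictive signal.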

Essays on collective intelligence


Thesis by Yiftach Nagar: “This dissertation consists of three essays that advance our understanding of collective-intelligence: how it works, how it can be used, and how it can be augmented. I combine theoretical and empirical work, spanning qualitative inquiry, lab experiments, and design, exploring how novel ways of organizing, enabled by advancements in information technology, can help us work better, innovate, and solve complex problems.

The first essay offers a collective sensemaking model to explain structurational processes in online communities. I draw upon Weick’s model of sensemaking as committed-interpretation, which I ground in a qualitative inquiry into Wikipedia’s policy discussion pages, in an attempt to explain how structuration emerges as interpretations are negotiated, and then committed through conversation. I argue that the wiki environment provides conditions that help commitments form, strengthen and diffuse, and that this, in turn, helps explain trends of stabilization observed in previous research.

In the second essay, we characterize a class of semi-structured prediction problems, where patterns are difficult to discern, data are difficult to quantify, and changes occur unexpectedly. Making correct predictions under these conditions can be extremely difficult, and is often associated with high stakes. We argue that in these settings, combining predictions from humans and models can outperform predictions made by groups of people or computers alone. In laboratory experiments, we combined human and machine predictions, and found the combined predictions to be more accurate and more robust than predictions made by groups of only people or only machines.

The third essay addresses a critical bottleneck in open-innovation systems: reviewing and selecting the best submissions, in settings where submissions are complex intellectual artifacts whose evaluation requires expertise. To aid expert reviewers, we offer a computational approach we developed and tested using data from the Climate CoLab – a large citizen science platform. Our models approximate expert decisions about the submissions with high accuracy, and their use can save review labor, and accelerate the review process….(More)”
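The human–machine combination idea from the second essay can be illustrated with a one-line blend of probability forecasts. The equal weighting and the numbers below are assumptions for illustration only; the dissertation’s experiments determined empirically how to combine the two sources.

```python
# Hypothetical sketch: blend the mean of a group's probability estimates
# with a model's estimate. model_weight=0.5 is an illustrative choice,
# not the weighting used in the essay's experiments.
def combine(human_estimates, model_estimate, model_weight=0.5):
    """Blend mean human forecast with a model forecast (probabilities in [0, 1])."""
    human_mean = sum(human_estimates) / len(human_estimates)
    return (1 - model_weight) * human_mean + model_weight * model_estimate

# e.g. three forecasters (0.6, 0.7, 0.8) and one model (0.4) predicting an event
print(round(combine([0.6, 0.7, 0.8], 0.4), 3))  # → 0.55
```

The intuition behind such blends is that human and model errors are often uncorrelated in semi-structured settings, so averaging them cancels part of each side’s mistakes.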

100 Stories: The Impact of Open Access


Report by Jean-Gabriel Bankier and Promita Chatterji: “It is time to reassess how we talk about the impact of open access. Early thought leaders in the field of scholarly communications sparked our collective imagination with a compelling vision for open access: improving global access to knowledge, advancing science, and providing greater access to education.1 But despite the fact that open access has gained a sizable foothold, discussions about the impact of open access are often still stuck at the level of aspirational or potential benefit. Shouldn’t we be able to gather real examples of positive outcomes to demonstrate the impact of open access? We need to get more concrete.

Measurements like altmetrics and download counts provide useful data about usage, but remain largely indicators of early-level interest rather than actual outcomes and benefits. There has been considerable research into how open access affects citation counts,2 but beyond that discussion there is still a gap between the hypothetical societal good of open access and the minutiae of usage and interest measurements. This report begins to bridge that gap by presenting a framework, drawn from 100 real stories that describe the impact of open access. Collected by bepress from across 500 institutions and 1,400 journals using Digital Commons as their publishing and/or institutional repository platform, these stories present information about actual outcomes, benefits, and impacts.

This report brings to light the wide variety of scholarly and cultural activity that takes place on university campuses and the benefit resulting from greater visibility and access to these materials. We hope that administrators, authors, students, and others will be empowered to articulate and amplify the impact of their own work. We also created the framework to serve as a tool for stakeholders who are interested in advocating for open access on their campus yet lack the specific vocabulary and suitable examples. Whether it is a librarian hoping to make the case for open access with reluctant administrators or faculty, a faculty member who wants to educate students about changing modes of publishing, a funding agency looking for evidence in support of its open access requirement, or students advocating for educational affordability, the framework and stories themselves can be a catalyst for these endeavors. Put more simply, these are 100 stories to answer the question: “why does open access matter?”…(More)”

There isn’t always an app for that: How tech can better assist refugees


Alex Glennie and Meghan Benton at Nesta: “Refugees are natural innovators. Often armed with little more than a smartphone, they must be adaptable and inventive if they are to navigate unpredictable, dangerous environments and successfully establish themselves in a new country.

Take Mojahed Akil, a young Syrian computer science student whose involvement in street protests in Aleppo brought him to the attention – and torture chambers – of the regime. With the support of his family, Mojahed was able to move across the border to the relative safety of Gaziantep, a city in southwest Turkey. Yet once he was there, he found it very difficult to communicate with those around him (most of whom only spoke Turkish but not Arabic or English) and to access essential information about laws, regulations and local services.

To overcome these challenges, Mojahed used his software training to develop a free smartphone app and website for Syrians living in Turkey. The Gherbetna platform offers both information (for example, about job listings) and connections (through letting users ask for help from the app’s community of contributors). Since its launch in 2014, it is estimated that Gherbetna has been downloaded by more than 50,000 people.

Huge efforts, but mixed results

Over the last 18 months, an explosion of creativity and innovation from tech entrepreneurs has tried to make life better for refugees. A host of new tools and resources now exists to support refugees along every stage of their journey. Our new report for the Migration Policy Institute’s Transatlantic Council on Migration explores some of these tools trying to help refugees integrate, and examines how policymakers can support the best new initiatives.

Our report finds that the speed of this ‘digital humanitarianism’ has been a double-edged sword, with a huge amount of duplication in the sector and some tools failing to get off the ground. ‘Failing fast’ might be a badge of honour in Silicon Valley, but what are the risks if vulnerable refugees rely on an app that disappears from one day to the next?

For example, consider Migreat, a ‘skyscanner for migration’, which pivoted at the height of the refugee crisis to become an asylum information app. Its selling point was that it was obsessively updated by legal experts, so users could trust the information — and rely less on smugglers or word of mouth. At its peak, Migreat had two million users a month, but according to an interview with Josephine Goube (one of the cofounders of the initiative) funding challenges meant the platform had to fold. Its digital presence still exists, but is no longer being updated, a ghost of February 2016.

Perhaps an even greater challenge is that few of these apps were designed with refugees, so many do not meet their needs. Creating an app to help refugees navigate local services is a bit like putting a sticking plaster on a deep wound: it doesn’t solve the problem that most services, and especially digital services, are not attuned to refugee needs. Having multilingual, up-to-date and easy-to-navigate government websites might be more helpful.

A new ‘digital humanitarianism’…(More)”

Crowdsourcing and cellphone data could help guide urban revitalization


Science Magazine: “For years, researchers at the MIT Media Lab have been developing a database of images captured at regular distances around several major cities. The images are scored according to different visual characteristics — how safe the depicted areas look, how affluent, how lively, and the like…. Adjusted for factors such as population density and distance from city centers, the correlation between perceived safety and visitation rates was strong, but it was particularly strong for women and people over 50. The correlation was negative for people under 30, which means that males in their 20s were actually more likely to visit neighborhoods generally perceived to be unsafe than to visit neighborhoods perceived to be safe.

In the same paper, the researchers also identified several visual features that are highly correlated with judgments that a particular area is safe or unsafe. Consequently, the work could help guide city planners in decisions about how to revitalize declining neighborhoods….

Jacobs’ theory, Hidalgo says, is that neighborhoods in which residents can continuously keep track of street activity tend to be safer; a corollary is that buildings with street-facing windows tend to create a sense of safety, since they imply the possibility of surveillance. Newman’s theory is an elaboration on Jacobs’, suggesting that architectural features that demarcate public and private spaces, such as flights of stairs leading up to apartment entryways or archways separating plazas from the surrounding streets, foster the sense that crossing a threshold will bring on closer scrutiny….(More)”
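The core statistic in the excerpt above is a correlation between a neighborhood’s perceived-safety score and its visitation rate for a given demographic group. A plain Pearson correlation over invented numbers is sketched below; the actual study additionally adjusted for population density and distance from the city center, which this toy omits.

```python
# Pearson correlation between perceived safety and visitation rates.
# All numbers are invented for illustration; no adjustment for density
# or distance from the city center is made here, unlike the real study.
from statistics import mean

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sd_x = sum((x - mx) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

safety_scores = [2.1, 3.4, 4.0, 4.8, 5.5]   # perceived safety per neighborhood
visits_over_50 = [10, 14, 18, 22, 30]       # hypothetical visitation rates, age 50+
print(round(pearson(safety_scores, visits_over_50), 2))
```

Run per demographic group, the sign of this coefficient is what flips for visitors under 30 in the study’s finding.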

The effect of “sunshine” on policy deliberation: The case of the Federal Open Market Committee


John T. Woolley and Joseph Gardner in The Social Science Journal: “How does an increase in transparency affect policy deliberation? Increased government transparency is commonly advocated as beneficial to democracy. Others argue that transparency can undermine democratic deliberation by, for example, causing poorer reasoning. We analyze the effect of increased transparency in the case of a rare natural experiment involving the Federal Open Market Committee (FOMC).

In 1994 the FOMC began the delayed public release of verbatim meeting transcripts and announced it would release all transcripts of earlier, secret, meetings back into the 1970s. To assess the effect of this change in transparency on deliberation, we develop a measure of an essential aspect of deliberation, the use of reasoned arguments.

Our contributions are twofold: we demonstrate a method for measuring deliberative reasoning and we assess how a particular form of transparency affected ongoing deliberation. In a regression model with a variety of controls, we find increased transparency had no independent effect on the use of deliberative reasoning in the FOMC. Of particular interest to deliberative scholars, our model also demonstrates a powerful role for leaders in facilitating deliberation. Further, both increasing participant equality and more frequent expressions of disagreement were associated with greater use of deliberative language….(More)”
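One way to make the idea of “measuring deliberative reasoning” concrete is to count reason-giving markers per thousand words of each speaker turn in a transcript. The marker list and the per-1,000-words normalization below are assumptions for illustration; Woolley and Gardner constructed their own validated measure rather than a simple keyword count.

```python
# Hypothetical operationalization of "use of reasoned arguments":
# density of reason-giving markers in a speaker turn. The marker list
# is illustrative, not the authors' instrument.
import re

REASON_MARKERS = ["because", "therefore", "since", "evidence", "suggests"]

def reasoning_rate(turn: str) -> float:
    """Reason-giving markers per 1,000 words in one speaker turn."""
    words = re.findall(r"[a-z']+", turn.lower())
    hits = sum(words.count(m) for m in REASON_MARKERS)
    return 1000 * hits / max(len(words), 1)

turn = ("I favor a rate increase because the evidence suggests "
        "inflation expectations are drifting upward.")
print(round(reasoning_rate(turn), 1))  # → 214.3
```

A per-turn score like this could then serve as the dependent variable in a regression with a transparency indicator and controls, which is the shape of the analysis the paper reports.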