How to Win a Science Contest


 at Pacific Standard: “…there are contests like the DARPA Robotics Challenge, which gives prizes for solving particularly difficult problems, like how to prevent an autonomous vehicle from crashing.

But who wins such contests, and how? One might think it’s the science insiders, since they have the knowledge and background to solve difficult scientific problems. It’s hard to imagine, for example, a political scientist solving a major problem in theoretical physics. At the same time, insiders can become inflexible, having been so ensconced in a particular way of thinking that they can’t see outside of the box, let alone think outside it.

Unfortunately, most of what we know about insiders, outsiders, and scientific success is anecdotal. (Hedy Lamarr, the late actress and co-inventor of a key wireless technology, is a prominent anecdote, but still just an anecdote.) To remedy that, Oguz Ali Acar and Jan van den Ende decided to conduct a proper study. For data, they looked to InnoCentive, an online platform that “crowdsource[s] innovative solutions from the world’s smartest people, who compete to provide ideas and solutions to important business, social, policy, scientific, and technical challenges,” according to its website.

Acar and van den Ende surveyed 230 InnoCentive contest participants, who reported how much expertise they had related to the last problem they’d solved, along with how much experience they had solving similar problems in the past, regardless of whether it was related to their professional expertise. The researchers also asked how many different scientific fields problem solvers had looked to for ideas, and how much effort they’d put into their solutions. For each of the solvers, the researchers then looked at all the contests that person won and computed their odds of winning—a measure of creativity, they argue, since contests are judged in part on the solutions’ creativity.

That data revealed an intuitive, though not entirely simple pattern. Insiders (think Richard Feynman in physics) were more likely to win a contest when they cast a wide net for ideas, while outsiders (like Lamarr) performed best when they focused on one scientific or technological domain. In other words, outsiders—who may bring a useful new perspective to bear—should bone up on the problem they’re trying to solve, while insiders, who’ve already done their homework, benefit from thinking outside the box.

Still, there’s something both groups can’t do without: hard work. “[I]f insiders … spend significant amounts of time seeking out knowledge from a wide variety of other fields, they are more likely to be creative in that domain,” Acar and van den Ende write, and if outsiders work hard, they “can turn their lack of knowledge in a domain into an advantage.”….(More)”

Governance by Algorithms: Reality Construction by Algorithmic Selection on the Internet


Paper by Natascha Just & Michael Latzer in Media, Culture & Society (forthcoming): “This paper explores the governance by algorithms in information societies. Theoretically, it builds on (co-)evolutionary innovation studies in order to adequately grasp the interplay of technological and societal change, and combines these with institutional approaches to incorporate governance by technology or rather software as institutions. Methodologically it draws from an empirical survey of Internet-based services that rely on automated algorithmic selection, a functional typology derived from it, and an analysis of associated potential social risks. It shows how algorithmic selection has become a growing source of social order, of a shared social reality in information societies. It argues that – similar to the construction of realities by traditional mass media – automated algorithmic selection applications shape daily lives and realities, affect the perception of the world, and influence behavior. However, the co-evolutionary perspective on algorithms as institutions, ideologies, intermediaries and actors highlights differences that are to be found first in the growing personalization of constructed realities, and second in the constellation of involved actors. Altogether, compared to reality construction by traditional mass media, algorithmic reality construction tends to increase individualization, commercialization, inequalities and deterritorialization, and to decrease transparency, controllability and predictability…(Full Paper)”

Knowledge Unbound


MIT Press: “Peter Suber has been a leading advocate for open access since 2001 and has worked full time on issues of open access since 2003. As a professor of philosophy during the early days of the internet, he realized its power and potential as a medium for scholarship. As he writes now, “it was like an asteroid crash, fundamentally changing the environment, challenging dinosaurs to adapt, and challenging all of us to figure out whether we were dinosaurs.” When Suber began putting his writings and course materials online for anyone to use for any purpose, he soon experienced the benefits of that wider exposure. In 2001, he started a newsletter—the Free Online Scholarship Newsletter, which later became the SPARC Open Access Newsletter—in which he explored the implications of open access for research and scholarship. This book offers a selection of some of Suber’s most significant and influential writings on open access from 2002 to 2010.

In these texts, Suber makes the case for open access to research; answers common questions, objections, and misunderstandings; analyzes policy issues; and documents the growth and evolution of open access during its most critical early decade. (Free Download)”

 

The Total Archive


LIMN issue edited by Boris Jardine and Christopher Kelty: “Vast accumulations saturate our world: phone calls and emails stored by security agencies; every preference of every individual collected by advertisers; ID numbers, and maybe an iris scan, for every Indian; hundreds of thousands of whole genome sequences; seed banks of all existing plants, and of course, books… all of them. Just what is the purpose of these optimistically total archives, and how are they changing us?

This issue of Limn asks authors and artists to consider how these accumulations govern us, where this obsession with totality came from and how we might think differently about big data and algorithms, by thinking carefully through the figure of the archive.

Contributors: Miriam Austin, Jenny Bangham, Reuben Binns, Balázs Bodó, Geoffrey C. Bowker, Finn Brunton, Lawrence Cohen, Stephen Collier, Vadig De Croehling, Lukas Engelmann, Nicholas HA Evans, Fabienne Hess, Anna Hughes, Boris Jardine, Emily Jones, Judith Kaplan, Whitney Laemmli, Andrew Lakoff, Rebecca Lemov, Branwyn Poleykett, Mary Murrell, Ben Outhwaite, Julien Prévieux, and Jenny Reardon….(More)”

How to stop being so easily manipulated by misleading statistics


Q&A by Akshat Rathi in Quartz: “There are three kinds of lies: Lies, damned lies, and statistics.” Few people know the struggle of correcting such lies better than David Spiegelhalter. Since 2007, he has been the Winton professor for the public understanding of risk (though he prefers “statistics” to “risk”) at the University of Cambridge.

In a sunlit hotel room in Washington DC, Quartz caught up with Spiegelhalter recently to talk about his unique job. The conversation sprawled from the wisdom of eating bacon (would you swallow any other known carcinogen?), to the serious crime of manipulating charts, to the right way to talk about rare but scary diseases.

When he isn’t fixing people’s misunderstandings of numbers, he works to communicate numbers better so that misunderstandings can be avoided from the beginning. The interview is edited and condensed for clarity….
What’s a recent example of misrepresentation of statistics that drove you bonkers?
I got very grumpy at an official graph of British teenage pregnancy rates that apparently showed they had declined to nearly zero. Until I realized that the bottom part of the axis had been cut off, which made it impossible to visualize the (very impressive) 50% reduction since 2000.

You once said graphical representation of data does not always communicate what we think it communicates. What do you mean by that?
Graphs can be as manipulative as words, using tricks such as cutting axes, rescaling things, or changing data from positive to negative. Sometimes putting zero on the y-axis is wrong. So to be sure that you are communicating the right things, you need to evaluate the message that people are taking away. There are no absolute rules. It all depends on what you want to communicate….
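A quick numeric sketch shows how much work an axis cut can do. The rates below are invented, chosen only to mimic the roughly 50% fall Spiegelhalter describes:

```python
# Hypothetical teenage pregnancy rates per 1,000 (invented numbers).
rate_2000, rate_2014 = 43.6, 22.9

# Honest axis (baseline at 0): bar heights are proportional to the data,
# so the chart shows the true decline.
true_drop = 1 - rate_2014 / rate_2000                 # ~0.47, a ~50% fall

# Cut the bottom of the axis off at 20 and the 2014 bar nearly vanishes,
# so the same data reads as a collapse "to nearly zero".
baseline = 20
apparent_drop = 1 - (rate_2014 - baseline) / (rate_2000 - baseline)  # ~0.88
```

Nothing about the data changed; only the baseline did, and the apparent decline nearly doubled.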

Poorly communicated risk can have a severe effect. For instance, the news story about the risk that pregnant women are exposing their unborn child to when they drink alcohol caused stress to one of our news editors who had consumed wine moderately through her pregnancy.

I think it’s irresponsible to say there is a risk when they actually don’t know if there is one. There is scientific uncertainty about that.
In such situations of unknown risk, there is a phrase that is often used: “Absence of evidence is not evidence of absence.” I hate that phrase. I get so angry when people use that phrase. It’s always used in a manipulative way. I say to them that it’s not evidence of absence, but if you’ve looked hard enough you’ll see that most of the time the evidence shows a very small effect, if at all.

So on the risks of drinking alcohol while being pregnant, the UK’s health authority said that as a precautionary step it’s better not to drink. That’s fair enough. This honesty is important. To say that we don’t definitely know if drinking is harmful, but to be safe we say you shouldn’t. That’s treating people as adults and allowing them to use their own judgement.

Science is a bigger and bigger part of our lives. What is the limitation in science journalism right now and how can we improve it?...(More)

Virtual tsunami simulator could help civilians prepare for the worst


Springwise: “The applications for virtual reality continue to grow — we have recently seen one VR game used to help recovering addicts and another that teaches peacekeeping skills. Now, the Aichi University of Technology has created a VR tsunami simulator, which can be experienced with Oculus Rift, Gear VR or Google Cardboard to help people prepare for natural disasters.

The three immersive videos — excerpts of which are on YouTube — were created by a team led by Dr. Tomoko Itamiya. They depict the effects of a tsunami similar to the one Japan suffered in 2011, so that civilians can prepare themselves mentally for a natural disaster. Each video is in first person and guides the viewer through various stressful situations.

In one, the viewer is a driver, stuck in their car surrounded by water and floating vehicles. In another, the viewer is in a virtual flood, with water up to their knees and rising rapidly. All three videos use YouTube’s 360 degrees capability as well as sound effects to enhance the intensity of the situation. The hope is that by enabling viewers to experience the disaster in such an immersive way, they will be less prone to panic in the event of a real disaster….(More)”

Crowdsourcing Human Rights


Faisal Al Mutar at The World Post: “The Internet has also allowed activists to access information as never before. I recently joined the Movements.org team, a part of the New York-based organization, Advancing Human Rights. This new platform allows activists from closed societies to connect directly with people around the world with skills to help them. In the first month of its launch, thousands of activists from 92 countries have come to Movements.org to defend human rights.

Movements.org is a promising example of how technology can be utilized by activists to change the world. Dissidents from some of the most repressive dictatorships — Russia, Iran, Syria and China — are connecting with individuals from around the globe who have unique skills to aid them.

Here are just a few of the recent success stories:

  • A leading Saudi expert on combatting state-sponsored incitement in textbooks posted a request to speak with members of the German government due to their strict anti-hate-speech laws. A former foundation executive connected him with senior German officials.
  • A secular Syrian group posted a request for PR aid to explain to Americans that the opposition is not comprised solely of radical elements. The founder of a strategic communication firm based in Los Angeles responded and offered help.
  • A Yemeni dissident asked for help creating a radio station focused on youth empowerment. He was contacted by a Syrian dissident who set up Syrian radio programs to offer advice.
  • Journalists from leading newspapers offered to tell human rights stories and connected with activists from dictatorships.
  • A request was created for a song to commemorate the life of Sergei Magnitsky, a Russian tax lawyer who died in prison. An NYC-based songwriter created a beautiful song, and activists from Russia (including a member of Pussy Riot) filmed a music video of it.
  • North Korean defectors posted requests to get information in and out of their country and technologists posted offers to help with radio and satellite communication systems.
  • A former Iranian political prisoner posted a request to help sustain his radio station which broadcasts into Iran and helps keep information flowing to Iranians.

There are more and more cases everyday….(More)

The Function of—and Need for—Institutional Review Boards


Review by  of The Censor’s Hand: The Misregulation of Human-Subject Research (Carl E. Schneider, The MIT Press): “Scientific research can be a laborious and frustrating process even before it gets started—especially when it involves living human subjects. Universities and other research institutions maintain Institutional Review Boards that scrutinize research proposals and their methodologies, consent and privacy procedures, and so on. Similarly intensive reviews are required when the intention is to use human tissue—if, say, tissue from diagnostic cancer biopsies could potentially be used to gauge the prevalence of some other illness across the population. These procedures can generate absurdities. A doctor who wanted to know which television characters children recognized, for example, was advised to seek ethics committee approval, and told that he needed to do a pilot study as a precursor.

Today’s IRB system is the response to a historic problem: academic researchers’ tendency to behave abominably when left unmonitored. Nazi medical and pseudomedical experiments provide an obvious and well-known reference, but such horrors are not found only in totalitarian regimes. The Tuskegee syphilis study, for example, deliberately left black men untreated over the course of decades so researchers could study the natural course of the disease. On a much smaller but equally disturbing scale is the case of Dan Markingson, a 26-year-old University of Michigan graduate. Suffering from psychotic illness, Markingson was coercively enrolled in a study of antipsychotics to which he could not consent, and concerns about his deteriorating condition were ignored. In 2004, he was found dead, having almost decapitated himself with a box cutter.

Many thoughtful ethicists are aware of the imperfections of IRBs. They have worried publicly for some time that the IRB system, or parts of it, may claim an authority with which even many bioethicists are uncomfortable, and hinder science for no particularly good reason. Does the system need re-tuning, a total re-build, or something even more drastic?

When it comes to IRBs, Carl E. Schneider, a professor of law and internal medicine at the University of Michigan, belongs to the abolitionist camp. In The Censor’s Hand: The Misregulation of Human-Subject Research, he presents the case against the IRB system plainly. It is a case that rests on seven related charges.

IRBs, Schneider posits, cannot be shown to do good, with regulators able to produce “no direct evidence that IRBs prevented harm”; that an IRB at least went through the motions of reviewing the trial in which Markingson died might be cited as evidence of this. On top of that, he claims, IRBs sometimes cause harm, at least insofar as they slow down medical innovation. They are built to err on the side of caution, since “research on humans” can cover a vast range of activities and disciplines, and they struggle to take this range into proper account. Correspondingly, they “lack a legible and convincing ethics”; the autonomy of IRBs means that they come to different decisions on identical cases. (In one case, an IRB thought that providing supplemental vitamin A in a study was so dangerous that it should not be allowed; another thought that withholding it in the same study was so dangerous that it should not be allowed.) IRBs have unrealistically high expectations of their members, who are often fairly ad hoc groupings with no obvious relevant expertise. They overemphasize informed consent, with the unintended consequence that cramming every possible eventuality into a consent form makes it utterly incomprehensible. Finally, Schneider argues, IRBs corrode free expression by restricting what researchers can do and how they can do it….(More)”

Accountable machines: bureaucratic cybernetics?


Alison Powell at LSE Media Policy Project Blog: “Algorithms are everywhere, or so we are told, and the black boxes of algorithmic decision-making make it harder than in the past to oversee processes that regulators and activists argue ought to be transparent. But when, and where, and which machines do we wish to make accountable, and for what purpose? In this post I discuss how algorithms discussed by scholars are most commonly those at work on media platforms whose main products are the social networks and attention of individuals. Algorithms, in this case, construct individual identities through patterns of behaviour, and provide the opportunity for finely targeted products and services. While there are serious concerns about, for instance, price discrimination, algorithmic systems for communicating and consuming are, in my view, less inherently problematic than processes that affect our collective participation and belonging as citizens. In this second sphere, algorithmic processes – especially machine learning – combine with processes of governance that focus on individual identity performance to profoundly transform how citizenship is understood and undertaken.

Communicating and consuming

In the communications sphere, algorithms are what make it possible to make money from the web, for example through advertising brokerage platforms that help companies bid for ads on major newspaper websites. IP address monitoring, which tracks clicks and web activity, creates detailed consumer profiles and transforms the everyday experience of communication into a constantly updated production of consumer information. This process of personal profiling is at the heart of many of the concerns about algorithmic accountability. The perpetual production of data by individuals, and the increasing capacity to analyse it even when it doesn’t appear to relate, has certainly revolutionised advertising by allowing more precise targeting, but what has it done for areas of public interest?

John Cheney-Lippold identifies how the categories of identity are now developed algorithmically, since a category like gender is not based on self-disclosure, but instead on patterns of behaviour that fit with expectations set by previous alignment to a norm. In assessing ‘algorithmic identities’, he notes that these produce identity profiles which are narrower and more behaviour-based than the identities that we perform. This is a result of the fact that many of the systems that inspired the design of algorithmic systems were based on using behaviour and other markers to optimise consumption. Algorithmic identity construction has spread from the world of marketing to the broader world of citizenship – as evidenced by the Citizen Ex experiment shown at the Web We Want Festival in 2015.
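Cheney-Lippold’s point can be sketched in a few lines: a profiler of this kind never asks who you are, it scores your clickstream against behavioural norms and assigns whichever category fits best. The categories, weights, and event names below are all invented for illustration, not taken from any real system:

```python
# Behavioural "norms": how strongly each click type counts toward a
# category, as set by whoever designed the profiler (invented values).
NORMS = {
    "sports_fan":  {"sports_site": 1.0, "news_site": 0.2},
    "news_junkie": {"sports_site": 0.1, "news_site": 1.0},
}

def assign_category(clickstream):
    """Score a stream of click events against each norm and return the
    best-fitting category -- identity inferred from behaviour alone."""
    scores = {category: 0.0 for category in NORMS}
    for event in clickstream:
        for category, weights in NORMS.items():
            scores[category] += weights.get(event, 0.0)
    return max(scores, key=scores.get)
```

The assigned label shifts with every click, which is exactly the narrowing Cheney-Lippold describes: the profile tracks behaviour that matches a pre-set norm, not the identity a person would claim for themselves.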

Individual consumer-citizens

What’s really at stake is that the expansion of algorithmic assessment of commercially derived big data has extended the frame of the individual consumer into all kinds of other areas of experience. In a supposed ‘age of austerity’ when governments believe it’s important to cut costs, this connects with the view of citizens as primarily consumers of services, and furthermore, with the idea that a citizen is an individual subject whose relation to a state can be disintermediated given enough technology. So, with sensors on your garbage bins you don’t need to even remember to take them out. With pothole reporting platforms like FixMyStreet, a city government can be responsive to an aggregate of individual reports. But what aspects of our citizenship are collective? When, in the algorithmic state, can we expect to be together?

Put another way, is there any algorithmic process to value the long term education, inclusion, and sustenance of a whole community for example through library services?…

Seeing algorithms – machine learning in particular – as supporting decision-making for broad collective benefit rather than as part of ever more specific individual targeting and segmentation might make them more accountable. But more importantly, this would help algorithms support society – not just individual consumers….(More)”

The internet’s age of assembly is upon us


Ehud Shapiro in the Financial Times: “In 20 years, the internet has matured and has reached its equivalent of the Middle Ages. It has large feudal communities, with rulers who control everything and billions of serfs without civil rights. History tells us that the medieval era was followed by the Enlightenment. That great thinker of Enlightenment liberalism, John Stuart Mill, declared that there are three basic freedoms: freedom of thought and speech; freedom of “tastes and pursuits”; and the freedom to unite with others. The first two kinds of freedom are provided by the internet in abundance, at least in free countries.

But today’s internet technology does not support freedom of assembly, and consequently does not support democracy. For how can we practice democracy if people cannot assemble to discuss, take collective action or form political parties? The reason is that the internet currently is a masquerade. We can easily form a group on Google or Facebook, but we cannot know for sure who its members are. Online, people are sometimes not who they say they are.

Fortunately, help is on the way. The United Nations and the World Bank are committed to providing digital IDs to every person on the planet by 2030.

Digital IDs are smart cards that use public key cryptography, contain biometric information and allow easy proof of identity. They are already being used in many countries, but widespread use of them on the internet will require standardisation and seamless smartphone integration, which are yet to come.
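As a rough illustration of the challenge-response idea behind such cards, here is a toy sketch using textbook RSA numbers: the card signs a verifier’s challenge with a private key it never reveals, and anyone holding the public key can check the signature. The tiny primes are for readability only; real digital IDs use 2048-bit keys or elliptic curves on a tamper-resistant chip:

```python
import hashlib

# Toy RSA keypair (classic textbook values -- illustration only).
p, q = 61, 53
n = p * q            # 3233, the public modulus
e = 17               # public exponent, published with the ID
d = 2753             # private exponent, kept inside the card's chip

def digest(message: bytes) -> int:
    # Hash the message and reduce it into the RSA modulus.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # The card proves identity by signing a challenge with d.
    return pow(digest(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    # Any verifier with the public key (n, e) can check the proof.
    return pow(signature, e, n) == digest(message)
```

The point of the scheme is that verification requires only public information, while producing a valid signature requires the private key, so the cardholder can prove who they are online without a shared password.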

In the meantime, we need to ask what kind of democracy could be realised on the internet. A new kind of online democracy is already emerging, with software such as Liquid Feedback or Adhocracy, which power “proposition development” and decision making. Known as “liquid” or “delegative democracy”, this is a hybrid of existing forms of direct and representative democracy.

It is like direct democracy, in that every vote is decided by the entire membership, directly or via delegation. It resembles representative democracy in that members normally trust delegates to vote on their behalf. But delegates must constantly earn the trust of the other members.

Another key question concerns which voting system to use. Systems that allow voters to rank alternatives are generally considered superior. Both delegative democracy and ranked voting require complex software and algorithms, and so previously were not practical. But they are uniquely suited to the internet.
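As a sketch of why ranked voting needs software at all, here is a minimal instant-runoff count, one common ranked method (candidate names and ballots in the example are invented):

```python
from collections import Counter

def irv_winner(ballots):
    """Instant-runoff voting: count each ballot for its highest-ranked
    surviving candidate; if no one has a majority, eliminate the
    last-place candidate and recount with those ballots transferred."""
    remaining = {candidate for ballot in ballots for candidate in ballot}
    while True:
        tally = Counter({candidate: 0 for candidate in remaining})
        for ballot in ballots:
            for choice in ballot:
                if choice in remaining:
                    tally[choice] += 1
                    break
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()) or len(remaining) == 1:
            return leader
        remaining.discard(min(tally, key=tally.get))
```

With ballots of 4 × [A, C, B], 3 × [B, C, A] and 2 × [C, B, A], plurality would elect A, but once C is eliminated those two ballots transfer and B wins with 5 of 9 votes — the kind of transfer bookkeeping that makes such systems impractical without computers.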

Although today there are only a handful of efforts at internet democracy, I believe that smartphone-ready digital IDs will eventually usher in a “Cambrian explosion” of democratic forms. The resulting internet democracy will be far superior to its offline counterpart. Imagine a Facebook-like community that encompasses all of humanity. We may call it “united humanity”, as it will unite people, not nations. It will win hearts and minds by offering people the prospect of genuine participation, both locally and globally, in the democratic process….(More)