For Crowdsourcing to Work, Everyone Needs an Equal Voice


Joshua Becker and Edward “Ned” Smith in Harvard Business Review: “How useful is the wisdom of crowds? For years, it has been recognized as producing incredibly accurate predictions by aggregating the opinions of many people, allowing even amateur forecasters to beat the experts. The belief is that when large numbers of people make forecasts independently, their errors are uncorrelated and ultimately cancel each other out, which leads to more accurate final answers.

However, researchers and pundits have argued that the wisdom of crowds is extremely fragile, especially in two specific circumstances: when people are influenced by the opinions of others (because they lose their independence) and when opinions are distorted by cognitive biases (for example, strong political views held by a group).

In new research, we and our colleagues zeroed in on these assumptions and found that the wisdom of crowds is more robust than previously thought — it can even withstand the groupthink of similar-minded people. But there’s one important caveat: In order for the wisdom of crowds to retain its accuracy for making predictions, every member of the group must be given an equal voice, without any one person dominating. As we discovered, the pattern of social influence within groups — that is, who talks to whom and when — is the key determinant of the crowd’s accuracy in making predictions….(More)”.
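The error-cancellation mechanism described above can be sketched in a few lines of Python (the numbers here are illustrative assumptions, not figures from the study):

```python
import random
import statistics

random.seed(42)

TRUTH = 100.0  # the quantity being forecast
N = 1000       # number of independent forecasters

# Independent estimates: truth plus zero-mean noise.
estimates = [TRUTH + random.uniform(-20, 20) for _ in range(N)]

# Equal voice: a simple unweighted mean lets the errors cancel.
equal_voice = statistics.mean(estimates)

# Typical individual error, for comparison.
individual_error = statistics.mean(abs(e - TRUTH) for e in estimates)

# One dominant voice: give a single member half the total weight.
# The aggregate now inherits that member's idiosyncratic error
# instead of averaging it away.
dominant_voice = 0.5 * estimates[0] + 0.5 * equal_voice

print(round(abs(equal_voice - TRUTH), 2))  # small: errors cancel
print(round(individual_error, 2))          # much larger on average
```

The dominant-voice aggregate illustrates the caveat: because it leans on one member's noise, its expected error stays close to half that member's error rather than shrinking as the crowd grows.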

The Data Protection Officer Handbook


Handbook by Douwe Korff and Marie Georges: “This Handbook was prepared for and is used in the EU-funded “T4DATA” training‐of-trainers programme. Part I explains the history and development of European data protection law and provides an overview of European data protection instruments including the Council of Europe Convention and its “Modernisation” and the various EU data protection instruments relating to Justice and Home Affairs, the CFSP and the EU institutions, before focusing on the GDPR in Part II. The final part (Part III) consists of detailed practical advice on the various tasks of the Data Protection Officer now institutionalised by the GDPR. Although produced for the T4DATA programme that focuses on DPOs in the public sector, it is hoped that the Handbook will also be useful to anyone else interested in the application of the GDPR, including DPOs in the private sector….(More)”.

Trust and Mistrust in Americans’ Views of Scientific Experts


Report by the Pew Research Center: “In an era when science and politics often appear to collide, public confidence in scientists is on the upswing, and six-in-ten Americans say scientists should play an active role in policy debates about scientific issues, according to a new Pew Research Center survey.

The survey finds public confidence in scientists on par with confidence in the military. It also exceeds the levels of public confidence in other groups and institutions, including the media, business leaders and elected officials.

At the same time, Americans are divided along party lines in terms of how they view the value and objectivity of scientists and their ability to act in the public interest. And, while political divides do not carry over to views of all scientists and scientific issues, there are particularly sizable gaps between Democrats and Republicans when it comes to trust in scientists whose work is related to the environment.

Higher levels of familiarity with the work of scientists are associated with more positive and more trusting views of scientists regarding their competence, credibility and commitment to the public, the survey shows….(More)”.

Guidance Note: Statistical Disclosure Control


Centre for Humanitarian Data: “Survey and needs assessment data, or what is known as ‘microdata’, is essential for providing adequate response to crisis-affected people. However, collecting this information does present risks. Even as great effort is taken to remove unique identifiers such as names and phone numbers from microdata so no individual persons or communities are exposed, combining key variables such as location or ethnicity can still allow for re-identification of individual respondents. Statistical Disclosure Control (SDC) is one method for reducing this risk. 

The Centre has developed a Guidance Note on Statistical Disclosure Control that outlines the steps involved in the SDC process, potential applications for its use, case studies and key actions for humanitarian data practitioners to take when managing sensitive microdata. Along with an overview of what SDC is and what tools are available, the Guidance Note outlines how the Centre is using this process to mitigate risk for datasets shared on HDX. …(More)”.
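The re-identification risk that motivates SDC can be illustrated with a quick k-anonymity check over quasi-identifiers (the records and threshold below are made up for illustration; real SDC work uses dedicated tooling and richer methods):

```python
from collections import Counter

# Hypothetical survey microdata: direct identifiers already removed,
# but location + ethnicity together can still single respondents out.
records = [
    {"location": "District A", "ethnicity": "Group 1", "needs": "food"},
    {"location": "District A", "ethnicity": "Group 1", "needs": "shelter"},
    {"location": "District A", "ethnicity": "Group 2", "needs": "food"},
    {"location": "District B", "ethnicity": "Group 1", "needs": "water"},
]

QUASI_IDENTIFIERS = ("location", "ethnicity")
K = 2  # require every quasi-identifier combination to appear at least K times

def risky_records(rows, quasi_ids, k):
    """Return rows whose quasi-identifier combination occurs fewer than k times."""
    counts = Counter(tuple(r[q] for q in quasi_ids) for r in rows)
    return [r for r in rows if counts[tuple(r[q] for q in quasi_ids)] < k]

at_risk = risky_records(records, QUASI_IDENTIFIERS, K)
print(len(at_risk))  # 2 records fail the k=2 check
```

The two flagged rows are unique on (location, ethnicity) and could be re-identified by anyone who knows those attributes; standard SDC responses include suppressing, recoding, or aggregating those values before sharing the data.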

Innovation Beyond Technology: Science for Society and Interdisciplinary Approaches


Book edited by Sébastien Lechevalier: “The major purpose of this book is to clarify the importance of non-technological factors in innovation to cope with contemporary complex societal issues while critically reconsidering the relations between science, technology, innovation (STI), and society. For a few decades now, innovation—mainly derived from technological advancement—has been considered a driving force of economic and societal development and prosperity.

With that in mind, the following questions are dealt with in this book: What are the non-technological sources of innovation? What can the progress of STI bring to humankind? What roles will society be expected to play in the new model of innovation? The authors argue that the majority of so-called technological innovations are actually socio-technical innovations, requiring huge resources for financing activities, adapting regulations, designing adequate policy frames, and shaping new uses and new users while having the appropriate interaction with society.

This book gathers multi- and trans-disciplinary approaches in innovation that go beyond technology and take into account the inter-relations with social and human phenomena. Illustrated by carefully chosen examples and based on broad and well-informed analyses, it is highly recommended to readers who seek an in-depth and up-to-date integrated overview of innovation in its non-technological dimensions….(More)”.

Bringing machine learning to the masses


Matthew Hutson at Science: “Artificial intelligence (AI) used to be the specialized domain of data scientists and computer programmers. But companies such as Wolfram Research, which makes Mathematica, are trying to democratize the field, so scientists without AI skills can harness the technology for recognizing patterns in big data. In some cases, they don’t need to code at all. Insights are just a drag-and-drop away. One of the latest systems is software called Ludwig, first made open-source by Uber in February and updated last week. Uber used Ludwig for projects such as predicting food delivery times before releasing it publicly. At least a dozen startups are using it, plus big companies such as Apple, IBM, and Nvidia. And scientists: Tobias Boothe, a biologist at the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden, Germany, uses it to visually distinguish thousands of species of flatworms, a difficult task even for experts. To train Ludwig, he just uploads images and labels….(More)”.

De-risking custom technology projects


Paper by Robin Carnahan, Randy Hart, and Waldo Jaquith: “Only 13% of large government software projects are successful. State IT projects, in particular, are often challenged because states lack basic knowledge about modern software development, relying on outdated procurement processes.

State governments are increasingly reliant on modern software and hardware to deliver essential services to the public, and the success of any major policy initiative depends on the success of the underlying software infrastructure. Government agencies all confront similar challenges, facing budget and staffing constraints while struggling to modernize legacy technology systems that are out-of-date, inflexible, expensive, and ineffective. Government officials and agencies often rely on the same legacy processes that led to problems in the first place.

The public deserves a government that provides the same world-class technology they get from the commercial marketplace. Trust in government depends on it.

This handbook is designed for executives, budget specialists, legislators, and other “non-technical” decision-makers who fund or oversee state government technology projects. It can help you set these projects up for success by asking the right questions, identifying the right outcomes, and, equally important, empowering you with a basic knowledge of the fundamental principles of modern software design.

This handbook also gives you the tools you need to start tackling related problems like:

  • The need to use, maintain, and modernize legacy systems simultaneously
  • Lock-in from legacy commercial arrangements
  • Siloed organizations and risk-averse cultures
  • Long budget cycles that don’t always match modern software design practices
  • Security threats
  • Hiring, staffing, and other resource constraints

This is written specifically for procurement of custom software, but it’s important to recognize that commercial off-the-shelf software (COTS) is often customized and Software as a Service (SaaS) often requires custom code. Once any customization is made, the bulk of the advice in this handbook applies to these commercial offerings. (See “Beware the customized commercial software trap” for details.)

As government leaders, we must be good stewards of public money by demanding easy-to-use, cost-effective, sustainable digital tools for use by the public and civil servants. This handbook will help you do just that….(More)”

Exploring Digital Ecosystems: Organizational and Human Challenges


Proceedings edited by Alessandra Lazazzara, Francesca Ricciardi and Stefano Za: “The recent surge of interest in digital ecosystems is not only transforming the business landscape, but also poses several human and organizational challenges. Due to the pervasive effects of the transformation on firms and societies alike, both scholars and practitioners are interested in understanding the key mechanisms behind digital ecosystems, their emergence and evolution. In order to disentangle such factors, this book presents a collection of research papers focusing on the relationship between technologies (e.g. digital platforms, AI, infrastructure) and behaviours (e.g. digital learning, knowledge sharing, decision-making). Moreover, it provides critical insights into how digital ecosystems can shape value creation and benefit various stakeholders. The plurality of perspectives offered makes the book particularly relevant for users, companies, scientists and governments. The content is based on a selection of the best papers – original double-blind peer-reviewed contributions – presented at the annual conference of the Italian chapter of the AIS, which took place in Pavia, Italy in October 2018….(More)”.

What can the labor flow of 500 million people on LinkedIn tell us about the structure of the global economy?


Paper by Jaehyuk Park et al: “…One of the most popular concepts for policy makers and business economists seeking to understand the structure of the global economy is the “cluster”: the geographical agglomeration of interconnected firms, such as Silicon Valley, Wall Street, and Hollywood. By studying those well-known clusters, we come to understand the advantage firms gain from participating in a geo-industrial cluster and how it relates to the economic growth of a region. 

However, the existing definition of a geo-industrial cluster is not systematic enough to reveal the whole picture of the global economy. Often, once defined as a group of firms in a certain area, geo-industrial clusters are treated as independent of one another. But just as we must consider the interaction between the accounting team and the marketing team to understand the organizational structure of a firm, the relationships among geo-industrial clusters are an essential part of the whole picture….

In this new study, my colleagues and I at Indiana University — with support from LinkedIn — have finally overcome these limitations by defining geo-industrial clusters through labor flow and constructing a global labor flow network from LinkedIn’s individual-level job history dataset. Our access to this data was made possible by our selection as one of 11 teams participating in the LinkedIn Economic Graph Challenge.

The transitioning of workers between jobs and firms — also known as labor flow — is considered central in driving firms towards geo-industrial clusters due to knowledge spillover and labor market pooling. In response, we mapped the cluster structure of the world economy based on labor mobility between firms during the last 25 years, constructing a “labor flow network.” 

To do this, we leverage LinkedIn’s data on professional demographics and employment histories from more than 500 million people between 1990 and 2015. The network, which captures approximately 130 million job transitions between more than 4 million firms, is the first-ever flow network of global labor.
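The construction step can be sketched as follows (toy data standing in for the LinkedIn job histories; the study then applies network community detection to the resulting weighted graph to find the clusters):

```python
from collections import Counter

# Hypothetical job histories: each person's employers in chronological order.
job_histories = [
    ["FirmA", "FirmB", "FirmC"],
    ["FirmA", "FirmB"],
    ["FirmB", "FirmC"],
    ["FirmD", "FirmA", "FirmB"],
]

# Each consecutive pair of employers is one job transition; edge weights
# count how many workers flowed between each ordered pair of firms.
flows = Counter()
for history in job_histories:
    for src, dst in zip(history, history[1:]):
        flows[(src, dst)] += 1

print(flows[("FirmA", "FirmB")])  # 3 workers moved from FirmA to FirmB
print(sum(flows.values()))        # 6 job transitions in total
```

Firms that exchange many workers end up densely connected in this graph, so a community-detection algorithm run over the edge weights recovers geo-industrial clusters organically rather than by imposing regional or industry boundaries in advance.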

The resulting “map” allows us to:

  • identify geo-industrial clusters systematically and organically using network community detection
  • verify the importance of region and industry in labor mobility
  • compare the relative importance of the two constraints at different hierarchical levels
  • reveal the practical advantage of the geo-industrial cluster as a unit of future economic analyses
  • show more clearly which industry in which region leads the economic growth of that industry or region
  • find out which skills are emerging and declining, based on their representation in growing and declining geo-industrial clusters…(More)”.

For academics, what matters more: journal prestige or readership?


Katie Langin at Science: “With more than 30,000 academic journals now in circulation, academics can have a hard time figuring out where to submit their work for publication. The decision is made all the more difficult by the sky-high pressure of today’s academic environment—including working toward tenure and trying to secure funding, which can depend on a researcher’s publication record. So, what does a researcher prioritize?

According to a new study posted on the bioRxiv preprint server, faculty members say they care most about whether the journal is read by the people they most want to reach—but they think their colleagues care most about journal prestige. Perhaps unsurprisingly, prestige also held more sway for untenured faculty members than for their tenured colleagues.

“I think that it is about the security that comes with being later in your career,” says study co-author Juan Pablo Alperin, an assistant professor in the publishing program at Simon Fraser University in Vancouver, Canada. “It means you can stop worrying so much about the specifics of what is being valued; there’s a lot less at stake.”

According to a different preprint that Alperin and his colleagues posted on PeerJ in April, 40% of research-intensive universities in the United States and Canada explicitly mention that journal impact factors can be considered in promotion and tenure decisions. Many more likely do so unofficially, with faculty members using journal names on a CV as a kind of shorthand for how “good” a candidate’s publication record is. “You can’t ignore the fact that journal impact factor is a reality that gets looked at,” Alperin says. But some argue that journal prestige and impact factor are overemphasized and harm science, and that academics should focus on the quality of individual work rather than journal-wide metrics. 

In the new study, only 31% of the 338 faculty members who were surveyed—all from U.S. and Canadian institutions and from a variety of disciplines, including 38% in the life and physical sciences and math—said that journal prestige was “very important” to them when deciding where to submit a manuscript. The highest priority was journal readership, which half said was very important. Fewer respondents felt that publication costs (24%) and open access (10%) deserved the highest importance rating.

But, when those same faculty members were asked to assess how their colleagues make the same decision, journal prestige shot to the top of the list, with 43% of faculty members saying that it was very important to their peers when deciding where to submit a manuscript. Only 30% of faculty members thought the same thing about journal readership—a drop of 20 percentage points compared with how faculty members assessed their own motivations….(More)”.