Who Can You Trust? How Technology Brought Us Together and Why It Might Drive Us Apart


Book by Rachel Botsman: “If you can’t trust those in charge, who can you trust? From government to business, banks to media, trust in institutions is at an all-time low. But this isn’t the age of distrust–far from it.

In this revolutionary book, world-renowned trust expert Rachel Botsman reveals that we are at the tipping point of one of the biggest social transformations in human history–with fundamental consequences for everyone. A new world order is emerging: we might have lost faith in institutions and leaders, but millions of people rent their homes to total strangers, exchange digital currencies, or find themselves trusting a bot. This is the age of “distributed trust,” a paradigm shift driven by innovative technologies that are rewriting the rules of an all-too-human relationship.
If we are to benefit from this radical shift, we must understand the mechanics of how trust is built, managed, lost, and repaired in the digital age. In the first book to explain this new world, Botsman provides a detailed map of this uncharted landscape–and explores what’s next for humanity”

Business Models For Sustainable Research Data Repositories


OECD Report: “In 2007, the OECD Principles and Guidelines for Access to Research Data from Public Funding were published and in the intervening period there has been an increasing emphasis on open science. At the same time, the quantity and breadth of research data have massively expanded. So-called “Big Data” is no longer limited to areas such as particle physics and astronomy, but is ubiquitous across almost all fields of research. This is generating exciting new opportunities, but also challenges.

The promise of open research data is that they will not only accelerate scientific discovery and improve reproducibility, but they will also speed up innovation and improve citizen engagement with research. In short, they will benefit society as a whole. However, for the benefits of open science and open research data to be realised, these data need to be carefully and sustainably managed so that they can be understood and used by both present and future generations of researchers.

Data repositories – based in local and national research institutions and international bodies – are where the long-term stewardship of research data takes place and hence they are the foundation of open science. Yet good data stewardship is costly and research budgets are limited. So, the development of sustainable business models for research data repositories needs to be a high priority in all countries. Surprisingly, perhaps, little systematic analysis has been done on income streams, costs, value propositions, and business models for data repositories, and that is the gap this report attempts to address, from a science policy perspective…

This project was designed to take up the challenge and to contribute to a better understanding of how research data repositories are funded, and what developments are occurring in their funding. Central questions included:

  • How are data repositories currently funded, and what are the key revenue sources?
  • What innovative revenue sources are available to data repositories?
  • How do revenue sources fit together into sustainable business models?
  • What incentives for, and means of, optimising costs are available?
  • What revenue sources and business models are most acceptable to key stakeholders?…(More)”

Understanding Design Thinking, Lean, and Agile


Free ebook by Jonny Schneider: “Highly touted methodologies, such as Agile, Lean, and Design Thinking, leave many organizations bamboozled by an unprecedented array of processes, tools, and methods for digital product development. Many teams run into trouble trying to make sense of these options. How do the methods fit together to achieve the right outcome? What’s the best approach for your circumstances?

In this insightful report, Jonny Schneider from ThoughtWorks shows you how to diagnose your situation, understand where you need more insight to move forward, and then choose from a range of tactics that can move your team closer to clarity.

Blindly applying any model, framework, or method seldom delivers the desired result. Agile began as a better answer for delivering software. Lean focuses on product success. And Design Thinking is an approach for exploring opportunities and problems to solve. This report shows you how to evaluate your situation before committing to one, two, or all three of these techniques.

  • Understand how design thinking, the lean movement, and agile software development can make a difference
  • Define your beliefs and assumptions as well as your strategy
  • Diagnose the current condition and explore possible futures
  • Decide what to learn, and how to learn it, through fast research and experimentation
  • Decentralize decisions with purpose-driven, collaborative teams
  • Prioritize and measure value by responding to customer demand…(More)”

Victims of Sexual Harassment Have a New Resource: AI


MIT Technology Review (The Download): “If you have ever dealt with sexual harassment in the workplace, there is now a private online place for you to go for help. Botler AI, a startup based in Montreal, on Wednesday launched a system that provides free information and guidance to those who have been sexually harassed and are unsure of their legal rights.

Using deep learning, the AI system was trained on more than 300,000 U.S. and Canadian criminal court documents, including over 57,000 documents and complaints related to sexual harassment. Using this information, the software predicts whether the situation explained by the user qualifies as sexual harassment, and notes which laws may have been violated under the criminal code. It then generates an incident report that the user can hand over to relevant authorities….

The tool starts by asking simple questions that can guide the software, like what state you live in and when the incident occurred. Then, you explain your situation in plain language. The software then creates a report based on that account and what it has learned from the court cases on which it was trained.
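
The article does not detail Botler AI’s architecture beyond “deep learning trained on court documents,” so the sketch below is only a rough, hypothetical illustration of the general pattern it describes: a text classifier scores a free-text account against past cases, and the score is folded into an incident report along with the intake answers. The training examples, the `draft_report` helper, and the choice of TF-IDF plus logistic regression (rather than deep learning) are all assumptions made for the example, not Botler’s actual system.

```python
# Minimal sketch of the general pattern described above: classify a free-text
# incident description and fold the result into an incident report. This is NOT
# Botler AI's system; the labeled narratives, the draft_report helper, and the
# model choice (TF-IDF + logistic regression instead of deep learning) are
# illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: case narratives labeled 1 if the facts supported
# a harassment-related claim, 0 otherwise.
labeled_narratives = [
    ("supervisor made repeated unwanted sexual comments after complaints", 1),
    ("manager denied a promotion after the employee refused his advances", 1),
    ("coworkers disagreed about scheduling and vacation allocation", 0),
    ("employee was reprimanded for repeated late arrivals", 0),
]
texts, labels = zip(*labeled_narratives)

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(texts, labels)

def draft_report(state: str, date: str, narrative: str) -> dict:
    """Assemble a simple incident report from intake answers plus the model's score."""
    probability = classifier.predict_proba([narrative])[0][1]
    return {
        "jurisdiction": state,
        "incident_date": date,
        "narrative": narrative,
        # how closely the account resembles past harassment cases in the training set
        "estimated_match": round(float(probability), 2),
    }

print(draft_report("NY", "2017-11-01",
                   "my manager kept sending explicit messages after I asked him to stop"))
```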

The company’s ultimate goal is to provide free legal tools to help with a multitude of issues, not just sexual harassment. In this, Botler isn’t alone—a similar company called DoNotPay started as an automated way to fight parking tickets but has since expanded massively (see “This Chatbot Will Help You Sue Anyone”)….(More)”.

Factors Influencing Decisions about Crowdsourcing in the Public Sector: A Literature Review


Paper by Regina Lenart‑Gansiniec: “Crowdsourcing is a relatively new notion, nonetheless attracting more and more interest from researchers. In short, it means selecting functions that have until now been performed by employees and transferring them, in the form of an open on‑line call, to an undefined virtual community. In economic practice it has become a megatrend, which drives innovations, collaboration in the field of scientific research, business, or society. It is being adopted by more and more organisations, for instance in view of its potential business value (Rouse 2010; Whitla 2009).

The first paper dedicated to crowdsourcing appeared relatively recently, in 2006, thanks to J. Howe’s article entitled “The Rise of Crowdsourcing”. Although crowdsourcing is increasingly the subject of scientific research, one may note many ambiguities in the literature, which result from the proliferation of various research approaches and perspectives. This may therefore lead to many misunderstandings (Hopkins, 2011). It especially concerns the key aspects and factors which have an impact on organisations’ decisions about crowdsourcing, particularly in the public sector.

The aim of this article is to identify the factors that influence decisions about implementing crowdsourcing in the activity of public organisations, in particular municipal offices in Poland. The article is of a theoretical and review nature. To answer this question, a literature review was conducted and an analysis of crowdsourcing initiatives used by self‑government units in Poland was made….(More)”.

Crowdsourcing Accurately and Robustly Predicts Supreme Court Decisions


Paper by Katz, Daniel Martin and Bommarito, Michael James and Blackman, Josh: “Scholars have increasingly investigated “crowdsourcing” as an alternative to expert-based judgment or purely data-driven approaches to predicting the future. Under certain conditions, scholars have found that crowdsourcing can outperform these other approaches. However, despite interest in the topic and a series of successful use cases, relatively few studies have applied empirical model thinking to evaluate the accuracy and robustness of crowdsourcing in real-world contexts.

In this paper, we offer three novel contributions. First, we explore a dataset of over 600,000 predictions from over 7,000 participants in a multi-year tournament to predict the decisions of the Supreme Court of the United States. Second, we develop a comprehensive crowd construction framework that allows for the formal description and application of crowdsourcing to real-world data. Third, we apply this framework to our data to construct more than 275,000 crowd models. We find that in out-of-sample historical simulations, crowdsourcing robustly outperforms the commonly-accepted null model, yielding the highest-known performance for this context at 80.8% case level accuracy. To our knowledge, this dataset and analysis represent one of the largest explorations of recurring human prediction to date, and our results provide additional empirical support for the use of crowdsourcing as a prediction method….(More)”.
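
The paper’s crowd construction framework is not reproduced here, but the basic intuition behind comparing crowds to a null model can be sketched in a few lines. The toy example below aggregates individual forecasts by simple majority vote and compares the result against a baseline that always predicts “reverse” (a commonly used null model for Supreme Court prediction). The case data, vote counts, and tie-breaking rule are made up for illustration and are not the authors’ actual method.

```python
# Toy illustration of crowd aggregation versus a null model, in the spirit of the
# paper (not its actual framework). Predictions are 1 = "reverse", 0 = "affirm";
# the null model always predicts "reverse". All data below are invented.
from collections import Counter

cases = {
    "case_a": {"outcome": 1, "votes": [1, 1, 0, 1, 1]},
    "case_b": {"outcome": 0, "votes": [0, 1, 0, 0, 1]},
    "case_c": {"outcome": 1, "votes": [1, 0, 1, 1, 0]},
}

def majority_vote(votes):
    """Return the most common prediction among crowd members (ties go to 'reverse')."""
    counts = Counter(votes)
    return 1 if counts[1] >= counts[0] else 0

def accuracy(predict):
    """Fraction of cases where the given prediction rule matches the real outcome."""
    correct = sum(predict(case) == case["outcome"] for case in cases.values())
    return correct / len(cases)

crowd_accuracy = accuracy(lambda case: majority_vote(case["votes"]))
null_accuracy = accuracy(lambda case: 1)  # always predict "reverse"

print(f"crowd: {crowd_accuracy:.2f}, null: {null_accuracy:.2f}")
```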

7 lessons learned from $5 million in open innovation prizes


Sara Holoubek in the Lab Report: “Prize competitions have long been used to accelerate innovation. In the 18th century, Britain offered a significant prize purse for advancements in seafaring navigation, and Napoleon’s investment in a competition led to innovation in food preservation. More recently, DARPA’s Grand Challenge ignited a decade of progress in autonomous vehicle technology.

Challenges are considered a branch of “open innovation,” an idea that has been around for decades but became more popular after the University of California’s Henry Chesbrough published a book on the topic in 2003. Chesbrough describes open innovation as “a paradigm that assumes that firms can and should use external ideas as well as internal ideas, and internal and external paths to market, as the firms look to advance their technology.”…Here’s what we’ve learned…:

1. It’s a long game.

Clients get more out of open innovation when they reject a “one and done” mentality, opting instead to build an open innovation competency, socialize best practices across the broader organization, and determine the best moments to push the innovation envelope. …

2. Start with problem statement definition.

If a company isn’t in agreement on the problem to be solved, its challenge won’t be successful. …

3. Know what would constitute a “big win.”

Many of our clients are tasked with balancing near-term expectations while navigating what it will take for the organization to thrive in the long term. Rather than meeting in the middle, we ask what would constitute a “big win.” …

4. Invest in challenge design.

The market is flooded with platforms that aim to democratize challenges — and better access to tools is great. But in the absence of challenge design, a competition run on the best platform will fail. ….

5. Understand what it takes to close the gap between concept and viability.

…Solvers often tell us this “virtual accelerator” period — which includes education and exercises in empathy-building, subject matter knowledge, rapid prototyping, and business modeling — is of more value to their teams than prize money.

6. Hug the lawyers — as early as possible.

… Faced with unique constraints, we encourage clients to engage counsel early in the process. …

7. Really, really good marketing is essential.

A key selling point for challenge platforms is the size of their database. Some even monetize “communities.” …(More)”

The Wikipedia competitor that’s harnessing blockchain for epistemological supremacy


Peter Rubin at Wired: “At the time of this writing, the opening sentence of Larry Sanger’s Everipedia entry is pretty close to his Wikipedia entry. It describes him as “an American Internet project developer … best known as co-founder of Wikipedia.” By the time you read this, however, it may well mention a new, more salient fact—that Sanger recently became the Chief Information Officer of Everipedia itself, a site that seeks to become a better version of the online encyclopedia than the one he founded back in 2001. To do that, Sanger’s new employer is trying something that no other player in the space has done: moving to a blockchain.

Oh, blockchain, that decentralized “global ledger” that provides the framework for cryptocurrencies like Bitcoin (as well as a thousand explainer videos, and seemingly a thousand startups’ business plans). Blockchain already stands to make medical patient data easier to move and improve food safety; now, Everipedia’s founders hope, it will allow for a more powerful, accountable encyclopedia.

Here’s how it’ll work. Everipedia already uses a points system where creating articles and approved edits amasses “IQ.” In January, when the site moves over to a blockchain, Everipedia will convert IQ scores to a token-based currency, giving all existing editors an allotment proportionate to their IQ—and giving them a real, financial stake in Everipedia. From then on, creating and curating articles will allow users to earn tokens, which act as virtual shares of the platform. To prevent bad actors from trying to cash in with ill-founded or deliberately false articles and edits, Everipedia will force users to put up a token of their own in order to submit. If their work is accepted, they get their token back, plus a little bit for their contribution; if not, they lose their token. The assumption is that other users, motivated by the desire to maintain the site’s value, will actively seek to prevent such efforts….
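
Wired describes the incentive design only at a high level, so the snippet below is a deliberately simplified, off-chain sketch of the stake-and-review mechanics rather than Everipedia’s actual token contract; the stake size, reward rate, account names, and function names are all assumptions made for illustration.

```python
# Minimal sketch of the stake-to-submit incentive described above. This is not
# Everipedia's implementation (which would live on a blockchain); the stake size
# and reward rate are made-up parameters used only to illustrate the mechanics.
STAKE = 1.0        # tokens a contributor must lock to submit an edit
REWARD_RATE = 0.1  # bonus paid on top of the returned stake if the edit is accepted

balances = {"alice": 10.0, "bob": 10.0}
pending = {}  # edit_id -> (author, staked amount)

def submit_edit(author: str, edit_id: str) -> None:
    """Lock the author's stake while the edit awaits review."""
    if balances[author] < STAKE:
        raise ValueError("insufficient tokens to stake")
    balances[author] -= STAKE
    pending[edit_id] = (author, STAKE)

def review_edit(edit_id: str, accepted: bool) -> None:
    """Return the stake plus a reward if accepted; forfeit it otherwise."""
    author, stake = pending.pop(edit_id)
    if accepted:
        balances[author] += stake * (1 + REWARD_RATE)
    # if rejected, the stake is simply not returned (forfeited)

submit_edit("alice", "edit-1")   # good-faith contribution
submit_edit("bob", "edit-2")     # low-quality contribution
review_edit("edit-1", accepted=True)
review_edit("edit-2", accepted=False)
print(balances)  # {'alice': 10.1, 'bob': 9.0}
```

The design choice the article highlights is the asymmetry: an accepted contribution earns slightly more than it risked, while a rejected one forfeits its stake, which is what is meant to deter ill-founded or deliberately false articles and edits.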

This isn’t the first time a company has proposed a decentralized blockchain-based encyclopedia; earlier this year, a company called Lunyr announced similar plans. However, judging from Lunyr’s most recent roadmap, Everipedia will beat it to market with room to spare….(More)”.

There’s more to evidence-based policies than data: why it matters for healthcare


At The Conversation: “The big question is: how can countries strengthen their health systems to deliver accessible, affordable and equitable care when they are often under-financed and governed in complex ways?

One answer lies in governments developing policies and programmes that are informed by evidence of what works or doesn’t. This should include what we would call “traditional data”, but should also include a broader definition of evidence. This would mean including, for example, information from citizens and stakeholders as well as programme evaluations. In this way, policies can be made more relevant for the people they affect.

Globally there is an increasing appreciation for this sort of policymaking that relies on a broader definition of evidence. Countries such as South Africa, Ghana and Thailand provide good examples.

What is evidence?

Using evidence to inform the development of health care has grown out of the use of science to make the best decisions. It is based on data being collected in a methodical way. This approach is useful but it can’t always be neatly applied to policymaking. There are several reasons for this.

The first is that there are many different types of evidence. Evidence is more than data, even though the terms are often used to mean the same thing. For example, there is statistical and administrative data, research evidence, citizen and stakeholder information as well as programme evaluations.

The challenge is that some of these are valued more than others. More often than not, statistical data is more valued in policymaking. But both researchers and policymakers must acknowledge that for policies to be sound and comprehensive, different phases of the policymaking process require different types of evidence.

Secondly, data-as-evidence is only one input into policymaking. Policymakers face a long list of pressures they must respond to, including time, resources, political obligations and unplanned events.

Researchers may push technically excellent solutions designed in research environments. But policymakers may have other priorities in mind: are the solutions being put to them practical and affordable? Policymakers also face the limitations of having to balance various constituents while straddling the constraints of the bureaucracies they work in.

Researchers must recognise that policymakers themselves are a source of evidence of what works or doesn’t. They are able to draw on their own experiences, those of their constituents, history and their contextual knowledge of the terrain.

What this boils down to is that for policies that are based on evidence to be effective, fewer ‘push/pull’ models of evidence need to be used. Instead, models where evidence is jointly fashioned should be employed.

This means that policymakers, researchers and other key actors (like health managers or communities) must come together as soon as a problem is identified. They must first understand each other’s ideas of evidence and come to a joint conclusion of what evidence would be appropriate for the solution.

In South Africa, for example, the Department of Environmental Affairs has developed a four-phase process to policymaking. In the first phase, researchers and policymakers come together to set the agenda and agree on the needed solution. Their joint decision is then reviewed before research is undertaken and interpreted together….(More)”.

Big data in social and psychological science: theoretical and methodological issues


Paper by Lin Qiu, Sarah Hian May Chan and David Chan in the Journal of Computational Social Science: “Big data presents unprecedented opportunities to understand human behavior on a large scale. It has been increasingly used in social and psychological research to reveal individual differences and group dynamics. There are a few theoretical and methodological challenges in big data research that require attention. In this paper, we highlight four issues, namely data-driven versus theory-driven approaches, measurement validity, multi-level longitudinal analysis, and data integration. They represent common problems that social scientists often face in using big data. We present examples of these problems and propose possible solutions….(More)”.