Stefaan Verhulst
New set of case studies by The GovLab: “Government at all levels — federal, state and local — collects and processes troves of data in order to administer public programs, fulfill regulatory mandates or conduct research. This government-held data, which often contains personally identifiable information about the individuals government serves, is known as “administrative data,” and it can be analyzed to evaluate and improve how government and the social sector deliver services.
For example, the Social Security Administration (SSA) collects and manages data on social welfare and disability benefit payments for nearly the entire US population, as well as data such as individual lifetime records of wages and self-employment earnings. The SSA uses this administrative data for, among other things, analysis of policy interventions and to develop models that project demographic and economic characteristics of the population. State governments collect computerized hospital discharge data for both government (Medicare and Medicaid) and commercial payers, while the Department of Justice (through the Bureau of Justice Statistics) collects prison admission and release data to monitor correctional populations and to address several policy questions, including those on recidivism and prisoner reentry.
Though they have long collected data, increasingly in digital form, government agencies have struggled to create the infrastructure and acquire the skills needed to make use of this administrative data to realize the promise of evidence-based policymaking.
The goal of this collection of eight case studies is to look at how governments are beginning to “get smarter” about using their own data. By comparing the ways in which they have chosen to collaborate with researchers and to make often sensitive data usable to government employees and researchers in ethical and responsible ways, we hope to increase our understanding of what is required to make better use of administrative data, including the governance structures, technology infrastructure and key personnel, and thereby to enable other public institutions to do the same. What follows is a summary of the learnings from those case studies. We start with an articulation of the value proposition for greater use of administrative data, followed by the key learnings and the case studies themselves….(More)”
Read the case studies here.
Book by Shannon Mattern: “For years, pundits have trumpeted the earth-shattering changes that big data and smart networks will soon bring to our cities. But what if cities have long been built for intelligence, maybe for millennia? In Code and Clay, Data and Dirt Shannon Mattern advances the provocative argument that our urban spaces have been “smart” and mediated for thousands of years.
Offering powerful new ways of thinking about our cities, Code and Clay, Data and Dirt goes far beyond the standard historical concepts of origins, development, revolutions, and the accomplishments of an elite few. Mattern shows that in their architecture, laws, street layouts, and civic knowledge—and through technologies including the telephone, telegraph, radio, printing, writing, and even the human voice—cities have long negotiated a rich exchange between analog and digital, code and clay, data and dirt, ether and ore.
Mattern’s vivid prose takes readers through a historically and geographically broad range of stories, scenes, and locations, synthesizing a new narrative for our urban spaces. Taking media archaeology to the city’s streets, Code and Clay, Data and Dirt reveals new ways to write our urban, media, and cultural histories….(More)”.
Book by Rachel Botsman: “If you can’t trust those in charge, who can you trust? From government to business, banks to media, trust in institutions is at an all-time low. But this isn’t the age of distrust–far from it.
OECD Report: “In 2007, the OECD Principles and Guidelines for Access to Research Data from Public Funding were published and in the intervening period there has been an increasing emphasis on open science. At the same time, the quantity and breadth of research data has massively expanded. So-called “Big Data” is no longer limited to areas such as particle physics and astronomy, but is ubiquitous across almost all fields of research. This is generating exciting new opportunities, but also challenges.
The promise of open research data is that they will not only accelerate scientific discovery and improve reproducibility, but they will also speed up innovation and improve citizen engagement with research. In short, they will benefit society as a whole. However, for the benefits of open science and open research data to be realised, these data need to be carefully and sustainably managed so that they can be understood and used by both present and future generations of researchers.
Data repositories – based in local and national research institutions and international bodies – are where the long-term stewardship of research data takes place, and hence they are the foundation of open science. Yet good data stewardship is costly and research budgets are limited. So, the development of sustainable business models for research data repositories needs to be a high priority in all countries. Surprisingly, perhaps, little systematic analysis has been done on income streams, costs, value propositions, and business models for data repositories, and that is the gap this report attempts to address, from a science policy perspective…
This project was designed to take up the challenge and to contribute to a better understanding of how research data repositories are funded, and what developments are occurring in their funding. Central questions included:
- How are data repositories currently funded, and what are the key revenue sources?
- What innovative revenue sources are available to data repositories?
- How do revenue sources fit together into sustainable business models?
- What incentives for, and means of, optimising costs are available?
- What revenue sources and business models are most acceptable to key stakeholders?…(More)”
Free ebook by Jonny Schneider: “Highly touted methodologies, such as Agile, Lean, and Design Thinking, leave many organizations bamboozled by an unprecedented array of processes, tools, and methods for digital product development. Many teams struggle to make sense of these options. How do the methods fit together to achieve the right outcome? What’s the best approach for your circumstances?
In this insightful report, Jonny Schneider from ThoughtWorks shows you how to diagnose your situation, understand where you need more insight to move forward, and then choose from a range of tactics that can move your team closer to clarity.
Blindly applying any model, framework, or method seldom delivers the desired result. Agile began as a better answer for delivering software. Lean focuses on product success. And Design Thinking is an approach for exploring opportunities and problems to solve. This report shows you how to evaluate your situation before committing to one, two, or all three of these techniques.
- Understand how design thinking, the lean movement, and agile software development can make a difference
- Define your beliefs and assumptions as well as your strategy
- Diagnose the current condition and explore possible futures
- Decide what to learn, and how to learn it, through fast research and experimentation
- Decentralize decisions with purpose-driven, collaborative teams
- Prioritize and measure value by responding to customer demand…(More)”
MIT Technology Review (The Download): “If you have ever dealt with sexual harassment in the workplace, there is now a private online place for you to go for help. Botler AI, a startup based in Montreal, on Wednesday launched a system that provides free information and guidance to those who have been sexually harassed and are unsure of their legal rights.
The AI system was trained, using deep learning, on more than 300,000 U.S. and Canadian criminal court documents, including over 57,000 documents and complaints related to sexual harassment. Drawing on this training, the software predicts whether the situation described by the user qualifies as sexual harassment, and notes which laws may have been violated under the criminal code. It then generates an incident report that the user can hand over to relevant authorities….
The tool starts by asking simple questions that can guide the software, like what state you live in and when the incident occurred. Then, you explain your situation in plain language. The software then creates a report based on that account and what it has learned from the court cases on which it was trained.
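To make the workflow concrete, here is a minimal, hypothetical sketch in Python. Botler AI’s real system is a deep-learning model trained on hundreds of thousands of court documents; this toy version swaps in a TF-IDF and logistic-regression classifier, and every example, label, and function name below is invented purely to show the shape of the pipeline: a plain-language account goes in, a predicted assessment and draft incident report come out.

```python
# Toy stand-in for the pipeline described above; not Botler AI's actual model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled accounts (1 = qualifies as harassment, 0 = does not).
train_texts = [
    "my supervisor made repeated unwanted sexual comments at work",
    "my manager kept sending explicit messages after I asked him to stop",
    "a coworker asked to borrow my stapler",
    "my team rescheduled our weekly meeting to Thursday",
]
train_labels = [1, 1, 0, 0]

# TF-IDF features + logistic regression: a simple, classic text classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

def triage(account: str) -> str:
    """Classify a plain-language account and draft a short incident report."""
    prob = model.predict_proba([account])[0][1]  # P(qualifies as harassment)
    verdict = ("may qualify as sexual harassment"
               if prob >= 0.5 else "is unlikely to qualify")
    return (f"Incident report\n"
            f"Account: {account}\n"
            f"Assessment: the described situation {verdict} (score={prob:.2f})")

print(triage("a senior colleague repeatedly made unwanted advances toward me"))
```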
The company’s ultimate goal is to provide free legal tools to help with a multitude of issues, not just sexual harassment. In this, Botler isn’t alone—a similar company called DoNotPay started as an automated way to fight parking tickets but has since expanded massively (see “This Chatbot Will Help You Sue Anyone”)….(More).
Paper by Regina Lenart-Gansiniec: “Crowdsourcing is a relatively new notion, yet one attracting growing interest among researchers. In short, it means selecting functions that have until now been performed by employees and transferring them, in the form of an open online call, to an undefined virtual community. In economic practice it has become a megatrend that drives innovation and collaboration in scientific research, business, and society. More and more organisations are turning to it, for instance in view of its potential business value (Rouse 2010; Whitla 2009).
The first paper dedicated to crowdsourcing appeared relatively recently, in 2006, with J. Howe’s article “The Rise of Crowdsourcing”. Although crowdsourcing is increasingly the subject of scientific research, the literature contains many ambiguities, which result from the proliferation of different research approaches and perspectives and can therefore lead to misunderstandings (Hopkins, 2011). This especially concerns the key aspects and factors that influence organisations’ decisions about crowdsourcing, particularly in public organisations.
The aim of this article is to identify the factors that influence decisions by public organisations, in particular municipal offices in Poland, to implement crowdsourcing in their activity. The article is of a theoretical and review nature. In search of an answer, a literature review was conducted, along with an analysis of crowdsourcing initiatives used by self-government units in Poland….(More)”.
Paper by Katz, Daniel Martin and Bommarito, Michael James and Blackman, Josh: “Scholars have increasingly investigated “crowdsourcing” as an alternative to expert-based judgment or purely data-driven approaches to predicting the future. Under certain conditions, scholars have found that crowdsourcing can outperform these other approaches. However, despite interest in the topic and a series of successful use cases, relatively few studies have applied empirical model thinking to evaluate the accuracy and robustness of crowdsourcing in real-world contexts.
In this paper, we offer three novel contributions. First, we explore a dataset of over 600,000 predictions from over 7,000 participants in a multi-year tournament to predict the decisions of the Supreme Court of the United States. Second, we develop a comprehensive crowd construction framework that allows for the formal description and application of crowdsourcing to real-world data. Third, we apply this framework to our data to construct more than 275,000 crowd models. We find that in out-of-sample historical simulations, crowdsourcing robustly outperforms the commonly accepted null model, yielding the highest-known performance for this context at 80.8% case-level accuracy. To our knowledge, this dataset and analysis represent one of the largest explorations of recurring human prediction to date, and our results provide additional empirical support for the use of crowdsourcing as a prediction method….(More)”.
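As a rough illustration of the kind of “crowd model” the paper evaluates, here is a small hypothetical sketch in Python. The cases, votes, and outcomes below are invented; only the general scheme reflects the paper’s setup — aggregate many binary forecasts by majority vote, then compare accuracy against a null model that always predicts the historically dominant outcome (for the Supreme Court, reversal). The paper’s own framework is considerably richer.

```python
# Invented data for illustration; the paper's framework is far richer.
from collections import Counter

# Participant forecasts per case: "affirm" or "reverse".
predictions = {
    "case_1": ["reverse", "reverse", "affirm"],
    "case_2": ["affirm", "affirm", "reverse", "affirm"],
    "case_3": ["reverse", "affirm", "reverse", "reverse"],
}
outcomes = {"case_1": "reverse", "case_2": "affirm", "case_3": "reverse"}

def majority_vote(votes):
    """Aggregate a crowd's forecasts by taking the most common vote."""
    return Counter(votes).most_common(1)[0][0]

crowd_correct = sum(majority_vote(v) == outcomes[c]
                    for c, v in predictions.items())
# Null model: always predict "reverse", the historically dominant outcome.
null_correct = sum(o == "reverse" for o in outcomes.values())

print(f"crowd accuracy:      {crowd_correct / len(predictions):.2f}")
print(f"null-model accuracy: {null_correct / len(outcomes):.2f}")
```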
Sara Holoubek in the Lab Report: “Prize competitions have long been used to accelerate innovation. In the 18th century, Britain offered a significant prize purse for advancements in seafaring navigation, and Napoleon’s investment in a competition led to innovation in food preservation. More recently, DARPA’s Grand Challenge ignited a decade of progress in autonomous vehicle technology.
Challenges are considered a branch of “open innovation,” an idea that has been around for decades but became more popular after the University of California’s Henry Chesbrough published a book on the topic in 2003. Chesbrough describes open innovation as “a paradigm that assumes that firms can and should use external ideas as well as internal ideas, and internal and external paths to market, as the firms look to advance their technology.”…Here’s what we’ve learned…:
1. It’s a long game.
Clients get more out of open innovation when they reject a “one and done” mentality, opting instead to build an open innovation competency, socialize best practices across the broader organization, and determine the best moments to push the innovation envelope. …
2. Start with problem statement definition.
If a company isn’t in agreement on the problem to be solved, its challenge won’t be successful. …
3. Know what would constitute a “big win.”
Many of our clients are tasked with balancing near-term expectations while navigating what it will take for the organization to thrive in the long term. Rather than meeting in the middle, we ask what would constitute a “big win.” …
4. Invest in challenge design.
The market is flooded with platforms that aim to democratize challenges — and better access to tools is great. But in the absence of challenge design, a competition run on the best platform will fail. ….
5. Understand what it takes to close the gap between concept and viability.
…Solvers often tell us this “virtual accelerator” period — which includes education and exercises in empathy-building, subject matter knowledge, rapid prototyping, and business modeling — is of more value to their teams than prize money.
6. Hug the lawyers — as early as possible.
… Faced with unique constraints, we encourage clients to engage counsel early in the process. …
7. Really, really good marketing is essential.
A key selling point for challenge platforms is the size of their database. Some even monetize “communities.” …(More)”
Peter Rubin at Wired: “At the time of this writing, the opening sentence of Larry Sanger’s Everipedia entry is pretty close to his Wikipedia entry. It describes him as “an American Internet project developer … best known as co-founder of Wikipedia.” By the time you read this, however, it may well mention a new, more salient fact—that Sanger recently became the Chief Information Officer of Everipedia itself, a site that seeks to become a better version of the online encyclopedia than the one he founded back in 2001. To do that, Sanger’s new employer is trying something that no other player in the space has done: moving to a blockchain.
Oh, blockchain, that decentralized “global ledger” that provides the framework for cryptocurrencies like Bitcoin (as well as a thousand explainer videos, and seemingly a thousand startups’ business plans). Blockchain already stands to make medical patient data easier to move and improve food safety; now, Everipedia’s founders hope, it will allow for a more powerful, accountable encyclopedia.
Here’s how it’ll work. Everipedia already uses a points system in which creating articles and making approved edits amasses “IQ.” In January, when the site moves over to a blockchain, Everipedia will convert IQ scores to a token-based currency, giving all existing editors an allotment proportionate to their IQ—and giving them a real, financial stake in Everipedia. From then on, creating and curating articles will allow users to earn tokens, which act as virtual shares of the platform. To prevent bad actors from trying to cash in with ill-founded or deliberately false articles and edits, Everipedia will force users to put up a token of their own in order to submit. If their work is accepted, they get their token back, plus a little extra for their contribution; if not, they lose their token. The assumption is that other users, motivated by the desire to maintain the site’s value, will actively seek to prevent such efforts….
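A hypothetical sketch of that staking mechanic, in Python, purely to illustrate the incentive loop the article describes: the names, the reward size, and the in-memory ledger are all assumptions, and the real system would live on a blockchain rather than in a Python dictionary.

```python
# Illustrative stake-and-forfeit loop; not Everipedia's actual on-chain logic.
REWARD = 0.1  # assumed bonus, in tokens, for an accepted contribution

balances = {"alice": 5.0, "bob": 2.0}  # token holdings per editor
escrow = {}  # submission_id -> (editor, staked amount)

def submit_edit(editor: str, submission_id: str, stake: float = 1.0) -> None:
    """Lock `stake` tokens from the editor's balance while the edit is reviewed."""
    if balances[editor] < stake:
        raise ValueError("insufficient tokens to stake")
    balances[editor] -= stake
    escrow[submission_id] = (editor, stake)

def resolve(submission_id: str, accepted: bool) -> None:
    """Return stake plus reward on acceptance; forfeit the stake on rejection."""
    editor, stake = escrow.pop(submission_id)
    if accepted:
        balances[editor] += stake + REWARD

submit_edit("alice", "edit-42")
resolve("edit-42", accepted=True)   # alice: 5.0 -> 5.1
submit_edit("bob", "edit-43")
resolve("edit-43", accepted=False)  # bob forfeits his stake: 2.0 -> 1.0
print(balances)
```

The forfeit on rejection is what gives the incentive teeth: a careless or deliberately false edit costs the submitter real value, while good contributions are rewarded, which is exactly the dynamic the founders are counting on.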
This isn’t the first time a company has proposed a decentralized blockchain-based encyclopedia; earlier this year, a company called Lunyr announced similar plans. However, judging from Lunyr’s most recent roadmap, Everipedia will beat it to market with room to spare….(More)”.