Using data and design to support people to stay in work


At Civil Service Quarterly: “…Data and digital are fairly understandable concepts in policy-making. But design? Why is it one of the three Ds?

Policy Lab believes that design approaches are particularly suited to complex issues that have multiple causes and for which there is no one, simple answer. Design encourages people to think about the user’s needs (not just the organisation’s needs), brings in different perspectives to innovate new ideas, and then prototypes (mocks them up and tries them out) to iteratively improve ideas until they find one that can be scaled up.

[Image: Segmentation analysis of those who reported being on health-related benefits in the Understanding Society survey]

Policy Lab also recognises that data alone cannot solve policy problems, and has been experimenting with how to combine numerical and more human practices. Data can explain what is happening, while design research methods – such as ethnography, observing people’s behaviours – can explain why things are happening. Data can be used to automate and tailor public services; while design means frontline delivery staff and citizens will actually know about and use them. Data-rich evidence is highly valued by policy-makers; and design can make it understandable and accessible to a wider group of people, opening up policy-making in the process.

The Lab is also experimenting with new data methods.

Data science can be used to look at complex, unstructured data (social media data, for example), in real time. Digital data, such as social media data or internet searches, can reveal how people behave (rather than how they say they behave). It can also look at huge amounts of data far quicker than humans, and find unexpected patterns hidden in the data. Powerful computers can identify trends from historical data and use these to predict what might happen in the future.
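To make the prediction point concrete, here is a deliberately tiny sketch (in Python, with invented figures, unrelated to the Policy Lab project): fit a linear trend to historical yearly counts and extrapolate it forward. Real data science work would of course use far richer models and data.

```python
# Minimal illustration of predicting from historical data: fit a straight-line
# trend to invented yearly claim counts and extrapolate one year ahead.
import numpy as np

years = np.array([2011, 2012, 2013, 2014, 2015, 2016])
claims = np.array([102_000, 98_500, 97_200, 95_800, 94_100, 93_000])  # hypothetical figures

slope, intercept = np.polyfit(years, claims, deg=1)  # least-squares linear fit
forecast_2017 = slope * 2017 + intercept

print(f"estimated trend: {slope:,.0f} claims per year")
print(f"naive 2017 forecast: {forecast_2017:,.0f} claims")
```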

Supporting people in work project

The project took a DDD approach to generating insight and then creating ideas. The team (including the data science organisation Mastodon C and design agency Uscreates) used data science techniques together with ethnography to create a rich picture about what was happening. Then it used design methods to create ideas for digital services with the user in mind, and these were prototyped and tested with users.

The data science confirmed many of the known risk factors, but also revealed some new insights. It told us what was happening at scale, and the ethnography explained why.

  • The data science showed that people were more likely to go onto sickness benefits if they had been in the job a shorter time. The ethnography explained that the relationship with the line manager and a sense of loyalty were key factors in whether someone stayed in work or went onto benefits.
  • The data science showed that women with clinical depression were less likely to go onto sickness benefits than men with the same condition. The ethnography revealed how this played out in real life:
    • For example, Ella [not her real name], a teacher from London, had been battling depression at work for a long time but felt unable to go to her boss about it. She said she was “relieved” when she got cancer, because she could talk to her boss about a physical condition and got time off to deal with both illnesses.
  • The data science also allowed the segmentation of groups of people who said they were on health-related benefits. Firstly, the clustering revealed that two groups had average health ratings, indicating that other non-health-related issues might be driving this. Secondly, it showed that these two groups were very different (one older group of men with previously high pay and working hours; the other of much younger men with previously low pay and working hours). The conclusion was that their motivations and needs to stay in work – and policy interventions – would be different. [An illustrative clustering sketch follows this excerpt.]
  • The ethnography highlighted other issues that were not captured in the data but would be important in designing solutions, such as: a lack of shared information across the system; the need of the general practitioner (GP) to refer patients to other non-health services as well as providing a fit note; and the importance of coaching, confidence-building and planning….(More)”
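By way of editorial illustration of the segmentation technique mentioned above, the sketch below clusters invented claimant records (age, previous pay, previous weekly hours, self-rated health) with k-means in scikit-learn. It is not the Mastodon C analysis and does not use the Understanding Society data.

```python
# Illustrative k-means segmentation on invented claimant records.
# Features and values are made up; this is not the project's actual analysis.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# columns: age, previous annual pay (GBP), previous weekly hours, self-rated health (1-5)
claimants = np.array([
    [58, 42_000, 45, 3],
    [61, 39_000, 40, 3],
    [24,  9_000, 12, 3],
    [27, 11_000, 16, 3],
    [45, 18_000, 30, 1],
    [50, 21_000, 35, 2],
])

scaled = StandardScaler().fit_transform(claimants)  # put features on a comparable scale
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)

for cluster in sorted(set(labels)):
    members = claimants[labels == cluster]
    print(f"cluster {cluster}: mean age {members[:, 0].mean():.0f}, "
          f"mean pay {members[:, 1].mean():,.0f}, mean hours {members[:, 2].mean():.0f}")
```

Cluster profiles of this kind are what let analysts notice, for instance, that two groups with similar health ratings differed sharply in prior pay and hours, and so would need different interventions.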

Conceptualizing Big Social Data


Ekaterina Olshannikova, Thomas Olsson, Jukka Huhtamäki and Hannu Kärkkäinen in the Journal of Big Data: “The popularity of social media and computer-mediated communication has resulted in high-volume and highly semantic data about digital social interactions. This constantly accumulating data has been termed Big Social Data or Social Big Data, and various visions of how to utilize it have been presented. However, as relatively new concepts, there are no solid and commonly agreed definitions of them. We argue that the emerging research field around these concepts would benefit from an understanding of the very substance of the concept and of the different viewpoints on it. With our review of earlier research, we highlight various perspectives on this multi-disciplinary field and point out conceptual gaps, the diversity of perspectives, and the lack of consensus about what Big Social Data means. Based on a detailed analysis of related work and earlier conceptualizations, we propose a synthesized definition of the term, as well as outline the types of data that Big Social Data covers. With this, we aim to foster future research activities around this intriguing, yet untapped type of Big Data…(More)”.


Conceptual map of various BSD/SBD interpretations in the related literature. This illustration depicts four main domains, which were studied by different researchers from various perspectives and intersections of science field/data types….(More)”.

Numbers and the Making of Us: Counting and the Course of Human Cultures


Book by Caleb Everett: “Carved into our past, woven into our present, numbers shape our perceptions of the world and of ourselves much more than we commonly think. Numbers and the Making of Us is a sweeping account of how numbers radically enhanced our species’ cognitive capabilities and sparked a revolution in human culture. Caleb Everett brings new insights in psychology, anthropology, primatology, linguistics, and other disciplines to bear in explaining the myriad human behaviors and modes of thought numbers have made possible, from enabling us to conceptualize time in new ways to facilitating the development of writing, agriculture, and other advances of civilization.

Number concepts are a human invention—a tool, much like the wheel, developed and refined over millennia. Numbers allow us to grasp quantities precisely, but they are not innate. Recent research confirms that most specific quantities are not perceived in the absence of a number system. In fact, without the use of numbers, we cannot precisely grasp quantities greater than three; our minds can only estimate beyond this surprisingly minuscule limit.

Everett examines the various types of numbers that have developed in different societies, showing how most number systems derived from anatomical factors such as the number of fingers on each hand. He details fascinating work with indigenous Amazonians who demonstrate that, unlike language, numbers are not a universal human endowment. Yet without numbers, the world as we know it would not exist….(More)”.

Using GitHub in Government: A Look at a New Collaboration Platform


Justin Longo at the Center for Policy Informatics: “…I became interested in the potential for using GitHub to facilitate collaboration on text documents. This was largely inspired by the 2012 TED Talk by Clay Shirky where he argued that open source programmers could teach us something about how to do open governance:

Somebody put up a tool during the copyright debate last year in the Senate, saying, “It’s strange that Hollywood has more access to Canadian legislators than Canadian citizens do. Why don’t we use GitHub to show them what a citizen-developed bill might look like?” …

For this research, we undertook a census of Canadian government and public servant accounts on GitHub and surveyed those users, supplemented by interviews with key government technology leaders.

This research has now been published in the journal Canadian Public Administration. (If you don’t have access to the full document through the publisher, you can also find it here).

Despite the growing enthusiasm for GitHub (mostly from those familiar with open source software development), and the general rhetoric in favour of collaboration, we suspected that getting GitHub used in public sector organizations for text collaboration might be an uphill battle – not least because of the steep learning curve involved in using GitHub, and its inflexibility when used to edit text.

The history of computer-supported collaborative work platforms is littered with really cool interfaces that failed to appeal to users. The experience to date with GitHub in Canadian governments reflects this, as far as our research shows.

We found that few government agencies have an active presence on GitHub, compared with their social media presence in general. And while federal departments and public servants on GitHub are rare, provincial, territorial, First Nations and local governments are even rarer.

For individual accounts held by public servants, most were found in the federal government at higher rates than those found in broader society (see Mapping Collaborative Software). Within this small community, the distribution of contributions per user follows the classic long-tail distribution with a small number of contributors responsible for most of the work, a larger number of contributors doing very little on average, and many users contributing nothing.
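To show what “long-tail” means in practice, here is a toy calculation on invented per-user contribution counts (not the study’s data):

```python
# Toy long-tail illustration with invented per-user contribution counts.
contributions = sorted([120, 85, 40, 22, 9, 6, 4, 3, 2, 1, 1, 0, 0, 0, 0], reverse=True)

total = sum(contributions)
top_users = contributions[: max(1, len(contributions) // 10)]  # the most active ~10% of users
share = sum(top_users) / total

print(f"{len(contributions)} users, {total} contributions in total")
print(f"the most active ~10% of users account for {share:.0%} of all contributions")
```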

GitHub is still resisted by all but the most technically savvy. With a peculiar terminology and work model that presupposes a familiarity with command line computer operations and the language of software coding, using GitHub presents many barriers to the novice user. But while it is tempting to dismiss GitHub, as it currently exists, as ill-suited as a collaboration tool to support document writing, it holds potential as a useful platform for facilitating collaboration in the public sector.

As an example, to help understand how GitHub might be used within governments for collaboration on text documents, we discuss a briefing note document flow in the paper (see the paper for a description of this lovely graphic).

[Image: briefing note document flow diagram]

A few other findings are addressed in the paper, from why public servants may choose not to collaborate even though they believe it’s the right thing to do, to an interesting story about what propelled the use of GitHub in the government of Canada in the first place….(More)”

Can artificial intelligence wipe out unconscious bias from your workplace?


Lydia Dishman at Fast Company: “Unconscious bias is exactly what it sounds like: The associations we make whenever we face a decision are buried so deep (literally—the gland responsible for this, the amygdala, is surrounded by the brain’s gray matter) that we’re as unaware of them as we are of having to breathe.

So it’s not much of a surprise that Ilit Raz, cofounder and CEO of Joonko, a new application that acts as a diversity “coach” powered by artificial intelligence, wasn’t even aware at first of the unconscious bias she was facing as a woman in the course of a normal workday. Raz’s experience coming to grips with that informs the way she and her cofounders designed Joonko to work.

The tool joins a crowded field of AI-driven solutions for the workplace, but most of what’s on the market is meant to root out bias in recruiting and hiring. Joonko, by contrast, is setting its sights on illuminating unconscious bias in the types of workplace experiences where few people even think to look for it….

So far, a lot of these resources have been focused on addressing the hiring process. An integral part of the problem, after all, is getting enough diverse candidates in the recruiting pipeline so they can be considered for jobs. Apps like Blendoor hide a candidate’s name, age, employment history, criminal background, and even their photo so employers can focus on qualifications. Interviewing.io’s platform even masks applicants’ voices. Text.io uses AI to parse communications in order to make job postings more gender-neutral. Unitive’s technology also focuses on hiring, with software designed to detect unconscious bias in Applicant Tracking Systems that read resumes and decide which ones to keep or scrap based on certain keywords.
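As a drastically simplified, hypothetical illustration of the kind of text parsing these tools perform (not Textio’s or any vendor’s actual method, and using a toy word list rather than a validated lexicon), a job posting can be screened for gender-coded wording like this:

```python
# Toy screen for gender-coded wording in a job posting.
# The word sets below are illustrative examples only, not a validated lexicon.
MASCULINE_CODED = {"aggressive", "competitive", "dominant", "rockstar", "ninja"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing", "interpersonal"}

def flag_coded_words(posting: str) -> dict:
    words = {w.strip(".,;:!?()").lower() for w in posting.split()}
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

print(flag_coded_words("We want a competitive, aggressive rockstar to join our collaborative team."))
```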

But as Intel recently discovered, hiring diverse talent doesn’t always mean they’ll stick around. And while one 2014 estimate by Margaret Regan, head of the global diversity consultancy FutureWork Institute, found that 20% of large U.S. employers with diversity programs now provide unconscious-bias training—a number that could reach 50% by next year—that training doesn’t always work as intended. The reasons why vary, from companies putting programs on autopilot and expecting them to run themselves, to the simple fact that many employees who are trained ultimately forget what they learned a few days later.

Joonko doesn’t solve these problems. “We didn’t even start with recruiting,” Raz admits. “We started with task management.” She explains that when a company finally hires a diverse candidate, it needs to understand that the best way to retain them is to make sure they feel included and are given the same opportunities as everyone else. That’s where Joonko sees an opening…(More)”.

Public services and the new age of data


At Civil Service Quarterly: “Government holds massive amounts of data. The potential in that data for transforming the way government makes policy and delivers public services is equally huge. So, getting data right is the next phase of public service reform. And the UK Government has a strong foundation on which to build this future.

Public services have a long and proud relationship with data. In 1858, more than 50 years before the creation of the Cabinet Office, Florence Nightingale produced her famous ‘Diagram of the causes of mortality in the army in the east’ during the Crimean War. The modern era of statistics in government was born at the height of the Second World War with the creation of the Central Statistical Office in 1941.

How data can help

However, the huge advances we’ve seen in technology mean there are significant new opportunities to use data to improve public services. It can help us:

  • understand what works and what doesn’t, through data science techniques, so we can make better decisions: improving the way government works and saving money
  • change the way that citizens interact with government through new, better digital services built on reliable data
  • boost the UK economy by opening and sharing better quality data, in a secure and sensitive way, to stimulate new data-based businesses
  • demonstrate a trustworthy approach to data, so citizens know more about the information held about them and how and why it’s being used

In 2011 the Government embarked upon a radical improvement in its digital capability with the creation of the Government Digital Service, and over the last few years we have seen a similar revolution begin on data. Although there is much more to do, in areas like open data, the UK is already seen as world-leading.

…But if government is going to seize this opportunity, it needs to make some changes in:

  • infrastructure – data is too often hard to find, hard to access, and hard to work with; so government is introducing developer-friendly open registers of trusted core data, such as countries and local authorities, and better tools to find and access personal data where appropriate through APIs for transformative digital services [a minimal sketch of consuming such a register follows this list];
  • approach – we need the right policies in place to enable us to get the most out of data for citizens and ensure we’re acting appropriately; and the introduction of new legislation on data access will ensure government is doing the right thing – for example, through the data science code of ethics;
  • data science skills – those working in government need the skills to be confident with data; that means recruiting more data scientists, developing data science skills across government, and using those skills on transformative projects….(More)”.
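As a rough sketch of what consuming a “developer-friendly open register” could look like for a developer, the snippet below fetches a placeholder JSON endpoint and prints its entries. The URL and the assumed response shape are illustrative only, not the actual register API.

```python
# Rough sketch of reading an open register over HTTP.
# REGISTER_URL and the assumed {key: record} response shape are placeholders.
import requests

REGISTER_URL = "https://example.gov.uk/registers/country/records.json"  # hypothetical endpoint

response = requests.get(REGISTER_URL, timeout=10)
response.raise_for_status()

for key, record in response.json().items():
    print(key, "-", record.get("name", "<unnamed>"))
```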

Scientists have a word for studying the post-truth world: agnotology


In The Conversation: “But scientists have another word for “post-truth”. You might have heard of epistemology, or the study of knowledge. This field helps define what we know and why we know it. On the flip side of this is agnotology, or the study of ignorance. Agnotology is not often discussed, because studying the absence of something — in this case knowledge — is incredibly difficult.

Doubt is our product

Agnotology is more than the study of what we don’t know; it’s also the study of why we are not supposed to know it. One of its more important aspects is revealing how people, usually powerful ones, use ignorance as a strategic tool to hide or divert attention from societal problems in which they have a vested interest.

A perfect example is the tobacco industry’s dissemination of reports that continuously questioned the link between smoking and cancer. As one tobacco employee famously stated, “Doubt is our product.”

In a similar way, conservative think tanks such as The Heartland Institute work to discredit the science behind human-caused climate change.

Despite the fact that 97% of scientists support the anthropogenic causes of climate change, hired “experts” have been able to populate talk shows, news programmes, and the op-ed pages to suggest a lack of credible data or established consensus, even with evidence to the contrary.

These institutes generate pseudo-academic reports to counter scientific results. In this way, they are responsible for promoting ignorance….

Under agnotology 2.0, truth becomes a moot point. It is the sensation that counts. Public media leaders create an impact with whichever arguments they can muster based in whatever fictional data they can create…Donald Trump entering the White House is the pinnacle of agnotology 2.0. Washington Post journalist Fareed Zakaria has argued that in politics, what matters is no longer the economy but identity; we would like to suggest that the problem runs deeper than that.

The issue is not whether we should search for identity, for fame, or for sensational opinions and entertainment. The overarching issue is the fallen status of our collective search for truth, in its many forms. It is no longer a positive attribute to seek out truth, determine biases, evaluate facts, or share knowledge.

Under agnotology 2.0, scientific thinking itself is under attack. In a post-fact and post-truth era, we could very well become post-science….(More)”.

Harnessing the Power of Feedback Loops


Thomas Kalil and David Wilkinson at the White House: “When it comes to strengthening the public sector, the Federal Government looks for new ways to achieve better results for the people we serve. One promising tool that has gained momentum across numerous sectors in the last few years is the adoption of feedback loops.  Systematically collecting data and learning from client and customer insights can benefit organizations across all sectors.

Collecting these valuable insights—and acting on them—remains an underutilized tool.  The people who receive services are the experts on their effectiveness and usefulness.  While the private sector has used customer feedback to improve products and services, the government and nonprofit sectors have often lagged behind.  User experience is a critically important factor in driving positive outcomes.  Getting honest feedback from service recipients can help nonprofit service providers and agencies at all levels of government ensure their work effectively addresses the needs of the people they serve. It’s equally important to close the loop by letting those who provided feedback know that their input was put to good use.

In September, the White House Office of Social Innovation and the White House Office of Science and Technology Policy (OSTP) hosted a workshop at the White House on data-driven feedback loops for the social and public sectors.  The event brought together leaders across the philanthropy, nonprofit, and business sectors who discussed ways to collect and utilize feedback.

The program featured organizations in the nonprofit sector that use feedback to learn what works, what might not be working as well, and how to fix it. One organization, which offers comprehensive employment services to men and women with recent criminal convictions, explained that it has sought feedback from clients on its training program and learned that many people were struggling to find their work site locations and get to the sessions on time. The organization acted on this feedback, shifting its start times and providing maps and clearer directions to participants.  These two simple changes increased both participation in and satisfaction with the program.

Another organization collected feedback to learn whether factory workers attend and understand trainings on fire evacuation procedures. By collecting and acting on this feedback in Brazil, the organization was able to help a factory reduce fire-drill evacuation time from twelve minutes to two minutes—a life-saving result of seeking feedback.

With results such as these in mind, the White House has emphasized the importance of evidence and data-driven solutions across the Federal Government.  …

USAID works to end extreme poverty in over 100 countries around the world. The Agency has recently changed its operational policy to enable programs to adapt to feedback from the communities in which they work. They did this by removing bureaucratic obstacles and encouraging more flexibility in their program design. For example, if a USAID-funded project designed to increase agricultural productivity is unexpectedly impacted by drought, the original plan may no longer be relevant or effective; the community may want drought-resistant crops instead.  The new, more flexible policy is intended to ensure that such programs can pivot if a community provides feedback that its needs have changed or projects are not succeeding…(More)”

How Mobile Crowdsourcing Can Improve Occupational Safety


Batu Sayici & Beth Simone Noveck at The GovLab’s Medium: “With 150 workers dying each day from hazardous working conditions, work safety continues to be a serious problem in the U.S. Using mobile technology to collect information about workplace safety conditions from those on the ground could help prevent serious injuries and save lives by accelerating the ability to spot unsafe conditions. The convergence of wireless devices, low-cost sensors, big data, and crowdsourcing can transform the way we assess risk in our workplaces. Government agencies, labor unions, workers’ rights organizations, contractors and crowdsourcing technology providers should work together to create new tools and frameworks in a way that can improve safety and provide value to all stakeholders.

Crowdsourcing (the act of soliciting help from a distributed audience) can provide a real-time source of data to complement data collected by government agencies as part of the regulatory processes of monitoring workplace safety. Having access to this data could help government agencies to more effectively monitor safety-related legal compliance, help building owners, construction companies and procurement entities to more easily identify “responsible contractors and subcontractors,” and aid workers and unions in making more informed choices and becoming better advocates for their own protection. Just as the FitBit and Nike Wristband provide individuals with a real-time reflection of their habits designed to create the incentive for healthier living, crowdsourcing safety data has the potential to provide employers and employees alike with a more accurate picture of conditions and accelerate the time needed to take action….(More)”
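A minimal sketch of what a crowdsourced safety-report pipeline might start from; the record schema, data and alert threshold below are invented for illustration and are not drawn from any existing platform.

```python
# Illustrative sketch: count crowd-submitted hazard reports per worksite and flag
# sites that cross a made-up threshold. Schema, data and threshold are invented.
from collections import Counter
from dataclasses import dataclass

@dataclass
class HazardReport:
    site_id: str
    hazard_type: str   # e.g. "missing guardrail", "blocked exit"
    reported_at: str   # ISO 8601 timestamp

reports = [
    HazardReport("site-12", "missing guardrail", "2017-01-03T09:15:00"),
    HazardReport("site-12", "blocked exit", "2017-01-04T14:02:00"),
    HazardReport("site-07", "missing guardrail", "2017-01-05T08:40:00"),
    HazardReport("site-12", "missing guardrail", "2017-01-06T11:20:00"),
]

ALERT_THRESHOLD = 3  # arbitrary, for illustration
counts = Counter(r.site_id for r in reports)
flagged = [site for site, n in counts.items() if n >= ALERT_THRESHOLD]
print("sites needing follow-up:", flagged)
```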

The social data revolution will be crowdsourced


Nicholas B. Adams at SSRC Parameters: “It is now abundantly clear to librarians, archivists, computer scientists, and many social scientists that we are in a transformational age. If we can understand and measure meaning from all of these data describing so much of human activity, we will finally be able to test and revise our most intricate theories of how the world is socially constructed through our symbolic interactions….

We cannot write enough rules to teach a computer to read like us. And because the social world is not a game per se, we can’t design a reinforcement-learning scenario teaching a computer to “score points” and just ‘win.’ But AlphaGo’s example does show a path forward. Recall that much of AlphaGo’s training came in the form of supervised machine learning, where humans taught it to play like them by showing the machine how human experts played the game. Already, humans have used this same supervised learning approach to teach computers to classify images, identify parts of speech in text, or categorize inventories into various bins. Without writing any rules, simply by letting the computer guess, then giving it human-generated feedback about whether it guessed right or wrong, humans can teach computers to label data as we do. The problem is (or has been): humans label textual data slowly—very, very slowly. So, we have generated precious little data with which to teach computers to understand natural language as we do. But that is going to change….
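A minimal sketch of the supervised-learning loop described here: humans label short text passages, and a classifier learns to reproduce those labels. The snippets, labels and category names are invented, and real projects need orders of magnitude more labeled examples.

```python
# Minimal supervised text-classification sketch: learn from human-provided labels.
# Texts, labels and categories are invented; real training sets are much larger.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Protesters gathered outside city hall demanding a wage increase.",
    "The council approved the annual budget with little debate.",
    "Union members marched through downtown carrying banners.",
    "Officials released quarterly employment statistics on Friday.",
]
labels = ["protest", "routine", "protest", "routine"]  # crowd-worker judgments

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Union members gathered downtown demanding change."]))
```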

The single greatest factor dilating the duration of such large-scale text-labeling projects has been workforce training and turnover. ….The key to organizing work for the crowd, I had learned from talking to computer scientists, was task decomposition. The work had to be broken down into simple pieces that any (moderately intelligent) person could do through a web interface without requiring face-to-face training. I knew from previous experiments with my team that I could not expect a crowd worker to read a whole article, or to know our whole conceptual scheme defining everything of potential interest in those articles. Requiring either or both would be asking too much. But when I realized that my conceptual scheme could actually be treated as multiple smaller conceptual schemes, the idea came to me: Why not have my RAs identify units of text that corresponded with the units of analysis of my conceptual scheme? Then, crowd workers reading those much smaller units of text could just label them according to a smaller sub-scheme. Moreover, I came to realize, we could ask them leading questions about the text to elicit information about the variables and attributes in the scheme, so they wouldn’t have to memorize the scheme either. By having them highlight the words justifying their answers, they would be labeling text according to our scheme without any face-to-face training. Bingo….
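A minimal sketch of the task-decomposition idea: split an article into sentence-sized units and pair each unit with one question from a sub-scheme, so a crowd worker can answer through a web interface without learning the full conceptual scheme. The splitting rule, questions and article text below are invented placeholders.

```python
# Illustrative task decomposition: break an article into sentence-sized units and
# pair each with one sub-scheme question. Splitting rule and questions are placeholders.
import re

article = (
    "Workers walked out of the plant on Tuesday. "
    "Management said talks would resume next week. "
    "Police reported no arrests."
)

SUB_SCHEME_QUESTIONS = [
    "Who is the main actor in this passage?",
    "What action, if any, does the passage describe?",
]

units = [u.strip() for u in re.split(r"(?<=[.!?])\s+", article) if u.strip()]

micro_tasks = [
    {"unit": unit, "question": question, "highlighted_words": None}  # worker fills these in
    for unit in units
    for question in SUB_SCHEME_QUESTIONS
]

for task in micro_tasks[:3]:
    print(task)
```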

This approach promises more, too. The databases generated by crowd workers, citizen scientists, and students can also be used to train machines to see in social data what we humans see comparatively easily. Just as AlphaGo learned from humans how to play a strategy game, our supervision can also help it learn to see the social world in textual or video data. The final products of social data analysis assembly lines, therefore, are not merely rich and massive databases allowing us to refine our most intricate, elaborate, and heretofore data-starved theories; they are also computer algorithms that will do most or all social data labeling in the future. In other words, whether we know it or not, we social scientists hold the key to developing artificial intelligences capable of understanding our social world….

At stake is a social science with the capacity to quantify and qualify so many of our human practices, from the quotidian to the mythic, and to lead efforts to improve them. In decades to come, we may even be able to follow the path of other mature sciences (including physics, biology, and chemistry) and shift our focus toward engineering better forms of sociality. All the more so because it engages the public: a crowd-supported social science could enlist a new generation in the confident and competent re-construction of society….(More)”