Data Was Supposed to Fix the U.S. Education System. Here’s Why It Hasn’t.


Simon Rodberg at Harvard Business School: “For too long, the American education system failed too many kids, including far too many poor kids and kids of color, without enough public notice or accountability. To combat this, leaders of all political persuasions championed the use of testing to measure progress and drive better results. Measurement has become so common that in school districts from coast to coast you can now find calendars marked “Data Days,” when teachers are expected to spend time not on teaching, but on analyzing data like end-of-year and mid-year exams, interim assessments, science and social studies and teacher-created and computer-adaptive tests, surveys, attendance and behavior notes. It’s been this way for more than 30 years, and it’s time to try a different approach.

The big numbers are necessary, but the more they proliferate, the less value they add. Data-based answers lead to further data-based questions, testing, and analysis; and the psychology of leaders and policymakers means that the hunt for data gets in the way of actual learning. The drive for data responded to a real problem in education, but bad thinking about testing and data use has made the data cure worse than the disease….

The leadership decision at stake is how much data to collect. I’ve heard variations on “In God we trust; all others bring data” at any number of conferences and beginning-of-school-year speeches. But the mantra “we believe in data” is actually only shorthand for “we believe our actions should be informed by the best available data.” In education, that mostly means testing. In other fields, the kind of data-gathering process is different, but the issue is the same. The key question is not “Will the data be useful?” (of course it can be) or “Will the data be interesting?” (yes, again). The proper question for leaders to ask is: will the data help us make better-enough decisions to be worth the cost of getting and using it? So far, the answer is “no.”

Nationwide data suggests that the growth of data-driven schooling hasn’t worked even by its own lights. Harvard professor Daniel Koretz says, “The best estimate is that test-based accountability may have produced modest gains in elementary-school mathematics but no appreciable gains in either reading or high-school mathematics — even though reading and mathematics have been its primary focus.”

We wanted data to help us get past the problem of too many students learning too little, but it turns out that data is an insufficient, even misleading answer. It’s possible that all we’ve learned from our hyper-focus on data is that better instruction won’t come from more detailed information, but from changing what people do. That’s what data-driven reform is meant for, of course: convincing teachers of the need to change and focusing where they need to change….(More)”.

Firm Led by Google Veterans Uses A.I. to ‘Nudge’ Workers Toward Happiness


Daisuke Wakabayashi in the New York Times: “Technology companies like to promote artificial intelligence’s potential for solving some of the world’s toughest problems, like reducing automobile deaths and helping doctors diagnose diseases. A company started by three former Google employees is pitching A.I. as the answer to a more common problem: being happier at work.

The start-up, Humu, is based in Google’s hometown, and it builds on some of the so-called people-analytics programs pioneered by the internet giant, which has studied things like the traits that define great managers and how to foster better teamwork.

Humu wants to bring similar data-driven insights to other companies. It digs through employee surveys using artificial intelligence to identify one or two behavioral changes that are likely to make the biggest impact on elevating a work force’s happiness. Then it uses emails and text messages to “nudge” individual employees into small actions that advance the larger goal.
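Humu has not published how its engine works, so the following is a purely hypothetical sketch of the pipeline the article describes: survey scores come in, the one or two lowest-scoring themes are chosen as targets, and each target maps to a small suggested action. All theme names, scores, and messages here are invented.

```python
from collections import defaultdict

def pick_nudge_targets(survey_responses, top_n=2):
    """Average each theme's 1-5 agreement scores across responses and
    return the lowest-scoring themes: the one or two behaviors most
    likely to move overall happiness if improved."""
    scores = defaultdict(list)
    for response in survey_responses:
        for theme, score in response.items():
            scores[theme].append(score)
    averages = {theme: sum(vals) / len(vals) for theme, vals in scores.items()}
    return sorted(averages, key=averages.get)[:top_n]

def draft_nudge(theme):
    """Map a weak theme to one small, concrete suggested action."""
    suggestions = {
        "transparent_decisions": "Before today's meeting, ask your team "
                                 "for input, and be ready to change your mind.",
        "recognition": "Thank one teammate for a specific contribution this week.",
    }
    return suggestions.get(theme, "Pick one small action to improve this area today.")

responses = [
    {"transparent_decisions": 2, "recognition": 4, "workload": 3},
    {"transparent_decisions": 1, "recognition": 5, "workload": 4},
]
targets = pick_nudge_targets(responses)
print(targets)  # → ['transparent_decisions', 'workload']
```

The design point the article implies is in the split: the analysis picks *where* to intervene, while the nudge itself stays small and actionable rather than reporting scores back to managers.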

At a company where workers feel that the way decisions are made is opaque, Humu might nudge a manager before a meeting to ask the members of her team for input and to be prepared to change her mind. Humu might ask a different employee to come up with questions involving her team that she would like to have answered.

At the heart of Humu’s efforts is the company’s “nudge engine” (yes, it’s trademarked). It is based on the economist Richard Thaler’s Nobel Prize-winning research into how people often make decisions because of what is easier rather than what is in their best interest, and how a well-timed nudge can prompt them to make better choices.

Google has used this approach to coax employees into the corporate equivalent of eating their vegetables, prodding them to save more for retirement, waste less food at the cafeteria and opt for healthier snacks….

But will workers consider the nudges useful or manipulative?

Todd Haugh, an assistant professor of business law and ethics at Indiana University’s Kelley School of Business, said nudges could push workers into behaving in ways that benefited their employers’ interests over their own.

“The companies are the only ones who know what the purpose of the nudge is,” Professor Haugh said. “The individual who is designing the nudge is the one whose interests are going to be put in the forefront.”…(More)”.


The Datafication of Employment


Report by Sam Adler-Bell and Michelle Miller at the Century Foundation: “We live in a surveillance society. Our every preference, inquiry, whim, desire, relationship, and fear can be seen, recorded, and monetized by thousands of prying corporate eyes. Researchers and policymakers are only just beginning to map the contours of this new economy—and reckon with its implications for equity, democracy, freedom, power, and autonomy.

For consumers, the digital age presents a devil’s bargain: in exchange for basically unfettered access to our personal data, massive corporations like Amazon, Google, and Facebook give us unprecedented connectivity, convenience, personalization, and innovation. Scholars have exposed the dangers and illusions of this bargain: the corrosion of personal liberty, the accumulation of monopoly power, the threat of digital redlining, predatory ad-targeting, and the reification of class and racial stratification. But less well understood is the way data—its collection, aggregation, and use—is changing the balance of power in the workplace.

This report offers some preliminary research and observations on what we call the “datafication of employment.” Our thesis is that data-mining techniques innovated in the consumer realm have moved into the workplace. Firms that made a fortune selling and speculating on data acquired from consumers in the digital economy are now increasingly doing the same with data generated by workers. Not only does this corporate surveillance enable a pernicious form of rent-seeking—in which companies generate huge profits by packaging and selling worker data in a marketplace hidden from workers’ eyes—but it also opens the door to an extreme informational asymmetry in the workplace that threatens to give employers nearly total control over every aspect of employment.

The report begins with an explanation of how a regime of ubiquitous consumer surveillance came about, and how it morphed into worker surveillance and the datafication of employment. The report then offers principles for action for policymakers and advocates seeking to respond to the harmful effects of this new surveillance economy. The final section concludes with a look forward at where the surveillance economy is going, and how researchers, labor organizers, and privacy advocates should prepare for this changing landscape….(More)”

The Everyday Life of an Algorithm


Book by Daniel Neyland: “This open access book begins with an algorithm — a set of IF…THEN rules used in the development of a new, ethical, video surveillance architecture for transport hubs. Readers are invited to follow the algorithm over three years, charting its everyday life. Questions of ethics, transparency, accountability and market value must be grasped by the algorithm in a series of ever more demanding forms of experimentation. Here the algorithm must prove its ability to get a grip on everyday life if it is to become an ordinary feature of the settings where it is being put to work. Through investigating the everyday life of the algorithm, the book opens a conversation with existing social science research that tends to focus on the power and opacity of algorithms. In this book we have unique access to the algorithm’s design, development and testing, but can also bear witness to its fragility and dependency on others….(More)”.
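The excerpt does not reproduce the system’s actual rules, but the IF…THEN form it opens with can be sketched in a few lines. Everything below is invented for illustration: the event fields, the “abandoned luggage” scenario, and the 60-second threshold.

```python
def classify_event(event):
    """One toy IF...THEN rule chain for an 'abandoned luggage' alert
    in a transport hub. Field names and the 60-second threshold are
    invented for illustration, not taken from the book."""
    if event["object"] != "luggage":
        return "keep_watching"
    if event["owner_nearby"]:
        return "ignore"
    if event["seconds_unattended"] > 60:
        return "alert_operator"
    return "keep_watching"

print(classify_event(
    {"object": "luggage", "owner_nearby": False, "seconds_unattended": 90}
))  # → alert_operator
```

Rules this simple are easy to read and audit, which is part of why the book can follow the algorithm so closely; the “fragility” it documents comes from everyday life rarely fitting such tidy conditions.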

Using Artificial Intelligence to Promote Diversity


Paul R. Daugherty, H. James Wilson, and Rumman Chowdhury at MIT Sloan Management Review:  “Artificial intelligence has had some justifiably bad press recently. Some of the worst stories have been about systems that exhibit racial or gender bias in facial recognition applications or in evaluating people for jobs, loans, or other considerations. One program was routinely recommending longer prison sentences for blacks than for whites on the basis of the flawed use of recidivism data.

But what if instead of perpetuating harmful biases, AI helped us overcome them and make fairer decisions? That could eventually result in a more diverse and inclusive world. What if, for instance, intelligent machines could help organizations recognize all worthy job candidates by avoiding the usual hidden prejudices that derail applicants who don’t look or sound like those in power or who don’t have the “right” institutions listed on their résumés? What if software programs were able to account for the inequities that have limited the access of minorities to mortgages and other loans? In other words, what if our systems were taught to ignore data about race, gender, sexual orientation, and other characteristics that aren’t relevant to the decisions at hand?

AI can do all of this — with guidance from the human experts who create, train, and refine its systems. Specifically, the people working with the technology must do a much better job of building inclusion and diversity into AI design by using the right data to train AI systems to be inclusive and thinking about gender roles and diversity when developing bots and other applications that engage with the public.
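One concrete version of “ignoring data about race, gender, sexual orientation” is the technique sometimes called fairness through unawareness: removing protected attributes before records ever reach a model. A minimal sketch follows; the field names are illustrative, and note that proxies such as zip code can still leak protected information, which is one reason the authors stress human guidance over automation alone.

```python
# Attributes that should not influence the decision at hand.
SENSITIVE = {"race", "gender", "sexual_orientation", "age"}

def strip_sensitive_features(records):
    """'Fairness through unawareness': drop protected attributes before
    records reach a model. Proxies (e.g. zip code) can still leak this
    information, so this is a starting point, not a guarantee."""
    return [{k: v for k, v in record.items() if k not in SENSITIVE}
            for record in records]

applicants = [
    {"gender": "f", "race": "black", "years_experience": 7, "zip": "94110"},
]
print(strip_sensitive_features(applicants))
# → [{'years_experience': 7, 'zip': '94110'}]
```

Because of the proxy problem, practitioners typically pair this filtering with auditing of model outcomes across groups rather than relying on it by itself.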

Design for Inclusion

Software development remains the province of males — only about one-quarter of computer scientists in the United States are women — and minority racial groups, including blacks and Hispanics, are underrepresented in tech work, too. Groups like Girls Who Code and AI4ALL have been founded to help close those gaps. Girls Who Code has reached almost 90,000 girls from various backgrounds in all 50 states, and AI4ALL specifically targets girls in minority communities….(More)”.

Using Data to Raise the Voices of Working Americans


Ida Rademacher at the Aspen Institute: “…At the Aspen Institute Financial Security Program, we sense a growing need to ground these numbers in what people experience day-to-day. We’re inspired by projects like the Financial Diaries that helped create empathy for what the statistics mean. …the Diaries was a time-delimited project, and the insights we can gain from major banking institutions are somewhat limited in their ability to show the challenges of economically marginalized populations. That’s why we’ve recently launched a consumer insights initiative to develop and translate a more broadly sourced set of data that lifts the curtain on the financial lives of low- and moderate-income US consumers. What does it really mean to lack $400 when you need it? How do people cope? What are the aspirations and anxieties that fuel choices? Which strategies work and which fall flat? Our work exists to focus the dialogue about financial insecurity by keeping an ear to the ground and amplifying what we hear. Our ultimate goal: Inspire new solutions that react to reality, ones that can genuinely improve the financial well-being of many.

Our consumer insights initiative sees power in partnerships and collaboration. We’re building a big tent for a range of actors to query and share what their data says: private sector companies, public programs, and others who see unique angles into the financial lives of low- and moderate-income households. We are creating a new forum to lift up these firms serving consumers – and in doing so, we’re raising the voices of consumers themselves.

One example of this work is our Consumer Insights Collaborative (CIC), a group of nine leading non-profits from across the country. Each has a strong sense of challenges and opportunities on the ground because every day their work brings them face-to-face with a wide array of consumers, many of whom are low- and moderate-income families. And most already work independently to learn from their data. Take EARN and its Big Data on Small Savings project; the Financial Clinic’s insights series called Change Matters; Mission Asset Fund’s R&D Lab focused on human-centered design; and FII which uses data collection as part of its main service.

Through the CIC, they join forces to see more than any one nonprofit can on its own. Together CIC members articulate common questions and synthesize collective answers. In the coming months we will publish a first-of-its-kind report on a jointly posed question: What are the dimensions and drivers of short-term financial stability?

An added bonus of partnerships like the CIC is the community of practice that naturally emerges. We believe that data scientists from all walks can, and indeed must, learn from each other to have the greatest impact. Our initiative especially encourages cooperative capacity-building around data security and privacy. We acknowledge that as access to information grows, so does the risk to consumers themselves. We endorse collaborative projects that value ethics, respect, and integrity as much as they value cross-organizational learning.

As our portfolio grows, we will invite an even broader network to engage. We’re already working with NEST Insights to draw on NEST’s extensive administrative data on retirement savings, with an aim to understand more about the long-term implications of non-traditional work and unstable household balance sheets on financial security….(More)”.

Public Attitudes Toward Computer Algorithms


Aaron Smith at the Pew Research Center: “Algorithms are all around us, utilizing massive stores of data and complex analytics to make decisions with often significant impacts on humans. They recommend books and movies for us to read and watch, surface news stories they think we might find relevant, estimate the likelihood that a tumor is cancerous and predict whether someone might be a criminal or a worthwhile credit risk. But despite the growing presence of algorithms in many aspects of daily life, a Pew Research Center survey of U.S. adults finds that the public is frequently skeptical of these tools when used in various real-life situations.

This skepticism spans several dimensions. At a broad level, 58% of Americans feel that computer programs will always reflect some level of human bias – although 40% think these programs can be designed in a way that is bias-free. And in various contexts, the public worries that these tools might violate privacy, fail to capture the nuance of complex situations, or simply put the people they are evaluating in an unfair situation. Public perceptions of algorithmic decision-making are also often highly contextual. The survey shows that otherwise similar technologies can be viewed with support or suspicion depending on the circumstances or on the tasks they are assigned to do….

The following are among the major findings.

The public expresses broad concerns about the fairness and acceptability of using computers for decision-making in situations with important real-world consequences

Majorities of Americans find it unacceptable to use algorithms to make decisions with real-world consequences for humans

By and large, the public views these examples of algorithmic decision-making as unfair to the people the computer-based systems are evaluating. Most notably, only around one-third of Americans think that the video job interview and personal finance score algorithms would be fair to job applicants and consumers. When asked directly whether they think the use of these algorithms is acceptable, a majority of the public says that they are not acceptable. Two-thirds of Americans (68%) find the personal finance score algorithm unacceptable, and 67% say the computer-aided video job analysis algorithm is unacceptable….

Attitudes toward algorithmic decision-making can depend heavily on context

Despite the consistencies in some of these responses, the survey also highlights the ways in which Americans’ attitudes toward algorithmic decision-making can depend heavily on the context of those decisions and the characteristics of the people who might be affected….

When it comes to the algorithms that underpin the social media environment, users’ comfort level with sharing their personal information also depends heavily on how and why their data are being used. A 75% majority of social media users say they would be comfortable sharing their data with those sites if it were used to recommend events they might like to attend. But that share falls to just 37% if their data are being used to deliver messages from political campaigns.

Across age groups, social media users are comfortable with their data being used to recommend events — but wary of that data being used for political messaging

In other instances, different types of users offer divergent views about the collection and use of their personal data. For instance, about two-thirds of social media users younger than 50 find it acceptable for social media platforms to use their personal data to recommend connecting with people they might want to know. But that view is shared by fewer than half of users ages 65 and older….(More)”.

A Doctor’s Prescription: Data May Finally Be Good for Your Health


Interview by Art Kleiner: “In 2015, Robert Wachter published The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age, a skeptical account of digitization in hospitals. Despite the promise offered by the digital transformation of healthcare, electronic health records had not delivered better care and greater efficiency. The cumbersome design, legacy procedures, and resistance from staff were frustrating everyone — administrators, nurses, consultants, and patients. Costs continued to rise, and preventable medical mistakes were not spotted. One patient at Wachter’s own hospital, one of the nation’s finest, was given 39 times the correct dose of antibiotics by an automated system that nobody questioned. The teenager survived, but it was clear that there needed to be a new approach to the management and use of data.

Wachter has for decades considered the delivery of healthcare through a lens focused on patient safety and quality. In 1996, he coauthored a paper in the New England Journal of Medicine that coined the term hospitalist in describing and promoting a new way of managing patients in hospitals: having one doctor — the hospitalist — “own” the patient journey from admission to discharge. The primary goal was to improve outcomes and save lives. Wachter argued it would also reduce costs and increase efficiency, making the business case for better healthcare. And he was right. Today there are more than 50,000 hospitalists, and it took just two years from the article’s publication to have the first data proving his point. In 2016, Wachter was named chair of the Department of Medicine at the University of California, San Francisco (UCSF), where he has worked since 1990.

Today, Wachter is, to paraphrase the title of a recent talk, less grumpy than he used to be about health tech. The hope part of his book’s title has materialized in some areas faster than he predicted. AI’s advances in imaging are already helping the detection of cancers become more accurate. As data collection has become better systematized, big technology firms such as Google, Amazon, and Apple are entering (in Google’s case, reentering) the field and having more success focusing their problem-solving skills on healthcare issues. In his San Francisco office, Wachter sat down with strategy+business to discuss why the healthcare system may finally be about to change….

Systems for Fresh Thinking

S+B: The changes you appreciate seem to have less to do with technological design and more to do with people getting used to the new systems, building their own variations, and making them work.
WACHTER:
 The original electronic health record was just a platform play to get the data in digital form. It didn’t do anything particularly helpful in terms of helping the physicians make better decisions or helping to connect one kind of doctor with another kind of doctor. But it was a start.

I remember that when we were starting to develop our electronic health record at UCSF, 12 or 13 years ago, I hired a physician who is now in charge of our health computer system. I said to him, “We don’t have our electronic health record in yet, but I’m pretty sure we will in seven or eight years. What will your job be when that’s done?” I actually thought once the system was fully implemented, we’d be done with the need to innovate and evolve in health IT. That, of course, was asinine.

S+B: That’s like saying to an auto mechanic, “What will your job be when we have automatic transmissions?”
WACHTER:
 Right, but even more so, because many of us saw electronic health records as the be-all and end-all of digitally facilitated medicine. But putting in the electronic health record is just step one of 10. Then you need to start connecting all the pieces, and then you add analytics that make sense of the data and make predictions. Then you build tools and apps to fit into the workflow and change the way you work.

One of my biggest epiphanies was this: When you digitize, in any industry, nobody is clever enough to actually change anything. All they know how to do is digitize the old practice. You only start seeing real progress when smart people come in, begin using the new system, and say, “Why the hell do we do it that way?” And then you start thinking freshly about the work. That’s when you have a chance to reimagine the work in a digital environment…(More)”.

Human Rights in the Big Data World


Paper by Francis Kuriakose and Deepa Iyer: “Ethical approach to human rights conceives and evaluates law through the underlying value concerns. This paper examines human rights after the introduction of big data using an ethical approach to rights. First, the central value concerns such as equity, equality, sustainability and security are derived from the history of digital technological revolution. Then, the properties and characteristics of big data are analyzed to understand emerging value concerns such as accountability, transparency, traceability, explainability and disprovability.

Using these value points, this paper argues that big data calls for two types of evaluations regarding human rights. The first is the reassessment of existing human rights in the digital sphere predominantly through right to equality and right to work. The second is the conceptualization of new digital rights such as right to privacy and right against propensity-based discrimination. The paper concludes that as we increasingly share the world with intelligence systems, these new values expand and modify the existing human rights paradigm….(More)”.

Text Analysis Systems Mine Workplace Emails to Measure Staff Sentiments


Alan Rothman at LLRX: “…For all of these good, bad or indifferent workplaces, a key question is whether any of the actions of management to engage the staff and listen to their concerns ever resulted in improved working conditions and higher levels of job satisfaction?

The answer is most often “yes”. Just having a say in, and some sense of control over, our jobs and workflows can indeed have a demonstrable impact on morale, camaraderie and the bottom line. As posited in the Hawthorne Effect, also termed the “Observer Effect”, this dynamic was first observed during studies in the 1920s and 1930s, when the management of a factory made improvements to the lighting and work schedules. In turn, worker satisfaction and productivity temporarily increased. This was not so much because there was more light, but rather, that the workers sensed that management was paying attention to, and then acting upon, their concerns. The workers perceived they were no longer just cogs in a machine.

Perhaps, too, the Hawthorne Effect is in some ways the workplace equivalent of Heisenberg’s Uncertainty Principle in physics. To vastly oversimplify this slippery concept, the mere act of observing a subatomic particle can change its position.

Giving the processes of observation, analysis and change at the enterprise level a modern (but non-quantum) spin is a fascinating new article in the September 2018 issue of The Atlantic entitled What Your Boss Could Learn by Reading the Whole Company’s Emails, by Frank Partnoy. I highly recommend a click-through and full read if you have an opportunity. I will summarize and annotate it, and then, considering my own thorough lack of understanding of the basics of y=f(x), pose some of my own physics-free questions….

Today the text analytics business, like the work done by KeenCorp, is thriving. It has long been established as the processing behind email spam filters. Now it is finding other applications, including monitoring corporate reputations on social media and other sites.

The finance industry is another growth sector, as investment banks and hedge funds scan a wide variety of information sources to locate “slight changes in language” that may point towards pending increases or decreases in share prices. Financial research providers are using artificial intelligence to mine “insights” from their own selections of news and analytical sources.

But is this technology effective?

In a paper entitled Lazy Prices, by Lauren Cohen (Harvard Business School and NBER), Christopher Malloy (Harvard Business School and NBER), and Quoc Nguyen (University of Illinois at Chicago), in a draft dated February 22, 2018, these researchers found that a company’s share price (in this case, NetApp’s, after its 2010 annual report) measurably went down after the firm “subtly changed” its descriptions of certain risks. Algorithms can detect such changes more quickly and effectively than humans. The company subsequently clarified in its 2011 annual report its “failure to comply” with reporting requirements in 2010. A highly skilled stock analyst “might have missed that phrase,” but once again it was captured by the researchers’ algorithms.
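The paper’s actual pipeline is more elaborate, but the core signal, year-over-year change in disclosure language, can be approximated in a few lines with Python’s standard difflib. The sample risk sentences below are invented, not taken from NetApp’s filings.

```python
import difflib

def filing_change_score(prev_text, curr_text):
    """Return 1 - word-level similarity between two disclosure sections,
    so a higher score means the company changed more of its language --
    the kind of signal the 'Lazy Prices' authors found predictive."""
    matcher = difflib.SequenceMatcher(None, prev_text.split(), curr_text.split())
    return 1.0 - matcher.ratio()

risk_2009 = "We may face litigation in the ordinary course of business."
risk_2010 = ("We may face litigation and government investigations "
             "relating to export compliance.")

print(filing_change_score(risk_2009, risk_2009))  # → 0.0 (boilerplate reused)
print(filing_change_score(risk_2009, risk_2010) > 0.3)  # → True (language changed)
```

The paper’s title captures why this works: most firms copy disclosure boilerplate from year to year, so any departure from near-zero change is itself informative.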

In the hands of a “skeptical investor”, this information might well have prompted questions about the differences between the 2010 and 2011 annual reports and, in turn, saved him or her a great deal of money. This detection was an early signal of a looming decline in NetApp’s stock. Half a year after the 2011 report’s publication, it was reported that the Syrian government had bought the company’s equipment and “used that equipment to spy on its citizens”, causing further declines.

Now text analytics is being aimed at a new target: the content of employees’ communications. Although it has been found that workers have no expectation of privacy in their workplaces, some companies remain reluctant to mine employee communications because of privacy concerns. Even so, as text analysis systems continue to improve, companies are finding it harder to resist the “urge to mine employee information.”

Among the evolving enterprise applications is the use of these systems by human resources departments to assess overall employee morale. For example, Vibe is one such app; it scans communications on Slack, a widely used enterprise messaging platform. Vibe’s algorithm measures the positive and negative emotions of a work team and reports them in real time….(More)”.
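Vibe’s actual model is proprietary, but a crude lexicon-based stand-in conveys the idea of scoring a team’s messages for emotional tone. The word lists and messages below are invented for illustration.

```python
import re

# Tiny invented sentiment lexicons; a real system would use a trained model.
POSITIVE = {"great", "thanks", "love", "excited", "shipped"}
NEGATIVE = {"blocked", "frustrated", "broken", "worried", "late"}

def team_vibe(messages):
    """Crude lexicon-based stand-in for this kind of scoring: net
    positive-minus-negative word count, normalized by message count."""
    score = 0
    for msg in messages:
        words = re.findall(r"[a-z']+", msg.lower())
        score += sum(w in POSITIVE for w in words)
        score -= sum(w in NEGATIVE for w in words)
    return score / max(len(messages), 1)

print(team_vibe(["Great demo, thanks!", "Still blocked on the deploy"]))
# → 0.5
```

Even this toy version illustrates the ethical questions the article raises: the score says something about a team without any individual having consented to, or even seen, the measurement.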