New Data Tools Connect American Workers to Education and Job Opportunities


Department of Commerce: “These are the real stories of people who recently participated in the Census Bureau initiative called The Opportunity Project—a novel, collaborative effort between government agencies, technology companies, and nongovernment organizations to translate government open data into user-friendly tools that solve real-world problems for families, communities, and businesses nationwide. On March 1, they came together to share their projects at The Opportunity Project’s Demo Day. Projects like theirs help veterans, aspiring technologists, and all Americans connect with career and educational opportunities, as Bryan and Olivia did.

One barrier for many American students and workers is the lack of clear data to match them with educational opportunities and jobs. Students want information on the courses that lead to high-paying, high-demand jobs. Job seekers want to find the jobs that best match their skills, or to learn where to acquire new skills that open up career development opportunities. Despite the increasing availability of big data and the long-standing, highly regarded federal statistical system, significant data gaps remain around basic labor market questions:

  • What is the payoff of a bachelor’s degree versus an apprenticeship, 2-year degree, industry certification, or other credential?
  • What are the jobs of the future?  Which jobs of today also will be the jobs of the future? What skills and experience do companies value most?

The Opportunity Project brings government, communities, and companies like IBM, the veteran-led Shift.org, and Nepris together to create tools to answer simple questions related to education, employment, health, transportation, housing, and many other matters that are critical to helping Americans advance in their lives and careers….(More)”.

The Technology Trap: Capital, Labor, and Power in the Age of Automation


Book by Carl Benedikt Frey: “From the Industrial Revolution to the age of artificial intelligence, The Technology Trap takes a sweeping look at the history of technological progress and how it has radically shifted the distribution of economic and political power among society’s members. As Carl Benedikt Frey shows, the Industrial Revolution created unprecedented wealth and prosperity over the long run, but the immediate consequences of mechanization were devastating for large swaths of the population. Middle-income jobs withered, wages stagnated, the labor share of income fell, profits surged, and economic inequality skyrocketed. These trends, Frey documents, broadly mirror those in our current age of automation, which began with the Computer Revolution.

Just as the Industrial Revolution eventually brought about extraordinary benefits for society, artificial intelligence systems have the potential to do the same. But Frey argues that this depends on how the short term is managed. In the nineteenth century, workers violently expressed their concerns over machines taking their jobs. The Luddite uprisings joined a long wave of machinery riots that swept across Europe and China. Today’s despairing middle class has not resorted to physical force, but their frustration has led to rising populism and the increasing fragmentation of society. As middle-class jobs continue to come under pressure, there’s no assurance that positive attitudes to technology will persist.

The Industrial Revolution was a defining moment in history, but few grasped its enormous consequences at the time. The Technology Trap demonstrates that in the midst of another technological revolution, the lessons of the past can help us to more effectively face the present….(More)”.

Saying yes to State Longitudinal Data Systems: building and maintaining cross agency relationships


Report by the National Skills Coalition: “In order to provide actionable information to stakeholders, state longitudinal data systems use administrative data that state agencies collect through administering programs. Thus, state longitudinal data systems must maintain strong working relationships with the state agencies collecting necessary administrative data. These state agencies can include K-12 and higher education agencies, workforce agencies, and those administering social service programs such as the Supplemental Nutrition Assistance Program or Temporary Assistance for Needy Families.

When state longitudinal data systems have strong relationships with agencies, agencies willingly and promptly share their data with the system, engage with data governance when needed, approve research requests in a timely manner, and continue to cooperate with the system over the long term. If state agencies do not participate with their state’s longitudinal data system, the work of the system is put into jeopardy. States may find that research and performance reporting can be stalled or stopped outright.

Kentucky and Virginia have been able to build and maintain support for their systems among state agencies. Their examples demonstrate how states can effectively utilize their state longitudinal data systems….(More)”.

How Data Sharing Can Improve Frontline Worker Development


Digital Promise: “Frontline workers, or the workers who interact directly with customers and provide services in industries like retail, healthcare, food service, and hospitality, help make up the backbone of today’s workforce.

However, frontline workforce talent development presents numerous challenges. Frontline workers may not be receiving the education and training they need to advance in their careers and sustain gainful employment. They also likely do not have access to data regarding their own skills and learning, and do not know what skills employers seek in quality workers.

Today, Digital Promise, a nonprofit authorized by Congress to support comprehensive research and development of programs to advance innovation in education, launched “Tapping Data for Frontline Talent Development,” a new, interactive report that shares how the seamless and secure sharing of data is key to creating more effective learning and career pathways for frontline service workers.

The research revealed that the current learning ecosystem that serves frontline workers—which includes stakeholders like education and training providers, funders, and employers—is complex, siloed, and removes agency from the worker.

Although many data types are collected, in today’s system much of the data is duplicative and rarely used to inform impact and long-term outcomes. The processes and systems in the ecosystem do not support the flow of data between stakeholders or frontline workers.

And yet, data sharing systems and collaborations are beginning to emerge as providers, funders, and employers recognize the power of data-driven decision-making and the benefits of data sharing. Not only can data sharing help improve programs and services; it can also create more personalized interventions for education providers supporting frontline workers, and it can improve talent pipelines for employers.

In addition to providing three case studies with valuable examples of employers, a community, and a state focused on driving change based on data, this new report identifies key recommendations that have the potential to move the current system toward a more data-driven, collaborative, worker-centered learning ecosystem, including:

  1. Creating awareness and demand among stakeholders
  2. Ensuring equity and inclusion for workers/learners through access and awareness
  3. Creating data sharing resources
  4. Advocating for data standards
  5. Advocating for policies and incentives
  6. Spurring the creation of technology systems that enable data sharing/interoperability

We invite you to read our new report today for more information, and sign up for updates on this important work….(More)”

Commonism


/ˈkɑmənɪz(ə)m/

“A new radical, practice-based ideology […] based on the values of sharing, common (intellectual) ownership and new social co-operations.”

Distinct from, yet carrying a deliberate echo of, “communism,” the term “Commonism” was first coined by Tom DeWeese, president of the American Policy Center, and has more recently been redefined in a new book, Commonism: A New Aesthetics of the Real, edited by Nico Dockx and Pascal Gielen.

According to their introduction:

“After half a century of neoliberalism, a new radical, practice-based ideology is making its way from the margins: commonism, with an o in the middle. It is based on the values of sharing, common (intellectual) ownership and new social co-operations. Commoners assert that social relationships can replace money (contract) relationships. They advocate solidarity and they trust in peer-to-peer relationships to develop new ways of production.

“Commonism maps those new ideological thoughts. How do they work and, especially, what is their aesthetics? How do they shape the reality of our living together? Is there another, more just future imaginable through the commons? What strategies and what aesthetics do commoners adopt? This book explores this new political belief system, alternating between theoretical analysis, wild artistic speculation, inspiring art examples, almost empirical observations and critical reflection.”

In an interview excerpted from the book, author Pascal Gielen, Vrije Universiteit Brussel professor Sonja Lavaert, and philosopher Antonio Negri discuss how commonism can transcend the ideological spectrum. Commoners, regardless of political leanings, collaborate to “[re-appropriate] that of which they were robbed by capital.” Examples put forward in the interview include how “liberal politicians write books about the importance of the basic income; neonationalism presents itself as a longing for social cohesion; religiously inspired political parties emphasize communion and the community, et cetera.”

In another piece, Louis Volont and Walter van Andel, both of the Culture Commons Quest Office, argue that an application of commonism can be found in blockchain. They argue that blockchain’s attributes can address the three elements of the tragedy of the commons, namely “overuse, (absence of) communication, and scale,” and that its decentralized design enables a “common” creation of value.

The authors nevertheless caution against a potential tragedy of the blockchain, asserting that:

“But what would happen when that one thing that makes the world go around – money (be it virtual, be it actual) – enters the picture? One does not need to look far: many cryptocurrencies, Bitcoin among them, are facilitated by blockchain technology. Even though it is ‘horizontally organized’, ‘decentralized’ or ‘functioning beyond the market and the state’, the blockchain-facilitated experiment of virtual money relates to nothing more than exchange value. Indeed, the core question one should ask when speculating on the potentialities of the blockchain experiment, is whether it is put to use for exchange value on the one hand, or for use value on the other. The latter, still, is where the commons begin. The former (that is, the imperatives of capital and its incessant drive for accumulation through trade), is where the blockchain mutates from a solution to a tragedy, to a comedy in itself.”

Data Was Supposed to Fix the U.S. Education System. Here’s Why It Hasn’t.


Simon Rodberg in Harvard Business Review: “For too long, the American education system failed too many kids, including far too many poor kids and kids of color, without enough public notice or accountability. To combat this, leaders of all political persuasions championed the use of testing to measure progress and drive better results. Measurement has become so common that in school districts from coast to coast you can now find calendars marked “Data Days,” when teachers are expected to spend time not on teaching, but on analyzing data: end-of-year and mid-year exams, interim assessments, science and social studies tests, teacher-created and computer-adaptive tests, surveys, and attendance and behavior notes. It’s been this way for more than 30 years, and it’s time to try a different approach.

The big numbers are necessary, but the more they proliferate, the less value they add. Data-based answers lead to further data-based questions, testing, and analysis; and the psychology of leaders and policymakers means that the hunt for data gets in the way of actual learning. The drive for data responded to a real problem in education, but bad thinking about testing and data use has made the data cure worse than the disease….

The leadership decision at stake is how much data to collect. I’ve heard variations on “In God we trust; all others bring data” at any number of conferences and beginning-of-school-year speeches. But the mantra “we believe in data” is really shorthand for “we believe our actions should be informed by the best available data.” In education, that mostly means testing. In other fields, the process is different, but the issue is the same. The key question is not “will the data be useful?” (of course it can be) or “will the data be interesting?” (yes, again). The proper question for leaders to ask is: will the data help us make better-enough decisions to be worth the cost of getting and using it? So far, the answer is “no.”

Nationwide data suggests that the growth of data-driven schooling hasn’t worked even by its own lights. Harvard professor Daniel Koretz says “The best estimate is that test-based accountability may have produced modest gains in elementary-school mathematics but no appreciable gains in either reading or high-school mathematics — even though reading and mathematics have been its primary focus.”

We wanted data to help us get past the problem of too many students learning too little, but it turns out that data is an insufficient, even misleading answer. It’s possible that all we’ve learned from our hyper-focus on data is that better instruction won’t come from more detailed information, but from changing what people do. That’s what data-driven reform is meant for, of course: convincing teachers of the need to change and focusing where they need to change….(More)”.

Firm Led by Google Veterans Uses A.I. to ‘Nudge’ Workers Toward Happiness


Daisuke Wakabayashi in the New York Times: “Technology companies like to promote artificial intelligence’s potential for solving some of the world’s toughest problems, like reducing automobile deaths and helping doctors diagnose diseases. A company started by three former Google employees is pitching A.I. as the answer to a more common problem: being happier at work.

The start-up, Humu, is based in Google’s hometown, and it builds on some of the so-called people-analytics programs pioneered by the internet giant, which has studied things like the traits that define great managers and how to foster better teamwork.

Humu wants to bring similar data-driven insights to other companies. It digs through employee surveys using artificial intelligence to identify one or two behavioral changes that are likely to make the biggest impact on elevating a work force’s happiness. Then it uses emails and text messages to “nudge” individual employees into small actions that advance the larger goal.

At a company where workers feel that the way decisions are made is opaque, Humu might nudge a manager before a meeting to ask the members of her team for input and to be prepared to change her mind. Humu might ask a different employee to come up with questions involving her team that she would like to have answered.

At the heart of Humu’s efforts is the company’s “nudge engine” (yes, it’s trademarked). It is based on the economist Richard Thaler’s Nobel Prize-winning research into how people often make decisions because of what is easier rather than what is in their best interest, and how a well-timed nudge can prompt them to make better choices.

Google has used this approach to coax employees into the corporate equivalent of eating their vegetables, prodding them to save more for retirement, waste less food at the cafeteria and opt for healthier snacks….

But will workers consider the nudges useful or manipulative?

Todd Haugh, an assistant professor of business law and ethics at Indiana University’s Kelley School of Business, said nudges could push workers into behaving in ways that benefited their employers’ interests over their own.

“The companies are the only ones who know what the purpose of the nudge is,” Professor Haugh said. “The individual who is designing the nudge is the one whose interests are going to be put in the forefront.”…(More)”.


The Datafication of Employment


Report by Sam Adler-Bell and Michelle Miller at the Century Foundation: “We live in a surveillance society. Our every preference, inquiry, whim, desire, relationship, and fear can be seen, recorded, and monetized by thousands of prying corporate eyes. Researchers and policymakers are only just beginning to map the contours of this new economy—and reckon with its implications for equity, democracy, freedom, power, and autonomy.

For consumers, the digital age presents a devil’s bargain: in exchange for basically unfettered access to our personal data, massive corporations like Amazon, Google, and Facebook give us unprecedented connectivity, convenience, personalization, and innovation. Scholars have exposed the dangers and illusions of this bargain: the corrosion of personal liberty, the accumulation of monopoly power, the threat of digital redlining, predatory ad-targeting, and the reification of class and racial stratification. But less well understood is the way data—its collection, aggregation, and use—is changing the balance of power in the workplace.

This report offers some preliminary research and observations on what we call the “datafication of employment.” Our thesis is that data-mining techniques innovated in the consumer realm have moved into the workplace. Firms that have made a fortune selling and speculating on data acquired from consumers in the digital economy are now increasingly doing the same with data generated by workers. Not only does this corporate surveillance enable a pernicious form of rent-seeking—in which companies generate huge profits by packaging and selling worker data in a marketplace hidden from workers’ eyes—but it also opens the door to an extreme informational asymmetry in the workplace that threatens to give employers nearly total control over every aspect of employment.

The report begins with an explanation of how a regime of ubiquitous consumer surveillance came about, and how it morphed into worker surveillance and the datafication of employment. The report then offers principles for action for policymakers and advocates seeking to respond to the harmful effects of this new surveillance economy. The final section concludes with a look forward at where the surveillance economy is going, and how researchers, labor organizers, and privacy advocates should prepare for this changing landscape….(More)”

The Everyday Life of an Algorithm


Book by Daniel Neyland: “This open access book begins with an algorithm—a set of IF…THEN rules used in the development of a new, ethical, video surveillance architecture for transport hubs. Readers are invited to follow the algorithm over three years, charting its everyday life. Questions of ethics, transparency, accountability and market value must be grasped by the algorithm in a series of ever more demanding forms of experimentation. Here the algorithm must prove its ability to get a grip on everyday life if it is to become an ordinary feature of the settings where it is being put to work. Through investigating the everyday life of the algorithm, the book opens a conversation with existing social science research that tends to focus on the power and opacity of algorithms. In this book we have unique access to the algorithm’s design, development and testing, but can also bear witness to its fragility and dependency on others….(More)”.
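For a flavor of what a set of IF…THEN rules in a transport-hub surveillance system might look like, here is a minimal, hypothetical sketch built around an abandoned-luggage rule. The event fields, thresholds, and alert labels are invented for illustration; they are not taken from the system the book documents.

```python
# Hypothetical IF...THEN rule set of the kind described above:
# classify a tracked object in a transport hub and emit an alert label.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    kind: str                 # e.g. "luggage" or "person"
    stationary_seconds: float # how long the object has not moved
    nearest_person_m: float   # distance to the closest tracked person

def classify(obj, max_idle=60.0, abandon_radius=3.0):
    """Apply simple IF...THEN rules; return an alert label or None."""
    if obj.kind == "luggage":
        # IF the luggage has been still for too long
        # AND no person is within the abandonment radius,
        # THEN raise an "abandoned-item" alert.
        if (obj.stationary_seconds > max_idle
                and obj.nearest_person_m > abandon_radius):
            return "abandoned-item"
    # Otherwise: no rule fired, nothing to report.
    return None

print(classify(TrackedObject("luggage", 120.0, 8.0)))  # abandoned-item
print(classify(TrackedObject("luggage", 30.0, 1.0)))   # None
```

The fragility the book describes becomes visible even in a sketch this small: every threshold (how long is "too long"? how near is "nearby"?) is a contestable judgment the algorithm must defend when put to work.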

Using Artificial Intelligence to Promote Diversity


Paul R. Daugherty, H. James Wilson, and Rumman Chowdhury at MIT Sloan Management Review:  “Artificial intelligence has had some justifiably bad press recently. Some of the worst stories have been about systems that exhibit racial or gender bias in facial recognition applications or in evaluating people for jobs, loans, or other considerations. One program was routinely recommending longer prison sentences for blacks than for whites on the basis of the flawed use of recidivism data.

But what if instead of perpetuating harmful biases, AI helped us overcome them and make fairer decisions? That could eventually result in a more diverse and inclusive world. What if, for instance, intelligent machines could help organizations recognize all worthy job candidates by avoiding the usual hidden prejudices that derail applicants who don’t look or sound like those in power or who don’t have the “right” institutions listed on their résumés? What if software programs were able to account for the inequities that have limited the access of minorities to mortgages and other loans? In other words, what if our systems were taught to ignore data about race, gender, sexual orientation, and other characteristics that aren’t relevant to the decisions at hand?

AI can do all of this — with guidance from the human experts who create, train, and refine its systems. Specifically, the people working with the technology must do a much better job of building inclusion and diversity into AI design by using the right data to train AI systems to be inclusive and thinking about gender roles and diversity when developing bots and other applications that engage with the public.

Design for Inclusion

Software development remains largely the province of men—only about one-quarter of computer scientists in the United States are women—and minority racial groups, including blacks and Hispanics, are underrepresented in tech work, too. Groups like Girls Who Code and AI4ALL have been founded to help close those gaps. Girls Who Code has reached almost 90,000 girls from various backgrounds in all 50 states, and AI4ALL specifically targets girls in minority communities….(More)”.