Do we know what jobs are in high demand?


Emma Rindlisbacher at Work Shift: “…Measuring which fields are in demand is harder than it sounds. Many of the available data sources, experts say, have significant flaws. And that causes problems for education providers who are trying to understand market demand and map their programs to it.

“If you are in higher education and trying to understand where the labor market is going, use BLS data as a general guide but do not rely too heavily on it when it comes to building programs and making investments,” said Jason Tyszko, the Vice President of the Center for Education and Workforce at the US Chamber of Commerce Foundation.

What’s In-Demand?

Why it matters: Colleges are turning to labor market data as they face increasing pressure from lawmakers and the public to demonstrate value and financial ROI. A number of states also have launched specialized grant and “free college” programs for residents pursuing education in high-demand fields. And many require state agencies to determine which fields are in high demand as part of workforce planning processes.

Virginia is one of those states. To comply with state law, the Board of Workforce Development has to regularly update a list of high demand occupations. Deciding how to do so can be challenging.

According to a presentation given at a September 2021 meeting, the board chose to determine which occupations are in high demand by using BLS data. The reason: the BLS data is publicly available.

“Although in some instances, proprietary data sources have different or additional nuances, in service of guiding principle #1 (transparency, replicability), our team has relied exclusively on publicly available data for this exercise,” the presentation said. (A representative from the board declined to comment, citing the still ongoing nature of constructing the high demand occupations list.)

The limits of the gold standard

For institutions looking to study job market trends, there are typically two main data sources available. The first, from BLS, consists of official government statistics primarily designed to track economic indicators such as the unemployment rate. The second, from proprietary companies such as Emsi Burning Glass, typically relies on postings to job board websites like LinkedIn.

The details: The two sources have different strengths and weaknesses. The Emsi Burning Glass data can be considered “real time” data, because it identifies new job postings as they are released online. The BLS data, on the other hand, is updated less frequently but is comprehensive.

The BLS data is designed to compare economic trends across decades, and to map to state systems so that statistics like unemployment rates can be compared across states. For those reasons, the agency is reluctant to change the definitions underlying the data. That consistency, however, can make it difficult for education providers to use the data to determine which fields are in high demand.

BLS data is broken down according to the Standard Occupational Classification system, or SOC, a taxonomy used to classify different occupations. That taxonomy is designed to be public-facing—the BLS website, for example, features a guide for job seekers that purports to tell them which occupation codes have the highest wages or the greatest potential for growth.

But the taxonomy was last updated in 2010, according to a BLS spokesperson…(More)”.
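For readers who want to poke at the underlying numbers themselves, BLS publishes its time series through a free JSON API. The sketch below shows the general shape of a query against the documented v2 endpoint; the series ID is a placeholder rather than a real occupation code, and depending on request volume BLS may ask for a (free) registration key—check the current terms at bls.gov/developers.

```python
# A minimal sketch of pulling a time series from the BLS Public Data API (v2).
# The series ID below is a placeholder -- look up real series formats at
# https://www.bls.gov/help/hlpforma.htm before using this.
import json
import urllib.request

BLS_API = "https://api.bls.gov/publicAPI/v2/timeseries/data/"

def fetch_bls_series(series_ids, start_year, end_year, registration_key=None):
    """POST a list of series IDs to the BLS public API and return parsed JSON."""
    payload = {
        "seriesid": list(series_ids),
        "startyear": str(start_year),
        "endyear": str(end_year),
    }
    if registration_key:
        payload["registrationkey"] = registration_key
    req = urllib.request.Request(
        BLS_API,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Placeholder series ID -- substitute one for the occupation you care about.
    result = fetch_bls_series(["OEUN000000000000000000001"], 2019, 2021)
    for series in result.get("Results", {}).get("series", []):
        for point in series["data"]:
            print(series["seriesID"], point["year"], point["period"], point["value"])
```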

New York City passed a bill requiring ‘bias audits’ of AI hiring tech


Kate Kaye at Protocol: “Let the AI auditing vendor brigade begin. A year after it was introduced, the New York City Council passed a bill earlier this week requiring companies that sell AI technologies for hiring to obtain audits assessing the potential of those products to discriminate against job candidates. The bill requiring “bias audits” passed with overwhelming support in a 38-4 vote.

The bill is intended to weed out the use of tools that enable already unlawful employment discrimination in New York City. If signed into law, it will require providers of automated employment decision tools to have those systems evaluated each year by an audit service and provide the results to companies using those systems.

AI for recruitment can include software that uses machine learning to sift through resumes and help make hiring decisions, systems that attempt to decipher the sentiments of a job candidate, or even tech involving games to pick up on subtle clues about someone’s hiring worthiness. The NYC bill attempts to encompass the full gamut of AI by covering everything from old-school decision trees to more complex systems operating through neural networks.

The legislation calls on companies using automated decision tools for recruitment not only to tell job candidates when they’re being used, but to tell them what information the technology used to evaluate their suitability for a job.

The bill, however, fails to go into detail on what constitutes a bias audit other than to define one as “an impartial evaluation” that involves testing. And it already has critics who say it was rushed into passage and doesn’t address discrimination related to disability or age…(More)”.
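The bill's silence on what a "bias audit" must test leaves room for interpretation. One long-standing statistic an auditor could reach for—though the bill does not prescribe it—is the EEOC's "four-fifths rule": if any group's selection rate falls below 80 percent of the highest group's rate, the tool warrants scrutiny. A minimal sketch of that calculation:

```python
# A minimal sketch of one common employment-bias test: the EEOC "four-fifths
# rule". It illustrates the kind of statistic a bias audit might compute;
# the NYC bill itself does not prescribe any particular test.
from collections import Counter

def adverse_impact_ratios(records):
    """records: iterable of (group, selected) pairs, selected being a bool.
    Returns {group: selection_rate / highest_group_selection_rate}."""
    totals, selected = Counter(), Counter()
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

if __name__ == "__main__":
    # Toy data: (group, was the candidate advanced by the screening tool?)
    sample = ([("A", True)] * 40 + [("A", False)] * 60
              + [("B", True)] * 25 + [("B", False)] * 75)
    for group, ratio in sorted(adverse_impact_ratios(sample).items()):
        flag = "FLAG" if ratio < 0.8 else "ok"
        print(f"group {group}: impact ratio {ratio:.2f} ({flag})")
```

On the toy data, group B's selection rate is 62.5 percent of group A's, below the four-fifths threshold, so the tool would be flagged for closer review.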

The Census Mapper


Google blog: “…The U.S. Census is one of the largest data sets journalists can access. It has layers and layers of important data that can help reporters tell detailed stories about their own communities. But the challenge is sorting through that data and visualizing it in a way that helps readers understand trends and the bigger picture.

Today we’re launching a new tool to help reporters dig through all that data to find stories and embed visualizations on their sites. The Census Mapper project is an embeddable map that displays Census data at the national, state and county level, as well as census tracts. It was produced in partnership with Pitch Interactive and Big Local News, as part of the 2020 Census Co-op (supported by the Google News Initiative and in cooperation with the JSK Journalism Fellowships).

[Image: a country-level view of the Census Mapper, with arrows across the US depicting population movement drawn from Census data.]

Census Mapper shows where populations have grown over time.

The Census data is pulled from data collected and processed by The Associated Press, one of the Census Co-op partners. Census Mapper then lets local journalists easily embed maps showing population change at any level, helping them tell powerful stories in a more visual way about their communities.
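The arithmetic behind a population-change map is straightforward; what a tool like Census Mapper adds is the cartography and the embed plumbing. A sketch of the core computation in pandas, with a hypothetical file name and column names standing in for whatever shape an AP-processed extract actually takes:

```python
# A sketch of the arithmetic behind a population-change map. The file and
# column names here are hypothetical stand-ins for an actual census extract.
import pandas as pd

counties = pd.read_csv("county_populations.csv")  # hypothetical extract
counties["pct_change"] = (
    (counties["pop_2020"] - counties["pop_2010"]) / counties["pop_2010"] * 100
)
# Fastest-growing counties first -- the kind of ranking a local reporter
# might lead a story with.
print(counties.sort_values("pct_change", ascending=False).head(10))
```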

[Image: changing demographic data for North Carolina, with arrows showing movement around the state.]

With the tool, you can zoom into states and below, such as North Carolina, shown here.

As part of our investment in data journalism we’re also making improvements to our Common Knowledge Project, a data explorer and visual journalism project to allow US journalists to explore local data. Built with journalists for journalists, the new version of Common Knowledge integrates journalist feedback and new features including geographic comparisons, new charts and visuals…(More)”.

Why Are We Failing at AI Ethics?


Article by Anja Kaspersen and Wendell Wallach: “…Extremely troubling is the fact that the people who are most vulnerable to negative impacts from such rapid expansions of AI systems are often the least likely to be able to join the conversation about these systems, either because they have no or restricted digital access or their lack of digital literacy makes them ripe for exploitation.

Such vulnerable groups are often theoretically included in discussions, but not empowered to take a meaningful part in making decisions. This engineered inequity, alongside human biases, risks amplifying otherness through neglect, exclusion, misinformation, and disinformation.

Society should be deeply concerned that nowhere near enough substantive progress is being made to develop and scale actionable legal and ethical oversight while simultaneously addressing existing inequalities.

So, why hasn’t more been done? There are three main issues at play: 

First, many of the existing dialogues around the ethics of AI and governance are too narrow and fail to understand the subtleties and life cycles of AI systems and their impacts.

Often, these efforts focus only on the development and deployment stages of the technology life cycle, when many of the problems occur during the earlier stages of conceptualization, research, and design. Or they fail to comprehend whether an AI system operates at the level of maturity required to avoid failure in complex adaptive systems.

Or they focus on some aspects of ethics, while ignoring other aspects that are more fundamental and challenging. This is the problem known as “ethics washing” – creating a superficially reassuring but illusory sense that ethical issues are being adequately addressed, to justify pressing forward with systems that end up deepening current patterns.

Let’s be clear: every choice entails tradeoffs. “Ethics talk” is often about underscoring the various tradeoffs entailed in differing courses of action. Once a course has been selected, comprehensive ethical oversight is also about addressing the considerations not accommodated by the options selected, which is essential to any future verification effort. This vital part of the process is often a stumbling block for those trying to address the ethics of AI.

The second major issue is that to date all the talk about ethics is simply that: talk. 

We’ve yet to see these discussions translate into meaningful change in managing the ways in which AI systems are being embedded into various aspects of our lives….

A third issue at play is that discussions on AI and ethics are still largely confined to the ivory tower.

There is an urgent need for more informed public discourse and serious investment in civic education around the societal impact of the bio-digital revolution. This could help address the first two problems, but most of what the general public currently perceives about AI comes from sci-fi tropes and blockbuster movies.

A few examples of algorithmic bias have penetrated the public discourse. But the most headline-grabbing research on AI and ethics tends to focus on far-horizon existential risks. More effort needs to be invested in communicating to the public that, beyond the hypothetical risks of future AI, there are real and imminent risks posed by how, and why, we embed the AI systems that already shape everyone’s daily lives….(More)”.

Do Awards Incentivize Non-Winners to Work Harder on CSR?


Article by Jiangyan Li, Juelin Yin, Wei Shi, and Xiwei Yi: “As corporate lists and awards that rank and recognize firms for superior social reputation have proliferated in recent years, the field of CSR is also replete with various types of awards given out to firms or CEOs, such as Fortune’s “Most Admired Companies” rankings and “Best 100 Companies to Work For” lists. Such awards serve both to reward and to incentivize firms to become more dedicated to CSR. Prior research has primarily focused on the effects of awards on award-winning firms; however, the effectiveness and implications of such awards as incentives for non-winning firms remain understudied. Therefore, in our article “Keeping up with the Joneses: Role of CSR Awards in Incentivizing Non-Winners’ CSR,” published in Business & Society, we ask whether such CSR awards can successfully incentivize non-winning firms to catch up with their winning competitors.

Drawing on the awareness-motivation-capability (AMC) framework developed in the competitive dynamics literature, we use a sample of Chinese listed firms from 2009 to 2015 to investigate how competitors’ CSR award winning can influence focal firms’ CSR. The empirical results show that non-winning firms indeed improve their CSR after their competitors have won CSR awards. However, non-winning firms’ improvement in CSR may vary in different scenarios. For instance, media exposure can play an important informational role in reducing information asymmetries and inducing competitive actions among competitors; as a result, non-winning firms’ improvement in CSR is more salient when award-winning firms are more visible in the media. Meanwhile, when CSR award winners perform better financially, non-winners will be more motivated to respond to their competitors’ wins. Further, firms with a higher level of prior CSR are more capable of improving their CSR and therefore are more likely to respond to their competitors’ wins…(More)”.

The “9Rs Framework”: Establishing the Business Case for Data Collaboration and Re-Using Data in the Public Interest


Article by Stefaan G. Verhulst, Andrew Young, and Andrew J. Zahuranec: “When made accessible and re-used responsibly, privately held data has the potential to generate enormous public value. Whether it’s enabling better science, supporting evidence-based government programs, or helping community groups to identify people who need help, data can be used to make better public interest decisions and improve people’s lives.

Yet, for all the discussion of the societal value of having organizations provide access to their data, there’s been little discussion of the business case for making data available for reuse. What motivates an organization to make its datasets accessible for societal purposes? How does doing so support their organizational goals? What’s the return on investment of using organizational resources to make data available to others?

[Graphic: The 9Rs Framework: The Business Case for Data Reuse in the Public Interest]

The Open Data Policy Lab addresses these questions with its “9Rs Framework,” a method for describing and identifying the business case for data reuse for the public good. The 9Rs Framework consists of nine motivations identified through several years of studying and establishing data collaboratives, categorized by different types of return on investment: license to operate, brand equity, or knowledge and insights. Considered together, these nine motivations add up to a model to help organizations understand the business value of making their data assets accessible….(More)”.

Falling in love with the problem, not the solution


Blog by Kyle Novak: “Fall in love with the problem, not your solution.” It’s a maxim that I first heard spoken a few years ago by USAID’s former Chief Innovation Officer Ann Mei Chang. I’ve found myself frequently reflecting on those words as I’ve been thinking about the challenges of implementing public policy. I spent the past year on Capitol Hill in Washington, D.C. working as a legislative fellow, funded through a grant that brings scientists into the federal government to improve evidence-based policymaking. I spent much of the year trying to better understand how legislation and oversight work together in the context of policy and politics. To learn what makes good public policy, I wanted to understand how to better implement it. Needless to say, I took a course in Problem Driven Iterative Adaptation (PDIA), a framework to manage risk in complex policy challenges by embracing experimentation and “learning through doing.”

Congress primarily uses legislation and budget to control and implement policy initiatives through the federal agencies. Legislation is drafted and introduced by lawmakers with input from constituents, interest groups, and agencies; the Congressional budget is explicitly planned out each year based on input from the agencies; and accountability is built into the process through oversight mechanisms. Congress largely provides the planning and lock-in of “plan and control” management based on majority political party control and congruence with policy priorities of the Administration.  But, it is difficult to successfully implement a plan-and-control approach when political, social, or economic situations are changing.

Take the problem of data privacy and protection. A person’s identity is becoming largely digital. Every day each of us produces almost a gigabyte of information—our location is shared by our mobile phones, our preferences and interpersonal connections are tagged on social media, our purchases analyzed, and our actions recorded on increasingly ubiquitous surveillance cameras. The monetization of this information, which is bought and sold through data brokers, enables an invasive and oppressive system that affects all aspects of our lives. Algorithms mine our data to make decisions about our employment, healthcare, education, credit, and policing. Machine learning and digital redlining skirt protections that prohibit discrimination on the basis of race, gender, and religion. Targeted and automated disinformation campaigns suppress fundamental rights of speech and expression. And digital technologies magnify existing inequities. While misuse of personal data has the potential to do incredible harm, responsible use of that data has the power to do incredible good. The challenge of data privacy and protection is one that impacts all of us, our civil liberties, and the foundations of a democratic society.

The success of members of Congress is often measured by the solutions they propose, not the problems that they identify….(More)”

Has COVID-19 been the making of Open Science?


Article by Lonni Besançon, Corentin Segalas and Clémence Leyrat: “Although many concepts fall under the umbrella of Open Science, some of its key concepts are: Open Access, Open Data, Open Source, and Open Peer Review. How far these four principles were embraced by researchers during the pandemic, and where there is room for improvement, is what we, as early career researchers, set out to assess by looking at data on scientific articles published during the Covid-19 pandemic….Open Source and Open Data practices consist in making all the data and materials used to gather or analyse data available on relevant repositories. While we can find incredibly useful datasets shared publicly on COVID-19 (for instance those provided by the European Centre for Disease Control), they remain the exception rather than the norm. A spectacular example of this was the set of papers utilising data from the company Surgisphere, which led to retractions in The Lancet and The New England Journal of Medicine. In our paper, we highlight four papers that could have been retracted much earlier (and perhaps would never have been accepted) had the data been made accessible from the time of publication. As we argue in our paper, this presents a clear case for making open data and open source the default, with exceptions for privacy and safety. While some journals already have such policies, we go further in asking that, when data cannot be shared publicly, editors/publishers and authors/institutions should agree on a third party to check the existence and reliability/validity of the data and the results presented. This not only would strengthen the review process, but also enhance the reproducibility of research and further accelerate the production of new knowledge through data and code sharing…(More)”.

The Uselessness of Useful Knowledge


Article by Robbert Dijkgraaf at Quanta Magazine (illustration by Maggie Chiang): “Is artificial intelligence the new alchemy? That is, are the powerful algorithms that control so much of our lives — from internet searches to social media feeds — the modern equivalent of turning lead into gold? Moreover: Would that be such a bad thing?

According to the prominent AI researcher Ali Rahimi and others, today’s fashionable neural networks and deep learning techniques are based on a collection of tricks, topped with a good dash of optimism, rather than systematic analysis. Modern engineers, the thinking goes, assemble their codes with the same wishful thinking and misunderstanding that the ancient alchemists had when mixing their magic potions.

It’s true that we have little fundamental understanding of the inner workings of self-learning algorithms, or of the limits of their applications. These new forms of AI are very different from traditional computer codes that can be understood line by line. Instead, they operate within a black box, seemingly unknowable to humans and even to the machines themselves.

This discussion within the AI community has consequences for all the sciences. With deep learning impacting so many branches of current research — from drug discovery to the design of smart materials to the analysis of particle collisions — science itself may be at risk of being swallowed by a conceptual black box. It would be hard to have a computer program teach chemistry or physics classes. By deferring so much to machines, are we discarding the scientific method that has proved so successful, and reverting to the dark practices of alchemy?

Not so fast, says Yann LeCun, co-recipient of the 2018 Turing Award for his pioneering work on neural networks. He argues that the current state of AI research is nothing new in the history of science. It is just a necessary adolescent phase that many fields have experienced, characterized by trial and error, confusion, overconfidence and a lack of overall understanding. We have nothing to fear and much to gain from embracing this approach. It’s simply that we’re more familiar with its opposite.

After all, it’s easy to imagine knowledge flowing downstream, from the source of an abstract idea, through the twists and turns of experimentation, to a broad delta of practical applications. This is the famous “usefulness of useless knowledge,” advanced by Abraham Flexner in his seminal 1939 essay (itself a play on the very American concept of “useful knowledge” that emerged during the Enlightenment).

A canonical illustration of this flow is Albert Einstein’s general theory of relativity. It all began with the fundamental idea that the laws of physics should hold for all observers, independent of their movements. He then translated this concept into the mathematical language of curved space-time and applied it to the force of gravity and the evolution of the cosmos. Without Einstein’s theory, the GPS in our smartphones would drift off course by about 7 miles a day…(More)”.
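That figure holds up as back-of-the-envelope arithmetic. GPS receivers turn clock time into distance at the speed of light, and the two relativistic corrections on a GPS satellite clock—gravitational blueshift, which makes it run fast, and velocity time dilation, which slows it—net out to roughly 38 microseconds per day (standard textbook values):

```latex
% Net relativistic clock offset for a GPS satellite clock per day:
\Delta t \;\approx\; \underbrace{+45.9\,\mu\mathrm{s}}_{\text{gravitational}}
\;+\; \underbrace{(-7.2\,\mu\mathrm{s})}_{\text{velocity}}
\;\approx\; 38.7\,\mu\mathrm{s}

% Converted to ranging error at the speed of light:
\Delta x \;\approx\; c\,\Delta t
\;\approx\; (3.0\times 10^{8}\,\mathrm{m/s})(38.7\times 10^{-6}\,\mathrm{s})
\;\approx\; 11.6\,\mathrm{km} \;\approx\; 7\ \text{miles per day}
```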

Nonprofit Websites Are Riddled With Ad Trackers


Article by Alfred Ng and Maddy Varner: “Last year, nearly 200 million people visited the website of Planned Parenthood, a nonprofit that many people turn to for very private matters like sex education, access to contraceptives, and access to abortions. What those visitors may not have known is that as soon as they opened plannedparenthood.org, some two dozen ad trackers embedded in the site alerted a slew of companies whose business is not reproductive freedom but gathering, selling, and using browsing data.

The Markup ran Planned Parenthood’s website through our Blacklight tool and found 28 ad trackers and 40 third-party cookies tracking visitors, in addition to so-called “session recorders” that could be capturing the mouse movements and keystrokes of people visiting the homepage in search of things like information on contraceptives and abortions. The site also contained trackers that tell Facebook and Google if users visited the site.

The Markup’s scan found Planned Parenthood’s site communicating with companies like Oracle, Verizon, LiveRamp, TowerData, and Quantcast—some of which have made a business of assembling and selling access to masses of digital data about people’s habits.

Katie Skibinski, vice president for digital products at Planned Parenthood, said the data collected on its website is “used only for internal purposes by Planned Parenthood and our affiliates,” and the company doesn’t “sell” data to third parties.

“While we aim to use data to learn how we can be most impactful, at Planned Parenthood, data-driven learning is always thoughtfully executed with respect for patient and user privacy,” Skibinski said. “This means using analytics platforms to collect aggregate data to gather insights and identify trends that help us improve our digital programs.”

Skibinski did not dispute that the organization shares data with third parties, including data brokers.

A Blacklight scan of Planned Parenthood Gulf Coast—a localized website specifically for people in the Gulf region, including Texas, where abortion has been essentially outlawed—churned up similar results.

Planned Parenthood is not alone when it comes to nonprofits, some operating in sensitive areas like mental health and addiction, gathering and sharing data on website visitors.

Using our Blacklight tool, The Markup scanned more than 23,000 websites of nonprofit organizations, including those belonging to abortion providers and nonprofit addiction treatment centers. The Markup used the IRS’s nonprofit master file to identify nonprofits that have filed a tax return since 2019 and that the agency categorizes as focusing on areas like mental health and crisis intervention, civil rights, and medical research. We then examined each nonprofit’s website as publicly listed in GuideStar. We found that about 86 percent of them had third-party cookies or tracking network requests. By comparison, when The Markup did a survey of the top 80,000 websites in 2020, we found 87 percent used some type of third-party tracking.
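Blacklight itself drives a headless browser and records every network request a page makes. A crude static approximation of the idea—enough to show the flavor—is to fetch a homepage and list the hosts its scripts, images, and iframes load from. A stdlib-only sketch follows; because it only inspects the initial HTML, it will miss trackers injected by JavaScript, which is where many of them actually arrive:

```python
# A rough, static approximation of a third-party-request scan. Blacklight
# executes pages in a real browser; this sketch only parses the initial
# HTML, so dynamically injected trackers will not show up.
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urlparse

class SrcCollector(HTMLParser):
    """Collects the hostnames that scripts, images, and iframes load from."""
    def __init__(self):
        super().__init__()
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "img", "iframe"):
            for name, value in attrs:
                if name == "src" and value and value.startswith("http"):
                    self.hosts.add(urlparse(value).hostname)

def third_party_hosts(url):
    first_party = urlparse(url).hostname
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = SrcCollector()
    parser.feed(html)
    # Keep hosts that are neither the site itself nor one of its subdomains.
    return {
        h for h in parser.hosts
        if h and h != first_party and not h.endswith("." + first_party)
    }

if __name__ == "__main__":
    for host in sorted(third_party_hosts("https://example.org")):
        print(host)
```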

About 11 percent of the 23,856 nonprofit websites we scanned had a Facebook pixel embedded, while 18 percent used the Google Analytics “Remarketing Audiences” feature.

The Markup found that 439 of the nonprofit websites loaded scripts called session recorders, which can monitor visitors’ clicks and keystrokes. Eighty-nine of those were for websites that belonged to nonprofits that the IRS categorizes as primarily focusing on mental health and crisis intervention issues…(More)”.