Who represents the human in the digital age?


Anni Rowland-Campbell at NPC: “In his book The Code Economy, Philip E. Auerswald talks about the long history of humans developing code as a mechanism by which to create and regulate activities and markets.[1] We have Codes of Practice, Ethical Codes, Building Codes, and Legal Codes, just to name a few.

Each and every one of these is based on the data of human behaviour, and that data can now be collected, analysed, harvested and repurposed as never before through the application of intelligent machines that operate and are instructed by algorithms. Anything that can be articulated as an algorithm—a self-contained sequence of actions to be performed—is now fertile ground for machine analysis, and increasingly machine activity.

So, what does this mean for us humans who are, ourselves, a conglomeration of DNA code? I have spent many years thinking about this. Not that long ago my friends and family tolerated my speculations with good humour, but a fair degree of scepticism. Now I run workshops for boards and even my children are listening far more intently, because people are sensing that the invasion of the ‘Social Machine’ is changing our relationship with such things as privacy, as well as with ourselves and each other. It is changing how we understand our role as humans.

The Social Machine is the name given to the systems we have created that blur the lines between computational processes and human input, of which the World Wide Web is the largest and best-known example. These ‘smart machines’ are increasingly pervading almost every aspect of human existence and, in many ways, getting to know us better than we know ourselves.

So who stands up for us humans? Who determines how society will harness and utilise the power of information technologies whilst ensuring that humans remain both relevant and important?…

Philanthropists must equip themselves with the knowledge they need in order to do good with digital

Consider the Luddites as they smashed the looms in the early 1800s. Their struggle is instructive because they were amongst the first to experience technological displacement. They sensed the degradation of humankind and fought for social equality and fairness in the distribution of the benefits of science and technology to all. If knowledge is power, philanthropy must arm itself with knowledge of digital to ensure the power of digital lies with the many and not the few.

The best place to start in understanding the digital world as it stands now is to see the world, and all human activities, through the lens of data, treated as a form of digital currency. This links back to the earlier idea of codes. Our activities, until recently, were tacit and experiential, but they are becoming increasingly explicit and quantified. Where we go, who we meet, what we say and what we do are all being registered, monitored and measured for as long as we are connected to the digital infrastructure.

A new currency is emerging that is based on the world’s most valuable resource: data. It is this currency that connects the arteries and capillaries, and reaches across all disciplines and fields of expertise. The kind of education that is required now is to be able to make connections and to see the opportunities in the interstice between policy and day-to-day reality.

The dominant players in this space thus far have been the large corporations and governments that have harnessed and exploited digital currencies for their own benefit. Shoshana Zuboff describes this as the ‘surveillance economy’. But this data actually belongs to each and every human who generates it. As people begin to wake up to this, we are gradually realising that this is what fuels the social currency of entrepreneurship, leadership and innovation, and provides the legitimacy upon which trust is based.

Trust is an outcome of experiences and interactions, but governments and corporations have transactionalised their interactions with citizens and consumers through exploiting data. As a consequence they have eroded the esteem in which they are held. The more they try to garner greater insights through data and surveillance, the more they alienate the people they seek to reach.

If we are smart, what we need to do as philanthropists is understand the fundamentals of data as a currency and integrate this into each and every interaction we have. This will enable us to create relationships with people that are based on authenticity of purpose, supported by the data of proof. Yes, there have been some instances where the sector has not done as well as it could and has betrayed that trust. But this only serves as a lesson in how fragile the world of trust and legitimacy is. It shows how crucial it is that we define all that we do in terms of social outcomes and impact, however that is defined….(More)”

Political Lawyering for the 21st Century


Paper by Deborah N. Archer: “Legal education purports to prepare the next generation of lawyers capable of tackling the urgent and complex social justice challenges of our time. But law schools are failing in that public promise. Clinical education offers the best opportunity to overcome those failings by teaching the skills lawyers need to tackle systemic and interlocking legal and social problems. But too often even clinical education falls short: it adheres to conventional pedagogical methodologies that are overly narrow and, in the end, limit students’ abilities to manage today’s complex racial and social justice issues. This article contends that clinical education needs to embrace and reimagine political lawyering for the 21st century in order to prepare aspiring lawyers to tackle both new and chronic issues of injustice through a broad array of advocacy strategies….(More)”.

Can the UN Include Indigenous Peoples in its Development Goals?: There’s An App For That


Article by Jacquelyn Kovarik at NACLA: “…Last year, during a high-level event of the General Assembly, a coalition of states along with the European Union and the International Labour Organization announced a new technology for monitoring the rights of Indigenous people. The proposal was a web application called “Indigenous Navigator,” designed to enable native peoples to monitor their rights from within their communities. The project is extremely seductive: why rely on the General Assembly to represent Indigenous peoples when they can represent themselves—remotely and via cutting-edge data-collecting technology? Could an app be the answer to over a decade of failed attempts to include Indigenous peoples in the international body?

The web application, which officially launched in 11 countries early this year, comprises four “community-based monitoring tools” designed to bridge the gap between Indigenous rights implementation and the United Nations goals. The toolbox, openly accessible to anyone with an internet connection, consists of: a set of two impressively comprehensive surveys designed to collect data on Indigenous rights at a community and national level; a comparative matrix that illustrates the links between the UN Declaration on the Rights of Indigenous Peoples and the UN development goals; an index designed to quickly compare Indigenous realities across communities, regions, or states; and a set of indicators designed to measure the realization of Indigenous rights in communities or states. The surveys are divided into sections based on the UN Declaration on the Rights of Indigenous Peoples, and include such categories as cultural integrity, land rights, access to justice, health, cross-border contacts, freedom of expression and media, education, and economic and social development. The surveys also include tips for methodological administration. For example, for questions about poverty rates in the community, a tip reads: “Most people/communities have their own criteria for defining who are poor and who are not poor. Here you are asked to estimate how many of the men of your people/community are considered poor, according to your own criteria for poverty.” It then suggests that it may be helpful to first discuss the perceived characteristics of a poor person within the community before answering the question….(More)”.
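The survey structure described above lends itself to a simple data model. The sketch below is purely illustrative: the Navigator’s actual schema and scoring method are not described in the excerpt, so the category names are taken from the survey sections listed above, and the unweighted index is an assumption.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the Navigator's real schema and aggregation are not
# described in the excerpt. Categories follow the survey sections named
# above; the unweighted index is an illustrative assumption.

CATEGORIES = [
    "cultural_integrity", "land_rights", "access_to_justice", "health",
    "cross_border_contacts", "freedom_of_expression_and_media",
    "education", "economic_and_social_development",
]

@dataclass
class CommunitySurvey:
    community: str
    scores: dict = field(default_factory=dict)  # category -> value in [0, 1]

    def index(self) -> float:
        """Unweighted mean over answered categories (assumed aggregation)."""
        answered = [self.scores[c] for c in CATEGORIES if c in self.scores]
        return sum(answered) / len(answered) if answered else 0.0

def compare(surveys):
    """Rank communities by index, mimicking the cross-community comparison."""
    return sorted(surveys, key=lambda s: s.index(), reverse=True)

a = CommunitySurvey("Community A", {"land_rights": 0.4, "health": 0.7})
b = CommunitySurvey("Community B", {"land_rights": 0.8, "health": 0.6})
for s in compare([a, b]):
    print(f"{s.community}: rights index {s.index():.2f}")
```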

Privacy and Interoperability Challenges Could Limit the Benefits of Education Technology


Report by Katharina Ley Best and John F. Pane: “The expansion of education technology is transforming the learning environment in classrooms, schools, school systems, online, and at home. The rise of education technology brings with it an increased opportunity for the collection and application of data, which are valuable resources for educators, schools, policymakers, researchers, and software developers.

RAND researchers examine some of the possible implications of growing data collection and availability related to education technology. Specifically, this Perspective discusses potential data infrastructure challenges that could limit data usefulness, considers data privacy implications in an education technology context, and reviews privacy principles that could help educators and policymakers evaluate the changing education data privacy landscape in anticipation of potential future changes to regulations and best practices….(More)”.

Emerging Labour Market Data Sources towards Digital Technical and Vocational Education and Training (TVET)


Paper by Nikos Askitas, Rafik Mahjoubi, Pedro S. Martins, Koffi Zougbede for Paris21/OECD: “Experience from both technology and policy making shows that solutions for labour market improvements are simply choices of new, more tolerable problems. All data solutions supporting digital Technical and Vocational Education and Training (TVET) will have to incorporate a roadmap of changes rather than an unrealistic super-solution. The ideal situation is a world in which labour market participants engage in intelligent strategic behavior in an informed, fair and sophisticated manner.

Labour market data captures transactions within labour market processes. In order to successfully capture such data, we need to understand the specifics of these market processes. Designing an ecosystem of labour market matching facilitators and rules of engagement for contributing to a lean and streamlined Labour Market Information System (LMIS) is the best way to create Big Data with context relevance. This is in contrast with pre-existing Big Data captured by global job boards or social media, for which relevance is limited by the technology access gap and its variations across the developing world.

Network effects occur in technology and job facilitation, as seen in the developed world. Managing and instigating the right network effects might be crucial to avoid fragmented stagnation and inefficiency. This is key to avoid throwing money behind wrong choices that do not gain traction.

A mixed mode approach is possibly the ideal approach for developing countries. Mixing offline and online elements correctly will be crucial in bridging the technology access gap and reaping the benefits of digitisation at the same time.

Properly incentivising the various entities is critical for progression. This applies especially to the private sector, which is significantly more agile and inventive, has “skin in the game” and a long-term commitment to the conditions in the field, has intimate knowledge of how to close the technology gap, and brings a better understanding of the particular ambient context it is operating in. To summarise: Big Data starts small.

Managing expectations and creating incentives for the various stakeholders will be crucial in establishing digitally supported TVET. Developing the right business models will be crucial in the short term and beyond, and it will be the result of creating the right mix of technological and policy expertise with good knowledge of the situation on the ground….(More)”.

The New York City Business Atlas: Leveling the Playing Field for Small Businesses with Open Data


Chapter by Stefaan Verhulst and Andrew Young in Smarter New York City: How City Agencies Innovate, edited by André Corrêa d’Almeida: “While retail entrepreneurs, particularly those operating in the small-business space, are experts in their respective trades, they often lack access to high-quality information about social, environmental, and economic conditions in the neighborhoods where they operate or are considering operating.

The New York City Business Atlas, conceived by the Mayor’s Office of Data Analytics (MODA) and the Department of Small Business Services, is designed to alleviate that information gap by providing a public web-based tool that gives small businesses access to high-quality data to help them decide where to establish a new business or expand an existing one. The tool brings together a diversity of data, including business-filing data from the Department of Consumer Affairs, sales-tax data from the Department of Finance, demographic data from the census, and traffic data from Placemeter, a New York City startup focusing on real-time traffic information.
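The Atlas’s value comes from joining heterogeneous sources on a shared geographic key. Below is a minimal sketch of that kind of integration; the real pipeline is not described here, and every column name and figure is invented.

```python
import pandas as pd

# Illustrative sketch only: the Business Atlas's actual pipeline is not
# detailed in the chapter excerpt. The point is the join of heterogeneous
# sources on a shared geographic key.

filings = pd.DataFrame({
    "neighborhood": ["Neighborhood A", "Neighborhood B"],
    "new_business_filings": [120, 85],          # e.g. Dept. of Consumer Affairs
})
sales = pd.DataFrame({
    "neighborhood": ["Neighborhood A", "Neighborhood B"],
    "sales_tax_collected": [1_200_000, 900_000],  # e.g. Dept. of Finance
})
traffic = pd.DataFrame({
    "neighborhood": ["Neighborhood A", "Neighborhood B"],
    "daily_pedestrians": [14_000, 11_000],      # e.g. real-time traffic feed
})

# One row per neighborhood combining all three sources, so a prospective
# owner can compare candidate locations side by side.
atlas = filings.merge(sales, on="neighborhood").merge(traffic, on="neighborhood")
print(atlas)
```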

The initial iteration of the Business Atlas made useful and previously inaccessible data available to small-business owners and entrepreneurs in an innovative manner. After a few years, however, it became clear that the tool was not experiencing the level of use or creating the level of demonstrable impact anticipated. Rather than continuing down the same path or abandoning the effort entirely, MODA pivoted to a new approach, moving from the Business Atlas as a single information-providing tool to the Business Atlas as a suite of capabilities aimed at bolstering New York’s small-business community.

Through problem- and user-centered efforts, the Business Atlas is now making important insights available to stakeholders who can put them to meaningful use—from how long it takes to open a restaurant in the city to which areas are most in need of education and outreach to improve their code compliance. This chapter considers the open data environment from which the Business Atlas was launched, details the initial version of the Business Atlas and the lessons it generated, and describes the pivot to this new approach….(More)”.

Making Wage Data Work: Creating a Federal Resource for Evidence and Transparency


Christina Pena at the National Skills Coalition: “Administrative data on employment and earnings, commonly referred to as wage data or wage records, can be used to assess the labor market outcomes of workforce, education, and other programs, providing policymakers, administrators, researchers, and the public with valuable information. However, there is no single readily accessible federal source of wage data which covers all workers. Noting the importance of employment and earnings data to decision makers, the Commission on Evidence-Based Policymaking called for the creation of a single federal source of wage data for statistical purposes and evaluation. They recommended three options for further exploration: expanding access to systems that already exist at the U.S. Census Bureau or the U.S. Department of Health and Human Services (HHS), or creating a new database at the U.S. Department of Labor (DOL).

This paper reviews current coverage and allowable uses, as well as federal and state actions required to make each option viable as a single federal source of wage data that can be accessed by government agencies and authorized researchers. Congress and the President, in conjunction with relevant federal and state agencies, should develop one or more of those options to improve wage information for multiple purposes. Although not assessed in the following review, financial as well as privacy and security considerations would influence the viability of each scenario. Moreover, if a system like the Commission-recommended National Secure Data Service for sharing data between agencies comes to fruition, then a wage system might require additional changes to work with the new service….(More)”

Digital Deceit II: A Policy Agenda to Fight Disinformation on the Internet


Report by Dipayan Ghosh and Ben Scott at New America: “We have developed here a broad policy framework to address the digital threat to democracy, building upon basic principles to recommend a set of specific proposals.

Transparency: As citizens, we have the right to know who is trying to influence our political views and how they are doing it. We must have explicit disclosure about the operation of dominant digital media platforms — including:

  • Real-time and archived information about targeted political advertising;
  • Clear accountability for the social impact of automated decision-making;
  • Explicit indicators for the presence of non-human accounts in digital media.

Privacy: As individuals with the right to personal autonomy, we must be given more control over how our data is collected, used, and monetized — especially when it comes to sensitive information that shapes political decision-making. A baseline data privacy law must include:

  • Consumer control over data through stronger rights to access and removal;
  • Transparency for users about the full extent of data usage, and meaningful consent;
  • Stronger enforcement with resources and authority for agency rule-making.

Competition: As consumers, we must have meaningful options to find, send and receive information over digital media. The rise of dominant digital platforms demonstrates how market structure influences social and political outcomes. A new competition policy agenda should include:

  • Stronger oversight of mergers and acquisitions;
  • Antitrust reform including new enforcement regimes, levies, and essential services regulation;
  • Robust data portability and interoperability between services.

There are no single-solution approaches to the problem of digital disinformation that are likely to change outcomes. … Awareness and education are the first steps toward organizing and action to build a new social contract for digital democracy….(More)”

How AI Addresses Unconscious Bias in the Talent Economy


Announcement by Bob Schultz at IBM: “The talent economy is one of the great outcomes of the digital era — and the ability to attract and develop the right talent has become a competitive advantage in most industries. According to a recent IBM study, which surveyed over 2,100 Chief Human Resource Officers, 33 percent of CHROs believe AI will revolutionize the way they do business over the next few years. In that same study, 65 percent of CEOs expect that people skills will have a strong impact on their businesses over the next several years. At IBM, we see AI as a tremendous untapped opportunity to transform the way companies attract, develop, and build the workforce for the decades ahead.

Consider this: The average hiring manager has hundreds of applicants a day for key positions and spends approximately six seconds on each resume. The ability to make the right decision without analytics and AI’s predictive abilities is limited and has the potential to create unconscious bias in hiring.

That is why today, I am pleased to announce the rollout of IBM Watson Recruitment’s Adverse Impact Analysis capability, which identifies instances of bias related to age, gender, race, education, or previous employer by assessing an organization’s historical hiring data and highlighting potential unconscious biases. This capability empowers HR professionals to take action against potentially biased hiring trends — and in the future, choose the most promising candidate based on the merit of their skills and experience alone. This announcement is part of IBM’s largest ever AI toolset release, tailor made for nine industries and professions where AI will play a transformational role….(More)”.
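The announcement does not specify how the Adverse Impact Analysis capability tests for bias. One widely used baseline is the EEOC “four-fifths rule,” which flags a group whose selection rate falls below 80 percent of the highest group’s rate. The sketch below assumes that rule and invented data; it should not be read as IBM’s actual method.

```python
import pandas as pd

# Sketch of one standard adverse-impact test, the EEOC "four-fifths rule":
# flag any group whose selection rate is below 80% of the highest group's.
# Whether IBM Watson Recruitment applies this exact test is an assumption,
# and the hiring records below are invented.

hires = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   1,   0,   1,   0,   0,   0],
})

rates = hires.groupby("group")["selected"].mean()   # selection rate per group
benchmark = rates.max()                             # highest-rate group

for group, rate in rates.items():
    ratio = rate / benchmark
    status = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"group {group}: rate {rate:.2f}, ratio {ratio:.2f} -> {status}")
```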

Computers Can Solve Your Problem. You May Not Like The Answer


David Scharfenberg at the Boston Globe: “Years of research have shown that teenagers need their sleep. Yet high schools often start very early in the morning. Starting them later in Boston would require tinkering with elementary and middle school schedules, too — a Gordian knot of logistics, pulled tight by the weight of inertia, that proved impossible to untangle.

Until the computers came along.

Last year, the Boston Public Schools asked MIT graduate students Sébastien Martin and Arthur Delarue to build an algorithm that could do the enormously complicated work of changing start times at dozens of schools — and rerouting the hundreds of buses that serve them….
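The article does not reproduce the MIT model, which optimised over real routes at city scale. The toy sketch below illustrates only the core trade-off it exploited: a single bus can serve two schools when their start times are staggered far enough apart, so choosing start times and minimising the fleet form one coupled problem. All names and numbers are invented, including the assumed sleep-research constraint on high school start times.

```python
from itertools import product

# Toy illustration only -- not the MIT algorithm. A bus can serve a second
# school if the gap between start times covers a full run, so staggering
# start times shrinks the fleet.

SCHOOLS = ["North HS", "East Elementary", "West Middle", "South Elementary"]
SLOTS = [7.5, 8.0, 8.5, 9.0]   # candidate start times, in hours
RUN_TIME = 0.75                # hours one bus needs per school run

def buses_needed(assignment):
    """Greedily count buses, reusing one whenever its last run has finished."""
    free_at = []  # the time each bus in the fleet becomes available again
    for start in sorted(assignment.values()):
        depart = start - RUN_TIME
        for i, t in enumerate(free_at):
            if t <= depart:        # this bus is free in time for the run
                free_at[i] = start
                break
        else:
            free_at.append(start)  # no bus free: grow the fleet
    return len(free_at)

def feasible(assignment):
    """Assumed constraint from sleep research: high schools start at 8 or later."""
    return all(t >= 8.0 for school, t in assignment.items() if "HS" in school)

candidates = (dict(zip(SCHOOLS, combo))
              for combo in product(SLOTS, repeat=len(SCHOOLS)))
best = min((a for a in candidates if feasible(a)), key=buses_needed)
print(best, "->", buses_needed(best), "buses")
```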

The algorithm was poised to put Boston on the leading edge of a digital transformation of government. In New York, officials were using a regression analysis tool to focus fire inspections on the most vulnerable buildings. And in Allegheny County, Pa., computers were churning through thousands of health, welfare, and criminal justice records to help identify children at risk of abuse….

While elected officials tend to legislate by anecdote and oversimplify the choices that voters face, algorithms can chew through huge amounts of complicated information. The hope is that they’ll offer solutions we’ve never imagined — much as Google Maps, when you’re stuck in traffic, puts you on an alternate route, down streets you’ve never traveled.

Dataphiles say algorithms may even allow us to filter out the human biases that run through our criminal justice, social service, and education systems. And the MIT algorithm offered a small window into that possibility. The data showed that schools in whiter, better-off sections of Boston were more likely to have the school start times that parents prize most — between 8 and 9 a.m. The mere act of redistributing start times, if aimed at solving the sleep deprivation problem and saving money, could bring some racial equity to the system, too.

Or, the whole thing could turn into a political disaster.

District officials expected some pushback when they released the new school schedule on a Thursday night in December, with plans to implement in the fall of 2018. After all, they’d be messing with the schedules of families all over the city.

But no one anticipated the crush of opposition that followed. Angry parents signed an online petition and filled the school committee chamber, turning the plan into one of the biggest crises of Mayor Marty Walsh’s tenure. The city summarily dropped it. The failure would eventually play a role in the superintendent’s resignation.

It was a sobering moment for a public sector increasingly turning to computer scientists for help in solving nagging policy problems. What had gone wrong? Was it a problem with the machine? Or was it a problem with the people — both the bureaucrats charged with introducing the algorithm to the public, and the public itself?…(More)”