Better ways to measure the new economy


Valerie Hellinghausen and Evan Absher at the Kauffman Foundation: “The old measure of ‘jobs numbers’ as an economic indicator is shifting to new metrics to measure a new economy.

With more communities embracing inclusive entrepreneurial ecosystems as the new model of economic development, entrepreneurs, ecosystem builders, and government agencies – at all levels – need to work together on data-driven initiatives. While established measures still have a place, new metrics have the potential to deliver the timely and granular information that is more useful at the local level….

Three better ways to measure the new economy:

  1. National and local datasets: Numbers used to discuss the economy are national-level and usually not very timely. These numbers are useful for understanding large trends, but they fail to capture local realities. One way to better measure local economies is to use local administrative datasets. There are many obstacles to this approach, but the idea is gaining interest. Data infrastructure, policies, and projects are building connections between local and national agencies. Joining different levels of government data will provide national scale with local specificity.
  2. Private and public data: The words private and public typically signal privacy issues, but there is another public and private dimension: public institutions possess vast amounts of data, but so do private companies. For instance, sites like PayPal, Square, Amazon, and Etsy hold data that could provide a real-time assessment of an individual company’s financial health. If combined with local administrative information like tax, wage, and banking data, the concept of credit and risk could be expanded to benefit those currently underserved. Fair and open use of private data could open credit to currently underfunded entrepreneurs.
  3. New metrics: Developing connections between different datasets will result in new metrics of entrepreneurial activity: metrics that measure human connection, social capital, community creativity, and quality of life; metrics that capture economic activity at the community level and in real time. For example, the Kauffman Foundation has funded research that uses labor data from private job-listing sites to better understand the match between the workforce entrepreneurs need and the workforce available within the immediate community. But new metrics are not enough; they must connect to the final goal of economic independence. Using new metrics to help ecosystems understand how policies and programs impact entrepreneurship is the final step to measuring local economies….(More)”.
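
The data linkage described in the first and third items can be made concrete. Below is a minimal, hypothetical sketch (Python with pandas) of joining a national dataset to local administrative records on a shared geographic key, then deriving a simple workforce-match metric from job-listing data. All file and column names here are illustrative assumptions, not references to real datasets.

```python
import pandas as pd

# National indicators, one row per county (assumed columns: county_fips,
# unemployment_rate). File names are hypothetical throughout.
national = pd.read_csv("national_indicators.csv")
# Local administrative records (assumed columns: county_fips, new_business_filings).
local = pd.read_csv("local_admin_records.csv")

# Joining on a shared geographic key combines national scale with local specificity.
combined = national.merge(local, on="county_fips", how="inner")

# Skills demanded in scraped job listings vs. skills in the local workforce
# (assumed columns: county_fips, skill).
demand = pd.read_csv("job_listings.csv")
supply = pd.read_csv("workforce_skills.csv")

def skill_match(county: str) -> float:
    """Share of locally demanded skills that the local workforce can supply."""
    wanted = set(demand.loc[demand["county_fips"] == county, "skill"])
    offered = set(supply.loc[supply["county_fips"] == county, "skill"])
    return len(wanted & offered) / len(wanted) if wanted else float("nan")

combined["skill_match"] = combined["county_fips"].map(skill_match)
print(combined.head())
```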

The effects of ICT use and ICT laws on corruption: A general deterrence theory perspective


Anol Bhattacherjee and Utkarsh Shrivastava in Government Information Quarterly: “Investigations of white-collar crimes such as corruption are often hindered by the lack of information or physical evidence. Information and communication technologies (ICT), by virtue of their ability to monitor, track, record, analyze, and share vast amounts of information, may help countries identify and prosecute criminals, and deter future corruption. While prior studies have demonstrated that ICT is an important tool in reducing corruption at the country level, they provide little explanation as to how ICT influences corruption and when it works best.

We explore these gaps in the literature using the hypothetico-deductive approach to research, by using general deterrence theory to postulate a series of main and moderating effects relating ICT use and corruption, and then testing those effects using secondary data analysis. Our analysis suggests that ICT use influences corruption by increasing the certainty and celerity of punishment related to corruption. Moreover, ICT laws moderate the effect of ICT use on corruption, suggesting that ICT investments may have limited effect on corruption, unless complemented with appropriate ICT laws. Implications of our findings for research and practice are discussed….(More)”.
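
The “moderating effect” the authors describe is typically estimated with an interaction term. Here is a hedged sketch of what such a country-level regression might look like; the variable names, data file, and control variable are illustrative assumptions, not details from the paper.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical country-year panel: a corruption index, ICT use, the strength
# of ICT laws, and a control (all column names are assumptions).
df = pd.read_csv("country_year_data.csv")

# "ict_use * ict_laws" expands to both main effects plus their interaction.
# The interaction coefficient captures moderation: the effect of ICT use on
# corruption is allowed to vary with the strength of ICT laws.
model = smf.ols("corruption ~ ict_use * ict_laws + gdp_per_capita", data=df).fit()
print(model.summary())
```

Under the paper’s finding, one would expect a significant interaction coefficient: ICT use alone does little unless complemented by ICT laws.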

Behavioural science and policy: where are we now and where are we going?


Michael Sanders et al in Behavioral Public Policy: “The use of behavioural sciences in government has expanded and matured in the last decade. Since the Behavioural Insights Team (BIT) has been part of this movement, we sketch out the history of the team and the current state of behavioural public policy, recognising that other works have already told this story in detail. We then set out two clusters of issues that have emerged from our work at BIT. The first cluster concerns current challenges facing behavioural public policy: the long-term effects of interventions; repeated exposure effects; problems with proxy measures; spillovers and general equilibrium effects and unintended consequences; cultural variation; ‘reverse impact’; and the replication crisis. The second cluster concerns opportunities: influencing the behaviour of government itself; scaling interventions; social diffusion; nudging organisations; and dealing with thorny problems. We conclude that the field will need to address these challenges and take these opportunities in order to realise the full potential of behavioural public policy….(More)”.

Odd Numbers: Algorithms alone can’t meaningfully hold other algorithms accountable


Frank Pasquale at Real Life Magazine: “Algorithms increasingly govern our social world, transforming data into scores or rankings that decide who gets credit, jobs, dates, policing, and much more. The field of “algorithmic accountability” has arisen to highlight the problems with such methods of classifying people, and it has great promise: Cutting-edge work in critical algorithm studies applies social theory to current events; law and policy experts seem to publish new articles daily on how artificial intelligence shapes our lives; and a growing community of researchers has developed a field known as “Fairness, Accountability, and Transparency in Machine Learning.”

The social scientists, attorneys, and computer scientists promoting algorithmic accountability aspire to advance knowledge and promote justice. But what should such “accountability” more specifically consist of? Who will define it? At a two-day, interdisciplinary roundtable on AI ethics I recently attended, such questions featured prominently, and humanists, policy experts, and lawyers engaged in a free-wheeling discussion about topics ranging from robot arms races to computationally planned economies. But at the end of the event, an emissary from a group funded by Elon Musk and Peter Thiel among others pronounced our work useless. “You have no common methodology,” he informed us (apparently unaware that that’s the point of an interdisciplinary meeting). “We have a great deal of money to fund real research on AI ethics and policy” — which he thought of as dry, economistic modeling of competition and cooperation via technology — “but this is not the right group.” He then gratuitously lashed out at academics in attendance as “rent seekers,” largely because we had the temerity to advance distinctive disciplinary perspectives rather than fall in line with his research agenda.

Most corporate contacts and philanthrocapitalists are more polite, but their sense of what is realistic and what is utopian, what is worth studying and what is mere ideology, is strongly shaping algorithmic accountability research in both social science and computer science. This influence in the realm of ideas has powerful effects beyond it. Energy that could be put into better public transit systems is instead diverted to perfect the coding of self-driving cars. Anti-surveillance activism transmogrifies into proposals to improve facial recognition systems to better recognize all faces. To help payday-loan seekers, developers might design data-segmentation protocols to show them what personal information they should reveal to get a lower interest rate. But the idea that such self-monitoring and data curation can be a trap, disciplining the user in ever finer-grained ways, remains less explored. Trying to make these games fairer, the research elides the possibility of rejecting them altogether….(More)”.

The Risks of Dangerous Dashboards in Basic Education


Lant Pritchett at the Center for Global Development: “On June 1, 2009, Air France flight 447 from Rio de Janeiro to Paris crashed into the Atlantic Ocean, killing all 228 people on board. While the Airbus A330 was flying on autopilot, the speed indicators feeding the on-board navigation computers started to give conflicting readings, almost certainly because the pitot tubes responsible for measuring air speed had iced over. Since the autopilot could not resolve the conflicting signals and hence did not know how fast the plane was actually going, it turned control of the plane over to the two first officers (the captain was out of the cockpit). Subsequent flight-simulator trials replicating the conditions of the flight concluded that had the pilots done nothing at all, everyone would have lived—nothing was actually wrong; only the indicators were faulty, not the actual speed. But, tragically, the pilots didn’t do nothing….

What is the connection to education?

Many countries’ systems of basic education are in “stall” condition.

A recent paper by Beatty et al. (2018) uses information from the Indonesia Family Life Survey, a representative household survey that has been carried out in several waves with the same individuals since 2000 and contains information on whether individuals can answer simple arithmetic questions. Figure 1, showing the relationship between the level of schooling and the probability of answering a typical question correctly, has two shocking results.

First, the likelihood that a person can answer a simple mathematics question correctly differs by only 20 percentage points between individuals who have completed less than primary school (<PS)—who answer correctly (adjusted for guessing) about 20 percent of the time—and those who have completed senior secondary school or more (>=SSS), who answer correctly only about 40 percent of the time. These are simple multiple-choice questions, like whether 56/84 is the same fraction as (can be reduced to) 2/3, and whether 1/3 − 1/6 equals 1/6. This means that in an entire year of schooling, fewer than 2 additional children per 100 gain the ability to answer simple arithmetic questions.
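
The arithmetic behind those example questions and that last sentence, assuming roughly a dozen grades separate the <PS and >=SSS groups:

```latex
\[
\frac{56}{84} = \frac{28 \times 2}{28 \times 3} = \frac{2}{3}, \qquad
\frac{1}{3} - \frac{1}{6} = \frac{2}{6} - \frac{1}{6} = \frac{1}{6}
\]
\[
\text{gain per year of schooling} \approx \frac{40\% - 20\%}{\sim 12 \text{ grades}}
\approx 1.7 \text{ children per } 100
\]
```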

Second, this incredibly poor performance in 2000 got worse by 2014. …

What has this got to do with education dashboards? The way large bureaucracies prefer to work is to specify process compliance and inputs and then measure those as a means of driving performance. This logistical mode of managing an organization works best when both process compliance and inputs are easily “observable” in the economist’s sense: easily verifiable, contractible, adjudicated. This leads to attention to processes and inputs that are “thin” in the Clifford Geertz sense (adopted by James Scott as his primary definition of how a “high modern” bureaucracy, and hence the state, “sees” the world). So in education one would specify easily observable inputs like textbook availability, class size, and school infrastructure. Even if one were talking about “quality” of schooling, a large bureaucracy would want this, too, reduced to “thin” indicators, like the fraction of teachers with a given type of formal degree, or process-compliance measures, like whether teachers were hired based on some formal assessment.

Those involved in schooling can then become obsessed with their dashboards and the “thin” progress that is being tracked and easily ignore the loud warning signals saying: Stall!…(More)”.

Countries Can Learn from France’s Plan for Public Interest Data and AI


Nick Wallace at the Center for Data Innovation: “French President Emmanuel Macron recently endorsed a national AI strategy that includes plans for the French state to make public and private sector datasets available for reuse by others in applications of artificial intelligence (AI) that serve the public interest, such as for healthcare or environmental protection. Although this strategy fails to set out how the French government should promote widespread use of AI throughout the economy, it will nevertheless give a boost to AI in some areas, particularly public services. Furthermore, the plan for promoting the wider reuse of datasets, particularly in areas where the government already calls most of the shots, is a practical idea that other countries should consider as they develop their own comprehensive AI strategies.

The French strategy, drafted by mathematician and Member of Parliament Cédric Villani, calls for legislation to mandate repurposing both public and private sector data, including personal data, to enable public-interest uses of AI by government or others, depending on the sensitivity of the data. For example, public health services could use data generated by Internet of Things (IoT) devices to help doctors better treat and diagnose patients. Researchers could use data captured by motorway CCTV to train driverless cars. Energy distributors could manage peaks and troughs in demand using data from smart meters.

Repurposed data held by private companies could be made publicly available, shared with other companies, or processed securely by the public sector, depending on the extent to which sharing the data presents privacy risks or undermines competition. The report suggests that the government would not require companies to share data publicly when doing so would impact legitimate business interests, nor would it require that any personal data be made public. Instead, Dr. Villani argues that, if wider data sharing would do unreasonable damage to a company’s commercial interests, it may be appropriate to only give public authorities access to the data. But where the stakes are lower, companies could be required to share the data more widely, to maximize reuse. Villani rightly argues that it is virtually impossible to come up with generalizable rules for how data should be shared that would work across all sectors. Instead, he argues for a sector-specific approach to determining how and when data should be shared.
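
The tiered logic described here lends itself to a toy illustration. The sketch below encodes one possible reading of it; the tiers, thresholds, and risk scores are hypothetical inventions for illustration, not rules from the Villani report.

```python
from enum import Enum

class AccessTier(Enum):
    PUBLIC = "released as open data"
    INTER_FIRM = "shared with other companies"
    AUTHORITIES_ONLY = "processed securely by public authorities"

def sharing_tier(privacy_risk: float, commercial_harm: float) -> AccessTier:
    """Map risk scores in [0, 1] to a sharing tier (thresholds are invented)."""
    if privacy_risk > 0.7 or commercial_harm > 0.7:
        # High stakes: only public authorities get access to the data.
        return AccessTier.AUTHORITIES_ONLY
    if privacy_risk > 0.3 or commercial_harm > 0.3:
        # Moderate stakes: controlled sharing among firms.
        return AccessTier.INTER_FIRM
    # Low stakes: maximize reuse.
    return AccessTier.PUBLIC

print(sharing_tier(privacy_risk=0.8, commercial_harm=0.2).value)
```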

After making the case for state-mandated repurposing of data, the report goes on to highlight four key sectors as priorities: health, transport, the environment, and defense. Since these all have clear implications for the public interest, France can create national laws authorizing extensive repurposing of personal data without violating the General Data Protection Regulation (GDPR), which allows national laws that permit the repurposing of personal data where it serves the public interest. The French strategy is the first clear effort by an EU member state to proactively use this clause in aid of national efforts to bolster AI….(More)”.

Most Public Engagement is Worthless


Charles Marohn at Strong Towns: “…Our thinking is a byproduct of the questions we ask. …I’m a planner and I’m a policy nerd. I had all the training in how to hold a public meeting and solicit feedback through SWOT (strengths, weaknesses, opportunities, threats) questions. I’ve been taught how to reach out to marginalized groups and make sure they too have a voice in the process. That is, so long as that voice fit into the paradigm of a planner and a policy nerd. Or so long as I could make it fit.

Modern Planner: What percentage of the city budget should we spend on parks?

Steve Jobs: Do you use the park?

Our planning efforts should absolutely be guided by the experiences of real people. But their actions are the data we should be collecting, not their stated preferences. To do the latter is to get comfortable trying to build a better Walkman. We should be designing the city equivalent of the iPod: something that responds to how real people actually live. It’s a messier and less affirming undertaking.

I’ve come to the point in my life where I think municipal comprehensive planning is worthless. More often than not, it is a mechanism to wrap a veneer of legitimacy around the large policy objectives of influential people. Most cities would be better off putting together a good vision statement and a set of guiding principles for making decisions, then getting on with it.

That is, get on with the hard work of iteratively building a successful city. That work is a simple, four-step process:

  1. Humbly observe where people in the community struggle.
  2. Ask the question: What is the next smallest thing we can do right now to address that struggle?
  3. Do that thing. Do it right now.
  4. Repeat.

It’s challenging to be humble, especially when you are in a position, or are part of a profession, whose internal narrative tells you that you already know what to do. It’s painful to observe, especially when that means confronting messy realities that do not fit with your view of the world. It’s unsatisfying, at times, to try many small things when the “obvious” fix is right there. If only those around you just shared your “courage” to undertake it (of course, with no downside to you if you’re wrong). If only people had the patience to see it through (while they, not you, continue to struggle in the interim).

Yet what if we humbly observe where people in our community struggle—if we use the experiences of others as our data—and we continually take the actions we are capable of taking, right now, to alleviate those struggles? And what if we do this in neighborhood after neighborhood across the entire city, month after month and year after year? If we do that, not only will we make the lowest-risk, highest-returning public investments it is possible to make, we can’t help but improve people’s lives in the process….(More)”.

Programmers need ethics when designing the technologies that influence people’s lives


Cherri M. Pancake at The Conversation: “Computing professionals are on the front lines of almost every aspect of the modern world. They’re involved in the response when hackers steal the personal information of hundreds of thousands of people from a large corporation. Their work can protect – or jeopardize – critical infrastructure like electrical grids and transportation lines. And the algorithms they write may determine who gets a job, who is approved for a bank loan or who gets released on bail.

Technological professionals are the first, and last, lines of defense against the misuse of technology. Nobody else understands the systems as well, and nobody else is in a position to protect specific data elements or ensure the connections between one component and another are appropriate, safe and reliable. As the role of computing continues its decades-long expansion in society, computer scientists are central to what happens next.

That’s why the world’s largest organization of computer scientists and engineers, the Association for Computing Machinery, of which I am president, has issued a new code of ethics for computing professionals. And it’s why ACM is taking other steps to help technologists engage with ethical questions….

ACM’s new ethics code has several important differences from the 1992 version. One has to do with unintended consequences. In the 1970s and 1980s, technologists built software or systems whose effects were limited to specific locations or circumstances. But over the past two decades, it has become clear that as technologies evolve, they can be applied in contexts very different from the original intent.

For example, computer vision research has led to ways of creating 3D models of objects – and people – based on 2D images, but it was never intended to be used in conjunction with machine learning in surveillance or drone applications. The old ethics code asked software developers to be sure a program would actually do what they said it would. The new version also exhorts developers to explicitly evaluate their work to identify potentially harmful side effects or potential for misuse.

Another example has to do with human interaction. In 1992, most software was being developed by trained programmers to run operating systems, databases and other basic computing functions. Today, many applications rely on user interfaces to interact directly with a potentially vast number of people. The updated code of ethics includes more detailed considerations about the needs and sensitivities of very diverse potential users – including discussing discrimination, exclusion and harassment….(More)”.

How Taiwan’s online democracy may show future of humans and machines


Shuyang Lin at the Sydney Morning Herald: “Taiwanese citizens have spent the past 30 years prototyping future democracy since the lifting of martial law in 1987. Public participation in Taiwan has been developed in several formats, from face-to-face meetings to deliberation over the internet. This trajectory coincides with the advancement of technology, and as new tools arrived, democracy evolved.

The launch of vTaiwan (v for virtual, vote, voice and verb), an experiment that prototypes an open consultation process for the civil society, showed that by using technology creatively humanity can facilitate deep and fair conversations, form collective consensus, and deliver solutions we can all live with.

It is a prototype that helps us envision what future democracy could look like….

Decision-making is not an easy task, especially when it involves a large group of people. Group decision-making can follow several protocols: mandate (one actor decides, then takes questions); advise (the decider listens to everyone before deciding); consent (a proposal passes if no one objects); and consensus (a proposal passes only if everyone agrees). So there is a pressing need for us to be able to collaborate in a large-scale decision-making process to update outdated standards and regulations.
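
To make the four protocols concrete, here is a toy sketch; modelling votes as “approve” / “object” / “abstain” is an illustrative assumption, not something from the article.

```python
def mandate(leader_vote: str, votes: list[str]) -> bool:
    # One actor decides alone; the group's role is to ask questions afterwards.
    return leader_vote == "approve"

def advise(leader_vote: str, votes: list[str]) -> bool:
    # The decider must collect everyone's advice first, but still decides alone.
    assert votes, "advice must be gathered before deciding"
    return leader_vote == "approve"

def consent(votes: list[str]) -> bool:
    # The proposal passes unless someone actively objects.
    return "object" not in votes

def consensus(votes: list[str]) -> bool:
    # The proposal passes only if everyone actively agrees.
    return all(v == "approve" for v in votes)

votes = ["approve", "approve", "abstain"]
print(consent(votes))    # True: no one objects
print(consensus(votes))  # False: an abstention blocks unanimous agreement
```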

The future of human knowledge is on the web. Technology can help us learn, communicate, and make better decisions, faster and at larger scale. The internet could be the facilitator and AI could be the catalyst. It is extremely important to be aware that decision-making is not a one-off interaction. The most important direction for decision-making technology is to allow humans to engage in the process at any time, and to invite them to request and submit changes.

Humans have started working with computers, and we will continue to work with them. They will help us in the decision-making process and some will even make decisions for us; the actors in collaboration don’t necessarily need to be just humans. While it is up to us to decide what and when to opt in or opt out, we should work together with computers in a transparent, collaborative and inclusive space.

Where shall we go as a society? What do we want from technology? As Audrey Tang, Digital Minister without Portfolio of Taiwan, puts it: “Deliberation — listening to each other deeply, thinking together and working out something that we can all live with — is magical.”…(More)”.

Introducing the (World’s First) Ethical Operating System


Article by Paula Goldman and Raina Kumra: “Is it possible for tech developers to anticipate future risks? Or are these future risks so unknowable to us here in the present that, try as we might to make our tech safe, continued exposure to risks is simply the cost of engagement?

Today, in collaboration with the Institute for the Future (IFTF), a leading non-profit strategic futures organization, Omidyar Network is excited to introduce the Ethical Operating System (or Ethical OS for short), a toolkit for helping developers and designers anticipate the future impact of the technologies they’re working on today. We designed the Ethical OS to facilitate better product development, faster deployment, and more impactful innovation — all while striving to minimize technical and reputational risks. The hope is that, with the Ethical OS in hand, technologists can begin to build responsibility into core business and product decisions, and contribute to a thriving tech industry.

The Ethical OS is already being piloted by nearly 20 tech companies, schools, and startups, including Mozilla and Techstars. We believe it can better equip technologists to grapple with three of the most pressing issues facing our community today:

  • If the technology you’re building right now will someday be used in unexpected ways, how can you hope to be prepared?
  • What new categories of risk should you pay special attention to right now?
  • Which design, team, or business model choices can actively safeguard users, communities, society, and your company from future risk?

As large sections of the public grow weary of a seemingly constant stream of data safety and security issues, and with growing calls for heightened government intervention and oversight, the time is now for the tech community to get this right.

We created the Ethical OS as a pilot to help make ethical thinking and future risk mitigation integral components of all design and development processes. It’s not going to be easy. The industry has far more work to do, both inside individual companies and collectively. But with our toolkit as a guide, developers will have a practical way to begin ensuring their tech is as good as their intentions…(More)”.