Are you doing what’s needed to get the state to respond to its citizens? Or are you part of the problem?


Vanessa Herringshaw at Making All Voices Count: “…I’ve been reading over the incredibly rich and practically-orientated research and practice papers already on the Making All Voices Count website, and some of those coming out soon. There’s a huge amount of useful and challenging learning, and I’ll be putting out a couple of papers summarising some important threads later this year.

But as different civic tech and accountability actors prepare to come together in Brighton for Making All Voices Count’s final learning event later this week, I’m going to focus here on three things that really stood out and would benefit from the kind of ‘group grappling’ that such a gathering can best support. And I aim to be provocative!

  1. Improving state responsiveness to citizens is a complex business – even more than we realised – and a lot more complex than most interventions are designed to address. If we don’t address this complexity, our interventions won’t work. And that risks making things worse, not better.
  2. It’s clear we need to make more of a shift from narrow, ‘tactical’ approaches to more complex, systems-based ‘strategic’ approaches. Thinking is developing on how to do this in theory. But it’s not clear that our current institutional approaches will support, or even allow, a major shift in practice.
  3. So when we each look at our individual roles and those of our organisations, are we part of the solution, or part of the problem?

Let’s look at each of these in turn….(More)”

2017 CPA-Zicklin Index of Corporate Political Disclosure and Accountability


Introduction to the 2017 CPA-Zicklin Index by Morris Pearl: “In our modern financial system, investors, by necessity, delegate virtually all control over the businesses in which they invest to a board of directors. That board then, perhaps by necessity, perhaps not, often delegates virtually all control to the officers who run the company day to day. That usually works out pretty well. The interests of the officers are generally aligned with that of the shareholders, and most boards have a compensation committee which (hopefully) deals with the obvious conflicts around the pay of the officers. That, however, is not enough. Occasionally the officers use corporate resources for politics, sometimes with disastrous consequences. The practice of spending money on politics can open up the corporation to both subtle and not-so-subtle coercion from government officials. Indeed, the first campaign finance regulations were favored by business people who found themselves under a barrage of demands for money from government officials who had some power over their businesses. There are some things that businesses can do to defend themselves. Chief among those are:

  • An official corporate policy on high level approval of political expenditures. Based on my experience, telling someone soliciting a donation that they are welcome to make their case, publicly, to a board committee, can be great fun.
  • Openness – making records of whatever the business does available to the general public. Based again on my experience, people doing things that they don’t want to be publicly known are often doing things that they should not be doing.

We do not have the ability to end the practice, but by publicly giving companies credit for doing those two things, the CPA-Zicklin Index is making a difference….(Full Report)”.

Enabling Blockchain Innovation in the U.S. Federal Government


Primer by the American Council for Technology – Industry Advisory Council: “… intended to be a foundational tool in the understanding of blockchain and its use cases within the United States federal government. To that end, it should help allay the concerns that some may have about this new technology by providing an introduction to blockchain and its related technologies, and how blockchain can be safely and securely applied to the right government use cases. Blockchain has the potential to help government to reduce fraud, errors and the cost of paper-intensive processes, while enabling collaboration across multiple divisions and agencies to provide more efficient and effective services to citizens. Moreover, the adoption of blockchain may also allow governmental agencies to provide new value-added services to businesses and others which can generate new sources of revenue for these agencies….(More)”.
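The primer deals with use cases rather than mechanics, but the core data structure is simple enough to sketch. Below is a minimal, illustrative hash chain in Python (our sketch, not the primer’s, and nothing like a production blockchain): each block commits to the hash of its predecessor, so rewriting any earlier record invalidates every later link. The agency records are invented.

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents deterministically."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, record):
    """Append a record that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "record": record})

def verify(chain):
    """Return True only if every link still matches its predecessor."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
add_block(chain, {"agency": "GSA", "action": "grant issued"})
add_block(chain, {"agency": "GSA", "action": "grant audited"})
print(verify(chain))                             # True
chain[0]["record"]["action"] = "grant revoked"   # rewrite history
print(verify(chain))                             # False: later links break
```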

Our laws don’t do enough to protect our health data


At The Conversation: “A particularly sensitive type of big data is medical big data, which can consist of electronic health records, insurance claims, information entered by patients into websites such as PatientsLikeMe and more. Health information can even be gleaned from web searches, Facebook and your recent purchases.

Such data can be used for beneficial purposes by medical researchers, public health authorities, and healthcare administrators. For example, they can use it to study medical treatments, combat epidemics and reduce costs. But others who can obtain medical big data may have more selfish agendas.

I am a professor of law and bioethics who has researched big data extensively. Last year, I published a book entitled Electronic Health Records and Medical Big Data: Law and Policy.

I have become increasingly concerned about how medical big data might be used and who could use it. Our laws currently don’t do enough to prevent harm associated with big data.

What your data says about you

Personal health information could be of interest to many, including employers, financial institutions, marketers and educational institutions. Such entities may wish to exploit it for decision-making purposes.

For example, employers presumably prefer healthy employees who are productive, take few sick days and have low medical costs. However, two laws, the Americans with Disabilities Act (ADA) and the Genetic Information Nondiscrimination Act, prohibit employers from discriminating against workers because of their health conditions. So employers are not permitted to reject qualified applicants simply because they have diabetes, depression or a genetic abnormality.

However, the same is not true for most predictive information regarding possible future ailments. Nothing prevents employers from rejecting or firing healthy workers out of the concern that they will later develop an impairment or disability, unless that concern is based on genetic information.

What non-genetic data can provide evidence regarding future health problems? Smoking status, eating preferences, exercise habits, weight and exposure to toxins are all informative. Scientists believe that biomarkers in your blood and other health details can predict cognitive decline, depression and diabetes.

Even bicycle purchases, credit scores and voting in midterm elections can be indicators of your health status.

Gathering data

How might employers obtain predictive data? An easy source is social media, where many individuals publicly post very private information. Through social media, your employer might learn that you smoke, hate to exercise or have high cholesterol.

Another potential source is wellness programs. These programs seek to improve workers’ health through incentives to exercise, stop smoking, manage diabetes, obtain health screenings and so on. While many wellness programs are run by third-party vendors that promise confidentiality, that is not always the case.

In addition, employers may be able to purchase information from data brokers that collect, compile and sell personal information. Data brokers mine sources such as social media, personal websites, U.S. Census records, state hospital records, retailers’ purchasing records, real property records, insurance claims and more. Two well-known data brokers are Spokeo and Acxiom.

Some of the data employers can obtain identify individuals by name. But even information that does not provide obvious identifying details can be valuable. Wellness program vendors, for example, might provide employers with summary data about their workforce but strip away particulars such as names and birthdates. Nevertheless, de-identified information can sometimes be re-identified by experts. Data miners can match information to data that is publicly available….(More)”.
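To make the last point concrete, here is a hedged Python sketch of a linkage attack (our illustration, not from the article; every record below is invented): “de-identified” health rows are joined to a public roster on the quasi-identifiers ZIP code, birth date and sex, restoring names.

```python
# Hypothetical linkage (re-identification) attack: join
# "de-identified" health records to a public roster on shared
# quasi-identifiers. All data below is invented.

deidentified_health = [
    {"zip": "44106", "birth": "1961-07-02", "sex": "F", "dx": "diabetes"},
    {"zip": "10027", "birth": "1988-11-15", "sex": "M", "dx": "depression"},
]

public_roster = [  # e.g., a voter file or people-search listing
    {"name": "J. Doe", "zip": "44106", "birth": "1961-07-02", "sex": "F"},
    {"name": "A. Roe", "zip": "10027", "birth": "1988-11-15", "sex": "M"},
]

QUASI_IDS = ("zip", "birth", "sex")

def reidentify(health_rows, roster):
    """Attach names to rows whose quasi-identifiers coincide."""
    index = {tuple(r[k] for k in QUASI_IDS): r["name"] for r in roster}
    matches = []
    for row in health_rows:
        key = tuple(row[k] for k in QUASI_IDS)
        if key in index:
            matches.append({"name": index[key], **row})
    return matches

for match in reidentify(deidentified_health, public_roster):
    print(match["name"], "->", match["dx"])
```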

The Arsenal of Exclusion and Inclusion


Book by Tobias Armborst, Daniel D’Oca and Georgeen Theodore: “Urban History 101 teaches us that the built environment is not the product of invisible, uncontrollable market forces, but of human-made tools that could have been used differently (or not at all). The Arsenal of Exclusion & Inclusion is an encyclopedia of 202 tools–or what we call “weapons”–used by architects, planners, policy-makers, developers, real estate brokers, activists, and other urban actors in the United States to restrict or increase access to urban space. The Arsenal of Exclusion & Inclusion inventories these weapons, examines how they have been used, and speculates about how they might be deployed (or retired) to make more open cities in which more people feel welcome in more spaces.

The Arsenal of Exclusion & Inclusion includes minor, seemingly benign weapons like no loitering signs and bouncers, but also big, headline-grabbing things like eminent domain and city-county consolidation. It includes policies like expulsive zoning and annexation, but also practices like blockbusting, institutions like neighborhood associations, and physical things like bombs and those armrests that park designers put on benches to make sure homeless people don’t get too comfortable. It includes historical things that aren’t talked about too much any more (e.g., ugly laws), things that seem historical but aren’t (e.g., racial steering), and things that are brand new (e.g., aging improvement districts).

With contributions from over fifty of the best minds in architecture, urban planning, urban history, and geography, The Arsenal of Exclusion & Inclusion offers a wide-ranging view of the policies, institutions, and social practices that shape our cities. It can be read as a historical account of the making of the modern American city, as a toolbox of best practices for creating better, more just spaces, or as an introduction to the process of city-making in the United States….(More)”.

Reboot for the AI revolution


Yuval Noah Harari in Nature: “The ongoing artificial-intelligence revolution will change almost every line of work, creating enormous social and economic opportunities — and challenges. Some believe that intelligent computers will push humans out of the job market and create a new ‘useless class’; others maintain that automation will generate a wide range of new human jobs and greater prosperity for all. Almost everybody agrees that we should take action to prevent the worst-case scenarios….

Governments might decide to deliberately slow down the pace of automation, to lessen the resulting shocks and allow time for readjustments. But it will probably be both impossible and undesirable to prevent automation and job loss completely. That would mean giving up the immense positive potential of AI and robotics. If self-driving vehicles drive more safely and cheaply than humans, it would be counterproductive to ban them just to protect the jobs of taxi and lorry drivers.

A more sensible strategy is to create new jobs. In particular, as routine jobs are automated, opportunities for new non-routine jobs will mushroom. For example, general physicians who focus on diagnosing known diseases and administering familiar treatments will probably be replaced by AI doctors. Precisely because of that, there will be more money to pay human experts to do groundbreaking medical research, develop new medications and pioneer innovative surgical techniques.

This calls for economic entrepreneurship and legal dexterity. Above all, it necessitates a revolution in education…Creating new jobs might prove easier than retraining people to fill them. A huge useless class might appear, owing to both an absolute lack of jobs and a lack of relevant education and mental flexibility….

With insights gleaned from early warning signs and test cases, scholars should strive to develop new socio-economic models. The old ones no longer hold. For example, twentieth-century socialism assumed that the working class was crucial to the economy, and socialist thinkers tried to teach the proletariat how to translate its immense economic power into political clout. In the twenty-first century, if the masses lose their economic value they might have to struggle against irrelevance rather than exploitation….The challenges posed in the twenty-first century by the merger of infotech and biotech are arguably bigger than those thrown up by steam engines, railways, electricity and fossil fuels. Given the immense destructive power of our modern civilization, we cannot afford more failed models, world wars and bloody revolutions. We have to do better this time….(More)”

Laboratories for news? Experimenting with journalism hackathons


Jan Lauren Boyles in Journalism: “Journalism hackathons are computationally based events in which participants create news product prototypes. In the ideal case, the gatherings are rooted in local community, enabling a wide set of institutional stakeholders (legacy journalists, hacker journalists, civic hackers, and the general public) to gather in conversation around key civic issues. This study explores how and to what extent journalism hackathons operate as a community-based laboratory for translating open data from practitioners to the public. Drawing on in-depth interviews with event organizers in nine countries, the findings illustrate that journalism hackathons are most successful when collaboration integrates civic organizations and community leaders….(More)”.

How people update beliefs about climate change: good news and bad news


Paper by Cass R. Sunstein, Sebastian Bobadilla-Suarez, Stephanie C. Lazzaro & Tali Sharot: “People are frequently exposed to competing evidence about climate change. We examined how new information alters people’s beliefs. We find that people who are not sure that man-made climate change is occurring, and who do not favor an international agreement to reduce greenhouse gas emissions, show a form of asymmetrical updating: They change their beliefs in response to unexpected good news (suggesting that average temperature rise is likely to be less than previously thought) and fail to change their beliefs in response to unexpected bad news (suggesting that average temperature rise is likely to be greater than previously thought). By contrast, people who strongly believe that man-made climate change is occurring, and who favor an international agreement, show the opposite asymmetry: They change their beliefs far more in response to unexpected bad news (suggesting that average temperature rise is likely to be greater than previously thought) than in response to unexpected good news (suggesting that average temperature rise is likely to be smaller than previously thought). The results suggest that exposure to varied scientific evidence about climate change may increase polarization within a population due to asymmetrical updating. We explore the implications of our findings for how people will update their beliefs upon receiving new evidence about climate change, and also for other beliefs relevant to politics and law….(More)”.
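To see why asymmetrical updating can polarize, consider a toy simulation (our sketch, not the authors’ model; the starting beliefs, learning rates and evidence distribution are all invented): two agents see the identical stream of noisy evidence about expected warming, but each moves further in response to congenial news than to uncongenial news, so their estimates settle on opposite sides of the evidence.

```python
import random

random.seed(1)

def update(belief, signal, strong=0.3, weak=0.05, favors_good=True):
    """Move belief toward the signal, weighting congenial news more.

    "Good news" means the signal is below the current belief
    (less warming than expected); "bad news" means above it.
    """
    good_news = signal < belief
    congenial = good_news if favors_good else not good_news
    rate = strong if congenial else weak
    return belief + rate * (signal - belief)

skeptic, believer = 2.0, 4.0           # initial estimates, degrees C
for _ in range(200):
    signal = random.gauss(3.0, 1.0)    # both see the same evidence
    skeptic = update(skeptic, signal, favors_good=True)
    believer = update(believer, signal, favors_good=False)

print(f"skeptic: {skeptic:.2f}, believer: {believer:.2f}")
# Despite identical evidence centered on 3.0, the two estimates
# end up on opposite sides of it: asymmetric weighting polarizes.
```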

How “Big Data” Went Bust


The problem with “big data” is not that data is bad. It’s not even that big data is bad: Applied carefully, massive data sets can reveal important trends that would otherwise go undetected. It’s the fetishization of data, and its uncritical use, that tends to lead to disaster, as Julia Rose West recently wrote for Slate. And that’s what “big data,” as a catchphrase, came to represent.

By its nature, big data is hard to interpret. When you’re collecting billions of data points—clicks or cursor positions on a website; turns of a turnstile in a large public space; hourly wind speed observations from around the world; tweets—the provenance of any given data point is obscured. This in turn means that seemingly high-level trends might turn out to be artifacts of problems in the data or methodology at the most granular level possible. But perhaps the bigger problem is that the data you have are usually only a proxy for what you really want to know. Big data doesn’t solve that problem—it magnifies it….

Aside from swearing off data and reverting to anecdote and intuition, there are at least two viable ways to deal with the problems that arise from the imperfect relationship between a data set and the real-world outcome you’re trying to measure or predict.

One is, in short: moar data. This has long been Facebook’s approach. When it became apparent that users’ “likes” were a flawed proxy for what they actually wanted to see more of in their feeds, the company responded by adding more and more proxies to its model. It began measuring other things, like the amount of time they spent looking at a post in their feed, the amount of time they spent reading a story they had clicked on, and whether they hit “like” before or after they had read the piece. When Facebook’s engineers had gone as far as they could in weighting and optimizing those metrics, they found that users were still unsatisfied in important ways. So the company added yet more metrics to the sauce: It started running huge user-survey panels, added new reaction emojis by which users could convey more nuanced sentiments, and started using A.I. to detect clickbait-y language in posts by pages and publishers. The company knows none of these proxies are perfect. But by constantly adding more of them to the mix, it can theoretically edge ever closer to an algorithm that delivers to users the posts that they most want to see.
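None of Facebook’s actual signals or weights are public; purely to illustrate the pattern of folding ever more proxies into one model, here is a toy Python ranking function in which every signal name and weight is invented.

```python
# Toy feed-ranking score built from several imperfect proxy signals.
# Every signal name and weight is invented for illustration; this
# shows only the pattern of blending proxies, not any real system.

PROXY_WEIGHTS = {
    "liked": 1.0,            # explicit but shallow signal
    "dwell_seconds": 0.02,   # time spent looking at the post
    "read_seconds": 0.01,    # time spent in the clicked-through story
    "survey_score": 2.0,     # panel feedback, costly but direct
    "clickbait_prob": -3.0,  # model-estimated clickbait, penalized
}

def relevance(post_signals):
    """Weighted sum of whatever proxy signals are present."""
    return sum(
        PROXY_WEIGHTS[name] * value
        for name, value in post_signals.items()
        if name in PROXY_WEIGHTS
    )

posts = [
    {"liked": 1, "dwell_seconds": 4, "clickbait_prob": 0.9},
    {"dwell_seconds": 45, "read_seconds": 120, "survey_score": 0.8},
]
ranked = sorted(posts, key=relevance, reverse=True)
print(ranked[0])  # the long-read post outranks the liked clickbait
```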

One downside of the moar data approach is that it’s hard and expensive. Another is that the more variables are added to your model, the more complex, opaque, and unintelligible its methodology becomes. This is part of the problem Frank Pasquale articulated in The Black Box Society. Even the most sophisticated algorithm, drawing on the best data sets, can go awry—and when it does, diagnosing the problem can be nigh-impossible. There are also the perils of “overfitting” and false confidence: The more sophisticated your model becomes, the more perfectly it seems to match up with all your past observations, and the more faith you place in it, the greater the danger that it will eventually fail you in a dramatic way. (Think mortgage crisis, election prediction models, and Zynga.)
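The overfitting danger is easy to demonstrate. The sketch below (ours, a generic polynomial-fitting example unrelated to any system named above) fits models of increasing degree to a dozen noisy points drawn from a linear process: the flexible model matches the training data almost perfectly yet typically does worse on fresh data.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    """Noisy observations of a simple underlying linear trend."""
    x = rng.uniform(-1, 1, n)
    y = 2 * x + rng.normal(0, 0.3, n)   # the truth is linear
    return x, y

x_train, y_train = sample(12)
x_test, y_test = sample(200)            # "the future"

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_err:.3f}, "
          f"test MSE {test_err:.3f}")
# The degree-9 fit nearly interpolates the 12 training points
# (train MSE close to 0) but typically generalizes worse than
# the honest linear fit.
```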

Another possible response to the problems that arise from biases in big data sets is what some have taken to calling “small data.” Small data refers to data sets that are simple enough to be analyzed and interpreted directly by humans, without recourse to supercomputers or Hadoop jobs. Like “slow food,” the term arose as a conscious reaction to the prevalence of its opposite….(More)”


A Brief History of Living Labs: From Scattered Initiatives to Global Movement


Paper by Seppo Leminen, Veli-Pekka Niitamo, and Mika Westerlund presented at the Open Living Labs Day Conference: “This paper analyses the emergence of living labs based on a literature review and interviews with early living labs experts. Our study makes a contribution to the growing literature of living labs by analysing the emergence of living labs from the perspectives of (i) early living lab pioneers, (ii) early living lab activities in Europe and especially Nokia Corporation, (iii) framework programs of the European Union supporting the development of living labs, (iv) emergence of national living lab networks, and (v) emergence of the European Network of Living Labs (ENoLL). Moreover, the paper highlights major events in the emergence of living lab movement and labels three consecutive phases of the global living lab movement as (i) toward a new paradigm, (ii) practical experiences, and (iii) professional living labs….(More)”.