Open Data Charter Measurement Guide


Guide by Ana Brandusescu and Danny Lämmerhirt: “We are pleased to announce the launch of our Open Data Charter Measurement Guide. The guide is a collaborative effort of the Charter’s Measurement and Accountability Working Group (MAWG). It analyses the Open Data Charter principles and how they are assessed based on current open government data measurement tools. Governments, civil society, journalists, and researchers may use it to better understand how they can measure open data activities according to the Charter principles.

What can I find in the Measurement Guide?

  • An executive summary for people who want to quickly understand what measurement tools exist and for what principles.
  • An analysis of how each Charter principle is measured, including a comparison of indicators that are currently used to measure each Charter principle and its commitments. This analysis is based on the open data indicators used by the five largest measurement tools — the Web Foundation’s Open Data Barometer, Open Knowledge International’s Global Open Data Index, Open Data Watch’s Open Data Inventory, OECD’s OURdata Index, and the European Open Data Maturity Assessment. For each principle, we also highlight case studies of how Charter adopters have practically implemented the commitments of that principle.
  • Comprehensive indicator tables show how each Charter principle commitment can be measured. This table is especially helpful when used to compare how different indices approach the same commitment, and where gaps exist. Here, you can see an example of the indicator tables for Principle 1.
  • A methodology section that details how the Working Group conducted the analysis of mapping existing measurement indices against Charter commitments.
  • A recommended list of resources for anyone who wants to read more about measurement and policy.

The Measurement Guide is available online in the form of a Gitbook and as a printable PDF.”

This is your office on AI


Article by Jeffrey Brown in a special issue of the Wilson Quarterly on AI: “The future has arrived and it’s your first day at your new job. You step across the threshold sporting a nervous smile and harboring visions of virtual handshakes and brain-computer interfaces. After all, this is one of those newfangled, modern offices that science-fiction writers have been dreaming up for ages. Then you bump up against something with a thud. No, it’s not one of the ubiquitous glass walls, but the harsh reality of an office that, at first glance, doesn’t appear much different from what you’re accustomed to. Your new colleagues shuffle between meetings clutching phones and laptops. A kitchenette stocked with stale donuts lurks in the background. And, by the way, you were fifteen minutes late because the commute is still hell.

So where is the fabled “office of the future”? After all, many of us have only ever fantasized about the ways in which technology – and especially artificial intelligence – might transform our working lives for the better. In fact, the AI-enabled office will usher in far more than next-generation desk supplies. It’s only over subsequent weeks that you come to appreciate how the office of the future feels, operates, and yes, senses. It also slowly dawns on you that work itself has changed and that what it means to be a worker has undergone a similar retrofit.

With AI already deployed in everything from the fight against ISIS to the hunt for exoplanets and your cat’s Alexa-enabled Friskies order, its application to the office should come as no surprise. As workers pretty much everywhere can attest, today’s office has issues: It can’t intuitively crack a window when your officemate decides to microwave leftover catfish. It seems to willfully disregard your noise, temperature, light, and workflow preferences. And it certainly doesn’t tell its designers – or your manager – what you are really thinking as you plop down in your annoyingly stiff chair to sip your morning cup of mud.

Now, you may be thinking to yourself, “These seem like trivial issues that can be worked out simply by chatting with another human being, so why do we even need AI in my office?” If so, read on. In your lifetime, companies and workers will channel AI to unlock new value – and immense competitive advantage….(More)”.

Tech Platforms and the Knowledge Problem


Frank Pasquale at American Affairs: “Friedrich von Hayek, the preeminent theorist of laissez-faire, called the “knowledge problem” an insuperable barrier to central planning. Knowledge about the price of supplies and labor, and consumers’ ability and willingness to pay, is so scattered and protean that even the wisest authorities cannot access all of it. No person knows everything about how goods and services in an economy should be priced. No central decision-maker can grasp the idiosyncratic preferences, values, and purchasing power of millions of individuals. That kind of knowledge, Hayek said, is distributed.

In an era of artificial intelligence and mass surveillance, however, the possibility of central planning has reemerged—this time in the form of massive firms. Having logged and analyzed billions of transactions, Amazon knows intimate details about all its customers and suppliers. It can carefully calibrate screen displays to herd buyers toward certain products or shopping practices, or to copy sellers with its own, cheaper, in-house offerings. Mark Zuckerberg aspires to omniscience of consumer desires, by profiling nearly everyone on Facebook, Instagram, and WhatsApp, and then leveraging that data trove to track users across the web and into the real world (via mobile usage and device fingerprinting). You don’t even have to use any of those apps to end up in Facebook/Instagram/WhatsApp files—profiles can be assigned to you. Google’s “database of intentions” is legendary, and antitrust authorities around the world have looked with increasing alarm at its ability to squeeze out rivals from search results once it gains an interest in their lines of business. Google knows not merely what consumers are searching for, but also what other businesses are searching, buying, emailing, planning—a truly unparalleled matching of data-processing capacity to raw communication flows.

Nor is this logic limited to the online context. Concentration is paying dividends for the largest banks (widely assumed to be too big to fail), and major health insurers (now squeezing and expanding the medical supply chain like an accordion). Like the digital giants, these finance and insurance firms not only act as middlemen, taking a cut of transactions, but also aspire to capitalize on the knowledge they have gained from monitoring customers and providers in order to supplant them and directly provide services and investment. If it succeeds, the CVS-Aetna merger betokens intense corporate consolidations that will see more vertical integration of insurers, providers, and a baroque series of middlemen (from pharmaceutical benefit managers to group purchasing organizations) into gargantuan health providers. A CVS doctor may eventually refer a patient to a CVS hospital for a CVS surgery, to be followed up by home health care workers employed by CVS who bring CVS pharmaceuticals — all covered by a CVS/Aetna insurance plan, which might penalize the patient for using any providers outside the CVS network. While such a panoptic firm may sound dystopian, it is a logical outgrowth of health services researchers’ enthusiasm for “integrated delivery systems,” which are supposed to provide “care coordination” and “wraparound services” more efficiently than America’s current, fragmented health care system.

The rise of powerful intermediaries like search engines and insurers may seem like the next logical step in the development of capitalism. But a growing chorus of critics questions the size and scope of leading firms in these fields. The Institute for Local Self-Reliance highlights Amazon’s manipulation of both law and contracts to accumulate unfair advantages. International antitrust authorities have taken Google down a peg, questioning the company’s aggressive use of its search engine and Android operating system to promote its own services (and demote rivals). They also question why Google and Facebook have for years been acquiring companies at a pace of more than two per month. Consumer advocates complain about manipulative advertising. Finance scholars lambaste megabanks for taking advantage of the implicit subsidies that too-big-to-fail status confers….(More)”.

CrowdLaw Manifesto


At the Rockefeller Foundation Bellagio Center this spring, assembled participants met to discuss CrowdLaw, namely how to use technology to improve the quality and effectiveness of law and policymaking through greater public engagement. We put together and signed 12 principles to promote the use of CrowdLaw by local legislatures and national parliaments, calling for legislatures, technologists and the public to participate in creating more open and participatory lawmaking practices. We invite you to sign the Manifesto using the form below.

Draft dated May 29, 2018

  1. To improve public trust in democratic institutions, we must improve how we govern in the 21st century.
  2. CrowdLaw is any law, policy-making or public decision-making that offers a meaningful opportunity for the public to participate in one or multiple stages of decision-making, including but not limited to the processes of problem identification, solution identification, proposal drafting, ratification, implementation or evaluation.
  3. CrowdLaw draws on innovative processes and technologies and encompasses diverse forms of engagement among elected representatives, public officials, and those they represent.
  4. When designed well, CrowdLaw may help governing institutions obtain more relevant facts and knowledge as well as more diverse perspectives, opinions and ideas to inform governing at each stage and may help the public exercise political will.
  5. When designed well, CrowdLaw may help democratic institutions build trust and the public to play a more active role in their communities and strengthen both active citizenship and democratic culture.
  6. When designed well, CrowdLaw may enable engagement that is thoughtful, inclusive, informed but also efficient, manageable and sustainable.
  7. Therefore, governing institutions at every level should experiment and iterate with CrowdLaw initiatives in order to create formal processes for diverse members of society to participate in order to improve the legitimacy of decision-making, strengthen public trust and produce better outcomes.
  8. Governing institutions at every level should encourage research and learning about CrowdLaw and its impact on individuals, on institutions and on society.
  9. The public also has a responsibility to improve our democracy by demanding and creating opportunities to engage and then actively contributing expertise, experience, data and opinions.
  10. Technologists should work collaboratively across disciplines to develop, evaluate and iterate varied, ethical and secure CrowdLaw platforms and tools, keeping in mind that different participation mechanisms will achieve different goals.
  11. Governing institutions at every level should encourage collaboration across organizations and sectors to test what works and share good practices.
  12. Governing institutions at every level should create the legal and regulatory frameworks necessary to promote CrowdLaw and better forms of public engagement and usher in a new era of more open, participatory and effective governing.

The CrowdLaw Manifesto has been signed by the following individuals and organizations:

Individuals

  • Victoria Alsina, Senior Fellow at The GovLab and Faculty Associate at Harvard Kennedy School, Harvard University
  • Marta Poblet Balcell, Associate Professor, RMIT University
  • Robert Bjarnason — President & Co-founder, Citizens Foundation; Better Reykjavik
  • Pablo Collada — Former Executive Director, Fundación Ciudadano Inteligente
  • Mukelani Dimba — Co-chair, Open Government Partnership
  • Hélène Landemore, Associate Professor of Political Science, Yale University
  • Shu-Yang Lin, re:architect & co-founder, PDIS.tw
  • José Luis Martí, Vice-Rector for Innovation and Professor of Legal Philosophy, Pompeu Fabra University
  • Jessica Musila — Executive Director, Mzalendo
  • Sabine Romon — Chief Smart City Officer — General Secretariat, Paris City Council
  • Cristiano Ferri Faría — Director, Hacker Lab, Brazilian House of Representatives
  • Nicola Forster — President and Founder, Swiss Forum on Foreign Policy
  • Raffaele Lillo — Chief Data Officer, Digital Transformation Team, Government of Italy
  • Tarik Nesh-Nash — CEO & Co-founder, GovRight; Ashoka Fellow
  • Beth Simone Noveck, Director, The GovLab and Professor at New York University Tandon School of Engineering
  • Ehud Shapiro, Professor of Computer Science and Biology, Weizmann Institute of Science

Organizations

  • Citizens Foundation, Iceland
  • Fundación Ciudadano Inteligente, Chile
  • International School for Transparency, South Africa
  • Mzalendo, Kenya
  • Smart Cities, Paris City Council, Paris, France
  • Hacker Lab, Brazilian House of Representatives, Brazil
  • Swiss Forum on Foreign Policy, Switzerland
  • Digital Transformation Team, Government of Italy, Italy
  • The Governance Lab, New York, United States
  • GovRight, Morocco
  • ICT4Dev, Morocco

How the Math Men Overthrew the Mad Men


Ken Auletta in the New Yorker: “Once, Mad Men ruled advertising. They’ve now been eclipsed by Math Men—the engineers and data scientists whose province is machines, algorithms, pureed data, and artificial intelligence. Yet Math Men are beleaguered, as Mark Zuckerberg demonstrated when he humbled himself before Congress, in April. Math Men’s adoration of data—coupled with their truculence and an arrogant conviction that their “science” is nearly flawless—has aroused government anger, much as Microsoft did two decades ago.

The power of Math Men is awesome. Google and Facebook each has a market value exceeding the combined value of the six largest advertising and marketing holding companies. Together, they claim six out of every ten dollars spent on digital advertising, and nine out of ten new digital ad dollars. They have become more dominant in what is estimated to be an up to two-trillion-dollar annual global advertising and marketing business. Facebook alone generates more ad dollars than all of America’s newspapers, and Google has twice the ad revenues of Facebook.

In the advertising world, Big Data is the Holy Grail, because it enables marketers to target messages to individuals rather than general groups, creating what’s called addressable advertising. And only the digital giants possess state-of-the-art Big Data. “The game is no longer about sending you a mail order catalogue or even about targeting online advertising,” Shoshana Zuboff, a professor of business administration at the Harvard Business School, wrote on faz.net, in 2016. “The game is selling access to the real-time flow of your daily life—your reality—in order to directly influence and modify your behavior for profit.” Success at this “game” flows to those with the “ability to predict the future—specifically the future of behavior,” Zuboff writes. She dubs this “surveillance capitalism.”

However, to thrash just Facebook and Google is to miss the larger truth: everyone in advertising strives to eliminate risk by perfecting targeting data. Protecting privacy is not foremost among the concerns of marketers; protecting and expanding their business is. The business model adopted by ad agencies and their clients parallels Facebook and Google’s. Each aims to massage data to better identify potential customers. Each aims to influence consumer behavior. To appreciate how alike their aims are, sit in an agency or client marketing meeting and you will hear wails about Facebook and Google’s “walled garden,” their unwillingness to share data on their users. When Facebook or Google counter that they must protect “the privacy” of their users, advertisers cry foul: You’re using the data to target ads we paid for—why won’t you share it, so that we can use it in other ad campaigns?…(More)”

AI trust and AI fears: A media debate that could divide society


Article by Vyacheslav Polonski: “Unless you live under a rock, you probably have been inundated with recent news on machine learning and artificial intelligence (AI). With all the recent breakthroughs, it almost seems like AI can already predict the future. Police forces are using it to map when and where crime is likely to occur. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.

Of course, many decisions in our lives require a good forecast, and AI agents are almost always better at forecasting than their human counterparts. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don’t like relying on AI and prefer to trust human experts, even if these experts are wrong.

If we want AI to really benefit people, we need to find a way to get people to trust it. To do that, we need to understand why people are so reluctant to trust AI in the first place….

Many people are also simply not familiar with many instances of AI actually working, because it often happens in the background. Instead, they are acutely aware of instances where AI goes terribly wrong.

These unfortunate examples have received a disproportionate amount of media attention, emphasising the message that humans cannot always rely on technology. In the end, it all goes back to the simple truth that machine learning is not foolproof, in part because the humans who design it aren’t….

Fortunately we already have some ideas about how to improve trust in AI — there’s light at the end of the tunnel.

  1. Experience: One solution may be to provide more hands-on experiences with automation apps and other AI applications in everyday situations (like this robot that can get you a beer from the fridge). Thus, instead of presenting Sony’s new robot dog Aibo as an exclusive product for the upper class, we’d recommend making these kinds of innovations more accessible to the masses. Simply having previous experience with AI can significantly improve people’s attitudes towards the technology, as we found in our experimental study. And this is especially important for the general public that may not have a very sophisticated understanding of the technology. Similar evidence also suggests the more you use other technologies such as the Internet, the more you trust them.
  2. Insight: Another solution may be to open the “black box” of machine learning algorithms and be slightly more transparent about how they work. Companies such as Google, Airbnb, and Twitter already release transparency reports on a regular basis. These reports provide information about government requests and surveillance disclosures. A similar practice for AI systems could help people have a better understanding of how algorithmic decisions are made. Therefore, providing people with a top-level understanding of machine learning systems could go a long way towards alleviating algorithmic aversion.
  3. Control: Lastly, creating more of a collaborative decision-making process will help build trust and allow the AI to learn from human experience. In our work at Avantgarde Analytics, we have also found that involving people more in the AI decision-making process could improve trust and transparency. In a similar vein, a group of researchers at the University of Pennsylvania recently found that giving people control over algorithms can help create more trust in AI predictions. Volunteers in their study who were given the freedom to slightly modify an algorithm felt more satisfied with it, were more likely to believe it was superior, and were more likely to use it in the future.

These guidelines (experience, insight and control) could help make AI systems more transparent and comprehensible to the individuals affected by their decisions….(More)”.
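As a rough illustration of the third guideline, the kind of bounded control the Penn study describes can be sketched in a few lines. The function name, bounds, and numbers below are illustrative assumptions, not the study's actual design:

```python
def adjustable_forecast(model_prediction: float, user_adjustment: float,
                        max_adjustment: float = 0.1) -> float:
    """Blend a model's score with a bounded user tweak.

    Users may shift the prediction by at most +/- max_adjustment,
    keeping most of the model's accuracy while still giving them
    a sense of control over the outcome.
    """
    # Clamp the requested tweak into the allowed band before applying it
    clamped = max(-max_adjustment, min(max_adjustment, user_adjustment))
    return model_prediction + clamped

# A sceptical user can nudge the raw score, but only slightly:
print(round(adjustable_forecast(0.72, user_adjustment=0.25), 2))
```

The design choice is the clamp: the user's edit is real but small, so trust improves without surrendering the prediction's accuracy to whim.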

Open data work: understanding open data usage from a practice lens


Paper by Emma Ruijer in the International Review of Administrative Sciences: “During recent years, the amount of data released on platforms by public administrations around the world has exploded. Open government data platforms are aimed at enhancing transparency and participation. Even though the promises of these platforms are high, their full potential has not yet been reached. Scholars have identified technical and quality barriers of open data usage. Although useful, these issues fail to acknowledge that the meaning of open data also depends on the context and people involved. In this study we analyze open data usage from a practice lens – as a social construction that emerges over time in interaction with governments and users in a specific context – to enhance our understanding of the role of context and agency in the development of open data platforms. This study is based on innovative action-based research in which civil servants’ and citizens’ initiatives collaborate to find solutions for public problems using an open data platform. It provides an insider perspective of Open Data Work. The findings show that an absence of a shared cognitive framework for understanding open data and a lack of high-quality datasets can prevent processes of collaborative learning. Our contextual approach stresses the need for open data practices that work on the basis of rich interactions with users rather than government-centric implementations….(More)”.

Crowdbreaks: Tracking Health Trends using Public Social Media Data and Crowdsourcing


Paper by Martin Mueller and Marcel Salathé: “In the past decade, tracking health trends using social media data has shown great promise, due to a powerful combination of massive adoption of social media around the world, and increasingly potent hardware and software that enables us to work with these new big data streams.

At the same time, many challenging problems have been identified. First, there is often a mismatch between how rapidly online data can change, and how rapidly algorithms are updated, which means that there is limited reusability for algorithms trained on past data as their performance decreases over time. Second, much of the work is focusing on specific issues during a specific past period in time, even though public health institutions would need flexible tools to assess multiple evolving situations in real time. Third, most tools providing such capabilities are proprietary systems with little algorithmic or data transparency, and thus little buy-in from the global public health and research community.

Here, we introduce Crowdbreaks, an open platform which allows tracking of health trends by making use of continuous crowdsourced labelling of public social media content. The system is built in a way which automates the typical workflow of data collection, filtering, labelling, and training of machine learning classifiers, and can therefore greatly accelerate the research process in the public health domain. This work introduces the technical aspects of the platform and explores its future use cases…(More)”.
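The workflow the abstract describes (collect, filter, crowdsource labels, train, classify) can be caricatured in a few lines of plain Python. The posts, labels, and function names below are illustrative assumptions, not the Crowdbreaks platform's actual data or API:

```python
# Toy sketch of a crowdsourced-labelling-to-classifier loop.
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(posts, labels):
    """Count word frequencies per label (a minimal bag-of-words model)."""
    counts = {}
    for post, label in zip(posts, labels):
        counts.setdefault(label, Counter()).update(tokenize(post))
    return counts

def classify(model, text):
    """Pick the label whose training-word counts best overlap the new post."""
    words = tokenize(text)
    return max(model, key=lambda label: sum(model[label][w] for w in words))

# 1-2. Collected, filtered posts with crowdsourced stance labels
posts = [
    "got my flu shot today glad i did",
    "flu shot clinics open downtown this week",
    "vaccines cause more harm than good",
    "never getting a flu shot again",
]
labels = ["positive", "positive", "negative", "negative"]

# 3. Train on the labelled batch, then 4. score newly collected content
model = train(posts, labels)
print(classify(model, "where can i get a flu shot"))  # prints "positive"
```

In the continuous setting the paper envisions, steps 2-4 would repeat as new labels arrive, which is what counters the model-drift problem the abstract raises.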

How the Enlightenment Ends


Henry Kissinger in the Atlantic: “…Heretofore, the technological advance that most altered the course of modern history was the invention of the printing press in the 15th century, which allowed the search for empirical knowledge to supplant liturgical doctrine, and the Age of Reason to gradually supersede the Age of Religion. Individual insight and scientific knowledge replaced faith as the principal criterion of human consciousness. Information was stored and systematized in expanding libraries. The Age of Reason originated the thoughts and actions that shaped the contemporary world order.

But that order is now in upheaval amid a new, even more sweeping technological revolution whose consequences we have failed to fully reckon with, and whose culmination may be a world relying on machines powered by data and algorithms and ungoverned by ethical or philosophical norms.

The internet age in which we already live prefigures some of the questions and issues that AI will only make more acute. The Enlightenment sought to submit traditional verities to a liberated, analytic human reason. The internet’s purpose is to ratify knowledge through the accumulation and manipulation of ever expanding data. Human cognition loses its personal character. Individuals turn into data, and data become regnant.

Users of the internet emphasize retrieving and manipulating information over contextualizing or conceptualizing its meaning. They rarely interrogate history or philosophy; as a rule, they demand information relevant to their immediate practical needs. In the process, search-engine algorithms acquire the capacity to predict the preferences of individual clients, enabling the algorithms to personalize results and make them available to other parties for political or commercial purposes. Truth becomes relative. Information threatens to overwhelm wisdom.

Inundated via social media with the opinions of multitudes, users are diverted from introspection; in truth many technophiles use the internet to avoid the solitude they dread. All of these pressures weaken the fortitude required to develop and sustain convictions that can be implemented only by traveling a lonely road, which is the essence of creativity.

The impact of internet technology on politics is particularly pronounced. The ability to target micro-groups has broken up the previous consensus on priorities by permitting a focus on specialized purposes or grievances. Political leaders, overwhelmed by niche pressures, are deprived of time to think or reflect on context, contracting the space available for them to develop vision.

The digital world’s emphasis on speed inhibits reflection; its incentive empowers the radical over the thoughtful; its values are shaped by subgroup consensus, not by introspection. For all its achievements, it runs the risk of turning on itself as its impositions overwhelm its conveniences….

There are three areas of special concern:

First, that AI may achieve unintended results….

Second, that in achieving intended goals, AI may change human thought processes and human values….

Third, that AI may reach intended goals, but be unable to explain the rationale for its conclusions…..(More)”

Behavioral economics from nuts to ‘nudges’


Richard Thaler in the Chicago Booth Review: “…Behavioral economics has come a long way from my initial set of stories. Behavioral economists of the current generation are using all the modern tools of economics, from theory to big data to structural models to neuroscience, and they are applying those tools to most of the domains in which economists practice their craft. This is crucial to making descriptive economics more accurate. As the last section of this lecture highlighted, they are also influencing public-policy makers around the world, with those in the private sector not far behind. Sunstein and I did not invent nudging—we just gave it a word. People have been nudging as long as they have been trying to influence other people.

And much as we might wish it to be so, not all nudging is nudging for good. The same passive behavior we saw among Swedish savers applies to nearly everyone agreeing to software terms, or mortgage documents, or car payments, or employment contracts. We click “agree” without reading, and can find ourselves locked into a long-term contract that can only be terminated with considerable time and aggravation, or worse. Some firms are actively making use of behaviorally informed strategies to profit from the lack of scrutiny most shoppers apply. I call this kind of exploitive behavior “sludge.” It is the exact opposite of nudging for good. But whether the use of sludge is a long-term profit-maximizing strategy remains to be seen. Creating the reputation as a sludge-free supplier of goods and services may be a winning long-term strategy, just like delivering free bottles of water to victims of a hurricane.

Although not every application of behavioral economics will make the world a better place, I believe that giving economics a more human dimension and creating theories that apply to humans, not just econs, will make our discipline stronger, more useful, and undoubtedly more accurate….(More)”.