Open & Shut


Harsha Devulapalli: “Welcome to Open & Shut — a new blog dedicated to exploring the opportunities and challenges of working with open data in closed societies around the world. Although we’ll be exploring questions relevant to open data practitioners worldwide, we’re particularly interested in seeing how civil society groups and actors in the Global South are using open data to push for greater government transparency, and tackle daunting social and economic challenges facing their societies….Throughout this series we’ll be profiling and interviewing organisations working with open data worldwide, and providing do-it-yourself data tutorials that will be useful for beginners as well as data experts. …

What do we mean by the terms ‘open data’ and ‘closed societies’?

It’s important to be clear about what we’re dealing with here. So let’s establish some key terms. When we talk about ‘open data’, we mean data that anyone can access, use and share freely. And when we say ‘closed societies’, we’re referring to states or regions in which the political and social environment is actively hostile to notions of openness and public scrutiny, and which hold principles of freedom of information in low esteem. In closed societies, data is either not published at all by the government, or else is published in inaccessible formats, is incomplete, is hard to find, or is simply not digitised at all.

Iran is one such state that we would characterise as a ‘closed society’. At Small Media, we’ve had to confront the challenges of poor data practice, secrecy, and government opaqueness while undertaking work to support freedom of information and freedom of expression in the country. Based on these experiences, we’ve been working to build Iran Open Data — a civil society-led open data portal for Iran, in an effort to make Iranian government data more accessible and easier for researchers, journalists, and civil society actors to work with.

Iran Open Data — an open data portal for Iran, created by Small Media


…Open & Shut will shine a light on the exciting new ways that different groups are using data to question dominant narratives, transform public opinion, and bring about tangible change in closed societies. At the same time, it’ll demonstrate the challenges faced by open data advocates in opening up this valuable data. We intend to get the community talking about the need to build cross-border alliances in order to empower the open data movement, and to exchange knowledge and best practices despite the different needs and circumstances we all face….(More)

Inside the Lab That’s Quantifying Happiness


Rowan Jacobsen at Outside: “In Mississippi, people tweet about cake and cookies an awful lot; in Colorado, it’s noodles. In Mississippi, the most-tweeted activity is eating; in Colorado, it’s running, skiing, hiking, snowboarding, and biking, in that order. In other words, the two states fall on opposite ends of the behavior spectrum. If you were to assign a caloric value to every food mentioned in every tweet by the citizens of the United States and a calories-burned value to every activity, and then total them up, you would find that Colorado tweets the best caloric ratio in the country and Mississippi the worst.

Sure, you’d be forgiven for doubting people’s honesty on Twitter. On those rare occasions when I destroy an entire pint of Ben and Jerry’s, I most assuredly do not tweet about it. Likewise, I don’t reach for my phone every time I strap on a pair of skis.

And yet there’s this: Mississippi has the worst rate of diabetes and heart disease in the country and Colorado has the best. Mississippi has the second-highest percentage of obesity; Colorado has the lowest. Mississippi has the worst life expectancy in the country; Colorado is near the top. Perhaps we are being more honest on social media than we think. And perhaps social media has more to tell us about the state of the country than we realize.

That’s the proposition of Peter Dodds and Chris Danforth, who co-direct the University of Vermont’s Computational Story Lab, a warren of whiteboards and grad students in a handsome brick building near the shores of Lake Champlain. Dodds and Danforth are applied mathematicians, but they would make a pretty good comedy duo. When I stopped by the lab recently, both were in running clothes and cracking jokes. They have an abundance of curls between them and the wiry energy of chronic thinkers. They came to UVM in 2006 to start the Vermont Complex Systems Center, which crunches big numbers from big systems and looks for patterns. Out of that, they hatched the Computational Story Lab, which sifts through some of that public data to discern the stories we’re telling ourselves. “It took us a while to come up with the name,” Dodds told me as we shotgunned espresso and gazed into his MacBook. “We were going to be the Department of Recreational Truth.”

This year, they teamed up with their PhD student Andy Reagan to launch the Lexicocalorimeter, an online tool that uses tweets to compute the calories in and calories out for every state. It’s no mere party trick; the Story Labbers believe the Lexicocalorimeter has important advantages over slower, more traditional methods of gathering health data….(More)”.
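
The tally Jacobsen describes (a caloric value for every food word, a calories-burned value for every activity word, summed per state) is simple to sketch. Below is a minimal, illustrative version in Python; the lexicons and calorie figures are invented placeholders, not the Story Lab’s actual data, and the real Lexicocalorimeter uses far larger lexicons and careful normalisation.

```python
from collections import defaultdict

# Hypothetical lexicons: food word -> calories consumed per mention,
# activity word -> calories burned per mention. Placeholder values only.
FOOD_CALORIES = {"cake": 350, "cookies": 160, "noodles": 220}
ACTIVITY_BURN = {"running": 600, "skiing": 400, "hiking": 430, "biking": 500}

def caloric_totals(tweets_by_state):
    """Sum calories in (food mentions) and out (activity mentions) per state."""
    totals = defaultdict(lambda: {"in": 0, "out": 0})
    for state, tweets in tweets_by_state.items():
        for tweet in tweets:
            for word in tweet.lower().split():
                totals[state]["in"] += FOOD_CALORIES.get(word, 0)
                totals[state]["out"] += ACTIVITY_BURN.get(word, 0)
    return totals

def caloric_ratios(totals):
    """Calories out over calories in; higher suggests 'healthier' tweeting."""
    return {state: t["out"] / t["in"] for state, t in totals.items() if t["in"]}

tweets = {
    "Mississippi": ["so much cake today", "cookies and cake again"],
    "Colorado": ["running then skiing all day", "noodles after a long hiking trip"],
}
print(caloric_ratios(caloric_totals(tweets)))
# {'Mississippi': 0.0, 'Colorado': 6.5}
```

Dividing total calories out by total calories in yields the kind of per-state ranking the article describes, with Colorado’s activity-heavy tweets scoring far better than Mississippi’s dessert-heavy ones.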

Free Speech and Transparency in a Digital Era


Russell L. Weaver at IMODEV: “Governmental openness and transparency are inextricably intertwined with freedom of expression. In order to scrutinize government, the people must have access to information regarding the functioning of government. As the U.S. Supreme Court has noted, “It is inherent in the nature of the political process that voters must be free to obtain information from diverse sources in order to determine how to cast their votes”. As one commentator noted, “Citizens need to understand what their government is doing in their name.”

Despite the need for transparency, the U.S. government has frequently functioned opaquely. For example, even though the U.S. Supreme Court is a fundamental component of the U.S. constitutional system, confirmation hearings for U.S. Supreme Court justices were held in secret for decades. That changed about a hundred years ago, when the U.S. Senate broke with tradition and began holding confirmation hearings in public. The results of this openness have been both interesting and enlightening: the U.S. citizenry has become much more interested and involved in the confirmation process, galvanizing and campaigning both for and against proposed nominees. Congress later decided to open up the administrative process as well. For more than a century, administrative agencies had not been required to notify the public of proposed actions, or to allow the public to have input on the policy choices reflected in proposed rules and regulations. That changed in 1946, when Congress adopted the federal Administrative Procedure Act (APA). For the creation of so-called “informal rules,” the APA required agencies to publish a NOPR (notice of proposed rulemaking) in the Federal Register, thereby providing the public with notice of the proposed rule. Congress required that the NOPR provide the public with various types of information, including “(1) a statement of the time, place, and nature of public rule making proceedings; (2) reference to the legal authority under which the rule is proposed; and (3) either the terms or substance of the proposed rule or a description of the subjects and issues involved.” In addition to allowing interested parties the opportunity to comment on NOPRs, and requiring agencies to “consider” those comments, the APA also required agencies to issue a “concise general statement” of the “basis and purpose” of any final rule that they issue. As with the U.S. Supreme Court’s confirmation processes, the APA’s rulemaking procedures led to greater citizen involvement in the rulemaking process. The APA also promoted openness by requiring administrative agencies to voluntarily disclose various types of internal information to the public, including “interpretative rules and statements of policy.”

Congress supplemented the APA in the 1960s when it enacted the federal Freedom of Information Act (FOIA). FOIA gave individuals and corporations a right of access to government-held information. As a “disclosure” statute, FOIA specifically provides that each agency, “upon any request for records which reasonably describes such records and is made in accordance with published rules stating the time, place, fees (if any), and procedures to be followed, shall make the records promptly available to any person.” Agencies are required to decide within twenty days whether to comply with a request, though the time limit can be tolled under certain circumstances. Although FOIA is a disclosure statute, it does not require disclosure of all governmental documents. In addition to FOIA, Congress also enacted the Federal Advisory Committee Act (FACA), the Government in the Sunshine Act, and amendments to FOIA, all of which were designed to enhance governmental openness and transparency. In addition, many state legislatures have adopted their own open records provisions that are similar to FOIA.

Despite these movements towards openness, advancements in speech technology have forced governments to become much more open and transparent than they have ever been. Some of this openness has been intentional, as governmental entities have used new speech technologies to communicate with the citizenry and enhance its understanding of governmental operations. However, some of this openness has taken place despite governmental resistance. The net effect is that free speech, and changes in communications technologies, have produced a society that is much more open and transparent. This article examines the relationship between free speech, the new technologies, and governmental openness and transparency….(More)”.

Rise of the Government Chatbot


Zack Quaintance at Government Technology: “A robot uprising has begun, except instead of overthrowing mankind so as to usher in a bleak yet efficient age of cold judgment and colder steel, this uprising is one of friendly robots (so far).

Which is all an alarming way to say that many state, county and municipal governments across the country have begun to deploy relatively simple chatbots, aimed at helping users get more out of online public services such as a city’s website, pothole reporting and open data. These chatbots have been installed in recent months in a diverse range of places including Kansas City, Mo.; North Charleston, S.C.; and Los Angeles — and by many indications, there is an accompanying wave of civic tech companies that are offering this tech to the public sector.

They range from simple to complex in scope, and most of the jurisdictions currently using them say they are doing so on somewhat of a trial or experimental basis. That’s certainly the case in Kansas City, where the city now has a Facebook chatbot to help users get more out of its open data portal.

“The idea was never to create a final chatbot that was super intelligent and amazing,” said Eric Roche, Kansas City’s chief data officer. “The idea was let’s put together a good effort, and put it out there and see if people find it interesting. If they use it, get some lessons learned and then figure out — either in our city, or with developers, or with people like me in other cities, other chief data officers and such — and talk about the future of this platform.”

Roche developed Kansas City’s chatbot earlier this year by working after hours with Code for Kansas City, the local Code for America brigade — and he did so because, in the four-plus years the city’s open data program has been active, there have been regular concerns that the info available through it was hard to navigate, search and use for average citizens who aren’t data scientists and don’t work for the city (a common issue currently being addressed by many jurisdictions). The idea behind the Facebook chatbot is that Roche can program it with a host of answers to the most prevalent questions, enabling it to both help interested users and save him time for other work….
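
A bot “programmed with a host of answers to the most prevalent questions” can be as simple as matching an incoming message against a stored question-and-answer table. The sketch below shows one minimal approach, keyword-overlap scoring; the Q&A pairs and threshold are hypothetical examples, not Kansas City’s actual content, and a production bot would sit behind the Facebook Messenger API rather than a local function call.

```python
# Minimal sketch of FAQ-style matching for an open data chatbot.
# The question/answer pairs and threshold are hypothetical examples.
FAQ = {
    "where can i find crime data": "Crime data lives on the open data portal under Public Safety.",
    "how do i report a pothole": "Pothole reports go through the city's 311 service.",
    "which datasets are most popular": "The most-viewed datasets are listed on the portal home page.",
}

def tokens(text):
    """Lowercase bag of words with basic punctuation stripped."""
    return set(text.lower().replace("?", "").replace(",", "").split())

def best_answer(message, faq=FAQ, min_overlap=2):
    """Return the answer whose stored question shares the most words with the message."""
    query = tokens(message)
    score, answer = max((len(query & tokens(q)), a) for q, a in faq.items())
    if score < min_overlap:  # nothing matches well: hand off to a human
        return "I'm not sure yet -- I'll flag this for a person to answer."
    return answer

print(best_answer("Where do I find crime data?"))
```

The fallback branch matters: when no stored question matches well, the bot should hand off to a human rather than guess, which is also how the "lessons learned" Roche mentions accumulate.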

In North Charleston, S.C., the city has adopted a text-based chatbot, which goes beyond common 311-style interfaces by allowing users to report potholes or any other lapses in city services they may notice. It also allows them to ask questions, which it subsequently answers by crawling city websites and replying with relevant links, said Ryan Johnson, the city’s public relations coordinator.

North Charleston has done this by partnering with a local tech startup that has deep roots in the area’s local government. The company is called Citibot …

With Citibot, residents can report a pothole at 2 a.m., or they can get info about street signs or trash pickup sent right to their phones.

There are also more complex chatbot technologies taking hold at both the city and state levels, in Los Angeles and Mississippi, to be exact.

Mississippi’s chatbot is called Missi, and its capabilities are vast and nuanced. Residents can even use it for help submitting online payments. It’s accessible by clicking a small chat icon on the side of the website.

Back in May, Los Angeles rolled out Chip, or City Hall Internet Personality, on the Los Angeles Business Assistance Virtual Network. The chatbot operates as a 24/7 digital assistant for visitors to the site, helping them navigate it and better understand its services by answering their inquiries. It is capable of presenting info from anywhere on the site, and it can even go so far as helping users fill out forms or set up email alerts….(More)”

Algorithmic Transparency for the Smart City


Paper by Robert Brauneis and Ellen P. Goodman: “Emerging across many disciplines are questions about algorithmic ethics – about the values embedded in artificial intelligence and big data analytics that increasingly replace human decisionmaking. Many are concerned that an algorithmic society is too opaque to be accountable for its behavior. An individual can be denied parole or denied credit, fired or not hired for reasons she will never know and that cannot be articulated. In the public sector, the opacity of algorithmic decisionmaking is particularly problematic, both because governmental decisions may be especially weighty and because democratically-elected governments bear special duties of accountability. Investigative journalists have recently exposed the dangerous impenetrability of algorithmic processes used in the criminal justice field – dangerous because the predictions they make can be both erroneous and unfair, with none the wiser.

We set out to test the limits of transparency around governmental deployment of big data analytics, focusing our investigation on local and state government use of predictive algorithms. It is here, in local government, that algorithmically-determined decisions can be most directly impactful. And it is here that stretched agencies are most likely to hand over the analytics to private vendors, which may make design and policy choices out of the sight of the client agencies, the public, or both. To see just how impenetrable the resulting “black box” algorithms are, we filed 42 open records requests in 23 states seeking essential information about six predictive algorithm programs. We selected the most widely-used and well-reviewed programs, including those developed by for-profit companies, nonprofits, and academic/private sector partnerships. The goal was to see if, using the open records process, we could discover what policy judgments these algorithms embody, and could evaluate their utility and fairness.

To do this work, we identified what meaningful “algorithmic transparency” entails. We found that in almost every case, it wasn’t provided. Over-broad assertions of trade secrecy were a problem. But contrary to conventional wisdom, they were not the biggest obstacle. It will not usually be necessary to release the code used to execute predictive models in order to dramatically increase transparency. We conclude that publicly-deployed algorithms will be sufficiently transparent only if (1) governments generate appropriate records about their objectives for algorithmic processes and subsequent implementation and validation; (2) government contractors reveal to the public agency sufficient information about how they developed the algorithm; and (3) public agencies and courts treat trade secrecy claims as the limited exception to public disclosure that the law requires. Although it would require a multi-stakeholder process to develop best practices for record generation and disclosure, we present what we believe are eight principal types of information that such records should ideally contain….(More)”.

Why We Should Care About Bad Data


Blog by Stefaan G. Verhulst: “At a time of open and big data, data-led and evidence-based policy making has great potential to improve problem solving but will have limited, if not harmful, effects if the underlying components are riddled with bad data.

Why should we care about bad data? What do we mean by bad data? And what are the factors contributing to bad data that, if understood and addressed, could help prevent or tackle it? These questions were the subject of my short presentation during a recent webinar on Bad Data: The Hobgoblin of Effective Government, hosted by the American Society for Public Administration and moderated by Richard Greene (Partner, Barrett and Greene Inc.). Other panelists included Ben Ward (Manager, Information Technology Audits Unit, California State Auditor’s Office) and Katherine Barrett (Partner, Barrett and Greene Inc.). The webinar was a follow-up to the excellent Special Issue of Governing on Bad Data written by Richard and Katherine….(More)”

Open Participatory Security: Unifying Technology, Citizens, and the State


Book by Jesse Paul Lehrke: “Our modern security systems have recently come under a lot of criticism: as too bureaucratic and unadaptable, too secretive and untrustworthy, and too obsessed with information technology rather than human needs. Yet listing failures is easy; security is never perfect. The question is why current approaches fail and whether there are viable alternatives. The root of their shortcomings is in the interaction of the very pillars of our security system in the contemporary context. While our enemies have adopted the technologies of the Information Age, changing how they organize and fight, these same technologies have only created more vulnerabilities for states. Governments have been generally unwilling to maximize their use of these technologies because it would require the wider release of information and the opening of organizational structures to include society in security making. Yet countering diffuse modern threats striking deep into our states and across our economies requires mobilizing the diffuse skills and variation of modern society. Open approaches for mobilizing participation and coproduction have the capabilities needed to improve contemporary security policy making, problem solving, and provision. Moreover, open participatory security can be effective not only for technical security, but also for restoring trust among the citizens and rebuilding the legitimacy of the state….(More)”

Rage against the machines: is AI-powered government worth it?


Maëlle Gavet at the WEF: “…the Australian government’s new “data-driven profiling” trial for drug testing welfare recipients, to US law enforcement’s use of facial recognition technology and the deployment of proprietary software in sentencing in many US courts … almost by stealth and with remarkably little outcry, technology is transforming the way we are policed, categorized as citizens and, perhaps one day soon, governed. We are only in the earliest stages of so-called algorithmic regulation — intelligent machines deploying big data, machine learning and artificial intelligence (AI) to regulate human behaviour and enforce laws — but it already has profound implications for the relationship between private citizens and the state….

Some may herald this as democracy rebooted. In my view it represents nothing less than a threat to democracy itself — and deep scepticism should prevail. There are five major problems with bringing algorithms into the policy arena:

  1. Self-reinforcing bias…
  2. Vulnerability to attack…
  3. Who’s calling the shots?…
  4. Are governments up to it?…
  5. Algorithms don’t do nuance….

All the problems notwithstanding, there’s little doubt that AI-powered government of some kind will happen. So, how can we avoid it becoming the stuff of bad science fiction? To begin with, we should leverage AI to explore positive alternatives instead of just applying it to support traditional solutions to society’s perceived problems. Rather than simply finding and sending criminals to jail faster in order to protect the public, how about using AI to figure out the effectiveness of other potential solutions? Offering young adult literacy, numeracy and other skills might well represent a far superior and more cost-effective solution to crime than more aggressive law enforcement. Moreover, AI should always be used at a population level, rather than at the individual level, in order to avoid stigmatizing people on the basis of their history, their genes and where they live. The same goes for the more subtle, yet even more pervasive data-driven targeting by prospective employers, health insurers, credit card companies and mortgage providers. While the commercial imperative for AI-powered categorization is clear, when it targets individuals it amounts to profiling with the inevitable consequence that entire sections of society are locked out of opportunity….(More)”.

Charities are underestimating the importance of trust. That’s a problem.


Jill Halford & Neil Sherlock at NPC: “A growing mistrust and scepticism of organisations, experts and leaders has become a defining feature of recent times, causing many to question established truths that they’ve traditionally held dear. Against a backdrop of increasing volumes of data and commentary, amplified by social media, and the rise of ‘fake news’, it has become much harder for the public to both know who the experts are and to trust them to get things right. This directly impacts many charities who are themselves experts in their field and rely on the public to listen to and respond to their advice. In an increasingly digitalised world, there’s a sense that it is harder to gain and retain trust. There are growing concerns among CEOs about the impact of social media on the level of trust in their industry.


The questioning of experts is underpinned by a pervading sense that many actors are driven by hidden or ulterior motives, perhaps making some people less willing to trust organisations and their leaders. The Edelman Trust Barometer 2017 finds that 60% of the UK public think ‘the system’ is failing. This is defined as feeling a sense of injustice, a lack of hope and confidence and a desire for change. There is an emerging view that everyone from politicians, to businesses to charities need to do more to explain what they do and how it benefits both individuals and wider society….Public polling for the Charity Commission showed that the overall level of trust and confidence in charities fell from 6.7 out of 10 in 2012 and 2014 to 5.7 in 2016. This is a trend that is also reflected in the Edelman Trust Barometer 2017. Meanwhile other studies suggest that trust is bouncing back.


…Trust is often an overlooked asset for charities. For many organisations, trust typically only comes onto the agenda when things are going wrong. NPC’s State of the Sector research report Charities taking charge shows that nearly a third of charity leaders think a loss of trust in the sector would have no effect on their organisation. The research also finds that charities associate trust narrowly with fundraising, rather than taking a more holistic view of it.


But trust matters deeply to people, and so it should matter to the organisations that serve them. Trust is considered a fundamental prerequisite of effective human interaction and meaningful, constructive relationships. It is the ‘glue’ that binds society and the economy together. There is a clear need for all organisations to take a broader view of trust. While those charities that rely on fundraising may feel that they need to be more concerned with public trust than a philanthropic foundation, for example, trust impacts a charity in many ways. People’s trust in an organisation, for instance, can fundamentally shape their behaviour and actions towards it. This can include trusting an organisation with your data and personal information, being more willing to collaborate and engage, and listening to and acting on advice and expertise.

Trust is a powerful asset for organisations in four specific ways:

  • trust drives performance;
  • trust allows organisations to be true to themselves;
  • trust can help win round stakeholder scepticism; and
  • trust can put organisations on the front foot in a crisis that will inevitably happen at some point, positioning them in a better place to recover.

All four of these should resonate with charities as they seek to deliver greater impact in line with their values and ethos….(More).

How AI Is Crunching Big Data To Improve Healthcare Outcomes


PSFK: “The state of your health shouldn’t be a mystery, nor should patients or doctors have to wait long to find answers to pressing medical concerns. In PSFK’s Future of Health Report, we dig deep into the latest in AI, big data algorithms and IoT tools that are enabling a new, more comprehensive overview of patient data collection and analysis. Machine support, patient information from medical records and conversations with doctors are combined with the latest medical literature to help form a diagnosis without detracting from doctor-patient relations.

Improved AI helps patients form a baseline for well-being and is driving change across the healthcare industry. AI not only streamlines intake processes and reduces processing volume at clinics, it also curbs input and diagnostic errors within a patient record, allowing doctors to focus on patient care and communication, rather than data entry. AI also improves pattern recognition and early diagnosis by learning from multiple patient data sets.

By utilizing deep learning algorithms and software, healthcare providers can connect various libraries of medical information and scan databases of medical records, spotting patterns that lead to more accurate detection and greater efficiency in medical diagnosis and research. IBM Watson, which has previously been used to help identify genetic markers and develop drugs, is applying its neural networks to help doctors correctly diagnose heart abnormalities from medical imaging tests. By scanning thousands of images and learning from correct diagnoses, Watson is able to increase diagnostic accuracy, supporting doctors’ cardiac assessments.

Outside of the doctor’s office, AI is also being used to monitor patient vitals to help create a baseline for well-being. By monitoring health on a day-to-day basis, AI systems can alert patients and medical teams to abnormalities or changes from the baseline in real time, increasing positive outcomes. Take xbird, a mobile platform that uses artificial intelligence to help diabetics understand when hypoglycemic attacks will occur. The AI combines personal and environmental data points from over 20 sensors within mobile and wearable devices to create an automated personal diary and cross-references it against blood sugar levels. Patients then share this data with their doctors in order to uncover their unique hypoglycemic triggers and better manage their condition.
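
The baseline-and-alert pattern described here, learning a personal baseline from day-to-day readings and flagging deviations in real time, can be sketched with a rolling window and a z-score threshold. In the illustrative Python below, the window size, threshold and glucose readings are assumptions for demonstration, not xbird’s actual method.

```python
from collections import deque
from statistics import mean, stdev

class BaselineMonitor:
    """Track one vital sign; flag readings that stray from the recent baseline."""

    def __init__(self, window=20, z_threshold=2.5):
        self.readings = deque(maxlen=window)  # rolling baseline window
        self.z_threshold = z_threshold        # std devs that count as abnormal

    def update(self, value):
        """Record a reading; return an alert message if it deviates from baseline."""
        alert = None
        if len(self.readings) >= 5:  # need a few readings before a baseline exists
            mu, sigma = mean(self.readings), stdev(self.readings)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                alert = f"reading {value} deviates from baseline {mu:.1f}"
        self.readings.append(value)
        return alert

# Illustrative blood glucose readings in mg/dL; the final low value triggers an alert.
monitor = BaselineMonitor()
for glucose in [95, 98, 102, 97, 99, 101, 96, 100, 62]:
    msg = monitor.update(glucose)
    if msg:
        print("ALERT:", msg)
```

A real system would combine many sensor streams and per-patient calibration, but the core loop (update the baseline, score the new reading against it, alert on outliers) is the same.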

In China, meanwhile, web provider Baidu has debuted Melody, a chat-based medical assistant that helps individuals communicate their symptoms, learn of possible diagnoses and connect to medical experts….(More)”.