Can government stop losing its mind?


Report by Gavin Starks: “Can government remember? Is it condemned to repeat mistakes? Or does it remember too much and so see too many reasons why anything new is bound to fail?

While we are at the beginnings of a data revolution, we are also at a point where the deluge of data is creating the potential for an ‘information collapse’ in complex administrations: structured information and knowledge are lost in the noise or, worse, misinformation rises as fact.

There are many reasons for this: the technical design of systems, turnover of people, and contracting out. Information is stored in silos and often guarded jealously. Cultural and process issues lead to poor use of technologies. Knowledge is both formal (codified) and informal (held in people’s brains). The greatest value will be unlocked by combining these with existing and emerging tools.

This report sets out how the public sector could benefit from a federated, data-driven approach: one that provides greater power to its leaders, benefits its participants and users, and improves performance through better use of, and structured access to, data.

The report explores examples from the Open Data Institute, Open Banking Standard, BBC Archives, Ministry of Justice, NHS Blood and Transplant, Defence Science and Technology Laboratory and Ministry of Defence.

Recommendations:

  1. Design for open; build for search
  2. Build reciprocity into data supply chains
  3. Develop data ethics standards that can evolve at pace
  4. Create a Digital Audit Office
  5. Develop and value a culture of network thinking

To shorten the path between innovation and policy in a way that is repeatable and scalable, the report proposes that six areas of focus be considered in any implementation design.

  1. Policy: Providing strategic leadership and governance; framing and analysing economic, legal and regulatory impacts (e.g. GDPR, data ethics, security) and highlighting opportunities and threats.
  2. Culture: Creating compelling peer, press and public communication and engagement that both address concerns and inspire people to engage in the solutions.
  3. Making: Commissioning startups, running innovation competitions and programmes to create practice-based evidence that illustrates the challenges and business opportunities.
  4. Learning: Creating training materials that aid implementation and defining evidence-based sustainable business models that are anchored around user needs.
  5. Standards: Defining common human and machine processes that enable both repeatability and scale within commercial and non-commercial environments.
  6. Infrastructure: Defining and framing how people and machines will use data, algorithms and open APIs to create sustainable impact….(More)”.

Blockchain To Solve Bahamas’ ‘Major Workforce Waste’


Tribune 242: “The Government’s first-ever use of blockchain technology will tackle what was yesterday branded “an enormous waste of human capital”.

The Inter-American Development Bank (IDB), unveiling a $200,000 ‘technical co-operation’ project, revealed that the Minnis administration plans to deploy the technology as a way to determine the success of an apprenticeship programme targeted at 1,350 Bahamians aged between 16 and 40 who are either unemployed or school leavers.

Documents obtained by Tribune Business reveal that the Government is also looking to blockchain to combat the widespread problem of lost/missing student records and certifications, which the IDB described as a major constraint to developing a skilled, productive Bahamian workforce.

“Currently, the certification process in the Bahamas lacks technological advances,” the IDB report said. “Today, student records management is a lengthy and cumbersome process. Students do not own their own records of achievement, depending on issuing institutions to verify their achievements throughout their lives.

“This results not only in a verification process that can last weeks or months, and involves hours of human labour and (fallible) judgment, but also creates inefficiencies in placing new students and processing transfer equivalencies.

“In extreme cases, when the issuing institution goes out of business, loses their records or is destroyed due to natural disasters, students have no way of verifying their achievements and must often start from nothing. This results in an enormous waste of human capital.”

The IDB report said the Bahamas was now “in a singular position to highlight the value of blockchain-based digital records for both students and institutions”, with the technology seen as a mechanism for Bahamians to possess and share records of their educational achievements. Blockchain technology allows information to be recorded, shared and updated by a particular community, with each member maintaining their own copy of data that has to be verified collectively.

Anything that can be described in digital form, such as contracts, transactions and assets, could thus be suitable for blockchain solutions. And Blockcerts, the open-standard for creating, issuing and verifying blockchain-based certificates, ensures they are tamper-proof. “Not only does the Blockcerts standard (open standard for digital documents anchored to the blockchain) allow Bahamian institutions to prevent records fraud, safeguarding and building confidence in their brands, but it allows them to leapfrog the digitisation process, skipping many of the interoperability issues associated with legacy digital formats (i.e. PDF, XML),” the IDB report said.

“Blockcerts provides students with autonomy, privacy, security and greater access all over the world, while allowing the Bahamian government to consolidate and streamline its credentialing operations in a way that produces real return on investment over a period. Primary use cases include: Student diplomas, professional certifications, awards, transcripts, enrollment verification, employment verification, verifications of qualifications, credit equivalencies and more.”…(More)”.
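
The tamper-evidence Blockcerts relies on can be sketched in a few lines: the issuer anchors a cryptographic hash of each credential to a blockchain, and a verifier recomputes the hash and compares. Below is a minimal sketch of that check; the function names and record fields are illustrative, not the actual Blockcerts API.

```python
# Minimal sketch of hash-anchored credentials: any edit to the record
# changes its hash, so tampering is detectable against the on-chain anchor.
# Names and fields are illustrative, not the Blockcerts standard itself.
import hashlib
import json

def certificate_hash(certificate: dict) -> str:
    """Hash the certificate's canonical JSON form with SHA-256."""
    canonical = json.dumps(certificate, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify(certificate: dict, anchored_hash: str) -> bool:
    """A record verifies only if its hash matches the anchored one."""
    return certificate_hash(certificate) == anchored_hash

diploma = {"student": "J. Doe", "award": "BSc Marine Biology", "year": 2018}
anchor = certificate_hash(diploma)       # issuer anchors this hash on-chain

assert verify(diploma, anchor)           # the untouched record verifies
diploma["award"] = "PhD Marine Biology"  # any edit changes the hash...
assert not verify(diploma, anchor)       # ...so tampering is detected
```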

Data in the EU: Commission steps up efforts to increase availability and boost healthcare data sharing


Press Release: “Today, the European Commission is putting forward a set of measures to increase the availability of data in the EU, building on previous initiatives to boost the free flow of non-personal data in the Digital Single Market.

Data-driven innovation is a key enabler of market growth, job creation, particularly for SMEs and startups, and the development of new technologies. It allows citizens to easily access and manage their health data, and allows public authorities to use data better in research, prevention and health system reforms….

Today’s proposals build on the General Data Protection Regulation (GDPR), which will enter into application as of 25 May 2018. They will ensure:

  • Better access to and reusability of public sector data: A revised law on Public Sector Information covers data held by public undertakings in the transport and utilities sectors. The new rules limit the exceptions that allow public bodies to charge more than the marginal costs of data dissemination for the reuse of their data. They also facilitate the reusability of open research data resulting from public funding, and oblige Member States to develop open access policies. Finally, the new rules require – where applicable – technical solutions like Application Programming Interfaces (APIs) to provide real-time access to data.
  • Scientific data sharing in 2018: A new set of recommendations addresses the policy and technological changes since the last Commission proposal on access to and preservation of scientific information. They offer guidance on implementing open access policies in line with open science objectives, research data and data management, the creation of a European Open Science Cloud, and text and data-mining. They also highlight the importance of incentives, rewards, skills and metrics appropriate for the new era of networked research.
  • Private sector data sharing in business-to-business and business-to-governments contexts: A new Communication entitled “Towards a common European data space” provides guidance for businesses operating in the EU on the legal and technical principles that should govern data sharing collaboration in the private sector.
  • Securing citizens’ healthcare data while fostering European cooperation: The Commission is today setting out a plan of action that puts citizens first when it comes to data on citizens’ health: by securing citizens’ access to their health data and introducing the possibility to share their data across borders; by using larger data sets to enable more personalised diagnoses and medical treatment, and better anticipate epidemics; and by promoting appropriate digital tools, allowing public authorities to better use health data for research and for health system reforms. Today’s proposal also covers the interoperability of electronic health records as well as a mechanism for voluntary coordination in sharing data – including genomic data – for disease prevention and research….(More)”.

How Artificial Intelligence Could Increase the Risk of Nuclear War


Rand Corporation: “The fear that computers, by mistake or malice, might lead humanity to the brink of nuclear annihilation has haunted imaginations since the earliest days of the Cold War.

The danger might soon be more science than fiction. Stunning advances in AI have created machines that can learn and think, provoking a new arms race among the world’s major nuclear powers. It’s not the killer robots of Hollywood blockbusters that we need to worry about; it’s how computers might challenge the basic rules of nuclear deterrence and lead humans into making devastating decisions.

That’s the premise behind a new paper from RAND Corporation, How Might Artificial Intelligence Affect the Risk of Nuclear War? It’s part of a special project within RAND, known as Security 2040, to look over the horizon and anticipate coming threats.

“This isn’t just a movie scenario,” said Andrew Lohn, an engineer at RAND who coauthored the paper and whose experience with AI includes using it to route drones, identify whale calls, and predict the outcomes of NBA games. “Things that are relatively simple can raise tensions and lead us to some dangerous places if we are not careful.”…(More)”.

How artificial intelligence is transforming the world


Report by Darrell West and John Allen at Brookings: “Most people are not very familiar with the concept of artificial intelligence (AI). As an illustration, when 1,500 senior business leaders in the United States were asked about AI in 2017, only 17 percent said they were familiar with it. A number of them were not sure what it was or how it would affect their particular companies. They understood there was considerable potential for altering business processes, but were not clear how AI could be deployed within their own organizations.

Despite this widespread lack of familiarity, AI is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decisionmaking. Our hope through this comprehensive overview is to explain AI to an audience of policymakers, opinion leaders, and interested observers, and demonstrate how AI already is altering the world and raising important questions for society, the economy, and governance.

In this paper, we discuss novel applications in finance, national security, health care, criminal justice, transportation, and smart cities, and address issues such as data access problems, algorithmic bias, AI ethics and transparency, and legal liability for AI decisions. We contrast the regulatory approaches of the U.S. and European Union, and close by making a number of recommendations for getting the most out of AI while still protecting important human values.

In order to maximize AI benefits, we recommend nine steps for going forward:

  • Encourage greater data access for researchers without compromising users’ personal privacy,
  • invest more government funding in unclassified AI research,
  • promote new models of digital education and AI workforce development so employees have the skills needed in the 21st-century economy,
  • create a federal AI advisory committee to make policy recommendations,
  • engage with state and local officials so they enact effective policies,
  • regulate broad AI principles rather than specific algorithms,
  • take bias complaints seriously so AI does not replicate historic injustice, unfairness, or discrimination in data or algorithms,
  • maintain mechanisms for human oversight and control, and
  • penalize malicious AI behavior and promote cybersecurity….(More)

Table of Contents
I. Qualities of artificial intelligence
II. Applications in diverse sectors
III. Policy, regulatory, and ethical issues
IV. Recommendations
V. Conclusion

Use our personal data for the common good


Hetan Shah at Nature: “Data science brings enormous potential for good — for example, to improve the delivery of public services, and even to track and fight modern slavery. No wonder researchers around the world — including members of my own organization, the Royal Statistical Society in London — have had their heads in their hands over headlines about how Facebook and the data-analytics company Cambridge Analytica might have handled personal data. We know that trustworthiness underpins public support for data innovation, and we have just seen what happens when that trust is lost…. But how else might we ensure the use of data for the public good rather than for purely private gain?

Here are two proposals towards this goal.

First, governments should pass legislation to allow national statistical offices to gain anonymized access to large private-sector data sets under openly specified conditions. This provision was part of the United Kingdom’s Digital Economy Act last year and will improve the ability of the UK Office for National Statistics to assess the economy and society for the public interest.

My second proposal is inspired by the legacy of John Sulston, who died earlier this month. Sulston was known for his success in advocating for the Human Genome Project to be openly accessible to the science community, while a competitor sought to sequence the genome first and keep data proprietary.

Like Sulston, we should look for ways of making data available for the common interest. Intellectual-property rights expire after a fixed time period: what if, similarly, technology companies were allowed to use the data that they gather only for a limited period, say, five years? The data could then revert to a national charitable corporation that could provide access to certified researchers, who would both be held to account and be subject to scrutiny that ensures the data are used for the common good.

Technology companies would move from being data owners to becoming data stewards…(More)” (see also http://datacollaboratives.org/).
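
To make the proposal concrete, here is a hypothetical sketch of the reversion rule: usage rights sit with the collecting company for five years and then pass to the steward. The names and the five-year constant simply follow the proposal above; nothing here is an existing system.

```python
# Hypothetical sketch of the five-year reversion rule proposed above: data
# a company gathers stays usable by it for five years, after which access
# reverts to a national data steward for certified researchers.
from datetime import date, timedelta

STEWARDSHIP_AFTER = timedelta(days=5 * 365)

def current_custodian(collected_on: date, company: str, today: date) -> str:
    """Who may use a record: the collecting company, then the steward."""
    if today - collected_on < STEWARDSHIP_AFTER:
        return company
    return "national data steward (certified researchers)"

record_date = date(2013, 4, 1)
print(current_custodian(record_date, "ExampleCorp", date(2018, 4, 25)))
# -> national data steward (certified researchers)
```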

Leveraging the Power of Bots for Civil Society


Allison Fine & Beth Kanter  at the Stanford Social Innovation Review: “Our work in technology has always centered around making sure that people are empowered, healthy, and feel heard in the networks within which they live and work. The arrival of the bots changes this equation. It’s not enough to make sure that people are heard; we now have to make sure that technology adds value to human interactions, rather than replacing them or steering social good in the wrong direction. If technology creates value in a human-centered way, then we will have more time to be people-centric.

So before the bots become involved with almost every facet of our lives, it is incumbent upon those of us in the nonprofit and social-change sectors to start a discussion on how we both hold on to and lead with our humanity, as opposed to allowing the bots to lead. We are unprepared for this moment, and it does not feel like an overstatement to say that the future of humanity relies on our ability to make sure we’re in charge of the bots, not the other way around.

To Bot or Not to Bot?

History shows us that bots can be used in positive ways. Early adopter nonprofits have used bots to automate civic engagement, such as helping citizens register to vote, contact their elected officials, and elevate marginalized voices and issues. And nonprofits are beginning to use online conversational interfaces like Alexa for social good engagement. For example, the Audubon Society has released an Alexa skill to teach bird calls.

And for over a decade, Invisible People founder Mark Horvath has been providing “virtual case management” to homeless people who reach out to him through social media. Horvath says homeless agencies can use chat bots programmed to deliver basic information to people in need, and thus help them connect with services. This reduces the workload for case managers while making data entry more efficient. He explains that it works like an airline reservation: The homeless person completes the “paperwork” for services by interacting with a bot and then later shows their ID at the agency. Bots can greatly reduce the need for a homeless person to wait long hours to get needed services. Certainly this is a much more compassionate use of bots than robot security guards who harass homeless people sleeping in front of a business.
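
Horvath’s airline-reservation analogy translates directly into code: a bot walks the person through the basic intake questions, stores the answers, and leaves only the ID check for the agency visit. The sketch below is a hypothetical illustration, not any real agency’s system.

```python
# Hypothetical sketch of a "reservation-style" intake bot: it gathers the
# basic paperwork answers up front so a case manager only verifies ID later.
INTAKE_QUESTIONS = [
    ("name", "What name should we use for you?"),
    ("need", "What do you need today (shelter, meals, help with ID)?"),
    ("location", "What part of town are you in right now?"),
]

def run_intake(answer_fn) -> dict:
    """Ask each question in turn; answer_fn supplies the person's replies."""
    record = {field: answer_fn(question) for field, question in INTAKE_QUESTIONS}
    record["status"] = "awaiting ID check at agency"
    return record

# In a live chat channel answer_fn would read the user's messages;
# here one conversation is simulated.
replies = iter(["Sam", "shelter", "downtown"])
print(run_intake(lambda question: next(replies)))
```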

But there are also examples where a bot’s usefulness seems limited. A UK-based social service charity, Mencap, which provides support and services to children with learning disabilities and their parents, has a chatbot on its website as part of a public education effort called #HereIAm. The campaign is intended to help people understand more about what it’s like having a learning disability, through the experience of a “learning disabled” chatbot named Aeren. However, this bot can only answer questions, not ask them, and it doesn’t become smarter through human interaction. Is this the best way for people to understand the nature of being learning disabled? Is it making the difficulties feel more or less real for the inquirers? It is clear Mencap thinks the interaction is valuable, as they reported a 3 percent increase in awareness of their charity….

The following discussion questions are the start of conversations we need to have within our organizations and as a sector on the ethical use of bots for social good:

  • What parts of our work will benefit from greater efficiency without reducing the humanness of our efforts? (“Humanness” meaning the power and opportunity for people to learn from and help one another.)
  • Do we have a privacy policy for the use and sharing of data collected through automation? Does the policy emphasize protecting the data of end users? Is the policy easily accessible by the public?
  • Do we make it clear to the people using the bot when they are interacting with a bot?
  • Do we regularly include clients, customers, and end users as advisors when developing programs and services that use bots for delivery?
  • Should bots designed for service delivery also have fundraising capabilities? If so, can we ensure that our donors are not emotionally coerced into giving more than they want to?
  • In order to truly understand our clients’ needs, motivations, and desires, have we designed our bots’ conversational interactions with empathy and compassion, or involved social workers in the design process?
  • Have we planned for weekly checks of the data generated by the bots to ensure that we are staying true to our values and original intentions, as AI helps them learn?….(More)”.

Smart cities need thick data, not big data


Adrian Smith at The Guardian: “…The Smart City is an alluring prospect for many city leaders. Even if you haven’t heard of it, you may have already joined in by looking up bus movements on your phone, accessing Council services online or learning about air contamination levels. By inserting sensors across city infrastructures and creating new data sources – including citizens via their mobile devices – Smart City managers can apply Big Data analysis to monitor and anticipate urban phenomena in new ways, and, so the argument goes, efficiently manage urban activity for the benefit of ‘smart citizens’.

Barcelona has been a pioneering Smart City. The Council’s business partners have been installing sensors and opening data platforms for years. Not everyone is comfortable with this technocratic turn. After Ada Colau was elected Mayor on a mandate of democratising the city and putting citizens centre-stage, digital policy has sought to go ‘beyond the Smart City’. Chief Technology Officer Francesca Bria is opening digital platforms to greater citizen participation and oversight. Worried that the city’s knowledge was being ceded to tech vendors, the Council now promotes technological sovereignty.

On the surface, the noise project in Plaça del Sol is an example of such sovereignty. It even features in Council presentations. Look more deeply, however, and it becomes apparent that neighbourhood activists are really appropriating new technologies into the old-fashioned politics of community development….

What made Plaça del Sol stand out can be traced to a group of technology activists who got in touch with residents early in 2017. The activists were seeking participants in their project called Making Sense, which sought to resurrect a struggling ‘Smart Citizen Kit’ for environmental monitoring. The idea was to provide residents with the tools to measure noise levels, compare them with officially permissible levels, and reduce noise in the square. More than 40 neighbours signed up and installed 25 sensors on balconies and inside apartments.
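
The comparison at the heart of the kit is simple to state in code: flag the hours whose readings exceed the permitted level. Here is a small sketch under assumed numbers; the 55 dB threshold and the readings are placeholders, not Barcelona’s actual limits or data.

```python
# Small sketch of what the residents' sensors enable: compare hourly noise
# readings against a permitted level and report the hours in breach.
# The 55 dB threshold and the readings below are illustrative placeholders.
PERMITTED_DB = 55.0

def hours_in_breach(hourly_readings_db):
    """Return the indices of hours whose level exceeds the permitted one."""
    return [hour for hour, level in enumerate(hourly_readings_db)
            if level > PERMITTED_DB]

overnight = [62.1, 64.3, 58.0, 54.2, 51.7, 49.9]   # dB, midnight onwards
breaches = hours_in_breach(overnight)
print(f"{len(breaches)} of {len(overnight)} hours above {PERMITTED_DB} dB: {breaches}")
```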

The neighbours had what project coordinator Mara Balestrini from Ideas for Change calls ‘a matter of concern’. The earlier Smart Citizen Kit had begun as a technological solution looking for a problem: a crowd-funded gadget for measuring pollution, whose data users could upload to a web-platform for comparison with information from other users. Early adopters found the technology trickier to install than developers had presumed. Even successful users stopped monitoring because there was little community purpose. A new approach was needed. Noise in Plaça del Sol provided a problem for this technology fix….

Anthropologist Clifford Geertz argued many years ago that situations can only be made meaningful through ‘thick description’. Applied to the Smart City, this means data cannot really be explained and used without understanding the contexts in which it arises and gets used. Data can only mobilise people and change things when it becomes thick with social meaning….(More)”

The citation graph is one of humankind’s most important intellectual achievements


Dario Taraborelli at BoingBoing: “When researchers write, we don’t just describe new findings — we place them in context by citing the work of others. Citations trace the lineage of ideas, connecting disparate lines of scholarship into a cohesive body of knowledge, and forming the basis of how we know what we know.

Today, citations are also a primary source of data. Funders and evaluation bodies use them to appraise scientific impact and decide which ideas are worth funding to support scientific progress. Because of this, data that forms the citation graph should belong to the public. The Initiative for Open Citations was created to achieve this goal.
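
At bottom, the citation graph is a directed graph: papers are nodes, references are edges, and the citation counts that evaluators rely on are just in-degrees. A toy sketch with invented DOIs:

```python
# Toy sketch of the citation graph as data: each paper lists the papers it
# cites, and citation counts fall out as in-degrees. DOIs are invented.
from collections import Counter

cites = {
    "10.1000/a": ["10.1000/b", "10.1000/c"],
    "10.1000/b": ["10.1000/c"],
    "10.1000/c": [],
}

# In-degree = number of times each paper is cited.
citation_counts = Counter(ref for refs in cites.values() for ref in refs)
print(citation_counts.most_common())   # [('10.1000/c', 2), ('10.1000/b', 1)]
```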

Back in the 1950s, reference works like Shepard’s Citations provided lawyers with tools to reconstruct which relevant cases to cite in the context of a court trial. No such tool existed at the time for identifying citations in scientific publications. Eugene Garfield — the pioneer of modern citation analysis and citation indexing — described the idea of extending this approach to science and engineering as his Eureka moment. Garfield’s first experimental Genetics Citation Index, compiled by the newly-formed Institute for Scientific Information (ISI) in 1961, offered a glimpse into what a full citation index could mean for science at large. It was distributed, for free, to 1,000 libraries and scientists in the United States.

Fast forward to the end of the 20th century: the Web of Science citation index — maintained by Thomson Reuters, which acquired ISI in 1992 — has become the canonical source for scientists, librarians, and funders to search scholarly citations, and for the field of scientometrics to study the structure and evolution of scientific knowledge. ISI could have turned into a publicly funded initiative, but it started instead as a for-profit effort. In 2016, Thomson Reuters sold its Intellectual Property & Science business to a private-equity fund for $3.55 billion. Its citation index is now owned by Clarivate Analytics.

Since raw citation data is not copyrightable, it is ironic that the vision of building a comprehensive index of scientific literature has turned into a billion-dollar business, with academic institutions paying cripplingly expensive annual subscriptions for access and the public locked out.

Enter the Initiative for Open Citations.

In 2016, a small group founded the Initiative for Open Citations (I4OC) as a voluntary effort to work with scholarly publishers — who routinely publish this data — to persuade them to release it in the open and promote its unrestricted availability. Before the launch of the I4OC, only 1% of indexed scholarly publications with references were making citation data available in the public domain. When the I4OC was officially announced in 2017, we were able to report that this number had shifted from 1% to 40%. In the main, this was thanks to the swift action of a small number of large academic publishers.

In April 2018, we are celebrating the first anniversary of the initiative. Since the launch, the fraction of indexed scientific articles with open citation data (as measured by Crossref) has surpassed 50%, and the number of participating publishers has risen to 490. Over half a billion references are now openly available to the public without any copyright restriction. Of the top 20 biggest publishers with citation data, all but 5 — Elsevier, IEEE, Wolters Kluwer Health, IOP Publishing, ACS — now make this data open via Crossref and its APIs. Over 50 organisations — including science funders, platforms and technology organizations, libraries, research and advocacy institutions — have joined us in this journey to help advocate and promote the reuse of open citations….(More)”.

Everything* You Always Wanted To Know About Blockchain (But Were Afraid To Ask)


Alice Meadows at the Scholarly Kitchen: “In this interview, Joris van Rossum (Director of Special Projects, Digital Science) and author of Blockchain for Research, and Martijn Roelandse (Head of Publishing Innovation, Springer Nature), discuss blockchain in scholarly communications, including the recently launched Peer Review Blockchain initiative….

How would you describe blockchain in one sentence?

Joris: Blockchain is a technology for decentralized, self-regulating data which can be managed and organized in a revolutionary new way: open, permanent, verified and shared, without the need of a central authority.

How does it work (in layman’s language!)?

Joris: In a regular database you need a gatekeeper to ensure that whatever is stored in a database (financial transactions, but this could be anything) is valid. However, with blockchain, trust is not created by means of a curator, but through consensus mechanisms and cryptographic techniques. Consensus mechanisms clearly define what new information is allowed to be added to the datastore. With the help of a technology called hashing, it is not possible to change any existing data without this being detected by others. And through cryptography, the database can be shared without real identities being revealed. So the blockchain technology removes the need for a middle-man.
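
The role hashing plays here can be made concrete in a few lines: each block stores the hash of its predecessor, so altering any earlier entry breaks every later link. A minimal sketch, leaving consensus and cryptographic identity out entirely:

```python
# Minimal hash-chain sketch of the tamper-detection described above: each
# block carries the hash of the previous one, so editing history breaks the
# chain. Consensus mechanisms and identities are deliberately omitted.
import hashlib
import json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def append(chain, data):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "data": data})

def chain_is_valid(chain):
    """Every block must reference the hash of the block before it."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger = []
append(ledger, "article published")
append(ledger, "review completed")
assert chain_is_valid(ledger)

ledger[0]["data"] = "article retracted"   # tamper with history...
assert not chain_is_valid(ledger)         # ...and the chain no longer verifies
```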

How is this relevant to scholarly communication?

Joris: It’s very relevant. We’ve explored the possibilities and initiatives in a report published by Digital Science. The blockchain could be applied on several levels, which is reflected in a number of initiatives announced recently. For example, a cryptocurrency for science could be developed. This ‘bitcoin for science’ could introduce a monetary reward scheme to researchers, such as for peer review. Another relevant area, specifically for publishers, is digital rights management. The potential for this was picked up by this blog at a very early stage. Blockchain also allows publishers to easily integrate micropayments, thereby creating a potentially interesting business model alongside open access and subscriptions.

Moreover, blockchain as a datastore with no central owner where information can be stored pseudonymously could support the creation of a shared and authoritative database of scientific events. Here traditional activities such as publications and citations could be stored, along with currently opaque and unrecognized activities, such as peer review. A data store incorporating all scientific events would make science more transparent and reproducible, and allow for more comprehensive and reliable metrics….

How do you see developments in the industry regarding blockchain?

Joris: In the last couple of months we’ve seen the launch of many interesting initiatives. For example, scienceroot.com, Pluto.network, and orvium.io. These are all ambitious projects incorporating many of the potential applications of blockchain in the industry, and to an extent aim to disrupt the current ecosystem. Recently, artifacts.ai was announced, an interesting initiative that aims to allow researchers to permanently document every stage of the research process. However, we believe that traditional players, and not least publishers, should also look at how services to researchers can be improved using blockchain technology. There are challenges (e.g. around reproducibility and peer review) but that does not necessarily mean the entire ecosystem needs to be overhauled. In fact, in academic publishing we have a good track record of incorporating new technologies and using them to improve our role in scholarly communication. In other words, we should fix the system, not break it!

What is the Peer Review Blockchain initiative, and why did you join?

Martijn: The problems of research reproducibility, recognition of reviewers, and the rising burden of the review process, as research volumes increase each year, have led to a challenging landscape for scholarly communications. There is an urgent need for change to tackle these problems, which is why we joined this initiative: to be able to take a step forward towards a fairer and more transparent ecosystem for peer review. The initiative aims to look at practical solutions that leverage the distributed registry and smart contract elements of blockchain technologies. Each of the parties can deposit peer review activity in the blockchain — depending on peer review type, either partially or fully encrypted — and subsequent activity is also deposited in the reviewer’s ORCID profile. These business transactions — depositing peer review activity against person x — will be verifiable and auditable, thereby increasing transparency and reducing the risk of manipulation. Through the shared processes we will set up with other publishers, and through recordkeeping, trust will increase.
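
Here is a rough sketch of that deposit flow, under stated assumptions: the shared ledger stores only a hash of the review keyed to the reviewer’s ORCID, so the activity is auditable while the report itself stays private. The field names and in-memory ledger are illustrative, not the initiative’s actual design.

```python
# Rough sketch of verifiable-but-private peer-review deposits: the ledger
# holds a hash of the review keyed to the reviewer's ORCID, so anyone who
# holds the original text can prove the record matches it. Field names and
# the in-memory ledger are illustrative assumptions.
import hashlib
from datetime import date

ledger = []   # stand-in for the shared blockchain

def deposit_review(orcid, manuscript_id, review_text):
    entry = {
        "reviewer_orcid": orcid,
        "manuscript": manuscript_id,
        "review_hash": hashlib.sha256(review_text.encode("utf-8")).hexdigest(),
        "date": date.today().isoformat(),
    }
    ledger.append(entry)
    return entry

def verify_review(entry, claimed_text):
    """Only the holder of the original review text can reproduce the hash."""
    return entry["review_hash"] == hashlib.sha256(claimed_text.encode("utf-8")).hexdigest()

record = deposit_review("0000-0002-1825-0097", "MS-1234", "Sound methods; accept.")
assert verify_review(record, "Sound methods; accept.")
assert not verify_review(record, "Reject.")
```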

A separate trend we see is the broadening scope of research evaluation, which has prompted researchers to seek (more) recognition for their peer review work, beyond citations and altmetrics. At a later stage new applications could be built on top of the peer review blockchain….(More)”.