When Technology Gets Ahead of Society


Tarun Khanna at Harvard Business Review: “Drones, originally developed for military purposes, weren’t approved for commercial use in the United States until 2013. When that happened, it was immediately clear that they could be hugely useful to a whole host of industries—and almost as quickly, it became clear that regulation would be a problem. The new technology raised multiple safety and security issues, there was no consensus on who should write rules to mitigate those concerns, and the knowledge needed to develop the rules didn’t yet exist in many cases. In addition, the little flying robots made a lot of people nervous.

Such regulatory, logistical, and social barriers to adopting novel products and services are very common. In fact, technology routinely outstrips society’s ability to deal with it. That’s partly because tech entrepreneurs are often insouciant about the legal and social issues their innovations birth. Although electric cars are subsidized by the federal government, Tesla has run afoul of state and local regulations because it bypasses conventional dealers to sell directly to consumers. Facebook is only now facing up to major regulatory concerns about its use of data, despite being massively successful with users and advertisers.

It’s clear that even as innovations bring unprecedented comfort and convenience, they also threaten old ways of regulating industries, running a business, and making a living. This has always been true. Thus early cars weren’t allowed to go faster than horses, and some 19th-century textile workers used sledgehammers to attack the industrial machinery they feared would displace them. New technology can even upend social norms: Consider how dating apps have transformed the way people meet.

Entrepreneurs, of course, don’t really care that the problems they’re running into are part of a historical pattern. They want to know how they can manage—and shorten—the period between the advent of a technology and the emergence of the rules and new behaviors that allow society to embrace its possibilities.

Interestingly, the same institutional murkiness that pervades nascent industries such as drones and driverless cars is something I’ve also seen in developing countries. And strange though this may sound, I believe that tech entrepreneurs can learn a lot from businesspeople who have succeeded in the world’s emerging markets.

Entrepreneurs in Brazil or Nigeria know that it’s pointless to wait for the government to provide the institutional and market infrastructure their businesses need, because that will simply take too long. They themselves must build support structures to compensate for what Krishna Palepu and I have referred to in earlier writings as “institutional voids.” They must create the conditions that will allow them to create successful products or services.

Tech-forward entrepreneurs in developed economies may want to believe that it’s not their job to guide policy makers and the public—but the truth is that nobody else can play that role. They may favor hardball tactics, getting ahead by evading rules, co-opting regulators, or threatening to move overseas. But in the long term, they’d be wiser to use soft power, working with a range of partners to co-create the social and institutional fabric that will support their growth—as entrepreneurs in emerging markets have done.…(More)”.

Developing an impact framework for cultural change in government


Jesper Christiansen at Nesta: “Innovation teams and labs around the world are increasingly being tasked with building capacity and contributing to cultural change in government. There’s also an increasing recognition that we need to go beyond projects or single structures and make innovation become a part of the way governments operate more broadly.

However, there is a significant gap in our understanding of what “cultural change” or “better capacity” actually means.

At the same time, most innovation labs and teams are still being held to account in ways that don’t productively support this work. There is a lack of useful ways to measure outcomes, as opposed to outputs (for example, being asked to account for the number of workshops, rather than the increased capacity or impact that these workshops led to).

Consequently, we need a more developed awareness and understanding of what the signs of success look like, and what the intermediary outcomes (and measures) are in order to create a shift in accountability and better support ongoing capacity building….

One of the goals of States of Change, the collective we initiated last year to build this capability and culture, is to proactively address the common challenges that innovation practitioners face again and again. The field of public innovation is still emerging and evolving, and so our aim is to inspire action through practice-oriented, collaborative R&D activities and to develop the field based on practice rather than theory….(More)”.

Who wants to know?: The Political Economy of Statistical Capacity in Latin America


IADB paper by Dargent, Eduardo; Lotta, Gabriela; Mejía-Guerra, José Antonio; Moncada, Gilberto: “Why is there such heterogeneity in the level of technical and institutional capacity in national statistical offices (NSOs)? Although there is broad consensus about the importance of statistical information as an essential input for decision making in the public and private sectors, this does not generally translate into a recognition of the importance of the institutions responsible for the production of data. In the context of the role of NSOs in government and society, this study seeks to explain the variation in regional statistical capacity by comparing historical processes and political economy factors in 10 Latin American countries. To do so, it proposes a new theoretical and methodological framework and offers recommendations to strengthen the institutionality of NSOs….(More)”.

Wikipedia vandalism could thwart hoax-busting on Google, YouTube and Facebook


Daniel Funke at Poynter: “For a brief moment, the California Republican Party supported Nazism. At least, that’s what Google said.

That’s because someone vandalized the Wikipedia page for the party on May 31 to list “Nazism” alongside ideologies like “Conservatism,” “Market liberalism” and “Fiscal conservatism.” The mistake was removed from search results, with Google clarifying to Vice News that the search engine had failed to catch the vandalism in the Wikipedia entry….

Google has long drawn upon the online encyclopedia for appending basic information to search results. According to the edit log for the California GOP page, someone added “Nazism” to the party’s ideology section around 7:40 UTC on May 31. The edit was removed within a minute, but it appears Google’s algorithm scraped the page just in time for the fake.

“Sometimes people vandalize public information sources, like Wikipedia, which can impact the information that appears in search,” a Google spokesperson told Poynter in an email. “We have systems in place that catch vandalism before it impacts search results, but occasionally errors get through, and that’s what happened here.”…

According to Google, more than 99.9 percent of Wikipedia edits that show up in Knowledge Panels, which display basic information about searchable keywords at the top of results, aren’t vandalism. The user who authored the original edit to the California GOP’s page did not use a user profile, making them hard to track down.

That’s a common tactic among people who vandalize Wikipedia pages, a practice the nonprofit has documented extensively. But given the volume of edits that are made on Wikipedia — about 10 per second, with 600 new pages per day — and the fact that Facebook and YouTube are now pulling from them to provide more context to posts, the potential for and effect of abuse is high….(More)”.

Preprints: The What, The Why, The How.


Center for Open Science: “The use of preprint servers by scholarly communities is definitely on the rise. Many developments in the past year indicate that preprints will be a huge part of the research landscape. Developments with DOIs, changes in funder expectations, and the launch of many new services indicate that preprints will become much more pervasive and reach beyond the communities where they started.

From funding agencies that want to realize impact from their efforts sooner to researchers’ desire to disseminate their research more quickly, the growth of these servers, and of the number of works being shared, has been substantial. At COS, we already host twenty different organizations’ services via the OSF Preprints platform.

So what’s a preprint and what is it good for? A preprint is a manuscript submitted to a dedicated repository (like OSF Preprints, PeerJ, bioRxiv or arXiv) prior to peer review and formal publication. Some of those repositories may also accept other types of research outputs, like working papers and posters or conference proceedings. Getting a preprint out there has a variety of benefits for authors and other stakeholders in the research:

  • They increase the visibility of research, and sooner. While traditional papers can languish in the peer review process for months, even years, a preprint is live the minute it is submitted and moderated (if the service moderates). This means your work gets indexed by Google Scholar and Altmetric, and discovered by more relevant readers than ever before.
  • You can get feedback on your work and make improvements prior to journal submission. Many authors have publicly commented about the recommendations for improvements they’ve received on their preprint that strengthened their work and even led to finding new collaborators.
  • Papers with an accompanying preprint get cited 30% more often than papers without. This research from PeerJ sums it up, but that’s a big benefit for scholars looking to get more visibility and impact from their efforts.
  • Preprints get a permanent DOI, which makes them part of the freely accessible scientific record forever. This means others can rely on that permanence when citing your work in their research. It also means that your idea, developed by you, has a “stake in the ground” where potential scooping and intellectual theft are concerned.

So, preprints can really help lubricate scientific progress. But there are some things to keep in mind before you post. Usually, you can’t post a preprint of an article that’s already been submitted to a journal for peer review. Policies among journals vary widely, so it’s important to check with the journal you’re interested in sending your paper to BEFORE you submit a preprint that might later be published. A good resource for doing this is JISC’s SHERPA/RoMEO database. It’s also a good idea to understand the licensing choices available. At OSF Preprints, we recommend the CC-BY license suite, but you can check choosealicense.com or https://osf.io/6uupa/ for good overviews on how best to license your submissions….(More)”.

Research Shows Political Acumen, Not Just Analytical Skills, is Key to Evidence-Informed Policymaking


Press Release: “Results for Development (R4D) has released a new study unpacking how evidence translators play a key and somewhat surprising role in ensuring policymakers have the evidence they need to make informed decisions. Translators — who can be evidence producers, policymakers, or intermediaries such as journalists, advocates and expert advisors — identify, filter, interpret, adapt, contextualize and communicate data and evidence for the purposes of policymaking.

The study, Translators’ Role in Evidence-Informed Policymaking, provides a better understanding of who translators are and how different factors influence translators’ ability to promote the use of evidence in policymaking. This research shows translation is an essential function and that, absent individuals or organizations taking up the translator role, evidence translation and evidence-informed policymaking often do not take place.

“We began this research assuming that translators’ technical skills and analytical prowess would prove to be among the most important factors in predicting when and how evidence made its way into public sector decision making,” Nathaniel Heller, executive vice president for integrated strategies at Results for Development, said. “Surprisingly, that turned out not to be the case, and other ‘soft’ skills play a far larger role in translators’ efficacy than we had imagined.”

Key findings include:

  • Translator credibility and reputation are crucial to the ability to gain access to policymakers and to promote the uptake of evidence.
  • Political savvy and stakeholder engagement are among the most critical skills for effective translators.
  • Conversely, analytical skills and the ability to adapt, transform and communicate evidence were identified as being less important stand-alone translator skills.
  • Evidence translation is most effective when initiated by those in power or when translators place those in power at the center of their efforts.

The study includes a definitional and theoretical framework as well as a set of research questions about key enabling and constraining factors that might affect evidence translators’ influence. It also focuses on two cases in Ghana and Argentina to validate and debunk some of the intellectual frameworks around policy translators that R4D and others in the field have already developed. The first case focuses on Ghana’s blue-ribbon commission formed by the country’s president in 2015, which was tasked with reviewing Ghana’s national health insurance scheme. The second case looks at Buenos Aires’ 2016 government-led review of the city’s right to information regime….(More)”.

Ontario is trying a wild experiment: Opening access to its residents’ health data


Dave Gershgorn at Quartz: “The world’s most powerful technology companies have a vision for the future of healthcare. You’ll still go to your doctor’s office, sit in a waiting room, and explain your problem to someone in a white coat. But instead of relying solely on their own experience and knowledge, your doctor will consult an algorithm that’s been trained on the symptoms, diagnoses, and outcomes of millions of other patients. Instead of a radiologist reading your x-ray, a computer will be able to detect minute differences and instantly identify a tumor or lesion. Or at least that’s the goal.

AI systems like these, currently under development by companies including Google and IBM, can’t read textbooks and journals, attend lectures, and do rounds—they need millions of real life examples to understand all the different variations between one patient and another. In general, AI is only as good as the data it’s trained on, but medical data is exceedingly private—most developed countries have strict health data protection laws, such as HIPAA in the United States….

These approaches, which favor companies with considerable resources, are pretty much the only way to get large troves of health data in the US because the American health system is so disparate. Healthcare providers keep personal files on each of their patients, and can only transmit them to other accredited healthcare workers at the patient’s request. There’s no single place where all health data exists. It’s more secure, but less efficient for analysis and research.

Ontario, Canada, might have a solution, thanks to its single-payer healthcare system. All of Ontario’s health data exists in a few enormous caches under government control. (After all, the government needs to keep track of all the bills it’s paying.) Similar structures exist elsewhere in Canada, such as Quebec, but Toronto, which has become a major hub for AI research, wants to lead the charge in providing this data to businesses.

Until now, the only people allowed to study this data were government organizations or researchers who partnered with the government to study disease. But Ontario has now entrusted the MaRS Discovery District—a cross between a tech incubator and WeWork—to build a platform for approved companies and researchers to access this data, dubbed Project Spark. The project, initiated by MaRS and Canada’s University Health Network, began exploring how to share this data after both organizations expressed interest to the government about giving broader health data access to researchers and companies looking to build healthcare-related tools.

Project Spark’s goal is to create an API, or a way for developers to request information from the government’s data cache. This could be used to create an app for doctors to access the full medical history of a new patient. Ontarians could access their health records at any time through similar software, and catalog health issues as they occur. Or researchers, like the ones trying to build AI to assist doctors, could request a different level of access that provides anonymized data on Ontarians who meet certain criteria. If you wanted to study every Ontarian who had Alzheimer’s disease over the last 40 years, that data would only be authorization and a few lines of code away.
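No Project Spark code or schema is public, so purely as an illustration of the kind of criteria-based, anonymized query the article describes, here is a minimal Python sketch; every name, field, and function in it is hypothetical:

```python
from dataclasses import dataclass

# Hypothetical, simplified record shape -- the real data model is not public.
@dataclass
class HealthRecord:
    patient_id: str   # identifying field, stripped before release
    diagnosis: str
    year: int

def anonymized_cohort(records, diagnosis, since_year):
    """Return de-identified records matching the criteria (patient IDs removed)."""
    return [
        {"diagnosis": r.diagnosis, "year": r.year}
        for r in records
        if r.diagnosis == diagnosis and r.year >= since_year
    ]

records = [
    HealthRecord("p1", "alzheimers", 1985),
    HealthRecord("p2", "diabetes", 2001),
    HealthRecord("p3", "alzheimers", 2012),
]

# e.g. every record of Alzheimer's disease over roughly the last 40 years
cohort = anonymized_cohort(records, "alzheimers", 1978)
print(len(cohort))  # prints 2
```

In a real deployment the filtering and de-identification would of course happen server-side, behind authorization, so that researchers only ever receive the anonymized result set.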

There are currently 100 companies lined up to get access to data, comprising health records from Ontario’s 14 million residents. (MaRS won’t say who the companies are). …(More)”

AI Nationalism


Blog by Ian Hogarth: “The central prediction I want to make and defend in this post is that continued rapid progress in machine learning will drive the emergence of a new kind of geopolitics; I have been calling it AI Nationalism. Machine learning is an omni-use technology that will come to touch all sectors and parts of society.

The transformation of both the economy and the military by machine learning will create instability at the national and international level forcing governments to act. AI policy will become the single most important area of government policy. An accelerated arms race will emerge between key countries and we will see increased protectionist state action to support national champions, block takeovers by foreign firms and attract talent. I use the example of Google, DeepMind and the UK as a specific example of this issue.

This arms race will potentially speed up the pace of AI development and shorten the timescale for getting to AGI. Although there will be many common aspects to this techno-nationalist agenda, there will also be important state-specific policies. There is a difference between predicting that something will happen and believing this is a good thing. Nationalism is a dangerous path, particularly when the international order and international norms will be in flux as a result, and in the concluding section I discuss how a period of AI Nationalism might transition to one of global cooperation where AI is treated as a global public good….(More)”.

Big Data and AI – A transformational shift for government: So, what next for research?


Irina Pencheva, Marc Esteve and Slava Jankin Mikhaylov in Public Policy and Administration: “Big Data and artificial intelligence will have a profound transformational impact on governments around the world. Thus, it is important for scholars to provide a useful analysis on the topic to public managers and policymakers. This study offers an in-depth review of the Policy and Administration literature on the role of Big Data and advanced analytics in the public sector. It provides an overview of the key themes in the research field, namely the application and benefits of Big Data throughout the policy process, and challenges to its adoption and the resulting implications for the public sector. It is argued that research on the subject is still nascent and more should be done to ensure that the theory adds real value to practitioners. A critical assessment of the strengths and limitations of the existing literature is developed, and a future research agenda to address these gaps and enrich our understanding of the topic is proposed…(More)”.

Our Infant Information Revolution


Joseph Nye at Project Syndicate: “…When people are overwhelmed by the volume of information confronting them, it is hard to know what to focus on. Attention, not information, becomes the scarce resource. The soft power of attraction becomes an even more vital power resource than in the past, but so does the hard, sharp power of information warfare. And as reputation becomes more vital, political struggles over the creation and destruction of credibility multiply. Information that appears to be propaganda may not only be scorned, but may also prove counterproductive if it undermines a country’s reputation for credibility.

During the Iraq War, for example, the treatment of prisoners at Abu Ghraib and Guantanamo Bay in a manner inconsistent with America’s declared values led to perceptions of hypocrisy that could not be reversed by broadcasting images of Muslims living well in America. Similarly, President Donald Trump’s tweets that prove to be demonstrably false undercut American credibility and reduce its soft power.

The effectiveness of public diplomacy is judged by the number of minds changed (as measured by interviews or polls), not dollars spent. It is interesting to note that polls and the Portland Soft Power 30 index show a decline in American soft power since the beginning of the Trump administration. Tweets can help to set the global agenda, but they do not produce soft power if they are not credible.

Now the rapidly advancing technology of artificial intelligence or machine learning is accelerating all of these processes. Robotic messages are often difficult to detect. But it remains to be seen whether credibility and a compelling narrative can be fully automated….(More)”.