Designing the Next Generation of Open Data Policy


Andrew Young and Stefaan Verhulst at the Open Data Charter Blog: “The international Open Data Charter has emerged from the global open data community as a galvanizing document to place open government data directly in the hands of citizens and organizations. To drive this process forward, and ensure that the outcomes are both systemic and transformational, new open data policy needs to be based on evidence of how and when open data works in practice. To support this work, the GovLab, in collaboration with Omidyar Network, has recently completed research which provides vital evidence of open data projects around the world, including an analysis of 19 in-depth, impact-focused case studies and a key findings paper. All of the research is now available in an eBook published by O’Reilly Media.

The research found that open data is making an impact in four core ways, including:…(More)”

Living in the World of Both/And


Essay by Adene Sacks & Heather McLeod Grant  in SSIR: “In 2011, New York Times data scientist Jake Porway wrote a blog post lamenting the fact that most data scientists spend their days creating apps to help users find restaurants, TV shows, or parking spots, rather than addressing complicated social issues like helping identify which teens are at risk of suicide or creating a poverty index of Africa using satellite data.

That post hit a nerve. Data scientists around the world began clamoring for opportunities to “do good with data.” Porway—at the center of this storm—began to convene these scientists and connect them to nonprofits via hackathon-style events called DataDives, designed to solve big social and environmental problems. There was so much interest, he eventually quit his day job at the Times and created the organization DataKind to steward this growing global network of data science do-gooders.

At the same time, in the same city, another movement was taking shape—#GivingTuesday, an annual global giving event fueled by social media. In just five years, #GivingTuesday has reshaped how nonprofits think about fundraising and how donors give. And yet, many don’t know that 92nd Street Y (92Y)—a 140-year-old Jewish community and cultural center in Manhattan, better known for its star-studded speaker series, summer camps, and water aerobics classes—launched it.

What do these two examples have in common? One started as a loose global network that engaged data scientists in solving problems, and then became an organization to help support the larger movement. The other started with a legacy organization, based at a single site, and catalyzed a global movement that has reshaped how we think about philanthropy. In both cases, the founding groups have incorporated the best of both organizations and networks.

Much has been written about the virtues of thinking and acting collectively to solve seemingly intractable challenges. Nonprofit leaders are being implored to put mission above brand, build networks not just programs, and prioritize collaboration over individual interests. And yet, these strategies are often in direct contradiction to the conventional wisdom of organization-building: differentiating your brand, developing unique expertise, and growing a loyal donor base.

A similar tension is emerging among network and movement leaders. These leaders spend their days steering the messy process required to connect, align, and channel the collective efforts of diverse stakeholders. It’s not always easy: Those searching to sustain movements often cite the lost momentum of the Occupy movement as a cautionary note. Increasingly, network leaders are looking at how to adapt the process, structure, and operational expertise more traditionally associated with organizations to their needs—but without co-opting or diminishing the energy and momentum of their self-organizing networks…

Welcome to the World of “Both/And”

Today’s social change leaders—be they from business, government, or nonprofits—must learn to straddle the leadership mindsets and practices of both networks and organizations, and know when to use which approach. Leaders like Porway, and Henry Timms and Asha Curran of 92Y, can help show us the way.

How do these leaders work with the “both/and” mindset?

First, they understand and leverage the strengths of both organizations and networks—and anticipate their limitations. As Timms describes it, leaders need to be “bilingual” and embrace what he has called “new power.” Networks can be powerful generators of new talent or innovation around complex multi-sector challenges. It’s useful to take a network approach when innovating new ideas, mobilizing and engaging others in the work, or wanting to expand reach and scale quickly. However, networks can dissipate easily without specific “handrails,” or some structure to guide and support their work. This is where they need some help from the organizational mindset and approach.

On the flip side, organizations are good at creating centralized structures to deliver products or services, manage risk, oversee quality control, and coordinate concrete functions like communications or fundraising. However, often that efficiency and effectiveness can calcify over time, becoming a barrier to new ideas and growth opportunities. When organizational boundaries are too rigid, it is difficult to engage the outside world in ideating or mobilizing on an issue. This is when organizations need an infusion of the “network mindset.”

 

…(More)

How to advance open data research: Towards an understanding of demand, users, and key data


Danny Lämmerhirt and Stefaan Verhulst at IODC blog: “…Lord Kelvin’s famous quote “If you can not measure it, you can not improve it” equally applies to open data. Without more evidence of how open data contributes to meeting users’ needs and addressing societal challenges, efforts and policies toward releasing and using more data may be misinformed and based upon untested assumptions.

When done well, assessments, metrics, and audits can guide both (local) data providers and users to understand, reflect upon, and change how open data is designed. What we measure and how we measure it are therefore decisive in advancing open data.

Back in 2014, the Web Foundation and the GovLab at NYU brought together open data assessment experts from Open Knowledge, the Organisation for Economic Co-operation and Development, the United Nations, Canada’s International Development Research Centre, and elsewhere to explore the development of common methods and frameworks for the study of open data. It resulted in a draft template or framework for measuring open data. Despite increased awareness of the need for more evidence-based open data approaches, open data assessment methods have advanced only slowly since 2014. At the same time, governments publish more of their data openly, and more civil society groups, civil servants, and entrepreneurs employ open data to manifold ends: the broader public may detect environmental issues and advocate for policy changes, neighbourhood projects employ data to enable marginalized communities to participate in urban planning, public institutions may enhance their information exchange, and entrepreneurs embed open data in new business models.

In 2015, the International Open Data Conference roadmap made the following recommendations on how to improve the way we assess and measure open data.

  1. Reviewing and refining the Common Assessment Methods for Open Data framework. This framework lays out four areas of inquiry: context of open data, the data published, use practices and users, as well as the impact of opening data.
  2. Developing a catalogue of assessment methods to monitor progress against the International Open Data Charter (based on the Common Assessment Methods for Open Data).
  3. Networking researchers to exchange common methods and metrics. This helps to build reproducible methodologies and increases the credibility and impact of research.
  4. Developing sectoral assessments.

In short, the IODC called for refining our assessment criteria and metrics by connecting researchers, and applying the assessments to specific areas. It is hard to tell how much progress has been made in answering these recommendations, but there is a sense among researchers and practitioners that the first two goals are yet to be fully addressed.

Instead, we have seen various disparate, yet well-meaning, efforts to enhance the understanding of the release and impact of open data. A working group was created to measure progress on the International Open Data Charter, which provides governments with principles for implementing open data policies. While this working group compiled a list of studies and their methodologies, it did not (yet) deepen the common framework of definitions and criteria to assess and measure the implementation of the Charter.

In addition, there is an increase in sector- and case-specific studies; while often more descriptive and context-specific in nature, they do help meet the need for examples that illustrate the value proposition of open data.

As such, there seems to be a disconnect between top-level frameworks and on-the-ground research, preventing the sharing of common methods and distilling replicable experiences about what works and what does not….(More)”

Data for Policy: Data Science and Big Data in the Public Sector


Innar Liiv at OXPOL: “How can big data and data science help policy-making? This question has recently gained increasing attention. Both the European Commission and the White House have endorsed the use of data for evidence-based policy making.

Still, a gap remains between theory and practice. In this blog post, I make a number of recommendations for systematic development paths.

RESEARCH TRENDS SHAPING DATA FOR POLICY

‘Data for policy’ as an academic field is still in its infancy. A typology of the field’s foci and research areas is summarised in the figure below.

 

[Figure: typology of ‘data for policy’ foci and research areas]

Besides the ‘data for policy’ community, there are two important research trends shaping the field: 1) computational social science; and 2) the emergence of politicised social bots.

Computational social science (CSS) is a new interdisciplinary research trend in social science, which tries to transform advances in big data and data science into research methodologies for understanding, explaining, and predicting underlying social phenomena.

Social science has a long tradition of using computational and agent-based modelling approaches (e.g. Schelling’s Model of Segregation), but the new challenge is to feed real-life, and sometimes even real-time, information into those systems to gain rapid insights into the validity of research hypotheses.
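
To make the agent-based tradition concrete, here is a minimal sketch of a Schelling-style segregation model in Python. The grid size, tolerance threshold, and relocation rule are illustrative assumptions for exposition, not a reconstruction of any particular study; feeding real or real-time data into such a model would replace the random initialization below.

```python
import random

random.seed(42)
SIZE, EMPTY_RATIO, THRESHOLD = 20, 0.2, 0.3  # assumed parameters

# 0 marks an empty cell; 1 and 2 are the two agent groups.
grid = [[0 if random.random() < EMPTY_RATIO else random.choice([1, 2])
         for _ in range(SIZE)] for _ in range(SIZE)]

def unhappy(r, c):
    """An agent is unhappy if too few of its occupied neighbours share its group."""
    group = grid[r][c]
    if group == 0:
        return False
    neighbours = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
                  for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    occupied = [n for n in neighbours if n != 0]
    return bool(occupied) and sum(n == group for n in occupied) / len(occupied) < THRESHOLD

# Relocate unhappy agents to random empty cells until the grid settles.
for _ in range(100):
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE) if unhappy(r, c)]
    if not movers:
        break
    for r, c in movers:
        empties = [(er, ec) for er in range(SIZE) for ec in range(SIZE)
                   if grid[er][ec] == 0]
        er, ec = random.choice(empties)
        grid[er][ec], grid[r][c] = grid[r][c], 0
```

Even with mild individual preferences (here, agents tolerate being a local minority of up to 70 percent), the grid typically settles into visibly segregated clusters, which is the counterintuitive insight Schelling’s model is known for.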

For example, one could use mobile phone call records to assess the acculturation processes of different communities. Such a project would involve translating different acculturation theories into computational models, researching the ethical and legal issues inherent in using mobile phone data, and developing a vision for generating policy recommendations and new research hypotheses from the analysis.

Politicised social bots are also beginning to make their mark. In 2011, DARPA solicited research proposals dealing with social media in strategic communication. The term ‘political bot’ was not used, but the expected results left no doubt about the goals…

The next wave of e-government innovation will be about analytics and predictive models.  Taking advantage of their potential for social impact will require a solid foundation of e-government infrastructure.

The most important questions going forward are as follows:

  • What are the relevant new data sources?
  • How can we use them?
  • What should we do with the information? Who cares? Which political decisions need faster information from novel sources? Do we need faster information? Does it come with unanticipated risks?

These questions barely scratch the surface, because the complex interplay between general advances in computational social science and hovering satellite topics like political bots will have an enormous impact on research and on using data for policy. But it’s an important start….(More)”

5 Crowdsourced News Platforms Shaping The Future of Journalism and Reporting


At Crowdsourcing Week: “We are exposed to a myriad of news and updates worldwide. As the crowd becomes more involved in providing information, adopting that ‘upload mindset’ coined by Will Merritt of Zooppa, access to all kinds of data is a few taps and clicks away….

Google News Lab – Better reporting and insightful storytelling


Last week, Google announced its own crowdsourced news platform dubbed News Lab as part of their efforts “to empower innovation at the intersection of technology and media.”

Scouting for real-time stories, updates, and breaking news is now much easier and more systematized for journalists worldwide. They can use Google’s tools for better reporting, data for insightful storytelling, and programs to focus on the future of media, tackling this initiative in three ways.

“There’s a revolution in data journalism happening in newsrooms today, as more data sets and more tools for analysis are allowing journalists to create insights that were never before possible,” Google said.

Grasswire – first-hand information in real-time


The design looks bleak and simple, but the site itself is rich with content—first-hand information crowdsourced from Twitter users in real time and verified. Austen Allred, co-founder of Grasswire, was inspired to develop the platform after a “minor slipup,” as the American Journalism Review (AJR) puts it: missing his train out of Shanghai actually saved his life.

“The bullet train Allred was supposed to be on collided with another train in the Wenzhou area of China’s Zhejiang province,” AJR wrote. “Of the 1,630 passengers, 40 died, and another 210 were injured.” The accident happened in 2011. Unfortunately, the Chinese government covered up parts of the incident, which frustrated Allred’s search for first-hand information.

After almost four years, Allred launched Grasswire, a website that collects real-time information about breaking news from users through a crowdsourcing model. “It’s since grown into a more complex interface, allowing users to curate selected news tweets by voting and verifying information with a fact-checking system,” AJR wrote, which made the verification of data open and systematized.
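
As a toy sketch of what such crowd verification might look like in code (the threshold and data structure below are our own assumptions, not Grasswire’s actual system):

```python
from collections import defaultdict

VERIFY_THRESHOLD = 5  # assumed number of confirmations required

votes = defaultdict(lambda: {"confirm": 0, "refute": 0})

def vote(tweet_id, confirms):
    """Record one user's judgement on a candidate news tweet."""
    votes[tweet_id]["confirm" if confirms else "refute"] += 1

def status(tweet_id):
    """A tweet is surfaced as verified once confirmations clearly outweigh refutations."""
    v = votes[tweet_id]
    verified = v["confirm"] >= VERIFY_THRESHOLD and v["confirm"] > 2 * v["refute"]
    return "verified" if verified else "unverified"

for _ in range(5):
    vote("tweet/123", confirms=True)
print(status("tweet/123"))  # -> verified
```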

Rappler – Project Agos: a technology for disaster risk reduction


The Philippines is a frequent target of typhoons, and the aftermath of typhoon Haiyan was exceedingly disastrous. But the crowds were steadfast in uploading and sharing information, and crowdsourcing became mainstream during the relief operations. Maria Ressa said they had spent years educating netizens to use the appropriate typhoon hashtags (#nameoftyphoonPH, e.g. #YolandaPH) so that data could easily be collected from social media channels.

Education and preparation can mitigate the risks and save lives if we utilize the right technology and act accordingly. In her blog, After Haiyan: Crisis management and beyond, Maria wrote, “We need to educate not just the first responders and local government officials, but more importantly, the people in the path of the storms.” …

China’s CCDI app – Crowdsourcing political reports to crack down corruption practices


In China, if you want to mitigate or, if possible, eradicate corrupt practices, then there’s an app for that. China launched its own anti-corruption app, called the Central Commission for Discipline Inspection Website App, allowing the public to upload text messages, photos, and videos of any corrupt practices by Chinese officials.

The platform was released by the government agency, the Central Commission for Discipline Inspection. Nervous that you’ll be tracked as a whistleblower? Interestingly, anyone can report anonymously. Since the app’s release, China Daily said, “the anti-corruption authorities received more than 1,000 public reports, and nearly 70 percent were communicated via snapshots, text messages or videos uploaded.” Kenya has its own version, too, called Ushahidi, which uses crowdmapping, and India has I Paid a Bribe.

Newzulu – share news, publish and get paid


While journalists can get fresh insights from Google News Lab, the crowd can get real-time verified news from Grasswire, and CCDI is open to the public, Newzulu doesn’t just invite the crowd to share news: contributors can also publish and get paid.

It’s “a community of over 150,000 professional and citizen journalists who share and break news to the world as it happens,” originally based in Sydney. Anyone can submit stories, photos, videos, and even stream live….(More)”

Revealing Algorithmic Rankers


Julia Stoyanovich and Ellen P. Goodman in the Freedom to Tinker Blog: “ProPublica’s story on “machine bias” in an algorithm used for sentencing defendants amplified calls to make algorithms more transparent and accountable. It has never been more clear that algorithms are political (Gillespie) and embody contested choices (Crawford), and that these choices are largely obscured from public scrutiny (Pasquale and Citron). We see it in controversies over Facebook’s newsfeed, or Google’s search results, or Twitter’s trending topics. Policymakers are considering how to operationalize “algorithmic ethics” and scholars are calling for accountable algorithms (Kroll, et al.).

One kind of algorithm that is at once especially obscure, powerful, and common is the ranking algorithm (Diakopoulos). Algorithms rank individuals to determine creditworthiness, desirability for college admissions and employment, and compatibility as dating partners. They encode ideas of what counts as the best schools, neighborhoods, and technologies. Despite their importance, we can actually know very little about why one person was ranked higher than another in a dating app, or why one school has a better rank than another. This is true even if we have access to the ranking algorithm, for example, if we have complete knowledge about the factors used by the ranker and their relative weights, as is the case for the US News ranking of colleges. In this blog post, we argue that syntactic transparency, wherein the rules of operation of an algorithm are more or less apparent, or even fully disclosed, still leaves stakeholders in the dark: those who are ranked, those who use the rankings, and the public whose world the rankings may shape.

Using algorithmic rankers as an example, we argue that syntactic transparency alone will not lead to true algorithmic accountability (Angwin). This is true even if the complete input data is publicly available. We advocate instead for interpretability, which rests on making explicit the interactions between the program and the data on which it acts. An interpretable algorithm allows stakeholders to understand the outcomes, not merely the process by which outcomes were produced….
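
A toy illustration of this argument, sketched in Python below: a score-based ranker whose factors and weights are fully disclosed, yet whose output is still hard for a ranked subject to interpret. The attribute names, weights, and values are our own assumptions for exposition, not the US News formula.

```python
# All factors and weights are public: full syntactic transparency.
WEIGHTS = {"reputation": 0.40, "selectivity": 0.35, "resources": 0.25}

schools = {
    "A": {"reputation": 0.90, "selectivity": 0.70, "resources": 0.60},
    "B": {"reputation": 0.88, "selectivity": 0.74, "resources": 0.59},
    "C": {"reputation": 0.70, "selectivity": 0.90, "resources": 0.80},
}

def score(attrs):
    """Weighted linear score over the disclosed factors."""
    return sum(WEIGHTS[k] * v for k, v in attrs.items())

ranking = sorted(schools, key=lambda s: score(schools[s]), reverse=True)
print([(s, round(score(schools[s]), 4)) for s in ranking])
# -> roughly [('C', 0.795), ('B', 0.7585), ('A', 0.755)]
```

B edges out A by 0.0035 points: nothing in the disclosed formula tells school A why it lost, whether that gap is meaningful, or how stable its position is under small changes in the underlying data.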

Opacity in algorithmic rankers can lead to four types of harms:

(1) Due process / fairness. The subjects of the ranking cannot have confidence that their ranking is meaningful or correct, or that they have been treated like similarly situated subjects. Syntactic transparency helps with this but it will not solve the problem entirely, especially when people cannot interpret how weighted factors have impacted the outcome (Source 2 above).

(2) Hidden normative commitments. A ranking formula implements some vision of the “good.” Unless the public knows what factors were chosen and why, and with what weights assigned to each, it cannot assess the compatibility of this vision with other norms. Even where the formula is disclosed, real public accountability requires information about whether the outcomes are stable, whether the attribute weights are meaningful, and whether the outcomes are ultimately validated against the chosen norms. Did the vendor evaluate the actual effect of the features that are postulated as important by the scoring/ranking model? Did the vendor take steps to compensate for mutually-reinforcing correlated inputs, and for possibly discriminatory inputs? Was the stability of the ranker interrogated on real or realistic inputs? This kind of transparency around validation is important both for learning algorithms, which operate according to rules that are constantly in flux and responsive to shifting data inputs, and for simpler score-based rankers that are likewise sensitive to the data.

(3) Interpretability. Especially where ranking algorithms are performing a public function (e.g., allocation of public resources or organ donations) or directly shaping the public sphere (e.g., ranking politicians), political legitimacy requires that the public be able to interpret algorithmic outcomes in a meaningful way. At the very least, they should know the degree to which the algorithm has produced robust results that improve upon a random ordering of the items (a ranking-specific confidence measure). In the absence of interpretability, there is a threat to public trust and to democratic participation, raising the dangers of an algocracy (Danaher) – rule by incontestable algorithms.

(4) Meta-methodological assessment. Following on from the interpretability concerns is a meta question about whether a ranking algorithm is the appropriate method for shaping decisions. There are simply some domains, and some instances of datasets, in which rank order is not appropriate. For example, if there are very many ties or near-ties induced by the scoring function, or if the ranking is too unstable, it may be better to present data through an alternative mechanism such as clustering. More fundamentally, we should question the use of an algorithmic process if its effects are not meaningful or if it cannot be explained. In order to understand whether the ranking methodology is valid, as a first order question, the algorithmic process needs to be interpretable….

The Ranking Facts show how the properties of the 10 highest-ranked items compare to the entire dataset (Relativity), making explicit cases where the ranges of values, and the median value, are different at the top-10 vs. overall (median is marked with red triangles for faculty size and average publication count). The label lists the attributes that have most impact on the ranking (Impact), presents the scoring formula (if known), and explains which attributes correlate with the computed score. Finally, the label graphically shows the distribution of scores (Stability), explaining that scores differ significantly up to top-10 but are nearly indistinguishable in later positions.
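
Below is a rough sketch of one such diagnostic in the spirit of the Stability widget: jitter the input attributes slightly and measure how often the top-k set survives. This is our own illustration of the idea under assumed toy data, not the authors’ Ranking Facts implementation.

```python
import random

random.seed(0)

def top_k(items, score_of, k=10):
    return set(sorted(items, key=score_of, reverse=True)[:k])

def topk_stability(items, attrs, score_fn, noise=0.01, trials=200, k=10):
    """Share of noisy re-rankings whose top-k set matches the baseline top-k."""
    baseline = top_k(items, lambda i: score_fn(attrs[i]), k)
    agree = 0
    for _ in range(trials):
        jittered = {i: {a: v + random.gauss(0, noise) for a, v in attrs[i].items()}
                    for i in items}
        if top_k(items, lambda i: score_fn(jittered[i]), k) == baseline:
            agree += 1
    return agree / trials  # 1.0 means the top k never changed under noise

# Toy data: 50 items scored by an assumed weighted formula.
items = [f"item{i}" for i in range(50)]
attrs = {i: {"x": random.random(), "y": random.random()} for i in items}
print(topk_stability(items, attrs, lambda a: 0.6 * a["x"] + 0.4 * a["y"]))
```

A result near 1.0 suggests the published order is robust; a low result is a signal, per point (4) above, that a strict rank order may be the wrong presentation for the data.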

Something like the Ranking Facts makes the process and outcome of algorithmic ranking interpretable for consumers, and reduces the likelihood of the opacity harms discussed above. Beyond Ranking Facts, it is important to develop interpretability tools that enable vendors to design fair, meaningful, and stable ranking processes, and that support external auditing. Promising technical directions include, e.g., quantifying the influence of various features on the outcome under different assumptions about availability of data and code, and investigating whether provenance techniques can be used to generate explanations….(More)”

Evidence-based policy and policy as ‘translation’: designing a model for policymaking


Jo Ingold and Mark Monaghan at the LSE Politics and Policy Blog: “It’s fair to say that research has never monopolised the policy process to the extent that policies are formulated solely, or even primarily, upon evidence. At the same time, research is never entirely absent from the process, nor is it always exploited to justify a pre-existing policy stance, as those who pronounce that we are now in an era of policy-based evidence would have us believe. Often the reality lies somewhere in the middle. A number of studies have looked at how evidence may or may not have impacted on the policy decision-making process. Learning from other contexts, or ‘policy transfer’, is one other way of harnessing particular kinds of evidence, focusing on the migration of policies from one jurisdiction to another, whether within or across countries. Studies have begun to move away from theories of direct transfer to consider the processes involved in the movement of ideas from one area to another. In effect, they consider the ‘translation’ of evidence and policy.

Our research brings together the evidence-based policymaking and ‘policy as translation’ literatures to try to shed light on the process by which evidence is used in policymaking. Although these literatures have developed separately (and to a large extent remain so) we see both as, at root, being concerned with the same issues, in particular how ideas, evidence and knowledge are integrated into the policymaking process. With EBPM there is a stated desire to formulate policies based on the best available evidence, while ‘policy as translation’ focuses on the ‘travel of ideas’ and views the policy process as fluid, dynamic and continually re-constituting, rather than a linear or rational ‘transfer’ process….

The Evidence Translation Model is intended to be recursive and includes five key dimensions which influence how evidence, ideas and knowledge are used in policy:

  • The substantive nature of the policy problem in the context of the Zeitgeist
  • Agenda-setting – where evidence is sought (fishing/farming) and what evidence is used
  • The filtration processes which shape and mould how evidence is used (flak/strain)
  • The policy apparatus for policy design and implementation
  • The role of ‘evidence translators’

[Figure: Evidence Translation Model. Source: Policy & Politics 2016 (44:2)]

Our research draws attention to what is perceived to be evidence and at what stage of the policymaking process it is used….(More; See also authors’ article in Policy & Politics)”.

The Behavioral Economics Guide 2016


Guide edited by Alain Samson: “Since the publication of last year’s edition of the Behavioral Economics (BE) Guide, behavioral science has continued to exert its influence in various domains of scholarship and practical applications. The Guide’s host, behavioraleconomics.com, has grown to become a popular online hub for behavioral science ideas and resources. Our domain’s new blog publishes articles from academics and practitioners alike, reflecting the wide range of areas in which BE ideas are generated and used. …

Past editions of the BE Guide focused on BE theory (2014) and behavioral science practice (2015). The aim of this year’s issue is to provide different perspectives on the field and novel applications. This editorial offers a selection of recent (often critical) thinking around behavioral economics research and applications. It is followed by Q&As with Richard Thaler and Varun Gauri. The subsequent section provides a range of absorbing contributions from authors who work in applied behavioral science. The final section includes a further expanded encyclopedia of BE (and related) concepts, a new listing of behavioral science events, more graduate programs, and a larger selection of journals, reflecting the growth of the field and our continued efforts to compile relevant information….(More)”

Can You Really Spot Cancer Through a Search Engine?


Michael Reilly at MIT Technology Review: “In the world of cancer treatment, early diagnosis can mean the difference between being cured and being handed a death sentence. At the very least, catching a tumor early increases a patient’s chances of living longer.

Researchers at Microsoft think they may know of a tool that could help detect cancers before you even think to go to a doctor: your search engine.

In a study published Tuesday in the Journal of Oncology Practice, the Microsoft team showed that it was able to mine the anonymized search queries of 6.4 million Bing users to find searches that indicated someone had been diagnosed with pancreatic cancer (such as “why did I get cancer in pancreas,” and “I was told I have pancreatic cancer what to expect”). Then, looking at people’s search patterns before their diagnosis, they identified patterns of search that indicated they had been experiencing symptoms before they ever sought medical treatment.

Pancreatic cancer is a particularly deadly form of the disease. It’s the fourth-leading cause of cancer death in the U.S., and three-quarters of people diagnosed with it die within a year. But catching it early still improves the odds of living longer.

By looking for searches for symptoms—which include yellowing, itchy skin, and abdominal pain—and checking the user’s search history for signs of other risk factors like alcoholism and obesity, the team was often able to identify symptom searches up to five months before users were diagnosed.
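
The core logic, as described, can be sketched in a few lines of Python: flag users whose queries read like a first-person diagnosis, then look back through their earlier queries for symptom terms. The patterns, the look-back handling, and the toy log below are our illustrative assumptions, not Microsoft’s actual features or data.

```python
import re
from datetime import datetime, timedelta

# Assumed patterns modeled on the example queries quoted in the article.
DIAGNOSIS = re.compile(r"i (was told i )?have pancreatic cancer"
                       r"|why did i get cancer in (my )?pancreas", re.I)
SYMPTOMS = re.compile(r"yellow(ing)? skin|itch(y|ing) skin|abdominal pain", re.I)

# (timestamp, query) pairs for one anonymized user -- toy data.
log = [
    (datetime(2016, 1, 4), "itchy skin and abdominal pain causes"),
    (datetime(2016, 2, 19), "yellowing skin why"),
    (datetime(2016, 5, 30), "i was told i have pancreatic cancer what to expect"),
]

diagnosis_times = [t for t, q in log if DIAGNOSIS.search(q)]
if diagnosis_times:
    cutoff = diagnosis_times[0] - timedelta(days=150)  # ~5-month look-back
    early = [(t, q) for t, q in log
             if SYMPTOMS.search(q) and cutoff <= t < diagnosis_times[0]]
    print(f"{len(early)} symptom queries preceded the diagnosis query")
```

The study also weighed risk-factor signals such as those mentioned above; this sketch only shows the shape of the query-log analysis.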

In their paper, the team acknowledged the limitations of the work, saying that it is not meant to provide people with a diagnosis. Instead they suggested that it might one day be turned into a tool that warns users whose searches indicate they may have symptoms of cancer.

“The goal is not to perform the diagnosis,” said Ryen White, one of the researchers, in a post on Microsoft’s blog. “The goal is to help those at highest risk to engage with medical professionals who can actually make the true diagnosis.”…(More)”

Twiplomacy Study 2016


Executive Summary: “Social media has become diplomacy’s significant other. It has gone from being an afterthought to being the very first thought of world leaders and governments across the globe, as audiences flock to their newsfeeds for the latest news. This recent worldwide embrace of online channels has brought with it a wave of openness and transparency that has never been experienced before. Social media provides a platform for unconditional communication, and has become a communicator’s most powerful tool. Twitter, in particular, has even become a diplomatic ‘barometer’, a tool used to analyze and forecast international relations.

There is a vast array of social networks for government communicators to choose from. While some governments and foreign ministries still ponder the pros and cons of any social media engagement, others have gone beyond Twitter, Facebook and Instagram to reach their target audiences, even embracing emerging platforms such as Snapchat, WhatsApp and Telegram where communications are under the radar and almost impossible to track.

Burson-Marsteller’s 2016 Twiplomacy study has been expanded to include other social media platforms such as Facebook, Instagram and YouTube, as well as more niche digital diplomacy platforms such as Snapchat, LinkedIn, Google+, Periscope and Vine.

There is a growing digital divide between governments that are active on social media with dedicated teams and those that see digital engagement as an afterthought and so devote few resources to it. There is still a small number of government leaders who refuse to embrace the new digital world and, for these few, their community managers struggle to bring their organizations into the digital century.

Over the past year, the most popular world leaders on social media have continued to increase their audiences, while new leaders have emerged in the Twittersphere. Argentina’s Mauricio Macri, Canada’s Justin Trudeau and U.S. President Barack Obama have all made a significant impact on Twitter and Facebook over the past year.

Obama’s social media communication has become even more personal through his @POTUS Twitter account and Facebook page, and the first “president of the social media age” will leave the White House in January 2017 with an incredible 137 million fans, followers and subscribers. Beyond merely Twitter and Facebook, world leaders such as the Argentinian President have also become active on new channels like Snapchat to reach a younger audience and potential future voters. Similarly, a number of governments, mainly in Latin America, have started to use Periscope, a cost-effective medium to live-stream their press conferences.

We have witnessed occasional public interactions between leaders, namely the friendly fighting talk between the Obamas, the Queen of England and Canada’s Justin Trudeau. Foreign ministries continue to expand their diplomatic and digital networks by following each other and creating coalitions on specific topics, in particular the fight against ISIS….

A number of world leaders, including the President of Colombia and Australia’s Julie Bishop, also use emojis to brighten up their tweets, creating what can be described as a new diplomatic sign language. The Foreign Ministry in Finland has even produced its own set of 49 emoticons depicting summer and winter in the Nordic country.

We asked a number of digital leaders at some of the best-connected foreign ministries and governments to share their thoughts on their preferred social media channels, and examples of their best campaigns, on our blog. You will learn:…

Here is our list of the #Twiplomacy Top Twenty Twitterati in 2016….(More)”