2018 Global Go To Think Tank Index Report
Report by James G. McGann: “The Think Tanks and Civil Societies Program (TTCSP) of the Lauder Institute at the University of Pennsylvania conducts research on the role policy institutes play in governments and civil societies around the world. Often referred to as the “think tanks’ think tank,” TTCSP examines the evolving role and character of public policy research organizations. Over the last 27 years, the TTCSP has developed and led a series of global initiatives that have helped bridge the gap between knowledge and policy in critical policy areas such as international peace and security, globalization and governance, international economics, environmental issues, information and society, poverty alleviation, and healthcare and global health. These international collaborative efforts are designed to establish regional and international networks of policy institutes and communities that improve…
The TTCSP works with leading scholars and practitioners from think tanks and universities in a variety of collaborative efforts and programs, and produces the annual Global Go To…”.
Evaluation and accurate diagnoses of pediatric diseases using artificial intelligence
Our model demonstrates high diagnostic accuracy across multiple organ systems and is comparable to experienced pediatricians in diagnosing common childhood diseases. Our study provides a proof of concept for implementing an AI-based system as a means to aid physicians in tackling large amounts of data, augmenting diagnostic evaluations, and providing clinical decision support in cases of diagnostic uncertainty or complexity. Although this impact may be most evident in areas where healthcare providers are in relative shortage, the benefits of such an AI system are likely to be universal….(More)”.
How Tech Utopia Fostered Tyranny
Jon Askonas at The New Atlantis: “The rumors spread like wildfire: Muslims were secretly lacing a Sri Lankan village’s food with sterilization drugs. Soon, a video circulated that appeared to show a Muslim shopkeeper admitting to drugging his customers — he had misunderstood the question that was angrily put to him. Then all hell broke loose. Over a several-day span, dozens of mosques and Muslim-owned shops and homes were burned down across multiple towns. In one home, a young journalist was…
Mob violence is an old phenomenon, but the tools encouraging it, in this case, were not. As the New York Times reported in April, the rumors were spread via Facebook, whose newsfeed algorithm prioritized high-engagement content, especially videos. “Designed to maximize user time on site,” as the Times article describes, the newsfeed algorithm “promotes whatever wins the most attention. Posts that tap into negative, primal emotions like anger or fear, studies have found, produce the highest engagement, and so proliferate.” On Facebook in Sri Lanka, posts with incendiary rumors had among the highest engagement rates, and so were among the most highly promoted content on the platform. Similar cases of mob violence have taken place in India, Myanmar, Mexico, and elsewhere, with misinformation spread mainly through Facebook and the messaging tool WhatsApp.
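The ranking dynamic the Times describes (score posts purely on predicted engagement, so the most inflammatory post rises to the top of every feed) can be sketched in a few lines of toy Python. The posts, reaction counts, and share weighting below are invented for illustration; this is not Facebook's actual ranking logic.

```python
# Toy illustration: a feed ranker that optimizes only for engagement.
# Content that taps anger or fear tends to draw the most reactions and
# shares, so a score like this systematically promotes it.
posts = [
    {"text": "Local bake sale this weekend", "reactions": 40, "shares": 2},
    {"text": "Incendiary rumor about a minority group", "reactions": 900, "shares": 350},
    {"text": "City council meeting notes", "reactions": 15, "shares": 1},
]

def engagement_score(post):
    # Shares are weighted more heavily here because each share pushes
    # the post into new feeds, compounding its reach.
    return post["reactions"] + 5 * post["shares"]

feed = sorted(posts, key=engagement_score, reverse=True)
print(feed[0]["text"])  # the rumor ranks first
```

The point of the sketch is that no one has to intend the outcome: the objective function alone determines what "wins the most attention."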
This is in spite of Facebook’s decision in January 2018 to tweak its algorithm, apparently to prevent the kind of manipulation we saw in the 2016 U.S. election, when posts and election ads originating from Russia reportedly showed up in newsfeeds of up to 126 million American Facebook users. The company explained that the changes to its algorithm will mean that newsfeeds will be “showing more posts from friends and family and updates that spark conversation,” and “less public content, including videos and other posts from publishers or businesses.” But these changes, which Facebook had tested out in countries like Sri Lanka in the previous year, may actually have exacerbated the problem — which is that incendiary content, when posted by friends and family, is guaranteed to “spark conversation” and therefore to be prioritized in newsfeeds. This is because “misinformation is almost always more interesting than the truth,” as Mathew Ingram provocatively put it in the Columbia Journalism Review.
How did we get here, from Facebook’s mission to “give people the power to build community and bring the world closer together”? Riot-inducing “fake news” and election meddling are obviously far from what its founders intended for the platform. Likewise, Google’s founders surely did not build their search engine with the intention of its being censored in China to suppress free speech, and yet, after years of refusing this demand from Chinese leadership, Google has recently relented rather than pull their search engine from China entirely. And YouTube’s creators surely did not intend their feature that promotes “trending” content to help clickbait conspiracy-theory videos go viral.
These outcomes — not merely unanticipated by the companies’ founders but outright opposed to their intentions — are not limited to social media. So far, Big Tech companies have presented issues of incitement, algorithmic radicalization, and “fake news” as merely bumps on the road of progress, glitches and bugs to be patched over. In fact, the problem goes deeper, to fundamental questions of human nature. Tools based on the premise that access to information will only enlighten us and social connectivity will only make us more humane have instead fanned conspiracy theories, information bubbles, and social fracture. A tech movement spurred by visions of libertarian empowerment and progressive uplift has instead fanned a global resurgence of populism and authoritarianism.
Despite the storm of criticism, Silicon Valley has still failed to recognize in these abuses a sharp rebuke of its sunny view of human nature. It remains naïvely blind to how its own aspirations for social engineering are on a spectrum with the tools’ “unintended” uses by authoritarian regimes and nefarious actors…”.
How to keep good research from dying a bad death: Strategies for co-creating research with impact
Blog post by Bridget Konadu Gyamfi and Bethany Park: “Researchers are often invested in disseminating the results of their research to the practitioners and policymakers who helped enable it—but disseminating a paper, developing a brief, or even holding an event may not truly empower decision-makers to make changes based on the research. …
Disseminate results in stages and determine next steps
Mapping evidence to real-world decisions and processes in order to determine the right course of action can be complex. Together with our partners, we gather the troops—researchers, implementers, and IPA’s research and policy team—and discuss the implications of the research for policy and practice.
This staged dissemination is critically important: having private discussions first helps partners digest the results and think through their reactions in a lower-stakes setting. We help the partners think about not only the results, but how their stakeholders will respond to the results, and how we can support their ongoing learning, whether results are “good” or not as hoped. Later, we hold larger dissemination events to inform the public. But we try to work closely with researchers and implementers to think through next steps right after results are available—before the window of opportunity passes.
Identify & prioritize policy opportunities
Many of our partners have already written smart advice about how to identify policy opportunities (windows, openings… etc.), so there’s no need for us to restate all that great thinking (go read it!). However, we get asked frequently how we prioritize policy opportunities, and we do have a clear internal process for making that decision. Here are our criteria:

- A body of evidence to build on: One single study doesn’t often present the best policy opportunities. This is a generalization, of course, and there are exceptions, but typically our policy teams pay the most attention to bodies of evidence that are coming to a consensus. These are the opportunities for which we feel most able to recommend next steps related to policy and practice—there is a clearer message to communicate and research conclusions we can state with greater confidence.
- Relationships to open doors: Our long-term in-country presence and deep involvement with partners through research projects means that we have many relationships and doors open to us. Yet some of these relationships are stronger than others, and some partners are more influential in the processes we want to impact. We use stakeholder mapping tools to clarify who is invested and who has influence. We also track our stakeholder outreach to make sure our relationships stay strong and mutually beneficial.
- A concrete decision or process that we can influence: This is the typical understanding of a “policy opening,” and it’s an important one. What are the partner’s priorities, felt needs, and open questions? Where do those create opportunities for our influence? If the evidence would indicate one course of action, but that course isn’t even an option our partner would consider or be able to consider (for cost or other practical reasons), we have to give the opportunity a pass.
- Implementation funding: In the countries where we work, even when we have strong relationships, strong evidence, and the partner is open to influence, there is still one crucial ingredient missing: implementation funding. Addressing this constraint means getting evidence-based programming onto the agenda of major donors.
Get partners on board
Forming a coalition of partners and funders to move forward with us is crucial. As a research and policy organization, we can’t scale effective solutions alone—nor is that the specialty that we want to…”.
Impact of a nudging intervention and factors associated with vegetable dish choice among European adolescents
A cross-sectional quasi-experimental study was implemented in restaurants in four European countries: Denmark, France, Italy, and the United Kingdom. In total, 360 individuals aged 12–19 years were allocated to control or intervention groups and asked to select from meat-based, fish-based, or vegetable-based meals. All three dishes were identical in appearance (balls of similar size and weight) and served with the same sauce (tomato sauce) and side dishes (pasta and salad). In the intervention condition, the vegetable-based option was presented as the “dish of the day,” and the numbers of dishes chosen by each group were compared using the Pearson chi-square test. A multivariate logistic regression analysis was run to assess associations between choice of the vegetable-based dish and its potential associated factors (adherence to the Mediterranean diet, food neophobia, attitudes towards nudging for vegetables, food choice questionnaire, human values scale, social norms, self-estimated health, country, gender, and belonging to the control or intervention group). All analyses were run in SPSS 22.0.
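The group-comparison step described above (a Pearson chi-square test on counts of dish choices by condition) can be sketched with SciPy. The counts in the table below are hypothetical, invented purely to show the mechanics; they are not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table (NOT the study's data):
# rows = control vs. "dish of the day" intervention,
# columns = meat-based / fish-based / vegetable-based choices.
table = np.array([
    [70, 60, 50],   # control
    [68, 58, 54],   # intervention
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.2f}")
# A large p-value, as in the study, means the intervention made no
# detectable difference in the distribution of choices.
```

The study's per-country p-values (0.53 to 0.80) correspond to exactly this kind of non-significant result.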
The nudging strategy (dish of the day) did not make a difference in the choice of the vegetable-based option among the adolescents tested (p = 0.80 for Denmark and France, and p = 0.69 and p = 0.53 for Italy and the UK, respectively). However, …
The “dish of the day” strategy did not work under the study conditions. Choice of the vegetable-based dish was predicted by the “natural” dimension of food choice, social norms, gender, and attitudes towards vegetable nudging. An understanding of factors related to choosing…
Show me the Data! A Systematic Mapping on Open Government Data Visualization
Paper by André Eberhardt and Milene Selbach Silveira: “During the last years many government organizations have adopted Open Government Data policies to make their data publicly available. Although governments are having success in publishing their data, the availability of the datasets is not enough for people to make use of them, due to a lack of technical expertise such as programming skills and knowledge of data management. In this scenario, visualization techniques can be applied to Open Government Data in order to help solve this problem.
In this sense, we analyzed previously published papers on Open Government Data visualization in order to provide an overview of how visualization techniques are being applied to Open Government Data and what the most common challenges are when dealing with it. A systematic mapping study was conducted to survey the papers published in this area. The study found 775 papers and, after applying all inclusion and exclusion criteria, 32 papers were selected. Among other results, we found that datasets related to transportation are the main ones being used and that maps are the most common visualization technique. Finally, we report that data quality is the main challenge reported by studies that applied visualization techniques to Open Government Data…(More)”.
Urban Computing
Book by Yu Zheng: “…Urban computing brings powerful computational techniques to bear on such urban challenges as pollution, energy consumption, and traffic congestion. Using today’s large-scale computing infrastructure and data gathered from sensing technologies, urban computing combines computer science with urban planning, transportation, environmental science, sociology, and other areas of urban studies, tackling specific problems with concrete methodologies in a data-centric computing framework. This authoritative treatment of urban computing offers an overview of the field, fundamental techniques, advanced models, and novel applications.
Each chapter acts as a tutorial that introduces readers to an important aspect of urban computing, with references to relevant research. The book outlines key concepts, sources of data, and typical applications; describes four paradigms of urban sensing in sensor-centric and human-centric categories; introduces data management for spatial and…”.
This is how AI bias really happens—and why it’s so hard to fix
Karen Hao at MIT Technology Review: “Over the past few months, we’ve documented how the vast majority of AI’s applications today are based on the category of algorithms known as deep learning, and how deep-learning algorithms find patterns in data. We’ve also covered how these technologies affect people’s lives: how they can perpetuate injustice in hiring, retail, and security and may already be doing so in the criminal legal system.
But it’s not enough just to know that this bias exists. If we want to be able to fix it, we need to understand the mechanics of how it arises in the first place.
How AI bias happens
We often shorthand our explanation of AI bias by blaming it on biased training data. The reality is more nuanced: bias can creep in long before the data is collected, as well as at many other stages of the deep-learning process. For the purposes of this discussion, we’ll focus on three key stages.
Framing the problem. The first thing computer scientists do when they create a deep-learning model is decide what they actually want it to achieve. A credit card company, for example, might want to predict a customer’s creditworthiness, but “creditworthiness” is a rather nebulous concept. In order to translate it into something that can be computed, the company must decide whether it wants to, say, maximize its profit margins or maximize the number of loans that get repaid. It could then define creditworthiness within the context of that goal. The problem is that “those decisions are made for various business reasons other than fairness or discrimination,” explains Solon Barocas, an assistant professor at Cornell University who specializes in fairness in machine learning. If the algorithm discovered that giving out subprime loans was an effective way to maximize profit, it would end up engaging in predatory behavior even if that wasn’t the company’s intention.
Collecting the data. There are two main ways that bias shows up in training data: either the data you collect is unrepresentative of reality, or it reflects existing prejudices. The first case might occur, for example, if a deep-learning algorithm is fed more photos of light-skinned faces than dark-skinned faces. The resulting face recognition system would inevitably be worse at recognizing darker-skinned faces. The second case is precisely what happened when Amazon discovered that its internal recruiting tool was dismissing female candidates. Because it was trained on historical hiring decisions, which favored men over women, it learned to do the same.
Preparing the data. Finally, it is possible to introduce bias during the data preparation stage, which involves selecting which attributes you want the algorithm to consider. (This is not to be confused with the problem-framing stage. You can use the same attributes to train a model for very different goals or use very different attributes to train a model for the same goal.) In the case of modeling creditworthiness, an “attribute” could be the customer’s age, income, or number of paid-off loans. In the case of Amazon’s recruiting tool, an “attribute” could be the candidate’s gender, education level, or years of experience. This is what people often call the “art” of deep learning: choosing which attributes to consider or ignore can significantly influence your model’s prediction accuracy. But while its impact on accuracy is easy to measure, its impact on the model’s bias is not.
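The interaction between the second and third stages (prejudiced historical labels, plus the choice of which attributes to feed the model) can be made concrete with a small synthetic sketch. All of the variables, effect sizes, and the scenario below are invented for illustration; this is not Amazon's tool or any real dataset.

```python
# Sketch: a model trained on biased historical hiring labels learns the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, n)        # 0/1, a sensitive attribute
experience = rng.normal(5, 2, n)      # years of experience

# Synthetic "historical" labels that encode a prejudice: candidates with
# gender == 1 were systematically favored, beyond their experience.
hired = ((experience + 3 * gender + rng.normal(0, 1, n)) > 5).astype(int)

# Include the sensitive attribute: the model learns a large positive
# weight on gender, faithfully reproducing the historical preference.
with_attr = LogisticRegression(max_iter=1000).fit(
    np.c_[experience, gender], hired)

# Exclude it: the column is hidden, but if other attributes correlate
# with gender (proxies), the bias can persist anyway.
without_attr = LogisticRegression(max_iter=1000).fit(
    np.c_[experience], hired)

print("coefficients with gender attribute:", with_attr.coef_)
```

Note that dropping the sensitive column is exactly the kind of "attribute selection" decision the article describes: easy to make, hard to audit, because its effect on accuracy is measurable while its effect on bias is not.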
Why AI bias is hard to fix
Given that context, some of the challenges of mitigating bias may already be apparent to you. Here we highlight four main ones…”.
Fact-Based Policy: How Do State and Local Governments Accomplish It?
Report and Proposal by Justine Hastings: “Fact-based policy is essential to making government more effective and more efficient, and many states could benefit from more extensive use of data and evidence when making policy. Private companies have taken advantage of declining computing costs and vast data resources to solve problems in a fact-based way, but state and local governments have not made as much progress….
Drawing on her experience in Rhode Island, Hastings proposes that states build secure, comprehensive, integrated…