We need data infrastructure as well as data sharing – conflicts of interest in video game research


Article by David Zendle & Heather Wardle: “Industry data sharing has the potential to revolutionise evidence on video gaming and mental health, as well as a host of other critical topics. However, collaborative data sharing agreements between academics and industry partners may also afford industry enormous power in steering the development of this evidence base. In this paper, we outline how nonfinancial conflicts of interest may emerge when industry share data with academics. We then go on to describe ways in which such conflicts may affect the quality of the evidence base. Finally, we suggest strategies for mitigating this impact and preserving research independence. We focus on the development of data infrastructure: technological, social, and educational architecture that facilitates unfettered and free access to the kinds of high-quality data that industry hold, but without industry involvement…(More)”.

Public sector innovation has a “first mile” problem


Article by Catarina Tully and Giulio Quaggiotto: “Even if progress has been uneven, the palette of innovation approaches adopted by the public sector has considerably expanded in the last few years: from new sources of data to behavioural science, from foresight to user-centred design, from digital transformation to systems thinking. And yet, the frustration of many innovation champions within the government is palpable. We are all familiar with innovation graveyards and, in our learning journeys, probably contributed to them in spite of all best intentions:

  • Dashboards that look very “smart” and are carefully tended by a few specialists but never used by their intended target audience: decision-makers.
  • Prototypes or experiments that were developed by an innovation unit and meant to be handed over to a line ministry or city department but never were.
  • Beautifully crafted scenarios and horizon scanning reports that last the length of a press conference or a ribbon-cutting event and are quickly put on the shelves after that.

The list could go on and on.

Innovation theatre is a well-known malaise (paraphrasing Sean McDonald: “the use of [technology] interventions that make people feel as if a government—and, more often, a specific group of political leaders—is solving a problem, without it doing anything to actually solve that problem.”)

In the current climate, the pressure to “scale” quick-fixes in the face of multiple crises (as opposed to the hard work of addressing root causes, building trust, and structural transformations) is only increasing the appetite for performative theatre. Eventually, public intrapreneurs learn to use the theatre to their advantage: let the photo op with the technology gadget or the “futuristic” scenario take centre stage so as to create goodwill with the powers that be, while you work quietly backstage to do the “right” thing…(More)”.

How to spot AI-generated text


Article by Melissa Heikkilä: “This sentence was written by an AI—or was it? OpenAI’s new chatbot, ChatGPT, presents us with a problem: How will we know whether what we read online is written by a human or a machine?

Since it was released in late November, ChatGPT has been used by over a million people. It has the AI community enthralled, and it is clear the internet is increasingly being flooded with AI-generated text. People are using it to come up with jokes, write children’s stories, and craft better emails. 

ChatGPT is OpenAI’s spin-off of its large language model GPT-3, which generates remarkably human-sounding answers to questions that it’s asked. The magic—and danger—of these large language models lies in the illusion of correctness. The sentences they produce look right—they use the right kinds of words in the correct order. But the AI doesn’t know what any of it means. These models work by predicting the most likely next word in a sentence. They haven’t a clue whether something is correct or false, and they confidently present information as true even when it is not. 

In an already polarized, politically fraught online world, these AI tools could further distort the information we consume. If they are rolled out into the real world in real products, the consequences could be devastating. 

We’re in desperate need of ways to differentiate between human- and AI-written text in order to counter potential misuses of the technology, says Irene Solaiman, policy director at AI startup Hugging Face, who used to be an AI researcher at OpenAI and studied AI output detection for the release of GPT-3’s predecessor GPT-2. 

New tools will also be crucial to enforcing bans on AI-generated text and code, like the one recently announced by Stack Overflow, a website where coders can ask for help. ChatGPT can confidently regurgitate answers to software problems, but it’s not foolproof. Getting code wrong can lead to buggy and broken software, which is expensive and potentially chaotic to fix…(More)”.
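The article's description of large language models as next-word predictors can be illustrated with a toy bigram model. This is a deliberately simplified sketch, not how ChatGPT works (real models use neural networks trained on vast corpora), but it makes the core point concrete: the model ranks continuations by observed frequency and has no notion of whether its output is true.

```python
from collections import Counter, defaultdict

# Toy corpus; a real language model trains on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams: how often each word follows another.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Predict the most frequent continuation of `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" — it follows "the" more often than any other word
```

Nothing in this procedure checks facts; it only ranks what tends to come next, which is why confident-sounding output can still be wrong.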

How AI That Powers Chatbots and Search Queries Could Discover New Drugs


Karen Hao at The Wall Street Journal: “In their search for new disease-fighting medicines, drug makers have long employed a laborious trial-and-error process to identify the right compounds. But what if artificial intelligence could predict the makeup of a new drug molecule the way Google figures out what you’re searching for, or email programs anticipate your replies—like “Got it, thanks”?

That’s the aim of a new approach that uses an AI technique known as natural language processing—the same technology that enables OpenAI’s ChatGPT to generate human-like responses—to analyze and synthesize proteins, which are the building blocks of life and of many drugs. The approach exploits the fact that biological codes have something in common with search queries and email texts: Both are represented by a series of letters.

Proteins are made up of dozens to thousands of small chemical subunits known as amino acids, and scientists use special notation to document the sequences. With each amino acid corresponding to a single letter of the alphabet, proteins are represented as long, sentence-like combinations.

Natural language algorithms, which quickly analyze language and predict the next step in a conversation, can also be applied to this biological data to create protein-language models. The models encode what might be called the grammar of proteins—the rules that govern which amino acid combinations yield specific therapeutic properties—to predict the sequences of letters that could become the basis of new drug molecules. As a result, the time required for the early stages of drug discovery could shrink from years to months.

“Nature has provided us with tons of examples of proteins that have been designed exquisitely with a variety of functions,” says Ali Madani, founder of ProFluent Bio, a Berkeley, Calif.-based startup focused on language-based protein design. “We’re learning the blueprint from nature.”…(More)”.
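The letter-based representation described above is easy to see in code. The sketch below is an illustration, not any company's actual pipeline: each amino acid has a standard one-letter code, so a protein sequence can be handled with the same string tooling used for text, for example by splitting it into overlapping k-mers, a common analogue of word tokens for protein-language models.

```python
# The 20 canonical amino acids, by their standard one-letter codes.
AMINO_ACIDS = set("ACDEFGHIKLMNPQRSTVWY")

def is_valid_protein(sequence):
    """Check that a sequence uses only canonical amino-acid letters."""
    return set(sequence) <= AMINO_ACIDS

def kmers(sequence, k=3):
    """Split a protein 'sentence' into overlapping k-letter tokens,
    the rough analogue of words for a protein-language model."""
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

fragment = "GIVEQCCTSICSLYQLENYCN"  # the A-chain of human insulin
print(is_valid_protein(fragment))   # True
print(kmers(fragment, 3)[:4])       # ['GIV', 'IVE', 'VEQ', 'EQC']
```

Once sequences are tokenized this way, the same next-token machinery used for text can, in principle, be trained to propose plausible new sequences.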

Storytelling Will Save the Earth


Article by Bella Lack: “…The environmental crisis is one of overconsumption, carbon emissions, and corporate greed. But it’s also a crisis of miscommunication. For too long, hard data buried environmentalists in an echo chamber, but in 2023, storytelling will finally enable a united global response to the environmental crisis. As this crisis worsens, we will stop communicating the climate crisis with facts and stats—instead we will use stories like Timothy’s.

Unlike numbers or facts, stories can trigger an emotional response, harnessing the power of motivation, imagination, and personal values, which drive the most powerful and permanent forms of social change. For instance, in 2019, we all saw the images of Notre Dame cathedral erupting in flames. Three minutes after the fire began, images of the incident were being broadcast globally, eliciting an immediate response from world leaders. That same year, the Amazon forest also burned, spewing smoke that spread over 2,000 miles and burning over one and a half football fields of rain forest every minute of every day—it took three weeks for the mainstream media to report that story. Why did the burning of Notre Dame warrant such rapid responses globally, when the Amazon fires did not? Although it is just a beautiful assortment of limestone, lead, and wood, we attach personal significance to Notre Dame, because it has a story we know and can relate to. That is what propelled people to react to it, while the fact that the Amazon was on fire elicited nothing…(More)”.

The Risks of Empowering “Citizen Data Scientists”


Article by Reid Blackman and Tamara Sipes: “New tools are enabling organizations to invite and leverage non-data scientists — say, domain data experts, team members very familiar with the business processes, or heads of various business units — to propel their AI efforts. There are advantages to empowering these internal “citizen data scientists,” but also risks. Organizations considering implementing these tools should take five steps: 1) provide ongoing education, 2) provide visibility into similar use cases throughout the organization, 3) create an expert mentor program, 4) have all projects verified by AI experts, and 5) provide resources for inspiration outside your organization…(More)”.

A catalyst for community-wide action on sustainable development


Article: “Communities around the world are increasingly recognizing that breaking down silos and leveraging shared resources and interdependencies across economic, social, and environmental issues can help accelerate progress on multiple issues simultaneously. As a framework for organizing local development priorities, the world’s 17 Sustainable Development Goals (SDGs) uniquely combine a need for broad technical expertise with an opportunity to synergize across domains—all while adhering to the principle of leaving no one behind. For local leaders attempting to tackle intersecting issues using the SDGs, one underlying question is how to support new forms of collaboration to maximize impact and progress.

In early May, over 100 people across the East Central Florida (ECF) region in the U.S. participated in “Partnership for the Goals: Creating a Resilient and Thriving Community,” a two-day multi-stakeholder convening spearheaded by a team of local leaders from the East Central Florida Regional Resilience Collaborative (ECFR2C), the Central Florida Foundation, the City of Orlando, Florida for Good, Orange County, and the University of Central Florida. The convening grew out of a multi-year resilience planning process that leveraged the SDGs as a framework for tackling local economic, social, and environmental priorities all at once.

To move from community-wide planning to community-wide action, the organizers experimented with a 17 Rooms process—a new approach to accelerating collaborative action for the SDGs pioneered by the Center for Sustainable Development at Brookings and The Rockefeller Foundation. We collaborated with the ECF local organizing team and, in the process, spotted a range of more broadly relevant insights that we describe here…(More)”.

Which Connections Really Help You Find a Job?


Article by Iavor Bojinov, Karthik Rajkumar, Guillaume Saint-Jacques, Erik Brynjolfsson, and Sinan Aral: “Whom should you connect with the next time you’re looking for a job? To answer this question, we analyzed data from multiple large-scale randomized experiments involving 20 million people to measure how different types of connections impact job mobility. Our results, published recently in Science Magazine, show that your strongest ties — namely your connections to immediate coworkers, close friends, and family — were actually the least helpful for finding new opportunities and securing a job. You’ll have better luck with your weak ties: the more infrequent, arm’s-length relationships with acquaintances.

To be more specific, the ties that are most helpful for finding new jobs tend to be moderately weak: They strike a balance between exposing you to new social circles and information and having enough familiarity and overlapping interests so that the information is useful. Our findings uncovered the relationship between the strength of the connection (as measured by the number of mutual connections prior to connecting) and the likelihood that a job seeker transitions to a new role within the organization of a connection.

The observation that weak ties are more beneficial for finding a job is not new. Sociologist Mark Granovetter first laid out this idea in a seminal 1973 paper that described how a person’s network affects their job prospects. Since then, the theory, known as the “strength of weak ties,” has become one of the most influential in the social sciences — underpinning network theories of information diffusion, industry structure, and human cooperation….(More)”.

The Dangers of Systems Illiteracy


Review by Carol Dumaine: “In 1918, as the Great War was coming to an end after four bloody years of brutal conflict, an influenza pandemic began to ravage societies around the globe. Evidence indicates that US president Woodrow Wilson was stricken with the flu in the spring of 1919, while in Paris negotiating the terms of the peace agreement.

Wilson, who had been intransigent in insisting on just peace terms for the defeated nations (what he called “peace without victory”), underwent a profound change of mental state that his personal physician and closest advisors attributed to his illness. While sick, Wilson suddenly agreed to all the terms he had previously adamantly rejected and approved a treaty that made onerous demands of Germany. 

Wilson’s reversal left Germans embittered and his own advisors disillusioned. Historian John M. Barry, who recounts this episode in his book about the 1918 pandemic, The Great Influenza, observes that most historians agree “that the harshness toward Germany of the Paris peace treaty helped create the economic hardship, nationalistic reaction, and political chaos that fostered the rise of Hitler.” 

This anecdote is a vivid illustration of how a public health disaster can intersect with world affairs, potentially sowing the seeds for a future of war. Converging crises can leave societies with too little time to regroup, breaking down resilience and capacities for governance. Barry concludes from his research into the 1918 pandemic that to forestall this loss of authority—and perhaps to avoid future, unforeseen repercussions—government leaders should share the unvarnished facts and evolving knowledge of a situation. 

Society is ultimately based on trust; during the flu pandemic, “as trust broke down, people became alienated not only from those in authority, but from each other.” Barry continues, “Those in authority must retain the public’s trust. The way to do that is to distort nothing, to put the best face on nothing, to try to manipulate no one.”

Charles Weiss makes a similar argument in his new book, The Survival Nexus: Science, Technology, and World Affairs. Weiss contends that the preventable human and economic losses of the COVID-19 pandemic were the result of politicians avoiding harsh truths: “Political leaders suppressed evidence of virus spread, downplayed the importance of the epidemic and the need to observe measures to protect the health of the population, ignored the opinions of local experts, and publicized bogus ‘cures’—all to avoid economic damage and public panic, but equally importantly to consolidate political power and to show themselves as strong leaders who were firmly in control.” …(More)”.

The Potentially Adverse Impact of Twitter 2.0 on Scientific and Research Communication


Article by Julia Cohen: “In just over a month after the change in Twitter leadership, there have been significant changes to the social media platform in its new “Twitter 2.0” version. For researchers who use Twitter as a primary source of data, including many of the computer scientists at USC’s Information Sciences Institute (ISI), the effects could be debilitating…

Over the years, Twitter has been extremely friendly to researchers, providing and maintaining a robust API (application programming interface) specifically for academic research. The Twitter API for Academic Research allows researchers with specific objectives who are affiliated with an academic institution to gather historical and real-time data sets of tweets, and related metadata, at no cost. Currently, the Twitter API for Academic Research continues to be functional and maintained in Twitter 2.0.

The data obtained from the API provides a means to observe public conversations and understand people’s opinions about societal issues. Luca Luceri, a Postdoctoral Research Associate at ISI called Twitter “a primary platform to observe online discussion tied to political and social issues.” And Twitter touts its API for Academic Research as a way for “academic researchers to use data from the public conversation to study topics as diverse as the conversation on Twitter itself.”

However, if people continue deactivating their Twitter accounts, which appears to be the case, the makeup of the user base will change, with data sets and related studies proportionally affected. This is especially true if the user base evolves in a way that makes it more ideologically homogeneous and less diverse.

According to MIT Technology Review, in the first week after its transition, Twitter may have lost one million users, which translates to a 208% increase in lost accounts. And there’s also the concern that the site could not work as effectively because of the substantial decrease in the size of the engineering teams. This includes concerns about the durability of the service researchers rely on for data, namely the Twitter API. Jason Baumgartner, founder of Pushshift, a social media data collection, analysis, and archiving platform, said in several recent API requests, his team also saw a significant increase in error rates – in the 25-30% range – when they typically see rates near 1%. Though for now this is anecdotal, it leaves researchers wondering if they will be able to rely on Twitter data for future research.

One example of how the makeup of the less-regulated Twitter 2.0 user base could significantly be altered is if marginalized groups leave Twitter at a higher rate than the general user base, e.g. due to increased hate speech. Keith Burghardt, a Computer Scientist at ISI who studies hate speech online, said, “It’s not that an underregulated social media changes people’s opinions, but it just makes people much more vocal. So you will probably see a lot more content that is hateful.” In fact, a study by Montclair State University found that hate speech on Twitter skyrocketed in the week after the acquisition of Twitter….(More)”.