How AI That Powers Chatbots and Search Queries Could Discover New Drugs


Karen Hao at The Wall Street Journal: “In their search for new disease-fighting medicines, drug makers have long employed a laborious trial-and-error process to identify the right compounds. But what if artificial intelligence could predict the makeup of a new drug molecule the way Google figures out what you’re searching for, or email programs anticipate your replies—like “Got it, thanks”?

That’s the aim of a new approach that uses an AI technique known as natural language processing—the same technology that enables OpenAI’s ChatGPT to generate human-like responses—to analyze and synthesize proteins, which are the building blocks of life and of many drugs. The approach exploits the fact that biological codes have something in common with search queries and email texts: Both are represented by a series of letters.

Proteins are made up of dozens to thousands of small chemical subunits known as amino acids, and scientists use special notation to document the sequences. With each amino acid corresponding to a single letter of the alphabet, proteins are represented as long, sentence-like combinations.

Natural language algorithms, which quickly analyze language and predict the next step in a conversation, can also be applied to this biological data to create protein-language models. The models encode what might be called the grammar of proteins—the rules that govern which amino acid combinations yield specific therapeutic properties—to predict the sequences of letters that could become the basis of new drug molecules. As a result, the time required for the early stages of drug discovery could shrink from years to months.
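To make the analogy concrete, here is a minimal, purely illustrative Python sketch: protein sequences are treated as strings of one-letter amino acid codes, and a toy next-token model learns which letter tends to follow a given context, then "autocompletes" a candidate sequence the way an email client suggests a reply. Real protein-language models use large transformer networks rather than the simple counting shown here, and the sequences below are made up.

```python
# Illustrative sketch: protein sequences as "sentences" of one-letter amino
# acid codes, plus a toy next-token model over those letters. This is the
# basic idea that full-scale protein-language models implement with transformers.
from collections import Counter, defaultdict

# Toy training sequences in the standard one-letter amino acid alphabet
# (hypothetical data, not real proteins).
training_sequences = [
    "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
    "MKTAYIAKQRQISFVKSHFARQLEERLGLIEVQ",
    "MKVAYIAKQRQISFVKSHFSRQLEERLGMIEVQ",
]

# Count how often each amino acid follows each two-letter context.
transition_counts = defaultdict(Counter)
for seq in training_sequences:
    for i in range(len(seq) - 2):
        context, nxt = seq[i:i + 2], seq[i + 2]
        transition_counts[context][nxt] += 1

def predict_next(sequence: str) -> str:
    """Return the most likely next amino acid given the last two letters."""
    counts = transition_counts.get(sequence[-2:])
    return counts.most_common(1)[0][0] if counts else "A"  # arbitrary fallback

# Autocomplete a candidate sequence from a short seed.
generated = "MKTAY"
for _ in range(10):
    generated += predict_next(generated)
print(generated)
```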

“Nature has provided us with tons of examples of proteins that have been designed exquisitely with a variety of functions,” says Ali Madani, founder of ProFluent Bio, a Berkeley, Calif.-based startup focused on language-based protein design. “We’re learning the blueprint from nature.”…(More)”.

Storytelling Will Save the Earth


Article by Bella Lack: “…The environmental crisis is one of overconsumption, carbon emissions, and corporate greed. But it’s also a crisis of miscommunication. For too long, hard data buried environmentalists in an echo-chamber, but in 2023, storytelling will finally enable a united global response to the environmental crisis. As this crisis worsens, we will stop communicating the climate crisis with facts and stats—instead we will use stories like Timothy’s.  

Unlike numbers or facts, stories can trigger an emotional response, harnessing the power of motivation, imagination, and personal values, which drive the most powerful and permanent forms of social change. For instance, in 2019, we all saw the images of Notre Dame cathedral erupting in flames. Three minutes after the fire began, images of the incident were being broadcast globally, eliciting an immediate response from world leaders. That same year, the Amazon forest also burned, spewing smoke that spread over 2,000 miles and burning over one and a half football fields of rain forest every minute of every day—it took three weeks for the mainstream media to report that story. Why did the burning of Notre Dame warrant such rapid responses globally, when the Amazon fires did not? Although it is just a beautiful assortment of limestone, lead, and wood, we attach personal significance to Notre Dame, because it has a story we know and can relate to. That is what propelled people to react to it, while the fact that the Amazon was on fire elicited nothing…(More)”.


The Risks of Empowering “Citizen Data Scientists”


Article by Reid Blackman and Tamara Sipes: “New tools are enabling organizations to invite and leverage non-data scientists — say, domain data experts, team members very familiar with the business processes, or heads of various business units — to propel their AI efforts. There are advantages to empowering these internal “citizen data scientists,” but also risks. Organizations considering implementing these tools should take five steps: 1) provide ongoing education, 2) provide visibility into similar use cases throughout the organization, 3) create an expert mentor program, 4) have all projects verified by AI experts, and 5) provide resources for inspiration outside your organization…(More)”.

A catalyst for community-wide action on sustainable development


Article: “Communities around the world are increasingly recognizing that breaking down silos and leveraging shared resources and interdependencies across economic, social, and environmental issues can help accelerate progress on multiple issues simultaneously. As a framework for organizing local development priorities, the world’s 17 Sustainable Development Goals (SDGs) uniquely combine a need for broad technical expertise with an opportunity to synergize across domains—all while adhering to the principle of leaving no one behind. For local leaders attempting to tackle intersecting issues using the SDGs, one underlying question is how to support new forms of collaboration to maximize impact and progress.

In early May, over 100 people across the East Central Florida (ECF) region in the U.S. participated in “Partnership for the Goals: Creating a Resilient and Thriving Community,” a two-day multi-stakeholder convening spearheaded by a team of local leaders from the East Central Florida Regional Resilience Collaborative (ECFR2C), the Central Florida Foundation, the City of Orlando, Florida for Good, Orange County, and the University of Central Florida. The convening grew out of a multi-year resilience planning process that leveraged the SDGs as a framework for tackling local economic, social, and environmental priorities all at once.

To move from community-wide planning to community-wide action, the organizers experimented with a 17 Rooms process—a new approach to accelerating collaborative action for the SDGs pioneered by the Center for Sustainable Development at Brookings and The Rockefeller Foundation. We collaborated with the ECF local organizing team and, in the process, spotted a range of more broadly relevant insights that we describe here…(More)”.

Which Connections Really Help You Find a Job?


Article by Iavor Bojinov, Karthik Rajkumar, Guillaume Saint-Jacques, Erik Brynjolfsson, and Sinan Aral: “Whom should you connect with the next time you’re looking for a job? To answer this question, we analyzed data from multiple large-scale randomized experiments involving 20 million people to measure how different types of connections impact job mobility. Our results, published recently in Science Magazine, show that your strongest ties — namely your connections to immediate coworkers, close friends, and family — were actually the least helpful for finding new opportunities and securing a job. You’ll have better luck with your weak ties: the more infrequent, arm’s-length relationships with acquaintances.

To be more specific, the ties that are most helpful for finding new jobs tend to be moderately weak: They strike a balance between exposing you to new social circles and information and having enough familiarity and overlapping interests so that the information is useful. Our findings uncovered the relationship between the strength of the connection (as measured by the number of mutual connections prior to connecting) and the likelihood that a job seeker transitions to a new role within the organization of a connection. The observation that weak ties are more beneficial for finding a job is not new. Sociologist Mark Granovetter first laid out this idea in a seminal 1973 paper that described how a person’s network affects their job prospects. Since then, the theory, known as the “strength of weak ties,” has become one of the most influential in the social sciences — underpinning network theories of information diffusion, industry structure, and human cooperation….(More)”.
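As a rough illustration of the measure described above, the Python sketch below scores each tie by the number of mutual connections the two people shared before connecting and compares job-transition rates across strength buckets. The records, bucket boundaries, and labels are assumptions made up for the example, not the study’s actual data or thresholds.

```python
# Minimal sketch (hypothetical data): tie strength measured as mutual
# connections prior to connecting, compared against whether the job seeker
# transitioned to a role via that tie.
from statistics import mean

# Each record: (mutual_connections_before_connecting, took_job_via_this_tie)
ties = [
    (0, False), (1, True),
    (2, True), (3, True), (5, True), (8, False),
    (12, True), (20, False), (25, False), (40, False),
]

def bucket(mutuals: int) -> str:
    """Assumed strength buckets for illustration only."""
    if mutuals <= 1:
        return "very weak"
    if mutuals <= 10:
        return "moderately weak"
    return "strong"

rates = {}
for label in ("very weak", "moderately weak", "strong"):
    outcomes = [hired for m, hired in ties if bucket(m) == label]
    rates[label] = mean(outcomes) if outcomes else None

# In this toy data the transition rate peaks for moderately weak ties,
# mirroring the qualitative pattern described in the excerpt.
print(rates)
```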

The Dangers of Systems Illiteracy


Review by Carol Dumaine: “In 1918, as the Great War was coming to an end after four bloody years of brutal conflict, an influenza pandemic began to ravage societies around the globe. Evidence indicates that US president Woodrow Wilson was stricken with the flu while in Paris negotiating the terms of the peace agreement in the spring of 1919.

Wilson, who had been intransigent in insisting on just peace terms for the defeated nations (what he called “peace without victory”), underwent a profound change of mental state that his personal physician and closest advisors attributed to his illness. While sick, Wilson suddenly agreed to all the terms he had previously adamantly rejected and approved a treaty that made onerous demands of Germany. 

Wilson’s reversal left Germans embittered and his own advisors disillusioned. Historian John M. Barry, who recounts this episode in his book about the 1918 pandemic, The Great Influenza, observes that most historians agree “that the harshness toward Germany of the Paris peace treaty helped create the economic hardship, nationalistic reaction, and political chaos that fostered the rise of Hitler.” 

This anecdote is a vivid illustration of how a public health disaster can intersect with world affairs, potentially sowing the seeds for a future of war. Converging crises can leave societies with too little time to regroup, breaking down resilience and capacities for governance. Barry concludes from his research into the 1918 pandemic that to forestall this loss of authority—and perhaps to avoid future, unforeseen repercussions—government leaders should share the unvarnished facts and evolving knowledge of a situation. 

Society is ultimately based on trust; during the flu pandemic, “as trust broke down, people became alienated not only from those in authority, but from each other.” Barry continues, “Those in authority must retain the public’s trust. The way to do that is to distort nothing, to put the best face on nothing, to try to manipulate no one.”

Charles Weiss makes a similar argument in his new book, The Survival Nexus: Science, Technology, and World Affairs. Weiss contends that the preventable human and economic losses of the COVID-19 pandemic were the result of politicians avoiding harsh truths: “Political leaders suppressed evidence of virus spread, downplayed the importance of the epidemic and the need to observe measures to protect the health of the population, ignored the opinions of local experts, and publicized bogus ‘cures’—all to avoid economic damage and public panic, but equally importantly to consolidate political power and to show themselves as strong leaders who were firmly in control.” …(More)”.

The Potentially Adverse Impact of Twitter 2.0 on Scientific and Research Communication


Article by Julia Cohen: “In just over a month since the change in Twitter leadership, there have been significant changes to the social media platform in its new “Twitter 2.0” version. For researchers who use Twitter as a primary source of data, including many of the computer scientists at USC’s Information Sciences Institute (ISI), the effects could be debilitating…

Over the years, Twitter has been extremely friendly to researchers, providing and maintaining a robust API (application programming interface) specifically for academic research. The Twitter API for Academic Research allows researchers with specific objectives who are affiliated with an academic institution to gather historical and real-time data sets of tweets, and related metadata, at no cost. Currently, the Twitter API for Academic Research continues to be functional and maintained in Twitter 2.0.
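For readers unfamiliar with it, below is a minimal sketch of how a researcher might pull historical tweets through the full-archive search endpoint available under academic access. It assumes a valid academic bearer token is available in an environment variable (a name chosen here for illustration); the query string and field selections are likewise illustrative, while the endpoint and parameter names follow Twitter’s public v2 documentation at the time of writing.

```python
# Sketch of one request to the Twitter API for Academic Research
# (v2 full-archive search). Assumes academic access and a bearer token.
import os
import requests

BEARER_TOKEN = os.environ["TWITTER_ACADEMIC_BEARER_TOKEN"]  # assumed env var

def search_all(query: str, start_time: str, end_time: str, max_results: int = 100):
    """Fetch one page of historical tweets matching `query`."""
    response = requests.get(
        "https://api.twitter.com/2/tweets/search/all",
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        params={
            "query": query,
            "start_time": start_time,   # ISO 8601, e.g. "2022-11-01T00:00:00Z"
            "end_time": end_time,
            "max_results": max_results,
            "tweet.fields": "created_at,public_metrics,lang",
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("data", [])

# Example: a week of English-language tweets about a societal issue.
tweets = search_all('("misinformation" OR "disinformation") lang:en',
                    "2022-11-01T00:00:00Z", "2022-11-08T00:00:00Z")
print(len(tweets), "tweets retrieved")
```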

The data obtained from the API provides a means to observe public conversations and understand people’s opinions about societal issues. Luca Luceri, a Postdoctoral Research Associate at ISI, called Twitter “a primary platform to observe online discussion tied to political and social issues.” And Twitter touts its API for Academic Research as a way for “academic researchers to use data from the public conversation to study topics as diverse as the conversation on Twitter itself.”

However, if people continue deactivating their Twitter accounts, which appears to be the case, the makeup of the user base will change, with data sets and related studies proportionally affected. This is especially true if the user base evolves in a way that makes it more ideologically homogeneous and less diverse.

According to MIT Technology Review, in the first week after its transition, Twitter may have lost one million users, which translates to a 208% increase in lost accounts. And there’s also the concern that the site could not work as effectively, because of the substantial decrease in the size of the engineering teams. This includes concerns about the durability of the service researchers rely on for data, namely the Twitter API. Jason Baumgartner, founder of Pushshift, a social media data collection, analysis, and archiving platform, said that in several recent API requests his team also saw a significant increase in error rates – in the 25-30% range – when they typically see rates near 1%. Though for now this is anecdotal, it leaves researchers wondering if they will be able to rely on Twitter data for future research.

One example of how the makeup of the less-regulated Twitter 2.0 user base could be significantly altered is if marginalized groups leave Twitter at a higher rate than the general user base, e.g., due to increased hate speech. Keith Burghardt, a Computer Scientist at ISI who studies hate speech online, said, “It’s not that an underregulated social media changes people’s opinions, but it just makes people much more vocal. So you will probably see a lot more content that is hateful.” In fact, a study by Montclair State University found that hate speech on Twitter skyrocketed in the week after the acquisition of Twitter….(More)”.

How data restrictions erode internet freedom


Article by Tom Okman: “Countries across the world – small, large, powerful and weak – are accelerating efforts to control and restrict private data. According to the Information Technology and Innovation Foundation, the number of laws, regulations and policies that restrict or require data to be stored in a specific country more than doubled between 2017 and 2021, rising from 67 to 144.

Some of these laws may be driven by benevolent intentions. After all, citizens will support stopping the spread of online disinformation, hate, and extremism or systemic cyber-snooping. Cyber-libertarian John Perry Barlow’s call for the government to “leave us alone” in cyberspace rings hollow in this context.

Government internet oversight is on the rise. (Image: Information Technology and Innovation Foundation)

But some digital policies may prove to be repressive for companies and citizens alike. They extend the justifiable concern over the dominance of large tech companies to other areas of the digital realm.

These “digital iron curtains” can take many forms. What they have in common is that they seek to silo the internet (or parts of it) and private data into national boxes. This risks dividing the internet, reducing its connective potential, and infringing basic digital freedoms…(More)”.

Abandoned: the human cost of neurotechnology failure


Article by Liam Drew: “…Hundreds of thousands of people benefit from implanted neurotechnology every day. Among the most common devices are spinal-cord stimulators, first commercialized in 1968, that help to ease chronic pain. Cochlear implants that provide a sense of hearing, and deep-brain stimulation (DBS) systems that quell the debilitating tremor of Parkinson’s disease, are also established therapies.

Encouraged by these successes, and buoyed by advances in computing and engineering, researchers are trying to develop ever more sophisticated devices for numerous other neurological and psychiatric conditions. Rather than simply stimulating the brain, spinal cord or peripheral nerves, some devices now monitor and respond to neural activity.

For example, in 2013, the US Food and Drug Administration approved a closed-loop system for people with epilepsy. The device detects signs of neural activity that could indicate a seizure and stimulates the brain to suppress it. Some researchers are aiming to treat depression by creating analogous devices that can track signals related to mood. And systems that allow people who have quadriplegia to control computers and prosthetic limbs using only their thoughts are also in development and attracting substantial funding.
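As a purely illustrative sketch of the closed-loop idea described above (not the approved device’s actual detection logic), the snippet below monitors a stand-in signal, flags readings above an assumed threshold, and triggers a stimulation routine in response; the threshold, signal source, and data are all made up for the example.

```python
# Toy closed-loop: monitor a signal, detect a possible seizure signature,
# respond with stimulation. Real devices use far more sophisticated on-board
# detection; everything here is illustrative.
import random

SEIZURE_THRESHOLD = 0.8  # assumed threshold on a normalized signal

def read_neural_activity() -> float:
    """Stand-in for an electrode reading, normalized to [0, 1]."""
    return random.random()

def deliver_stimulation() -> None:
    print("stimulation pulse delivered")

for _ in range(20):
    activity = read_neural_activity()
    if activity > SEIZURE_THRESHOLD:   # detect: activity that might indicate a seizure
        deliver_stimulation()          # respond: stimulate to suppress it
```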

The market for neurotechnology is predicted to expand by around 75% by 2026, to US$17.1 billion. But as commercial investment grows, so too do the instances of neurotechnology companies giving up on products or going out of business, abandoning the people who have come to depend on their devices.

Shortly after the demise of ATI, a company called Nuvectra, which was based in Plano, Texas, filed for bankruptcy in 2019. Its device — a new kind of spinal-cord stimulator for chronic pain — had been implanted in at least 3,000 people. In 2020, artificial-vision company Second Sight, in Sylmar, California, laid off most of its workforce, ending support for the 350 or so people who were using its much heralded retinal implant to see. And in June, another manufacturer of spinal-cord stimulators — Stimwave in Pompano Beach, Florida — filed for bankruptcy. The firm has been bought by a credit-management company and is now embroiled in a legal battle with its former chief executive. Thousands of people with the stimulator, and their physicians, are watching on in the hope that the company will continue to operate.

When the makers of implanted devices go under, the implants themselves are typically left in place — surgery to remove them is often too expensive or risky, or simply deemed unnecessary. But without ongoing technical support from the manufacturer, it is only a matter of time before the programming needs to be adjusted or a snagged wire or depleted battery renders the implant unusable.

People are then left searching for another way to manage their condition, but with the added difficulty of a non-functional implant that can be an obstacle both to medical imaging and future implants. For some people, including Möllmann-Bohle, no clear alternative exists.

“It’s a systemic problem,” says Jennifer French, executive director of Neurotech Network, a patient advocacy and support organization in St. Petersburg, Florida. “It goes all the way back to clinical trials, and I don’t think it’s received enough attention.”…(More)”.

The Wireless Body


Article by Jeremy Greene: “Nearly half the US adult population will pass out at some point in their lives. Doctors call this “syncope,” and it is bread-and-butter practice for any emergency room or urgent care clinic. While most cases are benign—a symptom of dehydration or mistimed medication—syncope can also be a sign of something gone terribly wrong. It may be a symptom of a heart attack, a blood clot in the lungs, an embolus to the arteries supplying the brain, or a life-threatening arrhythmia. After a series of tests ruling out the worst, most patients go home without incident. Many of them also go home with a Holter monitor. 

The Holter monitor is a device about the size of a pack of cards that records the electrical activity of the heart over the course of a day or more. Since its invention more than half a century ago, it has become such a common object in clinical medicine that few pause to consider its origins. But, as the makers of new Wi-Fi and cloud-enabled devices, smartphone apps, and other “wearable” technologies claim to be revolutionizing the world of preventive health care, there is much to learn from the history of this older instrument of medical surveillance…(More)”.