Prompting Diverse Ideas: Increasing AI Idea Variance


Paper by Lennart Meincke, Ethan Mollick, and Christian Terwiesch: “Unlike routine tasks where consistency is prized, in creativity and innovation the goal is to create a diverse set of ideas. This paper delves into the burgeoning interest in employing Artificial Intelligence (AI) to enhance the productivity and quality of the idea generation process. While previous studies have found that the average quality of AI ideas is quite high, prior research also has pointed to the inability of AI-based brainstorming to create sufficient dispersion of ideas, which limits novelty and the quality of the overall best idea. Our research investigates methods to increase the dispersion in AI-generated ideas. Using GPT-4, we explore the effect of different prompting methods on Cosine Similarity, the number of unique ideas, and the speed with which the idea space gets exhausted. We do this in the domain of developing a new product for college students, priced under $50. In this context, we find that (1) pools of ideas generated by GPT-4 with various plausible prompts are less diverse than ideas generated by groups of human subjects; (2) the diversity of AI-generated ideas can be substantially improved using prompt engineering; and (3) Chain-of-Thought (CoT) prompting leads to the highest diversity of ideas of all prompts we evaluated, coming close to what is achieved by groups of human subjects. It also generated the highest number of unique ideas of any prompt we studied…(More)”
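
The paper's dispersion metric is cosine similarity over idea embeddings. Below is a minimal sketch of how such a score can be computed, assuming the ideas have already been embedded with some sentence encoder; the 384-dimensional stand-in array and the 1-minus-mean-similarity summary are illustrative assumptions, not the authors' exact protocol:

```python
import numpy as np

def pairwise_cosine_similarity(embeddings: np.ndarray) -> np.ndarray:
    """Cosine similarity between every pair of idea embeddings.

    embeddings: (n_ideas, dim) array, one row per idea.
    """
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / norms          # normalize rows to unit length
    return unit @ unit.T               # dot products of unit vectors

# Stand-in for real sentence embeddings of five generated ideas.
ideas = np.random.rand(5, 384)
sim = pairwise_cosine_similarity(ideas)

# Average the off-diagonal similarities; lower mean similarity
# (higher dissimilarity) means a more dispersed idea pool.
n = sim.shape[0]
diversity = 1 - (sim.sum() - n) / (n * (n - 1))
print(f"mean pairwise dissimilarity: {diversity:.3f}")
```

Under this kind of score, a prompt that pushes GPT-4 toward more dispersed embeddings (as the paper reports for CoT prompting) yields a higher mean dissimilarity.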

Trust in AI companies drops to 35 percent in new study


Article by Filip Timotija: “Trust in artificial intelligence (AI) companies has dipped to 35 percent over a five-year period in the U.S., according to new data.

The data, released Tuesday by public relations firm Edelman, found that trust in AI companies also dropped globally by eight points, going from 61 percent to 53 percent. 

The dwindling confidence in the rapidly developing tech industry comes as regulators in the U.S. and across the globe are brainstorming solutions on how to regulate the sector. 

When broken down by political party, researchers found Democrats showed the most trust in AI companies at 38 percent — compared to Republicans’ 24 percent and independents’ 25 percent, per the study.

Multiple factors contributed to the decline in trust toward the companies polled in the data, according to Justin Westcott, Edelman’s chair of global technology.

“Key among these are fears related to privacy invasion, the potential for AI to devalue human contributions, and apprehensions about unregulated technological leaps outpacing ethical considerations,” Westcott said, adding “the data points to a perceived lack of transparency and accountability in how AI companies operate and engage with societal impacts.”

Technology as a whole is losing its lead in trust among sectors, Edelman said, highlighting the key findings from the study.

“Eight years ago, technology was the leading industry in trust in 90 percent of the countries we study,” researchers wrote, referring to the 28 countries. “Now it is most trusted only in half.”

Westcott argued the findings should be a “wake up call” for AI companies to “build back credibility through ethical innovation, genuine community engagement and partnerships that place people and their concerns at the heart of AI developments.”

As for the future of the industry as a whole, “societal acceptance of the technology is now at a crossroads,” he said, adding that trust in AI and the companies producing it should be seen “not just as a challenge, but an opportunity.”

Priorities, Westcott continued, should revolve around ethical practices, transparency and a “relentless focus” on the benefits to society AI can provide…(More)”.

The AI data scraping challenge: How can we proceed responsibly?


Article by Lee Tiedrich: “Society faces an urgent and complex artificial intelligence (AI) data scraping challenge. Left unsolved, it could threaten responsible AI innovation. Data scraping refers to using web crawlers or other means to obtain data from third-party websites or social media properties. Today’s large language models (LLMs) depend on vast amounts of scraped data for training and potentially other purposes. Scraped data can include facts, creative content, computer code, personal information, brands, and just about anything else. At least some LLM operators directly scrape data from third-party sites. Common Crawl, LAION, and other sites make scraped data readily accessible. Meanwhile, Bright Data and others offer scraped data for a fee.
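
The article does not prescribe a technical mechanism, but one familiar "responsible scraping" safeguard is honoring a site's robots.txt. A minimal sketch, assuming the third-party `requests` library alongside Python's standard `urllib.robotparser` (the function name and user-agent string are invented for illustration):

```python
import urllib.robotparser
from urllib.parse import urljoin

import requests

def polite_fetch(base_url: str, path: str, user_agent: str = "research-bot"):
    """Fetch a page only if the site's robots.txt permits it.

    Illustrative only: real crawlers also need rate limiting,
    terms-of-service review, and care with personal or copyrighted data.
    """
    robots = urllib.robotparser.RobotFileParser()
    robots.set_url(urljoin(base_url, "/robots.txt"))
    robots.read()

    url = urljoin(base_url, path)
    if not robots.can_fetch(user_agent, url):
        return None  # the site has opted this path out of crawling
    return requests.get(url, headers={"User-Agent": user_agent}, timeout=10).text

if __name__ == "__main__":
    page = polite_fetch("https://example.com", "/")
    print("allowed" if page is not None else "disallowed by robots.txt")
```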

In addition to fueling commercial LLMs, scraped data can provide researchers with much-needed data to advance social good.  For instance, Environmental Journal explains how scraped data enhances sustainability analysis.  Nature reports that scraped data improves research about opioid-related deaths.  Training data in different languages can help make AI more accessible for users in Africa and other underserved regions.  Access to training data can even advance the OECD AI Principles by improving safety and reducing bias and other harms, particularly when such data is suitable for the AI system’s intended purpose…(More)”.

Evaluating LLMs Through a Federated, Scenario-Writing Approach


Article by Bogdana “Bobi” Rakova: “What do screenwriters, AI builders, researchers, and survivors of gender-based violence have in common? I’d argue they all imagine new, safe, compassionate, and empowering approaches to building understanding.

In partnership with Kwanele South Africa, I lead an interdisciplinary team exploring this commonality in the context of evaluating large language models (LLMs) — more specifically, chatbots that provide legal and social assistance in a critical context. The outcomes of our engagement are a series of evaluation objectives and scenarios that contribute to an evaluation protocol with the core tenet that when we design for the most vulnerable, we create better futures for everyone. In what follows, I describe our process. I hope this methodological approach and our early findings will inspire other evaluation efforts to meaningfully center the margins in building more positive futures that work for everyone…(More)”

Generative AI: Navigating Intellectual Property


Factsheet by WIPO: “Generative artificial intelligence (AI) tools are rapidly being adopted by many businesses and organizations for the purpose of content generation. Such tools represent both a substantial opportunity to assist business operations and a significant legal risk due to current uncertainties, including intellectual property (IP) questions.

Many organizations are seeking to put guidance in place to help their employees mitigate these risks. While each business situation and legal context will be unique, the following Guiding Principles and Checklist are intended to assist organizations in understanding the IP risks, asking the right questions, and considering potential safeguards…(More)”.

Can AI mediate conflict better than humans?


Article by Virginia Pietromarchi: “Diplomats whizzing around the globe. Hush-hush meetings, often never made public. For centuries, the art of conflict mediation has relied on nuanced human skills: from elements as simple as how to make eye contact and listen carefully to detecting shifts in emotions and subtle signals from opponents.

Now, a growing set of entrepreneurs and experts are pitching a dramatic new set of tools into the world of dispute resolution – relying increasingly on artificial intelligence (AI).

“Groundbreaking technological advancements are revolutionising the frontier of peace and mediation,” said Sama al-Hamdani, programme director of Hala System, a private company using AI and data analysis to gather unencrypted intelligence in conflict zones, among other war-related tasks.

“We are witnessing an era where AI transforms mediators into powerhouses of efficiency and insight,” al-Hamdani said.

The researcher is one of thousands of speakers participating in the Web Summit in Doha, Qatar, where digital conflict mediation is on the agenda. The four-day summit started on February 26 and concludes on Thursday, February 29.

Already, say experts, digital solutions have proven effective in complex diplomacy. At the peak of the COVID-19 restrictions, mediators were not able to travel for in-person meetings with their interlocutors.

The solution? Use remote communication software Skype to facilitate negotiations, as then-United States envoy Zalmay Khalilzad did for the Qatar-brokered talks between the US and the Taliban in 2020.

For generations, power brokers would gather behind doors to make decisions affecting people far and wide. Digital technologies can now allow the process to be relatively more inclusive.

This is what Stephanie Williams, special representative of the United Nations’ chief in Libya, did in 2021 when she used a hybrid model integrating personal and digital interactions as she led mediation efforts to establish a roadmap towards elections. That strategy helped her speak to people living in areas deemed too dangerous to travel to. The UN estimates that Williams managed to reach one million Libyans.

However, practitioners are now growing interested in the use of technology beyond online consultations…(More)”

AI as a Public Good: Ensuring Democratic Control of AI in the Information Space


Report by the Forum on Information and Democracy: “…The report outlines key recommendations to governments, the industry and relevant stakeholders, notably:

  • Foster the creation of a tailored certification system for AI companies, inspired by the success of the Fair Trade certification system.
  • Establish standards governing content authenticity and provenance, including for author authentication.
  • Implement a comprehensive legal framework that clearly defines the rights of individuals, including the right to be informed, to receive an explanation, to challenge a machine-generated outcome, and to non-discrimination.
  • Provide users with an easy and user-friendly opportunity to choose alternative recommender systems that do not optimize for engagement but instead rank in support of positive individual and societal outcomes, such as reliable information, bridging content, or diversity of information (a toy sketch of such a re-ranker follows this list).
  • Set up a participatory process to determine the rules and criteria guiding dataset provenance and curation, human labeling for AI training, alignment, and red-teaming to build inclusive, non-discriminatory and transparent AI systems…(More)”.
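
The report states this as a policy principle rather than an implementation. As a toy illustration of ranking for something other than engagement, the sketch below greedily blends a source-reliability score with a topical-diversity bonus; the `Item` fields, the weight, and the scoring rule are all invented for illustration, not drawn from the report:

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_engagement: float  # what engagement-optimized feeds sort by
    source_reliability: float    # 0..1, e.g. from an independent rating
    topic: str

def rerank(items: list[Item], w_reliability: float = 0.7) -> list[Item]:
    """Toy alternative recommender: favor reliable sources and mix topics.

    Note that predicted_engagement is deliberately ignored by this ranker.
    """
    ranked: list[Item] = []
    seen_topics: set[str] = set()
    pool = list(items)
    while pool:
        def score(it: Item) -> float:
            # Bonus for topics not yet shown, to surface diverse information.
            bonus = 0.0 if it.topic in seen_topics else 1.0 - w_reliability
            return w_reliability * it.source_reliability + bonus
        best = max(pool, key=score)
        ranked.append(best)
        seen_topics.add(best.topic)
        pool.remove(best)
    return ranked
```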

The AI project pushing local languages to replace French in Mali’s schools


Article by Annie Risemberg and Damilare Dosunmu: “For the past six months, Alou Dembele, a 27-year-old engineer and teacher, has spent his afternoons reading storybooks with children in the courtyard of a community school in Mali’s capital city, Bamako. The books are written in Bambara — Mali’s most widely spoken language — and include colorful pictures and stories based on local culture. Dembele has over 100 Bambara books to pick from — an unimaginable educational resource just a year ago.

From 1960 to 2023, French was Mali’s official language. But in June last year, the military government replaced it with 13 local languages, creating a desperate need for new educational materials.

Artificial intelligence came to the rescue: RobotsMali, a government-backed initiative, used tools like ChatGPT, Google Translate, and the free-to-use image-maker Playground to create a pool of 107 books in Bambara in less than a year. Volunteer teachers, like Dembele, distribute them through after-school classes. Within a year, the books have reached over 300 elementary school kids, according to RobotsMali’s co-founder, Michael Leventhal. They are not only helping bridge the gap created after French was dropped but could also be effective in helping children learn better, experts told Rest of World…(More)”.
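
The article names the tools but not the pipeline. A hypothetical sketch of the draft-then-translate workflow it implies, assuming the OpenAI Python client and the Google Cloud Translation API; the model choice, the prompt, and Bambara support via language code "bm" are all assumptions rather than details from the source:

```python
from openai import OpenAI                            # pip install openai
from google.cloud import translate_v2 as translate  # pip install google-cloud-translate

def draft_bambara_story(theme: str) -> str:
    """Draft a short children's story with an LLM, then machine-translate
    it into Bambara.

    Illustrative only: RobotsMali's actual prompts and review steps are not
    described in the article, and human volunteers would still need to
    check the output for accuracy and cultural fit.
    """
    llm = OpenAI()  # reads OPENAI_API_KEY from the environment
    draft = llm.chat.completions.create(
        model="gpt-4",  # assumed; the article only says "ChatGPT"
        messages=[{
            "role": "user",
            "content": f"Write a short children's story about {theme}, "
                       "set in Mali and rooted in local culture.",
        }],
    ).choices[0].message.content

    translator = translate.Client()  # needs Google Cloud credentials
    result = translator.translate(
        draft, source_language="en", target_language="bm"
    )
    return result["translatedText"]
```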

Mirror, Mirror, on the Wall, Who’s the Fairest of Them All?


Paper by Alice Xiang: “Debates in AI ethics often hinge on comparisons between AI and humans: which is more beneficial, which is more harmful, which is more biased, the human or the machine? These questions, however, are a red herring. They ignore what is most interesting and important about AI ethics: AI is a mirror. If a person standing in front of a mirror asked you, “Who is more beautiful, me or the person in the mirror?” the question would seem ridiculous. Sure, depending on the angle, lighting, and personal preferences of the beholder, the person or their reflection might appear more beautiful, but the question is moot. AI reflects patterns in our society, just and unjust, and the worldviews of its human creators, fair or biased. The question then is not which is fairer, the human or the machine, but what can we learn from this reflection of our society and how can we make AI fairer? This essay discusses the challenges to developing fairer AI, and how they stem from this reflective property…(More)”.

AI doomsayers funded by billionaires ramp up lobbying


Article by Brendan Bordelon: “Two nonprofits funded by tech billionaires are now directly lobbying Washington to protect humanity against the alleged extinction risk posed by artificial intelligence — an escalation critics see as a well-funded smokescreen to head off regulation and competition.

The similarly named Center for AI Policy and Center for AI Safety both registered their first lobbyists in late 2023, raising the profile of a sprawling influence battle that’s so far been fought largely through think tanks and congressional fellowships.

Each nonprofit spent close to $100,000 on lobbying in the last three months of the year. The groups draw money from organizations with close ties to the AI industry like Open Philanthropy, financed by Facebook co-founder Dustin Moskovitz, and Lightspeed Grants, backed by Skype co-founder Jaan Tallinn.

Their message includes policies like CAIP’s call for legislation that would hold AI developers liable for “severe harms,” require permits to develop “high-risk” systems and empower regulators to “pause AI projects if they identify a clear emergency.”

“[The] risks of AI remain neglected — and are in danger of being outpaced by the rapid rate of AI development,” Nathan Calvin, senior policy counsel at the CAIS Action Fund, said in an email.

Detractors see the whole enterprise as a diversion. By focusing on apocalyptic scenarios, critics claim, these well-funded groups are raising barriers to entry for smaller AI firms and shifting attention away from more immediate and concrete problems with the technology, such as its potential to eliminate jobs or perpetuate discrimination.

Until late last year, organizations working to focus Washington on AI’s existential threat tended to operate under the radar. Instead of direct lobbying, groups like Open Philanthropy funded AI staffers in Congress and poured money into key think tanks. The RAND Corporation, an influential think tank that played a key role in drafting President Joe Biden’s October executive order on AI, received more than $15 million from Open Philanthropy last year…(More)”.