Why the future might not be where you think it is


Article by Ruth Ogden: “Imagine the future. Where is it for you? Do you see yourself striding towards it? Perhaps it’s behind you. Maybe it’s even above you.

And what about the past? Do you imagine looking over your shoulder to see it?

How you answer these questions will depend on who you are and where you come from. The way we picture the future is influenced by the culture we grow up in and the languages we are exposed to.

For many people who grew up in the UK, the US and much of Europe, the future is in front of them, and the past is behind them. People in these cultures typically perceive time as linear. They see themselves as continually moving towards the future because they cannot go back to the past.

In some other cultures, however, the locations of the past and the future are inverted. The Aymara, a South American Indigenous group of people living in the Andes, conceptualise the future as behind them and the past as in front of them.

Scientists discovered this by studying the gestures of the Aymara people during discussions of topics such as ancestors and traditions. The researchers noticed that when the Aymara spoke about their ancestors, they were likely to gesture in front of themselves, indicating that the past was in front. However, when they were asked about a future event, their gestures seemed to indicate that the future was perceived as behind them.

Analysis of how people write, speak and gesture about time suggests that the Aymara are not alone. Speakers of Darija, an Arabic dialect spoken in Morocco, also appear to imagine the past as in front and the future behind, as do some Vietnamese speakers.

The future doesn’t always have to be behind or in front of us. There is evidence that some Mandarin speakers represent the future as down and the past as up. These differences suggest that there is no universal location for the past, present and future. Instead, people construct these representations based on their upbringing and surroundings.

Culture doesn’t just influence where we see the position of the future. It also influences how we see ourselves getting there…(More)”.

What causes such maddening bottlenecks in government? ‘Kludgeocracy.’


Article by Jennifer Pahlka: “Former president Donald Trump wants to “obliterate the deep state.” As a Democrat who values government, I am chilled by the prospect. But I sometimes partly agree with him.

Certainly, Trump and I are poles apart on the nature of the problem. His “deep state” evokes a shadowy cabal that doesn’t exist. What is true, however, is that red tape and misaligned gears frequently stymie progress on even the most straightforward challenges. Ten years ago, Steven M. Teles, a political science professor at Johns Hopkins University, coined the term “kludgeocracy” to describe the problem. Since then, it has only gotten worse.

Whatever you call it, the sprawling federal bureaucracy takes care of everything from the nuclear arsenal to the social safety net to making sure our planes don’t crash. Public servants do critical work; they should be honored, not disparaged.

Yet most of them are frustrated. I’ve spoken with staffers in a dozen federal agencies this year while rolling out my book about government culture and effectiveness. I heard over and over about rigid, maximalist interpretations of rules, regulations, policies and procedures that take precedence over mission. Too often acting responsibly in government has come to mean not acting at all.

Kludgeocracy Example No. 1: Within government, designers are working to make online forms and applications easier to use. To succeed, they need to do user research, most of which is supposed to be exempt from the data-collection requirements of the Paperwork Reduction Act. Yet compliance officers insist that designers send their research plans for approval by the White House Office of Information and Regulatory Affairs (OIRA) under the act. Countless hours can go into the preparation and internal approvals of a “package” for OIRA, which then might post the plans to the Federal Register for the fun-house-mirror purpose of collecting public input on a plan to collect public input. This can result in months of delay. Meanwhile, no input happens, and no paperwork gets reduced.

Kludgeocracy Example No. 2: For critical economic and national security reasons, Congress passed a law mandating the establishment of a center for scientific research. Despite clear legislative intent, work was bogged down for months when one agency applied a statute to prohibit a certain structure for the center and another applied a different statute to require that structure. The lawyers ultimately found a solution, but it was more complex and cumbersome than anyone had hoped for. All the while, the clock was ticking.

What causes such maddening bottlenecks? The problem is mainly one of culture and incentives. It could be solved if leaders in each branch — in good faith — took the costs seriously…(More)”.

The battle over right to repair is a fight over your car’s data


Article by Ofer Tur-Sinai: “Cars are no longer just a means of transportation. They have become rolling hubs of data communication. Modern vehicles regularly transmit information wirelessly to their manufacturers.

However, as cars grow “smarter,” the right to repair them is under siege.

As legal scholars, we find that the question of whether you and your local mechanic can tap into your car’s data to diagnose and repair it spans issues of property rights, trade secrets, cybersecurity, data privacy and consumer rights. Policymakers are forced to navigate this complex legal landscape and, ideally, aim for a balanced approach that upholds the right to repair while also ensuring the safety and privacy of consumers…

Until recently, repairing a car involved connecting to its standard on-board diagnostics port to retrieve diagnostic data. The ability for independent repair shops – not just those authorized by the manufacturer – to access this information was protected by a state law in Massachusetts, approved by voters on Nov. 6, 2012, and by a nationwide memorandum of understanding between major car manufacturers and the repair industry signed on Jan. 15, 2014.
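
Reading data through that standardized port requires little more than an inexpensive adapter and openly documented commands. The minimal Python sketch below illustrates the kind of access independent shops have long relied on; it assumes the third-party python-OBD library and a generic ELM327 adapter, neither of which the article names.

```python
# Minimal sketch of reading standardized diagnostics over the OBD-II port.
# Assumes the third-party python-OBD library and a generic ELM327 adapter;
# real repair tooling is considerably more involved.
import obd

connection = obd.OBD()  # auto-detects the adapter on the available serial/USB port

if connection.is_connected():
    # A few standardized parameters any independent shop could read
    for cmd in (obd.commands.RPM, obd.commands.COOLANT_TEMP, obd.commands.GET_DTC):
        response = connection.query(cmd)
        if not response.is_null():
            print(cmd.name, response.value)
    connection.close()
```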

However, with the rise of telematics systems, which combine computing with telecommunications, these dynamics are shifting. Unlike the standardized onboard diagnostics ports, telematics systems vary across car manufacturers. These systems are often protected by digital locks, and circumventing these locks could be considered a violation of copyright law. The telematics systems also encrypt the diagnostic data before transmitting it to the manufacturer.

This reduces the accessibility of telematics information, potentially locking out independent repair shops and jeopardizing consumer choice – a lack of choice that can lead to increased costs for consumers….

One issue left unresolved by the legislation is the ownership of vehicle data. A vehicle generates all sorts of data as it operates, including location, diagnostics, driving behavior, and even usage patterns of in-car systems – for example, which apps you use and for how long.

In recent years, the question of data ownership has gained prominence. In 2015, Congress legislated that the data stored in event data recorders belongs to the vehicle owner. This was a significant step in acknowledging the vehicle owner’s right over specific datasets. However, the broader issue of data ownership in today’s connected cars remains unresolved…(More)”.

Private UK health data donated for medical research shared with insurance companies


Article by Shanti Das: “Sensitive health information donated for medical research by half a million UK citizens has been shared with insurance companies despite a pledge that it would not be.

An Observer investigation has found that UK Biobank opened up its vast biomedical database to insurance sector firms several times between 2020 and 2023. The data was provided to insurance consultancy and tech firms for projects to create digital tools that help insurers predict a person’s risk of getting a chronic disease. The findings have raised concerns among geneticists, data privacy experts and campaigners over vetting and ethical checks at Biobank.

Set up in 2006 to help researchers investigating diseases, the database contains millions of blood, saliva and urine samples, collected regularly from about 500,000 adult volunteers – along with medical records, scans, wearable device data and lifestyle information.

Approved researchers around the world can pay £3,000 to £9,000 to access records ranging from medical history and lifestyle information to whole genome sequencing data. The resulting research has yielded major medical discoveries and led to Biobank being considered a “jewel in the crown” of British science.

Biobank said it strictly guarded access to its data, only allowing access by bona fide researchers for health-related projects in the public interest. It said this included researchers of all stripes, whether employed by academic, charitable or commercial organisations – including insurance companies – and that “information about data sharing was clearly set out to participants at the point of recruitment and the initial assessment”.

But evidence gathered by the Observer suggests Biobank did not explicitly tell participants it would share data with insurance companies – and made several public commitments not to do so.

When the project was announced, in 2002, Biobank promised that data would not be given to insurance companies after concerns were raised that it could be used in a discriminatory way, such as by the exclusion of people with a particular genetic makeup from insurance.

In an FAQ section on the Biobank website, participants were told: “Insurance companies will not be allowed access to any individual results nor will they be allowed access to anonymised data.” The statement remained online until February 2006, during which time the Biobank project was subject to public scrutiny and discussed in parliament.

The promise was also reiterated in several public statements by backers of Biobank, who said safeguards would be built in to ensure that “no insurance company or police force or employer will have access”.

This weekend, Biobank said the pledge – made repeatedly over four years – no longer applied. It said the commitment had been made before recruitment formally began in 2007 and that when Biobank volunteers enrolled they were given revised information.

This included leaflets and consent forms that contained a provision that anonymised Biobank data could be shared with private firms for “health-related” research, but did not explicitly mention insurance firms or correct the previous assurances…(More)”.

Researchers warn we could run out of data to train AI by 2026. What then?


Article by Rita Matulionyte: “As artificial intelligence (AI) reaches the peak of its popularity, researchers have warned the industry might be running out of training data – the fuel that runs powerful AI systems. This could slow down the growth of AI models, especially large language models, and may even alter the trajectory of the AI revolution.

But why is a potential lack of data an issue, considering how much there is on the web? And is there a way to address the risk?…

We need a lot of data to train powerful, accurate and high-quality AI algorithms. For instance, ChatGPT was trained on 570 gigabytes of text data, or about 300 billion words.

Similarly, the Stable Diffusion algorithm (which is behind many AI image-generating apps such as DALL-E, Lensa and Midjourney) was trained on the LAION-5B dataset comprising 5.8 billion image-text pairs. If an algorithm is trained on an insufficient amount of data, it will produce inaccurate or low-quality outputs.

The quality of the training data is also important…This is why AI developers seek out high-quality content such as text from books, online articles, scientific papers, Wikipedia, and certain filtered web content. The Google Assistant was trained on 11,000 romance novels taken from self-publishing site Smashwords to make it more conversational.

The AI industry has been training AI systems on ever-larger datasets, which is why we now have high-performing models such as ChatGPT or DALL-E 3. At the same time, research shows online data stocks are growing much slower than datasets used to train AI.

In a paper published last year, a group of researchers predicted we will run out of high-quality text data before 2026 if the current AI training trends continue. They also estimated low-quality language data will be exhausted sometime between 2030 and 2050, and low-quality image data between 2030 and 2060.
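
The logic behind such projections is simple compounding: if training datasets keep growing by large multiples each year while the stock of usable text grows only slowly, the two curves must eventually cross. The toy sketch below makes that arithmetic concrete; every number in it is an illustrative assumption, not a figure from the paper.

```python
# Toy back-of-the-envelope projection of when training demand could overtake
# the available stock of high-quality text. All numbers are illustrative
# assumptions, not estimates from the cited research.
dataset_tokens = 1e12   # assumed tokens used to train a frontier model today
dataset_growth = 2.0    # assumed yearly multiplier for training-set size
stock_tokens = 2e13     # assumed stock of high-quality text available today
stock_growth = 1.07     # assumed yearly growth of that stock

year = 2024
while dataset_tokens < stock_tokens and year < 2100:
    dataset_tokens *= dataset_growth
    stock_tokens *= stock_growth
    year += 1

print(f"Under these assumptions, demand overtakes supply around {year}.")
```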

AI could contribute up to US$15.7 trillion (A$24.1 trillion) to the world economy by 2030, according to accounting and consulting group PwC. But running out of usable data could slow down its development…(More)”.

Chatbots May ‘Hallucinate’ More Often Than Many Realize


Cade Metz at The New York Times: “When the San Francisco start-up OpenAI unveiled its ChatGPT online chatbot late last year, millions were wowed by the humanlike way it answered questions, wrote poetry and discussed almost any topic. But most people were slow to realize that this new kind of chatbot often makes things up.

When Google introduced a similar chatbot several weeks later, it spewed nonsense about the James Webb telescope. The next day, Microsoft’s new Bing chatbot offered up all sorts of bogus information about the Gap, Mexican nightlife and the singer Billie Eilish. Then, in March, ChatGPT cited a half dozen fake court cases while writing a 10-page legal brief that a lawyer submitted to a federal judge in Manhattan.

Now a new start-up called Vectara, founded by former Google employees, is trying to figure out how often chatbots veer from the truth. The company’s research estimates that even in situations designed to prevent it from happening, chatbots invent information at least 3 percent of the time — and as high as 27 percent.

Experts call this chatbot behavior “hallucination.” It may not be a problem for people tinkering with chatbots on their personal computers, but it is a serious issue for anyone using this technology with court documents, medical information or sensitive business data.

Because these chatbots can respond to almost any request in an unlimited number of ways, there is no way of definitively determining how often they hallucinate. “You would have to look at all of the world’s information,” said Simon Hughes, the Vectara researcher who led the project…(More)”.

Markets and the Good


Introduction to Special Issue by Jay Tolson: “How, then, do we think beyond what has come to be the tyranny of economics—or perhaps more accurately, how do we put economics in its proper place? Coming at these questions from different angles and different first principles, our authors variously dissect formative economic doctrines (see Kyle Edward Williams, “The Myth of the Friedman Doctrine”) and propose restoring the genius of the American system of capitalism (Jacob Soll, “Hamilton’s System”) or revising the purpose and priorities of the corporation (Michael Lind, “Profit, Power, and Purpose”). Others, in turn, prescribe restraints for the excesses of liberalism (Deirdre Nansen McCloskey, “An Economic Theology of Liberalism”) or even an alternative to it, in the restoration of “common good” thinking associated with subsidiarity (Andrew Willard Jones, “Friendship and the Common Good”). Yet others examine how “burnout” and “emotional labor” became status markers and signs of virtue that weaken solidarity among workers of all kinds (Jonathan Malesic, “How We Obscure the Common Plight of Workers”) or the subtle ways in which we have reduced ourselves to cogs in our economic system (Sarah M. Brownsberger, “Name Your Industry—Or Else!”). Collectively, our authors suggest, the reluctance to question and rethink our fundamental economic assumptions and institutions—and their relation to other goods—may pose the greatest threat to real prosperity and human flourishing…(More)”.

The UN Hired an AI Company to Untangle the Israeli-Palestinian Crisis


Article by David Gilbert: “…The application of artificial intelligence technologies to conflict situations has been around since at least 1996, with machine learning being used to predict where conflicts may occur. The use of AI in this area has expanded in the intervening years, being used to improve logistics, training, and other aspects of peacekeeping missions. Lane and Shults believe they could use artificial intelligence to dig deeper and find the root causes of conflicts.

Their idea for an AI program that models the belief systems that drive human behavior first began when Lane moved to Northern Ireland a decade ago to study whether computation modeling and cognition could be used to understand issues around religious violence.

In Belfast, Lane figured out that by modeling aspects of identity and social cohesion, and identifying the factors that make people motivated to fight and die for a particular cause, he could accurately predict what was going to happen next.

“We set out to try and come up with something that could help us better understand what it is about human nature that sometimes results in conflict, and then how can we use that tool to try and get a better handle or understanding on these deeper, more psychological issues at really large scales,” Lane says.

The result of their work was a study published in 2018 in the Journal of Artificial Societies and Social Simulation, which found that people are typically peaceful but will engage in violence when an outside group threatens the core principles of their religious identity.
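
To see the mechanism the study describes, consider the deliberately simplified agent-based sketch below: each agent stays peaceful until the perceived out-group threat to its core identity exceeds a personal tolerance. It is a toy illustration under assumed parameters, not the published model.

```python
# Toy agent-based sketch of the study's core finding: agents remain peaceful
# until perceived out-group threat to their identity crosses a personal
# tolerance. Parameters are assumptions for illustration only.
import random

random.seed(0)

class Agent:
    def __init__(self):
        self.identity_salience = random.uniform(0.3, 1.0)  # how central the identity is
        self.tolerance = random.uniform(0.4, 0.9)          # threat the agent will absorb

    def reacts_violently(self, outgroup_threat: float) -> bool:
        return outgroup_threat * self.identity_salience > self.tolerance

population = [Agent() for _ in range(10_000)]

for threat in (0.2, 0.5, 0.8):  # escalating threat to the group's core principles
    mobilised = sum(a.reacts_violently(threat) for a in population)
    print(f"threat={threat:.1f}: {mobilised / len(population):.1%} of agents turn to violence")
```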

A year later, Lane wrote that the model he had developed predicted that measures introduced by Brexit—the UK’s departure from the European Union that included the introduction of a hard border in the Irish Sea between Northern Ireland and the rest of the UK—would result in a rise in paramilitary activity. Months later, the model was proved right.

The multi-agent model developed by Lane and Shults relied on distilling more than 50 million articles from GDELT, a project that monitors “the world’s broadcast, print, and web news from nearly every corner of every country in over 100 languages.” But feeding the AI millions of articles and documents was not enough, the researchers realized. In order to fully understand what was driving the people of Northern Ireland to engage in violence against their neighbors, they would need to conduct their own research…(More)”.

AI in public services will require empathy, accountability


Article by Yogesh Hirdaramani: “The Australian Government Department of the Prime Minister and Cabinet has released the first of its Long Term Insights Briefing, which focuses on how the Government can integrate artificial intelligence (AI) into public services while maintaining the trustworthiness of public service delivery.

Public servants need to remain accountable and transparent with their use of AI, continue to demonstrate empathy for the people they serve, use AI to better meet people’s needs, and build AI literacy amongst the Australian public, the report stated.

The report also cited a forthcoming study that found that Australian residents with a deeper understanding of AI are more likely to trust the Government’s use of AI in service delivery. However, more than half of survey respondents reported having little knowledge of AI.

Key takeaways

The report aims to supplement current policy work on how AI can be best governed in the public service to realise its benefits while maintaining public trust.

In the longer term, the Australian Government aims to use AI to deliver personalised services to its citizens, deliver services more efficiently and conveniently, and achieve a higher standard of care for its ageing population.

AI can help public servants achieve these goals through automating processes, improving service processing and response time, and providing AI-enabled interfaces which users can engage with, such as chatbots and virtual assistants.

However, AI can also lead to unfair or unintended outcomes due to bias in training data or hallucinations, the report noted.

According to the report, the trustworthy use of AI will require public servants to:

  1. Demonstrate integrity by remaining accountable for AI outcomes and transparent about AI use
  2. Demonstrate empathy by offering face-to-face services for those with greater vulnerabilities 
  3. Use AI in ways that improve service delivery for end-users
  4. Build internal skills and systems to implement AI, while educating the public on the impact of AI

The Australian Taxation Office currently uses AI to identify high-risk business activity statements to determine whether refunds can be issued or if further review is required, noted the report. Taxpayers can appeal the decision if staff decide to deny refunds…(More)”.
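
The report does not describe the tax office system’s internals, but the pattern it gestures at, automated risk scoring with a human decision point and a right of appeal, is a familiar one. The sketch below is a generic, assumed illustration of that triage logic, not the agency’s actual implementation.

```python
# Generic, assumed illustration of score-and-triage with a human in the loop:
# low-risk statements are processed automatically, high-risk ones go to a
# reviewer whose decision the taxpayer can appeal. Not the ATO's real system.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.7  # assumed cut-off; in practice this would be calibrated

@dataclass
class Statement:
    statement_id: str
    risk_score: float  # produced upstream by a trained model

def triage(statement: Statement) -> str:
    if statement.risk_score >= REVIEW_THRESHOLD:
        return "refer_to_human_review"  # staff decide; the taxpayer may appeal
    return "issue_refund"

for s in (Statement("A1", 0.12), Statement("B2", 0.83)):
    print(s.statement_id, triage(s))
```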

The Open Sky


Essay by Lars Erik Schönander: “Any time you walk outside, satellites may be watching you from space. There are currently more than 8,000 active satellites in orbit, including over a thousand designed to observe the Earth.

Satellite technology has come a long way since its secretive inception during the Cold War, when a country’s ability to successfully operate satellites meant not only that it was capable of launching rockets into Earth orbit but that it had eyes in the sky. Today not only governments across the world but private enterprises too launch satellites, collect and analyze satellite imagery, and sell it to a range of customers, from government agencies to the person on the street. SpaceX’s Starlink satellites bring the Internet to places where conventional coverage is spotty or compromised. Satellite data allows the United States to track rogue ships and North Korean missile launches, while scientists track wildfires, floods, and changes in forest cover.

The industry’s biggest technical challenge, aside from acquiring the satellite imagery itself, has always been to analyze and interpret it. This is why new AI tools are set to drastically change how satellite imagery is used — and who uses it. For instance, Meta’s Segment Anything Model, a machine-learning tool designed to “cut out” discrete objects from images, is proving highly effective at identifying objects in satellite images.
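
A rough sketch shows how little code that now takes. The example below feeds a satellite image tile to SAM’s automatic mask generator and lists the largest objects it finds; the checkpoint name and file paths are assumptions for illustration, not details from the essay.

```python
# Hedged sketch: segmenting objects in a satellite image tile with Meta's
# Segment Anything Model. Assumes the segment-anything package, a downloaded
# ViT-H checkpoint, and a local image file (all assumptions).
import numpy as np
from PIL import Image
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

image = np.array(Image.open("satellite_tile.png").convert("RGB"))

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

masks = mask_generator.generate(image)  # one dict per detected object/region
masks.sort(key=lambda m: m["area"], reverse=True)

for m in masks[:5]:
    print(f"area={m['area']} px, bounding box={m['bbox']}")  # bbox is [x, y, w, h]
```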

But the biggest breakthrough will likely come from large language models — tools like OpenAI’s ChatGPT — that may soon allow ordinary people to query the Earth’s surface the way data scientists query databases. Achieving this goal is the ambition of companies like Planet Labs, which has launched hundreds of satellites into space and is working with Microsoft to build what it calls a “queryable Earth.” At this point, it is still easy to dismiss their early attempt as a mere toy. But as the computer scientist Paul Graham once noted, if people like a new invention that others dismiss as a toy, this is probably a good sign of its future success.

This means that satellite intelligence capabilities that were once restricted to classified government agencies, and even now belong only to those with bountiful money or expertise, are about to be open to anyone with an Internet connection…(More)”.