Crowdfunding Education


Article by Victoria Goldiee: “Nigeria’s education system has declined due to inadequate funding and facilities, low admission rates, and a nationwide shortage of qualified teachers. Consequently, receiving a quality education has become a privilege accessible only to families with financial means. According to research by the Nigeria Education and Training Services Industry, 49 percent of Nigeria’s youth enter trade apprenticeships or go abroad to pursue a better education. In fact, Nigeria has the highest percentage of its students overseas of any African nation.

In February 2016, social entrepreneur Bola Lawal turned to crowdfunding to make educational opportunities accessible to Nigerians. He founded ScholarX as the vehicle to realize this mission by taking advantage of the largely untapped market of unclaimed scholarships, educational grants, and philanthropic donations for African students. The X in ScholarX represents the missing value and recognition that Nigerian youth deserve for their dedication to academic achievement.

“The idea for ScholarX came from the conversation with my friends on our shared experiences,” Lawal recounts, “because I also had difficulty paying for school like millions of Nigerians.” He adds that he was “even suspended from school because” of his inability to pay the tuition fee.

Like Lawal, more than 100,000 Nigerian students overseas rely on scholarships, many of which are backed either by oil and gas companies that aim to recruit students into the industry or by federal government grants for local students. But in recent years, these scholarships have been scaled back or scrapped altogether because of the ongoing economic crisis and recession. The crash of the foreign exchange rate of Nigeria’s currency, the naira, has further threatened the prospects of Nigeria’s overseas students, leaving many unable to pay tuition…(More)”

China’s Hinterland Becomes A Critical Datascape


Article by Gary Zhexi Zhang: “In 2014, the southwestern province of Guizhou, a historically poor and mountainous area, beat out rival regions to become China’s first “Big Data Comprehensive Pilot Zone,” as part of a national directive to develop the region — which is otherwise best known as an exporter of tobacco, spirits and coal — into the infrastructural backbone of the country’s data industry. Since then, vast investment has poured into the province. Thousands of miles of highway and high-speed rail tunnel through the mountains. Driving through the province can feel vertiginous: Of the hundred highest bridges in the world, almost half are in Guizhou, and almost all were built in the last 15 years.

In 2015, Xi Jinping visited Gui’an New Area to inaugurate the province’s transformation into China’s “Big Data Valley,” exemplifying the central government’s goal to establish “high quality social and economic development,” ubiquitously advertised through socialist-style slogans plastered on highways and city streets…(More)”.

China’s biggest AI model is challenging American dominance


Article by Sam Eifling: “So far, the AI boom has been dominated by U.S. companies like OpenAI, Google, and Meta. In recent months, though, a new name has been popping up on benchmarking lists: Alibaba’s Qwen. Over the past few months, variants of Qwen have been topping the leaderboards of sites that measure an AI model’s performance.

“Qwen 72B is the king, and Chinese models are dominating,” Hugging Face CEO Clem Delangue wrote in June, after a Qwen-based model first rose to the top of his company’s Open LLM leaderboard.

It’s a surprising turnaround for the Chinese AI industry, which many thought was doomed by semiconductor restrictions and limitations on computing power. Qwen’s success is showing that China can compete with the world’s best AI models — raising serious questions about how long U.S. companies will continue to dominate the field. And by focusing on capabilities like language support, Qwen is breaking new ground on what an AI model can do — and who it can be built for.

Those capabilities have come as a surprise to many developers, even those working on Qwen itself. AI developer David Ng used Qwen to build the model that topped the Open LLM leaderboard. He has also built models using Meta’s and Google’s technology, but says Alibaba’s gave him the best results. “For some reason, it works best on the Chinese models,” he told Rest of World. “I don’t know why.”…(More)”

Why is it so hard to establish the death toll?


Article by Smriti Mallapaty: “Given the uncertainty of counting fatalities during conflict, researchers use other ways to estimate mortality.

One common method uses household surveys, says Debarati Guha-Sapir, an epidemiologist who specializes in civil conflicts at the University of Louvain in Louvain-la-Neuve, Belgium, and is based in Brussels. A sample of the population is asked how many people in their family have died over a specific period of time. This approach has been used to count deaths in conflicts elsewhere, including in Iraq [3] and the Central African Republic [4].
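To make the survey approach concrete, here is a minimal sketch in Python of the scaling calculation such a household survey supports. All names and figures are invented for illustration; real surveys also require sampling weights, design effects and confidence intervals.

```python
# Hypothetical illustration of a retrospective household mortality survey.
# All numbers are invented; the article quotes no survey figures.

def survey_mortality(households, avg_household_size, deaths_reported,
                     recall_days, total_population):
    """Return (crude death rate per 10,000 people per day, projected deaths)."""
    people_covered = households * avg_household_size
    rate_per_10k_day = deaths_reported / people_covered / recall_days * 10_000
    projected_deaths = deaths_reported / people_covered * total_population
    return rate_per_10k_day, projected_deaths

# Example: 1,000 households averaging six members report 90 deaths over a
# 120-day recall period, scaled to a population of 2 million.
rate, projected = survey_mortality(1_000, 6, 90, 120, 2_000_000)
print(f"Crude death rate: {rate:.2f} per 10,000 per day")   # 1.25
print(f"Projected deaths: {projected:,.0f}")                # 30,000
```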

The situation in Gaza right now is not conducive to a survey, given the level of movement and displacement, say researchers. And it would be irresponsible to send data collectors into an active conflict and put their lives at risk, says Ball.

There are also ethical concerns around intruding on people who lack basic access to food and medication to ask about deaths in their families, says Jamaluddine. Surveys will have to wait for the conflict to end and movement to ease, say researchers.

Another approach is to compare multiple independent lists of fatalities and calculate mortality from the overlap between them. The Human Rights Data Analysis Group used this approach to estimate the number of people killed in Syria between 2011 and 2014. Jamaluddine hopes to use the ministry fatality data in conjunction with those posted on social media by several informal groups to estimate mortality in this way. But Guha-Sapir says this method relies on the population being stable and not moving around, which is often not the case in conflict-affected communities.
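This overlap technique is generally known as capture–recapture, or multiple systems estimation. Below is a minimal two-list sketch using Chapman’s bias-corrected Lincoln–Petersen estimator; the record identifiers are invented, and real analyses such as HRDAG’s use more than two lists and far richer statistical models.

```python
# Two-list capture-recapture (Chapman's bias-corrected Lincoln-Petersen
# estimator), the simplest form of the overlap method described above.
# Identifiers are invented. The estimator assumes the two lists are
# independent and the population is closed -- exactly the assumption
# Guha-Sapir warns is fragile in conflict-affected communities.

def lincoln_petersen(list_a: set, list_b: set) -> float:
    """Estimate total deaths from two overlapping fatality lists."""
    overlap = len(list_a & list_b)
    return (len(list_a) + 1) * (len(list_b) + 1) / (overlap + 1) - 1

ministry_records = {"id_01", "id_02", "id_03", "id_04"}
social_media_records = {"id_03", "id_04", "id_05"}

estimate = lincoln_petersen(ministry_records, social_media_records)
print(f"Estimated total deaths: {estimate:.1f}")  # 5.7
```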

In addition to deaths caused directly by the violence, some civilians die from the spread of infectious diseases, starvation or lack of access to health care. In February, Jamaluddine and her colleagues used modelling to project excess deaths due to the war and found that, in a scenario in which the escalated conflict continued for a further six months, 68,650 people could die from traumatic injuries, 2,680 from non-communicable diseases such as cancer and 2,720 from infectious diseases — along with thousands more if an epidemic were to break out. On 30 July, the ministry declared a polio epidemic in Gaza after detecting the virus in sewage samples, and in mid-August it confirmed the first case of polio in 25 years, in a 10-month-old baby…

The longer the conflict continues, the harder it will be to get reliable estimates, because “reports by survivors get worse as time goes by”, says Jon Pedersen, a demographer at !Mikro in Oslo, who advises international agencies on mortality estimates…(More)”.

Germany’s botched data revamp leaves economists ‘flying blind’


Article by Olaf Storbeck: “Germany’s statistical office has suspended some of its most important indicators after botching a data update, leaving citizens and economists in the dark at a time when the country is trying to boost flagging growth.

In a nation once famed for its punctuality and reliability, even its notoriously diligent beancounters have become part of a growing perception that “nothing works any more” as Germans moan about delayed trains, derelict roads and bridges, and widespread staff shortages.

“There used to be certain aspects in life that you could just rely on, and the fact that official statistics are published on time was one of them — not any more,” said Jörg Krämer, chief economist of Commerzbank, adding that the suspended data was also closely watched by monetary policymakers and investors.

Since May the Federal Statistical Office (Destatis) has not updated time-series data for retail and wholesale sales, as well as revenue from the services sector, hospitality, car dealers and garages.

These indicators, which are published monthly and adjusted for seasonal changes, are a key component of GDP and crucial for assessing consumer demand in the EU’s largest economy.

Private consumption accounted for 52.7 per cent of German output in 2023. Retail sales made up 28 per cent of private consumption but shrank 3.4 per cent from a year earlier. Overall GDP declined 0.3 per cent last year, Destatis said.
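Combining the shares quoted above gives a rough sense of why the suspended retail series matters for tracking GDP; the arithmetic below is a back-of-the-envelope illustration, not a Destatis figure.

```python
# Back-of-the-envelope combination of the shares quoted above; not an
# official Destatis calculation.
private_consumption_share_of_gdp = 0.527    # 52.7% of German output in 2023
retail_share_of_private_consumption = 0.28  # 28% of private consumption

retail_share_of_gdp = (private_consumption_share_of_gdp
                       * retail_share_of_private_consumption)
print(f"Retail sales ≈ {retail_share_of_gdp:.1%} of GDP")  # ≈ 14.8%
```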

The Wiesbaden-based authority, which was established in 1948, said the outages had been caused by IT issues and a complex methodological change in EU business statistics in a bid to boost accuracy.

Destatis has been working on the project since the EU issued the directive in 2019, and the deadline for implementing the changes is December.

But a series of glitches, data issues and IT delays meant Destatis has been unable to publish retail sales and other services data for four months.

A key complication is that the revenues of companies that operate in both services and manufacturing will now be reported differently for each sector. In the past, all revenue was treated as either services or manufacturing, depending on which unit was bigger…(More)”
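The article does not spell out the exact Destatis rules, but a toy sketch of the general reclassification it describes might look like this; the firm and its figures are invented.

```python
# Toy illustration of the classification change described above.
# The company and revenues are invented; Destatis's actual methodology
# is more detailed than this.

revenue_by_unit = {"services": 40_000_000, "manufacturing": 60_000_000}  # euros

# Old approach: attribute all revenue to whichever unit is larger.
dominant_sector = max(revenue_by_unit, key=revenue_by_unit.get)
old_reporting = {dominant_sector: sum(revenue_by_unit.values())}

# New approach: report each sector's revenue separately.
new_reporting = dict(revenue_by_unit)

print("old:", old_reporting)  # {'manufacturing': 100000000}
print("new:", new_reporting)  # {'services': 40000000, 'manufacturing': 60000000}
```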

Problem-solving matter


Essay by David C Krakauer and Chris Kempes: “What makes computation possible? Seeking answers to that question, a hardware engineer from another planet travels to Earth in the 21st century. After descending through our atmosphere, this extraterrestrial explorer heads to one of our planet’s largest data centres, the China Telecom-Inner Mongolia Information Park, 470 kilometres west of Beijing. But computation is not easily discovered in this sprawling mini-city of server farms. Scanning the almost-uncountable transistors inside the Information Park, the visiting engineer might be excused for thinking that the answer to their question lies in the primary materials driving computational processes: silicon and metal oxides. After all, since the 1960s, most computational devices have relied on transistors and semiconductors made from these metalloid materials.

If the off-world engineer had visited Earth several decades earlier, before the arrival of metal-oxide transistors and silicon semiconductors, they might have found entirely different answers to their question. In the 1940s, before silicon semiconductors, computation might appear as a property of thermionic valves made from tungsten, molybdenum, quartz and silica – the most important materials used in vacuum tube computers.

And visiting a century earlier, long before the age of modern computing, an alien observer might come to even stranger conclusions. If they had arrived in 1804, the year the Jacquard loom was patented, they might have concluded that early forms of computation emerged from the plant matter and insect excreta used to make the wooden frames, punch cards and silk threads involved in fabric-weaving looms, the analogue precursors to modern programmable machines.

But if the visiting engineer did come to these conclusions, they would be wrong. Computation does not emerge from silicon, tungsten, insect excreta or other materials. It emerges from procedures of reason or logic.

This speculative tale is not only about the struggles of an off-world engineer. It is also an analogy for humanity’s attempts to answer one of our most difficult problems: life. For, just as an alien engineer would struggle to understand computation through materials, so it is with humans studying our distant origins…(More)”.

Use GenAI to Improve Scenario Planning


Article by Daniel J. Finkenstadt et al: “Businesses are increasingly leveraging strategic foresight and scenario planning to navigate uncertainties stemming from climate change, global conflicts, and technological advancements. Traditional methods, however, struggle with identifying key trends, exploring multiple scenarios, and providing actionable guidance. Generative AI offers a robust alternative, enabling rapid, cost-effective, and comprehensive contingency planning. This AI-driven approach enhances scenario creation, narrative exploration, and strategy generation, providing detailed, adaptable strategies rather than conclusive solutions. This approach demands accurate, relevant data and encourages iterative refinement, transforming how organizations forecast and strategize for the future…(More)”.

We finally have a definition for open-source AI


Article by Rhiannon Williams and James O’Donnell: “Open-source AI is everywhere right now. The problem is, no one agrees on what it actually is. Now we may finally have an answer. The Open Source Initiative (OSI), the self-appointed arbiter of what it means to be open source, has released a new definition, which it hopes will help lawmakers develop regulations to protect consumers from AI risks.

Though OSI has published much about what constitutes open-source technology in other fields, this marks its first attempt to define the term for AI models. It asked a 70-person group of researchers, lawyers, policymakers, and activists, as well as representatives from big tech companies like Meta, Google, and Amazon, to come up with the working definition. 

According to the group, an open-source AI system can be used for any purpose without the need to secure permission, and researchers should be able to inspect its components and study how the system works.

It should also be possible to modify the system for any purpose—including to change its output—and to share it with others to use, with or without modifications, for any purpose. In addition, the standard attempts to define a level of transparency for a given model’s training data, source code, and weights.

The previous lack of an open-source standard presented a problem…(More)”.

Revisiting the ‘Research Parasite’ Debate in the Age of AI


Article by C. Brandon Ogbunu: “A 2016 editorial published in the New England Journal of Medicine lamented the existence of “research parasites,” those who pick over the data of others rather than generating new data themselves. The article touched on the ethics and appropriateness of this practice. The most charitable interpretation of the argument centered around the hard work and effort that goes into the generation of new data, which costs millions of research dollars and takes countless person-hours. Whatever the merits of that argument, the editorial and its associated arguments were widely criticized.

Given recent advances in AI, revisiting the research parasite debate offers a new perspective on the ethics of sharing and data democracy. It is ironic that the critics of research parasites might have made a sound argument — but for the wrong setting, aimed at the wrong target, at the wrong time. Specifically, the large language models, or LLMs, that underlie generative AI tools such as OpenAI’s ChatGPT, have an ethical challenge in how they parasitize freely available data. These discussions bring up new conversations about data security that may undermine, or at least complicate, efforts at openness and data democratization.

The backlash to that 2016 editorial was swift and violent. Many arguments centered around the anti-science spirit of the message. For example, meta-analysis – which re-analyzes data from a selection of studies – is a critical practice that should be encouraged. Many groundbreaking discoveries about the natural world and human health have come from this practice, including new pictures of the molecular causes of depression and schizophrenia. Further, the central criticisms of research parasitism undermine the ethical goals of data sharing and ambitions for open science, where scientists and citizen-scientists can benefit from access to data. This differs from the status quo in 2016, when data published in many of the top journals of the world were locked behind a paywall, illegible, poorly labeled, or difficult to use. This remains largely true in 2024…(More)”.

Definitions, digital, and distance: on AI and policymaking


Article by Gavin Freeguard: “Our first question is less ‘to what extent can AI improve public policymaking?’ than ‘what is currently wrong with policymaking?’, and then, ‘is AI able to help?’.

Ask those in and around policymaking about the problems and you’ll get a list likely to include:

  • the practice not having changed in decades (or centuries)
  • it being an opaque ‘dark art’ with little transparency
  • defaulting to easily accessible stakeholders and evidence
  • a separation between policy and delivery (and digital and other disciplines), and failure to recognise the need for agility and feedback as opposed to distinct stages
  • the challenges in measuring or evaluating the impact of policy interventions and understanding what works, with a lack of awareness, let alone sharing, of case studies elsewhere
  • difficulties in sharing data
  • the siloed nature of government complicating cross-departmental working
  • policy asks often being dictated by politics, with electoral cycles leading to short-termism, ministerial churn changing priorities and personal style, events prompting rushed reactions, or political priorities dictating ‘policy-based evidence making’
  • a rush to answers before understanding the problem
  • definitional issues about what policy actually is, which make it hard to pin down or to develop professional expertise in it.

If we’re defining ‘policy’ and the problem, we also need to define ‘AI’, or at least acknowledge that we are not only talking about new, shiny generative AI, but a world of other techniques for automating processes and analysing data that have been used in government for years.

So is ‘AI’ able to help? It could support us to make better use of a wider range of data more quickly; but it could privilege that which is easier to measure, strip data of vital context, and embed biases and historical assumptions. It could ‘make decisions more transparent (perhaps through capturing digital records of the process behind them, or by visualising the data that underpins a decision)’; or make them more opaque with ‘black-box’ algorithms, and distract from overcoming the very human cultural problems around greater openness. It could help synthesise submissions or generate ideas to brainstorm; or fail to compensate for deficiencies in underlying government knowledge infrastructure, and generate gibberish. It could be a tempting silver bullet for better policy; or it could paper over the cracks, while underlying technical, organisational and cultural plumbing goes unfixed. It could have real value in some areas, or cause harms in others…(More)”.