How Insurance Companies Used Bad Science to Discriminate


Jessie Wright-Mendoza at JSTOR Daily: “After the Civil War, the United States searched for ways to redefine itself. But by the 1880s, the hopes of Reconstruction had dimmed. Across the United States there was instead a push to formalize and legalize discrimination against African-Americans. The effort to marginalize the first generation of free black Americans infiltrated nearly every aspect of daily life, including the cost of insurance.

Initially, African-Americans could purchase life insurance policies on equal footing with whites. That all changed in 1881. In March of that year, Prudential, one of the country’s largest insurers, announced that policies held by black adults would be worth one-third less than the same plans held by whites, though their weekly premiums would remain the same. Benefits for black children didn’t change, but weekly premiums for their policies would rise by five cents.

Prudential defended the decision by pointing out that the black mortality rate was higher than the white mortality rate. Claims paid out to black policyholders, the company explained, therefore made up a disproportionate share of all payouts. Most of the major life insurance companies followed suit, making it nearly impossible for African-Americans to obtain coverage. Across the industry, companies blocked agents from soliciting African-American customers and denied commissions on any policies issued to blacks.

The public largely accepted the statistical explanation for unequal coverage. The insurer’s job was to calculate risk. Race was merely another variable like occupation or geographic location. As one trade publication put it in 1891: “Life insurance companies are not negro-maniacs, they are business institutions…there is no sentiment and there are no politics in it.”

Companies considered race-based risk the same for all African-Americans, whether they were strong or sickly, educated or uneducated, from the country or the city. The “science” behind the risk formula is credited to Prudential statistician Frederick L. Hoffman, whose efforts to prove the genetic inferiority of the black race were used to justify the company’s discriminatory policies….(More)”.

Data-Driven Government: The Role of Chief Data Officers


Jane Wiseman for the IBM Center for The Business of Government: “Governments at all levels have seen dramatic increases in the availability and use of data over the past decade.

The push for data-driven government is currently of intense interest at the federal level, as the government develops an integrated federal data strategy in support of its goal to “leverage data as a strategic asset.” Legislation is also pending that would require agencies to designate chief data officers (CDOs).

This report focuses on the expanding use of data at the federal level and how to best manage it. Ms. Wiseman says: “The purpose of this report is to advance the use of data in government by describing the work of pioneering federal CDOs and providing a framework for thinking about how a new analytics leader might establish his or her office and use data to advance the mission of the agency.”

Ms. Wiseman’s report provides rich profiles of five pioneering CDOs in the federal government and how they have defined their new roles. Based on her research and interviews, she offers insights into how the role of agency CDOs is evolving in different agencies and the reasons agency leaders are establishing these roles.  She also offers advice on how new CDOs can be successful at the federal level, based on the experiences of the pioneers as well as the experiences of state and local CDOs….(More)”.

The Cost-Benefit Revolution


Book by Cass Sunstein: “Why policies should be based on careful consideration of their costs and benefits rather than on intuition, popular opinion, interest groups, and anecdotes.

Opinions on government policies vary widely. Some people feel passionately about the child obesity epidemic and support government regulation of sugary drinks. Others argue that people should be able to eat and drink whatever they like. Some people are alarmed about climate change and favor aggressive government intervention. Others don’t feel the need for any sort of climate regulation. In The Cost-Benefit Revolution, Cass Sunstein argues that our major disagreements really involve facts, not values. It follows that government policy should not be based on public opinion, intuitions, or pressure from interest groups, but on numbers—meaning careful consideration of costs and benefits. Will a policy save one life, or one thousand lives? Will it impose costs on consumers, and if so, will the costs be high or negligible? Will it hurt workers and small businesses, and, if so, precisely how much?
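To make that arithmetic concrete, here is a minimal sketch of a monetized cost-benefit comparison. The function, the numbers, and the $10 million value of a statistical life are illustrative assumptions (the VSL roughly tracks figures used by US regulatory agencies), not figures from Sunstein’s book:

```python
# Illustrative only: the $10M "value of a statistical life" (VSL) is an
# assumption that roughly tracks figures used by US regulatory agencies.
VSL = 10_000_000

def net_benefit(lives_saved, annual_cost, years=10):
    """Monetized benefits minus costs over a policy's time horizon."""
    return lives_saved * VSL - annual_cost * years

# A rule expected to save 1,000 lives at $200M/year in compliance costs,
# versus one expected to save a single life at the same cost.
print(net_benefit(1_000, 200_000_000))  # +8,000,000,000 -> likely justified
print(net_benefit(1, 200_000_000))      # -1,990,000,000 -> likely not
```

Real regulatory analyses add discounting, uncertainty ranges, and non-mortality benefits, but the comparative logic is the same.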

As the Obama administration’s “regulatory czar,” Sunstein knows his subject in both theory and practice. Drawing on behavioral economics and his well-known emphasis on “nudging,” he celebrates the cost-benefit revolution in policy making, tracing its defining moments in the Reagan, Clinton, and Obama administrations (and pondering its uncertain future in the Trump administration). He acknowledges that public officials often lack information about costs and benefits, and outlines state-of-the-art techniques for acquiring that information. Policies should make people’s lives better. Quantitative cost-benefit analysis, Sunstein argues, is the best available method for making this happen—even if, in the future, new measures of human well-being, also explored in this book, may be better still…(More)”.

Tanzania’s government is casting itself as the nation’s sole custodian of data


Abdi Latif Dahir at Quartz: “Tanzania’s government wants to have exclusive control over who collects and shares data about the country.

In a bill tabled in parliament this week, the government aims to criminalize the collection, analysis, and dissemination of any data without first obtaining authorization from the country’s chief statistician. The key amendments to the Statistics Act also prohibit researchers from publicly releasing any data “which is intended to invalidate, distort, or discredit official statistics.” Any person who does anything to the contrary could face a fine of not less than 10 million shillings ($4,400), a jail term of three years, or both.

Officials have said the amendments are being passed as a measure to promote peace and security and to stop the publication of fake information. Critics, however, argue the laws will curtail both the collection of crucial data and the ability to fact-check and hold official sources accountable. Opposition members in parliament also said the law could target institutions and scholars releasing data that isn’t in favor of the government….

The move to ban independent data collection could be damaging, given how much quality information can contribute to national development. African nations increasingly lack the evidence-based research that could inform how they formulate national policies. In Tanzania, independent actors often fill this gap, providing data on flood-prone areas to avoid disasters, or documenting citizens’ needs—something that isn’t captured in official government statistics….(More)”.

To Secure Knowledge: Social Science Partnerships for the Common Good


Social Science Research Council: “For decades, the social sciences have generated knowledge vital to guiding public policy, informing business, and understanding and improving the human condition. But today, the social sciences face serious threats. From dwindling federal funding to public mistrust in institutions to widespread skepticism about data, the infrastructure supporting the social sciences is shifting in ways that threaten to undercut research and knowledge production.

How can we secure social knowledge for future generations?

This question has guided the Social Science Research Council’s Task Force. Following eighteen months of consultation with key players as well as internal deliberation, we have identified the long-term developments and present threats that have created challenges for the social sciences, but also unique opportunities. And we have generated recommendations to address these issues.

Our core finding focuses on the urgent need for new partnerships and collaborations among several key players: the federal government, academic institutions, donor organizations, and the private sector. Several decades ago, these institutions had clear zones of responsibility in producing social knowledge, with the federal government providing the largest portion of funding for basic research. Today, private companies represent an increasingly large share not just of research and funding, but also of the production of data that informs the social sciences, from smartphone usage to social media patterns.

In addition, today’s social scientists face unprecedented demands for accountability, speedy publication, and generation of novel results. These pressures have emerged from the fragmented institutional foundation that undergirds research. That foundation needs a redesign in order for the social sciences to continue helping our communities address problems ranging from income inequality to education reform.

To build a better future, we identify five areas of action: Funding, Data, Ethics, Research Quality, and Research Training. In each area, our recommendations range from enlarging corporate-academic pilot programs to improving social science training in digital literacy.

A consistent theme is that none of the measures, if taken unilaterally, can generate optimal outcomes. Instead, we have issued a call to forge a new research compact to harness the potential of the social sciences for improving human lives. That compact depends on partnerships, and we urge the key players in the construction of social science knowledge—including universities, government, foundations, and corporations—to act swiftly. With the right realignments, the security of social knowledge lies within our reach….(More)”

The Hacking of America


Jill Lepore at the New York Times: “Every government is a machine, and every machine has its tinkerers — and its jams. From the start, machines have driven American democracy and, just as often, crippled it. The printing press, the telegraph, the radio, the television, the mainframe, cable TV, the internet: Each had wild-eyed boosters who promised that a machine could hold the republic together, or make it more efficient, or repair the damage caused by the last machine. Each time, this assertion would be both right and terribly wrong. But lately, it’s mainly wrong, chiefly because the rules that prevail on the internet were devised by people who fundamentally don’t believe in government.

The Constitution itself was understood by its framers as a machine, a precisely constructed instrument whose measures — its separation of powers, its checks and balances — were mechanical devices, as intricate as the gears of a clock, designed to thwart tyrants, mobs and demagogues, and to prevent the forming of factions. Once those factions began to appear, it became clear that other machines would be needed to establish stable parties. “The engine is the press,” Thomas Jefferson, an inveterate inventor, wrote in 1799.

The United States was founded as a political experiment; it seemed natural that it should advance and grow through technological experiment. Different technologies have offered different fixes. Equality was the promise of the penny press, newspapers so cheap that anyone could afford them. The New York Sun was first published in 1833. “It shines for all” was its common-man motto. Union was the promise of the telegraph. “The greatest revolution of modern times, and indeed of all time, for the amelioration of society, has been effected by the magnetic telegraph,” The Sun announced, proclaiming “the annihilation of space.”
Time was being annihilated too. As The New York Herald pointed out, the telegraph appeared to make it possible for “the whole nation” to have “the same idea at the same moment.” Frederick Douglass was convinced that the great machines of the age were ushering in an era of worldwide political revolution. “Thanks to steam navigation and electric wires,” he wrote, “a revolution cannot be confined to the place or the people where it may commence but flashes with lightning speed from heart to heart.” Henry David Thoreau raised an eyebrow: “We are in great haste to construct a magnetic telegraph from Maine to Texas; but Maine and Texas, it may be, have nothing important to communicate.”

Even the savagery of the Civil War didn’t diminish Americans’ faith that technology could solve the problem of political division. In the 1920s, Herbert Hoover, as secretary of commerce, rightly anticipated that radio, the nation’s next great mechanical experiment, would make it possible for political candidates and officeholders to speak to voters without the bother and expense of traveling to meet them. NBC began radio broadcasting in 1926, CBS in 1928. By the end of the decade, nearly every household would have a wireless. Hoover promised that radio would make Americans “literally one people.”

That radio fulfilled this promise for as long as it did is the result of decisions made by Mr. Hoover, a Republican who believed that the government had a role to play in overseeing the airwaves by issuing licenses for frequencies to broadcasting companies and regulating their use. “The ether is a public medium,” he insisted, “and its use must be for the public benefit.” He pressed for passage of the Radio Act of 1927, one of the most consequential and underappreciated acts of Progressive reform — insisting that programmers had to answer to the public interest. That commitment was extended to television in 1949 when the Federal Communications Commission, the successor to the Federal Radio Commission, established the Fairness Doctrine, a standard for television news that required a “reasonably balanced presentation” of different political views….

All of this history was forgotten or ignored by the people who wrote the rules of the internet and who peer out upon the world from their offices in Silicon Valley and boast of their disdain for the past. But the building of a new machinery of communications began even before the opening of the internet. In the 1980s, conservatives campaigned to end the Fairness Doctrine, replacing the public-interest rule for broadcasters with a market-based one: If people liked it, broadcasters could broadcast it….(More)”

Satellite Images and Shadow Analysis: How The Times Verifies Eyewitness Videos


Christoph Koettl at the New York Times: “Was a video of a chemical attack really filmed in Syria? What time of day did an airstrike happen? Which military unit was involved in a shooting in Afghanistan? Is this dramatic image of glowing clouds really showing wildfires in California?

These are some of the questions the video team at The New York Times has to answer when reviewing raw eyewitness videos, often posted to social media. It can be a highly challenging process, as misinformation shared through digital social networks is a serious problem for a modern-day newsroom. Visual information in the digital age is easy to manipulate, and even easier to spread.

Conducting visual investigations based on social media content therefore requires a mix of traditional journalistic diligence and cutting-edge internet skills, as can be seen in our recent investigation into the chemical attack in Douma, Syria.

The following provides some insight into our video verification process. It is not a comprehensive overview, but highlights some of our most trusted techniques and tools….(More)”.
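The shadow analysis in the headline lends itself to a small worked example. The sketch below is our illustration, not The Times’s actual toolchain: an object of known height casting a measurable shadow fixes the sun’s elevation, which a simplified solar-position model can then match against candidate times of day. The location, date, and measurements are hypothetical:

```python
import math

def shadow_elevation_deg(object_height_m, shadow_length_m):
    """Sun elevation implied by a shadow: tan(elevation) = height / shadow."""
    return math.degrees(math.atan2(object_height_m, shadow_length_m))

def solar_elevation_deg(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation (degrees) from a simple declination model."""
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    hour_angle = 15.0 * (solar_hour - 12.0)  # degrees from local solar noon
    lat, dec, ha = (math.radians(x) for x in (lat_deg, decl, hour_angle))
    return math.degrees(math.asin(
        math.sin(lat) * math.sin(dec)
        + math.cos(lat) * math.cos(dec) * math.cos(ha)))

# Hypothetical example: a 10 m wall casting a 14 m shadow near Damascus
# (about 33.5 N) on day-of-year 100.
observed = shadow_elevation_deg(10.0, 14.0)
candidates = [h / 4 for h in range(24, 77)  # 06:00-19:00 in 15-min steps
              if abs(solar_elevation_deg(33.5, 100, h / 4) - observed) < 1.0]
print(f"sun elevation {observed:.1f} deg -> candidate solar hours {candidates}")
```

In practice the candidate hours split into a morning and an afternoon window, and shadow direction (visible in satellite imagery) disambiguates between them.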

Swarm AI Outperforms in Stanford Medical Study


Press Release: “Stanford University School of Medicine and Unanimous AI presented a new study today showing that a small group of doctors, connected by intelligence algorithms that enable them to work together as a “hive mind,” could achieve higher diagnostic accuracy than the individual doctors or machine learning algorithms alone.  The technology used is called Swarm AI and it empowers networked human groups to combine their individual insights in real-time, using AI algorithms to converge on optimal solutions.

As presented at the 2018 SIIM Conference on Machine Intelligence in Medical Imaging, the study tasked a group of experienced radiologists with diagnosing the presence of pneumonia in chest X-rays. This is one of the most widely performed imaging procedures in the US, with more than 1 million adults hospitalized with pneumonia each year. Despite this prevalence, accurately diagnosing pneumonia from chest X-rays is highly challenging, with significant variability across radiologists. This makes it both an optimal task for applying new AI technologies and an important problem to solve for the medical community.

When diagnoses were generated using Swarm AI technology, the average error rate fell by 33% compared with traditional diagnoses by individual practitioners. This is an exciting result, showing the potential of AI technologies to amplify the accuracy of human practitioners while maintaining their direct participation in the diagnostic process.

Swarm AI technology was also compared to the state of the art in automated diagnosis using software algorithms that do not employ human practitioners. Currently, the best system in the world for automated diagnosis of pneumonia from chest X-rays is the CheXNet system from Stanford University, which made headlines in 2017 by significantly outperforming individual practitioners using deep-learning-derived algorithms.

The Swarm AI system, which combines real-time human insights with AI technology, was 22% more accurate in binary classification than the software-only CheXNet system. In other words, by connecting a group of radiologists into a medical “hive mind,” the hybrid human-machine system was able to outperform individual human doctors as well as the state of the art in deep-learning-derived algorithms….(More)”.
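Unanimous AI’s swarm algorithms are proprietary and interactive, converging in real time rather than tallying static votes, so the sketch below is only a crude stand-in. It shows the underlying statistical idea: pooling several readers’ independent probability estimates tends to beat any single reader. The function name and the numbers are hypothetical:

```python
import statistics

def pooled_diagnosis(probabilities, threshold=0.5):
    """Aggregate individual pneumonia probabilities into one group call.

    A crude stand-in for swarm-based convergence: average the group's
    independent probability estimates and threshold the result.
    """
    consensus = statistics.mean(probabilities)
    return consensus >= threshold, consensus

# Five radiologists' independent estimates that an X-ray shows pneumonia.
reads = [0.8, 0.65, 0.4, 0.7, 0.55]
positive, confidence = pooled_diagnosis(reads)
print(f"group call: {'pneumonia' if positive else 'no pneumonia'} "
      f"(pooled probability {confidence:.2f})")
```

Averaging works because individual errors that are at least partly independent tend to cancel; the swarm approach goes further by letting participants adjust their positions in real time as the group converges.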

Rohingya turn to blockchain to solve identity crisis


Skot Thayer and Alex Hern at the Guardian: “Rohingya refugees are turning to blockchain-type technology to help address one of their most existential threats: lack of officially recognised identity.

Denied citizenship in their home country of Myanmar for decades, the Muslim minority was the target of a brutal campaign of violence by the military, which culminated a year ago this week. A “clearance operation” led by Buddhist militia sent more than 700,000 Rohingya pouring over the border into Bangladesh, without passports or official ID.

The Myanmar government has since agreed to take the Rohingya back but is refusing to grant them citizenship. Many Rohingya do not want to return and face life without a home or an identity. This growing crisis prompted Muhammad Noor and his team at the Rohingya Project to try to find a digital solution.

“Why does a centralised entity like a bank or government own my identity?” asks Noor, a Rohingya community leader based in Kuala Lumpur. “Who are they to say if I am who I am?”

Using blockchain-based technology, Noor is trialling digital identity cards that aim to help Rohingya in Malaysia, Bangladesh and Saudi Arabia access services such as banking and education. The hope is that successful trials might lead to a system that can help the community across southeast Asia.

Under the scheme, a blockchain database is used to record individual digital IDs, which can then be issued to people once they have taken a test to verify that they are genuine Rohingya….
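The article doesn’t detail the Rohingya Project’s implementation. As a rough sketch of the underlying idea, assuming an append-only, hash-chained registry, the snippet below records ID entries that can be audited for tampering. It stores only a fingerprint of enrollment data, never raw personal details, and every identifier in it is hypothetical:

```python
import hashlib
import json
import time

def _hash(record: dict) -> str:
    """Deterministic SHA-256 over a canonical JSON encoding."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class IdRegistry:
    """Append-only, hash-chained registry of verified identity records."""

    def __init__(self):
        self.chain = [{"index": 0, "prev": "0" * 64,
                       "payload": "genesis", "timestamp": 0.0}]

    def issue_id(self, holder_fingerprint: str, verified_by: str) -> dict:
        """Append a record once the holder has passed a verification test."""
        block = {
            "index": len(self.chain),
            "prev": _hash(self.chain[-1]),  # link to the previous record
            "payload": {"holder": holder_fingerprint, "verifier": verified_by},
            "timestamp": time.time(),
        }
        self.chain.append(block)
        return block

    def verify_chain(self) -> bool:
        """Recompute hash links; tampering with any record breaks the chain."""
        return all(block["prev"] == _hash(self.chain[i])
                   for i, block in enumerate(self.chain[1:]))

registry = IdRegistry()
registry.issue_id("sha256:9f2c...e1", "community-panel-07")  # hypothetical IDs
assert registry.verify_chain()
```

The design choice matters for the privacy concerns raised below: the chain proves that a given record was issued and hasn’t been altered, while the personal data behind the fingerprint can stay off-chain under the holder’s control.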

Blockchain-based initiatives, such as the Rohingya Project, could eventually allow people to build the network of relationships necessary to participate in the modern global economy and prevent second and third generation “invisible” people from slipping into poverty. It could also allow refugees to send money across borders, bypassing high transaction fees.

In Jordan’s Azraq refugee camp, the United Nations World Food Programme (WFP) is using blockchain and biometrics to help Syrian refugees to purchase groceries using a voucher system. This use of the technology allows the WFP to bypass bank fees.

But Al Rjula, co-founder of the digital identity startup Tykn, says privacy is still an issue. “The technology is maturing, yet implementation by startups and emerging tech companies is still lacking,” he says.

The involvement of a trendy technology such as blockchain can often be enough to secure the funding, attention and support that start-ups – whether for-profit or charitable – need to thrive. But companies such as Tykn still have to tackle plenty of the same issues as their old-fashioned database-using counterparts, from convincing governments and NGOs to use their services in the first place to working out how to bring in enough money to pay staff, while also dealing with the fickle issues of building on a cutting-edge platform.

Blockchain-based humanitarian initiatives will also need to reckon with the problem of accountability in their efforts to aid refugees and those trapped in the limbo of statelessness.

Dilek Genc, a PhD candidate at the University of Edinburgh who studies blockchain-type applications in humanitarian aid and development, says if the aid community continues to push innovation using Silicon Valley’s creed of “fail fast and often” and to experiment on vulnerable peoples, it will be fundamentally at odds with humanitarian principles and will fail to address the political roots of the issues facing refugees…(More)”.

Government for the Future: Reflection and Vision for Tomorrow’s Leaders


Book by Mark A. Abramson, Daniel J. Chenok and John M. Kamensky: “In recognition of its 20th anniversary, The IBM Center for the Business of Government offers a retrospective of the most significant changes in government management during that period and looks forward over the next 20 years to offer alternative scenarios as to what government management might look like by the year 2040.

Part I will discuss significant management improvements in the federal government over the past 20 years, based in part on a crowdsourced survey of knowledgeable government officials and public administration experts in the field. It will draw on themes and topics examined in the 350 IBM Center reports published over the past two decades. Part II will outline alternative scenarios of how government might change over the coming 20 years. The scenarios will be developed through a series of envisioning sessions that bring together practitioners and academics to examine the future, and will be supplemented with short essays on various topics. Part II will also include essays by winners of the Center’s Challenge Grant competition, who will receive grants to identify futuristic visions of government in 2040….(More)”.