Health Insurers Are Vacuuming Up Details About You — And It Could Raise Your Rates


Marshall Allen at ProPublica: “With little public scrutiny, the health insurance industry has joined forces with data brokers to vacuum up personal details about hundreds of millions of Americans, including, odds are, many readers of this story. The companies are tracking your race, education level, TV habits, marital status, net worth. They’re collecting what you post on social media, whether you’re behind on your bills, what you order online. Then they feed this information into complicated computer algorithms that spit out predictions about how much your health care could cost them.

Are you a woman who recently changed your name? You could be newly married and have a pricey pregnancy pending. Or maybe you’re stressed and anxious from a recent divorce. That, too, the computer models predict, may run up your medical bills.

Are you a woman who’s purchased plus-size clothing? You’re considered at risk of depression. Mental health care can be expensive.

Low-income and a minority? That means, the data brokers say, you are more likely to live in a dilapidated and dangerous neighborhood, increasing your health risks.
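The article does not describe any insurer’s actual model, but the pipeline it sketches (lifestyle proxies in, a predicted-cost score out) can be made concrete with a toy example. Every field name, weight and formula below is invented for illustration; real insurer and data-broker models are proprietary and far more complex.

```python
# Hypothetical sketch of a lifestyle-based cost-risk score.
# All features, weights and the formula are invented for illustration.

CONSUMER_RECORD = {
    "recent_name_change": True,    # proxy the article links to marriage or divorce
    "plus_size_purchases": False,  # proxy the article links to depression risk
    "behind_on_bills": True,
    "low_income_area": True,
}

WEIGHTS = {
    "recent_name_change": 0.8,
    "plus_size_purchases": 0.6,
    "behind_on_bills": 0.5,
    "low_income_area": 0.7,
}

def predicted_cost_multiplier(record: dict) -> float:
    """Turn lifestyle flags into a multiplier on expected claims cost."""
    score = sum(WEIGHTS[key] for key, flagged in record.items() if flagged)
    return 1.0 + 0.1 * score  # baseline of 1.0; each point adds 10 percent

print(f"Predicted cost multiplier: {predicted_cost_multiplier(CONSUMER_RECORD):.2f}")
```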

“We sit on oceans of data,” said Eric McCulley, director of strategic solutions for LexisNexis Risk Solutions, during a conversation at the data firm’s booth. And he isn’t apologetic about using it. “The fact is, our data is in the public domain,” he said. “We didn’t put it out there.”

Insurers contend they use the information to spot health issues in their clients — and flag them so they get services they need. And companies like LexisNexis say the data shouldn’t be used to set prices. But as a research scientist from one company told me: “I can’t say it hasn’t happened.”

At a time when every week brings a new privacy scandal and worries abound about the misuse of personal information, patient advocates and privacy scholars say the insurance industry’s data gathering runs counter to its touted, and federally required, allegiance to patients’ medical privacy. The Health Insurance Portability and Accountability Act, or HIPAA, only protects medical information.

“We have a health privacy machine that’s in crisis,” said Frank Pasquale, a professor at the University of Maryland Carey School of Law who specializes in issues related to machine learning and algorithms. “We have a law that only covers one source of health information. They are rapidly developing another source.”…(More)”.

How Charities Are Using Artificial Intelligence to Boost Impact


Nicole Wallace at the Chronicle of Philanthropy: “The chaos and confusion of conflict often separate family members fleeing for safety. The nonprofit Refunite uses advanced technology to help loved ones reconnect, sometimes across continents and after years of separation.

Refugees register with the service by providing basic information — their name, age, birthplace, clan and subclan, and so forth — along with similar facts about the people they’re trying to find. Powerful algorithms search for possible matches among the more than 1.1 million individuals in the Refunite system. The analytics are further refined using the more than 2,000 searches that the refugees themselves do daily.
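Refunite’s matching system is not public, but the general technique described here, fuzzy comparison of registration fields such as name and birthplace, can be sketched with Python’s standard library. All names, weights and thresholds below are invented for illustration.

```python
from difflib import SequenceMatcher

# Tiny invented registry standing in for Refunite's 1.1 million records.
REGISTRY = [
    {"name": "amina hassan warsame", "birthplace": "mogadishu", "clan": "hawiye"},
    {"name": "amiina hasan warsame", "birthplace": "mogadisho", "clan": "hawiye"},
    {"name": "jean claude niyonzima", "birthplace": "bujumbura", "clan": ""},
]

def similarity(a: str, b: str) -> float:
    """Crude string similarity in [0, 1]; real systems would add phonetic
    and culture-aware matching rather than raw edit similarity."""
    return SequenceMatcher(None, a, b).ratio()

def candidate_matches(query: dict, registry: list, threshold: float = 0.75) -> list:
    """Rank registry entries by a weighted blend of field similarities."""
    scored = []
    for entry in registry:
        score = (0.6 * similarity(query["name"], entry["name"])
                 + 0.3 * similarity(query["birthplace"], entry["birthplace"])
                 + 0.1 * similarity(query.get("clan", ""), entry.get("clan", "")))
        if score >= threshold:
            scored.append((round(score, 3), entry["name"]))
    return sorted(scored, reverse=True)

query = {"name": "amina hasan warsame", "birthplace": "mogadishu", "clan": "hawiye"}
print(candidate_matches(query, REGISTRY))
```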

The goal: find loved ones or those connected to them who might help in the hunt. Since Refunite introduced the first version of the system in 2010, it has helped more than 40,000 people reconnect.

One factor complicating the work: Cultures define family lineage differently. Refunite co-founder Christopher Mikkelsen confronted this problem when he asked a boy in a refugee camp if he knew where his mother was. “He asked me, ‘Well, what mother do you mean?’ ” Mikkelsen remembers. “And I went, ‘Uh-huh, this is going to be challenging.’ ”

Fortunately, artificial intelligence is well suited to learn and recognize different family patterns. But the technology struggles with some simple things like distinguishing the image of a chicken from that of a car. Mikkelsen believes refugees in camps could offset this weakness by tagging photographs — “car” or “not car” — to help train algorithms. Such work could earn them badly needed cash: The group hopes to set up a system that pays refugees for doing such work.
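The tagging Mikkelsen describes is the labelling step of supervised learning: humans supply the answers and an algorithm generalises from them. A minimal sketch, using invented stand-in features rather than real images:

```python
from sklearn.linear_model import LogisticRegression

# Invented stand-in features for labelled photos; a real system would use
# pixel data or embeddings from a convolutional network.
# Each row: [has_wheels, metallic_surface, feathers_or_fur]
X = [
    [1.0, 0.9, 0.0],  # tagged "car" by a human annotator
    [1.0, 0.8, 0.1],  # tagged "car"
    [0.0, 0.1, 0.9],  # tagged "not car" (a chicken, say)
    [0.1, 0.0, 1.0],  # tagged "not car"
]
y = ["car", "car", "not car", "not car"]  # the human-supplied tags

classifier = LogisticRegression().fit(X, y)
print(classifier.predict([[0.9, 0.7, 0.0]]))  # expected: ['car']
```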

“To an American, earning $4 a day just isn’t viable as a living,” Mikkelsen says. “But to the global poor, getting an access point to earning this is revolutionizing.”

Another group, Wild Me, a nonprofit founded by scientists and technologists, has created an open-source software platform that combines artificial intelligence and image recognition to identify and track individual animals. Using the system, scientists can better estimate the number of endangered animals and follow them over large expanses without using invasive techniques….

To fight sex trafficking, police officers often go undercover and interact with people trying to buy sex online. Sadly, demand is high, and there are never enough officers.

Enter Seattle Against Slavery. The nonprofit’s tech-savvy volunteers created chatbots designed to disrupt sex trafficking significantly. Using input from trafficking survivors and law-enforcement agencies, the bots can conduct simultaneous conversations with hundreds of people, engaging each of them in drawn-out exchanges and arranging rendezvous that don’t materialize. The group hopes to frustrate buyers so much that they give up their hunt for sex online….

A Philadelphia charity is using machine learning to adapt its services to clients’ needs.

Benefits Data Trust helps people enroll in government-assistance programs like food stamps and Medicaid. Since 2005, the group has helped more than 650,000 people access $7 billion in aid.

The nonprofit has data-sharing agreements with jurisdictions to access more than 40 lists of people who likely qualify for government benefits but do not receive them. The charity contacts those who might be eligible and encourages them to call the Benefits Data Trust for help applying….(More)”.

What if people were paid for their data?


The Economist: “Jennifer Lyn Morone, an American artist, thinks most people now live in a state of “data slavery”: to get free online services, she laments, they hand over intimate information to technology firms. “Personal data are much more valuable than you think,” she says. To highlight this sorry state of affairs, Ms Morone has resorted to what she calls “extreme capitalism”: she registered herself as a company in Delaware in an effort to exploit her personal data for financial gain. She created dossiers containing different subsets of data, which she displayed in a London gallery in 2016 and offered for sale, starting at £100 ($135). The entire collection, including her health data and social-security number, can be had for £7,000.

Only a few buyers have taken her up on this offer and she finds “the whole thing really absurd”… Given the current state of digital affairs, in which the collection and exploitation of personal data is dominated by big tech firms, Ms Morone’s approach, in which individuals offer their data for sale, seems unlikely to catch on. But what if people really controlled their data—and the tech giants were required to pay for access? What would such a data economy look like?…

Labour, like data, is a resource that is hard to pin down. Workers were not properly compensated for labour for most of human history. Even once people were free to sell their labour, it took decades for wages to reach liveable levels on average. History won’t repeat itself, but chances are that it will rhyme, Glen Weyl predicts in “Radical Markets”, a provocative new book he has co-written with Eric Posner of the University of Chicago. He argues that in the age of artificial intelligence, it makes sense to treat data as a form of labour.

To understand why, it helps to keep in mind that “artificial intelligence” is something of a misnomer. Messrs Weyl and Posner call it “collective intelligence”: most AI algorithms need to be trained using reams of human-generated examples, in a process called machine learning. Unless they know what the right answers (provided by humans) are meant to be, algorithms cannot translate languages, understand speech or recognise objects in images. Data provided by humans can thus be seen as a form of labour which powers AI. As the data economy grows up, such data work will take many forms. Much of it will be passive, as people engage in all kinds of activities—liking social-media posts, listening to music, recommending restaurants—that generate the data needed to power new services. But some people’s data work will be more active, as they make decisions (such as labelling images or steering a car through a busy city) that can be used as the basis for training AI systems….
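A toy illustration of that passive “data work”: a handful of invented restaurant ratings, freely given by users, become the training data behind a recommendation for someone else.

```python
import math

# Invented ratings; in a real service these accumulate as a by-product of use.
ratings = {
    "alice": {"noodle_bar": 5, "taqueria": 4, "diner": 1},
    "bob":   {"noodle_bar": 4, "taqueria": 5, "pizzeria": 4},
    "carol": {"diner": 5, "pizzeria": 2},
}

def cosine(u: dict, v: dict) -> float:
    """Similarity between two users based on the places both have rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[k] * v[k] for k in shared)
    norms = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norms

def recommend(target: str) -> str:
    """Suggest the unseen place rated highest by the most similar other user."""
    others = {name: cosine(ratings[target], r) for name, r in ratings.items() if name != target}
    neighbour = max(others, key=others.get)
    unseen = {place: score for place, score in ratings[neighbour].items()
              if place not in ratings[target]}
    return max(unseen, key=unseen.get)

print(recommend("alice"))  # alice's freely given ratings power the service
```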

But much still needs to happen for personal data to be widely considered as labour, and paid for as such. First, the right legal framework will be needed to encourage the emergence of a new data economy. The European Union’s new General Data Protection Regulation, which came into effect in May, already gives people extensive rights to check, download and even delete personal data held by companies. Second, the technology to keep track of data flows needs to become much more capable. Research to calculate the value of particular data to an AI service is in its infancy.

Third, and most important, people will have to develop a “class consciousness” as data workers. Most people say they want their personal information to be protected, but then trade it away for nearly nothing, something known as the “privacy paradox”. Yet things may be changing: more than 90% of Americans think being in control of who can get data on them is important, according to the Pew Research Centre, a think-tank….(More)”.

Ways to think about machine learning


Benedict Evans: “We’re now four or five years into the current explosion of machine learning, and pretty much everyone has heard of it. It’s not just that startups are forming every day or that the big tech platform companies are rebuilding themselves around it – everyone outside tech has read the Economist or BusinessWeek cover story, and many big companies have some projects underway. We know this is a Next Big Thing.

Going a step further, we mostly understand what neural networks might be, in theory, and we get that this might be about patterns and data. Machine learning lets us find patterns or structures in data that are implicit and probabilistic (hence ‘inferred’) rather than explicit, patterns that previously only people, and not computers, could find. It addresses a class of questions that were previously ‘hard for computers and easy for people’, or, perhaps more usefully, ‘hard for people to describe to computers’. And we’ve seen some cool (or worrying, depending on your perspective) speech and vision demos.
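A small illustration of “implicit and probabilistic rather than explicit”: no rule below says where the groups are; a clustering algorithm infers that structure from the (invented) data points themselves.

```python
from sklearn.cluster import KMeans

# Invented two-dimensional points with no labels and no hand-written rules.
points = [
    [1.0, 1.1], [0.9, 1.0], [1.2, 0.8],   # one implicit group
    [8.0, 8.2], [7.9, 8.1], [8.3, 7.7],   # another implicit group
]

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(model.labels_)           # e.g. [0 0 0 1 1 1]: a grouping nobody spelled out
print(model.cluster_centers_)  # the structure the algorithm inferred
```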

I don’t think, though, that we yet have a settled sense of quite what machine learning means – what it will mean for tech companies or for companies in the broader economy, how to think structurally about what new things it could enable, or what machine learning means for all the rest of us, and what important problems it might actually be able to solve.

This isn’t helped by the term ‘artificial intelligence’, which tends to end any conversation as soon as it’s begun. As soon as we say ‘AI’, it’s as though the black monolith from the beginning of 2001 has appeared, and we all become apes screaming at it and shaking our fists. You can’t analyze ‘AI’.

Indeed, I think one could propose a whole list of unhelpful ways of talking about current developments in machine learning. For example:

  • Data is the new oil
  • Google and China (or Facebook, or Amazon, or BAT) have all the data
  • AI will take all the jobs
  • And, of course, saying AI itself.

More useful things to talk about, perhaps, might be:

  • Automation
  • Enabling technology layers
  • Relational databases. …(More)”.

Against the Dehumanisation of Decision-Making – Algorithmic Decisions at the Crossroads of Intellectual Property, Data Protection, and Freedom of Information


Paper by Guido Noto La Diega: “Nowadays algorithms can decide if one can get a loan, is allowed to cross a border, or must go to prison. Artificial intelligence techniques (natural language processing and machine learning in the first place) enable private and public decision-makers to analyse big data in order to build profiles, which are used to make decisions in an automated way.

This work presents ten arguments against algorithmic decision-making. These revolve around the concepts of ubiquitous discretionary interpretation, holistic intuition, algorithmic bias, the three black boxes, psychology of conformity, power of sanctions, civilising force of hypocrisy, pluralism, empathy, and technocracy.

The lack of transparency of the algorithmic decision-making process does not stem merely from the characteristics of the relevant techniques used, which can make it impossible to access the rationale of the decision. It depends also on the abuse of and overlap between intellectual property rights (the “legal black box”). In the US, nearly half a million patented inventions concern algorithms; more than 67% of the algorithm-related patents were issued over the last ten years and the trend is increasing.

To counter the increased monopolisation of algorithms by means of intellectual property rights (with trade secrets leading the way), this paper presents three legal routes that enable citizens to ‘open’ the algorithms.

First, copyright and patent exceptions, as well as trade secrets are discussed.

Second, the GDPR is critically assessed. In principle, data controllers are not allowed to use algorithms to take decisions that have legal effects on the data subject’s life or similarly significantly affect them. However, when they are allowed to do so, the data subject still has the right to obtain human intervention, to express their point of view, as well as to contest the decision. Additionally, the data controller shall provide meaningful information about the logic involved in the algorithmic decision.

Third, this paper critically analyses the first known case of a court using the access right under the freedom of information regime to grant an injunction to release the source code of the computer program that implements an algorithm.

Only an integrated approach – which takes into account intellectual property, data protection, and freedom of information – may provide the citizen affected by an algorithmic decision with an effective remedy, as required by the Charter of Fundamental Rights of the EU and the European Convention on Human Rights….(More)”.

AI Nationalism


Blog by Ian Hogarth: “The central prediction I want to make and defend in this post is that continued rapid progress in machine learning will drive the emergence of a new kind of geopolitics; I have been calling it AI Nationalism. Machine learning is an omni-use technology that will come to touch all sectors and parts of society.

The transformation of both the economy and the military by machine learning will create instability at the national and international level, forcing governments to act. AI policy will become the single most important area of government policy. An accelerated arms race will emerge between key countries, and we will see increased protectionist state action to support national champions, block takeovers by foreign firms and attract talent. I use Google, DeepMind and the UK as a specific example of this issue.

This arms race will potentially speed up the pace of AI development and shorten the timescale for getting to AGI. Although there will be many common aspects to this techno-nationalist agenda, there will also be important state-specific policies. There is a difference between predicting that something will happen and believing this is a good thing. Nationalism is a dangerous path, particularly when the international order and international norms will be in flux as a result. In the concluding section I discuss how a period of AI Nationalism might transition to one of global cooperation where AI is treated as a global public good….(More)”.

Our Infant Information Revolution


Joseph Nye at Project Syndicate: “…When people are overwhelmed by the volume of information confronting them, it is hard to know what to focus on. Attention, not information, becomes the scarce resource. The soft power of attraction becomes an even more vital power resource than in the past, but so does the hard, sharp power of information warfare. And as reputation becomes more vital, political struggles over the creation and destruction of credibility multiply. Information that appears to be propaganda may not only be scorned, but may also prove counterproductive if it undermines a country’s reputation for credibility.

During the Iraq War, for example, the treatment of prisoners at Abu Ghraib and Guantanamo Bay in a manner inconsistent with America’s declared values led to perceptions of hypocrisy that could not be reversed by broadcasting images of Muslims living well in America. Similarly, President Donald Trump’s tweets that prove to be demonstrably false undercut American credibility and reduce its soft power.

The effectiveness of public diplomacy is judged by the number of minds changed (as measured by interviews or polls), not dollars spent. It is interesting to note that polls and the Portland index of the Soft Power 30 show a decline in American soft power since the beginning of the Trump administration. Tweets can help to set the global agenda, but they do not produce soft power if they are not credible.

Now the rapidly advancing technology of artificial intelligence or machine learning is accelerating all of these processes. Robotic messages are often difficult to detect. But it remains to be seen whether credibility and a compelling narrative can be fully automated….(More)”.

Data Protection and e-Privacy: From Spam and Cookies to Big Data, Machine Learning and Profiling


Chapter by Lilian Edwards in L Edwards ed Law, Policy and the Internet (Hart, 2018): “In this chapter, I examine in detail how data subjects are tracked, profiled and targeted by their activities online and, increasingly, in the “offline” world as well. Tracking is part of both commercial and state surveillance, but in this chapter I concentrate on the former. The European law relating to spam, cookies, online behavioural advertising (OBA), machine learning (ML) and the Internet of Things (IoT) is examined in detail, using both the GDPR and the forthcoming draft ePrivacy Regulation. The chapter concludes by examining both code and law solutions which might find a way forward to protect user privacy and still enable innovation, by looking to paradigms not based around consent, and less likely to rely on a “transparency fallacy”. Particular attention is drawn to the new work around Personal Data Containers (PDCs) and distributed ML analytics….(More)”.

Artificial intelligence in non-profit organizations


Darrell M. West and Theron Kelso at Brookings: “Artificial intelligence provides a way to use automated software to perform a number of different tasks. Private industry, government, and universities have deployed it to manage routine requests and common administrative processes. Fields from finance and healthcare to retail and defense are witnessing a dramatic expansion in the use of these tools.

Yet non-profits often lack the financial resources or organizational capabilities to innovate through technology. Most non-profits struggle with small budgets and inadequate staffing, and they fall behind the cutting edge of new technologies. This limits their efficiency and effectiveness, and makes it difficult for them to have the kind of impact they would like.

However, there is growing interest in artificial intelligence (AI), machine learning (ML), and data analytics in non-profit organizations. Below are some of the many examples of non-profits using emerging technologies to handle finance, human resources, communications, internal operations, and sustainability.

FINANCE

Fraud and corruption are major challenges for any kind of organization as it is hard to monitor every financial transaction and business contract. AI tools can help managers automatically detect actions that warrant additional investigation. Businesses long have used AI and ML to create early warning systems, spot abnormalities, and thereby minimize financial misconduct. These tools offer ways to combat fraud and detect unusual transactions.
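A minimal sketch of the kind of early-warning system described here, using an off-the-shelf anomaly detector on invented transaction amounts:

```python
from sklearn.ensemble import IsolationForest

# Invented transaction amounts; the last one is the sort of outlier an
# early-warning system would surface for human review.
amounts = [[120.0], [95.0], [130.0], [110.0], [105.0], [98.0], [9500.0]]

detector = IsolationForest(contamination=0.1, random_state=0).fit(amounts)
flags = detector.predict(amounts)  # -1 marks an anomaly, 1 marks normal
for amount, flag in zip(amounts, flags):
    if flag == -1:
        print(f"Flag for review: ${amount[0]:,.2f}")
```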

HUMAN RESOURCES

Advanced software helps organizations advertise, screen, and hire promising staff members. Once managers have decided what qualities they are seeking, AI can match applicants with employers. Automated systems can pre-screen resumes, check for relevant experience and skills, and identify applicants who are best suited for particular organizations. They also can weed out those who lack the required skills or do not pass basic screening criteria.
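A stripped-down sketch of that pre-screening step, with invented applicants and skill lists matched against a required-skills checklist; real HR systems parse free-text resumes and use far richer models.

```python
# All requirements, applicants and the scoring rule are invented for illustration.
REQUIRED_SKILLS = {"grant writing", "volunteer management", "spanish"}

applicants = {
    "applicant_a": {"grant writing", "spanish", "fundraising"},
    "applicant_b": {"accounting"},
    "applicant_c": {"grant writing", "volunteer management", "spanish"},
}

def screen(candidates: dict, required: set, minimum: int = 2) -> list:
    """Keep candidates who meet a minimum number of required skills, best first."""
    scored = [(len(skills & required), name) for name, skills in candidates.items()]
    return sorted([pair for pair in scored if pair[0] >= minimum], reverse=True)

for matches, name in screen(applicants, REQUIRED_SKILLS):
    print(f"{name}: {matches}/{len(REQUIRED_SKILLS)} required skills")
```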

COMMUNICATIONS

Every non-profit faces communications challenges. In a rapidly changing world, it is hard to keep in touch with outside donors, internal staff, and interested individuals. Chatbots automate conversations for commonly asked questions through text messaging. These tools can help with customer service and routine requests such as how to contribute money, address a budget question, or learn about upcoming programs. They represent an efficient and effective way to communicate with internal and external audiences….(More)”.
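A minimal sketch of such an FAQ chatbot; the keywords, answers and matching rule are all invented for illustration.

```python
# Keyword-matching FAQ responder; real chatbots use intent classifiers.
FAQ = {
    ("donate", "contribute", "give"): "You can contribute at example.org/donate.",
    ("budget", "finances"): "Our annual report covers the budget in detail.",
    ("programs", "events", "upcoming"): "Upcoming programs are listed at example.org/events.",
}

def reply(message: str) -> str:
    words = message.lower().split()
    for keywords, answer in FAQ.items():
        if any(keyword in words for keyword in keywords):
            return answer
    return "Thanks for reaching out; a staff member will follow up."

print(reply("How do I donate to your organization?"))
```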

AI trust and AI fears: A media debate that could divide society


Article by Vyacheslav Polonski: “Unless you live under a rock, you probably have been inundated with recent news on machine learning and artificial intelligence (AI). With all the recent breakthroughs, it almost seems like AI can already predict the future. Police forces are using it to map when and where crime is likely to occur. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.

Of course, many decisions in our lives require a good forecast, and AI agents are almost always better at forecasting than their human counterparts. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don’t like relying on AI and prefer to trust human experts, even if these experts are wrong.

If we want AI to really benefit people, we need to find a way to get people to trust it. To do that, we need to understand why people are so reluctant to trust AI in the first place….

Many people are also simply not familiar with many instances of AI actually working, because it often happens in the background. Instead, they are acutely aware of instances where AI goes terribly wrong.

These unfortunate examples have received a disproportionate amount of media attention, emphasising the message that humans cannot always rely on technology. In the end, it all goes back to the simple truth that machine learning is not foolproof, in part because the humans who design it aren’t….

Fortunately we already have some ideas about how to improve trust in AI — there’s light at the end of the tunnel.

  1. Experience: One solution may be to provide more hands-on experiences with automation apps and other AI applications in everyday situations (like this robot that can get you a beer from the fridge). Thus, instead of presenting Sony’s new robot dog Aibo as an exclusive product for the upper class, we’d recommend making these kinds of innovations more accessible to the masses. Simply having previous experience with AI can significantly improve people’s attitudes towards the technology, as we found in our experimental study. And this is especially important for the general public that may not have a very sophisticated understanding of the technology. Similar evidence also suggests that the more you use other technologies such as the Internet, the more you trust them.
  2. Insight: Another solution may be to open the “black box” of machine learning algorithms and be slightly more transparent about how they work. Companies such as Google, Airbnb and Twitter already release transparency reports on a regular basis. These reports provide information about government requests and surveillance disclosures. A similar practice for AI systems could help people have a better understanding of how algorithmic decisions are made. Therefore, providing people with a top-level understanding of machine learning systems could go a long way towards alleviating algorithmic aversion; a minimal sketch of what such insight might look like follows this list.
  3. Control: Lastly, creating more of a collaborative decision-making process will help build trust and allow the AI to learn from human experience. In our work at Avantgarde Analytics, we have also found that involving people more in the AI decision-making process could improve trust and transparency. In a similar vein, a group of researchers at the University of Pennsylvania recently found that giving people control over algorithms can help create more trust in AI predictions. Volunteers in their study who were given the freedom to slightly modify an algorithm felt more satisfied with it, were more likely to believe it was superior, and were more likely to use it in the future.
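Picking up the sketch promised under “Insight” above: an invented loan-style model whose per-feature contributions are shown alongside its decision. The data, feature names and model are all made up for illustration.

```python
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "years_at_address", "missed_payments"]
X = [[3.2, 5, 0], [1.1, 1, 4], [2.8, 7, 1], [0.9, 2, 5], [3.5, 9, 0], [1.0, 1, 3]]
y = [1, 0, 1, 0, 1, 0]  # 1 = approved, 0 = declined (invented outcomes)

model = LogisticRegression().fit(X, y)

applicant = [1.2, 2, 3]
# Per-feature contribution to the linear score: weight * value.
contributions = {name: round(weight * value, 3)
                 for name, weight, value in zip(feature_names, model.coef_[0], applicant)}
print("Decision:", "approved" if model.predict([applicant])[0] == 1 else "declined")
print("Contributions:", contributions)
```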

These guidelines (experience, insight and control) could help make AI systems more transparent and comprehensible to the individuals affected by their decisions….(More)”.