Artificial Intelligence


Stanford Encyclopedia of Philosophy: “Artificial intelligence (AI) is the field devoted to building artificial animals (or at least artificial creatures that – in suitable contexts – appear to be animals) and, for many, artificial persons (or at least artificial creatures that – in suitable contexts – appear to be persons).[1] Such goals immediately ensure that AI is a discipline of considerable interest to many philosophers, and this has been confirmed (e.g.) by the energetic attempt, on the part of numerous philosophers, to show that these goals are in fact un/attainable. On the constructive side, many of the core formalisms and techniques used in AI come out of, and are indeed still much used and refined in, philosophy: first-order logic and its extensions; intensional logics suitable for the modeling of doxastic attitudes and deontic reasoning; inductive logic, probability theory, and probabilistic reasoning; practical reasoning and planning, and so on. In light of this, some philosophers conduct AI research and development as philosophy.

In the present entry, the history of AI is briefly recounted, proposed definitions of the field are discussed, and an overview of the field is provided. In addition, both philosophical AI (AI pursued as and out of philosophy) and philosophy of AI are discussed, via examples of both. The entry ends with some de rigueur speculative commentary regarding the future of AI….(More)”.

How To Use Bureaucracies


Samo Burja at LessWrong: “…The purpose of a bureaucracy is to save the time of a competent person. Put another way: to save time, some competent people will create a system that is meant to do exactly what they want — nothing more and nothing less. In particular, it’s necessary to create a bureaucracy when you are both (a) trying to do something that you do not have the capacity to do on your own, and (b) unable to find a competent, aligned person to handle the project for you. Bureaucracies ameliorate the problem of talent and alignment scarcity.

Features of Bureaucracies

Bureaucrats are expected to act according to a script, or a set of procedures — and that’s it.

Owners don’t trust that bureaucrats will be competent or aligned enough to act in line with the owner’s wishes of their own accord. Given this lack of trust, owners should be trying to disempower bureaucrats. Bureaucracies are built to align people and make them sufficiently competent by chaining them with rules. When bureaucracies deliberately restrict innovation, they are doing it for good reason.

Bureaucrats are meant to have only borrowed power (power that can easily be taken away) given to them by the owner or operator of the bureaucracy.

Effective Bureaucracies

What is an effective, owned bureaucracy? Why are effective bureaucracies owned? To begin, we must make two important distinctions: one between owned and abandoned bureaucracies, and one between effective and ineffective bureaucracies.

Owned bureaucracies are bureaucracies with an owner; they’re bureaucracies that someone can shape. Abandoned bureaucracies are bureaucracies without an owner.

If a bureaucracy is owned, the bureaucracy’s creator is likely the owner. The creator will have knowledge about the setup of the bureaucracy that is necessary for properly reforming it. Others, unless given this information, will not understand the bureaucracy well enough to properly reform it.

The person technically in charge of the bureaucracy (e.g. the C.E.O. of a company who is not its founder) might not be its owner simply because he or she doesn’t have sufficient information about the bureaucracy’s setup to guide it. As a result, the official head of a given bureaucracy may just be another bureaucrat.

While the owner is typically the creator, this needn’t be true, as long as the new owner has come to understand enough of the function of the bureaucracy to make effective adaptations to its procedures.

Effective bureaucracies are bureaucracies that are handling the project they were created to handle. Ineffective bureaucracies are bureaucracies that are not handling the project they were created to handle.

Bureaucracies that are properly set up will be effective at the start. Changes in reality require changes in procedures, however, so a bureaucracy’s procedures inevitably need to be altered appropriately for it to remain effective. Over time, abandoned bureaucracies, having no person who can functionally shape the bureaucracy to make these changes, quickly become ineffective bureaucracies.

Owned bureaucracies, on the other hand, have a shot at making these adaptations to prevent decay. If the owner is skilled, the bureaucracy’s procedures can be modified, and the bureaucracy will continue serving its original purpose. If the owner is unskilled, it is as if the bureaucracy is abandoned — the owner’s efforts to change the bureaucracy’s strategies won’t yield successful adaptation, and the bureaucracy will become ineffective. As a result, for a bureaucracy to remain effective over time, it must be an owned, not abandoned, bureaucracy with a sufficiently capable owner.

Losing and Dismantling Bureaucracies

Bureaucracies are best thought of as an extension of their creator and as a source of power for him or her. However, the owner can lose control of the bureaucracy over time, as bureaucrats convert borrowed power into owned power by exploiting information asymmetries. While owners will try to limit the owned power of their bureaucrats, the bureaucrats will have more than enough time to study the instruments of their control and will learn what is rewarded and what isn’t….

Bureaucracies originated as a way to extend power and effects far beyond what a single individual can achieve. They can do so without expensive, difficult coordination, and without individual talent that is hard to train and evaluate.

Much as factories produce cheap products at scale with unskilled labor, displacing craftsmen, bureaucracies have displaced the local social fabric as the generators of social outcomes.

We find ourselves embedded in a bureaucratized landscape. What can or cannot be done in it is determined by the organizations composing it. The constant drive by talented individuals both to extend power and to make do with unskilled white-collar labor (a category that economists should recognize and talk about more) has littered the landscape with many large organizations. Some remain piloted; others are long abandoned. Some continue to perform vital social functions; others lumber about making life difficult.

Much as we might bemoan the very real human cost bureaucracies impose, they currently provide services at economies of scale that are otherwise simply not possible. We must acknowledge our collective and individual dependence on them and plan to interact with them accordingly….(More)”.

Health Insurers Are Vacuuming Up Details About You — And It Could Raise Your Rates


Marshall Allen at ProPublica: “With little public scrutiny, the health insurance industry has joined forces with data brokers to vacuum up personal details about hundreds of millions of Americans, including, odds are, many readers of this story. The companies are tracking your race, education level, TV habits, marital status, net worth. They’re collecting what you post on social media, whether you’re behind on your bills, what you order online. Then they feed this information into complicated computer algorithms that spit out predictions about how much your health care could cost them.

Are you a woman who recently changed your name? You could be newly married and have a pricey pregnancy pending. Or maybe you’re stressed and anxious from a recent divorce. That, too, the computer models predict, may run up your medical bills.

Are you a woman who’s purchased plus-size clothing? You’re considered at risk of depression. Mental health care can be expensive.

Low-income and a minority? That means, the data brokers say, you are more likely to live in a dilapidated and dangerous neighborhood, increasing your health risks.

“We sit on oceans of data,” said Eric McCulley, director of strategic solutions for LexisNexis Risk Solutions, during a conversation at the data firm’s booth. And he isn’t apologetic about using it. “The fact is, our data is in the public domain,” he said. “We didn’t put it out there.”

Insurers contend they use the information to spot health issues in their clients — and flag them so they get services they need. And companies like LexisNexis say the data shouldn’t be used to set prices. But as a research scientist from one company told me: “I can’t say it hasn’t happened.”

At a time when every week brings a new privacy scandal and worries abound about the misuse of personal information, patient advocates and privacy scholars say the insurance industry’s data gathering runs counter to its touted, and federally required, allegiance to patients’ medical privacy. The Health Insurance Portability and Accountability Act, or HIPAA, only protects medical information.

“We have a health privacy machine that’s in crisis,” said Frank Pasquale, a professor at the University of Maryland Carey School of Law who specializes in issues related to machine learning and algorithms. “We have a law that only covers one source of health information. They are rapidly developing another source.”…(More)”.

Exploring New Labscapes: Converging and Diverging on Social Innovation Labs


Essay by Marlieke Kieboom: “…The question ‘what is a (social innovation) lab?’ is as old as the lab community itself and seems to return at every (social innovation) lab gathering. It came up at the very first event of its kind (Kennisland’s Lab2: Lab for Labs, Amsterdam 2013) and has been debated at every subsequent event since under hashtags like #socinnlabs, #sociallabs and #psilabs (see MaRS’s Labs for Systems Change — 2014, Nesta’s Labworks — 2015, EU Policy Lab’s Lab Connections — 2016 and ESADE’s Labs for Social Innovation — 2017).

However, the concept has remained roughly the same since we saw the first wave of labs (Helsinki Design Lab, MindLab and Reos’ Change Labs) in the early 2010s. Social innovation labs are permanent or short-term structures/projects/events that use a variety of experimental methods to support collaboration between stakeholders to collectively address social challenges at a systemic level. Stakeholders range from citizens and community action groups to businesses, universities and public administrations. Their specific characteristics (e.g. developing experimental user-led research methods, building innovation capacity, convening multi-disciplinary teams, working to reach scale) and shapes (public sector innovation labs, social innovation labs, digital service labs, policy labs) are well described in many publications (e.g. Lab Matters, 2014; Labs for Social Innovation, 2017).

As Nesta neatly shows, innovation labs are part of a family, or a movement, of connected experimental, innovative approaches like service design, behavioural insights, citizen engagement, and so on.

 
Spot the labs (Source: https://www.nesta.org.uk/blog/landscape-of-innovation-approaches/)

So why does this question keep coming back? The roots of the confusion and debates may lie in the word ‘social’. The medical, technological, and business sectors know exactly what they aim for in their innovation labs. They are ‘controlled-for’ environments where experimentation leads to developing, testing and scaling futuristic (mostly for profit) products, like self-driving cars, cancer medicines, drug test strips and cultured meat. Some of these products contribute to a more just, equal, sustainable world, while others don’t.

For working on societal issues like climate change, immigration patterns or a drug overdose crisis, lab settings are and should be unmistakably more open and porous. Complex, systemic challenges are impossible to capture between four lab walls, nor should we even try, as they arguably arose from isolated, closed, and disconnected socio-economic interactions. Value creation for these types of challenges therefore lies outside closed, competitive, measurable spaces: in forging new collaborations, open-sourcing methodologies, encouraging curious mindsets and diversifying social movements. Consequently, social lab outcomes are less measurable and concrete, ranging from reframing existing (socio-cultural) paradigms, to designing new procurement procedures and policies, to delivering new (digital and non-digital) public services. Try to ‘randomize-control-trial’ that!…(More)”.

Does E-government reduce corruption? Evidence from a heterogeneous panel data model


Paper by Devid Kumar Basyal et al: “The purpose of this paper is to revisit the relationship between E-government and corruption using global panel data from 176 countries covering the period from 2003 to 2014, considering other potential determinants, such as economic prosperity (gross domestic product per capita [GDPPC]), price stability (inflation), good governance (political stability and government effectiveness) and press freedom (civil liberties and political rights) indicators. Hence, the main rationale of this study is to reexamine the conventional wisdom as to the relationship between E-government and corruption using panel data independent of any preexisting notions. …

No statistical evidence was found for the idea that E-government has a positive impact on corruption reduction, following a rigorous test of the proposition. However, strong evidence was found for the positive impact of a country’s government effectiveness, political stability and economic status. There also appears to be some evidence for the effect of GDPPC and civil liberties. There is no evidence that inflation and political rights have any corruption-reducing effect…
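The paper’s heterogeneous panel estimators go well beyond a short snippet, but the basic fixed-effects (“within”) regression underlying such panel analyses can be sketched on synthetic data. Everything below is invented for illustration; the true e-government coefficient is set to zero, so the estimator should recover a near-zero effect while picking up the GDP effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n_countries, n_years = 176, 12  # mirrors the paper's 176 countries, 2003-2014

# Synthetic panel: a corruption outcome driven by log GDP per capita and a
# country fixed effect; e-government's true coefficient is zero by design.
alpha = rng.normal(size=n_countries)                    # country effects
egov = rng.uniform(0, 1, size=(n_countries, n_years))   # e-government index
gdppc = rng.normal(9, 1, size=(n_countries, n_years))   # log GDP per capita
y = (alpha[:, None] + 0.0 * egov + 0.3 * gdppc
     + rng.normal(0, 0.5, size=(n_countries, n_years)))

def within(x: np.ndarray) -> np.ndarray:
    """Demean each country's series: the fixed-effects 'within' transform."""
    return x - x.mean(axis=1, keepdims=True)

X = np.column_stack([within(egov).ravel(), within(gdppc).ravel()])
beta, *_ = np.linalg.lstsq(X, within(y).ravel(), rcond=None)
print(beta)  # [estimated e-government effect, estimated GDP effect]
```

Demeaning each country’s series removes the country fixed effects before ordinary least squares, which is how panel studies like this one separate within-country dynamics from fixed cross-country differences.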

The findings of the study demonstrate that E-government is less significant for reducing corruption compared to other factors. Hence, policymakers should further focus on other potential areas such as socio-economic factors, good governance, culture and transparency to combat corruption in addition to improving digital government…(More)”.

‘Data is a fingerprint’: why you aren’t as anonymous as you think online


Olivia Solon at The Guardian: “In August 2016, the Australian government released an “anonymised” data set comprising the medical billing records, including every prescription and surgery, of 2.9 million people.

Names and other identifying features were removed from the records in an effort to protect individuals’ privacy, but a research team from the University of Melbourne soon discovered that it was simple to re-identify people, and learn about their entire medical history without their consent, by comparing the dataset to other publicly available information, such as reports of celebrities having babies or athletes having surgeries.

The government pulled the data from its website, but not before it had been downloaded 1,500 times.

This privacy nightmare is one of many examples of seemingly innocuous, “de-identified” pieces of information being reverse-engineered to expose people’s identities. And it’s only getting worse as people spend more of their lives online, sprinkling digital breadcrumbs that can be traced back to them to violate their privacy in ways they never expected.

Nameless New York taxi logs were compared with paparazzi shots at locations around the city to reveal that Bradley Cooper and Jessica Alba were bad tippers. In 2017, German researchers were able to identify people based on their “anonymous” web browsing patterns. This week, University College London researchers showed how they could identify an individual Twitter user based on the metadata associated with their tweets, while the fitness tracking app Polar revealed the homes, and in some cases the names, of soldiers and spies.

“It’s convenient to pretend it’s hard to re-identify people, but it’s easy. The kinds of things we did are the kinds of things that any first-year data science student could do,” said Vanessa Teague, one of the University of Melbourne researchers to reveal the flaws in the open health data.

One of the earliest examples of this type of privacy violation occurred in 1996 when the Massachusetts Group Insurance Commission released “anonymised” data showing the hospital visits of state employees. As with the Australian data, the state removed obvious identifiers like name, address and social security number. Then the governor, William Weld, assured the public that patients’ privacy was protected….(More)”.
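The Weld re-identification can be sketched as a simple join: match the “anonymised” records against a public dataset, such as a voter roll, on quasi-identifiers like ZIP code, birth date, and sex. A minimal illustration in Python (every record and attribute value below is invented for the example; only Weld’s name comes from the story):

```python
import pandas as pd

# Hypothetical "de-identified" hospital records: direct identifiers removed,
# but quasi-identifiers (ZIP code, birth date, sex) remain.
medical = pd.DataFrame({
    "zip": ["02138", "02139", "02138"],
    "birth_date": ["1945-07-31", "1962-01-15", "1990-03-02"],
    "sex": ["M", "M", "F"],
    "diagnosis": ["hypertension", "diabetes", "asthma"],
})

# Hypothetical public voter roll with the same quasi-identifiers plus names.
voters = pd.DataFrame({
    "name": ["William Weld", "John Doe"],
    "zip": ["02138", "02139"],
    "birth_date": ["1945-07-31", "1962-01-15"],
    "sex": ["M", "M"],
})

# The "attack" is just an inner join on the quasi-identifiers.
reidentified = medical.merge(voters, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```

The combination of ZIP code, birth date, and sex is unique for a large share of the population, which is why removing names alone does little to protect privacy.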

The Diversity Dashboard


Engaging Local Government Leaders: “The Diversity Dashboard is a crowd-funded data collection effort managed by ELGL and hosted on the OpenGov platform. The data collection includes the self-reported gender, race, age, and veteran status of Chief Administrative Officers and Assistant Chief Administrative Officers in local governments of all sizes and forms.

This link includes background information about the Diversity Dashboard, and access to the “Stories” module where we highlight some key findings.

From there, you can drill down into the data, looking at pre-formatted reports and creating your own reports using the submitted data.

The more local government leaders who take the survey, the bigger the dataset, the better our understanding of what the local government leadership landscape looks like. If your local government hasn’t yet completed the survey, please take the survey!…(More)”.

Suspect Citizens: What 20 Million Traffic Stops Tell Us About Policing and Race


Book by Frank R. Baumgartner, Derek A. Epp, and Kelsey Shoub: “Suspect Citizens offers the most comprehensive look to date at the most common form of police-citizen interactions, the routine traffic stop. Throughout the war on crime, police agencies have used traffic stops to search drivers suspected of carrying contraband.

From the beginning, police agencies made it clear that very large numbers of police stops would have to occur before an officer might interdict a significant drug shipment. Unstated in that calculation was that many Americans would be subjected to police investigations so that a small number of high-level offenders might be found. The key element in this strategy, which kept it hidden from widespread public scrutiny, was that middle-class white Americans were largely exempt from its consequences.

Tracking these police practices down to the officer level, Suspect Citizens documents the extreme rarity of drug busts and reveals sustained and troubling disparities in how racial groups are treated….

  • Offers an empirically rigorous examination of who the police interact with and how, analyzing a database of 20 million traffic stops collected over more than a decade
  • Assesses both the efficacy and costs of war on crime policies and discusses implications for American democracy
  • Suggests practical policy reforms police administrators can implement today to reduce disparities, improve police-citizen relations, and help fight crime…(More)”
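The book’s statistical analyses are far richer, but the core disparity measures (search rates by driver race, and “hit rates” of contraband found per search) reduce to simple group-by aggregations. A toy sketch with invented records:

```python
import pandas as pd

# Invented toy records; real datasets like the book's contain millions of stops.
stops = pd.DataFrame({
    "driver_race": ["White", "Black", "White", "Black", "Black", "White"],
    "searched":    [False,   True,    False,   False,   True,    False],
    "contraband":  [False,   False,   False,   False,   True,    False],
})

# Search rate by group: how often a stop leads to a search.
summary = stops.groupby("driver_race").agg(
    stops=("searched", "size"),
    search_rate=("searched", "mean"),
)

# "Hit rate" by group: contraband found per search. If one group is searched
# more often but with a lower hit rate, searches of that group are being
# triggered on weaker evidence.
hit_rate = stops[stops["searched"]].groupby("driver_race")["contraband"].mean()

print(summary)
print(hit_rate)
```

Comparing hit rates across groups is the standard “outcome test” for disparate treatment in stop-and-search data.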

A model to help tech companies make responsible technology a reality


Sam Brown at Doteveryone: “…adopting a Responsible Technology approach isn’t straightforward. There’s currently no roadmap, or even any common language, about how to embed responsible technology practices in practical and tangible ways.

That’s why Doteveryone has spent the last year researching the issues organisations face and we’re now developing a model that will help organisations do just that.

The 3C model helps to guide organisations on how to assess the level of responsibility of their technology products or services as they develop them.

It’s not an ethical bible which dictates right from wrong, but a framework which gives teams space and parameters to foresee the potential impacts their technologies could have and to consider how to handle them.

Our 3C Model of Responsible Technology considers:

  1. the Context of the wider world a technology product or service exists within
  2. the potential ways technology can have unintended Consequences
  3. the different Contribution people make to a technology — how value is given and received

We are developing a number of assessment tools which product teams can work through to help them examine and evaluate each of these areas in real time during the development cycle. The form of the assessments range from checklists to step-by-step information mapping to team board games….(More)”.

How Charities Are Using Artificial Intelligence to Boost Impact


Nicole Wallace at the Chronicle of Philanthropy: “The chaos and confusion of conflict often separate family members fleeing for safety. The nonprofit Refunite uses advanced technology to help loved ones reconnect, sometimes across continents and after years of separation.

Refugees register with the service by providing basic information — their name, age, birthplace, clan and subclan, and so forth — along with similar facts about the people they’re trying to find. Powerful algorithms search for possible matches among the more than 1.1 million individuals in the Refunite system. The analytics are further refined using the more than 2,000 searches that the refugees themselves do daily.

The goal: find loved ones or those connected to them who might help in the hunt. Since Refunite introduced the first version of the system in 2010, it has helped more than 40,000 people reconnect.
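Refunite’s actual matching system is not public; one plausible sketch of the core idea, scoring candidate records by weighted fuzzy similarity across registration fields, might look like this (all names, fields, and weights are invented for illustration):

```python
from difflib import SequenceMatcher

def field_similarity(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1], tolerant of spelling variants."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(query: dict, record: dict, weights: dict) -> float:
    """Weighted fuzzy similarity across registration fields."""
    total = sum(weights.values())
    return sum(
        w * field_similarity(query.get(f, ""), record.get(f, ""))
        for f, w in weights.items()
    ) / total

# Invented example: a search query and two candidate registrations.
query = {"name": "Amina Hassan", "birthplace": "Mogadishu", "clan": "Hawiye"}
registry = [
    {"name": "Aamina Hasan", "birthplace": "Mogadishu", "clan": "Hawiye"},
    {"name": "John Smith", "birthplace": "Nairobi", "clan": ""},
]
weights = {"name": 0.5, "birthplace": 0.3, "clan": 0.2}

ranked = sorted(registry, key=lambda r: match_score(query, r, weights),
                reverse=True)
print(ranked[0]["name"])  # the closest candidate surfaces first
```

Fuzzy field-level scoring matters here because names transliterated from other scripts rarely match exactly, and, as the passage notes, which fields should count as “family” varies by culture, something adjustable via the weights.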

One factor complicating the work: Cultures define family lineage differently. Refunite co-founder Christopher Mikkelsen confronted this problem when he asked a boy in a refugee camp if he knew where his mother was. “He asked me, ‘Well, what mother do you mean?’ ” Mikkelsen remembers. “And I went, ‘Uh-huh, this is going to be challenging.’ ”

Fortunately, artificial intelligence is well suited to learn and recognize different family patterns. But the technology struggles with some simple things like distinguishing the image of a chicken from that of a car. Mikkelsen believes refugees in camps could offset this weakness by tagging photographs — “car” or “not car” — to help train algorithms. Such work could earn them badly needed cash: The group hopes to set up a system that pays refugees for doing such work.

“To an American, earning $4 a day just isn’t viable as a living,” Mikkelsen says. “But to the global poor, getting an access point to earning this is revolutionizing.”

Another group, Wild Me, a nonprofit created by scientists and technologists, has built an open-source software platform that combines artificial intelligence and image recognition to identify and track individual animals. Using the system, scientists can better estimate the number of endangered animals and follow them over large expanses without using invasive techniques….

To fight sex trafficking, police officers often go undercover and interact with people trying to buy sex online. Sadly, demand is high, and there are never enough officers.

Enter Seattle Against Slavery. The nonprofit’s tech-savvy volunteers created chatbots designed to disrupt sex trafficking significantly. Using input from trafficking survivors and law-enforcement agencies, the bots can conduct simultaneous conversations with hundreds of people, engaging them in multiple, drawn-out conversations, and arranging rendezvous that don’t materialize. The group hopes to frustrate buyers so much that they give up their hunt for sex online….

A Philadelphia charity is using machine learning to adapt its services to clients’ needs.

Benefits Data Trust helps people enroll for government-assistance programs like food stamps and Medicaid. Since 2005, the group has helped more than 650,000 people access $7 billion in aid.

The nonprofit has data-sharing agreements with jurisdictions to access more than 40 lists of people who likely qualify for government benefits but do not receive them. The charity contacts those who might be eligible and encourages them to call the Benefits Data Trust for help applying….(More)”.