Stefaan Verhulst
Paper by Yoshio Kamijo et al.: “People to be born in the future have no direct influence on current affairs. Given the disconnect between people who are currently living and those who will inherit the planet left for them, individuals who are currently alive tend to be more oriented toward the present, posing a fundamental problem related to sustainability.
In this study, we propose a new framework for reconciling the disconnect between the present and the future whereby some individuals in the current generation serve as an imaginary…(More)”.
Colin Wood at StateScoop: “Civilian analysts and officers within the New York City Police Department are using a unique computational tool to spot patterns in crime data that is helping close cases.
A collection of machine-learning models, which the department calls Patternizr, was first deployed in December 2016, but the department only revealed the system last month when its developers published a research paper in the INFORMS Journal on Applied Analytics. Drawing on 10 years of historical data about burglary, robbery and grand larceny, the tool is the first of its kind to be used by law enforcement, the developers wrote.
The NYPD hired 100 civilian analysts in 2017 to use Patternizr. It’s also available to all officers through the department’s Domain Awareness System, a citywide network of sensors, databases, devices, software and other technical infrastructure. Researchers told StateScoop the tool has generated leads on several cases that would otherwise have stretched officers’ memories and traditional evidence-gathering abilities.
Connecting similar crimes into patterns is a crucial part of gathering evidence and eventually closing in on an arrest, said Evan Levine, the NYPD’s assistant commissioner of data analytics and one of Patternizr’s developers. Taken independently, each crime in a string of crimes may not yield enough evidence to identify a perpetrator, but the work of finding patterns is slow and each officer only has a limited amount of working knowledge surrounding an incident, he said.
“The goal here is to alleviate all that kind of busywork you might have to do to find hits on a pattern,” said Alex Chohlas-Wood, a Patternizr researcher and deputy director of the Computational Policy Lab at Stanford University.
The knowledge of individual officers is limited in scope by dint of the NYPD’s organizational structure. The department divides New York into 77 precincts, and a person who commits crimes across precincts, which often have arbitrary boundaries, is harder to catch because individual beat officers are typically focused on a single neighborhood.
There’s also a lot of data to sift through. In 2016 alone, about 13,000 burglaries, 15,000 robberies and 44,000 grand larcenies were reported across the five boroughs.
Levine said that last month, police used Patternizr to spot a pattern of three knife-point robberies around a Bronx subway station. It would have taken police much longer to connect those crimes manually, Levine said.
The software works by an analyst feeding it a “seed” case, which is then compared against a database of hundreds of thousands of…(More)”.
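The published description (an analyst submits a “seed” complaint, the model scores its similarity to large numbers of past complaints on attributes like location, date, and modus operandi, and returns a ranked list) can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in: the real system reportedly learns its similarity weights from historical, manually identified patterns, while this toy version hand-codes them.

```python
# Minimal sketch of seed-vs-candidate crime similarity ranking, in the
# spirit of the published Patternizr description. All features, decay
# scales, and weights here are invented for illustration; this is not
# the NYPD's actual model.
from dataclasses import dataclass
from math import exp, hypot

@dataclass
class Complaint:
    crime_type: str       # "burglary", "robbery", or "grand_larceny"
    x_km: float           # location easting, km
    y_km: float           # location northing, km
    day: int              # days since some epoch
    entry_method: str     # simplified modus-operandi token

def similarity(seed: Complaint, cand: Complaint) -> float:
    """Combine per-attribute similarities into a single score in [0, 1]."""
    if seed.crime_type != cand.crime_type:
        return 0.0  # models were reportedly trained per crime type
    dist = hypot(seed.x_km - cand.x_km, seed.y_km - cand.y_km)
    s_space = exp(-dist / 2.0)                       # spatial decay: assumed
    s_time = exp(-abs(seed.day - cand.day) / 30.0)   # temporal decay: assumed
    s_mo = 1.0 if seed.entry_method == cand.entry_method else 0.0
    return 0.4 * s_space + 0.3 * s_time + 0.3 * s_mo  # weights: assumed

def rank(seed: Complaint, db: list[Complaint]) -> list[tuple[float, Complaint]]:
    """Score every candidate against the seed and sort, best match first."""
    return sorted(((similarity(seed, c), c) for c in db),
                  key=lambda pair: pair[0], reverse=True)
```

An analyst would then review the top-ranked complaints by hand; as the article emphasizes, the model only surfaces candidates, and humans decide whether they constitute a pattern.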
Paper by Crystal G. Konny, Brendan K. Williams, and David M. Friedman: “The Bureau of Labor Statistics (BLS) has generally relied on its own sample surveys to collect the price and expenditure information necessary to produce the Consumer Price Index (CPI). The burgeoning availability of big data has created a proliferation of information that could lead to methodological improvements and cost savings in the CPI. The BLS has undertaken several pilot projects in an attempt to supplement and/or replace its traditional field collection of price data with alternative sources. In addition to cost reductions, these projects have demonstrated the potential to expand sample size, reduce respondent burden, obtain transaction prices more consistently, and improve price index estimation by incorporating real-time expenditure information—a foundational component of price index theory that has not been practical until now. In CPI, we use the term alternative data to refer to any data not collected through traditional field collection procedures by CPI staff, including…(More)”.
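For context on why real-time expenditure data is “foundational” here (a standard index-number result, not a formula from the paper): superlative indexes such as the Törnqvist weight each item’s price change by its average expenditure share across the base and current periods, so they cannot be computed until current-period spending is observed. Writing $p_{i,t}$ and $q_{i,t}$ for the price and quantity of item $i$ in period $t$:

```latex
% Tornqvist price index between periods 0 and 1: the weights use
% expenditure shares from BOTH periods, which is why real-time
% expenditure information matters.
\ln P_T = \sum_i \frac{s_{i,0} + s_{i,1}}{2}\,\ln\frac{p_{i,1}}{p_{i,0}},
\qquad
s_{i,t} = \frac{p_{i,t}\,q_{i,t}}{\sum_j p_{j,t}\,q_{j,t}}
```

Traditional field collection yields only prices, so the CPI has had to rely on lagged expenditure weights from separate surveys; transaction data that records prices and quantities together makes formulas of this superlative type feasible in close to real time.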
Paper by Erik Brynjolfsson, Avinash Collis, and Felix Eggers: “Gross domestic product (GDP) and derived metrics such as productivity have been central to our understanding of economic progress and well-being. In principle, changes in consumer surplus provide a superior, and more direct, … For example, the median user needed…(More)”.
Blog Post by Annette Zimmermann and Bendert Zevenbergen: “… In what follows, we outline seven ‘AI ethics traps’. In doing so, we hope to provide a resource for readers who want to understand and navigate the public debate on the ethics of AI better, who want to contribute to ongoing discussions in an informed and nuanced way, and who want to think critically and constructively about ethical considerations in science and technology more broadly. Of course, not everybody who contributes to the current debate on AI Ethics is guilty of endorsing any or all of these traps: the traps articulate extreme versions of a range of possible misconceptions, formulated in a deliberately strong way to highlight the ways in which one might prematurely dismiss ethical reasoning about AI as futile.
1. The reductionism trap:
“Doing the morally right thing is essentially the same as acting in a fair way. (or: transparent, or egalitarian, or <substitute any other value>). So ethics is the same as fairness (or transparency, or equality, etc.). If we’re being fair, then we’re being ethical.”
Even though the problem of algorithmic bias and its unfair impact on decision outcomes is an urgent problem, it does not exhaust the ethical problem space. As important as algorithmic fairness is, it is crucial to avoid reducing ethics to a fairness problem alone. Instead, it is important to pay attention to how the ethically valuable goal of optimizing for a specific value like fairness interacts with other important ethical goals. Such goals could include—amongst many others—the goal of creating transparent and explainable systems which are open to democratic oversight and contestation, the goal of improving the predictive accuracy of machine learning systems, the goal of avoiding paternalistic infringements of autonomy rights, or the goal of protecting the privacy interests of data subjects. Sometimes, these different values may conflict: we cannot always optimize for everything at once. This makes it all the more important to adopt a sufficiently rich, pluralistic view of the full range of relevant ethical values at stake—only then can one reflect critically on what kinds of ethical trade-offs one may have to confront.
2. The simplicity trap:
“In order to make ethics practical and action-guiding, we need to distill our moral framework into a user-friendly compliance checklist. After we’ve decided on a particular path of action, we’ll go through that checklist to make sure that we’re being ethical.”
Given the high visibility and urgency of ethical dilemmas arising in the context of AI, it is not surprising that there are more and more calls to develop actionable AI ethics checklists. For instance, a 2018 draft report by the European Commission’s High-Level Expert Group on Artificial Intelligence specifies a preliminary ‘assessment list’ for ‘trustworthy AI’. While the report plausibly acknowledges that such an assessment list must be context-sensitive and that it is not exhaustive, it nevertheless identifies a list of ten fixed ethical goals, including privacy and transparency. But can and should ethical values be articulated in a checklist in the first place? It is worth examining this underlying assumption critically. After all, a checklist implies a one-off review process: on that view, developers or policy-makers could determine whether a particular system is ethically defensible at a specific moment in time, and then move on without confronting any further ethical concerns after the checklist criteria have been satisfied once. But ethical reasoning cannot be a static one-off assessment: it requires an ongoing process of reflection, deliberation, and contestation. Simplicity is good—but the willingness to reconsider simple frameworks, when required, is better. Setting a fixed ethical agenda ahead of time risks obscuring new ethical problems that may arise at a later point in time, or ongoing ethical problems that become apparent to human decision-makers only later.
3. The relativism trap:
“We all disagree about what is morally valuable, so it’s pointless to imagine that there is a universal baseline against which we can evaluate moral choices. Nothing is objectively morally good: things can only be morally good relative to each person’s individual value framework.”
Public discourse on the ethics of AI frequently produces little more than an exchange of personal opinions or institutional positions. In light of pervasive moral disagreement, it is easy to conclude that ethical reasoning can never stand on firm ground: it always seems to be relative to a person’s views and context. But this does not mean that ethical reasoning about AI and its social and political implications is futile: some ethical arguments about AI may ultimately be more persuasive than others. While it may not always be possible to determine ‘the one right answer’, it is often possible to identify at least some paths of action that are clearly wrong, and some paths of action that are comparatively better (if not optimal all things considered). If that is the case, comparing the respective merits of ethical arguments can be action-guiding for developers and policy-makers, despite the presence of moral disagreement. Thus, it is possible and indeed constructive for AI ethics to welcome value pluralism, without collapsing into extreme value relativism.
4. The value alignment trap:
“If relativism is wrong (see #3), there must be one morally right answer. We need to find that right answer, and ensure that everyone in our organisation acts in alignment with that answer. If our ethical reasoning leads to moral disagreement, that means that we have failed.”…(More)”.
Book by Yakov Ben-Haim: “Innovations create both opportunities and dilemmas. They provide new and supposedly better opportunities, but — because of their newness — they are often more uncertain and potentially worse than existing options. Recent inventions and discoveries include new drugs, new energy sources, new foods, new manufacturing technologies, new toys and new pedagogical methods, new weapon systems, new home appliances and many other discoveries and inventions.
Is it better to use or not to use a new and promising but unfamiliar and hence uncertain innovation? That dilemma faces just about everybody. The paradigm of the innovation dilemma characterizes many situations, even when a new technology is not actually involved. The dilemma arises from new attitudes, like individual responsibility for the global environment, or new social conceptions, like global allegiance and self-identity transcending nation-states. These dilemmas have far-reaching implications for individuals, organizations, and society at large as they make decisions in the age of innovation. The uncritical belief in outcome-optimization — “more is better, so most is best” — pervades decision-making in all domains, but is often irresponsible when facing the uncertainties of innovation.
There is a great need for practical conceptual tools for understanding and managing the dilemmas of innovation. This book offers a new direction for a wide audience. It discusses examples from many fields, including e-reading, bipolar disorder and pregnancy, disruptive technology in…(More)”.
Robert Morrell at The Conversation: “Globalisation and new technology have changed the ways that knowledge is made, disseminated and consumed. At the push of a button, one can find articles or sources from all over the world. Yet the global knowledge economy is still marked by its history.
The former colonial nations of the nineteenth and twentieth centuries – the rich countries of Europe and North America which are collectively called the global North (normally considered to include the West and the first world; the North contains a quarter of the world’s population but controls 80% of income earned) – are still central in the knowledge economy. But the story is not one simply of Northern dominance. A process of making knowledge in the South is underway.
European colonisers encountered many sophisticated and complex knowledge systems among the colonised. These had their own intellectual workforces, their own environmental, geographical, historical and medical sciences. They also had their own means of developing knowledge. Sometimes the colonisers tried to obliterate these knowledges.
In other instances colonisers appropriated local knowledge, for instance in agriculture, fisheries and mining. Sometimes they recognised and even honoured other knowledge systems and intellectuals. This was the case among some of the British in India, and was the early form of “Orientalism”, the study of people and cultures from the East.
In the past few decades, there’s been more critique of global knowledge inequalities and the global North’s dominance. There have also been shifts in knowledge production patterns; some newer disciplines have stepped away from old patterns of inequality.
These issues are examined in a new book, Knowledge and Global Power: Making new sciences in the South (published by Wits University Press), which I co-authored with Fran Collyer, Raewyn Connell…(More)”.
We work with a set of 3,000 posts from online health forums on breast cancer, Crohn’s disease and various allergies. Each sentence in a post is manually labeled as “experience”, “fact” or “opinion”. Using this data, we train a support vector machine to perform the classification. The results are evaluated in a 10-fold cross-validation procedure.
Overall, we find that it is possible to predict the type of information contained in a forum post with…(More)
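As a rough sketch of the pipeline described above (not the authors’ code: the feature representation and the toy sentences are assumptions), the setup maps naturally onto scikit-learn:

```python
# Sketch of the described experiment: classify forum-post sentences as
# "experience", "fact", or "opinion" with an SVM, evaluated by 10-fold
# cross-validation. The corpus and features below are stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical labeled sentences; the study used ~3,000 manually
# labeled posts from breast cancer, Crohn's disease, and allergy forums.
sentences = [
    "After my second infusion the fatigue finally eased off.",
    "Infliximab is a TNF-alpha inhibitor.",
    "I think the new clinic is far better than the old one.",
]
labels = ["experience", "fact", "opinion"]

# Bag-of-words (uni- and bigram TF-IDF) features feeding a linear SVM.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())

# With a real corpus, the paper's evaluation is a single call:
# scores = cross_val_score(model, sentences, labels, cv=10)
# print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```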
Jeanette Beebe at Fast Company: “Every time you shuffle through a line at the pharmacy, every time you try to get comfortable in those awkward doctor’s office chairs, every time you scroll through the web while you’re put on hold with a question about your medical bill, take a second to think about the person ahead of you and behind you.
Chances are, at least one of you is being monitored by a third party like data analytics giant Optum, which is owned by UnitedHealth Group, Inc. Since 1993, it’s captured medical data—lab results, diagnoses, prescriptions, and more—from 150 million Americans. That’s almost half of the U.S. population.
“They’re the ones that are tapping the data. They’re in there. I can’t remove them from my own health insurance contracts. So I’m stuck. It’s just part of the system,” says Joel Winston, an attorney who specializes in privacy and data protection law.
Healthcare providers can legally sell their data to a now-dizzyingly vast spread of companies, who can use it to make decisions, from designing new drugs to pricing your insurance rates to developing highly targeted advertising.
It’s written in the fine print: You don’t own your medical records. Well, except if you live in New Hampshire. It’s the only state that mandates its residents own their medical data. In 21 states, the law explicitly says that healthcare providers own these records, not patients. In the rest of the country, it’s up in the air.
Every time you visit a doctor or a pharmacy, your record grows. The details can be colorful: Using sources like Milliman’s IntelliScript and ExamOne’s ScriptCheck, a fuller picture of you emerges. Your interactions with the healthcare system, your medical payments, your prescription drug purchase history. And the market for the data is surging.
Its buyers and sharers—pharma giants, insurers, credit reporting agencies, and other data-hungry companies or “fourth parties” (like Facebook)—say that these massive health data sets can improve healthcare delivery and fuel advances in so-called “precision medicine.”
Still, this glut of health data has raised alarms among privacy advocates, who say many consumers are in the dark about how much of their health-related info is being gathered and mined….
Gardner argued that traditional health data systems—electronic health records and electronic medical records—are less than ideal, given the “rigidity of the vendors and the products” and the way our data is owned and secured. Don’t count on them being around much longer, she said, “beyond the next few years.”
The future, Gardner suggested, is a system that runs on blockchain, which she defined for the committee as “basically a secure, visible, irrefutable ledger of transactions and ownership.” Still, a recent analysis of over 150 white papers revealed most healthcare blockchain projects “fall somewhere between half-baked and overly optimistic.”
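Gardner’s one-line definition can be made concrete with a toy example (illustrative only: no consensus protocol, signatures, or access control, and not modeled on any particular healthcare system). The core idea is a tamper-evident chain: each entry commits to the hash of the previous entry, so editing any past record invalidates every later link.

```python
# Toy tamper-evident ledger illustrating the "irrefutable ledger"
# definition: each block stores the hash of the previous block, so
# rewriting history breaks the chain. Illustrative only.
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 over the block's canonical JSON form."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list[dict], record: dict) -> None:
    """Add a record, linking it to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "record": record})

def verify(chain: list[dict]) -> bool:
    """Recompute every link; an edited block breaks all later links."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

ledger: list[dict] = []
append_block(ledger, {"event": "checkup", "patient": "anon-001"})
append_block(ledger, {"event": "lab_result", "patient": "anon-001"})
assert verify(ledger)

ledger[0]["record"]["event"] = "edited"   # tampering...
assert not verify(ledger)                 # ...is detected
```

Everything a real deployment must add on top of this primitive (who may append, who may read, how independent copies reach agreement) is precisely where the surveyed white papers tend to be thin.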
As larger companies like IBM sign on, the technology may be edging closer to reality. Last year, Proof Work outlined a HIPAA-compliant system that manages patients’ medical histories over time, from acute care in the hospital to preventative checkups. The goal is to give these records to patients on their…(More)”.
Darshana Narayanan at The Economist: “Current forms of democracy exclude most people from political decision-making. We elect representatives and participate in the occasional referendum, but we mainly remain on the outside. The result is that a handful of people in power dictate what ought to be collective decisions. What we have now is hardly a democracy, or at least, not a democracy that we should settle for.
To design a truer form of democracy—that is, fair representation and an outcome determined by a plurality—we might draw some lessons from the collective…
Of course fish are not the same as humans. But that study does suggest a way of thinking about decision-making. Instead of limiting influence to experts and strongly motivated interest groups, we should actively work to broaden participation to ensure that we include people lacking strong preferences or prior knowledge of an issue. In other words, we need to go against the ingrained thinking that non-experts should be excluded from decision-making. Inclusivity might just improve our chances of reaching a real, democratic consensus.
How can our political institutions facilitate this? In my work over the past several years I have tried to apply findings from behavioural science to institutions and to code to create better systems of governance. In the course of my work, I have found some promising experiments taking place around the world that harness new digital tools. They point the way to how democracy can be practiced in the 21st century…(More)”.