How the Math Men Overthrew the Mad Men


In the New Yorker: “Once, Mad Men ruled advertising. They’ve now been eclipsed by Math Men—the engineers and data scientists whose province is machines, algorithms, pureed data, and artificial intelligence. Yet Math Men are beleaguered, as Mark Zuckerberg demonstrated when he humbled himself before Congress, in April. Math Men’s adoration of data—coupled with their truculence and an arrogant conviction that their ‘science’ is nearly flawless—has aroused government anger, much as Microsoft did two decades ago.

The power of Math Men is awesome. Google and Facebook each have a market value exceeding the combined value of the six largest advertising and marketing holding companies. Together, they claim six out of every ten dollars spent on digital advertising, and nine out of ten new digital ad dollars. They have become ever more dominant in a global advertising and marketing business estimated at up to two trillion dollars annually. Facebook alone generates more ad dollars than all of America’s newspapers, and Google has twice the ad revenues of Facebook.

In the advertising world, Big Data is the Holy Grail, because it enables marketers to target messages to individuals rather than general groups, creating what’s called addressable advertising. And only the digital giants possess state-of-the-art Big Data. “The game is no longer about sending you a mail order catalogue or even about targeting online advertising,” Shoshana Zuboff, a professor of business administration at the Harvard Business School, wrote on faz.net, in 2016. “The game is selling access to the real-time flow of your daily life—your reality—in order to directly influence and modify your behavior for profit.” Success at this “game” flows to those with the “ability to predict the future—specifically the future of behavior,” Zuboff writes. She dubs this “surveillance capitalism.”

However, to thrash just Facebook and Google is to miss the larger truth: everyone in advertising strives to eliminate risk by perfecting targeting data. Protecting privacy is not foremost among the concerns of marketers; protecting and expanding their business is. The business model adopted by ad agencies and their clients parallels Facebook and Google’s. Each aims to massage data to better identify potential customers. Each aims to influence consumer behavior. To appreciate how alike their aims are, sit in an agency or client marketing meeting and you will hear wails about Facebook and Google’s “walled garden,” their unwillingness to share data on their users. When Facebook or Google counter that they must protect “the privacy” of their users, advertisers cry foul: You’re using the data to target ads we paid for—why won’t you share it, so that we can use it in other ad campaigns?…(More)”

AI trust and AI fears: A media debate that could divide society


Article by Vyacheslav Polonski: “Unless you live under a rock, you probably have been inundated with recent news on machine learning and artificial intelligence (AI). With all the recent breakthroughs, it almost seems like AI can already predict the future. Police forces are using it to map when and where crime is likely to occur. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.

Of course, many decisions in our lives require a good forecast, and AI agents are almost always better at forecasting than their human counterparts. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don’t like relying on AI and prefer to trust human experts, even if these experts are wrong.

If we want AI to really benefit people, we need to find a way to get people to trust it. To do that, we need to understand why people are so reluctant to trust AI in the first place….

Many people are also simply unfamiliar with the many instances of AI working well, because that work often happens in the background. Instead, they are acutely aware of instances where AI goes terribly wrong:

These unfortunate examples have received a disproportionate amount of media attention, emphasising the message that humans cannot always rely on technology. In the end, it all goes back to the simple truth that machine learning is not foolproof, in part because the humans who design it aren’t….

Fortunately we already have some ideas about how to improve trust in AI — there’s light at the end of the tunnel.

  1. Experience: One solution may be to provide more hands-on experiences with automation apps and other AI applications in everyday situations (like this robot that can get you a beer from the fridge). Thus, instead of presenting Sony’s new robot dog Aibo as an exclusive product for the upper class, we’d recommend making these kinds of innovations more accessible to the masses. Simply having previous experience with AI can significantly improve people’s attitudes towards the technology, as we found in our experimental study. And this is especially important for the general public, which may not have a very sophisticated understanding of the technology. Similar evidence also suggests that the more you use other technologies, such as the Internet, the more you trust them.
  2. Insight: Another solution may be to open the “black box” of machine learning algorithms and be slightly more transparent about how they work. Companies such as Google, Airbnb, and Twitter already release transparency reports on a regular basis. These reports provide information about government requests and surveillance disclosures. A similar practice for AI systems could help people gain a better understanding of how algorithmic decisions are made. Providing people with a top-level understanding of machine learning systems could therefore go a long way towards alleviating algorithmic aversion.
  3. Control: Lastly, creating more of a collaborative decision-making process will help build trust and allow the AI to learn from human experience. In our work at Avantgarde Analytics, we have also found that involving people more in the AI decision-making process could improve trust and transparency. In a similar vein, a group of researchers at the University of Pennsylvania recently found that giving people control over algorithms can help create more trust in AI predictions. Volunteers in their study who were given the freedom to slightly modify an algorithm felt more satisfied with it, were more likely to believe it was superior, and were more likely to use it in the future.

These guidelines (experience, insight, and control) could help make AI systems more transparent and comprehensible to the individuals affected by their decisions….(More)”.

On Dimensions of Citizenship


Introduction by Niall Atkinson, Ann Lui, and Mimi Zeiger to a Special Exhibit and dedicated set of Essays: “We begin by defining citizenship as a cluster of rights, responsibilities, and attachments, and by positing their link to the built environment. Of course architectural examples of this affiliation—formal articulations of inclusion and exclusion—can seem limited and rote. The US-Mexico border wall (“The Wall,” to use common parlance) dominates the cultural imagination. As an architecture of estrangement, especially when expressed as monolithic prototypes staked in the San Diego-Tijuana landscape, the border wall privileges the rhetorical security of nationhood above all other definitions of citizenship—over the individuals, ecologies, economies, and communities in the region. And yet, as political theorist Wendy Brown points out, The Wall, like its many counterparts globally, is inherently fraught as both a physical infrastructure and a nationalist myth, ultimately racked by its own contradictions and paradoxes.

Calling border walls across the world “an ad hoc global landscape of flows and barriers,” Brown writes of the paradoxes that riddle any effort to distinguish the nation as a singular, cohesive form: “[O]ne irony of late modern walling is that a structure taken to mark and enforce an inside/outside distinction—a boundary between ‘us’ and ‘them’ and between friend and enemy—appears precisely the opposite when grasped as part of a complex of eroding lines between the police and the military, subject and patria, vigilante and state, law and lawlessness.”1 While 2018 is a moment when ideologies are most vociferously cast in binary rhetoric, the lived experience of citizenship today is rhizomic, overlapping, and distributed. A person may feel rights and responsibilities toward a neighborhood or a voting district, remain part of an immigrant diaspora even after moving away from their home country, or find affiliation on an online platform. In 2017, Blizzard Entertainment, the maker of World of Warcraft, reported a user community of 46 million people across their international server network. Thus, today it is increasingly possible to simultaneously occupy multiple spaces of citizenship independent from the delineation of a formal boundary.

Conflict often makes visible emergent spaces of citizenship, as highlighted by recent acts both legislative and grassroots. Gendered bathrooms act as renewed sites of civil rights debate. Airports illustrate the thresholds of national control enacted by the recent Muslim bans. Such clashes uncover old scar tissue, violent histories and geographies of spaces. The advance of the Keystone XL pipeline across South Dakota, for example, brought the fight for indigenous sovereignty to the fore.

If citizenship itself designates a kind of border and the networks that traverse and ultimately elude such borders, then what kind of architecture might Dimensions of Citizenship offer in lieu of The Wall? What designed object, building, or space might speak to the heart of what and how it means to belong today? The participants in the United States Pavilion offer several of the clear and vital alternatives deemed so necessary by Samuel R. Delany: The Cobblestone. The Space Station. The Watershed.

Dimensions of Citizenship argues that citizenship is indissociable from the built environment, which is exactly why that relationship can be the source for generating or supporting new forms of belonging. These new forms may be more mutable and ephemeral, but no less meaningful and even, perhaps, ultimately more equitable. Through commissioned projects, and through film, video artworks, and responsive texts, Dimensions of Citizenship exhibits the ways that architects, landscape architects, designers, artists, and writers explore the changing form of citizenship: the different dimensions it can assume (legal, social, emotional) and the different dimensions (both actual and virtual) in which citizenship takes place. The works are valuably enigmatic, wide-ranging, even elusive in their interpretations, which is what contemporary conditions seem to demand. More often than not, the spaces of citizenship under investigation here are marked by histories of inequality and the violence imposed on people, non-human actors, and ecologies. Our exhibition features spaces and individuals that aim to manifest the democratic ideals of inclusion against the grain of broader systems: new forms of “sharing economy” platforms, the legacies of the Underground Railroad, tenuous cross-national alliances at the border region, or the seemingly Sisyphean task of buttressing coastline topologies against the rising tides….(More)”.

Inclusive Innovation in Biohacker Spaces: The Role of Systems and Networks


Paper by Jeremy de Beer and Vipal Jain in Technology Innovation Management Review: “The biohacking movement is changing who can innovate in biotechnology. Driven by principles of inclusivity and open science, the biohacking movement encourages sharing and transparency of data, ideas, and resources. As a result, innovation is now happening outside of traditional research labs, in unconventional spaces: do-it-yourself (DIY) biology labs known as “biohacker spaces”. Modelled on “maker spaces” (which contain the fabrication, metal/woodworking, additive manufacturing/3D printing, digitization, and related tools that “makers” use to tinker with hardware and software), biohacker spaces are attracting a growing number of entrepreneurs, students, scientists, and members of the public.

A biohacker space is a space where people with an interest in biotechnology gather to tinker with biological materials. These spaces, such as Genspace in New York, Biotown in Ottawa, and La Paillasse in Paris, exist outside of traditional academic and research labs with the aim of democratizing and advancing science by providing shared access to tools and resources (Scheifele & Burkett, 2016).

Biohacker spaces hold great potential for promoting innovation. Numerous innovative projects have emerged from these spaces. For example, biohackers have developed cheaper tools and equipment (Crook, 2011; see also Bancroft, 2016). They are also working to develop low-cost medicines for conditions such as diabetes (Ossolo, 2015). There is a general, often unspoken assumption that the openness of biohacker spaces facilitates greater participation in biotechnology research, and therefore, more inclusive innovation. In this article, we explore that assumption using the inclusive innovation framework developed by Schillo and Robinson (2017).

Inclusive innovation requires that opportunities for participation are broadly available to all and that the benefits of innovation are broadly shared by all (CSLS, 2016). In Schillo and Robinson’s framework, there are four dimensions along which innovation may be inclusive:

  1. The people involved in innovation (who)
  2. The type of innovation activities (what)
  3. The range of outcomes to be captured (why)
  4. The governance mechanism of innovation (how)…(More)”.

Open data work: understanding open data usage from a practice lens


Paper by Emma Ruijer in the International Review of Administrative Sciences: “During recent years, the amount of data released on platforms by public administrations around the world has exploded. Open government data platforms are aimed at enhancing transparency and participation. Even though the promise of these platforms is high, their full potential has not yet been reached. Scholars have identified technical and quality barriers to open data usage. Although useful, these accounts fail to acknowledge that the meaning of open data also depends on the context and the people involved. In this study we analyze open data usage from a practice lens – as a social construction that emerges over time in interaction between governments and users in a specific context – to enhance our understanding of the role of context and agency in the development of open data platforms. This study is based on innovative action-based research in which civil servants and citizens’ initiatives collaborate to find solutions for public problems using an open data platform. It provides an insider perspective on Open Data Work. The findings show that the absence of a shared cognitive framework for understanding open data and the lack of high-quality datasets can prevent processes of collaborative learning. Our contextual approach stresses the need for open data practices that work on the basis of rich interactions with users rather than government-centric implementations….(More)”.

The Wisdom of the Network: How Adaptive Networks Promote Collective Intelligence


Paper by Alejandro Noriega-Campero, Abdullah Almaatouq, Peter Krafft, Abdulrahman Alotaibi, Mehdi Moussaid, and Alex Pentland: “Social networks continuously change as people create new ties and break existing ones. It is widely noted that our social embedding exerts a strong influence on what information we receive, and how we form beliefs and make decisions. However, most studies overlook the dynamic nature of social networks, and its role in fostering adaptive collective intelligence. It remains unknown (1) how network structures adapt to the performances of individuals, and (2) whether this adaptation promotes the accuracy of individual and collective decisions.

Here, we answer these questions through a series of behavioral experiments and simulations. Our results reveal that groups of people embedded in dynamic social networks can adapt to biased and non-stationary information environments. As a result, individual and collective accuracy is substantially improved over static networks and unconnected groups. Moreover, we show that groups in dynamic networks far outperform their best-performing member, and that even the best member’s judgment substantially benefits from group engagement. Thereby, our findings substantiate the role of dynamic social networks as adaptive mechanisms for refining individual and collective judgments….(More)”.
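The mechanism the abstract describes, in which individuals repeatedly revise their estimates in light of their neighbors' and, in dynamic networks, rewire toward better performers, can be caricatured in a few lines of code. The simulation below is a hedged sketch of that idea, not the authors' actual experimental design: every name and parameter (`n_agents`, the 50/50 averaging weight, rewiring toward the current top performers) is our own illustrative assumption, and it presumes agents receive performance feedback between rounds, as participants in sequential estimation tasks typically do.

```python
import random
import statistics


def simulate(n_agents=30, n_rounds=10, n_neighbors=3, adaptive=True, seed=42):
    """Toy contrast between a static and an adaptive (dynamic) network.

    Each agent holds a noisy, biased estimate of a hidden quantity (truth = 0.0)
    and repeatedly averages its own estimate with its neighbors'. In the
    adaptive condition, agents rewire each round toward the peers whose current
    estimates are most accurate, which requires performance feedback.
    Returns the absolute error of the group's final mean estimate.
    """
    rng = random.Random(seed)
    truth = 0.0
    # Biased private signals: the whole group starts off systematically wrong.
    estimates = [truth + rng.gauss(1.0, 2.0) for _ in range(n_agents)]
    # Static random neighborhoods, fixed unless the network is adaptive.
    neighbors = [rng.sample([j for j in range(n_agents) if j != i], n_neighbors)
                 for i in range(n_agents)]
    for _ in range(n_rounds):
        if adaptive:
            # Rewire: every agent follows the currently most accurate peers.
            ranked = sorted(range(n_agents), key=lambda j: abs(estimates[j] - truth))
            neighbors = [[j for j in ranked if j != i][:n_neighbors]
                         for i in range(n_agents)]
        # Each agent averages its own estimate with its neighborhood's mean.
        estimates = [0.5 * estimates[i] +
                     0.5 * statistics.mean(estimates[j] for j in neighbors[i])
                     for i in range(n_agents)]
    return abs(statistics.mean(estimates) - truth)
```

In this toy setup the adaptive condition lets accurate minority voices attract followers, which is one intuition for why dynamic networks can correct a shared bias that static averaging merely preserves.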

Crowdbreaks: Tracking Health Trends using Public Social Media Data and Crowdsourcing


Paper by Martin Mueller and Marcel Salathé: “In the past decade, tracking health trends using social media data has shown great promise, due to a powerful combination of massive adoption of social media around the world, and increasingly potent hardware and software that enables us to work with these new big data streams.

At the same time, many challenging problems have been identified. First, there is often a mismatch between how rapidly online data can change and how rapidly algorithms are updated, which means that algorithms trained on past data have limited reusability, as their performance decreases over time. Second, much of the work focuses on specific issues during specific past periods, even though public health institutions need flexible tools to assess multiple evolving situations in real time. Third, most tools providing such capabilities are proprietary systems with little algorithmic or data transparency, and thus little buy-in from the global public health and research community.

Here, we introduce Crowdbreaks, an open platform which allows tracking of health trends by making use of continuous crowdsourced labelling of public social media content. The system is built in a way which automates the typical workflow from data collection and filtering to labelling and the training of machine learning classifiers, and it can therefore greatly accelerate the research process in the public health domain. This work introduces the technical aspects of the platform and explores its future use cases…(More)”.
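To make the workflow the abstract describes concrete, here is a deliberately minimal, dependency-free sketch of its stages: keyword filtering of collected posts, majority-vote aggregation of crowdsourced labels, and training of a toy bag-of-words classifier. This is our own illustration, not Crowdbreaks' actual implementation (the paper describes a continuously running platform); every function name and the 'relevant'/'irrelevant' label scheme are assumptions.

```python
from collections import Counter


def filter_posts(posts, keywords):
    """Keep only posts mentioning at least one tracked health keyword."""
    return [p for p in posts if any(k in p.lower() for k in keywords)]


def aggregate_labels(crowd_labels):
    """Collapse multiple crowdsourced judgments per post by majority vote.

    crowd_labels maps each post to the list of labels workers assigned it.
    """
    return {post: Counter(votes).most_common(1)[0][0]
            for post, votes in crowd_labels.items()}


def train_keyword_classifier(labeled):
    """Train a trivially simple bag-of-words scorer.

    A word's score is the number of 'relevant' posts containing it minus the
    number of 'irrelevant' ones; a new post is classified by summing the
    scores of its words. A real system would retrain a proper classifier
    continuously as fresh labels arrive.
    """
    scores = Counter()
    for post, label in labeled.items():
        for word in set(post.lower().split()):
            scores[word] += 1 if label == "relevant" else -1

    def classify(post):
        s = sum(scores[w] for w in set(post.lower().split()))
        return "relevant" if s > 0 else "irrelevant"

    return classify
```

Because the labels keep streaming in, the aggregation and training steps would in practice be rerun on a schedule, which is what distinguishes this kind of platform from a one-off classifier trained on a frozen historical dataset.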

How the Enlightenment Ends


Henry Kissinger in the Atlantic: “…Heretofore, the technological advance that most altered the course of modern history was the invention of the printing press in the 15th century, which allowed the search for empirical knowledge to supplant liturgical doctrine, and the Age of Reason to gradually supersede the Age of Religion. Individual insight and scientific knowledge replaced faith as the principal criterion of human consciousness. Information was stored and systematized in expanding libraries. The Age of Reason originated the thoughts and actions that shaped the contemporary world order.

But that order is now in upheaval amid a new, even more sweeping technological revolution whose consequences we have failed to fully reckon with, and whose culmination may be a world relying on machines powered by data and algorithms and ungoverned by ethical or philosophical norms.

The internet age in which we already live prefigures some of the questions and issues that AI will only make more acute. The Enlightenment sought to submit traditional verities to a liberated, analytic human reason. The internet’s purpose is to ratify knowledge through the accumulation and manipulation of ever expanding data. Human cognition loses its personal character. Individuals turn into data, and data become regnant.

Users of the internet emphasize retrieving and manipulating information over contextualizing or conceptualizing its meaning. They rarely interrogate history or philosophy; as a rule, they demand information relevant to their immediate practical needs. In the process, search-engine algorithms acquire the capacity to predict the preferences of individual clients, enabling the algorithms to personalize results and make them available to other parties for political or commercial purposes. Truth becomes relative. Information threatens to overwhelm wisdom.

Inundated via social media with the opinions of multitudes, users are diverted from introspection; in truth many technophiles use the internet to avoid the solitude they dread. All of these pressures weaken the fortitude required to develop and sustain convictions that can be implemented only by traveling a lonely road, which is the essence of creativity.

The impact of internet technology on politics is particularly pronounced. The ability to target micro-groups has broken up the previous consensus on priorities by permitting a focus on specialized purposes or grievances. Political leaders, overwhelmed by niche pressures, are deprived of time to think or reflect on context, contracting the space available for them to develop vision.
The digital world’s emphasis on speed inhibits reflection; its incentive empowers the radical over the thoughtful; its values are shaped by subgroup consensus, not by introspection. For all its achievements, it runs the risk of turning on itself as its impositions overwhelm its conveniences….

There are three areas of special concern:

First, that AI may achieve unintended results….

Second, that in achieving intended goals, AI may change human thought processes and human values….

Third, that AI may reach intended goals, but be unable to explain the rationale for its conclusions…..(More)”

Data Stewards: Data Leadership to Address the Challenges of the 21st Century



The GovLab at the NYU Tandon School of Engineering is pleased to announce the launch of its Data Stewards website — a new portal for connecting organizations across sectors that seek to promote responsible data leadership that can address the challenges of the 21st century — developed with generous support from the William and Flora Hewlett Foundation.

Increasingly, the private sector is collaborating with the public sector and researchers on ways to use private-sector data and analytical expertise for public good. With these new practices of data collaboration comes the need to reimagine roles and responsibilities to steer the process of using this data, and the insights it can generate, to address society’s biggest questions and challenges: Data Stewards.

Today, establishing and sustaining these new collaborative and accountable approaches requires significant and time-consuming effort and investment of resources for both data holders on the supply side, and institutions that represent the demand. By establishing Data Stewardship as a function — recognized within the private sector as a valued responsibility — the practice of Data Collaboratives can become more predictable, scalable, sustainable, and de-risked.

Together with BrightFront Group and Adapt, we are:

  • Exploring the needs and priorities of current private sector Data Stewards who act as change agents within their firms. Responsible for determining what, when, how and with whom to share private data for public good, these individuals are critical catalysts for ensuring insights are turned into action.
  • Identifying and connecting existing Data Stewards across sectors and regions to create an online and in-person community for exchanging knowledge and best practices.
  • Developing methodologies, tools and frameworks to use data more responsibly, systematically and efficiently to decrease the transaction cost, time and energy currently needed to establish Data Collaboratives.

To learn more about the Data Stewards Initiative, including new insights, ideas, tools and information about the Data Steward of the Year Award program, visit datastewards.net.

If you are a Data Steward, or would like to join a community of practice to learn from your peers, please contact [email protected] to join the Network of Data Stewards.”

Behavioral economics from nuts to ‘nudges’


Richard Thaler at ChicagoBoothReview: “…Behavioral economics has come a long way from my initial set of stories. Behavioral economists of the current generation are using all the modern tools of economics, from theory to big data to structural models to neuroscience, and they are applying those tools to most of the domains in which economists practice their craft. This is crucial to making descriptive economics more accurate. As the last section of this lecture highlighted, they are also influencing public-policy makers around the world, with those in the private sector not far behind. Sunstein and I did not invent nudging—we just gave it a word. People have been nudging as long as they have been trying to influence other people.

And much as we might wish it to be so, not all nudging is nudging for good. The same passive behavior we saw among Swedish savers applies to nearly everyone agreeing to software terms, or mortgage documents, or car payments, or employment contracts. We click “agree” without reading, and can find ourselves locked into a long-term contract that can only be terminated with considerable time and aggravation, or worse. Some firms are actively making use of behaviorally informed strategies to profit from the lack of scrutiny most shoppers apply. I call this kind of exploitive behavior “sludge.” It is the exact opposite of nudging for good. But whether the use of sludge is a long-term profit-maximizing strategy remains to be seen. Creating the reputation as a sludge-free supplier of goods and services may be a winning long-term strategy, just like delivering free bottles of water to victims of a hurricane.

Although not every application of behavioral economics will make the world a better place, I believe that giving economics a more human dimension and creating theories that apply to Humans, not just Econs, will make our discipline stronger, more useful, and undoubtedly more accurate….(More)”.