Chapter by Roberto da Mota Ueti, Daniela Fernandez Espinosa, Laura Rafferty, Patrick C. K. Hung in Big Data Applications and Use Cases: “Big Data is changing our world with masses of information stored in huge servers spread across the planet. This new technology is changing not only companies but governments as well. Mexico and Brazil, two of the most influential countries in Latin America, are entering a new era and, as a result, facing challenges in all aspects of public policy. Using Big Data, the Brazilian Government is trying to decrease spending and use public money better by grouping public information with stored information on citizens in public services. With new reforms in education, finances and telecommunications, the Mexican Government is taking on a bigger role in efforts to channel the country’s economic policy into an improvement of the quality of life of its inhabitants. It is known that technology is an important tool for developing countries, which are trying to make a difference in certain contexts such as reducing inequality or regulating the good usage of economic resources. The good use of Big Data, a new technology for managing large quantities of information, can be crucial for the Mexican Government to reach the goals set under Peña Nieto’s administration. This article focuses on how the Brazilian and Mexican Governments are managing the emerging technologies of Big Data and how they include them in social and industrial projects to enhance the growth of their economies. The article also discusses the benefits of these uses of Big Data and the possible problems related to security and privacy of information….(More)”
Nudging – Possibilities, Limitations and Applications in European Law and Economics
Book edited by Mathis, Klaus and Tor, Avishalom: “This anthology provides an in-depth analysis and discusses the issues surrounding nudging and its use in legislation, regulation, and policy making more generally. The 17 essays in this anthology provide startling insights into the multifaceted debate surrounding the use of nudges in European Law and Economics.
Nudging is a tool aimed at altering people’s behaviour in a predictable way without forbidding any option or significantly changing economic incentives. It can be used to help people make better decisions, influencing behaviour without coercion, since people can always opt out. Its use has sparked lively debates in academia as well as in the public sphere. This book explores who decides which behaviour is desired. It looks at whether or not the state has sufficient information for debiasing, and if there are clear-cut boundaries between paternalism, manipulation and indoctrination. The first part of this anthology discusses the foundations of nudging theory and the problems associated with it, as well as outlining possible solutions to the problems raised. The second part is devoted to the wide scope of applications of nudges, in areas ranging from contract law and tax law to health claim regulations, among others.
This volume is a result of the flourishing annual Law and Economics Conference held at the law faculty of the University of Lucerne. The conferences have been instrumental in establishing a strong and ever-growing Law and Economics movement in Europe, providing unique insights into the challenges faced by Law and Economics when applied in European legal traditions….(More)”
Post, Mine, Repeat: Social Media Data Mining Becomes Ordinary
“In this book, Helen Kennedy argues that as social media data mining becomes more and more ordinary, as we post, mine and repeat, new data relations emerge. These new data relations are characterised by a widespread desire for numbers and the troubling consequences of this desire, and also by the possibility of doing good with data and resisting data power, by new and old concerns, and by instability and contradiction. Drawing on action research with public sector organisations, interviews with commercial social insights companies and their clients, focus groups with social media users and other research, Kennedy provides a fascinating and detailed account of living with social media data mining inside the organisations that make up the fabric of everyday life….(More)”
Regulatory Transformations: An Introduction
Chapter by Bettina Lange and Fiona Haines in the book Regulatory Transformations: “Regulation is no longer the prerogative of either states or markets. Increasingly, citizens in association with businesses catalyse regulation, which marks the rise of a social sphere in regulation. Around the world, in San Francisco, Melbourne, Munich and Mexico City, citizens have sought to transform how and to what end economic transactions are conducted. For instance, ‘carrot mob’ initiatives use positive economic incentives, provided not by a state legal system but by a collective of civil society actors, in order to change business behaviour. In contrast to ‘negative’ consumer boycotts, ‘carrot mob’ events use ‘buycotts’. They harness competition between businesses as the lever for changing how and for what purpose business transactions are conducted. Through new social media, ‘carrot mobs’ mobilize groups of citizens to purchase goods at a particular time in a specific shop. The business that promises to spend the greatest percentage of its takings on, for instance, environmental improvements, such as switching to a supplier of renewable energy, will be selected for an organized shopping spree and will benefit financially from the extra income it receives from the ‘carrot mob’ event. ‘Carrot mob’ campaigns chime with other fundamental challenges to conventional economic activity, such as the shared use of consumer goods through citizens’ collective consumption, which questions traditional conceptions of private property….(More; Other Chapters)”
Is behavioural economics ready to save the world?
Book review by Trenton G Smith of Behavioral Economics and Public Health: “Modern medicine has long doled out helpful advice to ailing patients about not only drug treatments, but also diet, exercise, alcohol abuse, and many other lifestyle decisions. And for just as long, patients have been failing to follow doctors’ orders. Many of today’s most pressing public health problems would disappear if people would just make better choices.
Enter behavioural economics. A fairly recent offshoot of the dismal science, behavioural economics aims to take the coldly rational decision makers who normally populate economic theories, and instil in them a host of human foibles. Neoclassical (ie, conventional) economics, after all, is the study of optimising behaviour in the presence of material constraints—why not add constraints on cognitive capacity, or self-control, or susceptibility to the formation of bad habits? The hope is that by incorporating insights from other behavioural sciences (most notably cognitive psychology and neuroscience) while retaining the methodological rigour of neoclassical economics, behavioural economics will yield a more richly descriptive theory of human behaviour, and generate new and important insights to better inform public policy.
Policy makers have taken notice. In an era in which free-market rhetoric dominates the political landscape, the idea that small changes to public health policies might serve to nudge consumers towards healthier behaviours holds great appeal. Even though some (irrational) consumers might be better off, the argument goes, if certain unhealthy food products were banned (or worse, taxed), this approach would infringe on the rights of the many consumers who want to indulge occasionally, and fully understand the consequences. If governments could instead use evidence from consumer science to make food labels more effective, or to improve the way that healthy foods are presented in school cafeterias, more politically unpalatable interventions in the marketplace might not be needed. This idea, dubbed “libertarian paternalism” by Richard Thaler and Cass Sunstein, has been pursued with gusto in both the UK (David Cameron’s Government formed the Behavioural Insights Team—unofficially described as the Nudge Unit) and the USA (where Sunstein spent time in the Obama administration’s Office of Information and Regulatory Affairs).
Whatever public health practitioners might think about these developments—or indeed, of economics as a discipline—this turn of events has rather suddenly given scholars at the cutting edge of consumer science an influential voice in the regulatory process, and some of the best and brightest have stepped up to contribute. Behavioral Economics & Public Health (edited by Christina Roberto and Ichiro Kawachi) is the product of a 2014 Harvard University exploratory workshop on applying social science insights to public health. As might be expected in a volume that aims to bring together two such inherently multidisciplinary fields, the book’s 11 chapters offer an eclectic mix of perspectives. The editors begin with an excellent overview of the field of behavioural economics and its applications to public health, and an economic perspective can also be found in four of the other chapters: Justin White and William Dow write about intertemporal choice, Kristina Lewis and Jason Block review the use of incentives to promote health, Michael Sanders and Michael Hallsworth describe their experience working within the UK’s Behavioural Insights Team, and Frederick Zimmerman concludes with a thoughtful critique of the field of behavioural economics. The other contributions are largely from the perspectives of psychology and marketing: Dennis Runger and Wendy Wood discuss habit formation, Rebecca Ferrer and colleagues emphasise the importance of emotion in decision making, Brent McFerran discusses social norms in the context of obesity, Jason Riis and Rebecca Ratner explain why some public health communication strategies are more effective than others, and Zoe Chance and colleagues and Brian Wansink offer frameworks for designing environments (eg, in schools and workplaces) that are conducive to healthy choices.
This collection of essays holds many hidden gems, but the one that surprised me the most was the attention given (by Runger and Wood briefly, and Zimmerman extensively) to a dirty little secret that behavioural economists rarely mention: once it is acknowledged that sometimes-irrational consumers can be manipulated into making healthy choices, it does not require much of a leap to conclude that business interests can—and do—use the same methods to push back in the other direction. This conclusion leads Zimmerman to a discussion of power in the marketplace and in our collective political economy, and to a call to action on these larger structural issues in society that neoclassical theory has long neglected….(More; Book)
The New Power Politics: Networks and Transnational Security Governance
Book edited by Deborah Avant and Oliver Westerwinter: “Traditional analyses of global security cannot explain the degree to which there is “governance” of important security issues — from combatting piracy to curtailing nuclear proliferation to reducing the contributions of extractive industries to violence and conflict. They are even less able to explain why contemporary governance schemes involve the various actors and take the many forms they do.
Juxtaposing the insights of scholars writing about new modes of governance with the logic of network theory, The New Power Politics offers a framework for understanding contemporary security governance and its variation. The framework rests on a fresh view of power and how it works in global politics. Though power is integral to governance, it is something that emerges from, and depends on, relationships. Thus, power is dynamic; it is something that governors must continually cultivate with a wide range of consequential global players, and how a governor uses power in one situation can have consequences for her future relationships, and thus, future power.
Understanding this new power politics is crucial for explaining and shaping the future of global security politics. This stellar group of scholars analyzes both the networking strategies of would-be governors and their impacts on the effectiveness of governance and whether it reflects broad or narrow concerns on a wide range of contemporary governance issues….(More)”
Hail the maintainers
Andrew Russell & Lee Vinsel at AEON: “The trajectory of ‘innovation’ from core, valued practice to slogan of dystopian societies is not entirely surprising, at a certain level. There is a formulaic feel: a term gains popularity because it resonates with the zeitgeist, reaches buzzword status, then suffers from overexposure and cooptation. Right now, the formula has brought society to a question: after ‘innovation’ has been exposed as hucksterism, is there a better way to characterise relationships between society and technology?
There are three basic ways to answer that question. First, it is crucial to understand that technology is not innovation. Innovation is only a small piece of what happens with technology. This preoccupation with novelty is unfortunate because it fails to account for technologies in widespread use, and it obscures how many of the things around us are quite old. In his book, Shock of the Old (2007), the historian David Edgerton examines technology-in-use. He finds that common objects, like the electric fan and many parts of the automobile, have been virtually unchanged for a century or more. When we take this broader perspective, we can tell different stories with drastically different geographical, chronological, and sociological emphases. The stalest innovation stories focus on well-to-do white guys sitting in garages in a small region of California, but human beings in the Global South live with technologies too. Which ones? Where do they come from? How are they produced, used, repaired? Yes, novel objects preoccupy the privileged, and can generate huge profits. But the most remarkable tales of cunning, effort, and care that people direct toward technologies exist far beyond the same old anecdotes about invention and innovation.
Second, by dropping innovation, we can recognise the essential role of basic infrastructures. ‘Infrastructure’ is a most unglamorous term, the type of word that would have vanished from our lexicon long ago if it didn’t point to something of immense social importance. Remarkably, in 2015 ‘infrastructure’ came to the fore of conversations in many walks of American life. In the wake of a fatal Amtrak crash near Philadelphia, President Obama wrestled with Congress to pass an infrastructure bill that Republicans had been blocking, but finally approved in December 2015. ‘Infrastructure’ also became the focus of scholarly communities in history and anthropology, even appearing 78 times on the programme of the annual meeting of the American Anthropological Association. Artists, journalists, and even comedians joined the fray, most memorably with John Oliver’s hilarious sketch starring Edward Norton and Steve Buscemi in a trailer for an imaginary blockbuster on the dullest of subjects. By early 2016, the New York Review of Books brought the ‘earnest and passive word’ to the attention of its readers, with a depressing essay titled ‘A Country Breaking Down’.
Despite recurring fantasies about the end of work, the central fact of our industrial civilisation is labour, most of which falls far outside the realm of innovation
The best of these conversations about infrastructure move away from narrow technical matters to engage deeper moral implications. Infrastructure failures – train crashes, bridge failures, urban flooding, and so on – are manifestations of and allegories for America’s dysfunctional political system, its frayed social safety net, and its enduring fascination with flashy, shiny, trivial things. But, especially in some corners of the academic world, a focus on the material structures of everyday life can take a bizarre turn, as exemplified in work that grants ‘agency’ to material things or wraps commodity fetishism in the language of high cultural theory, slick marketing, and design. For example, Bloomsbury’s ‘Object Lessons’ series features biographies of and philosophical reflections on human-built things, like the golf ball. What a shame it would be if American society matured to the point where the shallowness of the innovation concept became clear, but the most prominent response was an equally superficial fascination with golf balls, refrigerators, and remote controls.
Third, focusing on infrastructure or on old, existing things rather than novel ones reminds us of the absolute centrality of the work that goes into keeping the entire world going…
We organised a conference to bring the work of the maintainers into clearer focus. More than 40 scholars answered a call for papers asking, ‘What is at stake if we move scholarship away from innovation and toward maintenance?’ Historians, social scientists, economists, business scholars, artists, and activists responded. They all want to talk about technology outside of innovation’s shadow.
One important topic of conversation is the danger of moving too triumphantly from innovation to maintenance. There is no point in keeping the practice of hero-worship that merely changes the cast of heroes without confronting some of the deeper problems underlying the innovation obsession. One of the most significant problems is the male-dominated culture of technology, manifest in recent embarrassments such as the flagrant misogyny in the ‘#GamerGate’ row a couple of years ago, as well as the persistent pay gap between men and women doing the same work.
There is an urgent need to reckon more squarely and honestly with our machines and ourselves. Ultimately, emphasising maintenance involves moving from buzzwords to values, and from means to ends. In formal economic terms, ‘innovation’ involves the diffusion of new things and practices. The term is completely agnostic about whether these things and practices are good. Crack cocaine, for example, was a highly innovative product in the 1980s, which involved a great deal of entrepreneurship (called ‘dealing’) and generated lots of revenue. Innovation! Entrepreneurship! Perhaps this point is cynical, but it draws our attention to a perverse reality: contemporary discourse treats innovation as a positive value in itself, when it is not.
Entire societies have come to talk about innovation as if it were an inherently desirable value, like love, fraternity, courage, beauty, dignity, or responsibility. Innovation-speak worships at the altar of change, but it rarely asks who benefits, to what end? A focus on maintenance provides opportunities to ask questions about what we really want out of technologies. What do we really care about? What kind of society do we want to live in? Will this help get us there? We must shift from means, including the technologies that underpin our everyday actions, to ends, including the many kinds of social beneficence and improvement that technology can offer. Our increasingly unequal and fearful world would be grateful….(More)”
Ethical Reasoning in Big Data
Book edited by Collmann, Jeff, and Matei, Sorin Adam: “This book springs from a multidisciplinary, multi-organizational, and multi-sector conversation about the privacy and ethical implications of research in human affairs using big data. The need to cultivate and enlist the public’s trust in the abilities of particular scientists and scientific institutions constitutes one of this book’s major themes. The advent of the Internet, the mass digitization of research information, and social media brought about, among many other things, the ability to harvest – sometimes implicitly – a wealth of human genomic, biological, behavioral, economic, political, and social data for the purposes of scientific research as well as commerce, government affairs, and social interaction. What type of ethical dilemmas did such changes generate? How should scientists collect, manipulate, and disseminate this information? The effects of this revolution and its ethical implications are wide-ranging.
This book includes the opinions of myriad investigators, practitioners, and stakeholders in big data on human beings who also routinely reflect on the privacy and ethical issues of this phenomenon. Dedicated to the practice of ethical reasoning and reflection in action, the book offers a range of observations, lessons learned, reasoning tools, and suggestions for institutional practice to promote responsible big data research on human affairs. It caters to a broad audience of educators, researchers, and practitioners. Educators can use the volume in courses related to big data handling and processing. Researchers can use it for designing new methods of collecting, processing, and disseminating big data, whether in raw form or as analysis results. Lastly, practitioners can use it to steer future tools or procedures for handling big data. As this topic represents an area of great interest that still remains largely undeveloped, this book is sure to attract significant interest by filling an obvious gap in currently available literature. …(More)”
How Big Data Creates False Confidence
Jesse Dunietz at Nautilus: “…A feverish push for “big data” analysis has swept through biology, linguistics, finance, and every field in between. Although no one can quite agree how to define it, the general idea is to find datasets so enormous that they can reveal patterns invisible to conventional inquiry. The data are often generated by millions of real-world user actions, such as tweets or credit-card purchases, and they can take thousands of computers to collect, store, and analyze. To many companies and researchers, though, the investment is worth it because the patterns can unlock information about anything from genetic disorders to tomorrow’s stock prices.
But there’s a problem: It’s tempting to think that with such an incredible volume of data behind them, studies relying on big data couldn’t be wrong. But the bigness of the data can imbue the results with a false sense of certainty. Many of them are probably bogus—and the reasons why should give us pause about any research that blindly trusts big data.
In the case of language and culture, big data showed up in a big way in 2011, when Google released its Ngrams tool. Announced with fanfare in the journal Science, Google Ngrams allowed users to search for short phrases in Google’s database of scanned books—about 4 percent of all books ever published!—and see how the frequency of those phrases has shifted over time. The paper’s authors heralded the advent of “culturomics,” the study of culture based on reams of data and, since then, Google Ngrams has been, well, largely an endless source of entertainment—but also a goldmine for linguists, psychologists, and sociologists. They’ve scoured its millions of books to show that, for instance, yes, Americans are becoming more individualistic; that we’re “forgetting our past faster with each passing year”; and that moral ideals are disappearing from our cultural consciousness.
The problems start with the way the Ngrams corpus was constructed. In a study published last October, three University of Vermont researchers pointed out that, in general, Google Books includes one copy of every book. This makes perfect sense for its original purpose: to expose the contents of those books to Google’s powerful search technology. From the angle of sociological research, though, it makes the corpus dangerously skewed….
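A purely illustrative sketch (all numbers invented) of why one-copy-per-book sampling matters: a bestseller read by millions and an obscure monograph each contribute one copy to the corpus, so corpus-wide word frequencies can diverge sharply from the frequencies in what people actually read.

```python
# Hypothetical illustration of corpus skew: each book counted once vs.
# weighted by how many copies actually circulated. All figures are invented.
books = [
    # (copies_in_circulation, total_words, occurrences_of_target_word)
    (1_000_000, 80_000, 400),  # popular novel: target word is common
    (200,       80_000,   2),  # obscure monograph: target word is rare
]

# Corpus-style estimate: one copy per book, regardless of readership.
corpus_freq = sum(occ for _, _, occ in books) / sum(wc for _, wc, _ in books)

# Readership-weighted estimate: weight each book by circulation.
read_occ = sum(copies * occ for copies, _, occ in books)
read_words = sum(copies * wc for copies, wc, _ in books)
readership_freq = read_occ / read_words

print(f"corpus estimate:     {corpus_freq:.6f}")
print(f"readership estimate: {readership_freq:.6f}")
```

With these invented numbers the readership-weighted frequency is roughly twice the corpus estimate, which is the kind of distortion the Vermont researchers warn about when the corpus is used for sociological inference.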
Even once you get past the data sources, there’s still the thorny issue of interpretation. Sure, words like “character” and “dignity” might decline over the decades. But does that mean that people care about morality less? Not so fast, cautions Ted Underwood, an English professor at the University of Illinois, Urbana-Champaign. Conceptions of morality at the turn of the last century likely differed sharply from ours, he argues, and “dignity” might have been popular for non-moral reasons. So any conclusions we draw by projecting current associations backward are suspect.
Of course, none of this is news to statisticians and linguists. Data and interpretation are their bread and butter. What’s different about Google Ngrams, though, is the temptation to let the sheer volume of data blind us to the ways we can be misled.
This temptation isn’t unique to Ngrams studies; similar errors undermine all sorts of big data projects. Consider, for instance, the case of Google Flu Trends (GFT). Released in 2008, GFT would count words like “fever” and “cough” in millions of Google search queries, using them to “nowcast” how many people had the flu. With those estimates, public health officials could act two weeks before the Centers for Disease Control could calculate the true numbers from doctors’ reports.
When big data isn’t seen as a panacea, it can be transformative.
Initially, GFT was claimed to be 97 percent accurate. But as a study out of Northeastern University documents, that accuracy was a fluke. First, GFT completely missed the “swine flu” pandemic in the spring and summer of 2009. (It turned out that GFT was largely predicting winter.) Then, the system began to overestimate flu cases. In fact, it overshot the peak 2013 numbers by a whopping 140 percent. Eventually, Google just retired the program altogether.
So what went wrong? As with Ngrams, people didn’t carefully consider the sources and interpretation of their data. The data source, Google searches, was not a static beast. When Google started auto-completing queries, users started just accepting the suggested keywords, distorting the searches GFT saw. On the interpretation side, GFT’s engineers initially let GFT take the data at face value; almost any search term was treated as a potential flu indicator. With millions of search terms, GFT was practically guaranteed to over-interpret seasonal words like “snow” as evidence of flu.
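The over-interpretation problem can be sketched with a toy example (stylized, invented data): given enough candidate search terms, a purely seasonal word like “snow” correlates with winter flu activity just as strongly as a genuinely flu-related term, so a naive term-selection step cannot tell them apart.

```python
import math

# Stylized sketch of the GFT pitfall: seasonal confounds masquerade as signal.
weeks = range(52)

# Invented weekly flu activity, peaking in winter (week 0 ~ early January).
flu = [50 + 40 * math.cos(2 * math.pi * w / 52) for w in weeks]

# A flu-related term and a merely seasonal one both track the winter peak.
fever_searches = [0.9 * f + 5 for f in flu]                                # genuine signal
snow_searches = [30 + 25 * math.cos(2 * math.pi * w / 52) for w in weeks]  # seasonal only

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Both terms correlate almost perfectly with flu activity, so selecting
# terms by correlation alone treats "snow" as a flu indicator.
print(pearson(fever_searches, flu))
print(pearson(snow_searches, flu))
```

In this toy setup both correlations come out at essentially 1.0; multiply that by millions of candidate terms and spurious "indicators" are guaranteed.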
But when big data isn’t seen as a panacea, it can be transformative. Several groups, like Columbia University researcher Jeffrey Shaman’s, for example, have outperformed the flu predictions of both the CDC and GFT by using the former to compensate for the skew of the latter. “Shaman’s team tested their model against actual flu activity that had already occurred during the season,” according to the CDC. By taking the immediate past into consideration, Shaman and his team fine-tuned their mathematical model to better predict the future. All it takes is for teams to critically assess their assumptions about their data….(More)
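The compensation idea above can be sketched in miniature (invented numbers; not Shaman's actual model, which is far more sophisticated): use weeks where lagged official counts exist to estimate the nowcast's bias, then deflate the newest, as-yet-unverified nowcast by that factor.

```python
# Hypothetical sketch: correcting a biased real-time nowcast with lagged
# ground truth, in the spirit of combining GFT-style and CDC-style data.
nowcast = [12.0, 14.0, 20.0, 28.0]  # real-time estimates, known to overshoot
truth_lagged = [5.1, 5.9, 8.4]      # official counts, available with a lag

# Estimate a multiplicative bias from the weeks where both are known...
bias = sum(n / t for n, t in zip(nowcast, truth_lagged)) / len(truth_lagged)

# ...and use it to deflate the newest nowcast, whose truth isn't out yet.
corrected_latest = nowcast[-1] / bias
print(f"raw nowcast: {nowcast[-1]}, bias-corrected: {corrected_latest:.1f}")
```

The point of the sketch is only the structure: the recent past supplies the correction that keeps the big-data signal honest.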
Foreign Policy has lost its creativity. Design thinking is the answer.
Elizabeth Radziszewski at The Wilson Quarterly: “Although the landscape of threats has changed in recent years, U.S. strategies bear striking resemblance to the ways policymakers dealt with crises in the past. Whether it involves diplomatic overtures, sanctions, bombing campaigns, or the use of special ops and covert operations, the range of responses suffers from innovation deficit. Even the use of drones, while a new tool of warfare, is still part of the limited categories of responses that focus mainly on whether or not to kill, cooperate, or do nothing. To meet the evolving nature of threats posed by nonstate actors such as ISIS, the United States needs a strategy makeover — a creative lift, so to speak.
Sanctions, diplomacy, bombing campaigns, special ops, covert operations — the range of our foreign policy responses suffers from an innovation deficit.
Enter the business world. Today’s top companies face an increasingly competitive marketplace where innovative approaches to product and service development are a necessity. Just as the market has changed for companies since the forces of globalization and the digital economy took over, so has the security landscape evolved for the world’s leading hegemon. Yet the responses of top businesses to these changes stand in stark contrast to the United States’ stagnant approaches to current national security threats. Many of today’s thriving businesses have embraced design thinking (DT), an innovative process that identifies consumer needs through immersive ethnographic experiences that are melded with creative brainstorming and quick prototyping.
What would happen if U.S. policymakers took cues from the business world and applied DT in policy development? Could the United States prevent the threats from metastasizing with more proactive rather than reactive strategies — by discovering, for example, how ideas from biology, engineering, and other fields could help analysts inject fresh perspective into tired solutions? Put simply, if U.S. policymakers want to succeed in managing future threats, then they need to start thinking more like business innovators who integrate human needs with technology and economic feasibility.
In his 1969 book The Sciences of the Artificial, Herbert Simon made the first connection between design and a way of thinking. But it was not until the 1980s and 1990s that Stanford scientists began to see the benefits of design practices used by industrial designers as a method for creative thinking. At the core of DT is the idea that solving a challenge requires a deeper understanding of the problem’s true nature and the processes and people involved. This approach contrasts greatly with more standard innovation styles, where a policy solution is developed and then resources are used to fit the solution to the problem. DT reverses the order.
DT encourages divergent thinking, the process of generating many ideas before converging to select the most feasible ones, including making connections between different-yet-related worlds. Finally, the top ideas are quickly prototyped and tested so that early solutions can be modified without investing many resources and risking the biggest obstacle to real innovation: the impulse to try fitting an idea, product, or policy to the people, rather than the other way around…
If DT has reenergized the innovative process in the business and nonprofit sector, a systematic application of its methodology could just as well revitalize U.S. national security policies. Innovation in security and foreign policy is often framed around the idea of technological breakthroughs. Thanks to the Defense Advanced Research Projects Agency (DARPA), the Department of Defense has been credited with such groundbreaking inventions as GPS, the Internet, and stealth fighters — all of which have created rich opportunities to explore new military strategies. Reflecting this infatuation with technology, but with a new edge, is Defense Secretary Ashton Carter’s unveiling of the Defense Innovation Unit Experimental, an initiative to scout for new technologies, improve outreach to startups, and form deeper relationships between the Pentagon and Silicon Valley. The new DIUE effort signals what businesses have already noticed: the need to be more flexible in establishing linkages with people outside of the government in search for new ideas.
Yet because the primary objective of DIUE remains technological prowess, the effort alone is unlikely to drastically improve the management of national security. Technology is not a substitute for an innovative process. When new invention is prized as the sole focus of innovation, it can, paradoxically, paralyze innovation. Once an invention is adopted, it is all too tempting to mold subsequent policy development around emergent technology, even if other solutions could be more appropriate….(More)”