Recalculating GDP for the Facebook age


Gillian Tett at the Financial Times: How big is the impact of Facebook on our lives? That question has caused plenty of hand-wringing this year, as revelations have tumbled out about the political influence of Big Tech companies.

Economists are attempting to look at this question too — but in a different way. They have been quietly trying to calculate the impact of Facebook on gross domestic product data, ie to measure what our social-media addiction is doing to economic output….

Kevin Fox, an Australian economist, thinks there is a way to do this. Working with four other economists, including Erik Brynjolfsson, a professor at MIT, he recently surveyed consumers to see what they would “pay” for Facebook in monetary terms, concluding conservatively that this was about $42 a month. Extrapolating this to the wider economy, he then calculated that the “value” of the social-media platform is equivalent to 0.11 per cent of US GDP. That might not sound transformational. But this week Fox presented the group’s findings at an IMF conference on the digital economy in Washington DC and argued that if Facebook activity had been counted as output in the GDP data, it would have raised the annual average US growth rate from 1.83 per cent to 1.91 per cent between 2003 and 2017. The number would rise further if you included other platforms – researchers believe that “maps” and WhatsApp are particularly important – or other services. Take photographs, for example.
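To get a feel for the scale of the adjustment, the two quoted growth rates can be compounded over the 2003–2017 window. This is a back-of-the-envelope sketch only: the growth rates come from the excerpt, but the 2003 GDP level is a round assumption of mine, so the output is purely illustrative.

```python
# Compound the two quoted average growth rates over 14 years (2003-2017)
# and compare the implied 2017 output levels. The base GDP figure is an
# assumed round number, not the paper's data.
base = 11.5e12                      # assumed US GDP in 2003, USD
measured = base * 1.0183 ** 14      # growth as officially measured
expanded = base * 1.0191 ** 14      # growth with Facebook counted as output
print(f"extra 2017 output implied: ${(expanded - measured) / 1e9:.0f}bn")
```

Even a 0.08 percentage-point difference in the average growth rate, compounded over 14 years, implies well over $100bn of additional annual output by the end of the period.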

Back in 2000, as the group points out, about 80 billion photos were taken each year at a cost of 50 cents a picture in camera and processing fees. This was recorded in GDP. Today, 1.6 trillion photos are taken each year, mostly on smartphones, for “free”, and they are excluded from GDP data. What would happen if these were measured too, along with other types of digital services?
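The photography figures above make for a quick bit of arithmetic. Valuing today’s “free” smartphone photos at the old 50-cent price is a deliberately naive assumption (smartphone snapshots are plainly not worth what film processing once cost), but it conveys the scale of activity that has vanished from the statistics.

```python
# Arithmetic on the excerpt's photography figures. Applying the old
# 50-cent per-picture cost to today's volume is a naive assumption,
# used only to show the order of magnitude involved.
photos_2000 = 80e9        # photos taken per year circa 2000 (excerpt)
cost_per_photo = 0.50     # USD in camera and processing fees (excerpt)
photos_today = 1.6e12     # photos taken per year today, mostly "free" (excerpt)

recorded_then = photos_2000 * cost_per_photo
unrecorded_now = photos_today * cost_per_photo  # naive same-price valuation

print(f"counted in 2000 GDP: ${recorded_then / 1e9:.0f}bn")
print(f"invisible today at the old price: ${unrecorded_now / 1e9:.0f}bn")
```

That is $40bn of photography once visible to statisticians, against a notional $800bn of activity that now happens outside the market altogether.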

The bad news is that there is no consensus among economists on this point, and the debate is still at a very early stage. … A separate paper from Charles Hulten and Leonard Nakamura, economists at the University of Maryland and Philadelphia Fed respectively, explained another idea: a measurement known as “EGDP” or “Expanded GDP”, which incorporates “welfare” contributions from digital services. “The changes wrought by the digital revolution require changes to official statistics,” they said.

Yet another paper from Nakamura, co-written with Diane Coyle of Cambridge University, argued that we should also reconfigure the data to measure how we “spend” our time, rather than “just” how we spend our money. “To recapture welfare in the age of digitalisation, we need shadow prices, particularly of time,” they said. Meanwhile, US government number-crunchers have been trying to measure the value of “free” open-source software, such as R, Python, Julia and JavaScript, concluding that if captured in statistics these would be worth about $3bn a year. Another team of government statisticians has been trying to value the data held by companies – estimating, via one method, that Amazon’s data is currently worth $125bn, with a 35 per cent annual growth rate, while Google’s is worth $48bn, growing at 22 per cent each year. It is unlikely that these numbers – and methodologies – will become mainstream any time soon….(More)”.

The soft spot of hard code: blockchain technology, network governance and pitfalls of technological utopianism


Moritz Hutten at Global Networks: “The emerging blockchain technology is expected to contribute to the transformation of ownership, government services and global supply chains. By analysing a crisis that occurred with one of its frontrunners, Ethereum, in this article I explore the discrepancies between the purported governance of blockchains and the de facto control of them through expertise and reputation. Ethereum is also thought to exemplify libertarian techno‐utopianism.

When ‘The DAO’, a highly publicized but faulty crowd‐funded venture fund, was deployed on the Ethereum blockchain, the techno‐utopianism was suspended, and developers fell back on strong network ties. Now that blockchain technology is seeing increasing uptake, I shall also seek to unearth broader implications of the blockchain for the proliferation or blockage of global finance and beyond. Contrasting claims about the disruptive nature of the technology, in this article I show that, by redeeming the positive utopia of ontic, individualized debt, blockchains reinforce our belief in a crisis‐ridden, financialized capitalism….(More)”.

Crowdlaw: Collective Intelligence and Lawmaking


Paper by Beth Noveck in Analyse & Kritik: “To tackle the fast-moving challenges of our age, law and policymaking must become more flexible, evolutionary and agile. Thus, in this Essay we examine ‘crowdlaw’, namely how city councils at the local level and parliaments at the regional and national level are turning to technology to engage with citizens at every stage of the law and policymaking process.

As we hope to demonstrate, crowdlaw holds the promise of improving the quality and effectiveness of outcomes by enabling policymakers to interact with a broader public using methods designed to serve the needs of both institutions and individuals. Crowdlaw is less a prescription for more deliberation to ensure greater procedural legitimacy by having better inputs into lawmaking processes than a practical demand for more collaborative approaches to problem-solving that yield better outputs, namely policies that achieve their intended aims. However, as we shall explore, the projects that most enhance the epistemic quality of lawmaking are those that are designed to meet the specific informational needs for that stage of problem-solving….(More)”.

Driven to safety — it’s time to pool our data


Kevin Guo at TechCrunch: “…Anyone with experience in the artificial intelligence space will tell you that quality and quantity of training data is one of the most important inputs in building real-world-functional AI. This is why today’s large technology companies continue to collect and keep detailed consumer data, despite recent public backlash. From search engines, to social media, to self driving cars, data — in some cases even more than the underlying technology itself — is what drives value in today’s technology companies.

It should be no surprise then that autonomous vehicle companies do not publicly share data, even in instances of deadly crashes. When it comes to autonomous vehicles, the public interest (making safe self-driving cars available as soon as possible) is clearly at odds with corporate interests (making as much money as possible on the technology).

We need to create industry and regulatory environments in which autonomous vehicle companies compete based upon the quality of their technology – not just upon their ability to spend hundreds of millions of dollars to collect and silo as much data as possible (yes, this is how much gathering this data costs). In today’s environment the inverse is true: autonomous car manufacturers are focused on gathering as many miles of data as possible, with the intention of feeding more information into their models than their competitors, all the while avoiding working together….

This data is diverse and complex, yet fundamentally public – I am not suggesting that people hand over private, privileged data, but that companies actively pool and combine what the cars are seeing. There’s a reason that many of the autonomous car companies are driving millions of virtual miles – they’re attempting to get as much active driving data as they can. Beyond the fact that they drove those miles, what truly makes that data something that they have to hoard? By sharing these miles, by seeing as much of the world in as much detail as possible, these companies can focus on making smarter, better autonomous vehicles and bring them to market faster.

If you’re reading this and thinking it’s deeply unfair, I encourage you to once again consider that 40,000 people die preventable deaths every year in America alone. If you are not compelled by the massive life-saving potential of the technology, consider that publicly licensable self-driving data sets would accelerate innovation by removing a substantial portion of the capital barrier-to-entry in the space and increasing competition….(More)”

Blockchain systems are tracking food safety and origins


Nir Kshetri at The Conversation: “When a Chinese consumer buys a package labeled “Australian beef,” there’s only a 50-50 chance the meat inside is, in fact, Australian beef. It could just as easily contain rat, dog, horse or camel meat – or a mixture of them all. It’s gross and dangerous, but also costly.

Fraud in the global food industry is a multi-billion-dollar problem that has lingered for years, duping consumers and even making them ill. Food manufacturers around the world are concerned – as many as 39 percent of them are worried that their products could be easily counterfeited, and 40 percent say food fraud is hard to detect.

In researching blockchain for more than three years, I have become convinced that this technology’s potential to prevent fraud and strengthen security could fight agricultural fraud and improve food safety. Many companies agree, and are already running various tests, including tracking wine from grape to bottle and even following individual coffee beans through international trade.

Tracing food items

An early trial of a blockchain system to track food from farm to consumer was in 2016, when Walmart collected information about pork being raised in China, where consumers are rightly skeptical about sellers’ claims of what their food is and where it’s from. Employees at a pork farm scanned images of farm inspection reports and livestock health certificates, storing them in a secure online database where the records could not be deleted or modified – only added to.

As the animals moved from farm to slaughter to processing, packaging and then to stores, the drivers of the freight trucks played a key role. At each step, they would collect documents detailing the shipment, storage temperature and other inspections and safety reports, and official stamps as authorities reviewed them – just as they did normally. In Walmart’s test, however, the drivers would photograph those documents and upload them to the blockchain-based database. The company controlled the computers running the database, but government agencies’ systems could also be involved, to further ensure data integrity.
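The tamper-evidence the trial relied on – records that can be added to but never modified or deleted – comes from chaining each record to a hash of the record before it. The sketch below is a minimal illustration of that idea in plain Python, not a description of the actual Walmart system; the record fields and class name are invented for the example.

```python
# Minimal sketch of an append-only, hash-chained record store. Each entry
# stores the hash of the previous entry, so altering any earlier document
# makes the whole chain fail verification. Real blockchain systems add
# distribution and consensus on top of this basic structure.
import hashlib
import json

def record_hash(record):
    """Deterministic SHA-256 hash of a JSON-serializable record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class ProvenanceLedger:
    def __init__(self):
        self.chain = []

    def append(self, document):
        prev = self.chain[-1]["hash"] if self.chain else "0" * 64
        entry = {"doc": document, "prev": prev}
        entry["hash"] = record_hash({"doc": document, "prev": prev})
        self.chain.append(entry)

    def verify(self):
        prev = "0" * 64
        for entry in self.chain:
            if entry["prev"] != prev:
                return False
            if entry["hash"] != record_hash({"doc": entry["doc"], "prev": entry["prev"]}):
                return False
            prev = entry["hash"]
        return True

ledger = ProvenanceLedger()
ledger.append({"stage": "farm", "inspection": "passed", "temp_c": 4})
ledger.append({"stage": "transport", "inspection": "passed", "temp_c": 3})
print(ledger.verify())                  # True
ledger.chain[0]["doc"]["temp_c"] = 9    # tamper with an early record
print(ledger.verify())                  # False
```

Once any stored document is altered, its recomputed hash no longer matches the stored one, so verification of the whole chain fails from that point on.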

As the pork was packaged for sale, a sticker was put on each container, displaying a smartphone-readable code that would link to that meat’s record on the blockchain. Consumers could scan the code right in the store and assure themselves that they were buying exactly what they thought they were. More recent advances in the technology of the stickers themselves have made them more secure and counterfeit-resistant.

Walmart did similar tests on mangoes imported to the U.S. from Latin America. The company found that it took only 2.2 seconds for consumers to find out an individual fruit’s weight, variety, growing location, time it was harvested, date it passed through U.S. customs, when and where it was sliced, which cold-storage facility the sliced mango was held in and for how long it waited before being delivered to a store….(More)”.

Big Data Ethics and Politics: Toward New Understandings


Introductory paper by Wenhong Chen and Anabel Quan-Haase of Special Issue of the Social Science Computer Review: “The hype around big data does not seem to abate, nor do the scandals. Privacy breaches in the collection, use, and sharing of big data have affected all the major tech players, be it Facebook, Google, Apple, or Uber, and go beyond the corporate world, including governments, municipalities, and educational and health institutions. What has come to light is that, enabled by the rapid growth of social media and mobile apps, various stakeholders collect and use large amounts of data, disregarding ethics and politics.

As big data touch on many realms of daily life and have profound impacts in the social world, the scrutiny around big data practice becomes increasingly relevant. This special issue investigates the ethics and politics of big data using a wide range of theoretical and methodological approaches. Together, the articles provide new understandings of the many dimensions of big data ethics and politics, showing it is important to understand and increase awareness of the biases and limitations inherent in big data analysis and practices….(More)”

It’s time to let citizens tackle the wickedest public problems


Gabriella Capone at apolitical (a winner of the 2018 Apolitical Young Thought Leaders competition): “Rain ravaged Gdańsk in 2016, taking the lives of two residents and causing millions of euros in damage. Despite its 700-year history of flooding, the city was overwhelmed by these especially devastating floods. Gdańsk also lies on one of the stretches of European coast most exposed to rising sea levels. It needed a new approach to avoid similar outcomes in the next, inevitable encounter with this worsening problem.

Bringing in citizens to tackle such a difficult issue was not the obvious course of action. Yet this was the proposal of Dr. Marcin Gerwin, an advocate from a neighbouring town who paved the way for Poland’s first participatory budgeting experience.

Mayor Adamowicz of Gdańsk agreed and, within a year, they welcomed about 60 people to the first Citizens Assembly on flood mitigation. Implemented by Dr. Gerwin and a team of coordinators, the Assembly convened over four Saturdays, heard expert testimony, and devised solutions.

The Assembly was not only deliberative and educational, it was action-oriented. Mayor Adamowicz committed to implement proposals on which 80% or more of participants agreed. The final 16 proposals included the investment of nearly $40 million in monitoring systems and infrastructure, subsidies to incentivise individuals to improve water management on their property, and an educational “Do Not Flood” campaign to highlight emergency resources.

It may seem risky to outsource the solving of difficult issues to citizens. Yet, when properly designed, public problem-solving can produce creative resolutions to formidable challenges. Beyond Poland, public problem-solving initiatives in Mexico and the United States are making headway on pervasive issues, from flooding to air pollution, to technology in public spaces.

The GovLab, with support from the Tinker Foundation, is analysing what makes for more successful public problem-solving as part of its City Challenges program. Below, I provide a glimpse into the types of design choices that can amplify the impact of public problem-solving….(More)

You can’t characterize human nature if studies overlook 85 percent of people on Earth


Daniel Hruschka at the Conversation: “Over the last century, behavioral researchers have revealed the biases and prejudices that shape how people see the world and the carrots and sticks that influence our daily actions. Their discoveries have filled psychology textbooks and inspired generations of students. They’ve also informed how businesses manage their employees, how educators develop new curricula and how political campaigns persuade and motivate voters.

But a growing body of research has raised concerns that many of these discoveries suffer from severe biases of their own. Specifically, the vast majority of what we know about human psychology and behavior comes from studies conducted with a narrow slice of humanity – college students, middle-class respondents living near universities and highly educated residents of wealthy, industrialized and democratic nations.

Blue countries represent the locations of 93 percent of studies published in Psychological Science in 2017. Dark blue is U.S., blue is Anglophone colonies with a European descent majority, light blue is western Europe. Regions sized by population.

To illustrate the extent of this bias, consider that more than 90 percent of studies recently published in psychological science’s flagship journal come from countries representing less than 15 percent of the world’s population.

If people thought and behaved in basically the same ways worldwide, selective attention to these typical participants would not be a problem. Unfortunately, in those rare cases where researchers have reached out to a broader range of humanity, they frequently find that the “usual suspects” most often included as participants in psychology studies are actually outliers. They stand apart from the vast majority of humanity in things like how they divvy up windfalls with strangers, how they reason about moral dilemmas and how they perceive optical illusions.

Given that these typical participants are often outliers, many scholars now describe them and the findings associated with them using the acronym WEIRD, for Western, educated, industrialized, rich and democratic.

WEIRD isn’t universal

Because so little research has been conducted outside this narrow set of typical participants, anthropologists like me cannot be sure how pervasive or consequential the problem is. A growing body of case studies suggests, though, that assuming such typical participants are the norm worldwide is not only scientifically suspect but can also have practical consequences….(More)”.

The Blockchain and the New Architecture of Trust


Book by Kevin Werbach: “The blockchain entered the world on January 3, 2009, introducing an innovative new trust architecture: an environment in which users trust a system—for example, a shared ledger of information—without necessarily trusting any of its components. The cryptocurrency Bitcoin is the most famous implementation of the blockchain, but hundreds of other companies have been founded and billions of dollars invested in similar applications since Bitcoin’s launch. Some see the blockchain as offering more opportunities for criminal behavior than benefits to society. In this book, Kevin Werbach shows how a technology resting on foundations of mutual mistrust can become trustworthy.

The blockchain, built on open software and decentralized foundations that allow anyone to participate, seems like a threat to any form of regulation. In fact, Werbach argues, law and the blockchain need each other. Blockchain systems that ignore law and governance are likely to fail, or to become outlaw technologies irrelevant to the mainstream economy. That, Werbach cautions, would be a tragic waste of potential. If, however, we recognize the blockchain as a kind of legal technology that shapes behavior in new ways, it can be harnessed to create tremendous business and social value….(More)”

Data-Driven Development


Report by the World Bank: “…Decisions based on data can greatly improve people’s lives. Data can uncover patterns, unexpected relationships and market trends, making it possible to address previously intractable problems and leverage hidden opportunities. For example, tracking genes associated with certain types of cancer to improve treatment, or using commuter travel patterns to devise public transportation that is affordable and accessible for users, as well as profitable for operators.

Data is clearly a precious commodity, and the report points out that people should have greater control over the use of their personal data. Broadly speaking, there are three possible answers to the question “Who controls our data?”: firms, governments, or users. No global consensus yet exists on the extent to which private firms that mine data about individuals should be free to use the data for profit and to improve services.

Users’ willingness to share data in return for benefits and free services – such as virtually unrestricted use of social media platforms – varies widely by country. In addition, early internet adopters, who grew up with the internet and are now aged 30–40, are the most willing to share (GfK 2017).

Are you willing to share your data? (source: GfK 2017)


On the other hand, data can worsen the digital divide – the data poor, who leave no digital trail because they have limited access, are most at risk of exclusion from services, opportunities and rights, as are those who lack a digital ID, for instance.

Firms and Data

For private sector firms, particularly those in developing countries, the report suggests how they might expand their markets and improve their competitive edge. Companies are already developing new markets and making profits by analyzing data to better understand their customers. This is transforming conventional business models. For years, telecommunications was funded by users paying for phone calls. Today, advertisers who pay for users’ data and attention are funding the internet, social media, and other platforms, such as apps – reversing the value flow.

Governments and Data

For governments and development professionals, the report provides guidance on how they might use data more creatively to help tackle key global challenges, such as eliminating extreme poverty, promoting shared prosperity, or mitigating the effects of climate change. The first step is developing appropriate guidelines for data sharing and use, and for anonymizing personal data. Governments are already beginning to use the huge quantities of data they hold to enhance service delivery, though they still have far to go to catch up with the commercial giants, the report finds.

Data for Development

The Information and Communications for Development report analyses how the data revolution is changing the behavior of governments, individuals, and firms and how these changes affect economic, social, and cultural development. This is a topic of growing importance that cannot be ignored, and the report aims to stimulate wider debate on the unique challenges and opportunities of data for development. It will be useful for policy makers, but also for anyone concerned about how their personal data is used and how the data revolution might affect their future job prospects….(More)”.