Paper by Mark Latonero and Paula Kift: “Since 2014, millions of refugees and migrants have arrived at the borders of Europe. This article argues that, in making their way to safe spaces, refugees rely not only on a physical but increasingly also digital infrastructure of movement. Social media, mobile devices, and similar digitally networked technologies comprise this infrastructure of “digital passages”—sociotechnical spaces of flows in which refugees, smugglers, governments, and corporations interact with each other and with new technologies. At the same time, a digital infrastructure for movement can just as easily be leveraged for surveillance and control. European border policies, in particular, instantiate digital controls over refugee movement and identity. We review the actors, technologies, and policies of movement and control in the EU context and argue that scholars, policymakers, and the tech community alike should pay heed to the ethics of the use of new technologies in refugee and migration flows….(More)”.
How Democracy Can Survive Big Data
Colin Koopman in The New York Times: “…The challenge of designing ethics into data technologies is formidable. This is in part because it requires overcoming a century-long ethos of data science: Develop first, question later. Datafication first, regulation afterward. A glimpse at the history of data science shows as much.
The techniques that Cambridge Analytica uses to produce its psychometric profiles are the cutting edge of data-driven methodologies first devised 100 years ago. The science of personality research was born in 1917. That year, in the midst of America’s fevered entry into war, Robert Sessions Woodworth of Columbia University created the Personal Data Sheet, a questionnaire that promised to assess the personalities of Army recruits. The war ended before Woodworth’s psychological instrument was ready for deployment, but the Army had envisioned its use according to the precedent set by the intelligence tests it had been administering to new recruits under the direction of Robert Yerkes, a professor of psychology at Harvard at the time. The data these tests could produce would help decide who should go to the fronts, who was fit to lead and who should stay well behind the lines.
The stakes of those wartime decisions were particularly stark, but the aftermath of those psychometric instruments is even more unsettling. As the century progressed, such tests — I.Q. tests, college placement exams, predictive behavioral assessments — would affect the lives of millions of Americans. Schoolchildren who may have once or twice acted out in such a way as to prompt a psychometric evaluation could find themselves labeled, setting them on an inescapable track through the education system.
Researchers like Woodworth and Yerkes (or their Stanford colleague Lewis Terman, who formalized the first SAT) did not anticipate the deep consequences of their work; they were too busy pursuing the great intellectual challenges of their day, much like Mr. Zuckerberg in his pursuit of the next great social media platform. Or like Cambridge Analytica’s Christopher Wylie, the twentysomething data scientist who helped build psychometric profiles of two-thirds of all Americans by leveraging personal information gained through uninformed consent. All of these researchers were, quite understandably, obsessed with the great data science challenges of their generation. Their failure to consider the consequences of their pursuits, however, is not so much their fault as it is our collective failing.
For the past 100 years we have been chasing visions of data with a singular passion. Many of the best minds of each new generation have devoted themselves to delivering on the inspired data science promises of their day: intelligence testing, building the computer, cracking the genetic code, creating the internet, and now this. We have in the course of a single century built an entire society, economy and culture that runs on information. Yet we have hardly begun to engineer data ethics appropriate for our extraordinary information carnival. If we do not do so soon, data will drive democracy, and we may well lose our chance to do anything about it….(More)”.
The Cambridge Handbook of Consumer Privacy
Handbook by Evan Selinger, Jules Polonetsky, and Omer Tene: “Businesses are rushing to collect personal data to fuel surging demand. Data enthusiasts claim personal information that’s obtained from the commercial internet, including mobile platforms, social networks, cloud computing, and connected devices, will unlock path-breaking innovation, including advanced data security. By contrast, regulators and activists contend that corporate data practices too often disempower consumers by creating privacy harms and related problems. As the Internet of Things matures and facial recognition, predictive analytics, big data, and wearable tracking grow in power, scale, and scope, a controversial ecosystem will exacerbate the acrimony over commercial data capture and analysis. The only productive way forward is to get a grip on the key problems right now and change the conversation….(More)”.
AI And Open Data Show Just How Often Cars Block Bus And Bike Lanes
Eillie Anzilotti in Fast Company: “…While anyone who bikes or rides a bus in New York City knows intuitively that the lanes are often blocked, there’s been little data to back up that feeling apart from the fact that last year, the NYPD issued 24,000 tickets for vehicles blocking bus lanes, and around 79,000 to cars in the bike lane. By building the algorithm, Bell exemplifies what engaged citizenship and productive use of open data look like. The New York City Department of Transportation maintains several hundred video cameras throughout the city; those cameras feed images in real time to the DOT’s open-data portal. Bell downloaded a week’s worth of footage from that portal to analyze.
To build his computer algorithm to do the analysis, he fed around 2,000 images of buses, cars, pedestrians, and vehicles like UPS trucks into TensorFlow, Google’s open-source framework that the tech giant is using to train autonomous vehicles to recognize other road users. “Because of the push into AVs, machine learning in general and neural networks have made lots of progress, because they have to answer the same questions of: What is this vehicle, and what is it going to do?” Bell says. After several rounds of processing, Bell arrived at an algorithm that could fairly reliably determine whether a vehicle at the bus stop was, in fact, a bus, or whether it was something else that wasn’t supposed to be there.
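The article doesn’t include Bell’s code, but the final step it describes — deciding whether a detected vehicle in a bus-lane frame is actually a bus or a violator — might look like the sketch below. The label names, confidence threshold, and frame format are assumptions for illustration, not details from the article.

```python
# Hypothetical sketch: filter per-frame detections from a bus-lane camera.
# The labels and the 0.5 confidence threshold are illustrative assumptions.

ALLOWED_LABELS = {"bus", "pedestrian"}  # buses belong there; pedestrians aren't violations

def is_blocking_vehicle(detection, min_confidence=0.5):
    """Return True if a detection looks like a non-bus vehicle in the bus lane."""
    if detection["confidence"] < min_confidence:
        return False  # too uncertain to count as a violation
    return detection["label"] not in ALLOWED_LABELS

detections = [
    {"label": "bus", "confidence": 0.92},
    {"label": "car", "confidence": 0.88},
    {"label": "truck", "confidence": 0.35},  # low confidence: ignored
]
violations = [d for d in detections if is_blocking_vehicle(d)]
print(len(violations))  # only the car is a confident non-bus detection
```

In a real pipeline the detections themselves would come from a trained TensorFlow model; this sketch only shows the downstream decision rule.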
As cities and governments, spurred by organizations like OpenGov, have moved to embrace transparency and open data, the question remains: So, what do you do with it?
For Bell, the answer is that citizens can use it to empower themselves. “I’m a little uncomfortable with cameras and surveillance in cities,” Bell says. “But agencies like the NYPD and DOT have already made the decision to put the cameras up. We don’t know the positive and negative outcomes if more and more data from cameras is opened to the public, but if the cameras are going in, we should know what data they’re collecting and be able to access it,” he says. He’s made his algorithm publicly available in the hopes that more people will use data to investigate the issue on their own streets, and perhaps in other cities…. Bell is optimistic that open data can empower more citizens to identify issues in their own cities and bring a case for why they need to be addressed….(More)”.
Finding a more human government
Report by the Centre for Public Impact: “…embarked upon a worldwide project to find out how governments can strengthen their legitimacy. Amidst the turbulence and unpredictability of recent years, there are many contemporary accounts of people feeling angry, cynical or ambivalent about government.
While much has been said about the personalities of leaders and the rise of populist parties, what’s less clear is what governments could actually do to strengthen legitimacy, a concept most agree remains integral to worldwide stability and peace. To find out what legitimacy means to people today and how it could be strengthened, we decided to break out of the usual circles of influence and hear directly from citizens around the world. People were open and honest about their struggle to get anyone in government to understand and to listen. Some shed tears, while others felt angry about how their voices and identities seemed undervalued. Everyone, however, wanted to show that it was still very possible to build a stronger relationship and understanding between governments and people, even if the day-to-day actions of government were not always popular.
The aim of this paper is not to provide the definitive model for legitimacy. Instead, we have sought to be open about what we heard, stay true to people’s views and shine a light on the common themes that could help governments have better conversations about building legitimacy into all their systems and with the support of their citizens.
We gathered case studies to show how this was already happening and found positive examples in places we didn’t expect. The importance of governments showing their human side – even in our age of AI and robotics – emerged as such a key priority, and is why we called this paper Finding a more human government.
This is a conversation that has only just begun…. To see what others are saying, do take a look at our website www.findinglegitimacy.centreforpublicimpact.org”
Can Data Help Brazil Take a Bite Out of Crime?
Joe Leahy at ZY See Beyond: “When Argentine entrepreneur Federico Vega two years ago launched a startup offering Uber-like services for Brazil’s freight industry, the sector was on the cusp of a wave of cargo theft.
Across Brazil, but especially in Rio de Janeiro, crime has soared, with armed gangs robbing one truck every 50 minutes in Rio last year.
But while the authorities have reacted with force to the crime wave, Vega turned to software engineers at his CargoX startup. By studying a range of industry and security data, CargoX developed software that identifies risks and helps drivers avoid crime hot spots or, if a robbery does happen, alerts the company in real time. CargoX says that in Brazil, 0.1 percent by value of all cargo transported by trucks is stolen. “We are about 50 percent lower than that, but we still have tons of work to do,” says São Paulo–based Vega.
CargoX is one of a growing number of Brazilian technology startups that are seeking digital solutions to the problem of endemic crime in Latin America’s largest country.
Having started from zero two years ago, CargoX today has signed up more than 5,000 truckers. The company scans data from all sources to screen its motorists and study past crimes to see what routes, times, neighborhoods and types of cargo represent the highest risk.
Certain gas stations that might, for instance, be known for prostitution are avoided because of their criminal associations. Daytime delivery is better than night. Drivers are tracked by GPS and must stay inside “geofences” — known safe routes. Straying outside them triggers an alert in the system.
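The article doesn’t describe CargoX’s implementation, but the geofence check it mentions reduces, at its simplest, to a point-in-polygon test on each GPS fix. The sketch below uses the standard ray-casting method; the coordinates and function names are invented for illustration.

```python
# Hypothetical sketch of a geofence check: is the truck's latest GPS fix
# inside the polygon marking the approved safe route? Coordinates are made up.

def inside_geofence(point, polygon):
    """Ray-casting point-in-polygon test; polygon is a list of (x, y) vertices."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross this polygon edge?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

safe_zone = [(0.0, 0.0), (4.0, 0.0), (4.0, 1.0), (0.0, 1.0)]  # a simple corridor
print(inside_geofence((2.0, 0.5), safe_zone))  # True: on route
print(inside_geofence((2.0, 3.0), safe_zone))  # False: off route, raise an alert
```

A production system would run this check on a stream of fixes and page a dispatcher on the first off-route reading; that monitoring layer is omitted here.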
Vega says the key is to learn from the data. “Everyone says it’s good to learn from your mistakes, but it’s even better to learn from other people’s mistakes.”
The use of big data to anticipate crime is at the center of the approach of another tech-savvy entrepreneur, Pedro Moura Costa, the founder of BVRio Institute, an organization that seeks market solutions to environmental issues.
Organized crime is targeting everything from highway robbery to the illegal plunder of tropical hardwoods in the Amazon while online crime such as credit card fraud is also rampant, analysts say….(More)”.
How the government will operate in 2030
Darrell West at the Hill: “Imagine it is 2030 and you are a U.S. government employee working from home. With the assistance of the latest technology, you participate in video calls with clients and colleagues, augment your job activities through artificial intelligence and a personal digital assistant, work through collaboration software, and regularly get rated on a one-to-five scale by clients regarding your helpfulness, follow-through, and task completion.
How did you — and the government — get here? The sharing economy that unfolded in 2018 has revolutionized the public-sector workforce. The days when federal employees were subject to a centrally directed Office of Personnel Management that oversaw permanent, full-time workers sitting in downtown office buildings are long gone. In their place is a remote workforce staffed by a mix of short- and long-term employees. This has dramatically improved worker productivity and satisfaction.
In the new digital world that has emerged, the goal is to use technology to make employees accountable. Gone are 20- or 30-year careers in the federal bureaucracy. Political leaders have always preached the virtue of running government like a business, and the success of Uber, Airbnb, and WeWork has persuaded them to focus on accountability and performance.
Companies such as Facebook demonstrated they could run large and complex organizations with fewer than 20,000 employees, and the federal government followed suit in the late 2020s. Now, workers deploy the latest tools of artificial intelligence, virtual reality, data analytics, robots, driverless cars, and digital assistants to improve the government. Unlike the widespread mistrust and cynicism that had poisoned attitudes in the decades before, the general public now sees government as a force for achieving positive results.
Many parts of the federal government are decentralized and mid-level employees are given greater authority to make decisions — but are subject to digital ratings that keep them accountable for their performance. The U.S. government borrowed this technique from China, where airport authorities in 2018 installed digital devices that allowed visitors to rate the performance of individual passport officers after every encounter. The reams of data have enabled Chinese authorities to fire poor performers and make sure foreign visitors see a friendly and competent face at the Beijing International Airport.
Alexa-like devices are given to all federal employees. The devices are used to keep track of leave time, file reimbursement requests, request time off, and complete a range of routine tasks that used to take employees hours. Through voice-activated commands, they navigate these mundane tasks quickly and efficiently. No one can believe the mountains of paperwork required just a decade ago….(More)”.
The People vs. Democracy: Why Our Freedom Is in Danger and How to Save It
Book by Yascha Mounk: “The world is in turmoil. From India to Turkey and from Poland to the United States, authoritarian populists have seized power. As a result, Yascha Mounk shows, democracy itself may now be at risk.
Two core components of liberal democracy—individual rights and the popular will—are increasingly at war with each other. As the role of money in politics soared and important issues were taken out of public contestation, a system of “rights without democracy” took hold. Populists who rail against this say they want to return power to the people. But in practice they create something just as bad: a system of “democracy without rights.”
The consequence, Mounk shows in The People vs. Democracy, is that trust in politics is dwindling. Citizens are falling out of love with their political system. Democracy is wilting away. Drawing on vivid stories and original research, Mounk identifies three key drivers of voters’ discontent: stagnating living standards, fears of multiethnic democracy, and the rise of social media. To reverse the trend, politicians need to enact radical reforms that benefit the many, not the few.
The People vs. Democracy is the first book to go beyond a mere description of the rise of populism. In plain language, it describes both how we got here and where we need to go. For those unwilling to give up on either individual rights or the popular will, Mounk shows, there is little time to waste: this may be our last chance to save democracy….(More)”
Data for Development: What’s next? Concepts, trends and recommendations
Report by the WebFoundation: “The exponential growth of data provides powerful new ways for governments and companies to understand and respond to challenges and opportunities. This report, Data for Development: What’s next, investigates how organisations working in international development can leverage the growing quantity and variety of data to improve their investments and projects so that they better meet people’s needs.
Investigating the state of data for development and identifying emerging data trends, the study provides recommendations to support German development cooperation actors seeking to integrate data strategies and investments in their work. These insights can guide any organisation seeking to use data to enhance their development work.
The research considers four types of data: (1) big data, (2) open data, (3) citizen-generated data and (4) real-time data, and examines how they are currently being used in development-related policy-making and how they might lead to better development outcomes….(More)”.
Can Social Media Help Build Communities?
Paper by Eric Forbush and Nicol Turner-Lee: “In June 2017, at the company’s first Community Summit, Mark Zuckerberg proclaimed a new mission for Facebook: to “[g]ive people the power to build community and bring the world closer together.” Yet his declaration comes against the backdrop of a politically polarized America. While research has indicated that ideological polarization (the alignment and divergence of ideologies) has remained relatively unchanged, affective polarization (the degree to which Democrats and Republicans dislike each other) has skyrocketed (Gentzkow, 2016; Lelkes, 2016). This dislike for members of the opposite party may be amplified on social media platforms.
Social media have been accused of making our social networks increasingly insular, resulting in “echo chambers,” wherein individuals select information and friends who support their already held beliefs (Quattrociocchi, Scala, and Sunstein, 2016; Williams, McMurray, Kurz, and Lambert, 2015). The implicit message in Zuckerberg’s comments, however, and in those of other leaders in this space, is that social media can provide users with a means for brokering relationships with users who hold different values and beliefs. Yet little is known about the extent to which social media platforms enable these opportunities.
Theories of prejudice reduction (Paluck and Green, 2009) partially explain an idealistic outcome of improved online relationships. In his seminal contact theory, Gordon Allport (1954) argued that under certain optimal conditions, all that is needed to reduce prejudice is for members of different groups to spend more time interacting with each other. However, contemporary social media platforms may not be doing enough to increase intergroup engagements, especially between politically polarized communities on issues of importance.
In this paper, we use Twitter data collected over a 20-day period following the Day of Action for Net Neutrality on July 12, 2017. The Day of Action was organized by advocacy groups and corporations in support of an open internet — one that does not discriminate against online users accessing their preferred content — a highly polarized regulatory issue. Analyzing 81,316 tweets about #netneutrality from 40,502 distinct users, we use social network analysis to develop network visualizations and conduct discrete content analysis of central tweets. Our research also divides the content between those in support of and those opposed to any repeal of net neutrality rules by the FCC.
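The paper’s code isn’t included; as a minimal sketch of the kind of social network analysis described, one can build a directed retweet/mention edge list and rank users by in-degree to find “central” accounts. The handles and edges below are invented for illustration, and real work at this scale would typically use a library such as networkx.

```python
# Hypothetical sketch: degree centrality over a retweet/mention network.
# Each edge runs from the tweeting user to the user they retweet or mention;
# all handles here are invented for illustration.
from collections import Counter

edges = [
    ("alice", "eff_org"), ("bob", "eff_org"), ("carol", "eff_org"),
    ("dave", "fcc_critic"), ("erin", "fcc_critic"),
    ("alice", "bob"),
]

# In-degree: how often a user is retweeted or mentioned — a rough proxy
# for how central they are in the #netneutrality conversation.
in_degree = Counter(target for _, target in edges)

for user, count in in_degree.most_common(2):
    print(user, count)
```

Comparing the top-ranked accounts on each side of the issue, as the paper does, then amounts to running this on the pro-repeal and anti-repeal subsets separately and checking whether any account bridges both.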
Our analysis of this particular issue reveals that social media merely replicates, and potentially strengthens, polarization along party affiliations and online associations. Mediators able to bridge online conversations or beliefs on charged issues appear to be nonexistent on both sides. Our findings therefore suggest that social media companies may not be doing enough to bring communities together through meaningful conversations on their platforms….(More)”.