Stefaan Verhulst
Chapter by Sarah Hartmann, Agnes Mainka and Wolfgang G. Stock in Enhancing Knowledge Discovery and Innovation in the Digital Era: “Cities all over the world are challenged by problems arising from increasing urbanization, population growth, and density. For example, one prominent issue addressed in many cities is mobility. To develop smart city solutions, governments are trying to introduce open innovation. They have started to open their governmental and city-related data, as well as to awaken citizens’ awareness of urban problems through innovation contests.
Citizens are the users of the city and therefore have a practical motivation to engage in innovation contests such as hackathons and app competitions. The collaboration and co-creation of civic services by means of innovation contests marks a cultural shift in how governments and citizens work together in an open governmental environment. A qualitative analysis of innovation contests in 24 world cities reveals this global trend. In particular, such events increase the awareness of citizens and local businesses in identifying and solving urban challenges and are helpful means of translating the smart city idea into practicable solutions….(More)”.
Mara Hvistendahl at Wired: “…During the past 30 years, by contrast, China has grown to become the world’s second largest economy without much of a functioning credit system at all. The People’s Bank of China, the country’s central banking regulator, maintains records on millions of consumers, but they often contain little or no information. Until recently, it was difficult to get a credit card with any bank other than your own. Consumers mainly used cash….
In 2013, Ant Financial executives retreated to the mountains outside Hangzhou to discuss creating a slew of new products; one of them was Zhima Credit. The executives realized that they could use the data-collecting powers of Alipay to calculate a credit score based on an individual’s activities. “It was a very natural process,” says You Xi, a Chinese business reporter who detailed this pivotal meeting in a recent book, Ant Financial. “If you have payment data, you can assess the credit of a person.” And so the tech company began the process of creating a score that would be “credit for everything in your life,” as You explains it.
Ant Financial wasn’t the only entity keen on using data to measure people’s worth. Coincidentally or not, in 2014 the Chinese government announced it was developing what it called a system of “social credit.” That year, the State Council, China’s governing cabinet, publicly called for the establishment of a nationwide tracking system to rate the reputations of individuals, businesses, and even government officials. The aim is for every Chinese citizen to be trailed by a file compiling data from public and private sources by 2020, and for those files to be searchable by fingerprints and other biometric characteristics. The State Council calls it a “credit system that covers the whole society.”…
In 2015 Ant Financial was one of eight tech companies granted approval from the People’s Bank of China to develop their own private credit scoring platforms. Zhima Credit appeared in the Alipay app shortly after that. The service tracks your behavior on the app to arrive at a score between 350 and 950, and offers perks and rewards to those with good scores. Zhima Credit’s algorithm considers not only whether you repay your bills but also what you buy, what degrees you hold, and the scores of your friends. Like Fair and Isaac decades earlier, Ant Financial executives talked publicly about how a data-driven approach would open up the financial system to people who had been locked out, like students and rural Chinese. For the more than 200 million Alipay users who have opted in to Zhima Credit, the sell is clear: Your data will magically open doors for you….
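Zhima Credit’s actual model is proprietary, but the article’s description of its inputs can be made concrete with a toy sketch. Everything below is invented for illustration: the function name, the weights, and the signals are assumptions; only the 350–950 range and the categories of input (repayment, purchases, education, friends’ scores) come from the text.

```python
# Illustrative sketch only: Zhima Credit's real algorithm is not public.
# This toy function combines the *kinds* of signals the article mentions,
# with made-up weights, and clamps the result to the reported 350-950 range.

def toy_credit_score(repayment_rate, spending_signal, has_degree, friend_scores):
    """Combine hypothetical signals into a single 350-950 score."""
    score = 350                             # floor of the reported range
    score += 400 * repayment_rate           # share of bills repaid on time (0-1)
    score += 100 * spending_signal          # invented "what you buy" signal (0-1)
    score += 50 if has_degree else 0        # education credential
    if friend_scores:                       # social graph: friends' average score
        avg = sum(friend_scores) / len(friend_scores)
        score += 50 * (avg - 350) / 600     # scaled contribution, at most +50
    return min(950, max(350, round(score)))

print(toy_credit_score(0.95, 0.6, True, [700, 800]))  # → 873
```

The point of the sketch is the last term: because friends’ scores feed into your own, low-scoring social ties drag a score down, which is what makes the design socially consequential in a way a pure repayment-history score is not.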
Often, data brokers are flat-out wrong. The data broker Acxiom, which provides some information about what it collects on a site called AboutTheData.com, has me pegged as a single woman with a high school education and a “likely Las Vegas gambler,” when in fact I’m married, have a master’s degree, and have never even bought a lottery ticket. But it is impossible to challenge these assessments, since we’re never told that they exist. I know more about Zhima Credit’s algorithm than I do about how US data brokers rate me. This is, as Pasquale points out in his book The Black Box Society, essentially a “one-way mirror.”…(More)”.
Andrea Hickerson at the Oxford Bibliographies: “Until recently, I firmly believed whistleblowers would increasingly turn to secure, anonymizing tools and websites, like WikiLeaks, to share their data rather than take the risk of relying on a journalist to protect their identity. Now, however, WikiLeaks is implicated in aiding the election of Donald Trump, and “The Silence Breakers,” outspoken victims of sexual assault, are Time’s 2017 Person of the Year.
Not only is this moment remarkable because of the willingness of whistleblowers to come forward and show their faces, but also because women are the ones blowing the whistle. With the notable exception of Chelsea Manning, who herself did not choose to be identified, the most well-known whistleblowers in modern history, arguably Daniel Ellsberg, Edward Snowden, and Jeffrey Wigand, are all men.
Research suggests key individual and organizational attributes that lend themselves to whistleblowing. On the individual level, people motivated by strong moral values or self-identity might be more likely to act. At the organizational level, individuals are more likely to report wrongdoing if they believe they will be listened to.
People who have faith in the organizations they work for are more likely to report wrongdoing internally. Those who don’t look to the government or the press, or hire lawyers, to expose the wrongdoing.
Historically, women wouldn’t have been likely candidates to report internally because they haven’t been listened to or empowered in the workplace. At work they are undervalued, underrepresented in leadership roles, and underpaid compared to male colleagues. This signals to women that their concerns will not be taken seriously or instigate change. Therefore, many choose to remain silent.
Whistleblowing comes with enormous risks, and those risks are greater for women….(More)”.
Symposium Paper by Dominique Boullier: “When in 2007 Savage and Burrows pointed out ‘the coming crisis of empirical methods’, they were not expecting to be so right. Their paper however became a landmark, signifying the social sciences’ reaction to the tremendous shock triggered by digital methods. As they frankly acknowledge in a more recent paper, they did not even imagine the extent to which their prediction might become true, in an age of Big Data, where sources and models have to be revised in the light of extended computing power and radically innovative mathematical approaches. They signalled not just a debate about academic methods but also a momentum for ‘commercial sociology’ in which platforms acquire the capacity to add ‘another major nail in the coffin of academic sociology claims to jurisdiction over knowledge of the social’, because ‘research methods (are) an intrinsic feature of contemporary capitalist organisations’ (Burrows and Savage, 2014, p. 2). This need for a serious account of research methods is well attuned to the claims of Social Studies of Science, which should be applied to the social sciences as well.
I would like to build on these insights and principles of Burrows and Savage to propose an historical and systematic account of quantification during the last century, following in the footsteps of Alain Desrosières, and in which we see Big Data and Machine Learning as a major shift in the way social science can be performed. And since, according to Burrows and Savage (2014, p. 5), ‘the use of new data sources involves a contestation over the social itself’, I will take the risk here of identifying and defining the entities that are supposed to encapsulate the social for each kind of method: beyond the reign of ‘society’ and ‘opinion’, I will point at the emergence of the ‘replications’ that are fabricated by digital platforms but are radically different from previous entities. This is a challenge to invent not only new methods but also a new process of reflexivity for societies, made available by new stakeholders (namely, the digital platforms) which transform reflexivity into reactivity (as operational quantifiers always tend to)….(More)”.
Project by WWF: “Can data on how people respond to a warming world help us anticipate human impacts on wildlife?
Not long after Nikhil Advani joined WWF in 2013, he made an intriguing discovery. Advani, whose work focuses on climate change adaptation, was assessing the vulnerability of various species to the changing planet. “I quickly realized,” he says, “that for a lot of the species that WWF works on—like elephants, mountain gorillas, and snow leopards—the biggest climate-driven threats are likely to come from human communities affected by changes in weather and climate.”
His realization led to the launch of Climate Crowd, an online platform for crowdsourcing data on two key topics: how rural and indigenous communities around the world are responding to climate change, and how their responses are affecting biodiversity. (The latter topic, Advani says, is something we know very little about yet.) For example, if community members enter protected areas to collect water during droughts, how will that activity affect the flora and fauna?
Working with partners from a handful of other conservation groups, Advani designed a survey for participants to use when interviewing local community members. Participants transcribe the interviews, mark each topic discussed in a list of categories (such as drought or natural habitat encroachment), and upload the results to the online platform…(Explore the Climate Crowd)”.
Sam Corbett-Davies, Sharad Goel and Sandra González-Bailón in The New York Times: “In courtrooms across the country, judges turn to computer algorithms when deciding whether defendants awaiting trial must pay bail or can be released without payment. The increasing use of such algorithms has prompted warnings about the dangers of artificial intelligence. But research shows that algorithms are powerful tools for combating the capricious and biased nature of human decisions.
Bail decisions have traditionally been made by judges relying on intuition and personal preference, in a hasty process that often lasts just a few minutes. In New York City, the strictest judges are more than twice as likely to demand bail as the most lenient ones.
To combat such arbitrariness, judges in some cities now receive algorithmically generated scores that rate a defendant’s risk of skipping trial or committing a violent crime if released. Judges are free to exercise discretion, but algorithms bring a measure of consistency and evenhandedness to the process.
The use of these algorithms often yields immediate and tangible benefits: Jail populations, for example, can decline without adversely affecting public safety.
In one recent experiment, agencies in Virginia were randomly selected to use an algorithm that rated both defendants’ likelihood of skipping trial and their likelihood of being arrested if released. Nearly twice as many defendants were released, and there was no increase in pretrial crime….(More)”.
Gillian Bolsover and Philip Howard in the Journal Big Data: “Computational propaganda has recently exploded into public consciousness. The U.S. presidential campaign of 2016 was marred by evidence, which continues to emerge, of targeted political propaganda and the use of bots to distribute political messages on social media. This computational propaganda is both a social and technical phenomenon. Technical knowledge is necessary to work with the massive databases used for audience targeting; it is necessary to create the bots and algorithms that distribute propaganda; it is necessary to monitor and evaluate the results of these efforts in agile campaigning. Thus, a technical knowledge comparable to those who create and distribute this propaganda is necessary to investigate the phenomenon.
However, viewing computational propaganda only from a technical perspective—as a set of variables, models, codes, and algorithms—plays into the hands of those who create it, the platforms that serve it, and the firms that profit from it. The very act of making something technical and impartial makes it seem inevitable and unbiased. This undermines the opportunities to argue for change in the social value and meaning of this content and the structures in which it exists. Big-data research is necessary to understand the socio-technical issue of computational propaganda and the influence of technology in politics. However, big data researchers must maintain a critical stance toward the data being used and analyzed so as to ensure that we are critiquing as we go about describing, predicting, or recommending changes. If research studies of computational propaganda and political big data do not engage with the forms of power and knowledge that produce it, then the very possibility for improving the role of social-media platforms in public life evaporates.
Definitionally, computational propaganda has two important parts: the technical and the social. Focusing on the technical, Woolley and Howard define computational propaganda as the assemblage of social-media platforms, autonomous agents, and big data tasked with the manipulation of public opinion. In contrast, the social definition of computational propaganda derives from the definition of propaganda—communications that deliberately misrepresent symbols, appealing to emotions and prejudices and bypassing rational thought, to achieve a specific goal of its creators—with computational propaganda understood as propaganda created or disseminated using computational (technical) means…(More) (Full Text HTML; Full Text PDF)
Andy Extance at Nature: “…The much-hyped technology behind Bitcoin, known as blockchain, has intoxicated investors around the world and is now making tentative inroads into science, spurred by broad promises that it can transform key elements of the research enterprise. Supporters say that it could enhance reproducibility and the peer review process by creating incorruptible data trails and securely recording publication decisions. But some also argue that the buzz surrounding blockchain often exceeds reality and that introducing the approach into science could prove expensive and introduce ethical problems.
A few collaborations, including Scienceroot and Pluto, are already developing pilot projects for science. Scienceroot aims to raise US$20 million, which will help pay both peer reviewers and authors within its electronic journal and collaboration platform. It plans to raise the funds in early 2018 by exchanging some of the science tokens it uses for payment for another digital currency known as ether. And the maker of the Wolfram Mathematica algebra program — which is widely used by researchers — is currently working towards offering support for an open-source blockchain platform called Multichain. Scientists could use this, for example, to upload data to a shared, open workspace that isn’t controlled by any specific party, according to Multichain….
Claudia Pagliari, who researches digital health-tracking technologies at the University of Edinburgh, UK, says that she recognizes the potential of blockchain, but researchers have yet to properly explore its ethical issues. What happens if a patient withdraws consent for a trial that is immutably recorded on a blockchain? And unscrupulous researchers could still add fake data to a blockchain, even if the process is so open that everyone can see who adds it, says Pagliari. Once added, no-one can change that information, although it’s possible they could label it as retracted….(More)”.
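The property Pagliari points to — that records, once added, cannot be changed without detection — follows from how blockchains link entries together. A minimal sketch (a toy hash chain, not Multichain’s actual API; the function names and record strings are invented):

```python
# Toy hash chain: each entry commits to the hash of the previous one,
# so retroactively editing any record breaks every link after it.
# This is an illustration of the principle, not any real platform's code.
import hashlib
import json

def add_record(chain, data):
    """Append a record whose hash covers both its data and its predecessor."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    chain.append({"data": data, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})
    return chain

def verify(chain):
    """Recompute every hash; any tampered record makes this return False."""
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps({"data": entry["data"], "prev": prev}, sort_keys=True)
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

chain = []
add_record(chain, "trial consent recorded")
add_record(chain, "dataset v1 uploaded")
print(verify(chain))           # True
chain[0]["data"] = "tampered"  # a retroactive edit...
print(verify(chain))           # ...is detectable: False
```

Note that the sketch also shows the ethical bind from the article: the same linkage that exposes tampering makes it impossible to honour a later withdrawal of consent by deleting a record, and fake data, once added, stays permanently in the chain too.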
Lauren Kirchner at ArsTechnica: “The algorithms that play increasingly central roles in our lives often emanate from Silicon Valley, but the effort to hold them accountable may have another epicenter: New York City. Last week, the New York City Council unanimously passed a bill to tackle algorithmic discrimination—the first measure of its kind in the country.
The algorithmic accountability bill, waiting to be signed into law by Mayor Bill de Blasio, establishes a task force that will study how city agencies use algorithms to make decisions that affect New Yorkers’ lives, and whether any of the systems appear to discriminate against people based on age, race, religion, gender, sexual orientation, or citizenship status. The task force’s report will also explore how to make these decision-making processes understandable to the public.
The bill’s sponsor, Council Member James Vacca, said he was inspired by ProPublica’s investigation into racially biased algorithms used to assess the criminal risk of defendants….
A previous, more sweeping version of the bill had mandated that city agencies publish the source code of all algorithms being used for “targeting services” or “imposing penalties upon persons or policing” and to make them available for “self-testing” by the public. At a hearing at City Hall in October, representatives from the mayor’s office expressed concerns that this mandate would threaten New Yorkers’ privacy and the government’s cybersecurity.
The bill was one of two moves the City Council made last week concerning algorithms. On Thursday, the committees on health and public safety held a hearing on the city’s forensic methods, including controversial tools that the chief medical examiner’s office crime lab has used for difficult-to-analyze samples of DNA.
As a ProPublica/New York Times investigation detailed in September, an algorithm created by the lab for complex DNA samples has been called into question by scientific experts and former crime lab employees.
The software, called the Forensic Statistical Tool, or FST, has never been adopted by any other lab in the country….(More)”.
Diane Coyle in the Financial Times: “As economists have been pointing out for a while, the combination of always-online smartphones and matching algorithms (of the kind pioneered by Nobel Prize-winning economist Alvin Roth and others) reduces the transaction costs involved in economic exchanges. As Ronald Coase argued, transaction costs, because they limit the scope of exchanges in the market, help explain why companies exist. Where those costs are high, it is more efficient to keep the activities inside an organisation. The relevant costs are informational. But in dense urban markets the new technologies make it easy and fast to match up the two sides of a desired exchange, such as a would-be passenger and a would-be driver for a specific journey. This can expand the market (as well as substituting for existing forms of transport such as taxis and buses). It becomes viable to serve previously under-served areas, or for people to make journeys they previously could not have afforded. In principle all parties (customers, drivers and platform) can share the benefits.
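The kind of matching Coyle describes can be made concrete with a minimal sketch. This is not any platform’s real algorithm — the greedy nearest-driver rule, the function name, and the coordinates are all assumptions — but it shows why coordination that was once prohibitively costly becomes cheap once both sides report their positions:

```python
# A toy dispatch matcher (illustrative only, not a real platform's method):
# greedily pair each would-be passenger with the nearest unassigned driver.

def match_rides(passengers, drivers):
    """passengers/drivers: dicts mapping name -> (x, y) position.
    Returns a dict mapping each matched passenger to a driver."""
    available = dict(drivers)
    matches = {}
    for name, (px, py) in passengers.items():
        if not available:
            break  # more passengers than drivers: the rest go unmatched
        # pick the closest remaining driver by squared distance
        best = min(available, key=lambda d: (available[d][0] - px) ** 2
                                            + (available[d][1] - py) ** 2)
        matches[name] = best
        del available[best]
    return matches

pairs = match_rides({"ana": (0, 0), "bo": (5, 5)},
                    {"d1": (1, 0), "d2": (6, 5)})
print(pairs)  # → {'ana': 'd1', 'bo': 'd2'}
```

Real market-design matching (stable matching, pricing, batching) is far more sophisticated, but even this greedy sketch illustrates the economic point: the informational cost of finding the other side of the exchange collapses to a few comparisons.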
Making more efficient use of assets such as cars is nice, but the economic gains come from matching demand with supply in contexts where there are no or few economies of scale — such as conveying a passenger or a patient from A to B, or providing a night’s accommodation in a specific place.
Rather than being seen as a threat to public services, the new technologies should be taken as a compelling opportunity to deliver inherently unscalable services efficiently, especially given the fiscal squeeze so many authorities are facing. Public transport is one opportunity. How else could cash-strapped transportation authorities even hope to provide a universal service on less busy routes? It is hard to see why they should not make contractual arrangements with private providers. Nor is there any good economic reason they could not adopt the algorithmic matching model themselves, although the organisational effort might defeat many.
However, in ageing societies the big prize will be organising other services, such as adult social care, this way. These are inherently person-to-person, and there are few economies of scale. The financial pressures on governments in delivering care are only going to grow. Adopting a more efficient model for matching demand and supply is so obvious a possible solution that pilot schemes are under way in several cities — both public sector-led and private start-ups. In fact, if public authorities do not try new models, the private sector will certainly fill the gap….(More)”.