Waze-fed AI platform helps Las Vegas cut car crashes by almost 20%


Liam Tung at ZDNet: “An AI-led, road-safety pilot program between analytics firm Waycare and Nevada transportation agencies has helped reduce crashes along the busy I-15 in Las Vegas.

The Silicon Valley-based Waycare system uses data from connected cars, road cameras and apps like Waze to build an overview of a city’s roads and then shares that data with local authorities to improve road safety.

Waycare struck a deal with Google-owned Waze earlier this year to “enable cities to communicate back with drivers and warn of dangerous roads, hazards, and incidents ahead”. Waze’s crowdsourced data also feeds into Waycare’s traffic management system, offering more data for cities to manage traffic.

Waycare has now wrapped up a year-long pilot with the Regional Transportation Commission of Southern Nevada (RTC), Nevada Highway Patrol (NHP), and the Nevada Department of Transportation (NDOT).

RTC reports that Waycare helped the city reduce the number of primary crashes by 17 percent along Interstate 15 in Las Vegas.

Waycare’s data, as well as its predictive analytics, gave the city’s safety and traffic management agencies the ability to take preventative measures in high-risk areas….(More)”.
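
Waycare has not published how its models work, but as a rough illustration of the kind of predictive analytics described above, a minimal sketch of a crash-risk classifier for road segments might look like the following (all feature names and data are hypothetical, not Waycare’s actual system):

```python
# Illustrative only: a toy crash-risk classifier for road segments.
# Feature names and data are hypothetical, not Waycare's actual system.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Historical observations, one row per segment and hour.
history = pd.DataFrame({
    "avg_speed_mph":  [68, 72, 45, 80, 55, 63],
    "speed_variance": [12,  4, 20,  3, 15,  8],
    "is_raining":     [ 0,  0,  1,  0,  1,  0],
    "waze_reports":   [ 5,  1,  9,  0,  7,  2],  # crowdsourced hazard reports
    "crash_occurred": [ 1,  0,  1,  0,  1,  0],  # label
})

X = history.drop(columns="crash_occurred")
y = history["crash_occurred"]
model = GradientBoostingClassifier().fit(X, y)

# Score current conditions and flag high-risk segments so agencies can act
# in advance (e.g. reposition patrols, update dynamic message signs).
risk = model.predict_proba(X)[:, 1]
print(X.assign(risk=risk).query("risk > 0.5"))
```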

Using Artificial Intelligence to Promote Diversity


Paul R. Daugherty, H. James Wilson, and Rumman Chowdhury at MIT Sloan Management Review:  “Artificial intelligence has had some justifiably bad press recently. Some of the worst stories have been about systems that exhibit racial or gender bias in facial recognition applications or in evaluating people for jobs, loans, or other considerations. One program was routinely recommending longer prison sentences for blacks than for whites on the basis of the flawed use of recidivism data.

But what if instead of perpetuating harmful biases, AI helped us overcome them and make fairer decisions? That could eventually result in a more diverse and inclusive world. What if, for instance, intelligent machines could help organizations recognize all worthy job candidates by avoiding the usual hidden prejudices that derail applicants who don’t look or sound like those in power or who don’t have the “right” institutions listed on their résumés? What if software programs were able to account for the inequities that have limited the access of minorities to mortgages and other loans? In other words, what if our systems were taught to ignore data about race, gender, sexual orientation, and other characteristics that aren’t relevant to the decisions at hand?

AI can do all of this — with guidance from the human experts who create, train, and refine its systems. Specifically, the people working with the technology must do a much better job of building inclusion and diversity into AI design by using the right data to train AI systems to be inclusive and thinking about gender roles and diversity when developing bots and other applications that engage with the public.
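
The authors do not provide an implementation, but the basic idea of keeping protected attributes out of the data a screening model learns from can be sketched in a few lines (column names and data here are hypothetical; in practice, dropping columns alone does not remove proxy variables, so outcomes still need to be audited):

```python
# Illustrative sketch: withhold protected attributes from the features a
# hiring-screen model is trained on. Column names and data are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

applicants = pd.DataFrame({
    "years_experience": [2, 7, 4, 10, 1],
    "skills_test":      [78, 91, 85, 60, 72],
    "gender":           ["f", "m", "f", "m", "f"],
    "race":             ["b", "w", "h", "w", "b"],
    "hired":            [1, 1, 1, 0, 0],
})

PROTECTED = ["gender", "race"]  # characteristics the model must not see
X = applicants.drop(columns=PROTECTED + ["hired"])
y = applicants["hired"]

model = LogisticRegression().fit(X, y)
# Dropping columns is only a first step: proxies such as zip code or school
# can still encode race or gender, so outcomes must be audited as well.
```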

Design for Inclusion

Software development remains the province of males — only about one-quarter of computer scientists in the United States are women — and minority racial groups, including blacks and Hispanics, are underrepresented in tech work, too. Groups like Girls Who Code and AI4ALL have been founded to help close those gaps. Girls Who Code has reached almost 90,000 girls from various backgrounds in all 50 states, and AI4ALL specifically targets girls in minority communities….(More)”.

Giving Voice to Patients: Developing a Discussion Method to Involve Patients in Translational Research


Paper by Marianne Boenink, Lieke van der Scheer, Elisa Garcia and Simone van der Burg in NanoEthics: “Biomedical research policy in recent years has often tried to make such research more ‘translational’, aiming to facilitate the transfer of insights from research and development (R&D) to health care for the benefit of future users. Involving patients in deliberations about and design of biomedical research may increase the quality of R&D and of resulting innovations and thus contribute to translation. However, patient involvement in biomedical research is not an easy feat. This paper discusses the development of a method for involving patients in (translational) biomedical research aiming to address its main challenges.

After reviewing the potential challenges of patient involvement, we formulate three requirements for any method to meaningfully involve patients in (translational) biomedical research. It should enable patients (1) to put forward their experiential knowledge, (2) to develop a rich view of what an envisioned innovation might look like and do, and (3) to connect their experiential knowledge with the envisioned innovation. We then describe how we developed the card-based discussion method ‘Voice of patients’, and discuss to what extent the method, when used in four focus groups, satisfied these requirements. We conclude that the method is quite successful in mobilising patients’ experiential knowledge, in stimulating their imaginaries of the innovation under discussion and to some extent also in connecting these two. More work is needed to translate patients’ considerations into recommendations relevant to researchers’ activities. It also seems wise to broaden the audience for patients’ considerations to other actors working on a specific innovation….(More)”

Explaining Explanations in AI


Paper by Brent Mittelstadt, Chris Russell and Sandra Wachter: “Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it’s important to remember Box’s maxim that “All models are wrong but some are useful.”

We focus on the distinction between these models and explanations in philosophy and sociology. These models can be understood as a “do it yourself kit” for explanations, allowing a practitioner to directly answer “what if questions” or generate contrastive explanations without external assistance. Although a valuable ability, giving these models as explanations appears more difficult than necessary, and other forms of explanation may not have the same trade-offs. We contrast the different schools of thought on what makes an explanation, and suggest that machine learning might benefit from viewing the problem more broadly… (More)”.
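
The paper itself contains no code, but the kind of simplified “do it yourself kit” model it discusses can be approximated by fitting an interpretable surrogate to a black-box model’s predictions; a minimal sketch on synthetic data (feature names are invented for illustration):

```python
# Illustrative sketch: fit an interpretable surrogate (a shallow decision
# tree) to a black-box model's predictions, then use it to answer
# "what if" questions. Data and feature names are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))  # e.g. income, debt, tenure (standardised)
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate is trained on the black box's outputs, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=2).fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=["income", "debt", "tenure"]))

# Contrastive "what if": the same applicant with lower debt.
applicant = np.array([[0.2, 0.9, 0.1]])
counterfactual = applicant.copy()
counterfactual[0, 1] = 0.1
print(surrogate.predict(applicant), surrogate.predict(counterfactual))
```

The trade-off the authors point to is visible here: the surrogate is easy to read and query, but it is only an approximation of the black box, so its answers can mislead exactly where the two models disagree.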

Recalculating GDP for the Facebook age


Gillian Tett at the Financial Times: How big is the impact of Facebook on our lives? That question has caused plenty of hand-wringing this year, as revelations have tumbled out about the political influence of Big Tech companies.

Economists are attempting to look at this question too — but in a different way. They have been quietly trying to calculate the impact of Facebook on gross domestic product data, ie to measure what our social-media addiction is doing to economic output….

Kevin Fox, an Australian economist, thinks there is a way to measure it. Working with four other economists, including Erik Brynjolfsson, a professor at MIT, he recently surveyed consumers to see what they would “pay” for Facebook in monetary terms, concluding conservatively that this was about $42 a month. Extrapolating this to the wider economy, he then calculated that the “value” of the social-media platform is equivalent to 0.11 per cent of US GDP. That might not sound transformational. But this week Fox presented the group’s findings at an IMF conference on the digital economy in Washington DC and argued that if Facebook activity had been counted as output in the GDP data, it would have raised the annual average US growth rate from 1.83 per cent to 1.91 per cent between 2003 and 2017. The number would rise further if you included other platforms – researchers believe that “maps” and WhatsApp are particularly important – or other services. Take photographs.
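
The growth-rate claim is easier to appreciate as back-of-the-envelope arithmetic; the snippet below only illustrates the compounding effect Fox describes, not the group’s methodology:

```python
# Illustration of the compounding effect (not the researchers' method):
# average annual growth of 1.91% vs 1.83% over 2003-2017.
years = 2017 - 2003                # 14 years
measured = 1.0183 ** years         # output index under conventional GDP
expanded = 1.0191 ** years         # output index if Facebook's value counted
print(f"measured growth factor: {measured:.3f}")                    # ~1.289
print(f"expanded growth factor: {expanded:.3f}")                    # ~1.303
print(f"difference in final level: {expanded / measured - 1:.2%}")  # ~1.1%
```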

Back in 2000, as the group points out, about 80 billion photos were taken each year at a cost of 50 cents a picture in camera and processing fees. This was recorded in GDP. Today, 1.6 trillion photos are taken each year, mostly on smartphones, for “free”, and excluded from that GDP data. What would happen if that was measured too, along with other types of digital services?
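
The photo example is simple arithmetic; the sketch below restates the quantities in the text, and the valuation of today’s “free” photos at the old per-picture price is purely illustrative, not a claim made by the researchers:

```python
# Quantities cited in the article, plus a purely illustrative valuation of
# today's "free" photos at the old per-picture price.
photos_2000 = 80e9        # photos per year around 2000
cost_per_photo = 0.50     # camera and processing fees, recorded in GDP
photos_today = 1.6e12     # photos per year now, mostly taken for "free"

recorded_then = photos_2000 * cost_per_photo        # $40bn, counted in GDP
hypothetical_now = photos_today * cost_per_photo    # $800bn, if priced the same
print(f"${recorded_then / 1e9:.0f}bn then vs ${hypothetical_now / 1e9:.0f}bn now")
```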

The bad news is that there is no consensus among economists on this point, and the debate is still at a very early stage. … A separate paper from Charles Hulten and Leonard Nakamura, economists at the University of Maryland and Philadelphia Fed respectively, explained another idea: a measurement known as “EGDP” or “Expanded GDP”, which incorporates “welfare” contributions from digital services. “The changes wrought by the digital revolution require changes to official statistics,” they said.

Yet another paper from Nakamura, co-written with Diane Coyle of Cambridge University, argued that we should also reconfigure the data to measure how we “spend” our time, rather than “just” how we spend our money. “To recapture welfare in the age of digitalisation, we need shadow prices, particularly of time,” they said. Meanwhile, US government number-crunchers have been trying to measure the value of “free” open-source software, such as R, Python, Julia and JavaScript, concluding that if captured in statistics these would be worth about $3bn a year. Another team of government statisticians has been trying to value the data held by companies, estimating, using one method, that Amazon’s data is currently worth $125bn, with a 35 per cent annual growth rate, while Google’s is worth $48bn, growing at 22 per cent each year. It is unlikely that these numbers – and methodologies – will become mainstream any time soon….(More)”.

The soft spot of hard code: blockchain technology, network governance and pitfalls of technological utopianism


Moritz Hutten at Global Networks: “The emerging blockchain technology is expected to contribute to the transformation of ownership, government services and global supply chains. By analysing a crisis that occurred with one of its frontrunners, Ethereum, in this article I explore the discrepancies between the purported governance of blockchains and the de facto control of them through expertise and reputation. Ethereum is also thought to exemplify libertarian techno‐utopianism.

When ‘The DAO’, a highly publicized but faulty crowd‐funded venture fund, was deployed on the Ethereum blockchain, the techno‐utopianism was suspended, and developers fell back on strong network ties. Now that blockchain technology is seeing increasing uptake, I shall also seek to unearth the broader implications of the blockchain for the proliferation or blockage of global finance and beyond. Contrasting claims about the disruptive nature of the technology, in this article I show that, by redeeming the positive utopia of ontic, individualized debt, blockchains reinforce our belief in a crisis‐ridden, financialized capitalism….(More)”.

Crowdlaw: Collective Intelligence and Lawmaking


Paper by Beth Noveck in Analyse & Kritik: “To tackle the fast-moving challenges of our age, law and policymaking must become more flexible, evolutionary and agile. Thus, in this Essay we examine ‘crowdlaw’, namely how city councils at the local level and parliaments at the regional and national level are turning to technology to engage with citizens at every stage of the law and policymaking process.

As we hope to demonstrate, crowdlaw holds the promise of improving the quality and effectiveness of outcomes by enabling policymakers to interact with a broader public using methods designed to serve the needs of both institutions and individuals. Crowdlaw is less a prescription for more deliberation to ensure greater procedural legitimacy by having better inputs into lawmaking processes than a practical demand for more collaborative approaches to problem-solving that yield better outputs, namely policies that achieve their intended aims. However, as we shall explore, the projects that most enhance the epistemic quality of lawmaking are those that are designed to meet the specific informational needs for that stage of problem-solving….(More)”.

Driven to safety — it’s time to pool our data


Kevin Guo at TechCrunch: “…Anyone with experience in the artificial intelligence space will tell you that quality and quantity of training data is one of the most important inputs in building real-world-functional AI. This is why today’s large technology companies continue to collect and keep detailed consumer data, despite recent public backlash. From search engines, to social media, to self driving cars, data — in some cases even more than the underlying technology itself — is what drives value in today’s technology companies.

It should be no surprise then that autonomous vehicle companies do not publicly share data, even in instances of deadly crashes. When it comes to autonomous vehicles, the public interest (making safe self-driving cars available as soon as possible) is clearly at odds with corporate interests (making as much money as possible on the technology).

We need to create industry and regulatory environments in which autonomous vehicle companies compete based upon the quality of their technology — not just upon their ability to spend hundreds of millions of dollars to collect and silo as much data as possible (yes, this is how much gathering this data costs). In today’s environment the inverse is true: autonomous car manufacturers are focusing on gathering as many miles of data as possible, with the intention of feeding more information into their models than their competitors, all the while avoiding working together….

The complexity of this data is diverse, yet public — I am not suggesting that people hand over private, privileged data, but actively pool and combine what the cars are seeing. There’s a reason that many of the autonomous car companies are driving millions of virtual miles — they’re attempting to get as much active driving data as they can. Beyond the fact that they drove those miles, what truly makes that data something that they have to hoard? By sharing these miles, by seeing as much of the world in as much detail as possible, these companies can focus on making smarter, better autonomous vehicles and bring them to market faster.

If you’re reading this and thinking it’s deeply unfair, I encourage you to once again consider that 40,000 people are preventably dying every year in America alone. If you are not compelled by the massive life-saving potential of the technology, consider that publicly licensable self-driving data sets would accelerate innovation by removing a substantial portion of the capital barrier-to-entry in the space and increasing competition….(More)”

Blockchain systems are tracking food safety and origins


Nir Kshetri at The Conversation: “When a Chinese consumer buys a package labeled “Australian beef,” there’s only a 50-50 chance the meat inside is, in fact, Australian beef. It could just as easily contain rat, dog, horse or camel meat – or a mixture of them all. It’s gross and dangerous, but also costly.

Fraud in the global food industry is a multi-billion-dollar problem that has lingered for years, duping consumers and even making them ill. Food manufacturers around the world are concerned – as many as 39 percent of them are worried that their products could be easily counterfeited, and 40 percent say food fraud is hard to detect.

In researching blockchain for more than three years, I have become convinced that this technology’s potential to prevent fraud and strengthen security could fight agricultural fraud and improve food safety. Many companies agree, and are already running various tests, including tracking wine from grape to bottle and even following individual coffee beans through international trade.

Tracing food items

An early trial of a blockchain system to track food from farm to consumer was in 2016, when Walmart collected information about pork being raised in China, where consumers are rightly skeptical about sellers’ claims of what their food is and where it’s from. Employees at a pork farm scanned images of farm inspection reports and livestock health certificates, storing them in a secure online database where the records could not be deleted or modified – only added to.

As the animals moved from farm to slaughter to processing, packaging and then to stores, the drivers of the freight trucks played a key role. At each step, they would collect documents detailing the shipment, storage temperature and other inspections and safety reports, and official stamps as authorities reviewed them – just as they did normally. In Walmart’s test, however, the drivers would photograph those documents and upload them to the blockchain-based database. The company controlled the computers running the database, but government agencies’ systems could also be involved, to further ensure data integrity.

As the pork was packaged for sale, a sticker was put on each container, displaying a smartphone-readable code that would link to that meat’s record on the blockchain. Consumers could scan the code right in the store and assure themselves that they were buying exactly what they thought they were. More recent advances in the technology of the stickers themselves have made them more secure and counterfeit-resistant.
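
Walmart’s production system is not described in detail here, but the core idea of an append-only, tamper-evident record that each handler extends and that a consumer can look up by sticker code can be sketched as a simple hash chain (a simplified illustration, not the actual design):

```python
# Simplified illustration, not Walmart's production system: an append-only,
# tamper-evident shipment record. Each handler appends a block that commits
# to the previous one; a consumer can fetch the history by sticker code.
import hashlib
import json
import time

class ShipmentLedger:
    def __init__(self):
        self.records = {}  # sticker code -> list of blocks

    def append(self, code, handler, document):
        chain = self.records.setdefault(code, [])
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        block = {
            "handler": handler,
            "document": document,     # e.g. inspection report, temperature log
            "timestamp": time.time(),
            "prev_hash": prev_hash,   # ties this block to the one before it
        }
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()
        ).hexdigest()
        chain.append(block)

    def history(self, code):
        return self.records.get(code, [])

ledger = ShipmentLedger()
ledger.append("PORK-0001", "farm", {"health_certificate": "ok"})
ledger.append("PORK-0001", "truck-12", {"storage_temp_c": 3.5})
ledger.append("PORK-0001", "store-88", {"received": True})

# What a consumer scanning the sticker code would see.
for block in ledger.history("PORK-0001"):
    print(block["handler"], block["document"])
```

In a real deployment the records would be replicated across the companies and agencies involved rather than held by a single party, which is what the blockchain layer adds.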

Walmart did similar tests on mangoes imported to the U.S. from Latin America. The company found that it took only 2.2 seconds for consumers to find out an individual fruit’s weight, variety, growing location, time it was harvested, date it passed through U.S. customs, when and where it was sliced, which cold-storage facility the sliced mango was held in and for how long it waited before being delivered to a store….(More)”.

Big Data Ethics and Politics: Toward New Understandings


Introductory paper by Wenhong Chen and Anabel Quan-Haase for a Special Issue of the Social Science Computer Review: “The hype around big data does not seem to abate, nor do the scandals. Privacy breaches in the collection, use, and sharing of big data have affected all the major tech players, be it Facebook, Google, Apple, or Uber, and go beyond the corporate world, including governments, municipalities, and educational and health institutions. What has come to light is that, enabled by the rapid growth of social media and mobile apps, various stakeholders collect and use large amounts of data, disregarding the ethics and politics.

As big data touch on many realms of daily life and have profound impacts in the social world, the scrutiny around big data practice becomes increasingly relevant. This special issue investigates the ethics and politics of big data using a wide range of theoretical and methodological approaches. Together, the articles provide new understandings of the many dimensions of big data ethics and politics, showing it is important to understand and increase awareness of the biases and limitations inherent in big data analysis and practices….(More)”