Digital mile-markers provide navigation in cities


Springwise: “UK-based Maynard Design Consultancy has developed a system to help people navigate the changing landscape of city neighbourhoods. A prototype of a wayfinding solution for districts in London combines smart physical markers and navigational apps. The physical markers, inspired by traditional mile markers, include a digital screen. They provide real-time information, including daily news and messages from local businesses. The markers also track how people use the park, providing valuable information to the city and urban planners. The partnering apps provide up-to-date information about the changing environment in the city, such as on-going construction and delays due to large-scale events.

Unlike traditional, smartphone-based navigational apps, this concept uses technology to help us reconnect with our surroundings, Maynard Design said.

The proposal won the Smart London District Challenge competition set by the Institute for Sustainability. Maynard is currently looking for partner companies to pilot its concept.

Takeaway: The Maynard design represents the latest effort to use smart markers and companion apps to amplify public safety announcements, general information, and messages from local businesses. The concept moves past traditional wayfinding markers to link people to a smart-city grid. By tracking how people use parks and other urban spaces, the markers will provide valuable insight for city officials. We expect more innovations like this as cities increasingly move toward seamless communication between services and city residents, aided by smart technologies. Over the past several months, we have seen technology to connect drivers to parking spaces and a prototype pavement that can change functions based on people’s needs….(More)”

How Tech Utopia Fostered Tyranny


Jon Askonas at The New Atlantis: “The rumors spread like wildfire: Muslims were secretly lacing a Sri Lankan village’s food with sterilization drugs. Soon, a video circulated that appeared to show a Muslim shopkeeper admitting to drugging his customers — he had misunderstood the question that was angrily put to him. Then all hell broke loose. Over a several-day span, dozens of mosques and Muslim-owned shops and homes were burned down across multiple towns. In one home, a young journalist was trapped, and perished.

Mob violence is an old phenomenon, but the tools encouraging it, in this case, were not. As the New York Times reported in April, the rumors were spread via Facebook, whose newsfeed algorithm prioritized high-engagement content, especially videos. “Designed to maximize user time on site,” as the Times article describes, the newsfeed algorithm “promotes whatever wins the most attention. Posts that tap into negative, primal emotions like anger or fear, studies have found, produce the highest engagement, and so proliferate.” On Facebook in Sri Lanka, posts with incendiary rumors had among the highest engagement rates, and so were among the most highly promoted content on the platform. Similar cases of mob violence have taken place in India, Myanmar, Mexico, and elsewhere, with misinformation spread mainly through Facebook and the messaging tool WhatsApp.

This is in spite of Facebook’s decision in January 2018 to tweak its algorithm, apparently to prevent the kind of manipulation we saw in the 2016 U.S. election, when posts and election ads originating from Russia reportedly showed up in newsfeeds of up to 126 million American Facebook users. The company explained that the changes to its algorithm will mean that newsfeeds will be “showing more posts from friends and family and updates that spark conversation,” and “less public content, including videos and other posts from publishers or businesses.” But these changes, which Facebook had tested out in countries like Sri Lanka in the previous year, may actually have exacerbated the problem — which is that incendiary content, when posted by friends and family, is guaranteed to “spark conversation” and therefore to be prioritized in newsfeeds. This is because “misinformation is almost always more interesting than the truth,” as Mathew Ingram provocatively put it in the Columbia Journalism Review.
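
The ranking dynamic described above can be illustrated with a toy sketch. This is purely hypothetical code, not Facebook’s actual newsfeed algorithm: the weights, the friend-and-family boost, and the sample posts are all invented to show how engagement-weighted scoring can push incendiary content to the top.

```python
# Toy illustration of engagement-based feed ranking (NOT Facebook's real
# algorithm): all weights and posts below are invented for demonstration.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    is_friend: bool      # posted by a friend or family member?
    text: str
    reactions: int
    comments: int
    shares: int

def engagement_score(post: Post) -> float:
    # Comments and shares count more than passive reactions, and friend/family
    # content gets a boost, mirroring the article's point that posts which
    # "spark conversation" rise to the top of the feed.
    score = post.reactions * 1.0 + post.comments * 4.0 + post.shares * 6.0
    if post.is_friend:
        score *= 1.5
    return score

feed = [
    Post("local_news", False, "Town council approves new budget", 120, 8, 3),
    Post("friend_a", True, "Outrageous rumor about a nearby shop!", 90, 60, 45),
    Post("friend_b", True, "Photos from the weekend hike", 150, 12, 2),
]

for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):8.1f}  {post.author}: {post.text}")
```

In this toy feed the rumor post wins despite drawing fewer reactions, because the comments and shares it provokes are weighted more heavily.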

How did we get here, from Facebook’s mission to “give people the power to build community and bring the world closer together”? Riot-inducing “fake news” and election meddling are obviously far from what its founders intended for the platform. Likewise, Google’s founders surely did not build their search engine with the intention of its being censored in China to suppress free speech, and yet, after years of refusing this demand from Chinese leadership, Google has recently relented rather than pull its search engine from China entirely. And YouTube’s creators surely did not intend their feature that promotes “trending” content to help clickbait conspiracy-theory videos go viral.

These outcomes — not merely unanticipated by the companies’ founders but outright opposed to their intentions — are not limited to social media. So far, Big Tech companies have presented issues of incitement, algorithmic radicalization, and “fake news” as merely bumps on the road of progress, glitches and bugs to be patched over. In fact, the problem goes deeper, to fundamental questions of human nature. Tools based on the premise that access to information will only enlighten us and social connectivity will only make us more humane have instead fanned conspiracy theories, information bubbles, and social fracture. A tech movement spurred by visions of libertarian empowerment and progressive uplift has instead fanned a global resurgence of populism and authoritarianism.

Despite the storm of criticism, Silicon Valley has still failed to recognize in these abuses a sharp rebuke of its sunny view of human nature. It remains naïvely blind to how its own aspirations for social engineering are on a spectrum with the tools’ “unintended” uses by authoritarian regimes and nefarious actors….(More)”.

How to keep good research from dying a bad death: Strategies for co-creating research with impact


Blog post by Bridget Konadu Gyamfi and Bethany Park…: “Researchers are often invested in disseminating the results of their research to the practitioners and policymakers who helped enable it—but disseminating a paper, developing a brief, or even holding an event may not truly empower decision-makers to make changes based on the research.

Disseminate results in stages and determine next steps

Mapping evidence to real-world decisions and processes in order to determine the right course of action can be complex. Together with our partners, we gather the troops—researchers, implementers, and IPA’s research and policy team—and have a discussion around what the implications of the research are for policy and practice.

This staged dissemination is critically important: having private discussions first helps partners digest the results and think through their reactions in a lower-stakes setting. We help the partners think about not only the results, but how their stakeholders will respond to the results, and how we can support their ongoing learning, whether results are “good” or not as hoped. Later, we hold larger dissemination events to inform the public. But we try to work closely with researchers and implementers to think through next steps right after results are available—before the window of opportunity passes.

Identify & prioritize policy opportunities

Many of our partners have already written smart advice about how to identify policy opportunities (windows, openings… etc.), so there’s no need for us to restate all that great thinking (go read it!). However, we get asked frequently how we prioritize policy opportunities, and we do have a clear internal process for making that decision. Here are our criteria:

[Figure: High Impact Policy Activities]
  1. A body of evidence to build on: One single study doesn’t often present the best policy opportunities. This is a generalization, of course, and there are exceptions, but typically our policy teams pay the most attention to bodies of evidence that are coming to a consensus. These are the opportunities for which we feel most able to recommend next steps related to policy and practice—there is a clearer message to communicate and research conclusions we can state with greater confidence.
  2. Relationships to open doors: Our long-term in-country presence and deep involvement with partners through research projects means that we have many relationships and doors open to us. Yet some of these relationships are stronger than others, and some partners are more influential in the processes we want to impact. We use stakeholder mapping tools to clarify who is invested and who has influence. We also track our stakeholder outreach to make sure our relationships stay strong and mutually beneficial.
  3. A concrete decision or process that we can influence: This is the typical understanding of a “policy opening,” and it’s an important one. What are the partner’s priorities, felt needs, and open questions? Where do those create opportunities for our influence? If the evidence would indicate one course of action, but that course isn’t even an option our partner would consider or be able to consider (for cost or other practical reasons), we have to give the opportunity a pass.
  4. Implementation funding: In the countries where we work, even when we have strong relationships, strong evidence, and the partner is open to influence, there is still one crucial ingredient missing: implementation funding. Addressing this constraint means getting evidence-based programming onto the agenda of major donors.

Get partners on board

Forming a coalition of partners and funders who will partner with us as we move forward is crucial. As a research and policy organization, we can’t scale effective solutions alone—nor is that the specialty that we want to develop, since there are others to fill that role. We need partners like Evidence Action Beta to help us pressure test solutions as they move towards scale, or partners like Living Goods who already have nationwide networks of community health workers who can reach communities efficiently and effectively. And we need governments who are willing to make public investments and decisions based on evidence….(More)”.

This is how AI bias really happens—and why it’s so hard to fix


Karen Hao at MIT Technology Review: “Over the past few months, we’ve documented how the vast majority of AI’s applications today are based on the category of algorithms known as deep learning, and how deep-learning algorithms find patterns in data. We’ve also covered how these technologies affect people’s lives: how they can perpetuate injustice in hiring, retail, and security and may already be doing so in the criminal legal system.

But it’s not enough just to know that this bias exists. If we want to be able to fix it, we need to understand the mechanics of how it arises in the first place.

How AI bias happens

We often shorthand our explanation of AI bias by blaming it on biased training data. The reality is more nuanced: bias can creep in long before the data is collected as well as at many other stages of the deep-learning process. For the purposes of this discussion, we’ll focus on three key stages.

Framing the problem. The first thing computer scientists do when they create a deep-learning model is decide what they actually want it to achieve. A credit card company, for example, might want to predict a customer’s creditworthiness, but “creditworthiness” is a rather nebulous concept. In order to translate it into something that can be computed, the company must decide whether it wants to, say, maximize its profit margins or maximize the number of loans that get repaid. It could then define creditworthiness within the context of that goal. The problem is that “those decisions are made for various business reasons other than fairness or discrimination,” explains Solon Barocas, an assistant professor at Cornell University who specializes in fairness in machine learning. If the algorithm discovered that giving out subprime loans was an effective way to maximize profit, it would end up engaging in predatory behavior even if that wasn’t the company’s intention.
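
As a rough illustration of how the framing decision shapes everything downstream, the hedged sketch below derives two different training labels from the same invented loan records: one framed around repayment, one framed around profit. The column names, numbers, and threshold are all hypothetical.

```python
# Hypothetical sketch of the "framing" stage: the same loan records yield two
# different prediction targets depending on what the business decides
# "creditworthiness" means. All values below are invented.
import pandas as pd

loans = pd.DataFrame({
    "customer":        ["a", "b", "c", "d"],
    "repaid_in_full":  [True, True, False, False],
    "interest_earned": [50.0, 40.0, 900.0, 30.0],   # subprime loan "c" was costly
    "collection_cost": [0.0, 0.0, 200.0, 120.0],    # to collect but still profitable
})

# Framing 1: creditworthy = likely to repay the loan.
loans["label_repayment"] = loans["repaid_in_full"]

# Framing 2: creditworthy = likely to maximize profit margin.
loans["label_profit"] = (loans["interest_earned"] - loans["collection_cost"]) > 35.0

print(loans[["customer", "label_repayment", "label_profit"]])
# Customer "c" defaulted but was profitable, so a profit-framed model would be
# trained to approve similar (potentially predatory) loans.
```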

Collecting the data. There are two main ways that bias shows up in training data: either the data you collect is unrepresentative of reality, or it reflects existing prejudices. The first case might occur, for example, if a deep-learning algorithm is fed more photos of light-skinned faces than dark-skinned faces. The resulting face recognition system would inevitably be worse at recognizing darker-skinned faces. The second case is precisely what happened when Amazon discovered that its internal recruiting tool was dismissing female candidates. Because it was trained on historical hiring decisions, which favored men over women, it learned to do the same.
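
A minimal sketch of the first failure mode, checking whether the training data is representative before training on it, might look like the following; the column names, the 80/20 split, and the accuracy figures are invented for illustration.

```python
# Representation audit on a hypothetical face dataset: a skewed group split
# tends to show up later as a gap in per-group accuracy.
import pandas as pd

faces = pd.DataFrame({
    "skin_tone": ["light"] * 800 + ["dark"] * 200,   # 80/20 split -- skewed
    "correctly_recognized": [True] * 780 + [False] * 20
                          + [True] * 150 + [False] * 50,
})

# Share of each group in the training data.
print(faces["skin_tone"].value_counts(normalize=True))

# Recognition accuracy broken down by group: the under-represented group fares worse.
print(faces.groupby("skin_tone")["correctly_recognized"].mean())
```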

Preparing the data. Finally, it is possible to introduce bias during the data preparation stage, which involves selecting which attributes you want the algorithm to consider. (This is not to be confused with the problem-framing stage. You can use the same attributes to train a model for very different goals or use very different attributes to train a model for the same goal.) In the case of modeling creditworthiness, an “attribute” could be the customer’s age, income, or number of paid-off loans. In the case of Amazon’s recruiting tool, an “attribute” could be the candidate’s gender, education level, or years of experience. This is what people often call the “art” of deep learning: choosing which attributes to consider or ignore can significantly influence your model’s prediction accuracy. But while its impact on accuracy is easy to measure, its impact on the model’s bias is not.
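
The sketch below illustrates the attribute-selection point on synthetic data: the same kind of model is trained twice with different attribute sets, and accuracy is easy to compare, while the effect of each choice on the model’s bias is not visible in that single number. All data, column names, and coefficients are made up.

```python
# Sketch of the "preparing the data" stage: comparing the same model class
# trained on two different attribute sets. Entirely synthetic data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "income_thousands": rng.normal(50, 15, n),
    "age":              rng.integers(21, 70, n),
    "paid_off_loans":   rng.integers(0, 10, n),
})
# Synthetic target: repayment mostly driven by income and repayment history.
logit = 0.04 * (df["income_thousands"] - 50) + 0.3 * (df["paid_off_loans"] - 5)
df["repaid"] = rng.random(n) < 1 / (1 + np.exp(-logit))

for attributes in (["age"], ["income_thousands", "paid_off_loans"]):
    X_train, X_test, y_train, y_test = train_test_split(
        df[attributes], df["repaid"], random_state=0)
    model = LogisticRegression().fit(X_train, y_train)
    print(attributes, round(model.score(X_test, y_test), 3))
# Accuracy is easy to compare across attribute sets; the impact of each choice
# on the model's bias cannot be read off this single number.
```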

Why AI bias is hard to fix

Given that context, some of the challenges of mitigating bias may already be apparent to you. Here we highlight four main ones….(More)”

Institutions as Social Theory


Blogpost by Titus Alexander: “The natural sciences comprise a set of institutions and methods designed to improve our understanding of the physical world. One of the most powerful things science does is to produce theories – models of reality – that are used by others to change the world. The benefits of using science are so great that societies have created many channels to develop and use research to improve the human condition.

Social scientists also seek to improve the human condition. However, the channels from research to application are often weak and most social research is buried in academic papers and books. Some will inform policy via think tanks, civil servants or pressure groups but practitioners and politicians often prefer their own judgement and prejudices, using research only when it suits them. But a working example – the institution as the method – has more influence than a research paper. The evidence is tangible, like an experiment in natural science, and includes all the complexities of real life. It demonstrates its reliability over time and provides proof of what works.

Reflexivity is key to social science

In the physical sciences the investigator is separate from the subject of investigation and she or he has no influence on what they observe. Generally, theories in the human sciences cannot provide this kind of detached explanation, because societies are reflexive. When we study human behaviour we also influence it. People change what they do in response to being studied. They use theories to change their own behaviour or the behaviour of others. Many scholars and practitioners have explored reflexivity, including Albert Bandura, Pierre Bourdieu and the financier George Soros. Anthony Giddens called it the ‘double hermeneutic’.

The fact that society is reflexive is the key to effective social science. Like scientists, societies create systematic detachment to increase objectivity in decision-making, through advisers, boards, regulators, opinion polls and so on. Peer-reviewed social science research is a form of detachment, but it is often so detached as to be irrelevant….(More)”.

Hundreds of Bounty Hunters Had Access to AT&T, T-Mobile, and Sprint Customer Location Data for Years


Joseph Cox at Motherboard: “In January, Motherboard revealed that AT&T, T-Mobile, and Sprint were selling their customers’ real-time location data, which trickled down through a complex network of companies until eventually ending up in the hands of at least one bounty hunter. Motherboard was also able to purchase the real-time location of a T-Mobile phone on the black market from a bounty hunter source for $300. In response, telecom companies said that this abuse was a fringe case.

In reality, it was far from an isolated incident.

Around 250 bounty hunters and related businesses had access to AT&T, T-Mobile, and Sprint customer location data, with one bail bond firm using the phone location service more than 18,000 times, and others using it thousands or tens of thousands of times, according to internal documents obtained by Motherboard from a company called CerCareOne, a now-defunct location data seller that operated until 2017. The documents list not only the companies that had access to the data, but specific phone numbers that were pinged by those companies.

In some cases, the data sold is more sensitive than that offered by the service used by Motherboard last month, which estimated a location based on the cell phone towers that a phone connected to. CerCareOne sold cell phone tower data, but also sold highly sensitive and accurate GPS data to bounty hunters, an unprecedented move that meant users could locate someone accurately enough to see where they were inside a building. This company operated in near-total secrecy for over five years by making its customers agree to “keep the existence of CerCareOne.com confidential,” according to a terms of use document obtained by Motherboard.

Some of these bounty hunters then resold location data to those unauthorized to handle it, according to two independent sources familiar with CerCareOne’s operations.

The news shows how widely available Americans’ sensitive location data was to bounty hunters. This ease of access dramatically increased the risk of abuse….(More)”.

The Untapped Potential of Civic Technology


DemocracyLab: “Today’s most significant problems are being addressed primarily by governments, using systems and tools designed hundreds of years ago. From climate change to inequality, the status quo is proving inadequate, and time is running out.

The role of our democratic institutions is analogous to breathing — inhaling citizen input and exhaling government action. The civic technology movement is inventing new ways to gather input, make decisions and execute collective action. The science fiction end state is an enlightened collective intelligence. But in the short term, it’s enough to seek incremental improvements in how citizens are engaged and government services are delivered. This will increase our chances of solving a wide range of problems in communities of all scales.

The match between the government and tech sectors is complementary. Governments and nonprofits are widely perceived as lagging in technological adoption and innovation. The tech sector’s messiah complex has been muted by Cambridge Analytica, but the principles of user-centered design, iterative development, and continuous learning have not lost their value. Small groups of committed technologists can easily test hypotheses about ways to make institutions work better. Trouble is, it’s really hard for them to earn a living doing it.

The Problem for Civic Tech

The unique challenge facing civic tech was noted by Fast Forward, a tech nonprofit accelerator, in a recent report that aptly described the chicken and egg problem plaguing tech nonprofits:

Many foundations will not fund a nonprofit without signs of proven impact. Tech nonprofits are unique. They must build their product before they can prove impact, and they cannot build the tech product without funding.

This is compounded by the fact that government procurement processes are often protracted and purchasers risk averse. Rather than a thousand flowers blooming in learning-rich civic experiments, civic entrepreneurs are typically frustrated and ineffectual, finding that their ideas are difficult to monetize, met with skepticism by government, and starved for capital.

These challenges are described well in the Knight and Rita Allen Foundations’ report Scaling Civic Tech. The report notes the difference between “buyer” revenue that is earned from providing services, and “builder” capital that is invested to increase organizations’ capacities. The report calls for more builder capital investment and better coordination among donors. Other recommendations made in the report include building competencies within organizations by tapping into knowledge sharing resources and skilled volunteerism, measuring and communicating impact, and nurturing infrastructure that supports collaboration….(More)”.

What Makes a City Street Smart?


Taxi and Limousine Commission (TLC): “Cities aren’t born smart. They become smart by understanding what is happening on their streets. Measurement is key to management, and amid the incomparable expansion of for-hire transportation service in New York City, measuring street activity is more important than ever. Between 2015 (when app companies first began reporting data) and June 2018, trips by app services increased more than 300%, now totaling over 20 million trips each month. That’s more cars, more drivers, and more mobility.

We know the true scope of this transformation today only because of the New York City Taxi and Limousine Commission’s (TLC) pioneering regulatory actions. Unlike most cities in the country, app services cannot operate in NYC unless they give the City detailed information about every trip. This is mandated by TLC rules and is not contingent on companies voluntarily “sharing” only a self-selected portion of the large amount of data they collect. Major trends in the taxi and for-hire vehicle industry are highlighted in TLC’s 2018 Factbook.

What Transportation Data Does TLC Collect?

Notably, Uber, Lyft, and their competitors today must give the TLC granular data about each and every trip and request for service. TLC does not receive passenger information; we require only the data necessary to understand traffic patterns, working conditions, vehicle efficiency, service availability, and other important information.

One of the most important aspects of the data TLC collects is that they are stripped of identifying information and made available to the public. Through the City’s Open Data portal, TLC’s trip data help businesses distinguish new business opportunities from saturated markets, encourage competition, and help investors follow trends in both new app transportation and the traditional car service and hail taxi markets. As app companies contemplate going public, their investors have surely already bookmarked TLC’s Open Data site.
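
As a hedged example of how a developer might pull aggregate trip counts from the City’s Open Data portal (which exposes datasets through the Socrata API), the sketch below uses a placeholder dataset ID and a hypothetical column name; the real identifiers should be looked up on data.cityofnewyork.us before use.

```python
# Hedged sketch of querying TLC trip records on the NYC Open Data portal via
# the Socrata API. DATASET_ID and the column name are placeholders, not the
# real identifiers -- check data.cityofnewyork.us for the actual dataset.
import requests

DATASET_ID = "xxxx-xxxx"  # placeholder for a TLC for-hire-vehicle trips dataset
url = f"https://data.cityofnewyork.us/resource/{DATASET_ID}.json"

params = {
    "$select": "pickup_borough, count(*) as trips",  # hypothetical column name
    "$group": "pickup_borough",
    "$order": "trips DESC",
    "$limit": 10,
}
response = requests.get(url, params=params, timeout=30)
response.raise_for_status()

for row in response.json():
    print(row)
```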

Using Data to Improve Mobility

With this information NYC now knows people are getting around the boroughs using app services and shared rides with greater frequency. These are the same NYC neighborhoods that traditionally were not served by yellow cabs and often have less robust public transportation options. We also know these services provide an increasing number of trips in congested areas like Manhattan and the inner rings of Brooklyn and Queens, where public transportation options are relatively plentiful….(More)”.

Toward an Open Data Demand Assessment and Segmentation Methodology


Stefaan Verhulst and Andrew Young at IADB: “Across the world, significant time and resources are being invested in making government data accessible to all with the broad goal of improving people’s lives. Evidence of open data’s impact – on improving governance, empowering citizens, creating economic opportunity, and solving public problems – is emerging and is largely encouraging. Yet much of the potential value of open data remains untapped, in part because we often do not understand who is using open data or, more importantly, who is not using open data but could benefit from the insights it may generate. By identifying, prioritizing, segmenting, and engaging with the actual and future demand for open data in a systemic and systematic way, practitioners can ensure that open data is more targeted. Understanding and meeting the demand for open data can increase overall impact and return on investment of public funds.

The GovLab, in partnership with the Inter-American Development Bank and with the support of the French Development Agency, developed the Open Data Demand Assessment and Segmentation Methodology to provide open data policymakers and practitioners with an approach for identifying, segmenting, and engaging with demand. This process specifically seeks to empower data champions within public agencies who want to improve their data’s ability to improve people’s lives….(More)”.

Evidence vs Democracy: what are we doing to bridge the divide?


Jonathan Breckon and Anna Hopkins at the Alliance for Useful Evidence: “People are hacked off with politicians. Whether it’s hurling abuse at MPs outside the House of Commons, or the burning barricades of Gilets Jaunes in Toulouse, discontent is in the air.

The evidence movement must respond to the ‘politics of distrust’. We cannot carry on regardless. For evidence advocates like us, reaching over the heads of the public to get research into the hands of elite policy-makers is not enough. Let’s be honest and accept that a lot of our work goes on behind closed doors. The UK’s nine What Works Centres only rarely engage with the public – more often with professionals, budget holders or civil servants. The evidence movement needs to democratise.

However, the difficulty is that evidence is hard work. It needs slow thinking and at least a passing knowledge of statistics, economics, or science. How on earth can you do all that on Twitter or Facebook?

In a report published today we look at ‘mini-publics’ – an alternative democratic platform to connect citizens with research. Citizens’ Juries, Deliberative Polls, Consensus Conferences and other mini-publics are forums that bring people and evidence together for constructive, considered debate. Ideally, people work in small, randomly chosen groups and have the chance to interrogate experts in the field in question.

This is not a new idea. The idea of a ‘minipopulus’ was set out by the American political theorist Robert Dahl in the 1970s. Indeed, there is an even older heritage. Athenian classical democracy did for a time select small groups of officials by lot.

It’s also not a utopian idea from the past, as we have found many promising recent examples. For example, in the UK, a Citizens’ Assembly on adult social care gave recommendations to two parliamentary Select Committees last year. There are also examples of citizens contributing to our public institutions and agendas by deliberating – through NICE’s Citizens Council or the James Lind Alliance.

We shouldn’t ignore this resistance to the mood of disaffection. Initiatives like the RSA’s Campaign for Deliberative Democracy are making the case for a step-change. To break the political deadlock on Brexit, there has been a call to create a Citizens’ Assembly on Brexit by former Prime Minister Gordon Brown, Stella Creasy MP and others. And there are many hopeful visions of a democratic future from abroad – like the experiments in Canada and Australia. Our report explores many of these international examples.

Citizens can make informed decisions – if we allow them to be citizens. They can understand, debate and interrogate research in platforms like mini-publics. And they can use evidence to help make the case for their priorities and concerns….(More)”.