Most Public Engagement is Worthless


Charles Marohn at Strong Towns: “…Our thinking is a byproduct of the questions we ask. …I’m a planner and I’m a policy nerd. I had all the training in how to hold a public meeting and solicit feedback through SWOT (strengths, weaknesses, opportunities, threats) questions. I’ve been taught how to reach out to marginalized groups and make sure they too have a voice in the process. That is, so long as that voice fit into the paradigm of a planner and a policy nerd. Or so long as I could make it fit.

Modern Planner: What percentage of the city budget should we spend on parks?

Steve Jobs: Do you use the park?

Our planning efforts should absolutely be guided by the experiences of real people. But their actions are the data we should be collecting, not their stated preferences. To do the latter is to get comfortable trying to build a better Walkman. We should be designing the city equivalent of the iPod: something that responds to how real people actually live. It’s a messier and less affirming undertaking.

I’ve come to the point in my life where I think municipal comprehensive planning is worthless. More often than not, it is a mechanism to wrap a veneer of legitimacy around the large policy objectives of influential people. Most cities would be better off putting together a good vision statement and a set of guiding principles for making decisions, then getting on with it.

That is, get on with the hard work of iteratively building a successful city. That work is a simple, four-step process:

  1. Humbly observe where people in the community struggle.
  2. Ask the question: What is the next smallest thing we can do right now to address that struggle?
  3. Do that thing. Do it right now.
  4. Repeat.

It’s challenging to be humble, especially when you are in a position, or are part of a profession, whose internal narrative tells you that you already know what to do. It’s painful to observe, especially when that means confronting messy realities that do not fit with your view of the world. It’s unsatisfying, at times, to try many small things when the “obvious” fix is right there. If only those around you just shared your “courage” to undertake it (of course, with no downside to you if you’re wrong). If only people had the patience to see it through (while they, not you, continue to struggle in the interim).

Yet what if we humbly observe where people in our community struggle—if we use the experiences of others as our data—and we continually take the actions we are capable of taking, right now, to alleviate those struggles? And what if we do this in neighborhood after neighborhood across the entire city, month after month and year after year? If we do that, not only will we make the lowest-risk, highest-returning public investments it is possible to make, we can’t help but improve people’s lives in the process….(More)”.

To the smart city and beyond? Developing a typology of smart urban innovation


Maja Nilssen in Technological Forecasting and Social Change: “The smart city is an increasingly popular topic in urban development, arousing both excitement and skepticism. However, despite increasing enthusiasm regarding the smartness of cities, the concept is still regarded as somewhat elusive. Encouraged by the multifaceted character of the concept, this article examines how we can categorize the different dimensions often included in the smart city concept, and how these dimensions are coupled to innovation. Furthermore, the article examines the implications of the different understandings of the smart city concept for cities’ abilities to be innovative.

Building on existing scholarly contributions on the smartness of cities and innovation literature, the article develops a typology of smart city initiatives based on the extent and types of innovations they involve. The typology is structured as a smart city continuum, comprising four dimensions of innovation: (1) technological, (2) organizational, (3) collaborative, and (4) experimental.

The smart city continuum is then utilized to analyze empirical data from a Norwegian urban development project triggered by a critical juncture. The empirical data shows that the case holds elements of different dimensions of the continuum, supporting the need for a typology of smart cities as multifaceted urban innovation. The continuum can be used as an analytical model for different types of smart city initiatives, and thus shed light on what types of innovation are central in the smart city. Consequently, the article offers useful insights for both practitioners and scholars interested in smart city initiatives….(More)”
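To make the typology described above concrete, here is a minimal, purely illustrative Python sketch of how an analyst might score an initiative on the four dimensions and place it on the continuum. The dimension names follow the article; the scoring scale, thresholds, and category labels are assumptions made for illustration, not the paper’s own operationalization.

```python
# Illustrative sketch only: the four dimensions come from the article's typology;
# the 0-3 scoring scheme and classification thresholds are hypothetical assumptions.
from dataclasses import dataclass, field

DIMENSIONS = ("technological", "organizational", "collaborative", "experimental")

@dataclass
class SmartCityInitiative:
    name: str
    # Each dimension scored 0 (absent) to 3 (central), assigned by the analyst.
    scores: dict = field(default_factory=dict)

    def dominant_dimensions(self, threshold: int = 2) -> list:
        """Return the innovation dimensions this initiative leans on most."""
        return [d for d in DIMENSIONS if self.scores.get(d, 0) >= threshold]

    def position_on_continuum(self) -> str:
        """Place the initiative on a simple continuum, from a purely
        technology-centred smart city to multifaceted urban innovation."""
        dominant = self.dominant_dimensions()
        if dominant == ["technological"]:
            return "technology-centred smart city"
        if len(dominant) >= 3:
            return "multifaceted urban innovation"
        return "mixed smart city initiative"

# Hypothetical example: a project combining sensors, new governance routines,
# and citizen co-creation.
project = SmartCityInitiative(
    name="Harbour district pilot",
    scores={"technological": 3, "organizational": 2, "collaborative": 2, "experimental": 1},
)
print(project.dominant_dimensions())    # ['technological', 'organizational', 'collaborative']
print(project.position_on_continuum())  # multifaceted urban innovation
```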

Programmers need ethics when designing the technologies that influence people’s lives


Cherri M. Pancake at The Conversation: “Computing professionals are on the front lines of almost every aspect of the modern world. They’re involved in the response when hackers steal the personal information of hundreds of thousands of people from a large corporation. Their work can protect – or jeopardize – critical infrastructure like electrical grids and transportation lines. And the algorithms they write may determine who gets a job, who is approved for a bank loan or who gets released on bail.

Technological professionals are the first, and last, lines of defense against the misuse of technology. Nobody else understands the systems as well, and nobody else is in a position to protect specific data elements or ensure the connections between one component and another are appropriate, safe and reliable. As the role of computing continues its decades-long expansion in society, computer scientists are central to what happens next.

That’s why the world’s largest organization of computer scientists and engineers, the Association for Computing Machinery, of which I am president, has issued a new code of ethics for computing professionals. And it’s why ACM is taking other steps to help technologists engage with ethical questions….

ACM’s new ethics code has several important differences from the 1992 version. One has to do with unintended consequences. In the 1970s and 1980s, technologists built software or systems whose effects were limited to specific locations or circumstances. But over the past two decades, it has become clear that as technologies evolve, they can be applied in contexts very different from the original intent.

For example, computer vision research has led to ways of creating 3D models of objects – and people – based on 2D images, but it was never intended to be used in conjunction with machine learning in surveillance or drone applications. The old ethics code asked software developers to be sure a program would actually do what they said it would. The new version also exhorts developers to explicitly evaluate their work to identify potentially harmful side effects or potential for misuse.

Another example has to do with human interaction. In 1992, most software was being developed by trained programmers to run operating systems, databases and other basic computing functions. Today, many applications rely on user interfaces to interact directly with a potentially vast number of people. The updated code of ethics includes more detailed considerations about the needs and sensitivities of very diverse potential users – including discussing discrimination, exclusion and harassment….(More)”.

How Taiwan’s online democracy may show future of humans and machines


Shuyang Lin at the Sydney Morning Herald: “Taiwanese citizens have spent the past 30 years prototyping future democracy since the lifting of martial law in 1987. Public participation in Taiwan has developed through several formats, from face-to-face meetings to deliberation over the internet. This trajectory coincides with the advancement of technology, and as new tools arrived, democracy evolved.

The launch of vTaiwan (v for virtual, vote, voice and verb), an experiment that prototypes an open consultation process for civil society, showed that by using technology creatively, humanity can facilitate deep and fair conversations, form collective consensus, and deliver solutions we can all live with.

It is a prototype that helps us envision what future democracy could look like….

Decision-making is not an easy task, especially when it involves a larger group of people. Group decision-making can follow several protocols: mandate (decide, then take questions), advise (listen to input before deciding), consent (decide if no one objects), and consensus (decide only when everyone agrees). So there is a pressing need for us to collaborate in large-scale decision-making processes to update outdated standards and regulations.
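A minimal sketch of the four protocols just described, assuming a simplified approve/object/abstain vote model; the protocol names follow the article, but the rules as coded are illustrative, not vTaiwan’s actual mechanics.

```python
# Minimal sketch of four group decision protocols.
# The protocol names are from the article; the voting model and pass/fail
# rules are simplifying assumptions for illustration.
from enum import Enum

class Protocol(Enum):
    MANDATE = "mandate"      # a designated decider decides, then takes questions
    ADVISE = "advise"        # the decider listens to advice before deciding
    CONSENT = "consent"      # the proposal passes unless someone objects
    CONSENSUS = "consensus"  # the proposal passes only if everyone agrees

def group_decision(protocol: Protocol, votes: list[str], decider_approves: bool = True) -> bool:
    """Return True if the proposal passes under the given protocol.

    `votes` holds one of "agree", "object", or "abstain" per participant.
    """
    if protocol in (Protocol.MANDATE, Protocol.ADVISE):
        # Under mandate and advise, the outcome rests with the decider;
        # advise differs only in that input is gathered first.
        return decider_approves
    if protocol is Protocol.CONSENT:
        return not any(v == "object" for v in votes)
    if protocol is Protocol.CONSENSUS:
        return all(v == "agree" for v in votes)
    raise ValueError(f"unknown protocol: {protocol}")

votes = ["agree", "abstain", "agree"]
print(group_decision(Protocol.CONSENT, votes))    # True: no one objects
print(group_decision(Protocol.CONSENSUS, votes))  # False: not everyone agrees
```

In practice a consultation process may blend these modes; the sketch only marks where each protocol locates the final decision.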

The future of human knowledge is on the web. Technology can help us learn, communicate, and make better decisions faster and at larger scale. The internet could be the facilitator and AI could be the catalyst. It is extremely important to be aware that decision-making is not a one-off interaction. The most important direction for decision-making technology is to keep humans engaged in the process at any time, with a standing invitation to request and submit changes.

Humans have started working with computers, and we will continue to work with them. They will help us in the decision-making process and some will even make decisions for us; the actors in collaboration don’t necessarily need to be just humans. While it is up to us to decide what and when to opt in or opt out, we should work together with computers in a transparent, collaborative and inclusive space.

Where shall we go as a society? What do we want from technology? As Audrey Tang, Digital Minister without Portfolio of Taiwan, puts it: “Deliberation — listening to each other deeply, thinking together and working out something that we can all live with — is magical.”…(More)”.

Introducing the (World’s First) Ethical Operating System


Article by Paula Goldman and Raina Kumra: “Is it possible for tech developers to anticipate future risks? Or are these future risks so unknowable to us here in the present that, try as we might to make our tech safe, continued exposure to risks is simply the cost of engagement?

 Today, in collaboration with Institute for the Future (IFTF), a leading non-profit strategic futures organization, Omidyar Network is excited to introduce the Ethical Operating System (or Ethical OS for short), a toolkit for helping developers and designers anticipate the future impact of technologies they’re working on today. We designed the Ethical OS to facilitate better product development, faster deployment, and more impactful innovation — all while striving to minimize technical and reputational risks. The hope is that, with the Ethical OS in hand, technologists can begin to build responsibility into core business and product decisions, and contribute to a thriving tech industry.

The Ethical OS is already being piloted by nearly 20 tech companies, schools, and startups, including Mozilla and Techstars. We believe it can better equip technologists to grapple with three of the most pressing issues facing our community today:

  • If the technology you’re building right now will someday be used in unexpected ways, how can you hope to be prepared?
  • What new categories of risk should you pay special attention to right now?
  • Which design, team, or business model choices can actively safeguard users, communities, society, and your company from future risk?

As large sections of the public grow weary of a seemingly constant stream of data safety and security issues, and with growing calls for heightened government intervention and oversight, the time is now for the tech community to get this right.

We created the Ethical OS as a pilot to help make ethical thinking and future risk mitigation integral components of all design and development processes. It’s not going to be easy. The industry has far more work to do, both inside individual companies and collectively. But with our toolkit as a guide, developers will have a practical means of beginning to ensure their tech is as good as their intentions…(More)”.

China’s Aggressive Surveillance Technology Will Spread Beyond Its Borders


Already there are reports that Zimbabwe, for example, is turning to Chinese firms to implement nationwide facial-recognition and surveillance programs, wrapped into China’s infrastructure investments and a larger set of security agreements as well, including for policing online communication. The acquisition of black African faces will help China’s tech sector improve its overall data set.

Malaysia, too, announced new partnerships this spring with China to equip police with wearable facial-recognition cameras. There are quiet reports of Arab Gulf countries turning to China not just for the drone technologies America has denied but also for the authoritarian suite of surveillance, recognition, and data tools perfected in China’s provinces. In a recent article on Egypt’s military-led efforts to build a new capital city beyond Cairo’s chaos and revolutionary squares, a retired general acting as project spokesman declared, “a smart city means a safe city, with cameras and sensors everywhere. There will be a command center to control the entire city.” Who is financing construction? China.

While many governments are making attempts to secure this information, there have been several alarming stories of data leaks. Moreover, these national identifiers create an unprecedented opportunity for state surveillance at scale. What about collecting biometric information in nondemocratic regimes? In 2016, the personal details of nearly 50 million people in Turkey were leaked….

China and other determined authoritarian states may prove undeterrable in their zeal to adopt repressive technologies. A more realistic goal, as Georgetown University scholar Nicholas Wright has argued, is to sway countries on the fence by pointing out the reputational costs of repression and supporting those who are advocating for civil liberties in this domain within their own countries. Democracy promoters (which we hope will one day again include the White House) will also want to recognize the coming changes to the authoritarian public sphere. They can start now in helping vulnerable populations and civil society to gain greater technological literacy to advocate for their rights in new domains. It is not too early for governments and civil society groups alike to study what technological and tactical countermeasures exist to circumvent and disrupt new authoritarian tools.

Seven years ago, techno-optimists expressed hope that a wave of new digital tools for social networking and self-expression could help young people in the Middle East and elsewhere to find their voices. Today, a new wave of Chinese-led technological advances threatens to blossom into what we consider an “Arab spring in reverse”—in which the next digital wave shifts the pendulum back, enabling state domination and repression at a staggering scale and algorithmic effectiveness.

Americans are absolutely right to be urgently focused on countering Russian weaponized hacking and leaking as its primary beneficiary sits in the Oval Office. But we also need to be more proactive in countering the tools of algorithmic authoritarianism that will shape the worldwide future of individual freedom….(More)”.

Decision-Making, the Direction of Change, and the Governance of Complex, Large-Scale Settlement Systems


Chapter by William Bowen and Robert Gleeson in The Evolution of Human Settlements: “…argue that the evolutionary processes by which human settlements have evolved through countless experiments throughout millennia are the most likely paths for resolving today’s greatest problems. Darwin’s great insight has important implications for understanding decision-making, the direction of change and the governance of complex, large-scale settlement systems. Darwinian views accommodate fallible Homo sapiens making decisions, some of which work and others that do not. Darwinian views imply the value of diverse institutions and reliance upon general patterns of social, ideational, and technical interaction rather than upon specific policies designed to directly produce particular results for particular individuals, groups, and settlement systems. Solutions will evolve only if we ensure continuous, diverse, problem-solving initiatives….(More)”.

Humans are a post-truth species


Yuval Noah Harari at the Guardian: “….A cursory look at history reveals that propaganda and disinformation are nothing new, and even the habit of denying entire nations and creating fake countries has a long pedigree. In 1931 the Japanese army staged mock attacks on itself to justify its invasion of China, and then created the fake country of Manchukuo to legitimise its conquests. China itself has long denied that Tibet ever existed as an independent country. British settlement in Australia was justified by the legal doctrine of terra nullius (“nobody’s land”), which effectively erased 50,000 years of Aboriginal history. In the early 20th century, a favourite Zionist slogan spoke of the return of “a people without a land [the Jews] to a land without a people [Palestine]”. The existence of the local Arab population was conveniently ignored.

In 1969 Israeli prime minister Golda Meir famously said that there is no Palestinian people and never was. Such views are very common in Israel even today, despite decades of armed conflicts against something that doesn’t exist. For example, in February 2016 MP Anat Berko gave a speech in the Israeli parliament in which she doubted the reality and history of the Palestinian people. Her proof? The letter “p” does not even exist in Arabic, so how can there be a Palestinian people? (In Arabic, “F” stands for “P”, and the Arabic name for Palestine is Falastin.)

In fact, humans have always lived in the age of post-truth. Homo sapiens is a post-truth species, whose power depends on creating and believing fictions. Ever since the stone age, self-reinforcing myths have served to unite human collectives. Indeed, Homo sapiens conquered this planet thanks above all to the unique human ability to create and spread fictions. We are the only mammals that can cooperate with numerous strangers because only we can invent fictional stories, spread them around, and convince millions of others to believe in them. As long as everybody believes in the same fictions, we all obey the same laws, and can thereby cooperate effectively.

So if you blame Facebook, Trump or Putin for ushering in a new and frightening era of post-truth, remind yourself that centuries ago millions of Christians locked themselves inside a self-reinforcing mythological bubble, never daring to question the factual veracity of the Bible, while millions of Muslims put their unquestioning faith in the Qur’an. For millennia, much of what passed for “news” and “facts” in human social networks were stories about miracles, angels, demons and witches, with bold reporters giving live coverage straight from the deepest pits of the underworld. We have zero scientific evidence that Eve was tempted by the serpent, that the souls of all infidels burn in hell after they die, or that the creator of the universe doesn’t like it when a Brahmin marries an Untouchable – yet billions of people have believed in these stories for thousands of years. Some fake news lasts for ever.

I am aware that many people might be upset by my equating religion with fake news, but that’s exactly the point. When a thousand people believe some made-up story for one month, that’s fake news. When a billion people believe it for a thousand years, that’s a religion, and we are admonished not to call it fake news in order not to hurt the feelings of the faithful (or incur their wrath). Note, however, that I am not denying the effectiveness or potential benevolence of religion. Just the opposite. For better or worse, fiction is among the most effective tools in humanity’s toolkit. By bringing people together, religious creeds make large-scale human cooperation possible. They inspire people to build hospitals, schools and bridges in addition to armies and prisons. Adam and Eve never existed, but Chartres Cathedral is still beautiful. Much of the Bible may be fictional, but it can still bring joy to billions and encourage humans to be compassionate, courageous and creative – just like other great works of fiction, such as Don Quixote, War and Peace and Harry Potter….(More)”.

Civil Society as Public Conscience


Larry Kramer at the Stanford Social Innovation Review: “Does civil society address questions of values in ways that government and business cannot? This question makes sense if we presuppose limits on the values government and business can express. However, there are no such limits, as evidenced by the way both sectors have, throughout US history, taken positions and played roles on all sides of our nation’s great moral and political debates. This is hardly surprising inasmuch as “government” and “business,” no less than “civil society,” comprise a multiplicity of actors with widely divergent interests, passions, and beliefs. The principle of federalism is built on the idea (well-established empirically) that different governments, operating at different levels and in different places, will respond to problems differently, creating multiple channels for competitive democratic action. Likewise, the competitiveness of the marketplace ensures that, with rare exceptions, there are business interests on different sides of most questions.

Yet while government and business may not be monoliths, their decisions and actions are subject to predictable, systematic forms of distortion….

What sets civil society organizations apart is that they are free from precisely the forces that limit actors in government and business; they are neither responsible to voters nor (usually) restricted by market discipline. They can be entirely mission driven, which gives them the freedom to test controversial ideas, develop challenging positions, and advocate for change based wholly on the magnitude and meaning of an issue or objective. As important, they can use this freedom to intervene with government or business in ways that overcome or circumvent the obstacles that bias these sectors’ decisions and activities. Short-term pressures may make it difficult for government agencies to invest in experiments, for example, but they can take up proven concepts. Civil society organizations can establish the necessary proof and, within legal limits, help overcome political barriers that may block adoption. Nonprofit activity may likewise be able to correct market defects or foster conditions that encourage deeper business investment. Nonprofit leaders can take risks that government agents and business managers dependably shy away from, and they can stay with efforts that take time to show results.

More profoundly, nonprofits have the freedom to play the role of “prodder,” of idea advocate, of irritant to systems that need to be irritated. Civil society can be our public conscience, helping make sure that we do not turn our back on fundamental values, or forget about those who lack market and political power.

There is a rub, of course (there’s always a rub). Civil society organizations may be free from political and market discipline, but only by subjecting themselves to the whims and caprice of philanthropic funders. This alternative distortion is to some extent blunted by the pluralistic, decentralized nature of the funder community; there are a great many funders out there, and they represent a broad range of ideologies, interests, and viewpoints. But the flaws in this system are many and well known. Scrambling for dollars is time-consuming and difficult, and most funders restrict their support while failing to cover a grantee’s full costs. Awkward differences between how funders and grantees understand a problem or think it should be addressed are common. Nonprofits understandably feel that funders sometimes undervalue their expertise and front-line experience, while funders just as understandably feel responsible for making independent judgments about how nonprofits should use their resources. And while the funder community is more pluralistic than its critics allow, many viewpoints and approaches indubitably fail to find support—sometimes for worse, as well as for better…(More)”.

Mapping the Privacy-Utility Tradeoff in Mobile Phone Data for Development


Paper by Alejandro Noriega-Campero, Alex Rutherford, Oren Lederman, Yves A. de Montjoye, and Alex Pentland: “Today’s age of data holds high potential to enhance the way we pursue and monitor progress in the fields of development and humanitarian action. We study the relation between data utility and privacy risk in large-scale behavioral data, focusing on mobile phone metadata as a paradigmatic domain. To measure utility, we survey experts about the value of mobile phone metadata at various spatial and temporal granularity levels. To measure privacy, we propose a formal and intuitive measure of reidentification risk—the information ratio—and compute it at each granularity level. Our results confirm the existence of a stark tradeoff between data utility and reidentifiability, where the most valuable datasets are also most prone to reidentification. When data is specified at ZIP-code and hourly levels, outside knowledge of only 7% of a person’s data suffices for reidentification and retrieval of the remaining 93%. In contrast, in the least valuable dataset, specified at municipality and daily levels, reidentification requires on average outside knowledge of 51%, or 31 data points, of a person’s data to retrieve the remaining 49%. Overall, our findings show that coarsening data directly erodes its value, and highlight the need for using data-coarsening, not as a stand-alone mechanism, but in combination with data-sharing models that provide adjustable degrees of accountability and security….(More)”.
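To illustrate the intuition behind this tradeoff, here is a toy Python sketch. It is not the paper’s information-ratio measure or dataset: it uses synthetic traces and a simple uniqueness count, and the population size, granularity factors, and number of known points are all assumptions chosen for illustration.

```python
# Toy illustration of the coarsening/reidentifiability intuition described above.
# NOT the paper's metric or data: synthetic traces, hypothetical granularity levels.
import random

random.seed(0)

N_PEOPLE = 500
POINTS_PER_PERSON = 20
N_LOCATIONS = 400     # fine-grained spatial cells (ZIP/antenna-like)
N_HOURS = 24 * 7      # one week at hourly resolution

def synthetic_traces():
    """Generate (location, hour) metadata points for each person."""
    return {
        person: [(random.randrange(N_LOCATIONS), random.randrange(N_HOURS))
                 for _ in range(POINTS_PER_PERSON)]
        for person in range(N_PEOPLE)
    }

def coarsen(point, loc_factor, time_factor):
    """Reduce granularity, e.g. merge cells into districts and hours into days."""
    loc, hour = point
    return (loc // loc_factor, hour // time_factor)

def share_unique(traces, known_points, loc_factor, time_factor):
    """Fraction of people uniquely pinned down by `known_points` coarsened points."""
    coarse = {p: {coarsen(pt, loc_factor, time_factor) for pt in pts}
              for p, pts in traces.items()}
    known = {p: {coarsen(pt, loc_factor, time_factor) for pt in pts[:known_points]}
             for p, pts in traces.items()}
    unique = 0
    for person in traces:
        # Count how many full traces contain the adversary's known points.
        matches = sum(1 for trace in coarse.values() if known[person] <= trace)
        unique += (matches == 1)
    return unique / len(traces)

traces = synthetic_traces()
for label, loc_f, time_f in [("fine granularity (cell, hourly)", 1, 1),
                             ("coarse granularity (district, daily)", 20, 24)]:
    frac = share_unique(traces, known_points=3, loc_factor=loc_f, time_factor=time_f)
    print(f"{label}: {frac:.0%} of people unique given 3 known points")
```

Under these toy settings, nearly everyone is unique at fine granularity, while coarsening substantially lowers the share of unique traces, a qualitative echo of the pattern the paper quantifies.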