When Patients Become Innovators


Article by Harold DeMonaco, Pedro Oliveira, Andrew Torrance, Christiana von Hippel, and Eric von Hippel: “Patients are increasingly able to conceive and develop sophisticated medical devices and services to meet their own needs — often without any help from companies that produce or sell medical products. This “free” patient-driven innovation process enables them to benefit from important advances that are not commercially available. Patient innovation also can provide benefits to companies that produce and sell medical devices and services. For them, patient do-it-yourself efforts can be free R&D that informs and amplifies in-house development efforts.

In this article, we will look at two examples of free innovation in the medical field — one for managing type 1 diabetes and the other for managing Crohn’s disease. We will set these cases within the context of the broader free innovation movement that has been gaining momentum in an array of industries1 and apply the general lessons of free innovation to the specific circumstances of medical innovation by patients….

What is striking about both of these cases is that neither commercial medical producers nor the clinical care system offered a solution that these patients urgently needed. Motivated patients stepped forward to develop solutions for themselves, entirely without commercial support.4

Free innovation in the medical field follows the general pattern seen in many other areas, including crafts, sporting goods, home and garden equipment, pet products, and apparel.5 Enabled by technology, social media, and a keen desire to find solutions aligned with their own needs, consumers of all kinds are designing new products for themselves….(More)”


Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems


Introduction by A.F. Winfield, K. Michael, J. Pitt, and V. Evers to a Special Issue of Proceedings of the IEEE: “…The primary focus of this special issue is machine ethics, that is, the question of how autonomous systems can be imbued with ethical values. Ethical autonomous systems are needed because, inevitably, near future systems are moral agents; consider driverless cars, or medical diagnosis AIs, both of which will need to make choices with ethical consequences. This special issue includes papers that describe both implicit ethical agents, that is, machines designed to avoid unethical outcomes, and explicit ethical agents: machines which either encode or learn ethics and determine actions based on those ethics. Of course, ethical machines are socio-technical systems; thus, as a secondary focus, this issue includes papers that explore the societal and regulatory implications of machine ethics, including the question of ethical governance. Ethical governance is needed in order to develop standards and processes that allow us to transparently and robustly assure the safety of ethical autonomous systems and hence build public trust and confidence….(More)”.

China, India and the rise of the ‘civilisation state’


Gideon Rachman at the Financial Times: “The 19th century popularised the idea of the “nation state”. The 21st could be the century of the “civilisation state”. A civilisation state is a country that claims to represent not just a historic territory or a particular language or ethnic group, but a distinctive civilisation.

It is an idea that is gaining ground in states as diverse as China, India, Russia, Turkey and, even, the US. The notion of the civilisation state has distinctly illiberal implications. It implies that attempts to define universal human rights or common democratic standards are wrong-headed, since each civilisation needs political institutions that reflect its own unique culture. The idea of a civilisation state is also exclusive. Minority groups and migrants may never fit in because they are not part of the core civilisation.

One reason that the idea of the civilisation state is likely to gain wider currency is the rise of China. In speeches to foreign audiences, President Xi Jinping likes to stress the unique history and civilisation of China. This idea has been promoted by pro-government intellectuals, such as Zhang Weiwei of Fudan University. In an influential book, The China Wave: Rise of a Civilisational State, Mr Zhang argues that modern China has succeeded because it has turned its back on western political ideas — and instead pursued a model rooted in its own Confucian culture and exam-based meritocratic traditions. Mr Zhang was adapting an idea first elaborated by Martin Jacques, a western writer, in a bestselling book, When China Rules the World. “China’s history of being a nation state”, Mr Jacques argues, “dates back only 120-150 years: its civilisational history dates back thousands of years.” He believes that the distinct character of Chinese civilisation leads to social and political norms that are very different from those prevalent in the west, including “the idea that the state should be based on familial relations [and] a very different view of the relationship between the individual and society, with the latter regarded as much more important”. …

Civilisational views of the state are also gaining ground in Russia. Some of the ideologues around Vladimir Putin now embrace the idea that Russia represents a distinct Eurasian civilisation, which should never have sought to integrate with the west. In a recent article Vladislav Surkov, a close adviser to the Russian president, argued that his country’s “repeated fruitless efforts to become a part of western civilisation are finally over”. Instead, Russia should embrace its identity as “a civilisation that has absorbed both east and west” with a “hybrid mentality, intercontinental territory and bipolar history. It is charismatic, talented, beautiful and lonely. Just as a half-breed should be.” In a global system moulded by the west, it is unsurprising that some intellectuals in countries such as China, India or Russia should want to stress the distinctiveness of their own civilisations.

What is more surprising is that rightwing thinkers in the US are also retreating from the idea of “universal values” — in favour of emphasising the unique and allegedly endangered nature of western civilisation….(More)”.

Systems change and philanthropy


Introduction by Julian Corner to Special Issue of Alliance: “This special feature explores a growing aspiration in philanthropy to achieve system-level change. It looks at the potential and pitfalls by profiling a number of approaches adopted by different foundations….

While the fortunes of systems thinking have ebbed and flowed over the decades, it has mainly occurred on the margins of organisations. This time something different seems to be happening, at least in terms of philanthropy. A number of major foundations are embracing systems approaches as a core methodology. How should we understand this?…

I detect at least four broad approaches or attitudes to systems in foundations’ work, all of which have been at play in Lankelly Chase’s work at different points:

1. The system as a unit of intervention
Many foundations are trying to take in a broader canvas, recognising that both problems and solutions are generated by the interplay of multiple variables. They hope to find leverage points among these variables, so that their investment can unlock so-called system-level change. Some of their strategies include: working for policy changes, scaling disruptive innovations, supporting advocacy for people’s rights, and improving the evidence base used by system actors. These approaches seem to work best when there is common agreement on an identifiable system, such as the criminal justice system, which can be mapped and acted on.

2. Messy contested systems
Some foundations find they are drawn deeper into complexity. They unearth conflicting perspectives on the nature of the problem, especially when there is a power inequality between those defining it and those experiencing it. As greater interconnection emerges, the frame put around the canvas is shown to be arbitrary and the hope of identifying leverage points begins to look reductive. One person’s solution turns out to be another’s problem. Unable to predict how change might occur, foundations shift towards more exploratory and inquiring approaches. Rather than funding programmes or institutions, they seek to influence the conditions of change, focusing on collaborations, place-based approaches, collective impact, amplifying lesser heard voices, building skills and capacities, and reframing the narratives people hold.

3. Seeing yourself in the system
As appreciation of interconnection deepens, the way foundations earn money, how they make decisions, the people they choose to include in (and exclude from) their work, how they specify success, all come into play as parts of the system that need to change. These foundations realise that they aren’t just looking at a canvas, they are part of it. At Lankelly Chase, we now view our position as fundamentally paradoxical, given that we are seeking to tackle inequality by holding accumulated wealth. We have sought to model the behaviours of healthier systems, including delegated decision-making, mutual accountability, trust-based relationships, promoting equality of voice. By aiming for congruence between means and ends, we and our peers contend that effective practice and ethical practice become the same.

4. Beyond systems
There comes a point when the idea of systems itself can feel reductive. Different values are invoked: those of kindness and solidarity. The basis on which humans relate to each other becomes the core concern. Inspiration is sought in other histories and forms of spirituality, as suppressed narratives are surfaced. The frame of philanthropy itself is no longer a given, with mutuality and even reparation becoming the basis of an alternative paradigm.

….Foundations can be viewed as both ‘of’ and ‘outside’ any system. This is a tension that isn’t resolvable, but if handled with sufficient self-awareness could make foundations powerful systems practitioners….(More)”.


Seeing and Being Seen


Russell C. Bogue in The Hedgehog Review: “On May 20, 2013, a pale, nervous American landed in Hong Kong and made his way to the Mira Hotel. Once there, he met with reporters from The Guardian and the Washington Post and turned over thousands of documents his high-level security clearance had enabled him to acquire while working as a contractor for the National Security Agency. Soon after this exchange, the world learned about PRISM, a top-secret NSA program that granted (court-ordered) direct access to Facebook, Apple, Google, and other US Internet giants, including users’ search histories, e-mails, file transfers, and live chats.1 Additionally, Verizon had been providing information to the NSA on an “ongoing, daily basis” about customers’ telephone calls, including location data and call duration (although not the content of conversations).2 Everyone, in short, was being monitored. Glenn Greenwald, one of the first journalists to meet with Edward Snowden, and one of his most vocal supporters, wrote later that “the NSA is collecting all forms of electronic communications between Americans…and thereby attempting by definition to destroy any remnants of privacy both in the US and globally.”3

According to a 2014 Pew Research Center poll, fully 91 percent of Americans believe they have lost control over their personal information.4 What is such a public to do? Anxious computer owners have taken to covering their devices’ built-in cameras with bits of tape.5 Messaging services tout their end-to-end encryption.6 Researchers from Harvard Business School have started investigating the effectiveness of those creepy online ads that seem to know a little too much about your preferences.7

For some, this pushback has come far too late to be of any use. In a recent article in The Atlantic depressingly titled “Welcome to the Age of Privacy Nihilism,” Ian Bogost observes that we have already become unduly reliant on services that ask us to relinquish personal data in exchange for convenience. To reassert control over one’s privacy, one would have to abstain from credit card activity and use the Internet only sparingly. The worst part? We don’t get the simple pleasure of blaming this state of affairs on Big Government or the tech giants. Instead, our enemy is, as Bogost intones, “a hazy murk, a chilling, Lovecraftian murmur that can’t be seen, let alone touched, let alone vanquished.”8

The enemy may be a bit closer to home, however. While we fear being surveilled, recorded, and watched, especially when we are unaware, we also compulsively expose ourselves to others….(More)”.

Is Ethical A.I. Even Possible?


Cade Metz at The New York Times: “When a news article revealed that Clarifai was working with the Pentagon and some employees questioned the ethics of building artificial intelligence that analyzed video captured by drones, the company said the project would save the lives of civilians and soldiers.

“Clarifai’s mission is to accelerate the progress of humanity with continually improving A.I.,” read a blog post from Matt Zeiler, the company’s founder and chief executive, and a prominent A.I. researcher. Later, in a news media interview, Mr. Zeiler announced a new management position that would ensure all company projects were ethically sound.

As activists, researchers, and journalists voice concerns over the rise of artificial intelligence, warning against biased, deceptive and malicious applications, the companies building this technology are responding. From tech giants like Google and Microsoft to scrappy A.I. start-ups, many are creating corporate principles meant to ensure their systems are designed and deployed in an ethical way. Some set up ethics officers or review boards to oversee these principles.

But tensions continue to rise as some question whether these promises will ultimately be kept. Companies can change course. Idealism can bow to financial pressure. Some activists — and even some companies — are beginning to argue that the only way to ensure ethical practices is through government regulation....

As companies and governments deploy these A.I. technologies, researchers are also realizing that some systems are woefully biased. Facial recognition services, for instance, can be significantly less accurate when trying to identify women or someone with darker skin. Other systems may include security holes unlike any seen in the past. Researchers have shown that driverless cars can be fooled into seeing things that are not really there.

All this means that building ethical artificial intelligence is an enormously complex task. It gets even harder when stakeholders realize that ethics are in the eye of the beholder.

As some Microsoft employees protest the company’s military contracts, Brad Smith, Microsoft’s president, said that American tech companies had long supported the military and that they must continue to do so. “The U.S. military is charged with protecting the freedoms of this country,” he told the conference. “We have to stand by the people who are risking their lives.”

Though some Clarifai employees draw an ethical line at autonomous weapons, others do not. Mr. Zeiler argued that autonomous weapons will ultimately save lives because they would be more accurate than weapons controlled by human operators. “A.I. is an essential tool in helping weapons become more accurate, reducing collateral damage, minimizing civilian casualties and friendly fire incidents,” he said in a statement.

Google worked on the same Pentagon project as Clarifai, and after a protest from company employees, the tech giant ultimately ended its involvement. But like Clarifai, as many as 20 other companies have worked on the project without bowing to ethical concerns.

After the controversy over its Pentagon work, Google laid down a set of “A.I. principles” meant as a guide for future projects. But even with the corporate rules in place, some employees left the company in protest. The new principles are open to interpretation. And they are overseen by executives who must also protect the company’s financial interests….

In their open letter, the Clarifai employees said they were unsure whether regulation was the answer to the many ethical questions swirling around A.I. technology, arguing that the immediate responsibility rested with the company itself….(More)”.

Can Data Save U.N. Peacekeeping?


Adam Day at World Policy Review: “Does international peacekeeping protect civilians caught up in civil wars? Do the 16,000 United Nations peacekeepers deployed in the Democratic Republic of the Congo actually save lives, and if so how many? Did the 9,000 patrols conducted by the U.N. Mission in South Sudan in the past three months protect civilians there? 

The answer is a dissatisfying “maybe.” Without a convincing story of saving lives, the U.N. is open to attacks by the likes of White House national security adviser John Bolton, who call peacekeeping “unproductive” and push for further cuts to the organization’s already diminished budget. But peacekeeping can—and must—make a case for its own utility, using data already at its fingertips. …(More)”.

Whose Rules? The Quest for Digital Standards


Stephanie Segal at CSIS: “Prime Minister Shinzo Abe of Japan made news at the World Economic Forum in Davos last month when he announced Japan’s aspiration to make the G20 summit in Osaka a launch pad for “world-wide data governance.” This is not the first time in recent memory that Japan has taken a leadership role on an issue of keen economic importance. Most notably, the Trans-Pacific Partnership (TPP) lives on as the Comprehensive and Progressive Agreement on Trans-Pacific Partnership (CPTPP), thanks in large part to Japan’s efforts to keep the trading bloc together after President Trump announced U.S. withdrawal from the TPP. But it’s in the area of data and digital governance that Japan’s efforts will perhaps be most consequential for future economic growth.

Data has famously been called “the new oil” in the global economy. A 2016 report by the McKinsey Global Institute estimated that global data flows contributed $2.8 trillion in value to the global economy back in 2014, while cross-border data flows and digital trade continue to be key drivers of global trade and economic growth. Japan’s focus on data and digital governance is therefore consistent with its recent efforts to support global growth, deepen global trade linkages, and advance regional and global standards.

Data governance refers to the rules directing the collection, processing, storage, and use of data. The proliferation of smart devices and the emergence of a data-driven Internet of Things portend an exponential growth in digital data. At the same time, recent reporting on overly aggressive commercial practices of personal data collection, as well as the separate topic of illegal data breaches, has elevated public awareness and interest in the laws and policies that govern the treatment of data, and personal data in particular. Finally, a growing appreciation of data’s central role in driving innovation and future technological and economic leadership is generating concern in many capitals that different data and digital governance standards and regimes will convey a competitive (dis)advantage to certain countries.

Bringing these various threads together—the inevitable explosion of digital data; the need to protect an individual’s right to privacy; and the appreciation that data has economic value and conveys economic advantage—is precisely why Japan’s initiative is both timely and likely to face significant challenges….(More)”.

The privacy threat posed by detailed census data


Gillian Tett at the Financial Times: “Wilbur Ross suffered the political equivalent of a small(ish) black eye last month: a federal judge blocked the US commerce secretary’s attempts to insert a question about citizenship into the 2020 census and accused him of committing “egregious” legal violations.

The Supreme Court has agreed to hear the administration’s appeal in April. But while this high-profile fight unfolds, there is a second, less noticed, census issue about data privacy emerging that could have big implications for businesses (and citizens). Last weekend John Abowd, the Census Bureau’s chief scientist, told an academic gathering that statisticians had uncovered shortcomings in the protection of personal data in past censuses. There is no public evidence that anyone has actually used these weaknesses to hack records, and Mr Abowd insisted that the bureau is using cutting-edge tools to fight back. But, if nothing else, this revelation shows the mounting problem around data privacy. Or, as Mr Abowd noted: “These developments are sobering to everyone.” These flaws are “not just a challenge for statistical agencies or internet giants,” he added, but affect any institution engaged in internet commerce and “bioinformatics”, as well as commercial lenders and non-profit survey groups. Bluntly, this includes most companies and banks.

The crucial problem revolves around what is known as “re-identification” risk. When companies and government institutions amass sensitive information about individuals, they typically protect privacy in two ways: they hide the full data set from outside eyes or they release it in an “anonymous” manner, stripped of identifying details. The census bureau does both: it is required by law to publish detailed data and protect confidentiality. Since 1990, it has tried to resolve these contradictory mandates by using “household-level swapping” — moving some households from one geographic location to another to generate enough uncertainty to prevent re-identification. This used to work. But today there are so many commercially available data sets and computers are so powerful that it is possible to re-identify “anonymous” data by combining data sets. …
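The linkage attack described above can be sketched with toy data. This is a simplified illustration only — all records, names, and field choices below are invented for the example — but it shows why a handful of quasi-identifiers (here ZIP code, birth year, and sex) shared between an “anonymized” release and a public roster is often enough to recover identities:

```python
# Toy illustration of re-identification by linking two data sets.
# All records and field names are invented for this example.

anonymized_survey = [
    {"zip": "02138", "birth_year": 1954, "sex": "F", "diagnosis": "diabetes"},
    {"zip": "60614", "birth_year": 1987, "sex": "M", "diagnosis": "asthma"},
]

public_roster = [
    {"name": "J. Doe", "zip": "02138", "birth_year": 1954, "sex": "F"},
    {"name": "R. Roe", "zip": "60614", "birth_year": 1987, "sex": "M"},
]

def link(survey, roster):
    """Join on quasi-identifiers; a unique match re-identifies the record."""
    matches = []
    for s in survey:
        candidates = [r for r in roster
                      if (r["zip"], r["birth_year"], r["sex"]) ==
                         (s["zip"], s["birth_year"], s["sex"])]
        if len(candidates) == 1:  # unique combination -> identity recovered
            matches.append((candidates[0]["name"], s["diagnosis"]))
    return matches

print(link(anonymized_survey, public_roster))
# → [('J. Doe', 'diabetes'), ('R. Roe', 'asthma')]
```

Techniques like household swapping work by making such joins ambiguous; the attack fails only when each quasi-identifier combination matches several plausible people.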

Thankfully, statisticians think there is a solution. The Census Bureau now plans to use a technique known as “differential privacy” which would introduce “noise” into the public statistics, using complex algorithms. This technique is expected to create just enough statistical fog to protect personal confidentiality in published data — while also preserving information in an encrypted form that statisticians can later unscramble, as needed. Companies such as Google, Microsoft and Apple have already used variants of this technique for several years, seemingly successfully. However, nobody has employed this system on the scale that the Census Bureau needs — or in relation to such a high stakes event. And the idea has sparked some controversy because some statisticians fear that even “differential privacy” tools can be hacked — and others fret it makes data too “noisy” to be useful….(More)”.
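The “noise” idea can be sketched with the Laplace mechanism, a textbook building block of differential privacy. This is a minimal sketch, not the Census Bureau’s actual algorithm, and the parameter values are illustrative: for a count, where adding or removing one person changes the answer by at most 1, noise drawn from a Laplace distribution with scale 1/ε makes the published figure formally private while keeping it close to the truth.

```python
import math
import random

def private_count(true_count, epsilon):
    """Release a count with Laplace noise calibrated to sensitivity 1.

    epsilon is the privacy budget: smaller values mean more noise and
    stronger privacy. A count has sensitivity 1 (one person changes it
    by at most 1), so the noise scale is 1/epsilon.
    """
    scale = 1.0 / epsilon
    # Sample Laplace(0, scale) by inverse transform from Uniform(-0.5, 0.5).
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# A published statistic: number of households in a census block.
random.seed(0)
true_households = 412
released = private_count(true_households, epsilon=0.5)
```

The tension the column describes is visible in the single parameter ε: turn it down and the published counts become too “noisy” to be useful; turn it up and the statistical fog thins, weakening the privacy guarantee.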

Tomorrow’s Data Heroes


Article by Florian Gröne, Pierre Péladeau, and Rawia Abdel Samad: “Telecom companies are struggling to find a profitable identity in today’s digital sphere. What about helping customers control their information?…

By 2025, Alex had had enough. There no longer seemed to be any distinction between her analog and digital lives. Everywhere she went, every purchase she completed, and just about every move she made, from exercising at the gym to idly surfing the Web, triggered a vast flow of data. That in turn meant she was bombarded with personalized advertising messages, targeted more and more eerily to her. As she walked down the street, messages appeared on her phone about the stores she was passing. Ads popped up on her all-purpose tablet–computer–phone pushing drugs for minor health problems she didn’t know she had — until the symptoms appeared the next day. Worse, she had recently learned that she was being reassigned at work. An AI machine had mastered her current job by analyzing her use of the firm’s productivity software.

It was as if the algorithms of global companies knew more about her than she knew herself — and they probably did. How was it that her every action and conversation, even her thoughts, added to the store of data held about her? After all, it was her data: her preferences, dislikes, interests, friendships, consumer choices, activities, and whereabouts — her very identity — that was being collected, analyzed, profited from, and even used to manage her. All these companies seemed to be making money buying and selling this information. Why shouldn’t she gain some control over the data she generated, and maybe earn some cash by selling it to the companies that had long collected it free of charge?

So Alex signed up for the “personal data manager,” a new service that promised to give her control over her privacy and identity. It was offered by her U.S.-based connectivity company (in this article, we’ll call it DigiLife, but it could be one of many former telephone companies providing Internet services in 2025). During the previous few years, DigiLife had transformed itself into a connectivity hub: a platform that made it easier for customers to join, manage, and track interactions with media and software entities across the online world. Thanks to recently passed laws regarding digital identity and data management, including the “right to be forgotten,” the DigiLife data manager was more than window dressing. It laid out easy-to-follow choices that all Web-based service providers were required by law to honor….

Today, in 2019, personal data management applications like the one Alex used exist only in nascent form, and consumers have yet to demonstrate that they trust these services. Nor can they yet profit by selling their data. But the need is great, and so is the opportunity for companies that fulfill it. By 2025, the total value of the data economy as currently structured will rise to more than US$400 billion, and by monetizing the vast amounts of data they produce, consumers can potentially recapture as much as a quarter of that total.

Given the critical role of telecom operating companies within the digital economy — the central position of their data networks, their networking capabilities, their customer relationships, and their experience in government affairs — they are in a good position to seize this business opportunity. They might not do it alone; they are likely to form consortia with software companies or other digital partners. Nonetheless, for legacy connectivity companies, providing this type of service may be the most sustainable business option. It may also be the best option for the rest of us, as we try to maintain control in a digital world flooded with our personal data….(More)”.