A Better Reykjavik and a stronger community: The benefits of crowdsourcing and e-democracy


Dyfrig Williams at Medium: “2008 was a difficult time in Iceland. All three of the country’s major privately owned banks went under, which prompted a financial crisis that enveloped the country and even reached local authorities in Wales.

The Better Reykjavik website was launched before the municipal elections and became a hub for online participation.

  • 70,000 people participated out of a population of 120,000
  • 12,000 registered users submitted over 3,300 ideas and 5,500 points for and against
  • 257 ideas were formally reviewed, and 165 have been accepted since 2011

As an external not-for-profit website, Better Reykjavik was better able to involve people because it wasn’t perceived to be part of pre-existing political structures.

Elected members

In the run-up to the elections, the soon-to-be Mayor Jón Gnarr championed the platform at every opportunity. This backing from a prominent figure was key, as it publicised the site and showed that there was buy-in for the work at the highest level.

How does it work?

The website enables people to have a direct say in the democratic process, giving them the space to propose, debate and rate ways that their community can be improved. Every month the council is obliged to discuss the 10–15 highest rated ideas from the website….(More)
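
A minimal sketch of that monthly shortlisting step (hypothetical data structures and field names, not Better Reykjavik’s actual code): ideas accumulate points for and against, and the highest-rated handful is forwarded to the council.

```python
from dataclasses import dataclass

@dataclass
class Idea:
    title: str
    points_for: int
    points_against: int

    @property
    def net_rating(self) -> int:
        # Simple net score: supporting points minus opposing points.
        return self.points_for - self.points_against

def monthly_shortlist(ideas, limit=15):
    """Return the highest-rated ideas to put on the council's agenda."""
    return sorted(ideas, key=lambda i: i.net_rating, reverse=True)[:limit]

if __name__ == "__main__":
    ideas = [
        Idea("More bicycle lanes downtown", 412, 58),
        Idea("Longer swimming pool opening hours", 385, 120),
        Idea("Free Wi-Fi in public squares", 290, 95),
    ]
    for idea in monthly_shortlist(ideas, limit=2):
        print(idea.title, idea.net_rating)
```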

How to Hold Algorithms Accountable


Nicholas Diakopoulos and Sorelle Friedler at MIT Technology Review: “Algorithms are now used throughout the public and private sectors, informing decisions on everything from education and employment to criminal justice. But despite the potential for efficiency gains, algorithms fed by big data can also amplify structural discrimination, produce errors that deny services to individuals, or even seduce an electorate into a false sense of security. Indeed, there is growing awareness that the public should be wary of the societal risks posed by over-reliance on these systems and work to hold them accountable.

Various industry efforts, including a consortium of Silicon Valley behemoths, are beginning to grapple with the ethics of deploying algorithms that can have unanticipated effects on society. Algorithm developers and product managers need new ways to think about, design, and implement algorithmic systems in publicly accountable ways. Over the past several months, we and some colleagues have been trying to address these goals by crafting a set of principles for accountable algorithms….

Accountability implies an obligation to report and justify algorithmic decision-making, and to mitigate any negative social impacts or potential harms. We’ll consider accountability through the lens of five core principles: responsibility, explainability, accuracy, auditability, and fairness.

Responsibility. For any algorithmic system, there needs to be a person with the authority to deal with its adverse individual or societal effects in a timely fashion. This is not a statement about legal responsibility but, rather, a focus on avenues for redress, public dialogue, and internal authority for change. This could be as straightforward as giving someone on your technical team the internal power and resources to change the system, making sure that person’s contact information is publicly available.

Explainability. Any decisions produced by an algorithmic system should be explainable to the people affected by those decisions. These explanations must be accessible and understandable to the target audience; purely technical descriptions are not appropriate for the general public. Explaining risk assessment scores to defendants and their legal counsel would promote greater understanding and help them challenge apparent mistakes or faulty data. Some machine-learning models are more explainable than others, but just because there’s a fancy neural net involved doesn’t mean that a meaningful explanation can’t be produced.
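
For a simple linear risk score, for instance, an explanation can be as basic as listing which inputs pushed the score up or down. The sketch below is a minimal illustration with invented feature names and weights, not any deployed risk-assessment tool.

```python
# Minimal explainability sketch: per-feature contributions of a linear score,
# reported in plain language to the person affected by the decision.
# Feature names and weights are invented for illustration only.
WEIGHTS = {"prior_missed_appointments": 0.8, "age_under_25": 0.5, "stable_address": -0.6}

def explain(person, weights=WEIGHTS):
    contributions = {name: weights[name] * person.get(name, 0) for name in weights}
    lines = []
    for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        if value == 0:
            continue  # this feature played no role in this particular decision
        direction = "raised" if value > 0 else "lowered"
        lines.append(f"'{name.replace('_', ' ')}' {direction} the score by {abs(value):.1f}")
    return lines

print("\n".join(explain({"prior_missed_appointments": 2, "stable_address": 1})))
```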

Accuracy. Algorithms make mistakes, whether because of data errors in their inputs (garbage in, garbage out) or statistical uncertainty in their outputs. The principle of accuracy suggests that sources of error and uncertainty throughout an algorithm and its data sources need to be identified, logged, and benchmarked. Understanding the nature of errors produced by an algorithmic system can inform mitigation procedures.
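
A first, minimal step is to identify and log records that fail basic input checks before they reach the model; the sketch below uses invented field names and plausibility ranges, purely for illustration.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("input-quality")

def validate(record):
    """Flag obviously bad inputs ('garbage in') before they reach the model."""
    problems = []
    if record.get("age") is None or not (0 <= record["age"] <= 120):
        problems.append("implausible or missing age")
    if record.get("income") is not None and record["income"] < 0:
        problems.append("negative income")
    for problem in problems:
        log.warning("record %s: %s", record.get("id"), problem)
    return not problems

records = [{"id": 1, "age": 34, "income": 42000}, {"id": 2, "age": 240, "income": -5}]
clean = [r for r in records if validate(r)]
log.info("%d of %d records passed validation", len(clean), len(records))
```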

Auditability. The principle of auditability states that algorithms should be developed to enable third parties to probe and review the behavior of an algorithm. Enabling algorithms to be monitored, checked, and criticized would lead to more conscious design and course correction in the event of failure. While there may be technical challenges in allowing public auditing while protecting proprietary information, private auditing (as in accounting) could provide some public assurance. Where possible, even limited access (e.g., via an API) would allow the public a valuable chance to audit these socially significant algorithms.
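
The article does not prescribe an interface, but even a narrow, hypothetical audit hook illustrates the idea: accept a test record and return the decision together with the metadata an external reviewer would need to reproduce and compare runs, without exposing the model internals.

```python
import datetime
import json

MODEL_VERSION = "2016-11-v1"  # hypothetical version identifier
AUDIT_LOG = []                # retained so runs can be reviewed and compared later

def score(record):
    # Stand-in for the proprietary model being audited.
    return 0.3 * record.get("feature_a", 0) + 0.7 * record.get("feature_b", 0)

def audit_query(record):
    """Return a decision plus the metadata an external auditor would need."""
    response = {
        "decision": score(record),
        "model_version": MODEL_VERSION,
        "timestamp": datetime.datetime.utcnow().isoformat() + "Z",
    }
    AUDIT_LOG.append({"input": record, **response})
    return response

print(json.dumps(audit_query({"feature_a": 1.0, "feature_b": 0.5}), indent=2))
```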

Fairness. As algorithms increasingly make decisions based on historical and societal data, existing biases and historically discriminatory human decisions risk being “baked in” to automated decisions. All algorithms making decisions about individuals should be evaluated for discriminatory effects. The results of the evaluation and the criteria used should be publicly released and explained….(More)”
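
One common screening statistic (not named in the article, used here purely as an example) is the ratio of favourable-decision rates between groups; the 0.8 cut-off is the familiar "four-fifths rule" and is likewise only illustrative.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, favourable) pairs -> favourable rate per group."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        favourable[group] += int(ok)
    return {group: favourable[group] / totals[group] for group in totals}

def disparate_impact_ratio(decisions):
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Synthetic decisions for two groups, invented for illustration.
decisions = (
    [("group_a", True)] * 60 + [("group_a", False)] * 40
    + [("group_b", True)] * 35 + [("group_b", False)] * 65
)
ratio = disparate_impact_ratio(decisions)
print(f"disparate impact ratio: {ratio:.2f}")
print("flag for review" if ratio < 0.8 else "within the illustrative 0.8 threshold")
```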

Big data promise exponential change in healthcare


Gonzalo Viña in the Financial Times (Special Report: ): “When a top Formula One team is using pit stop data-gathering technology to help a drugmaker improve the way it makes ventilators for asthma sufferers, there can be few doubts that big data are transforming pharmaceutical and healthcare systems.

GlaxoSmithKline employs online technology and a data algorithm developed by F1’s elite McLaren Applied Technologies team to minimise the risk of leakage from its best-selling Ventolin (salbutamol) bronchodilator drug.

Using multiple sensors and hundreds of thousands of readings, the potential for leakage is coming down to “close to zero”, says Brian Neill, diagnostics director in GSK’s programme and risk management division.
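
Neither GSK nor McLaren describes the underlying algorithm, but the general pattern of screening very large numbers of sensor readings for outliers can be sketched with wholly invented values and thresholds:

```python
import statistics

def flag_leak_candidates(readings, z_threshold=2.0):
    """Flag fill readings that deviate sharply from the batch norm."""
    mean = statistics.mean(readings)
    stdev = statistics.pstdev(readings)
    return [
        (index, value)
        for index, value in enumerate(readings)
        if stdev and abs(value - mean) / stdev > z_threshold
    ]

# Hundreds of thousands of readings in practice; a handful of invented values here.
batch = [100.1, 99.8, 100.3, 100.0, 92.4, 100.2, 99.9]
print(flag_leak_candidates(batch))  # -> [(4, 92.4)]
```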

This apparently unlikely venture for McLaren, known more as the team of such star drivers as Fernando Alonso and Jenson Button, extends beyond the work it does with GSK. It has partnered with Birmingham Children’s hospital in a £1.8m project utilising McLaren’s expertise in analysing data during a motor race to collect such information from patients as their heart and breathing rates and oxygen levels. Imperial College London, meanwhile, is making use of F1 sensor technology to detect neurological dysfunction….

Big data analysis is already helping to reshape sales and marketing within the pharmaceuticals business. Great potential, however, lies in its ability to fine tune research and clinical trials, as well as providing new measurement capabilities for doctors, insurers and regulators and even patients themselves. Its applications seem infinite….

The OECD last year said governments needed better data governance rules given the “high variability” among OECD countries about protecting patient privacy. Recently, DeepMind, the artificial intelligence company owned by Google, signed a deal with a UK NHS trust to process, via a mobile app, medical data relating to 1.6m patients. Privacy advocates see this as “worrying”. Julia Powles, a University of Cambridge technology law expert, asks if the company is being given “a free pass” on the back of “unproven promises of efficiency and innovation”.

Brian Hengesbaugh, partner at law firm Baker & McKenzie in Chicago, says the process of solving such problems remains “under-developed”… (More)

The Journal of Interrupted Studies


“…The Journal of Interrupted Studies is an interdisciplinary journal dedicated to the work of academics whose work has been interrupted by forced migration. Publishing both complete and incomplete articles, the Journal is currently accepting submissions in the sciences and humanities….

By embracing a multidisciplinary approach, the journal offers a platform for all academic endeavours thwarted by forced migration, especially with regard to the ongoing crises in Syria, Afghanistan and Eritrea. We invite any and all students and academics who were interrupted in their studies and are now considered refugees to submit work.

Engaging in this process, we hope to create a conversation in which all participants can shape the discourse, on terms of dignity and mutual respect. We believe academia allows us to initiate such a dialogue and in the process create something of value for all parties.

Refugee status, as defined in the European Union’s directives 2013/32/EU and 2013/33/EU, is by no means a requirement for submitting to the Journal. We also wish to attract exiled academics who cannot return to their countries and universities without putting their lives at risk.

We believe that when academic voices are silenced by adversity it is not only the intellectual community that suffers…(More)

Co-Creating the Cities of the Future


Essay by Luis Muñoz in the Special Issue of “Sensors” on Smart City: Vision and Reality: “In recent years, the evolution of urban environments, jointly with the progress of the Information and Communication sector, has enabled the rapid adoption of new solutions that contribute to the growing popularity of Smart Cities. Currently, the majority of the world population lives in cities, encouraging different stakeholders within these innovative ecosystems to seek new solutions that guarantee the sustainability and efficiency of such complex environments. This work discusses how experimentation with IoT technologies and other data sources from the cities can be utilized for co-creation in the OrganiCity project, where key actors like citizens, researchers and other stakeholders shape smart city services and applications in a collaborative fashion. Furthermore, a novel architecture is proposed that enables this organic growth of the future cities, facilitating the experimentation that tailors the adoption of new technologies and services for a better quality of life, as well as agile and dynamic mechanisms for managing cities. The different components and enablers of the OrganiCity platform are presented and discussed in detail, including, among others, a portal to manage the experiment life cycle, an Urban Data Observatory to explore data assets, and an annotations component to indicate quality of data, with a particular focus on the city-scale opportunistic data collection service operating as an alternative to traditional communications. (View Full-Text)”
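
As a rough, hypothetical illustration of the kind of objects such a platform manages (invented names, not the actual OrganiCity API), a data asset in an urban data observatory might carry its readings together with user-contributed quality annotations:

```python
from dataclasses import dataclass, field

@dataclass
class QualityAnnotation:
    author: str
    comment: str   # e.g. "values drift after heavy rain"
    rating: int    # 1 (poor) .. 5 (excellent)

@dataclass
class UrbanDataAsset:
    asset_id: str
    description: str
    provider: str
    readings: list = field(default_factory=list)      # (timestamp, value) pairs
    annotations: list = field(default_factory=list)   # quality feedback from users

    def annotate(self, author, comment, rating):
        self.annotations.append(QualityAnnotation(author, comment, rating))

noise = UrbanDataAsset("noise-plaza-01", "Noise level, central square", "citizen sensor kit")
noise.readings.append(("2016-11-20T08:00:00Z", 61.5))
noise.annotate("experimenter-42", "Values spike when the market sets up", rating=4)
print(len(noise.readings), len(noise.annotations))
```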

The Data Visualisation Catalogue


The Data Visualisation Catalogue is an on-going project developed by Severino Ribecca.

Originally, this project was a way for me to develop my own knowledge of data visualisation and to create a reference tool to use in my own future work. However, I felt it would also be useful to designers and to anyone in a field that requires the regular use of data visualisation.

Although there have been a few attempts in the past to catalogue some of the established data visualisation methods, there is no website that is really comprehensive or detailed, or that helps you decide on the right method for your needs.

I will be adding in new visualisation methods, bit-by-bit, as I research each method to find the best way to explain how it works and what it is best suited for.

Most of the data visualised in the website’s example images is dummy data….(More)”

Africa’s health won’t improve without reliable data and collaboration


 and  at the Conversation: “…Africa has a data problem. This is true in many sectors. When it comes to health there’s both a lack of basic population data about disease and an absence of information about what impact, if any, interventions involving social determinants of health – housing, nutrition and the like – are having.

Simply put, researchers often don’t know who is sick or what people are being exposed to that, if addressed, could prevent disease and improve health. They cannot say if poor sanitation is the biggest culprit, or if substandard housing in a particular region is to blame. They don’t have the data that explains which populations are most vulnerable.

These data are required to inform development of innovative interventions that apply a “Health in All Policies” approach to address social determinants of health and improve health equity.

To address this, health data need to be integrated with social determinant data about areas like food, housing, and physical activity or mobility. Even where population data are available, they are not always reliable. There’s often an issue of compatibility: different sectors collect different kinds of information using varying methodologies.

Different sectors also use different indicators to collect information on the same social determinant of health. This makes data integration challenging.
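
A minimal sketch of what that harmonisation involves, with entirely invented indicator names: each sector’s own indicators are mapped onto a shared vocabulary before records can be joined per district.

```python
# Map sector-specific indicator names onto one shared vocabulary so that
# health and social-determinant records can be joined per district.
# All sources, indicators and values are invented for illustration.
INDICATOR_MAP = {
    ("housing_survey", "pct_informal_dwellings"): "informal_housing_rate",
    ("census", "share_non_permanent_structures"): "informal_housing_rate",
    ("clinic_register", "diarrhoea_cases_under5"): "under5_diarrhoea_cases",
}

def harmonise(records):
    """records: dicts with 'source', 'indicator', 'district' and 'value' keys."""
    merged = {}
    for record in records:
        common = INDICATOR_MAP.get((record["source"], record["indicator"]))
        if common is None:
            continue  # unmapped indicators need a curation decision, not silent use
        merged.setdefault(record["district"], {})[common] = record["value"]
    return merged

records = [
    {"source": "census", "indicator": "share_non_permanent_structures",
     "district": "District A", "value": 0.31},
    {"source": "clinic_register", "indicator": "diarrhoea_cases_under5",
     "district": "District A", "value": 127},
]
print(harmonise(records))
```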

Without clear, focused, reliable data it’s difficult to understand what a society’s problems are and what specific solutions – which may lie outside the health sector – might be suitable for that unique context.

Scaling up innovations

Some remarkable work is being done to tackle Africa’s health problems. This ranges from technological innovations to harnessing indigenous knowledge for change. Both approaches are vital. But it’s hard for these to be scaled up either in terms of numbers or reach.

This boils down to a lack of funding or a lack of access to funding. Too many potentially excellent projects remain stuck at the pilot phase, which has limited value for ordinary people…..

Governments need to develop health equity surveillance systems to overcome the current lack of data. It’s also crucial that governments integrate and monitor health and social determinants of health indicators in one central system. This would provide a better understanding of health inequity in a given context.

For this to happen, governments must work with public and private sector stakeholders and nongovernmental organisations – not just in health, but beyond it so that social determinants of health can be better measured and captured.

The data that already exists at sub-national, national, regional and continental level mustn’t just be brushed aside. It should be archived and digitised so that it isn’t lost.

Researchers have a role to play here. They have to harmonise and be innovative in the methodologies they use for data collection. If researchers can work together across the breadth of sectors and disciplines that influence health, important information won’t slip through the cracks.

When it comes to scaling up innovation, governments need to step up to the plate. It’s crucial that they support successful health innovations, whether these are rooted in indigenous knowledge or are new technologies. And since – as we’ve already shown – health issues aren’t the exclusive preserve of the health sector, governments should look to different sectors and innovative partnerships to generate support and funding….(More)”

Talent Gap Is a Main Roadblock as Agencies Eye Emerging Tech


Theo Douglas in GovTech: “U.S. public service agencies are closely eyeing emerging technologies, chiefly advanced analytics and predictive modeling, according to a new report from Accenture, but like their counterparts globally they must address talent and complexity issues before adoption rates will rise.

The report, Emerging Technologies in Public Service, compiled a nine-nation survey of IT officials across all levels of government in policing and justice, health and social services, revenue, border services, pension/Social Security and administration, and was released earlier this week.

It revealed a deep interest in emerging tech from the public sector, finding 70 percent of agencies are evaluating their potential — but a much lower adoption level, with just 25 percent going beyond piloting to implementation….

The revenue and tax industries have been early adopters of advanced analytics and predictive modeling, he said, while biometrics and video analytics are resonating with police agencies.

In Australia, the tax office found using voiceprint technology could save 75,000 work hours annually.

Closer to home, Utah Chief Technology Officer Dave Fletcher told Accenture that consolidating data centers into a virtualized infrastructure improved speed and flexibility, so some processes that once took weeks or months can now happen in minutes or hours.

Nationally, 70 percent of agencies have either piloted or implemented an advanced analytics or predictive modeling program. Biometrics and identity analytics were the next most popular technologies, with 29 percent piloting or implementing, followed by machine learning at 22 percent.

Those numbers contrast globally with Australia, where 68 percent of government agencies have charged into piloting and implementing biometric and identity analytics programs; and Germany and Singapore, where 27 percent and 57 percent of agencies respectively have piloted or adopted video analytic programs.

Overall, 78 percent of respondents said they were either underway or had implemented some machine-learning technologies.

The benefits of embracing emerging tech that were identified ranged from finding better ways of working through automation to innovating and developing new services and reducing costs.

Agencies told Accenture their No. 1 objective was increasing customer satisfaction. But 89 percent said they’d expect a return on implementing intelligent technology within two years. Four-fifths, or 80 percent, agreed intelligent tech would improve employees’ job satisfaction….(More).

Council on Community Solutions


Fact Sheet by The White House on “Establishing a Council on Community Solutions to Align Federal Efforts with Local Priorities and Citizens’ Needs”: “Today, building on the Administration’s efforts to modernize the way the Federal Government works with cities, counties, and communities — rural, tribal, urban, and sub-urban – the President signed an Executive Order establishing a Community Solutions Council. The Council will provide a lasting structure for Federal agencies to strengthen partnerships with communities and improve coordination across the Federal Government in order to more efficiently deliver assistance and maximize impact.

Across the country, citizens and local leaders need a Federal Government that is more effective, responsive, and collaborative in addressing their needs and challenges. Far too often, the Federal Government has taken a “one-size-fits-all” approach to working with communities and left local leaders on their own to find Federal resources and navigate disparate programs. Responding to the call for change from local officials and leaders nationwide, and grounded in the belief that the best solutions come from the bottom up, not from the top down, Federal agencies have increasingly taken on a different approach to working with communities to deliver better outcomes in more than 1,800 cities, towns, regions, and tribal communities nationwide.

As a part of this new way of working, Federal agencies are partnering with local officials to support local plans and visions. They are crossing agency and program silos to support cities, towns, counties and tribes in implementing locally-developed plans for improvement – from re-lighting city streets to breathing new life into half-empty rural main streets.  And by using data to measure success and harnessing technology, Federal agencies are focusing on community-driven solutions and what works, while monitoring progress to make investments that have a strong base of evidence behind them.

Building on this success, the President today signed an Executive Order (EO) that will continue to make government work better for the American people. The EO establishes a Council for Community Solutions to streamline and improve the way the Federal Government works with cities, counties, and communities – rural, tribal, urban and sub-urban – to improve outcomes. The Council includes leadership from agencies, departments and offices across the Federal Government and the White House, who together will develop and implement policy that puts local priorities first, highlights successful solutions based on best practices, and streamlines Federal support for communities.  Further, the Council, where appropriate, will engage with representatives and leaders of organizations, businesses and communities to expand and improve partnerships that address the most pressing challenges communities face….

  • Harnessing Data and Technology to Improve Outcomes for Communities: The Federal government is working to foster collaborations between communities and the tech sector, non-profits and citizens to help communities develop new ways to use both Federal and local data to address challenges with greater precision and innovation. As a result, new digital tools are helping citizens find affordable housing near jobs and transportation, matching unemployed Americans with jobs that meet their skills, enabling local leaders to use data to better target investments, and more…(More)”

What’s wrong with big data?


James Bridle in the New Humanist: “In a 2008 article in Wired magazine entitled “The End of Theory”, Chris Anderson argued that the vast amounts of data now available to researchers made the traditional scientific process obsolete. No longer would they need to build models of the world and test them against sampled data. Instead, the complexities of huge and totalising datasets would be processed by immense computing clusters to produce truth itself: “With enough data, the numbers speak for themselves.” As an example, Anderson cited Google’s translation algorithms which, with no knowledge of the underlying structures of languages, were capable of inferring the relationship between them using extensive corpora of translated texts. He extended this approach to genomics, neurology and physics, where scientists are increasingly turning to massive computation to make sense of the volumes of information they have gathered about complex systems. In the age of big data, he argued, “Correlation is enough. We can stop looking for models.”

This belief in the power of data, of technology untrammelled by petty human worldviews, is the practical cousin of more metaphysical assertions. A belief in the unquestionability of data leads directly to a belief in the truth of data-derived assertions. And if data contains truth, then it will, without moral intervention, produce better outcomes. Speaking at Google’s private London Zeitgeist conference in 2013, Eric Schmidt, Google Chairman, asserted that “if they had had cellphones in Rwanda in 1994, the genocide would not have happened.” Schmidt’s claim was that technological visibility – the rendering of events and actions legible to everyone – would change the character of those actions. Not only is this statement historically inaccurate (there was plenty of evidence available of what was occurring during the genocide from UN officials, US satellite photographs and other sources), it’s also demonstrably untrue. Analysis of unrest in Kenya in 2007, when over 1,000 people were killed in ethnic conflicts, showed that mobile phones not only spread but accelerated the violence. But you don’t need to look to such extreme examples to see how a belief in technological determinism underlies much of our thinking and reasoning about the world.

“Big data” is not merely a business buzzword, but a way of seeing the world. Driven by technology, markets and politics, it has come to determine much of our thinking, but it is flawed and dangerous. It runs counter to our actual findings when we employ such technologies honestly and with the full understanding of their workings and capabilities. This over-reliance on data, which I call “quantified thinking”, has come to undermine our ability to reason meaningfully about the world, and its effects can be seen across multiple domains.

The assertion is hardly new. Writing in the Dialectic of Enlightenment in 1947, Theodor Adorno and Max Horkheimer decried “the present triumph of the factual mentality” – the predecessor to quantified thinking – and succinctly analysed the big data fallacy, set out by Anderson above. “It does not work by images or concepts, by the fortunate insights, but refers to method, the exploitation of others’ work, and capital … What men want to learn from nature is how to use it in order wholly to dominate it and other men. That is the only aim.” What is different in our own time is that we have built a world-spanning network of communication and computation to test this assertion. While it occasionally engenders entirely new forms of behaviour and interaction, the network most often shows to us with startling clarity the relationships and tendencies which have been latent or occluded until now. In the face of the increased standardisation of knowledge, it becomes harder and harder to argue against quantified thinking, because the advances of technology have been conjoined with the scientific method and social progress. But as I hope to show, technology ultimately reveals its limitations….

“Eroom’s law” – Moore’s law backwards – was recently formulated to describe a problem in pharmacology. Drug discovery has been getting more expensive. Since the 1950s the number of drugs approved for use in human patients per billion US dollars spent on research and development has halved every nine years. This problem has long perplexed researchers. According to the principles of technological growth, the trend should be in the opposite direction. In a 2012 paper in Nature entitled “Diagnosing the decline in pharmaceutical R&D efficiency” the authors propose and investigate several possible causes for this. They begin with social and physical influences, such as increased regulation, increased expectations and the exhaustion of easy targets (the “low hanging fruit” problem). Each of these is – with qualifications – disposed of, leaving open the question of the discovery process itself….(More)
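
The halving rule is an exponential decay, N(t) = N(0) × 0.5^(t/9); a short sketch of the arithmetic, with the baseline value invented for illustration:

```python
# Drugs approved per billion US dollars of R&D, halving every nine years:
# N(t) = N(0) * 0.5 ** (t / 9). The starting value is arbitrary for illustration.
def approvals_per_billion(n0, years, halving_period=9.0):
    return n0 * 0.5 ** (years / halving_period)

baseline = 30.0  # hypothetical 1950s figure, for illustration only
for years in (9, 27, 54):
    print(years, round(approvals_per_billion(baseline, years), 2))
# After 54 years the rate has fallen by a factor of 2 ** 6 = 64.
```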