Open Data for Social Change and Sustainable Development


Special issue of the Journal of Community Informatics edited by Raed M. Sharif and Francois Van Schalkwyk: “As the second phase of the Emerging Impacts of Open Data in Developing Countries (ODDC) drew to a close, discussions started on a possible venue for publishing some of the papers that emerged from the research conducted by the project partners. In 2012 the Journal of Community Informatics published a special issue titled ‘Community Informatics and Open Government Data’. Given the journal’s previous interest in the field of open data, its established reputation and the fact that it is a peer-reviewed open access journal, the Journal of Community Informatics was approached and agreed to a second special issue with a focus on open data. A closed call for papers was sent out to the project research partners. Shortly afterwards, the first Open Data Research Symposium was held ahead of the International Open Data Conference 2015 in Ottawa, Canada. For the first time, a forum was provided to academics and researchers to present papers specifically on open data. Again there were discussions about an appropriate venue to publish selected papers from the Symposium. The decision was taken by the Symposium Programme Committee to invite the twenty plus presenters to submit full papers for consideration in the special issue.

The seven papers published in this special issue are those that were selected through a double-blind peer review process. Researchers are often given a rough ride by open data advocates – the research community is accused of taking too long, not being relevant enough and of speaking in tongues unintelligible to social movements and policy-makers. And yet nine years after the ground-breaking meeting in Sebastopol at which the eight principles of open government data were penned, seven after President Obama injected political legitimacy into a movement, and five after eleven nation states formed the global Open Government Partnership (OGP), which has grown six-fold in membership; an email crosses our path in which the authors of a high-level report commit to developing a comprehensive understanding of a continental open data ecosystem through an examination of open data supply. Needless to say, a single example is not necessarily representative of global trends in thinking about open data. Yet, the focus on government and on the supply of open data by open data advocates – with little consideration of open data use, the differentiation of users, intermediaries, power structures or the incentives that propel the evolution of ecosystems – is still all too common. Empirical research has already revealed the limitations of ‘supply it and they will use it’ open data practices, and has started to fill critical knowledge gaps to develop a more holistic understanding of the determinants of effective open data policy and practice.

As open data policies and practices evolve, the need to capture the dynamics of this evolution and to trace unfolding outcomes becomes critical to advance a more efficient and progressive field of research and practice. The trajectory of the existing body of literature on open data and the role of public authorities, both local and national, in the provision of open data is logical and needed in light of the central role of government in producing a wide range of types and volumes of data. At the same time, the complexity of open data ecosystem and the plethora of actors (local, regional and global suppliers, intermediaries and users) makes a compelling case for opening avenues for more diverse discussion and research beyond the supply of open data. The research presented in this special issue of the Journal of Community Informatics touches on many of these issues, sets the pace and contributes to the much-needed knowledge base required to promote the likelihood of open data living up to its promise. … (More)”

How Medical Crowdsourcing Empowers Patients & Doctors


Rob Stretch at Rendia: “Whether you’re a solo practitioner in a rural area, or a patient who’s bounced from doctor to doctor with a difficult-to-diagnose condition, there are many reasons why you might seek out expert medical advice from a larger group. Fortunately, in 2016, seeking feedback from other physicians or getting a second opinion is as easy as going online.

“Medical crowdsourcing” sites and apps are gathering steam, from provider-only forums like SERMOsolves and Figure 1, to patient-focused sites like CrowdMed. They share the same mission of empowering doctors and patients, reducing misdiagnosis, and improving medicine. Is crowdsourcing the future of medicine? Read on to find out more.

Fixing misdiagnosis

An estimated 10 percent to 20 percent of medical cases are misdiagnosed, a higher error rate than drug errors or surgery on the wrong patient or body part, according to the National Center for Policy Analysis. And diagnostic errors are the leading cause of malpractice litigation. Doctors often report that in many of their patient cases they would benefit from the support and advice of their peers.

The photo-sharing app for health professionals, Figure 1, is filling that need. Since we reported on it last year, the app has reached 1 million users and added a direct-messaging feature. The app is geared towards verified medical professionals, and goes to great lengths to protect patient privacy in keeping with HIPAA laws. According to co-founder and CEO Gregory Levey, an average of 10,000 unique users check in to Figure 1 every hour, and medical professionals and students in 190 countries currently use the app.

Using Figure 1 to crowdsource advice from the medical community has saved at least one life. Emily Nayar, a physician assistant in rural Oklahoma and a self-proclaimed “Figure 1 addict,” told Wired magazine that because of photos she’d seen on the app, she was able to correctly diagnose a patient with shingles meningitis. Another doctor had misdiagnosed him previously, and the wrong medication could have killed him.

Collective knowledge at zero cost

In addition to serving as “virtual colleagues” for isolated medical providers, crowdsourcing forums can pool knowledge from an unprecedented number of doctors in different specialties and even countries, and can do so very quickly.

When we first reported on SERMO, the company billed itself as a “virtual doctors’ lounge.” Now, the global social network with 600,000 verified, credentialed physician members has pivoted to medical crowdsourcing with SERMOsolves, one of its most popular features, according to CEO Peter Kirk.

“Crowdsourcing patient cases through SERMOsolves is an ideal way for physicians to gain valuable information from the collective knowledge of hundreds of physicians instantly,” he said in a press release. According to SERMO, 3,500 challenging patient cases were posted in 2014, viewed 700,000 times, and received 50,000 comments. Most posted cases received responses within 1.5 hours and were resolved within a day. “We have physicians from more than 96 specialties and subspecialties posting on the platform, working together to share their valuable insights at zero cost to the healthcare system.”

While one early user of SERMO wrote on KevinMD.com that he felt the site’s potential was overshadowed by the anonymous rants and complaining, other users have noted that the medical crowdsourcing site has, like Figure 1, directly benefitted patients.

In an article on PhysiciansPractice.com, Richard Armstrong, M.D., cites the example of a family physician in Canada who posted a case of a young girl with an E. coli infection. “Physicians from around the world immediately began to comment and the recommendations resulted in a positive outcome for the patient. This instance offered cross-border learning experiences for the participating doctors, not only regarding the specific medical issue but also about how things are managed in different health systems,” wrote Dr. Armstrong.

Patients get proactive

While patients have long turned to social media to (questionably) crowdsource their medical queries, there are now more reputable sources than Facebook.

Tech entrepreneur Jared Heyman launched the health startup CrowdMed in 2013 after his sister endured a “terrible, undiagnosed medical condition that could have killed her,” he told the Wall Street Journal. She saw about 20 doctors over three years, racking up six-figure medical bills. The NIH Undiagnosed Disease Program finally gave her a diagnosis: fragile X-associated primary ovarian insufficiency, a rare disease that affects just 1 in 15,000 women. A hormone patch resolved her debilitating symptoms….(More)”

How Technology Can Restore Our Trust in Democracy


Cenk Sidar in Foreign Policy: “The travails of the Arab Spring, the rise of the Islamic State, and the upsurge of right-wing populism throughout the countries of the West all demonstrate a rising frustration with the liberal democratic order in the years since the 2008 financial crisis. There is a growing intellectual consensus that the world is sailing into uncharted territory: a realm marked by authoritarianism, shallow populism, and extremism.

One way to overcome this global resentment is to use the best tools we have to build a more inclusive and direct democracy. Could new technologies such as Augmented Reality (AR), Virtual Reality (VR), data analytics, crowdsourcing, and Blockchain help to restore meaningful dialogue and win back people’s hearts and minds?

Underpinning our unsettling current environment is an irony: Thanks to modern communication technology, the world is more connected than ever — but average people feel more disconnected. In the United States, polls show that trust in government is at a 50-year low. Frustrated Trump supporters and the Britons who voted for Brexit both have a sense of having “lost out” as the global elite consolidates its power and becomes less responsive to the rest of society. This is not an irrational belief: Branko Milanovic, a leading inequality scholar, has found that people in the lower and middle parts of rich countries’ income distributions have been the losers of the last 15 years of globalization.

The same 15 years have also brought astounding advances in technology, from the rise of the Internet to the growing ubiquity of smartphones. And Western society has, to some extent, struggled to find its bearings amid this transition. Militant groups seduce young people through social media. The Internet enables consumers to choose only the news that matches their preconceived beliefs, offering a bottomless well of partisan fury and conspiracy theories. Cable news airing 24/7 keeps viewers in a state of agitation. In short, communication technologies that are meant to bring us together end up dividing us instead (and not least because our politicians have chosen to game these tools for their own advantage).

It is time to make technology part of the solution. More urgently than ever, leaders, innovators, and activists need to open up the political marketplace to allow technology to realize its potential for enabling direct citizen participation. This is an ideal way to restore trust in the democratic process.

As the London School of Economics’ Mary Kaldor put it recently: “The task of global governance has to be reconceptualized to make it possible for citizens to influence the decisions that affect their lives — to reclaim substantive democracy.” One notable exception to the technological disconnect has been fundraising, as candidates have tapped into the Internet to enable millions of average voters to donate small sums. With the right vision, however, technological innovation in politics could go well beyond asking people for money….(More)”

Make Algorithms Accountable


Julia Angwin in The New York Times: “Algorithms are ubiquitous in our lives. They map out the best route to our destination and help us find new music based on what we listen to now. But they are also being employed to inform fundamental decisions about our lives.

Companies use them to sort through stacks of résumés from job seekers. Credit agencies use them to determine our credit scores. And the criminal justice system is increasingly using algorithms to predict a defendant’s future criminality.
Those computer-generated criminal “risk scores” were at the center of a recent Wisconsin Supreme Court decision that set the first significant limits on the use of risk algorithms in sentencing.
The court ruled that while judges could use these risk scores, the scores could not be a “determinative” factor in whether a defendant was jailed or placed on probation. And, most important, the court stipulated that a presentence report submitted to the judge must include a warning about the limits of the algorithm’s accuracy.

This warning requirement is an important milestone in the debate over how our data-driven society should hold decision-making software accountable. But advocates for big data due process argue that much more must be done to assure the appropriateness and accuracy of algorithm results.

An algorithm is a procedure or set of instructions often used by a computer to solve a problem. Many algorithms are secret. In Wisconsin, for instance, the risk-score formula was developed by a private company and has never been publicly disclosed because it is considered proprietary. This secrecy has made it difficult for lawyers to challenge a result.

The credit score is the lone algorithm for which consumers have a legal right to examine and challenge the underlying data used to generate it. In 1970, President Richard M. Nixon signed the Fair Credit Reporting Act. It gave people the right to see the data in their credit reports and to challenge and delete data that was inaccurate.

For most other algorithms, people are expected to read fine-print privacy policies, in the hopes of determining whether their data might be used against them in a way that they wouldn’t expect.

“We urgently need more due process with the algorithmic systems influencing our lives,” says Kate Crawford, a principal researcher at Microsoft Research who has called for big data due process requirements. “If you are given a score that jeopardizes your ability to get a job, housing or education, you should have the right to see that data, know how it was generated, and be able to correct errors and contest the decision.”

The European Union has recently adopted a due process requirement for data-driven decisions based “solely on automated processing” that “significantly affect” citizens. The new rules, which are set to go into effect in May 2018, give European Union citizens the right to obtain an explanation of automated decisions and to challenge those decisions. However, since the European regulations apply only to situations that don’t involve human judgment “such as automatic refusal of an online credit application or e-recruiting practices without any human intervention,” they are likely to affect a narrow class of automated decisions. …More recently, the White House has suggested that algorithm makers police themselves. In a recent report, the administration called for automated decision-making tools to be tested for fairness, and for the development of “algorithmic auditing.”

But algorithmic auditing is not yet common. In 2014, Eric H. Holder Jr., then the attorney general, called for the United States Sentencing Commission to study whether risk assessments used in sentencing were reinforcing unjust disparities in the criminal justice system. No study was done….(More)”

Are Crowds Wise? Engagement Over Reliance


Bruce Muirhead at Mindhive: “Crowdsourcing is developing into a mega-trend. It has begun an inexorable shift from the periphery to the mainstream of policy and problem solving methodology. We’ve heard countless times the virtue of crowds and the inherent advantages regarding access to knowledge, transparency, accountability and efficiency – yet all of these advantages rest on the simple assumption that the crowd is wise.

In the fast-growing industry of crowdsourcing platforms, and in society more generally, there is growing acceptance among organisations and users alike that the crowds they engage with have some common failings. When addressing a specific problem, for instance, alternatives need to be considered and discounted before a solution can be arrived at. In a crowd of one it is quite simple to assess the value of each competing solution and, relative to those assessments, choose the most appropriate response. Crowds are not a homogenous group capable of the same kind of relative comparison, because they lack an objective set of priorities or objectives against which to evaluate competing solutions. A diverse crowd from varied backgrounds will pull the preferred solution in many different directions. In the same way that a machine with many moving parts is more likely to fail, a crowd with high levels of expertise, diversity of preference and variance of background is more likely to fail to reach consensus or compromise through logic and reasoning. This presents an interesting catch-22: many crowdsourcing methodologies recommend involving a large number of varied opinions and backgrounds to enhance the originality and disruptiveness of a solution, yet that same diversity unbalances the internal reasoning of the crowd and makes it difficult to develop a nuanced, targeted solution to a challenge. Of course, organisations that seek to engage with crowds can mitigate these risks by developing clear, objective standards of reference and outlining the priorities available to the crowd.
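One way to see why consensus can break down even among entirely reasonable participants is the classic preference cycle. The sketch below is a toy illustration only, with invented blocs, options and sizes: three equal-sized groups each rank three competing solutions rationally, yet pairwise majority voting produces a cycle with no collective winner.

```python
from itertools import combinations

# Hypothetical crowd: three equal-sized blocs ranking three competing
# solutions (A, B, C). Each list is a bloc's preference order, best first.
blocs = {
    "domain experts": ["A", "B", "C"],
    "end users": ["B", "C", "A"],
    "policy-makers": ["C", "A", "B"],
}
bloc_size = 10  # members per bloc (invented)

def prefers(ranking, x, y):
    """True if this ranking places option x above option y."""
    return ranking.index(x) < ranking.index(y)

# Pairwise majority votes between every pair of options.
for x, y in combinations(["A", "B", "C"], 2):
    votes_x = sum(bloc_size for ranking in blocs.values() if prefers(ranking, x, y))
    votes_y = bloc_size * len(blocs) - votes_x
    winner = x if votes_x > votes_y else y
    print(f"{x} vs {y}: {winner} wins {max(votes_x, votes_y)}-{min(votes_x, votes_y)}")

# Prints: A beats B, C beats A, B beats C -- a cycle, so this crowd has no
# consensus choice even though every member ranks the options rationally.
```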

Additionally, in a year in which the force of a crowd has propelled a man like Donald Trump to a position that may feasibly see him elected President of the United States – how can anyone argue that crowds are wise? Stephen Walt of Foreign Policy argues that crowds behave this way in a political context because of a failure of trust, which in turn results from a failure of accountability. ….While crowds don’t always make wise choices, they are neither inherently wise nor unwise groups. There is doubtless intelligence in crowds – what we need to figure out and continue to develop is the process through which we can leverage it to develop more targeted solutions and involve the crowd more effectively….(More)”

From killing machines to agents of hope: the future of drones in Africa


 in The Guardian: “Some are killing machines. Others are pesky passions of the weekend hobbyist. As such, drones have not always been welcomed in our skies.

Across Africa, however, projects are being launched that could revolutionise medical supply chains and commercial deliveries, combat poaching and provide other solutions for an overburdened, underdeveloped continent.

In Rwanda, as in many other African countries, the rainy season makes already difficult roads between smaller towns and villages all but impassable. Battered trucks struggle through the mud, and in some cases even more agile motorbikes and foot traffic are unable to get through.

“Rwanda is essentially a rural country. Lots of blood products cannot be stocked at every health centre. At best it can take four to six hours to get supplies through,” says the technology minister, Jean Philbert Nsengimana.

“For mothers giving birth, postpartum haemorrhaging, or bleeding post-delivery, happens quite often. It may not be possible to prevent. Then what is needed is a quick and rapid intervention.”

“This technology has the potential to erase barriers to access for countless critical medicines and save lives on a scale not previously possible,” says Keller Rinaudo, chief executive of Zipline, a company staffed by experienced aerospace engineers, including some who have worked at SpaceX, Boeing and Nasa.

“While there are a number of potential applications for this technology, we’re keenly focused on using it to save lives.”…

Drones are being tested in other emerging economies. Matternet, another Silicon Valley startup, has run pilots moving samples from rural clinics to a laboratory in Papua New Guinea and is launching a small medical delivery network in the Dominican Republic.

The company is also working with Unicef in Malawi to develop a project using UAVs to carry blood samples from infants born to HIV-positive parents, underscoring the physical and geographical challenges that are present across much of the continent.

Some frontline health workers are supportive….(More)”

Does Crime-Predicting Software Bias Judges? Unfortunately, There’s No Data


Rose Eveleth at Motherboard: “For centuries judges have had to make guesses about the people in front of them. Will this person commit a crime again? Or is this punishment enough to deter them? Do they have the support they need at home to stay safe and healthy and away from crime? Or will they be thrust back into a situation that drives them to their old ways? Ultimately, judges have to guess.

But recently, judges in states including California and Florida have been given a new piece of information to aid in that guess work: a “risk assessment score” determined by an algorithm. These algorithms take a whole suite of variables into account, and spit out a number (usually between 1 and 10) that estimates the risk that the person in question will wind up back in jail.
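The commercial formulas themselves are not public, so the sketch below is purely hypothetical: an invented weighted-sum scoring function whose variables, weights and scaling are illustrative assumptions rather than COMPAS or any real product. It shows only the general shape of a tool that turns a handful of inputs into a score from 1 to 10.

```python
def risk_score(age, prior_arrests, prior_convictions, employed, failed_appearances):
    """Toy recidivism 'risk score' on a 1-10 scale. The inputs, weights and
    scaling are invented for illustration; they do not reflect COMPAS or any
    real scoring product, whose formulas are proprietary."""
    raw = (
        2.0 * min(prior_arrests, 10) / 10       # arrest history
        + 2.5 * min(prior_convictions, 5) / 5   # conviction history
        + 1.5 * min(failed_appearances, 3) / 3  # failures to appear in court
        + (1.0 if age < 25 else 0.0)            # youth treated as a risk factor
        + (0.0 if employed else 1.0)            # unemployment treated as a risk factor
    )
    return round(1 + 9 * raw / 8.0)  # map the 0-8 raw range onto 1-10

print(risk_score(age=22, prior_arrests=4, prior_convictions=1,
                 employed=False, failed_appearances=1))  # prints 5
```

Even in this toy, the choice of variables and weights determines who gets flagged as high risk, which is exactly where the bias question raised by the ProPublica investigation arises.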

If you’ve read this column before, you probably know where this is going. Algorithms aren’t unbiased, and a recent ProPublica investigation suggests what researchers have long been worried about: that these algorithms might contain latent racial prejudice. According to ProPublica’s evaluation of a particular scoring method called the COMPAS system, which was created by a company called Northpointe, people of color are more likely to get higher scores than white people for essentially the same crimes.

Bias against folks of color isn’t a new phenomenon in the judicial system. (This might be the understatement of the year.) There’s a huge body of research that shows that judges, like all humans, are biased. Plenty of studies have shown that for the same crime, judges are more likely to sentence a black person more harshly than a white person. It’s important to question biases of all kinds, both human and algorithmic, but it’s also important to question them in relation to one another. And nobody has done that.

I’ve been doing some research of my own into these recidivism algorithms, and when I read the ProPublica story, I came out with the same question I’ve had since I started looking into this: these algorithms are likely biased against people of color. But so are judges. So how do they compare? How does the bias present in humans stack up against the bias programmed into algorithms?

This shouldn’t be hard to find out: ideally you would divide judges in a single county in half, and give one half access to a scoring system, and have the other half carry on as usual. If you don’t want to A/B test within a county—and there are some questions about whether that’s an ethical thing to do—then simply compare two counties with similar crime rates, in which one county uses rating systems and the other doesn’t. In either case, it’s essential to test whether these algorithmic recidivism scores exacerbate, reduce, or otherwise change existing bias.
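As a rough sketch of what such a comparison could look like, assuming entirely simulated sentencing records and invented effect sizes (no real county, court or tool is modeled), one could measure the racial disparity in incarceration rates in a county that uses scores and one that does not:

```python
import random
random.seed(0)

def simulate_county(uses_scores, n=5000):
    """Hypothetical sentencing records as (race, incarcerated) pairs.
    The data and effect sizes are entirely invented for illustration."""
    records = []
    for _ in range(n):
        race = random.choice(["black", "white"])
        rate = 0.30                               # baseline incarceration rate
        rate += 0.08 if race == "black" else 0.0  # assumed human (judge) bias
        if uses_scores and race == "black":
            rate += 0.04                          # assumed extra skew from scores
        records.append((race, random.random() < rate))
    return records

def disparity(records):
    """Gap in incarceration rates between Black and white defendants."""
    def rate(r):
        group = [jailed for race, jailed in records if race == r]
        return sum(group) / len(group)
    return rate("black") - rate("white")

control = simulate_county(uses_scores=False)  # judges alone
treated = simulate_county(uses_scores=True)   # judges plus risk scores
print(f"disparity without scores: {disparity(control):.3f}")
print(f"disparity with scores:    {disparity(treated):.3f}")
# Whether the second number is really larger, smaller, or unchanged is exactly
# the empirical question that, so far, no one appears to have answered with real data.
```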

Most of the stories I’ve read about these sentencing algorithms don’t mention any such studies. But I assumed that they existed, they just didn’t make the cut in editing.

I was wrong. As far as I can find, and according to everybody I’ve talked to in the field, nobody has done this work, or anything like it. These scores are being used by judges to help them sentence defendants and nobody knows whether the scores exacerbate existing racial bias or not….(More)”

Nudging patients into clinical trials


Bradley J. Fikes in the San Diego Union Tribune: “The modern era’s dramatic advances in medical care rely on more than scientists, doctors and biomedical companies. None of these advances could come to fruition without patients willing to risk trying experimental therapies to see if they are safe and effective.

More than 220,000 clinical trials are taking place worldwide, with more than 81,000 of them in the United States, according to the federal government’s registry, clinicaltrials.gov. That poses a huge challenge for recruitment.

Companies are offering a variety of inducements to coax patients into taking part. Some rely on that good old standby, cash. Others remove obstacles. Axovant Sciences, which is preparing to test an Alzheimer’s drug, is offering patients transportation from the ridesharing service Lyft.

In addition, non-cash rewards such as iPads, opt-out enrollment in low-risk trials or even guaranteeing patients they will be informed about the clinical trial results should be considered, say a group of researchers who suggest testing these incentives scientifically.

In an article published Wednesday in Science Translational Medicine, the researchers present a matrix of these options, their benefits, and potential drawbacks. They urge companies to track the outcomes of these incentives to find out what works best.

The goal, the article states, is to “nudge” patients into participating, but not so far as to turn the nudge into a coercive shove. Go to j.mp/nudgeclin for the article.

For a nudge, the researchers suggest the wording of a consent form could include options such as a choice of preferred appointment times, such as “Yes, morning appointments,” with a number of similarly worded statements. That wording would “imply that enrollment is normative,” or customary, the article stated.

Researchers could go so far as to vary the offers to patients in a single clinical trial and measure which incentives produce the best responses, said Eric M. Van Epps, one of the researchers. In effect, that would provide a clinical trial of clinical trial incentives.
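A minimal sketch of that idea, assuming hypothetical incentive arms and invented enrollment rates (not drawn from the article or any real trial), would randomize prospective participants across offers and compare how many enroll in each arm:

```python
import random
random.seed(1)

# Hypothetical incentive arms with invented true enrollment probabilities.
ARMS = {"cash": 0.22, "ride_share": 0.18, "share_results": 0.25, "control": 0.15}

def run_recruitment(n_per_arm=400):
    """Assign an equal number of prospective participants to each incentive
    offer and count how many enroll. All numbers are simulated."""
    return {arm: sum(random.random() < p for _ in range(n_per_arm))
            for arm, p in ARMS.items()}

enrolled = run_recruitment()
for arm, count in sorted(enrolled.items(), key=lambda kv: -kv[1]):
    print(f"{arm:14s} {count:3d}/400 enrolled ({count / 400:.1%})")
# Comparing arms (with an appropriate significance test) is, in effect,
# the "clinical trial of clinical trial incentives" the researchers describe.
```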

As part of that tracking, companies need to gain insight into why some patients are reluctant to take part, and those reasons vary, said Van Epps, of the Michael J. Crescenz Veterans Affairs Medical Center in Philadelphia.

“Sometimes they’re not made aware of the clinical trials, they might not understand how clinical trials work, they might want more control over their medication regimen or how they’re going to proceed,” Van Epps said.

At other times, patients may be overwhelmed by the volume of paperwork required. Some paperwork is necessary for legal and ethical reasons. Patients must be informed about the trial’s purpose, how it might help them, and what harm might happen. However, it could be possible to simplify the informed consent paperwork to make it more understandable and less intimidating….(More)”

Big data for government good: using analytics for policymaking


Kent Smetters in The Hill: “Big Data and analytics are driving advancements that touch nearly every part of our lives. From improving disaster relief efforts following a storm, to enhancing patient response to specific medications to criminal justice reform and real-time traffic reporting, Big Data is saving lives, reducing costs and improving productivity across the private and the public sector.

Yet when our elected officials draft policy they lack access to advanced data and analytics that would help them understand the economic implications of proposed legislation. Instead of using Big Data to inform and shape vital policy questions, Members of Congress typically don’t receive a detailed analysis of a bill until after it has been written, and after they have sought support for it. That’s when a policy typically undergoes a detailed budgetary analysis. And even then, these assessments often ignore the broader impact on jobs and the economy.

We must do better. Just as modern marketing firms use deep analytical tools to make smart business decisions, policymakers in Washington should similarly have access to modern tools for analyzing important policy questions.
Will Social Security be solvent for our grandchildren? How will changes to immigration policy influence the number of jobs and the GDP? How will tax reform impact the budget, economic growth and the income distribution? What is the impact of new investments in health care, education and roads? These are big questions that must be answered with reliable data and analysis while legislation is being written, not afterwards. The absence of such analysis leaves us with ideology-driven partisanship.

Simply put, Washington needs better tools to evaluate these complex factors. Imagine the productive conversations we could have if we applied the kinds of tools that are commonplace in the business world to help Washington make more informed choices.

For example, with the help of a nonpartisan budget model from the Wharton School of the University of Pennsylvania, policymakers and the public can uncover some valuable—and even surprising—information about our choices surrounding Social Security, immigration and other issues.

By analyzing more than 4,000 different Social Security policy options, for example, the model projects that the Social Security Trust Fund will be depleted three years earlier than the Social Security Administration’s projections, barring any changes in current law. The tool’s projected shortfalls are larger than the SSA’s, in fact—because it takes into account how changes over time will affect the outcome. We also learn that many standard policy options fail to significantly move the Trust Fund exhaustion date, as these policies phase in too slowly or are too small. Securing Social Security, we now know, requires a range of policy combinations and potentially larger changes than we may have been considering.
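To make the idea of sweeping thousands of policy combinations concrete, here is a deliberately toy projection in which every starting value and growth rate is invented for illustration; the actual Penn Wharton Budget Model incorporates demographics, behavioral responses and macroeconomic feedback that this sketch leaves out.

```python
from itertools import product

def depletion_year(tax_bump, age_bump, cola_trim,
                   year=2016, balance=2800.0, income=920.0, outgo=930.0):
    """Project the year a toy trust fund (in $bn) hits zero. Every starting
    value and growth rate here is invented for illustration only."""
    while balance > 0 and year < 2100:
        receipts = income * (1 + tax_bump)
        benefits = outgo * (1 - cola_trim) * (1 - 0.01 * age_bump)
        balance += receipts - benefits
        income *= 1.03   # assumed growth in payroll tax receipts
        outgo *= 1.04    # assumed growth in scheduled benefits
        year += 1
    return year

# Sweep a small grid of policy combinations, the way a full model sweeps thousands.
for tax, age, cola in product([0.0, 0.01, 0.02], [0, 1, 2], [0.0, 0.005]):
    print(f"payroll tax +{tax:.0%}, retirement age +{age}y, COLA -{cola:.1%}: "
          f"depleted ~{depletion_year(tax, age, cola)}")
```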

Immigration policy, too, is an area where we could all benefit from greater understanding. The political left argues that legalizing undocumented workers will have a positive impact on jobs and the economy. The political right argues for just the opposite—deportation of undocumented workers—for many of the same reasons. But, it turns out, the numbers don’t offer much support to either side.

On one hand, legalization actually slightly reduces the number of jobs. The reason is simple: legal immigrants have better access to school and college, and they can spend more time looking for the best job match. However, because legal immigrants can gain more skills, the actual impact on GDP from legalization alone is basically a wash.

The other option being discussed, deportation, also reduces jobs, in this case because the number of native-born workers can’t rise enough to absorb the job losses caused by deportation. GDP also declines. Calculations based on 125 different immigration policy combinations show that increasing the total amount of legal immigrants—especially those with higher skills—is the most effective policy for increasing employment rates and GDP….(More)”

Building a Civic Tech Sector to Last: Design Principles to Generate a Civic Tech Movement


Stefaan G. Verhulst at Positive Returns (Medium): “Over the last few years we have seen growing recognition of the potential of “civic tech,” or the use of technology that “empowers citizens to make government more accessible, efficient and effective (definition provided in “Engines of Change”)”. One commentator recently described “civic tech as the next big thing.” At the same time, we are yet to witness a true tech-enabled transformation of how government works and how citizens engage with institutions and with each other to solve societal problems. In many ways, civic tech still operates under the radar screen and often lacks broad acceptance. So how do we accelerate and expand the civic tech sector? How can we build a civic tech field that can last and stand the test of time?

The “Engines of Change” report written for Omidyar Network by Purpose seeks to provide an answer to these questions in the context of the United States….

Given the new insights gained from the report, how to move forward? How to translate its findings into a strategy that seeks to improve people’s lives and addresses societal problems by leveraging technology? What emerges from reading the report, and reflecting on how fields and movements have been built in other areas (e.g., the digital learning movement by the MacArthur Foundation or the Hewlett Foundation’s efforts to build a conflict resolution field), are a set of design principles that, when applied consistently, may generate a true lasting civic tech movement. These principles include:

  • Define a common problem that matters enough to work on collectively and identify a unique opportunity to solve it. Most successful movements seek to solve hard problems. So what is the problem that civic tech seeks to address? …
  • Encourage experimentation. As it stands, there is no shortage of experimentation with new platforms and tools in the civic tech space. What is missing, however, is the type of assessment that uncovers whether or not such efforts are actually working, and why or why not. Rather than viewing experimentation as simply “trying new things,” the field could embrace “fast-cycle action research” to understand both more quickly, and more precisely, when an innovation works, for whom, and under what conditions.
  • Establish an evidence base and a common set of metrics. While there is good reason to believe that breakthrough solutions may come from using technology, there are still too few studies measuring exactly how impactful civic tech is. Without a deeper understanding of whether, when, why and to what extent an intervention has made an impact, the civic tech movement will lack credibility. To accelerate the rate of experimentation and create more agile institutions capable of piloting civic tech solutions, we need research that will enable the sector to move away from “faith-based” initiatives toward “evidence-based” ones. The TicTec conference, the Opening Governance Research Network and the recently launched Open Governance Research Exchange are some initiatives that seek to address this shortcoming. Yet more analysis and translation of current findings into clear baselines of impact against common metrics is needed to make the sector more reliable.
  • Develop a Network Infrastructure…
  • Identify the signal…

As every engineer knows, building engines requires a set of basic design principles. Similarly, transforming the civic tech sector into a sustainable engine of change may require the implementation of the principles outlined above. Let’s build a civic tech sector to last….(More)”