Foundation Openness: A Critical Component of Foundation Effectiveness


Lindsay Louie at PhilanthroFiles: “We created the Fund for Shared Insight—a funder collaborative with diverse support from 30 different funders—to increase foundation openness. We believe that if foundations are more open—which we define as how they share about their goals and strategies; make decisions and measure progress; listen and engage in dialogue with others; act on what they hear; and share what they themselves have learned—they will be more effective.

We were so pleased to support Exponent Philanthropy’s video series featuring philanthropists being more open about their work: Philanthropy Lessons. To date, Exponent Philanthropy has released five of the nine videos in the series.

Future video releases include:

  • Who Knows More? (expected 4/27/16)
  • Being Transparent (expected 4/27/16)
  • Value Beyond Dollars (expected 5/25/16)
  • Getting Out of the Office (expected 6/22/16)

We would love to see many more foundations make videos like these; engage in conversation with each other about these philanthropy lessons online and in person; share their experiences live at regional grantmaker association meetings or national conferences like those Exponent Philanthropy hosts; and find other ways to be more open.

Why is this so important?

Recent research from the Center for Effective Philanthropy (report on CEP’s website here; full disclosure: we funded this research) found that foundation CEOs see grantees, nonprofits that are considering applying for a grant, and other foundations working on similar issues as the top three audiences who benefit from a foundation being open about its work. Further, 86% of foundation CEOs who responded to the survey said they believe transparency is necessary for building strong relationships with grantees.

It was great to learn from this research that many foundations are open about their criteria for nonprofits seeking funding, their programmatic goals, and their strategies, and that they share who makes decisions in the grantee selection process. Yet the research also found that foundations are not as open about sharing what they are achieving, how they assess their work, and their experiences with what has and hasn’t worked—and that foundation CEOs believe it would be beneficial for foundations to share more in these specific areas….(More)”

The Alberta CoLab Story: Redesigning the policy development process in government


Alex Ryan at Medium: “Alberta CoLab is an evolving experiment built on three counter-intuitive ideas:

1. Culture shifts faster through collaborative project work than through a culture change initiative.

2. The way to accelerate policy development is to engage more perspectives and more complexity.

3. The best place to put a cross-ministry design team is in a line ministry.

I want to explain what CoLab is and why it has evolved the way it has. We don’t view CoLab as a best practice to be replicated, since our model is tailored to the specific culture and context of Alberta. Perhaps you are also trying to catalyze innovation inside a large bureaucratic organization. I hope you can learn something from our journey so far….

….Both the successes and frustrations of Alberta CoLab are consequences of the way that we have mediated some key tensions and tradeoffs involved with setting up a public sector innovation lab. Practitioners in other labs will likely recognize these tensions and tradeoffs, although your successes and frustrations will be different depending on how your business model reconciles them.

  1. Where should the lab be? Public innovation labs can exist inside, outside, or on the edge of government. Dubai The Model Centre and Alberta CoLab operate inside government. Inside labs have the best access to senior decision makers and the authority to convene whole-of-government collaborations, but may find it harder to engage openly with citizens and stakeholders. Unicef Innovation Labs and NouLab exist outside of government. Outside labs have more freedom in who they convene, the kind of container they can create, and timelines to impact, but find it harder to connect with and affect policy change. MindLab and MaRS Solutions Lab are examples of labs on the edge of government. This positioning can offer the best of both worlds. However, edge labs are vulnerable to fluctuations in their relationship with government. Surviving and thriving on the edge means continually walking a tightrope between autonomy and integration. Labs can change their positioning. Alberta CoLab began as an external consulting project. The Behavioural Insights Team is a social purpose company that was spun off from a lab inside the U.K. government. The location of the lab is unlikely to change often, so it is an important strategic choice.
  2. How deep should the lab go? Here the tension is between taking on small, tactical improvement projects that deliver tangible results and tackling the big, strategic systems changes that will take years to manifest. Public sector innovation labs are a reaction to the almost total failure of traditional approaches to move the needle on systems change. Therefore, most labs have aspirations to the strategic and the systemic. Yet most labs are also operating in a dominant culture that demands quick wins and measures success by linear progress against a simple logic model theory of change. We believe that operating at either extreme of this spectrum is equally misguided. We use a portfolio approach and a barbell strategy to mediate this tension. Having a portfolio of projects allows us to invest energy in systems change and generate immediate value. It allows us to balance our projects across three horizons of innovation: sustaining innovations; disruptive innovations; and transformative innovations. A barbell strategy means avoiding the middle of the bell curve. We maintain a small number of long-term, flagship initiatives, combined with a rapid turnover of quick-win projects. This allows us to remind the organization of our immediate value without sacrificing long-term commitment to systems change.
  3. What relationship should the lab have with government? Even an inside lab must create some distance between itself and the broader government culture if it is to provide a safe space for innovation. There is a tension between being separate and being integrated. Developing novel ideas that get implemented requires the lab to be both separate and integrated at the same time. You need to decouple from regular policy cycles to enable divergence and creativity, yet provide input into key decisions at the right time. Sometimes these decision points are known in advance, but more often this means sensing and responding to a dynamic decision landscape. Underneath any effective lab is a powerful social network, which needs to cut across government silos and strata and draw in external perspectives. I think of a lab as having a kind of respiratory rhythm. It starts by bringing fresh ideas into the organization, like a deep breath that provides the oxygen for new thinking. But new ideas are rarely welcome in old organizations. When the lab communicates outwards, these new ideas should be translated into familiar language and concepts, and then given a subtle twist. Often labs believe they have to differentiate their innovations — to emphasize novelty — to justify their existence as an innovation lab. But the more the output of the lab resembles the institutional culture, the more it appears obvious and familiar, the more likely it will be accepted and integrated into the mainstream.
  4. What relationship should the lab have with clients? Alberta CoLab is a kind of in-house consultancy that provides services to clients across all ministries. There is a tension in the nature of the relationship, which can span from consulting problem-solver to co-design facilitator to teacher. The main problem with a consulting model is it often builds dependency rather than capacity. The challenge with an educational relationship is that clients struggle to apply theory that is disconnected from practice. We often use facilitation as a ‘cover’ for our practice, because it allows us to design a process that enables both reflective practice and situated learning. By teaching systemic design and strategic foresight approaches through taking on live projects, we build capacity while doing the work our clients need to do anyway. This helps to break down barriers between theory and practice, learning and doing. Another tension is between doing what the client says she wants and what she needs but does not articulate. Unlike a customer, who is always right, the designer has a duty of care to their client. This involves pushing back when the client demands are unreasonable, reframing the challenge when the problem received is a symptom of a deeper issue, and clearly communicating the risks and potential side effects of policy options. As Denys Lasdun has said about designers: “Our job is to give the client, on time and on cost, not what he wants, but what he never dreamed he wanted; and when he gets it, he recognizes it as something he wanted all the time.”

Lessons Learned

These are our top lessons learned from our journey to date that may have broader applicability.

  1. Recruit outsiders and insiders. Bringing in outside experts elevates the lab’s status. Outsiders are essential to question and challenge organizational patterns that insiders take as given. Insiders bring an understanding of organizational culture. They know how to move files through the bureaucracy and they know where the landmines are.
  2. Show don’t tell. As lab practitioners, we tend to be process geeks with a strong belief in the superiority of our own methods. There is a temptation to cast oneself in the role of the missionary bringing the good word to the unwashed masses. Not only is this arrogant, it’s counter-productive. It’s much more effective to show your clients how your approach adds value by starting with a small collaborative project. If your approach really is as good as you believe it is, the results will speak for themselves. Once people are envious of the results you have achieved, they will be curious and open to learning how you did it, and they will demand more of it.
  3. Be a catalyst, not a bottleneck. Jess McMullin gave us this advice when we founded CoLab. It’s why we developed a six day training course to train over 80 systemic designers across the government. It’s why we run communities of practice on systemic design and strategic foresight. And it’s why we publish about our experiences and share the toolkits we develop. If the innovation lab is an ivory tower, it will not change the way government works. Think instead of the lab as the headquarters of a democratic grassroots movement.
  4. Select projects based on the potential for reframing. There are many criteria we apply when we decide whether to take on a new project. Is it a strategic priority? Is there commitment to implement? Are the client expectations realistic? Can our contribution have a positive impact? These are useful but apply to almost any service offering. The unique value a social innovation lab offers is discontinuous improvement. The source of discontinuous improvement is reframing — seeing a familiar challenge with new eyes, from a different perspective that opens up new potential for positive change. If a project ticks all the boxes, except that the client is certain they already know what the problem is, then that already limits the kind of solutions they will consider. Unless they are open to reframing, they will likely be frustrated by a lab approach, and would be better served by traditional facilitation or good project management.
  5. Prototyping is just the end of the beginning. After one year, we went around and interviewed the first 40 clients of Alberta CoLab. We wanted to know what they had achieved since our co-design sessions. Unfortunately, for most of them, the answer was “not much.” They were very happy with the quality of the ideas and prototypes generated while working with CoLab and were hopeful that the ideas would eventually see the light of day. But they also noted that once participants left the lab and went back to their desks, they found it difficult to sustain the momentum and excitement of the lab, and easy to snap back to business as usual. We had to pivot our strategy to take on fewer projects, but take on a greater stewardship role through to implementation.
  6. Find a rhythm. It’s not useful to create a traditional project plan with phases and milestones for a non-linear and open-ended discovery process like a lab. Yet without some kind of structure, it’s easy to lose momentum or become lost. The best projects I have participated in create a rhythm: an alternating movement between open collaboration and focused delivery. The lab opens up every few months to engage widely on what needs to be done and why. A core team then works between collaborative workshops on how to make it happen. Each cycle allows the group to frame key challenges, make progress, and receive feedback, which builds momentum and commitment.
  7. Be a good gardener. Most of the participants of our workshops arrive with a full plate. They are already 100% committed in their day jobs. Even when they are enthusiastic to ideate, they will be reluctant to take on any additional work. If we want our organizations to innovate, first we have to create the space for new work. We need to prune those projects that we have kept on life support — not yet declared dead but not priorities. This often means making difficult decisions. The flip side of pruning is to actively search for positive deviance and help it to grow. When you find something that’s already working, you just need to turn up the good…..(More)”

The sharing economy comes to scientific research


 at the Conversation: “…to perform top-quality and cost-effective research, scientists need these technologies and the technical knowledge of experts to run them. When money is tight, where can scientists turn for the tools they need to complete their projects?

Sharing resources

An early solution to this problem was to create what the academic world calls “resource labs” that specialize in one or more specific types of science experiments (e.g., genomics, cell culture, proteomics). Researchers can then order and pay for that type of experiment from the resource lab instead of doing it on their own.

By focusing on one area of science, resource labs become the experts in that area and do the experiments better, faster and cheaper than most scientists could do in their own labs. Scientists no longer stumble through failed experiments trying to learn a new technique when a resource lab can do it correctly from the start.

The pooled funds from many research projects allow resource labs to buy better and faster equipment than any individual scientist could afford. This provides more researchers access to better technology at lower costs – which also saves taxpayers money, since many grants are government-backed….

Connecting people on a scientific Craigslist

This is a common paradox, with several efforts under way to address it. For example, MIT has created several “remote online laboratories” running experiments that can be controlled via the internet, to help enrich teaching in places that can’t afford advanced equipment. Harvard’s eagle-i system is a directory where researchers can list information, data and equipment they are willing to share with others – including cell lines, research mice, and equipment. Different services work for different institutions.

In 2011, Dr. Elizabeth Iorns, a breast cancer researcher, developed a mouse model to study how breast cancer spreads, but her institution didn’t have the equipment to finish one part of her study. My resource lab could complete the project, but despite significant searching, Dr. Iorns did not have an effective way to find labs like mine.

Actively connecting scientists with resource labs, and helping resource labs keep their equipment optimally busy, is a model Iorns and cofounder Dan Knox have developed into a business, called Science Exchange. (I am on its Lab Advisory Board, but have no financial interest in the company.) A little bit Craigslist and Travelocity for science rolled into one, Science Exchange provides scientists and expert resource labs a way to find each other to keep research progressing.

Unlike Starbucks, resource labs are not found on every corner and can be difficult for scientists to find. Now a simple search provides scientists a list of multiple resource labs that could do the experiments, including estimated costs and speed – and even previous users’ reviews of the choices.

I signed onto Science Exchange soon after it went live and Iorns immediately sent her project to my lab. We completed the project quickly, resulting in the first peer-reviewed publication made possible through Science Exchange….(More).

OpenTrials: towards a collaborative open database of all available information on all clinical trials


Paper by Ben Goldacre and Jonathan Gray at BioMed Central: “OpenTrials is a collaborative and open database for all available structured data and documents on all clinical trials, threaded together by individual trial. With a versatile and expandable data schema, it is initially designed to host and match the following documents and data for each trial: registry entries; links, abstracts, or texts of academic journal papers; portions of regulatory documents describing individual trials; structured data on methods and results extracted by systematic reviewers or other researchers; clinical study reports; and additional documents such as blank consent forms, blank case report forms, and protocols. The intention is to create an open, freely re-usable index of all such information and to increase discoverability, facilitate research, identify inconsistent data, enable audits on the availability and completeness of this information, support advocacy for better data and drive up standards around open data in evidence-based medicine. The project has phase I funding. This will allow us to create a practical data schema and populate the database initially through web-scraping, basic record linkage techniques, crowd-sourced curation around selected drug areas, and import of existing sources of structured data and documents. It will also allow us to create user-friendly web interfaces onto the data and conduct user engagement workshops to optimise the database and interface designs. Where other projects have set out to manually and perfectly curate a narrow range of information on a smaller number of trials, we aim to use a broader range of techniques and attempt to match a very large quantity of information on all trials. We are currently seeking feedback and additional sources of structured data….(More)”
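
As a rough illustration of what threading documents together by individual trial could look like in practice, here is a minimal sketch in Python; the class names, field names, and document kinds are assumptions made for the example, not OpenTrials’ actual schema.

```python
# Hypothetical sketch of a threaded trial record, loosely following the
# document types listed above. Names and fields are illustrative only and
# do not reflect OpenTrials' actual schema.
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple


@dataclass
class Document:
    kind: str                 # e.g. "registry_entry", "journal_paper", "regulatory", "protocol"
    source_url: str
    title: Optional[str] = None


@dataclass
class TrialRecord:
    trial_id: str             # a registry identifier used to thread documents together
    registry_entries: List[Document] = field(default_factory=list)
    papers: List[Document] = field(default_factory=list)
    regulatory_documents: List[Document] = field(default_factory=list)
    other_documents: List[Document] = field(default_factory=list)

    def attach(self, doc: Document) -> None:
        """Route an incoming document to the right bucket for this trial."""
        buckets = {
            "registry_entry": self.registry_entries,
            "journal_paper": self.papers,
            "regulatory": self.regulatory_documents,
        }
        buckets.get(doc.kind, self.other_documents).append(doc)


def thread_documents(docs: List[Tuple[str, Document]]) -> Dict[str, TrialRecord]:
    """Basic record linkage: group scraped (trial_id, document) pairs by trial."""
    trials: Dict[str, TrialRecord] = {}
    for trial_id, doc in docs:
        trials.setdefault(trial_id, TrialRecord(trial_id)).attach(doc)
    return trials
```

In this sketch each scraped or imported document carries a trial identifier, and a simple linkage step groups everything known about the same trial into one record: a toy version of the open index the paper describes, which in practice would need far more robust matching.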

How Big Data Harms Poor Communities


Kaveh Waddell in the Atlantic: “Big data can help solve problems that are too big for one person to wrap their head around. It’s helped businesses cut costs, cities plan new developments, intelligence agencies discover connections between terrorists, health officials predict outbreaks, and police forces get ahead of crime. Decision-makers are increasingly told to “listen to the data,” and make choices informed by the outputs of complex algorithms.

But when the data is about humans—especially those who lack a strong voice—those algorithms can become oppressive rather than liberating. For many poor people in the U.S., the data that’s gathered about them at every turn can obstruct attempts to escape poverty.

Low-income communities are among the most surveilled communities in America. And it’s not just the police that are watching, says Michele Gilman, a law professor at the University of Baltimore and a former civil-rights attorney at the Department of Justice. Public-benefits programs, child-welfare systems, and monitoring programs for domestic-abuse offenders all gather large amounts of data on their users, who are disproportionately poor.
In certain places, in order to qualify for public benefits like food stamps, applicants have to undergo fingerprinting and drug testing. Once people start receiving the benefits, officials regularly monitor them to see how they spend the money, and sometimes check in on them in their homes.

Data gathered from those sources can end up feeding back into police systems, leading to a cycle of surveillance. “It becomes part of these big-data information flows that most people aren’t aware they’re captured in, but that can have really concrete impacts on opportunities,” Gilman says.

Once an arrest crops up on a person’s record, for example, it becomes much more difficult for that person to find a job, secure a loan, or rent a home. And that’s not necessarily because loan officers or hiring managers pass over applicants with arrest records—computer systems that whittle down tall stacks of resumes or loan applications will often weed some out based on run-ins with the police.

When big-data systems make predictions that cut people off from meaningful opportunities like these, they can violate the legal principle of presumed innocence, according to Ian Kerr, a professor and researcher of ethics, law, and technology at the University of Ottawa.

Outside the court system, “innocent until proven guilty” is upheld by people’s due-process rights, Kerr says: “A right to be heard, a right to participate in one’s hearing, a right to know what information is collected about me, and a right to challenge that information.” But when opaque data-driven decision-making takes over—what Kerr calls “algorithmic justice”—some of those rights begin to erode….(More)”

Innovation and Its Enemies: Why People Resist New Technologies


Book by Calestous Juma: “The rise of artificial intelligence has rekindled a long-standing debate regarding the impact of technology on employment. This is just one of many areas where exponential advances in technology signal both hope and fear, leading to public controversy. This book shows that many debates over new technologies are framed in the context of risks to moral values, human health, and environmental safety. But it argues that behind these legitimate concerns often lie deeper, but unacknowledged, socioeconomic considerations. Technological tensions are often heightened by perceptions that the benefits of new technologies will accrue only to small sections of society while the risks will be more widely distributed. Similarly, innovations that threaten to alter cultural identities tend to generate intense social concern. As such, societies that exhibit great economic and political inequities are likely to experience heightened technological controversies.

Drawing from nearly 600 years of technology history, Innovation and Its Enemies identifies the tension between the need for innovation and the pressure to maintain continuity, social order, and stability as one of today’s biggest policy challenges. It reveals the extent to which modern technological controversies grow out of distrust in public and private institutions. Using detailed case studies of coffee, the printing press, margarine, farm mechanization, electricity, mechanical refrigeration, recorded music, transgenic crops, and transgenic animals, it shows how new technologies emerge, take root, and create new institutional ecologies that favor their establishment in the marketplace. The book uses these lessons from history to contextualize contemporary debates surrounding technologies such as artificial intelligence, online learning, 3D printing, gene editing, robotics, drones, and renewable energy. It ultimately makes the case for shifting greater responsibility to public leaders to work with scientists, engineers, and entrepreneurs to manage technological change, make associated institutional adjustments, and expand public engagement on scientific and technological matters….(More)”

Big Data in the Public Sector


Chapter by Ricard Munné in New Horizons for a Data-Driven Economy: “The public sector is becoming increasingly aware of the potential value to be gained from big data, as governments generate and collect vast quantities of data through their everyday activities.

The benefits of big data in the public sector can be grouped into three major areas, based on a classification of the types of benefits: advanced analytics, through automated algorithms; improvements in effectiveness, providing greater internal transparency; and improvements in efficiency, where better services can be provided based on the personalization of services and learning from the performance of such services.

The chapter examined several drivers and constraints that have been identified, which can boost or stop the development of big data in the sector depending on how they are addressed. The findings, after analysing the requirements and the technologies currently available, show that there are open research questions to be addressed in order to develop these technologies so that competitive and effective solutions can be built. The main developments are required in the fields of scalability of data analysis, pattern discovery, and real-time applications. Also required are improvements in provenance for the sharing and integration of data from the public sector. It is also extremely important to provide integrated security and privacy mechanisms in big data applications, as the public sector collects vast amounts of sensitive data. Finally, respecting the privacy of citizens is a mandatory obligation in the European Union….(More)”

First, design for data sharing


John Wilbanks & Stephen H Friend in Nature Biotechnology: “To upend current barriers to sharing clinical data and insights, we need a framework that not only accounts for choices made by trial participants but also qualifies researchers wishing to access and analyze the data.

This March, Sage Bionetworks (Seattle) began sharing curated data collected from >9,000 participants of mPower, a smartphone-enabled health research study for Parkinson’s disease. The mPower study is notable as one of the first observational assessments of human health to rapidly achieve scale as a result of its design and execution purely through a smartphone interface. To support this unique study design, we developed a novel electronic informed consent process that includes participant-determined data-sharing preferences. It is through these preferences that the new data—including self-reported outcomes and quantitative sensor data—are shared broadly for secondary analysis. Our hope is that by sharing these data immediately, prior even to our own complete analysis, we will shorten the time to harnessing any utility that this study’s data may hold to improve the condition of patients who suffer from this disease.
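
As a loose sketch of how participant-determined sharing preferences can gate secondary release, the snippet below filters study records by a consent flag before they are shared; the record fields and scope values are hypothetical and are not Sage Bionetworks’ actual implementation.

```python
# Hypothetical illustration of participant-determined data-sharing preferences.
# Field names and scope values are invented for the example; they are not
# mPower's real schema.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class ParticipantRecord:
    participant_id: str
    sharing_scope: str        # e.g. "broad" (qualified researchers) or "study_team_only"
    survey_responses: Dict[str, str]
    sensor_summaries: Dict[str, float]


def records_for_secondary_analysis(records: List[ParticipantRecord]) -> List[ParticipantRecord]:
    """Release only the records whose owners opted into broad sharing."""
    return [r for r in records if r.sharing_scope == "broad"]
```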

Turbulent times for data sharing

Our release of mPower comes at a turbulent time in data sharing. The power of data for secondary research is top of mind for many these days. Vice President Joe Biden, in heading President Barack Obama’s ambitious cancer ‘moonshot’, describes data sharing as second only to funding in importance to the success of the effort. However, this powerful support for data sharing stands in opposition to the opinions of many within the research establishment. To wit, the august New England Journal of Medicine (NEJM)’s recent editorial suggested that those who wish to reuse clinical trial data without the direct participation and approval of the original study team are “research parasites”4. In the wake of colliding perspectives on data sharing, we must not lose sight of the scientific and societal ends served by such efforts.

It is important to acknowledge that meaningful data sharing is a nontrivial process that can require substantial investment to ensure that data are shared with sufficient context to guide data users. When data analysis is narrowly targeted to answer a specific and straightforward question—as with many clinical trials—this added effort might not result in improved insights. However, many areas of science, such as genomics, astronomy and high-energy physics, have moved to data collection methods in which large amounts of raw data are potentially of relevance to a wide variety of research questions, but the methodology of moving from raw data to interpretation is itself a subject of active research….(More)”

Wiki-fishing


The Economist: “….Mr Rhoads is a member of a network started by the Alaska Longline Fishermen’s Association (ALFA), which aims to do something about this and to reduce by-catch of sensitive species such as rockfish at the same time. Network fishermen, who numbered only 20 at the project’s start, agreed to share data on where and what they were catching in order to create maps that highlighted areas of high by-catch. Within two years they had reduced accidental rockfish harvest by as much as 20%.

The rockfish mapping project expanded to create detailed maps of the sea floor, pooling data gathered by transducers fixed to the bottoms of boats. By combining thousands of data points as vessels traverse the fishing grounds, these “wikimaps”—created and updated through crowdsourcing—show gravel beds where bottom-dwelling halibut are likely to linger, craggy terrain where rockfish tend to lurk, and outcrops that could snag gear.
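
As a rough sketch of how pooled soundings can become a crowdsourced seafloor map, the snippet below bins depth readings from many vessels into grid cells and averages them; the grid resolution and record format are assumptions for illustration, not ALFA’s actual method.

```python
# Hypothetical sketch of pooling depth soundings from many vessels into a
# simple gridded "wikimap". Grid size and record format are illustrative.
from collections import defaultdict
from statistics import mean
from typing import Dict, Iterable, Tuple

CELL_DEG = 0.001  # grid cell size in degrees; an assumption for the example


def grid_cell(lat: float, lon: float) -> Tuple[int, int]:
    """Snap a position to a grid cell index."""
    return (round(lat / CELL_DEG), round(lon / CELL_DEG))


def build_depth_map(soundings: Iterable[Tuple[float, float, float]]) -> Dict[Tuple[int, int], float]:
    """soundings: (lat, lon, depth_m) points pooled from member vessels."""
    cells = defaultdict(list)
    for lat, lon, depth in soundings:
        cells[grid_cell(lat, lon)].append(depth)
    # Average the contributions in each cell to smooth out single-vessel noise.
    return {cell: mean(depths) for cell, depths in cells.items()}
```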

Public charts are imprecise, and equipment with the capability to sense this level of detail could cost a fisherman more than $70,000. Skippers join ALFA for as little as $250, invest a couple of thousand dollars in computers and software and enter into an agreement to turn over fishing data and not to share the information outside the network, which now includes 85 fishermen.

Skippers say the project makes them more efficient, better able to find the sort of fish they want and avoid squandering time on lost or tangled gear. It also means fewer hooks in the water and fewer hours at sea to catch the same amount of fish….(More)”

Data and Humanitarian Response


The GovLab: “As part of an ongoing effort to build a knowledge base for the field of opening governance by organizing and disseminating its learnings, the GovLab Selected Readings series provides an annotated and curated collection of recommended works on key opening governance topics. In this edition, we explore the literature on Data and Humanitarian Response. To suggest additional readings on this or any other topic, please email [email protected]. All our Selected Readings can be found here.

Data and its uses for Governance

Context

Data, when used well in a trusted manner, allows humanitarian organizations to innovate how they respond to emergency events, including better coordination of post-disaster relief efforts, the ability to harness local knowledge to create more targeted relief strategies, and tools to predict and monitor disasters in real time. Consequently, in recent years both multinational groups and community-based advocates have begun to integrate data collection and evaluation strategies into their humanitarian operations, to better and more quickly respond to emergencies. However, this movement poses a number of challenges. Compared to the private sector, humanitarian organizations are often less equipped to successfully analyze and manage big data, which poses a number of risks related to the security of victims’ data. Furthermore, complex power dynamics which exist within humanitarian spaces may be further exacerbated through the introduction of new technologies and big data collection mechanisms. Below we share:

  • Selected Reading List (summaries and hyperlinks)
  • Annotated Selected Reading List
  • Additional Readings….(More)”