The Flow of Technology Talent into Government and Civil Society


Report for the Ford Foundation and the MacArthur Foundation: “As information technology further suffuses every aspect of our lives, government will inevitably have a role to play in ensuring that technology serves the public interest. The ability for government to improve operations and provide services to citizens more efficiently through the effective use of technology is among the greatest contemporary opportunities for the public sector…
Among the key findings of this report:

  • The Current Pipeline Is Insufficient: the vast majority of interviewees indicated that there is a severe paucity of individuals with technical skills in computer science, data science, and the Internet or other information technology expertise in civil society and government. In particular, many of those interviewed noted that existing talent levels fail to meet current needs to develop, leverage, or understand technology.
  • Barriers to Recruitment and Retention Are Acute: many of those interviewed said that substantial barriers thwart the effective recruitment and retention of individuals with the requisite skills in government and civil society. Among the most common barriers mentioned were those of compensation, an inability to pursue groundbreaking work, and a culture that is averse to hiring and utilizing potentially disruptive innovators.
  • A Major Gap between the Public-Interest and For-Profit Sectors Persists: as a related matter, interviewees discussed superior for-profit recruitment and retention models. Specifically, the for-profit sector was perceived as providing both more attractive compensation (especially to young talent) and fostering a culture of innovation, openness, and creativity that was seen as more appealing to technologists and innovators.
  • A Need to Examine Models from Other Fields: interviewees noted significant space to develop new models to improve the robustness of the talent pipeline; in part, many existing models were regarded as unsustainable or incomplete. Interviewees did, however, highlight approaches from other fields that could provide relevant lessons to help guide investments in improving this pipeline.
  • Significant Opportunity for Connection and Training: despite consonance among those interviewed that the pipeline was incomplete, many individuals indicated the possibility for improved and more systematic efforts to expose young technologists to public interest issues and connect them to government and civil society careers through internships, fellowships, and other training and recruitment tools.
  • Culture Change Necessary: the culture of government and civil society – and its effects on recruitment and other bureaucratic processes – was seen as a vital challenge that would need to be addressed to improve the pipeline. This view manifested through comments that government and civil society organizations needed to become more open to utilizing technology and adopting a mindset of experimentation….”

Policy making 2.0: From theory to practice


Paper by E. Ferro, EN Loukis, Y. Charalabidis, and M. Osella in Government Information Quarterly: “Government agencies are gradually moving from simpler towards more sophisticated and complex practices of social media use, which are characterized by important innovations at the technological, political and organizational level. This paper intends to provide two contributions to the current discourse about such advanced approaches to social media exploitation. The first is of practical nature and has to do with assessing the potential and the challenges of a centralized cross-platform approach to social media by government agencies in their policy making processes. The second contribution is of theoretical nature and consists in the development of a multi-dimensional framework for an integrated evaluation of such advanced practices of social media exploitation in public policy making from technological, political and organizational perspectives, drawing from theoretical constructs from different domains. The proposed framework is applied for the evaluation of a pilot consultation campaign conducted in Italy using multiple social media and concerning the large scale application of a telemedicine program.”

Big Data needs Big Theory


Geoffrey West, former President of the Santa Fe Institute: “As the world becomes increasingly complex and interconnected, some of our biggest challenges have begun to seem intractable. What should we do about uncertainty in the financial markets? How can we predict energy supply and demand? How will climate change play out? How do we cope with rapid urbanization? Our traditional approaches to these problems are often qualitative and disjointed and lead to unintended consequences. To bring scientific rigor to the challenges of our time, we need to develop a deeper understanding of complexity itself….
The digital revolution is driving much of the increasing complexity and pace of life we are now seeing, but this technology also presents an opportunity. The ubiquity of cell phones and electronic transactions, the increasing use of personal medical probes, and the concept of the electronically wired “smart city” are already providing us with enormous amounts of data. With new computational tools and techniques to digest vast, interrelated databases, researchers and practitioners in science, technology, business and government have begun to bring large-scale simulations and models to bear on questions formerly out of reach of quantitative analysis, such as how cooperation emerges in society, what conditions promote innovation, and how conflicts spread and grow.
The trouble is, we don’t have a unified, conceptual framework for addressing questions of complexity. We don’t know what kind of data we need, nor how much, or what critical questions we should be asking. “Big data” without a “big theory” to go with it loses much of its potency and usefulness, potentially generating new unintended consequences.
When the industrial age focused society’s attention on energy in its many manifestations—steam, chemical, mechanical, and so on—the universal laws of thermodynamics came as a response. We now need to ask if our age can produce universal laws of complexity that integrate energy with information. What are the underlying principles that transcend the extraordinary diversity and historical contingency and interconnectivity of financial markets, populations, ecosystems, war and conflict, pandemics and cancer? An overarching predictive, mathematical framework for complex systems would, in principle, incorporate the dynamics and organization of any complex system in a quantitative, computable framework.
We will probably never make detailed predictions of complex systems, but coarse-grained descriptions that lead to quantitative predictions for essential features are within our grasp. We won’t predict when the next financial crash will occur, but we ought to be able to assign a probability of one occurring in the next few years. The field is in the midst of a broad synthesis of scientific disciplines, helping reverse the trend toward fragmentation and specialization, and is groping toward a more unified, holistic framework for tackling society’s big questions. The future of the human enterprise may well depend on it.”

Open Data: From ‘Platform’ to ‘Program’


Engaging Cities: “A few months ago, Dutch designer Mark van der Net launched OSCity.nl, a highly interesting example of what can be done with open data. At first, it looks like a mapping tool. The interface shows a – beautifully designed – map of The Netherlands, color coded according to whatever open data set the user selects, varying from geographical height to the location of empty office buildings. As such it is an example of a broader current in which artists, citizens, NGOs and business actors have built online tools to visualize all kinds of data, varying from open government data to collaboratively produced data sets focused on issues like environmental pollution.
What makes OSCity interesting is that it allows users to intuitively map various datasets in combination with each other in so-called ‘map stories’. For instance, a map of empty office space can be combined with maps of urban growth and decline, the average renting price per square meter of office space, as well as a map that displays the prices of houses for sale. The intersection of those maps shows you where empty office spaces are offered at or below half the price of regular houses and apartments. The result is thus not just an aesthetically pleasing state of affairs, but an action map. Policy makers, developers and citizens can use the insights produced by the map to find empty offices that are worthwhile to turn into houses.
There are two important lessons we can learn from this project. First, it shows the importance of programs like OSCity to make open data platforms operationable for various actors. Over the last few years governments and other organizations have started to open up their datasets, often accompanied by high expectations of citizen empowerment and greater transparency of governments. However, case studies have shown that opening up data and building an open platform is only a first step. Dawes and Helbig have shown that different stakeholders have different needs in terms of standards and protocols, whereas both citizens and government officials need the relevant skills to be able to understand and operate upon the data. ‘Vast amounts of useful information are contained in government data systems’, they write, ‘but the systems themselves are seldom designed for use beyond the collecting agency’s own needs.’ In other words: what is needed to deliver on the expectations of open data is not only a platform – a publicly available database – but also what I have called ‘programs’ – online tools with intuitive interfaces that make this data intelligible and actionable in concert with the needs of the public.
There is a second issue that OSCity raises. As Jo Bates has pointed out, the main question is: who exactly is empowered through programs like this? Will ‘programs’ that make data operationable work for citizens? Or will their procedures, standards and access be organized to benefit corporate interests? These interests are not necessarily contradictory, but if the goal is to empower citizens, it is important to engage them as stakeholders in the design of these programs.”
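The map intersection described above is, at bottom, a join-and-filter over per-area datasets. A minimal Python sketch of that idea, using invented neighborhood names, prices, and the at-or-below-half-price rule from the excerpt (OSCity's actual data and implementation are not shown here):

```python
# Hypothetical per-neighborhood figures: asking price per square meter of
# empty office space vs. asking price per square meter of houses for sale.
offices = {"Centrum": 900.0, "Zuidas": 2400.0, "Noord": 650.0}
houses = {"Centrum": 3100.0, "Zuidas": 4200.0, "Noord": 1200.0}

def conversion_candidates(office_prices, house_prices, ratio=0.5):
    """Areas where empty offices cost at most `ratio` of regular housing."""
    return sorted(
        area
        for area in office_prices.keys() & house_prices.keys()
        if office_prices[area] <= ratio * house_prices[area]
    )

print(conversion_candidates(offices, houses))  # ['Centrum']
```

The intersection of the two key sets mirrors the map overlay: only areas present in both datasets can appear on the resulting "action map".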

The Good Judgment Project: Harnessing the Wisdom of the Crowd to Forecast World Events


The Economist: “But then comes the challenge of generating real insight into forecasting accuracy. How can one compare forecasting ability?
The only reliable method is to conduct a forecasting tournament in which independent judges ask all participants to make the same forecasts in the same timeframes. And forecasts must be expressed numerically, so there can be no hiding behind vague verbiage. Words like “may” or “possible” can mean anything from probabilities as low as 0.001% to as high as 60% or 70%. But 80% always and only means 80%.
In the late 1980s one of us (Philip Tetlock) launched such a tournament. It involved 284 economists, political scientists, intelligence analysts and journalists and collected almost 28,000 predictions. The results were startling. The average expert did only slightly better than random guessing. Even more disconcerting, experts with the most inflated views of their own batting averages tended to attract the most media attention. Their more self-effacing colleagues, the ones we should be heeding, often don’t get on to our radar screens.
That project proved to be a pilot for a far more ambitious tournament currently sponsored by the Intelligence Advanced Research Projects Activity (IARPA), part of the American intelligence world. Over 5,000 forecasters have made more than 1m forecasts on more than 250 questions, from euro-zone exits to the Syrian civil war. Results are pouring in and they are revealing. We can discover who has better batting averages, not take it on faith; discover which methods of training promote accuracy, not just track the latest gurus and fads; and discover methods of distilling the wisdom of the crowd.
The big surprise has been the support for the unabashedly elitist “super-forecaster” hypothesis. The top 2% of forecasters in Year 1 showed that there is more than luck at play. If it were just luck, the “supers” would regress to the mean: yesterday’s champs would be today’s chumps. But they actually got better. When we randomly assigned “supers” into elite teams, they blew the lid off IARPA’s performance goals. They beat the unweighted average (wisdom-of-overall-crowd) by 65%; beat the best algorithms of four competitor institutions by 35-60%; and beat two prediction markets by 20-35%.
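The article compares forecasters' "batting averages" without naming a scoring rule. The Good Judgment Project is widely reported to have scored forecasters with the Brier score; a sketch of that rule follows, with illustrative numbers rather than tournament data:

```python
def brier_score(forecasts):
    """Mean squared error between probabilistic forecasts and outcomes.

    `forecasts` is a list of (probability, outcome) pairs, where outcome
    is 1 if the event happened and 0 otherwise. Lower is better: a
    perfect forecaster scores 0.0, while always saying 50% on binary
    questions scores 0.25.
    """
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# A sharp, well-calibrated forecaster beats one hiding behind 50/50 calls:
sharp = brier_score([(0.9, 1), (0.1, 0), (0.8, 1)])
vague = brier_score([(0.5, 1), (0.5, 0), (0.5, 1)])
assert sharp < vague
```

This is also why the article insists forecasts be numeric: a vague "possible" cannot be scored, but a stated 80% can be rewarded or penalized exactly.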
Over to you
To avoid slipping back to business as usual—believing we know things that we don’t—more tournaments in more fields are needed, and more forecasters. So we invite you, our readers, to join the 2014-15 round of the IARPA tournament. Current questions include: Will America and the EU reach a trade deal? Will Turkey get a new constitution? Will talks on North Korea’s nuclear programme resume? To volunteer, go to the tournament’s website at www.goodjudgmentproject.com. We predict with 80% confidence that at least 70% of you will enjoy it—and we are 90% confident that at least 50% of you will beat our dart-throwing chimps.”
See also https://web.archive.org/web/2013/http://www.iarpa.gov/Programs/ia/ACE/ace.html
 

Neighborhood Buzz: What people are tweeting in your city


“Neighborhood Buzz is an experimental system that lets you find out what people in your neighborhood, and neighborhoods in cities around the country, are talking about on Twitter. When you select a neighborhood from a city map, Neighborhood Buzz displays the main topics that people in that neighborhood are discussing — politics, sports, food, etc. — and then lets you drill down to look at the individual tweets in those categories.
The system also lets you see, at a glance, how much people in different neighborhoods in a city are talking about a given topic through a “heat map” overlay on the city’s geographical map.
Neighborhood Buzz uses geo-located tweets as input. Only a small fraction of tweets currently have location tags, but the number is sufficient to provide tens or hundreds of tweets per neighborhood per day.
The topical categorizer that the system uses is statistical — which means that even though we show only the tweets we are most confident the system is categorizing correctly, it still sometimes makes mistakes. You can let us know when the system has incorrectly categorized a tweet, and eventually that will help us to improve the system.
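A categorizer of the kind described, one that only surfaces tweets it is confident about, can be sketched as follows. The keyword lists, scoring, and threshold are hypothetical stand-ins; Neighborhood Buzz's actual statistical model is not published in this excerpt:

```python
# Toy topical categorizer: score each topic by keyword overlap, and
# surface a tweet only when the winning topic's share of the total
# score clears a confidence threshold (mirroring the "show only what
# we're confident about" behavior described above).
TOPIC_KEYWORDS = {
    "sports": {"game", "score", "team", "win"},
    "food": {"pizza", "restaurant", "dinner", "tacos"},
    "politics": {"mayor", "council", "election", "vote"},
}

def categorize(tweet, threshold=0.6):
    words = set(tweet.lower().split())
    scores = {topic: len(words & kw) for topic, kw in TOPIC_KEYWORDS.items()}
    total = sum(scores.values())
    if total == 0:
        return None  # no topical signal at all
    topic, best = max(scores.items(), key=lambda kv: kv[1])
    confidence = best / total
    return topic if confidence >= threshold else None  # suppress low confidence

print(categorize("great dinner at a new pizza restaurant"))  # food
print(categorize("the mayor watched the game"))  # mixed signals: None
```

Suppressing low-confidence labels trades coverage for precision, which matches the system's stated choice to show only tweets it is most confident about.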
Neighborhood Buzz was originally developed at Northwestern University Knight Lab in our joint projects class in technology and journalism, involving students and faculty from the Medill School of Journalism and the McCormick School of Engineering, Dept. of Electrical Engineering and Computer Science, at Northwestern. It was then re-architected and further developed at the Knight Lab.”

BBC throws weight behind open data movement


The Telegraph: “The BBC has signed Memoranda of Understanding (MoUs) with the Europeana Foundation, the Open Data Institute, the Open Knowledge Foundation and the Mozilla Foundation, supporting free and open internet technologies…

The agreements will enable closer collaboration between the BBC and each of the four organisations on a range of mutual interests, including the release of structured open data and the use of open standards in web development, according to the BBC.
One aim of the agreement is to give clear technical standards and models to organisations who want to work with the BBC, and give those using the internet a deeper understanding of the technologies involved.
The MoUs also bring together several existing areas of research and provide a framework to explore future opportunities. Through this and other initiatives, the BBC hopes to become a catalyst for open innovation by publishing clear technical standards, models, expertise and – where feasible – data.
The BBC has been publishing linked open data for some time, most notably as part of the /programmes service, where machine-readable information about the programme schedule is made available online.
It also helped to deliver the Olympics Data Service, which underpinned 10,490 athlete pages on the BBC sport website during the 2012 Olympics….

“The BBC has been at the forefront of technological innovation around broadcasting and online for many years delivering the benefits of new technologies to licence fee payers, offering new services and products to audiences around the world, and creating public value in the digital economy,” said James Purnell, BBC Director of Strategy and Digital.”

Can These Entrepreneurs Solve The Intractable Problems Of City Government?


FastCoExist: “In case the recent Obamacare debacle didn’t make it clear enough, the government has some serious problems getting technology to work correctly. This is something that President Obama has recognized in the past. In July, he made this statement: “I’m going to be asking more people around the country–more inventors and entrepreneurs and visionaries–to sign up to serve. We’ve got to have the brightest minds to help solve our biggest challenges.”
In San Francisco, that request has been taken on by the newly minted Entrepreneur-in-Residence (EIR) program–the first ever government-run program that helps startups to develop technologies that can be used to deal with pressing government issues. It’s kind of like a government startup incubator. This week, the EIR program announced 11 finalists for the program, which received 200 applications from startups across the world. Three to five startups will ultimately be chosen for the opportunity….
The 11 finalists range from small startups with just a handful of people doing cutting-edge work to companies valued at over $1 billion. Some of the highlights:

  • Arrive Labs, a company that crowdsources public transit data and combines it with algorithms and external conditions (like the weather) to predict congestion, and to offer riders faster alternatives.
  • A startup called Regroup that offers group messaging through a number of channels, including email, text, Facebook, Twitter, and digital signs.
  • Smart waste management company Compology, which is working on a wireless waste monitoring system to tell officials what’s inside city dumpsters and when they are full.
  • Birdi, a startup developing smart air quality, carbon monoxide, and smoke detectors that send alerts to your smartphone. The company also has an open API so that developers can pull in public outdoor air quality data.
  • Synthicity’s 3-D digital city simulation (think “real-life SimCity”), which is based on urban datasets. The simulation is geared towards transportation planners, urban designers, and others who rely on city data to make decisions…”

6 Projects That Make Data More Accessible Win $100,000 Each From Gates


Chronicle of Philanthropy: “Six nonprofit projects that aim to combine multiple sets of data to help solve social problems have each won $100,000 grants from the Bill & Melinda Gates Foundation…The winners:
• Pushpa Aman Singh, who founded GuideStar India as an effort of the Civil Society Information Services India. GuideStar India is the most comprehensive database of India’s registered charities. It has profiles of more than 4,000 organizations, and Ms. Singh plans to expand that number and the types of information included.
• Development Initiatives, an international aid organization, to support its partnership with the Ugandan nonprofit Development Research and Training. Together, they are trying to help residents of two districts in Uganda identify a key problem the communities face and use existing data sets to build both online and offline tools to help tackle that challenge…
• H.V. Jagadish, at the University of Michigan, to develop a prototype that will merge sets of incompatible geographic data to make them comparable. Mr. Jagadish, a professor of electrical engineering and computer science, points to crime precincts and school districts as an example. “We want to understand the impact of education on crime, but the districts don’t quite overlap with the precincts,” he says. “This tool will address the lack of overlap.”
• Vijay Modi, at Columbia University, to work with government agencies and charities in Nigeria on a tool similar to Foursquare, the social network that allows people to share their location with friends. Mr. Modi, a mechanical-engineering professor and faculty member of the university’s Earth Institute, envisions a tool that will help people find important resources more easily…
• Gisli Olafsson and his team at NetHope, a network of aid organizations. The group is building a tool to help humanitarian charities share their data more widely and in real time—potentially saving more lives during disasters…
• Development Gateway, a nonprofit that assists international development charities with technology, and GroundTruth Initiative, a nonprofit that helps residents of communities learn mapping and media skills. The two groups want to give people living in the slums of Nairobi, Kenya, more detailed information about local schools…”
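The non-overlapping-boundaries problem Mr. Jagadish describes is commonly tackled with area-weighted (areal) interpolation. A toy sketch with invented precincts, districts, and overlap shares follows; the prototype's actual method is not described in the excerpt:

```python
# Toy areal interpolation: reapportion counts recorded per crime precinct
# onto school districts, weighting by the (hypothetical) fraction of each
# precinct's area that falls inside each district.
precinct_crimes = {"P1": 120, "P2": 80}

# overlap[precinct][district] = share of the precinct's area in that district
overlap = {
    "P1": {"D1": 0.75, "D2": 0.25},
    "P2": {"D1": 0.10, "D2": 0.90},
}

def interpolate(counts, weights):
    """Spread each precinct's count over districts by area share."""
    out = {}
    for precinct, total in counts.items():
        for district, share in weights[precinct].items():
            out[district] = out.get(district, 0.0) + total * share
    return out

print(interpolate(precinct_crimes, overlap))  # {'D1': 98.0, 'D2': 102.0}
```

The weighting assumes events are spread uniformly within each precinct; real tools refine this with population or land-use data, but the reapportioning step is the core of making incompatible geographies comparable.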

Generating Value from Open Government Data


New paper by Thorhildur Jetzek, Michel Avital, and Niels Bjørn-Andersen: “A driving force for change in society is the trend towards Open Government Data (OGD). While the value generated by OGD has been widely discussed by public bodies and other stakeholders, little attention has been paid to this phenomenon in the academic literature. Hence, we developed a conceptual model portraying how data as a resource can be transformed to value. We show the causal relationships between four contextual, enabling factors, four types of value generation mechanisms and value. We use empirical data from 61 countries to test these relationships, using the PLS method. The results mostly support the hypothesized relationships. Our conclusion is that if openness is complemented with resource governance, capabilities in society and technical connectivity, use of OGD will stimulate the generation of economic and social value through four different archetypical mechanisms: Efficiency, Innovation, Transparency and Participation.”