Making cities smarter through citizen engagement


Vaidehi Shah at Eco-Business: “Rapidly progressing information and communications technology (ICT) is giving rise to an almost infinite range of innovations that can be implemented in cities to make them more efficient and better connected. However, in order for technology to yield sustainable solutions, planners must prioritise citizen engagement and strong leadership.
This was the consensus on Tuesday at the World Cities Summit 2014, where representatives from city and national governments, technology firms and private sector organisations gathered in Singapore to discuss strategies and challenges to achieving sustainable cities in the future.
Laura Ipsen, Microsoft corporate vice president for worldwide public sector, identified globalisation, social media, big data, and mobility as the four major technological trends prevailing in cities today, speaking at a plenary session on the theme “The next urban decade: critical challenges and opportunities”.
Despite these increasing trends, she cautioned, “technology does not build infrastructure, but it does help better engage citizens and businesses through public-private partnerships”.
For example, “LoveCleanStreets”, an online tool developed by Microsoft and partners, enables London residents to report infrastructure problems such as damaged roads or signs, shared Ipsen.
“By engaging citizens through this application, cities can fix problems early, before they get worse,” she said.
In Singapore, the ‘MyWaters’ app of PUB, Singapore’s national water agency, is also a key tool for the government to keep citizens up to date on water quality and safety issues in the country, she added.
Even if governments did not actively develop solutions themselves, simply making the immense amounts of data collected by the city open to businesses and citizens could make a big difference to urban liveability, Mark Chandler, director of the San Francisco Mayor’s Office of International Trade and Commerce, pointed out.
Opening up all of the data collected by San Francisco, for instance, yielded 60 free mobile applications that allow residents to access urban solutions related to public transport, parking, and electricity, among others, he explained. This easy and convenient access to infrastructure and amenities, which are a daily necessity, is integral to “a quality of life that keeps the talented workforce in the city,” Chandler said….”

Open Government Data: Helping Parents to find the Best School for their Kids


Radu Cucos at the Open Government Partnership blog: “…This challenge – finding the right school – involves one of the most important decisions in many parents’ lives.  Parents are looking for answers to questions such as which schools are located in safe neighborhoods, which ones have the highest teacher-student ratio, which schools have the best funding, which schools have the best premises or which ones have the highest average grades.
It is rarely an easy decision, but is made doubly difficult in the case of migrants.  People residing in the same location for a long time know, more or less, which are the best education institutions in their city, town or village. For migrants, the situation is absolutely the opposite. They have to spend extra time and resources in identifying relevant information about schools.
Open Government Data is an effective solution that can ease the problem of a lack of accessible information about existing schools in a particular country or location. By adopting open government data policies in education, governments release data about grades, funding, student and teacher numbers, and other data generated over time by schools, colleges, universities and other educational settings.
Developers then use this data to create applications that present the information in easily accessible formats. Three of the best apps which I have come across are highlighted below:

  • Discover Your School, developed under the open data initiative of the Canadian province of British Columbia, is a platform for parents who are interested in finding a school for their kids, learning about the school districts or comparing schools in the same area. The application provides comprehensive information, such as the number of students enrolled in schools each year, class sizes, teaching language, disaster readiness, results of skills assessment, and student and parent satisfaction. Information and data can be viewed in interactive formats, including maps. On top of that, Discover Your School engages parents in policy making and initiatives such as Erase Bullying or the British Columbia Education Plan.
  • The School Portal, developed under the Moldova Open Data Initiative, uses data made public by the Ministry of Education of Moldova to offer comprehensive information about 1,529 educational institutions in the Republic of Moldova. Users of the portal can access information about schools’ yearly budgets, budget implementation, expenditures, school ratings, students’ grades, and schools’ infrastructure and communications. The School Portal has a tool that allows visitors to compare schools based on different criteria – infrastructure, students’ performance or annual budgets. An additional value of the portal is that it serves as a platform where private sector entities selling school supplies can advertise their products. The School Portal also allows parents to interact virtually with the Ministry of Education of Moldova or with a psychologist in case they need additional information or have concerns regarding the education of their children.
  • RomaScuola, developed under the umbrella of the Italian Open Data Initiative, allows visitors to obtain valuable information about all schools in the Rome region. Distinguishing it from the two listed above is the ability to compare schools on such facets as frequency of teacher absence, internet connectivity, use of IT equipment for teaching, frequency of students’ transfer to other schools and quality of education as measured by the percentage of diplomas issued.

Open data on schools has great value not only for parents but also for the educational system in general. If education is considered a product, each country has its own school market, and perfect information about products is one of the main characteristics of competitive markets. From this perspective, giving parents access to information about school characteristics will make the schools market more competitive. Educational institutions will have incentives to improve their performance in order to attract more students…”
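The kind of comparison these portals offer is straightforward to reproduce once school-level data is released. Below is a minimal sketch, assuming a hypothetical `schools.csv` export with illustrative column names (`name`, `students`, `teachers`) rather than the schema of any portal above; it ranks schools by student-teacher ratio using only the Python standard library.

```python
import csv

def load_schools(path="schools.csv"):
    # Hypothetical export from an open data portal; the file name and
    # column names are illustrative, not taken from a specific initiative.
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def students_per_teacher(row):
    # Guard against missing or zero teacher counts in published data.
    teachers = float(row.get("teachers") or 0)
    students = float(row.get("students") or 0)
    return students / teachers if teachers else float("inf")

if __name__ == "__main__":
    schools = load_schools()
    # Rank schools from the lowest (most favourable) ratio upwards.
    for row in sorted(schools, key=students_per_teacher)[:10]:
        print(f"{row['name']}: {students_per_teacher(row):.1f} students per teacher")
```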

Twitter releasing trove of user data to scientists for research


Joe Silver at ArsTechnica: “Twitter has a 200-million-strong and ever-growing user base that broadcasts 500 million updates daily. It has been lauded for its ability to unsettle repressive political regimes, bring much-needed accountability to corporations that mistreat their customers, and combat other societal ills (whether such characterizations are, in fact, accurate). Now, the company has taken aim at disrupting another important sphere of human society: the scientific research community.
Back in February, the site announced its plan—in collaboration with Gnip—to provide a handful of research institutions with free access to its data sets from 2006 to the present. It’s a pilot program called “Twitter Data Grants,” with the hashtag #DataGrants. At the time, Twitter’s engineering blog explained the plan to enlist grant applications to access its treasure trove of user data:

Twitter has an expansive set of data from which we can glean insights and learn about a variety of topics, from health-related information such as when and where the flu may hit to global events like ringing in the new year. To date, it has been challenging for researchers outside the company who are tackling big questions to collaborate with us to access our public, historical data. Our Data Grants program aims to change that by connecting research institutions and academics with the data they need.

In April, Twitter announced that, after reviewing the more than 1,300 proposals submitted from more than 60 different countries, it had selected six institutions to receive data access. Projects approved included a study of foodborne gastrointestinal illnesses, a study measuring happiness levels in cities based on images shared on Twitter, and a study using geosocial intelligence to model urban flooding in Jakarta, Indonesia. There’s even a project exploring the relationship between tweets and sports team performance.
Twitter did not directly respond to our questions on Tuesday afternoon regarding the specific amount and types of data the company is providing to the six institutions. But in its privacy policy, Twitter explains that most user information is intended to be broadcast widely. As a result, the company likely believes that sharing such information with scientific researchers is well within its rights, as its services “are primarily designed to help you share information with the world,” Twitter says. “Most of the information you provide us is information you are asking us to make public.”
While mining such data sets will undoubtedly aid scientists in conducting experiments for which similar data was previously either unavailable or quite limited, these applications raise some legal and ethical questions. For example, Scientific American has asked whether Twitter will be able to retain any legal rights to scientific findings and whether mining tweets (many of which are not publicly accessible) for scientific research when Twitter users have not agreed to such uses is ethically sound.
In response, computational epidemiologists Caitlin Rivers and Bryan Lewis have proposed guidelines for ethical research practices when using social media data, such as avoiding personally identifiable information and making all the results publicly available….”
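Rivers and Lewis’s first guideline – avoiding personally identifiable information – can be applied mechanically before any analysis begins. The sketch below is purely illustrative (the record fields and the salted-hash approach are assumptions, not part of their proposal or of Twitter’s data format): it drops direct identifiers, masks @mentions and pseudonymises user IDs so that downstream analyses never touch the originals.

```python
import hashlib
import re

SALT = "replace-with-a-project-secret"  # assumption: a per-project secret salt

def pseudonymize(user_id: str) -> str:
    # One-way hash so analyses can still group by user without revealing who they are.
    return hashlib.sha256((SALT + user_id).encode("utf-8")).hexdigest()[:16]

def scrub(tweet: dict) -> dict:
    """Return a reduced copy of a tweet record with direct identifiers removed.

    The input keys ('user_id', 'screen_name', 'text', 'created_at') are
    hypothetical; a real pipeline would map whatever schema the data arrives in.
    """
    text = re.sub(r"@\w+", "@user", tweet.get("text", ""))  # mask @mentions
    return {
        "user": pseudonymize(str(tweet.get("user_id", ""))),
        "created_at": tweet.get("created_at"),
        "text": text,
    }

example = {"user_id": "12345", "screen_name": "alice",
           "created_at": "2014-06-03", "text": "Feeling awful after lunch @bob"}
print(scrub(example))
```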

The Trend towards “Smart Cities”


Chien-Chu Chen in the International Journal of Automation and Smart Technology (AUSMT): “Looking back over the past century, the steady pace of development in many of the world’s cities has resulted in a situation where a high percentage of these cities are now faced with the problem of aging, decrepit urban infrastructure; a considerable number of cities are having to undertake large-scale infrastructure renewal projects. While creating new opportunities in the area of infrastructure, ongoing urbanization is also creating problems such as excessive consumption of water, electric power and heat energy, environmental pollution, increased greenhouse gas emissions, traffic jams, and the aging of the existing residential housing stock. All of these problems present a challenge to cities’ ability to achieve sustainable development. In response to these issues, the concept of the “smart city” has grown in popularity throughout the world. The aim of smart city initiatives is to make the city a vehicle for “smartification” through the integration of different industries and sectors. As initiatives of this kind move beyond basic automation into the realm of real “smartification,” the smart city concept is beginning to take concrete form….”

HHS releases new data and tools to increase transparency on hospital utilization and other trends


Press release: “With more than 2,000 entrepreneurs, investors, data scientists, researchers, policy experts, government employees and more in attendance, the Department of Health and Human Services (HHS) is releasing new data and launching new initiatives at the annual Health Datapalooza conference in Washington, D.C.
Today, the Centers for Medicare & Medicaid Services (CMS) is releasing its first annual update to the Medicare hospital charge data, or information comparing the average amount a hospital bills for services that may be provided in connection with a similar inpatient stay or outpatient visit. CMS is also releasing a suite of other data products and tools aimed at increasing transparency about Medicare payments. The data trove on CMS’s website now includes inpatient and outpatient hospital charge data for 2012, and new interactive dashboards for the CMS Chronic Conditions Data Warehouse and geographic variation data. Also today, the Food and Drug Administration (FDA) will launch a new open data initiative. And before the end of the conference, the Office of the National Coordinator for Health Information Technology (ONC) will announce the winners of two data challenges.
“The release of these data sets furthers the administration’s efforts to increase transparency and support data-driven decision making which is essential for health care transformation,” said HHS Secretary Kathleen Sebelius.
“These public data resources provide a better understanding of Medicare utilization, the burden of chronic conditions among beneficiaries and the implications for our health care system and how this varies by where beneficiaries are located,” said Bryan Sivak, HHS chief technology officer. “This information can be used to improve care coordination and health outcomes for Medicare beneficiaries nationwide, and we are looking forward to seeing what the community will do with these releases. Additionally, the openFDA initiative being launched today will for the first time enable a new generation of consumer-facing and research applications to embed relevant and timely data in machine-readable, API-based formats.”
2012 Inpatient and Outpatient Hospital Charge Data
The data posted today on the CMS website provide the first annual update of the hospital inpatient and outpatient data released by the agency last spring. The data include information comparing the average charges for services that may be provided in connection with the 100 most common Medicare inpatient stays at over 3,000 hospitals in all 50 states and Washington, D.C. Hospitals determine what they will charge for items and services provided to patients and these “charges” are the amount the hospital generally bills for those items or services.
With two years of data now available, researchers can begin to look at trends in hospital charges. For example, average charges for medical back problems increased nine percent from $23,000 to $25,000, but the total number of discharges decreased by nearly 7,000 from 2011 to 2012.
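With the 2011 and 2012 files side by side, this kind of trend can be computed directly (the back-problems example above works out to (25,000 − 23,000) / 23,000 ≈ 8.7 per cent, reported as roughly nine percent). The sketch below is a minimal illustration, assuming two hypothetical CSV extracts with `drg` and `average_charge` columns; the real CMS public-use files carry longer column names and one row per hospital, so they would need aggregating first.

```python
import csv

def load_average_charges(path):
    """Load a hypothetical per-DRG extract of the CMS inpatient charge data.

    Assumed columns: 'drg' and 'average_charge' (illustrative names only).
    """
    with open(path, newline="", encoding="utf-8") as f:
        return {row["drg"]: float(row["average_charge"]) for row in csv.DictReader(f)}

if __name__ == "__main__":
    charges_2011 = load_average_charges("inpatient_2011.csv")
    charges_2012 = load_average_charges("inpatient_2012.csv")
    # Year-over-year change in average charge for every DRG present in both years.
    for drg in sorted(set(charges_2011) & set(charges_2012)):
        change = (charges_2012[drg] - charges_2011[drg]) / charges_2011[drg]
        print(f"{drg}: {change:+.1%} change in average charge")
```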
In April, ONC launched a challenge – the Code-a-Palooza challenge – calling on developers to create tools that will help patients use the Medicare data to make health care choices. Fifty-six innovators submitted proposals and 10 finalists are presenting their applications during Datapalooza. The winning products will be announced before the end of the conference.
Chronic Conditions Warehouse and Dashboard
CMS recently released new and updated information on chronic conditions among Medicare fee-for-service beneficiaries, including:

  • Geographic data summarized at the national, state, county, and hospital referral region levels for the years 2008-2012;
  • Data for examining disparities among specific Medicare populations, such as beneficiaries with disabilities, dual-eligible beneficiaries, and race/ethnic groups;
  • Data on prevalence, utilization of select Medicare services, and Medicare spending;
  • Interactive dashboards that provide customizable information about Medicare beneficiaries with chronic conditions at the state, county, and hospital referral region levels for 2012; and
  • Chartbooks and maps.

These public data resources support the HHS Initiative on Multiple Chronic Conditions by providing researchers and policymakers a better understanding of the burden of chronic conditions among beneficiaries and the implications for our health care system.
Geographic Variation Dashboard
The Geographic Variation Dashboards present Medicare fee-for-service per-capita spending at the state and county levels in interactive formats. CMS calculated the spending figures in these dashboards using standardized dollars that remove the effects of the geographic adjustments that Medicare makes for many of its payment rates. The dashboards include total standardized per capita spending, as well as standardized per capita spending by type of service. Users can select the indicator and year they want to display. Users can also compare data for a given state or county to the national average. All of the information presented in the dashboards is also available for download from the Geographic Variation Public Use File.
Research Cohort Estimate Tool
CMS also released a new tool that will help researchers and other stakeholders estimate the number of Medicare beneficiaries with certain demographic profiles or health conditions. This tool can assist a variety of stakeholders interested in specific figures on Medicare enrollment. Researchers can also use this tool to estimate the size of their proposed research cohort and the cost of requesting CMS data to support their study.
Digital Privacy Notice Challenge
ONC, with the HHS Office for Civil Rights, will be awarding the winner of the Digital Privacy Notice Challenge during the conference. The winning products will help consumers get notices of privacy practices from their health care providers or health plans directly in their personal health records or from their providers’ patient portals.
OpenFDA
The FDA’s new initiative, openFDA, is designed to facilitate easier access to large, important public health datasets collected by the agency. OpenFDA will make FDA’s publicly available data accessible in a structured, computer-readable format that will make it possible for technology specialists, such as mobile application creators, web developers, data visualization artists and researchers, to quickly search, query, or pull massive amounts of information on an as-needed basis. The initiative is the result of extensive research to identify FDA’s publicly available datasets that are often in demand, but traditionally difficult to use. Based on this research, openFDA is beginning with a pilot program involving millions of reports of drug adverse events and medication errors submitted to the FDA from 2004 to 2013. The pilot will later be expanded to include the FDA’s databases on product recalls and product labeling.
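In practice, “API-based” means the adverse-event reports can be queried over plain HTTP. The sketch below is a hedged illustration based on the drug adverse event endpoint documented at open.fda.gov; the query syntax and the field name used here are assumptions to verify against the current documentation rather than a definitive client.

```python
import json
import urllib.parse
import urllib.request

BASE = "https://api.fda.gov/drug/event.json"  # endpoint documented at open.fda.gov

def count_reports(drug_name: str) -> int:
    """Return the total number of adverse event reports that mention a drug.

    The search field 'patient.drug.medicinalproduct' and the 'meta.results.total'
    response path follow the openFDA documentation; treat both as assumptions.
    """
    params = urllib.parse.urlencode({
        "search": f'patient.drug.medicinalproduct:"{drug_name}"',
        "limit": 1,
    })
    with urllib.request.urlopen(f"{BASE}?{params}") as resp:
        payload = json.load(resp)
    return payload["meta"]["results"]["total"]

if __name__ == "__main__":
    print(count_reports("aspirin"), "reports mention aspirin")
```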
For more information about CMS data products, please visit http://www.cms.gov/Research-Statistics-Data-and-Systems/Research-Statistics-Data-and-Systems.html.
For more information about today’s FDA announcement visit: http://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/UCM399335 or http://open.fda.gov/

Estonian plan for 'data embassies' overseas to back up government databases


Graeme Burton in Computing: “Estonia is planning to open “data embassies” overseas to back up government databases and to operate government “in the cloud“.
The aim is partly to improve efficiency, but driven largely by fear of invasion and occupation, Jaan Priisalu, the director general of Estonian Information System Authority, told Sky News.
He said: “We are planning to actually operate our government in the cloud. It’s clear also how it helps to protect the country, the territory. Usually when you are the military planner and you are planning the occupation of the territory, then one of the rules is suppress the existing institutions.
“And if you are not able to do it, it means that this political price of occupying the country will simply rise for planners.”
Part of the rationale for the plan, he continued, was fear of attack from Russia in particular, which has been heightened following the occupation of Crimea, formerly in Ukraine.
“It’s quite clear that you can have problems with your neighbours. And our biggest neighbour is Russia, and nowadays it’s quite aggressive. This is clear.”
The plan is to back up critical government databases outside of Estonia so that affairs of state can be conducted in the cloud, even if the country is invaded. It would also have the benefit of keeping government information out of invaders’ hands – provided it can keep its government cloud secure.
According to Sky News, the UK is already in advanced talks about hosting the Estonian government databases and may become the first of Estonia’s data embassies.
Having wrested independence from the Soviet Union in 1991, Estonia has experienced frequent tension with its much bigger neighbour. In April 2007, for example, after the relocation of the “Bronze Soldier of Tallinn” and the reburial of the soldiers interred in a square in the centre of the capital at a military cemetery, the country was subjected to a prolonged cyber-attack attributed to Russia.
Russian hacker “Sp0Raw” said that the most efficient of the online attacks on Estonia could not have been carried out without the approval of Russian authorities, and added that the hackers seemed to act under “recommendations” from parties in government. However, claims by Estonia that the Russian government was directly involved in the attacks were dismissed as “empty words, not supported by technical data”.
Mike Witt, deputy director of the US Computer Emergency Response Team (CERT), suggested that the distributed denial-of-service (DDOS) attacks, while crippling to the Estonian government at the time, were not significant in scale from a technical standpoint. However, the Estonian government was forced to shut down many of its online operations in response.
At the same time, the Estonian government has been accused of implementing anti-Russian laws and discriminating against its large ethnic Russian population.
Last week, the Estonian government unveiled a plan to allow anyone in the world to apply for “digital citizenship” of the country, enabling them to use Estonian online services, open bank accounts, and start companies without having to physically reside in the country.”

How open data can help shape the way we analyse electoral behaviour


Harvey Lewis (Deloitte), Ulrich Atz, Gianfranco Cecconi, Tom Heath (ODI) in The Guardian: “Even after the local council elections in England and Northern Ireland on 22 May, which coincided with polling for the European Parliament, the next 12 months remain a busy time for the democratic process in the UK.
In September, the people of Scotland make their choice in a referendum on the future of the Union. Finally, the first fixed-term parliament in Westminster comes to an end with a general election in all areas of Great Britain and Northern Ireland in May 2015.
To ensure that as many people as possible are eligible and able to vote, the government is launching an ambitious programme of Individual Electoral Registration (IER) this summer. This will mean that the traditional, paper-based approach to household registration will shift to a tailored and largely digital process more in keeping with the data-driven demands of the twenty-first century.
Under IER, citizens will need to provide ‘identifying information’, such as date of birth or national insurance number, when applying to register.

Ballots: stuck in the past?

However, despite the government’s attempts through IER to improve the veracity of information captured prior to ballots being posted, little has changed in terms of the vision for capturing, distributing and analysing digital data from election day itself.

Indeed, paper is still the chosen medium for data collection.
Digitising elections is fraught with difficulty, though. In the US, for example, the introduction of new voting machines created much controversy even though they are capable of providing ‘near-perfect’ ballot data.
The UK’s democratic process is not completely blind, though. Numerous opinion surveys are conducted both before and after polling, including the long-running British Election Study, to understand the shifting attitudes of a representative cross-section of the electorate.
But if the government does not retain digital information on the number of people who vote in sufficient geographic detail, how can it learn what is necessary to reverse the long-running decline in turnout?

The effects of lack of data

To add to the debate around democratic engagement, a joint research team of data scientists from Deloitte and the Open Data Institute (ODI) has been attempting to understand what makes voters tick.
Our research has been hampered by a significant lack of relevant data describing voter behaviour at electoral ward level, as well as difficulties in matching what little data is available to other open data sources, such as demographic data from the 2011 Census.
Even though individual ballot papers are collected and verified for counting the number of votes per candidate – the primary aim of elections, after all – the only recent elections for which aggregate turnout statistics have been published at ward level are the 2012 local council elections in England and Wales. In these elections, approximately 3,000 wards from a total of over 8,000 voted.
Data published by the Electoral Commission for the 2013 local council elections in England and Wales purports to be at ward level but is, in fact, for ‘county electoral divisions’, as explained by the Office for National Statistics.
Moreover, important factors related to the accessibility of polling stations – such as the distance from main population centres – could not be assessed because the location of polling stations remains the responsibility of individual local authorities – and only eight of these have so far published their data as open data.
Given these fundamental limitations, drawing any robust conclusions is difficult. Nevertheless, our research shows the potential for forecasting electoral turnout with relatively few census variables, the most significant of which are age and the size of the electorate in each ward.
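A forecasting exercise of this kind can be prototyped with very little code. The sketch below is a generic illustration rather than the Deloitte/ODI model: it assumes a hypothetical ward-level CSV with turnout, median age and electorate size (column names are made up), and fits an ordinary least-squares regression with scikit-learn.

```python
import csv

import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical ward-level file with columns 'ward', 'median_age',
# 'electorate' and 'turnout_pct'; the published 2012 data would need
# joining to Census variables to produce something like this.
with open("ward_turnout.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

X = np.array([[float(r["median_age"]), float(r["electorate"])] for r in rows])
y = np.array([float(r["turnout_pct"]) for r in rows])

model = LinearRegression().fit(X, y)
print("R^2 on the training data:", round(model.score(X, y), 3))
print("Coefficients (median age, electorate size):", model.coef_)

# Forecast turnout for a hypothetical ward with median age 42 and 5,500 electors.
print("Predicted turnout:", model.predict(np.array([[42.0, 5500.0]]))[0])
```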

What role can open data play?

The limited results described above provide a tantalising glimpse into a possible future scenario: where open data provides a deeper and more granular understanding of electoral behaviour.
On the back of more sophisticated analyses, policies for improving democratic engagement – particularly among young people – have the potential to become focused and evidence-driven.
And, although the data captured on election day will always serve primarily to elect the public’s preferred candidate, an important secondary consideration is aggregating and publishing data that can be used more widely.
This may have been prohibitively expensive or too complex in the past but as storage and processing costs continue to fall, and the appetite for such knowledge grows, there is a compelling business case.
The benefits of this future scenario potentially include:

  • tailoring awareness and marketing campaigns to wards and other segments of the electorate most likely to respond positively and subsequently turn out to vote
  • increasing the efficiency with which European, general and local elections are held in the UK
  • improving transparency around the electoral process and stimulating increased democratic engagement
  • enhancing links to the Government’s other significant data collection activities, including the Census.

Achieving these benefits requires commitment to electoral data being collected and published in a systematic fashion at least at ward level. This would link work currently undertaken by the Electoral Commission, the ONS, Plymouth University’s Election Centre, the British Election Study and the more than 400 local authorities across the UK.”

How to treat government like an open source project


Ben Balter in OpenSource.com: “Open government is great. At least, it was a few election cycles ago. FOIA requests, open data, seeing how your government works—it’s arguably brought to light a lot of not-so-great practices, and in many cases, has spurred citizen-centric innovation not otherwise imagined before the information’s release.
It used to be that sharing information was really, really hard. Open government wasn’t even a possibility a few hundred years ago. Throughout the history of communication tools—be it the printing press, fax machine, or floppy disks—new tools have generally done three things: lowered the cost to transmit information, increased who that information could be made available to, and increased how quickly that information could be distributed. But printing presses and fax machines have two limitations: they are one way and asynchronous. They let you more easily request, and eventually see, how the sausage was made, but they don’t let you actually take part in the sausage-making. You may be able to see what’s wrong, but you don’t have the chance to make it better. By the time you find out, it’s already too late.
As technology allows us to communicate with greater frequency and greater fidelity, we have the chance to make our government not only transparent, but truly collaborative.

So, how do we encourage policy makers and bureaucrats to move from open government to collaborative government, to learn open source’s lessons about openness and collaboration at scale?
For one, we geeks can help to create a culture of transparency and openness within government by driving up the demand side of the equation. Be vocal, demand data, expect to see process, and once released, help build lightweight apps. Show potential change agents in government that their efforts will be rewarded.
Second, it’s a matter of tooling. We’ve got great tools out there—things like Git that can track who made what change when, and open standards like CSV or JSON that don’t require proprietary software—but by and large they’re a foreign concept in government, at least among those empowered to make change. Command line interfaces with black backgrounds and green text can be intimidating to government bureaucrats used to desktop publishing tools. Make it easier for government to do the right thing and choose open standards over proprietary tooling.
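As a small illustration of what “open standards over proprietary tooling” looks like in practice, the sketch below (entirely illustrative; the records and file names are made up) writes the same tiny dataset to CSV and JSON using nothing but the Python standard library, so anyone can read it back without special software. Committing the resulting files to a Git repository then provides exactly the who-changed-what-and-when history described above.

```python
import csv
import json

# Made-up records standing in for any small government dataset.
records = [
    {"ward": "Central", "polling_stations": 4, "electorate": 5210},
    {"ward": "Riverside", "polling_stations": 3, "electorate": 4790},
]

# CSV: opens in any spreadsheet or text editor.
with open("dataset.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=list(records[0].keys()))
    writer.writeheader()
    writer.writerows(records)

# JSON: readable by practically every programming language.
with open("dataset.json", "w", encoding="utf-8") as f:
    json.dump(records, f, indent=2)
```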
Last, be a good open source ambassador. Help your home city or state get involved with open source. Encourage them to take their first step (be it consuming open source, publishing, or collaborating with the public), teach them what it means to do things in the open, and when they do push code outside the firewall, above all, be supportive. We’re in this together.
As technology makes it easier to work together, geeks can help make our government not just open, but in fact collaborative. Government is the world’s largest and longest running open source project (bugs, trolls, and all). It’s time we start treating it like one.”

Open government: getting beyond impenetrable online data


Jed Miller in The Guardian: “Mathematician Blaise Pascal famously closed a long letter by apologising that he hadn’t had time to make it shorter. Unfortunately, his pithy point about “download time” is regularly attributed to Mark Twain and Henry David Thoreau, probably because the public loves writers more than it loves statisticians. Scientists may make things provable, but writers make them memorable.
The World Bank confronted a similar reality of data journalism earlier this month when it revealed that, of the 1,600 bank reports posted online from 2008 to 2012, 32% had never been downloaded at all and another 40% were downloaded under 100 times each.
Taken together, these cobwebbed documents represent millions of dollars in World Bank funds and hundreds of thousands of person-hours, spent by professionals who themselves represent millions of dollars in university degrees. It’s difficult to see the return on investment in producing expert research and organising it into searchable web libraries when almost three quarters of the output goes largely unseen.
The World Bank works at a scale unheard of by most organisations, but expert groups everywhere face the same challenges. Too much knowledge gets trapped in multi-page pdf files that are slow to download (especially in low-bandwidth areas), costly to print, and unavailable for computer analysis until someone manually or automatically extracts the raw data.
Even those who brave the progress bar find too often that urgent, incisive findings about poverty, health, discrimination, conflict or social change are presented in prose written by and for high-level experts, rendering them impenetrable to almost everyone else. Information isn’t just trapped in pdfs; it’s trapped in PhDs.
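Getting the underlying text and numbers back out of such reports is a solvable, if tedious, problem. The sketch below is a minimal example using the open source pdfplumber library (one of several PDF-extraction tools; the report file name is a placeholder): it pulls the prose and any tables the layout analysis can detect out of a report so they can be re-published as data.

```python
# pip install pdfplumber
import csv

import pdfplumber

REPORT = "world_bank_report.pdf"  # placeholder file name

with pdfplumber.open(REPORT) as pdf:
    # Dump the prose so it can be indexed, summarised or translated.
    text = "\n".join(page.extract_text() or "" for page in pdf.pages)

    # Save every table the layout analysis detects as its own CSV file.
    for page_no, page in enumerate(pdf.pages, start=1):
        for table_no, table in enumerate(page.extract_tables(), start=1):
            out = f"table_p{page_no}_{table_no}.csv"
            with open(out, "w", newline="", encoding="utf-8") as f:
                csv.writer(f).writerows(table)

print(f"Extracted {len(text.split())} words; detected tables saved as CSV.")
```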
Governments and NGOs are beginning to realise that digital strategy means more than posting a document online, but what will it take for these groups to change not just their tools, but their thinking? It won’t be enough to partner with WhatsApp or hire GrumpyCat.
I asked strategists from the development, communications and social media fields to offer simple, “Tweetable” suggestions for how the policy community can become better communicators.

For nonprofits and governments that still publish 100-page pdfs on their websites and do not optimise the content to share in other channels such as social: it is a huge waste of time and ineffective. Stop it now.

– Beth Kanter, author and speaker. Beth’s Blog: How Nonprofits Can Use Social Media

Treat text as #opendata so infomediaries can mash it up and make it more accessible (see, for example federalregister.gov) and don’t just post and blast: distribute information in a targeted way to those most likely to be interested.

– Beth Noveck, director at the Governance Lab and former director at White House Open Government Initiative

Don’t be boring. Sounds easy, actually quite hard, super-important.

– Eli Pariser, CEO of Upworthy

Surprise me. Uncover the key finding that inspired you, rather than trying to tell it all at once and show me how the world could change because of it.

– Jay Golden, co-founder of Wakingstar Storyworks

For the Bank or anyone who is generating policy information they actually want people to use, they must actually write it for the user, not for themselves. As Steve Jobs said, ‘Simple can be harder than complex’.

– Kristen Grimm, founder and president at Spitfire Strategies

The way to reach the widest audience is to think beyond content format and focus on content strategy.

– Laura Silber, director of public affairs at Open Society Foundations

Open the door to policy work with short, accessible pieces – a blog post, a video take, infographics – that deliver the ‘so what’ succinctly.

– Robert McMahon, editor at Council on Foreign Relations

Policy information is more usable if it’s linked to corresponding actions one can take, or if it helps stir debate.  Also, whichever way you slice it, there will always be a narrow market for raw policy reports … that’s why explainer sites, listicles and talking heads exist.

– Ory Okolloh, director of investments at Omidyar Network and former public policy and government relations manager at Google Africa
Ms Okolloh, who helped found the citizen reporting platform Ushahidi, also offered a simple reminder about policy reports: “‘Never gets downloaded’ doesn’t mean ‘never gets read’.” Just as we shouldn’t mistake posting for dissemination, we shouldn’t confuse popularity with influence….”

How The Right People Analyzing The Best Data Are Transforming Government


NextGov: “Analytics is often touted as a new weapon in the technology arsenal of bleeding-edge organizations willing to spend lots of money to combat problems.
In reality, that’s not the case at all. Certainly, there are complex big data analytics tools that will analyze massive data sets to look for the proverbial needle in a haystack, but analytics 101 also includes smarter ways to look at existing data sets.
In this arena, government is making serious strides, according to Kathryn Stack, advisor for evidence-based innovation at the Office of Management and Budget. Speaking in Washington on Thursday at an analytics conference hosted by IBM, Stack provided an outline for agencies to spur innovation and improve mission by making smarter use of the data they already produce.
Interestingly, the first step has nothing to do with technology and everything to do with people. Get “the right people in the room,” Stack said, and make sure they value learning.
“One thing I have learned in my career is that if you really want transformative change, it’s important to bring the right players together across organizations – from your own department and different parts of government,” Stack said. “Too often, we lose a lot of money when siloed organizations lose sight of what the problem really is and spend a bunch of money, and at the end of the day we have invested in the wrong thing that doesn’t address the problem.”
The Department of Labor provides a great example of how to change a static organizational culture into one that integrates performance management and evaluation- and innovation-based processes. The department, she said, created a chief evaluation office and set up evaluation offices for each of its bureaus. These offices were tasked with focusing on important questions to improve performance, going inside programs to learn what is and isn’t working, and identifying barriers that impeded experimentation and learning. At the same time, they helped develop partnerships across the agency – of major importance for any organization looking to make drastic changes.
Don’t overlook experimentation either, Stack said. Citing innovation leaders in the private sector such as Google, which runs 12,000 randomized experiments per year, Stack said agencies should not be afraid to get out and run with ideas. Not all of them will be good – only about 10 percent of Google’s experiments usher in new business changes – but even failures can bring meaningful value to the mission.
Stack used an experiment conducted by the United Kingdom’s Behavioral Insights Team as evidence.
The team continually tweaked the language of tax compliance letters sent to individuals delinquent on their taxes. Significant experimentation ushered in lots of data, and the team analyzed it to find that one phrase, “Nine out of ten Britons pay their taxes on time,” improved collected revenue by five percent. That case shows how failures along the way can bring about important successes.
“If you want to succeed, you’ve got to be willing to fail and test things out,” Stack said.
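The analysis behind an experiment like the tax-letter trial is conceptually simple: compare the response rate of each letter variant against the control and ask whether the difference is larger than chance alone would explain. The sketch below is a generic illustration with made-up counts, not the Behavioral Insights Team’s actual data or method; it runs a standard two-proportion z-test using only the standard library.

```python
from math import erfc, sqrt

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for the difference between two response rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided tail probability of the normal
    return p_a, p_b, z, p_value

# Made-up counts: payments received within 30 days for the control letter
# versus the "nine out of ten" social-norm variant.
p_a, p_b, z, p = two_proportion_ztest(success_a=33_500, n_a=50_000,
                                      success_b=35_200, n_b=50_000)
print(f"control {p_a:.1%}, variant {p_b:.1%}, z = {z:.2f}, p = {p:.2g}")
```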
Any successful analytics effort in government is going to employ the right people and the best data – Stack said it’s not a secret that the government collects both useful and not-so-useful, “crappy” data – as well as the right technology and processes. For instance, there are numerous ways to measure return on investment, including dollars per customer served or cost per successful outcome.
“What is the total investment you have to make in a certain strategy in order to get a successful outcome?” Stack said. “Think about cost per outcome and how you do those calculations.”…”
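Those return-on-investment measures reduce to simple ratios once a programme’s costs and outcomes are counted consistently. A minimal sketch with made-up figures:

```python
def cost_per_outcome(total_investment: float, successful_outcomes: int) -> float:
    """Total spend on a strategy divided by the successful outcomes it produced."""
    return total_investment / successful_outcomes

# Made-up figures: a $2.4M job-training programme that placed 800 participants.
print(f"${cost_per_outcome(2_400_000, 800):,.0f} per successful placement")
```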