The Social Intranet: Insights on Managing and Sharing Knowledge Internally


Paper by Ines Mergel for the IBM Center for the Business of Government: “While much of the federal government lags behind, some agencies are pioneers in the internal use of social media tools. What lessons and effective practices do they have to offer other agencies?”

“Social intranets,” Dr. Mergel writes, “are in-house social networks that use technologies – such as automated newsfeeds, wikis, chats, or blogs – to create engagement opportunities among employees.” They also include internal profile pages that help people identify expertise and interests (similar to Facebook or LinkedIn profiles) and that are used in combination with other social intranet tools such as online communities or newsfeeds.

The report documents four case studies of government use of social intranets – two federal government agencies (the Department of State and the National Aeronautics and Space Administration) and two cross-agency networks (the U.S. Intelligence Community and the Government of Canada).

The author observes that “most enterprise social networking platforms fail,” but she identifies what causes these failures and how successful social intranet initiatives can avoid that fate and thrive. She offers a series of insights for successfully implementing social intranets in the public sector, based on her observations and case studies. …(More)”

“Streetfight” by Janette Sadik-Khan


Review by Amrita Gupta in Policy Innovations of the book Streetfight: “Janette Sadik-Khan was New York City’s transportation commissioner from 2007 to 2013. Under her watch, the city’s streets were reimagined and redesigned to include more than 60 plazas and 400 miles of bike lanes—radical changes designed to improve traffic and make spaces safer for everybody. Over seven years, New York City underwent a transportation transformation, played out avenue by avenue, block by block. Times Square went from being the city’s worst traffic nightmare to a two-and-a-half-acre outdoor sitting area in 2009. Citi Bike, arguably her biggest success, is the nation’s largest bike share system, and the city’s first new transit system in over half a century. In Streetfight, Sadik-Khan breaks down her achievements into replicable ideas for urban planners and traffic engineers everywhere, and she also reminds us that the fight isn’t over. As part of Bloomberg Associates, she now takes the lessons from New York City’s streets to metropolises around the world.

The old order vs the new order

The crux of the problem, she explains, is that until recently, cities of the future were thought to be cities built for cars, not cities that encouraged human activity on the street.

Understanding what city-building used to mean is key to understanding how our cities are failing us today. Sadik-Khan offers a quick recap of New York City through recent decades. The historical lesson holds: in many ways, cities on every continent grew along a similar trajectory. Streets were designed to keep traffic moving, but not to support the life alongside them. The old order—which Sadik-Khan writes is typified by Robert Moses—took the auto-centric view that pedestrians, public transit, and bike riders were all hindrances in the path of cars.

Sadik-Khan calls for a more equitable and relevant city, one that prioritizes accessibility and convenience for everybody. Her generation of planners aims to transform roads, tunnels, and rail tracks—the legacy hardware of their predecessors—and repurpose them into public spaces to walk, bike, and play.

The strength of the book lies in just how effectively it dispels the misconceptions that most citizens, and indeed, urban planners, have held onto for decades. There are plenty of surprises in Streetfight.

Sadik-Khan shows that people’s ideas about safety can be obsolete. For instance, bike lanes don’t make accidents more likely; they make the streets safer. The statistics show that bike riders actually protect pedestrians by altering the behavior of drivers. Sadik-Khan states that bike ridership in New York City quadrupled between 2000 and 2012, and as the number of riders increases, so too does the safety of the street.

The assumption, for instance, that reducing lanes or closing them entirely creates gridlock is entirely wrong. Sadik-Khan’s interventions in New York City—providing pedestrian space and creating fewer but more orderly lanes for vehicles—actually improved traffic. And she uses taxi GPS data to prove it….

In fact, Sadik-Khan makes the claim that the economic power of sustainable streets is probably one of the strongest arguments for implementing dramatic change. Cities need data—think retail rents, shop sales, travel speeds, vehicle counts—to defend their interventions, and then to measure their effectiveness. Yet, she writes, unfortunately there are few cities anywhere that have access to reliable numbers of this kind….

Sadik-Khan emphasizes time and again that change can happen quickly and affordably. She didn’t have to bulldoze neighborhoods, or build new bridges and highways to transform the transportation network of New York City. Planners can reorder a street without destroying a single building.

Streetfight is a handbook that prioritizes paint, planters, signs, and signals over mega-infrastructure projects. We are told that small-scale interventions can have transformative large-scale impacts. Sadik-Khan’s pocket parks, plazas, pedestrian-friendly road redesigns, and parking-protected bike lanes are all the proof we need. For planners in developing countries, this should serve as both guide and encouragement.

Innovation doesn’t need big dollars behind it to be effective, and most ideas are scalable for cities big and small. What it does need, however, is street smarts. Sadik-Khan makes it clear that the key to getting projects to move ahead is support on the ground, and enough political capital to pave the way. Planners everywhere need to encourage participation, invite ideas, and be more transparent about proposals, she writes. But they also need to be willing to put up a fight….(More)”

Access to Government Information in the United States: A Primer


Wendy Ginsberg and Michael Greene at Congressional Research Service: “No provision in the U.S. Constitution expressly establishes a procedure for public access to executive branch records or meetings. Congress, however, has legislated various public access laws. Among these laws are two records access statutes,

  • the Freedom of Information Act (FOIA; 5 U.S.C. §552), and
  • the Privacy Act (5 U.S.C. §552a),

and two meetings access statutes,

  • the Federal Advisory Committee Act (FACA; 5 U.S.C. App.), and
  • the Government in the Sunshine Act (5 U.S.C. §552b).

These four laws provide the foundation for access to executive branch information in the American federal government. The records-access statutes provide the public with a variety of methods to examine how executive branch departments and agencies execute their missions. The meeting-access statutes provide the public the opportunity to participate in and inform the policy process. These four laws are also among the most used and most litigated federal access laws.

While the four statutes provide the public with access to executive branch federal records and meetings, they do not apply to the legislative or judicial branches of the U.S. government. The American separation of powers model of government provides a collection of formal and informal methods that the branches can use to provide information to one another. Moreover, the separation of powers anticipates conflicts over the accessibility of information. These conflicts are neither unexpected nor necessarily destructive. Although there is considerable interbranch cooperation in the sharing of information and records, such conflicts over access may continue on occasion.

This report offers an introduction to the four access laws and provides citations to additional resources related to these statutes. This report includes statistics on the use of FOIA and FACA and on litigation related to FOIA. The 114th Congress may have an interest in overseeing the implementation of these laws or may consider amending the laws. In addition, this report provides some examples of the methods Congress, the President, and the courts have employed to provide or require the provision of information to one another. This report is a primer on information access in the U.S. federal government and provides a list of resources related to transparency, secrecy, access, and nondisclosure….(More)”

States’ using iwaspoisoned.com for outbreak alerts


Dan Flynn at Food Safety News: “The crowdsourcing site iwaspoisoned.com has collected thousands of reports of foodborne illnesses from individuals across the United States since 2009 and is expanding with a custom alert service for state health departments.

“There are now 26 states signed up, allowing government (health) officials and epidemiologists to receive real time, customized alerts for reported foodborne illness incidents,” said iwaspoisoned.com founder Patrick Quade.

Quade said he wanted to make iwaspoisoned.com data more accessible to health departments and experts in each state.

“This real time information provides a wider range of information data to help local agencies better manage food illness outbreaks,” he said. “It also supplements existing reporting channels and serves to corroborate their own reporting systems.”

The Florida Department of Health, Food and Waterborne Disease Program (FWDP) began receiving iwaspoisoned.com alerts beginning in December 2015.

“The FWDP has had an online complaint form for individuals to report food and waterborne illnesses,” a spokesman said. “However, the program has been looking for ways to expand their reach to ensure they are investigating all incidents. Partnering with iwaspoisoned.com was a logical choice for this expansion.”…

Quade established iwaspoisoned.com in New York City seven years ago to give people a place to report their experiences of being sickened by restaurant food. Users log the restaurant, location, symptoms and other details, and others can comment on the report….

The crowdsourcing site has played an increasing role in recent nationally known outbreaks, including those associated with Chipotle Mexican Grill in the last half of 2015. For example, CBS News in Los Angeles first reported on the Simi Valley, Calif., norovirus outbreak after noticing that about a dozen Chipotle customers had logged their illness reports on iwaspoisoned.com.

Eventually, health officials confirmed at least 234 norovirus illnesses associated with a Chipotle location in Simi Valley…(More)”

It’s not big data that discriminates – it’s the people that use it


In The Conversation: “Data can’t be racist or sexist, but the way it is used can help reinforce discrimination. The internet means more data is collected about us than ever before and it is used to make automatic decisions that can hugely affect our lives, from our credit scores to our employment opportunities.

If that data reflects unfair social biases against sensitive attributes, such as our race or gender, the conclusions drawn from that data might also be based on those biases.

But this era of “big data” doesn’t need to entrench inequality in this way. If we build smarter algorithms to analyse our information and ensure we’re aware of how discrimination and injustice may be at work, we can actually use big data to counter our human prejudices.

This kind of problem can arise when computer models are used to make predictions in areas such as insurance, financial loans and policing. If members of a certain racial group have historically been more likely to default on their loans, or been more likely to be convicted of a crime, then the model can deem these people more risky. That doesn’t necessarily mean that these people actually engage in more criminal behaviour or are worse at managing their money. They may just be disproportionately targeted by police and sub-prime mortgage salesmen.

Excluding sensitive attributes

Data scientist Cathy O’Neil has written about her experience of developing models for homeless services in New York City. The models were used to predict how long homeless clients would be in the system and to match them with appropriate services. She argues that including race in the analysis would have been unethical.

If the data showed white clients were more likely to find a job than black ones, the argument goes, then staff might focus their limited resources on those white clients that would more likely have a positive outcome. While sociological research has unveiled the ways that racial disparities in homelessness and unemployment are the result of unjust discrimination, algorithms can’t tell the difference between just and unjust patterns. And so datasets should exclude characteristics that may be used to reinforce the bias, such as race.

But this simple response isn’t necessarily the answer. For one thing, machine learning algorithms can often infer sensitive attributes from a combination of other, non-sensitive facts. People of a particular race may be more likely to live in a certain area, for example. So excluding those attributes may not be enough to remove the bias….
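To make this point concrete, here is a toy sketch with entirely synthetic data (not the article's analysis; the 90% group/district correlation and the district "proxy" are invented assumptions for illustration). Even after the sensitive attribute is dropped, a trivial rule over the proxy recovers it for most records:

```python
import random

# Toy illustration (synthetic data only): even after a sensitive attribute
# is dropped from a dataset, a correlated proxy feature can reconstruct it.
# The 90% group/district correlation below is an invented assumption.
random.seed(0)

people = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Assumed residential segregation makes district a strong proxy: group A
    # lives in district 1 with probability 0.9, group B in district 2 likewise.
    district = 1 if (random.random() < 0.9) == (group == "A") else 2
    people.append({"group": group, "district": district})

# "Scrubbed" dataset: the sensitive attribute has been excluded...
scrubbed = [{"district": p["district"]} for p in people]

# ...yet a trivial rule on the proxy still recovers it most of the time.
def infer_group(row):
    return "A" if row["district"] == 1 else "B"

correct = sum(infer_group(s) == p["group"] for s, p in zip(scrubbed, people))
print(f"group recovered from district alone: {correct / len(people):.0%}")
```

Any model trained on the scrubbed data can exploit the same correlation implicitly, which is why simply deleting the sensitive column is not a guarantee of fairness.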

An enlightened service provider might, upon seeing the results of the analysis, investigate whether and how racism is a barrier to their black clients getting hired. Equipped with this knowledge they could begin to do something about it. For instance, they could ensure that local employers’ hiring practices are fair and provide additional help to those applicants more likely to face discrimination. The moral responsibility lies with those responsible for interpreting and acting on the model, not the model itself.

So the argument that sensitive attributes should be stripped from the datasets we use to train predictive models is too simple. Of course, collecting sensitive data should be carefully regulated because it can easily be misused. But misuse is not inevitable, and in some cases, collecting sensitive attributes could prove absolutely essential in uncovering, predicting, and correcting unjust discrimination. For example, in the case of homeless services discussed above, the city would need to collect data on ethnicity in order to discover potential biases in employment practices….(More)

A new data viz tool shows what stories are being undercovered in countries around the world


Joseph Lichterman at NiemanLab: “It’s a common lament: Though the Internet provides us access to a nearly unlimited number of sources for news, most of us rarely venture beyond the same few sources or topics. And as news consumption shifts to our phones, people are using even fewer sources: On average, consumers access 1.52 trusted news sources on their phones, according to the 2015 Reuters Digital News Report, which studied news consumption across several countries.

To try to diversify people’s perspectives on the news, Jigsaw — the tech incubator, formerly known as Google Ideas, that’s run by Google’s parent company Alphabet — this week launched Unfiltered.News, an experimental site that uses Google News data to show users what topics are being underreported or are popular in regions around the world.


Unfiltered.News’ main data visualization shows which topics are most reported in countries around the world. A column on the right side of the page highlights stories that are being reported widely elsewhere in the world, but aren’t in the top 100 stories on Google News in the selected country. In the United States yesterday, five of the top 10 underreported topics, unsurprisingly, dealt with soccer. In China, Barack Obama was the most undercovered topic….(More)”

To Make Cities More Efficient, Fix Procurement To Welcome Startups


Jay Nath and Jeremy M. Goldberg at the Aspen Journal of Ideas: “In 2014, an amazing thing happened in government: In just 16 weeks, a new system to help guide visually impaired travelers through San Francisco International Airport was developed, going from a rough idea to ready-to-go status, through a city program that brings startups and agencies together. Yet two and a half years later, a request for proposals to expand this ground-breaking, innovative technology has yet to be finalized.

For people in government, that’s an all-too-familiar scenario. While procurement serves an important role in ensuring that government is a responsible steward of taxpayer dollars, there’s tremendous opportunity to improve the way the public sector has traditionally bought goods and services. And the stakes are higher than simply dealing with red tape. By limiting the pool of partners to those who know how to work the system, taxpayers are missing out on low-cost, innovative solutions. Essentially, RFPs are a Do Not Enter sign for startups — the engine of innovation across nearly every industry except the public sector.

In San Francisco, under our Startup in Residence program, we’re experimenting with how to remove the friction associated with RFPs for both government staff and startups. For government staff, that means publishing an RFP in days, not months. For startups, it means responding to an RFP in hours, not weeks.

So what did we learn from our experience with the airport? We combined 17 RFPs into one; utilized general “challenge statements” in place of highly detailed project specifications; leveraged modern technology; and created a simple guide to navigating the process. Here’s a look at how each of those innovations works:

The RFP bus: Today, most RFPs are like a single driver in a car — inefficient and resource-intensive. We should be looking at what might be thought of as a mass-transit option, like a bus. By combining a number of RFPs for projects that have characteristics in common into a single procurement vehicle, we can spread the process costs over a number of RFPs.

Challenges, not prescriptions: Under the traditional procurement process, city staffers develop highly prescribed requirements that are often dozens of pages long, a practice that tends to favor existing approaches and established vendors. Shifting to brief challenge statements opens the door for residents, small businesses and entrepreneurs with new ideas. And it reduces the time required by government staff to develop an RFP from weeks or months to days.

Technology that enables the process: This was critical to enabling San Francisco to combine 17 RFPs into one. Without the right technology, we wouldn’t be able to automatically route bidders’ proposals to the appropriate evaluation committees for online scoring or let bidders easily submit their responses. While this kind of procurement technology is not new, its use is still uncommon. That needs to change, and it’s more than a question of efficiency. When citizens and entrepreneurs have a painful experience interacting with government, they wonder how we can address the big challenges if we can’t get the small stuff right…(More)

Crowdlaw and open data policy: A perfect match?


At Sunlight: “The open government community has long envisioned a future where all public policy is collaboratively drafted online and in the open — a future in which we (the people) don’t just have a say in who writes and votes on the rules that govern our society, but are empowered in a substantive way to participate, annotating or even crafting those rules ourselves. If that future seems far away, it’s because we’ve seen few successful instances of this approach in the United States. But an increasing amount of open and collaborative online approaches to drafting legislation — a set of practices the NYU GovLab and others have called “crowdlaw” — seem to have found their niche in open data policy.

This trend has taken hold at the local level, where multiple cities have employed crowdlaw techniques to draft or revise the regulations which establish and govern open data initiatives. But what explains this trend and the apparent connection between crowdlaw and the proactive release of government information online? Is it simply that both are “open government” practices? Or is there something more fundamental at play here?…

Since 2012, several high-profile U.S. cities have utilized collaborative tools such as Google Docs, GitHub, and Madison to open up the process of open data policymaking. The below chronology of notable instances of open data policy drafted using crowdlaw techniques gives the distinct impression of a good idea spreading in American cities:….

While many cities may not be ready to take their hands off of the wheel and trust the public to help engage in meaningful decisions about public policy, it’s encouraging to see some giving it a try when it comes to open data policy. Even for cities still feeling skeptical, this approach can be applied internally; it allows other departments impacted by changes that come about through an open data policy to weigh in, too. Cities can open up varying degrees of the process, retaining as much autonomy as they feel comfortable with. In the end, utilizing the crowdlaw process with open data legislation can increase its effectiveness and accountability by engaging the public directly — a win-win for governments and their citizens alike….(More)”


The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections


Robert Epstein and Ronald E. Robertson at PNAS: “Internet search rankings have a significant impact on consumer choices, mainly because users trust and choose higher-ranked results more than lower-ranked results. Given the apparent power of search rankings, we asked whether they could be manipulated to alter the preferences of undecided voters in democratic elections. Here we report the results of five relevant double-blind, randomized controlled experiments, using a total of 4,556 undecided voters representing diverse demographic characteristics of the voting populations of the United States and India. The fifth experiment is especially notable in that it was conducted with eligible voters throughout India in the midst of India’s 2014 Lok Sabha elections just before the final votes were cast. The results of these experiments demonstrate that (i) biased search rankings can shift the voting preferences of undecided voters by 20% or more, (ii) the shift can be much higher in some demographic groups, and (iii) search ranking bias can be masked so that people show no awareness of the manipulation. We call this type of influence, which might be applicable to a variety of attitudes and beliefs, the search engine manipulation effect. Given that many elections are won by small margins, our results suggest that a search engine company has the power to influence the results of a substantial number of elections with impunity. The impact of such manipulations would be especially large in countries dominated by a single search engine company…(More)”
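The mechanism behind the effect is easy to simulate. The toy model below is our illustration, not the authors’ experimental protocol: the 1/(rank+1) position-bias weights, the ten-result page, and the “top three results favor candidate A” biased condition are all invented assumptions. It shows how trusting higher-ranked results more can translate ranking bias into a sizable preference shift among otherwise undecided voters:

```python
import random

# Toy model, not the authors' experimental protocol: each undecided voter
# samples one search result, trusting higher ranks more (the 1/(rank+1)
# position-bias weights are an invented assumption), and adopts its slant.
# Biasing the top three results toward candidate A shifts aggregate preference.
random.seed(1)

def simulate(bias_for_a, n_voters=100_000, n_results=10):
    weights = [1 / (r + 1) for r in range(n_results)]
    votes_a = 0
    for _ in range(n_voters):
        r = random.choices(range(n_results), weights=weights)[0]
        # Biased condition: the top 3 results favor candidate A;
        # neutral condition: each clicked result is a coin flip.
        favors_a = (r < 3) if bias_for_a else (random.random() < 0.5)
        votes_a += favors_a
    return votes_a / n_voters

neutral = simulate(bias_for_a=False)
biased = simulate(bias_for_a=True)
print(f"share preferring A: neutral {neutral:.0%}, biased {biased:.0%}")
```

Even this crude model produces a double-digit swing from ranking order alone, which is consistent with the paper’s central claim that in close elections such a shift could be decisive.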