Encouraging and Sustaining Innovation in Government: Technology and Innovation in the Next Administration


New report by Beth Simone Noveck and Stefaan Verhulst: “…With rates of trust in government at an all-time low, technology and innovation will be essential to achieve the next administration’s goals and to deliver services more effectively and efficiently. The next administration must prioritize using technology to improve governing and must develop plans to do so in the transition… This paper provides analysis and a set of concrete recommendations, both for the period of transition before the inauguration and for the start of the next presidency, to encourage and sustain innovation in government. Leveraging the insights from the experts who participated in a day-long discussion, we endeavor to explain how government can improve its use of digital technologies to create more effective policies, solve problems faster and deliver services more effectively at the federal, state and local levels….

The broad recommendations are:

  • Scale Data-Driven Governance: Platforms such as data.gov represent initial steps toward data-driven governance. Much more can be done, however, to open up data and to help agencies become better consumers of data, improving decision-making and scaling up evidence-based governance. This includes better use of predictive analytics, more public engagement, and greater use of cutting-edge methods like machine learning.
  • Scale Collaborative Innovation: Collaborative innovation takes place when government and the public work together, widening the pool of expertise and knowledge brought to bear on public problems. The next administration can reach out more effectively, not just to the public at large but through targeted outreach to the public officials and citizens who possess the most relevant skills or expertise for the problems at hand.
  • Promote a Culture of Innovation: Institutionalizing a culture of technology-enabled innovation will require embedding and institutionalizing innovation and technology skills more widely across the federal enterprise. For example, contracting, grants and personnel officials need to have a deeper understanding of how technology can help them do their jobs more efficiently, and more people need to be trained in human-centered design, gamification, data science, data visualization, crowdsourcing and other new ways of working.
  • Utilize Evidence-Based Innovation: To better direct government investments, leaders need a much better sense of what works and what doesn’t. The government spends billions on research in the private and university sectors, but very little on experimenting with, testing and evaluating its own programs. The next administration should continue developing an evidence-based approach to governance, including greater use of methods like A/B testing (comparing two versions of a webpage or app against each other to determine which performs better; see the sketch after this list); establishing a clearinghouse for success and failure stories and best practices; and encouraging overseers to be more open to innovation.
  • Make Innovation a Priority in the Transition: The transition period represents a unique opportunity to seed the foundations for long-lasting change. By explicitly incorporating innovation into the structure, goals and activities of the transition teams, the next administration can get a fast start in implementing policy goals and improving government operations through innovation approaches….(More)”
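To make the A/B testing recommendation concrete, here is a minimal sketch of how an agency team might compare two versions of a service page. It assumes a simple two-proportion z-test via statsmodels; the variant names and counts are hypothetical, not from the report.

```python
# Minimal A/B test sketch: compare completion rates for two hypothetical
# versions of a government web page. All numbers here are made up.
from statsmodels.stats.proportion import proportions_ztest

successes = [430, 512]     # completed applications per variant (A, B)
visitors = [10000, 10000]  # total visitors per variant

# Two-sided test: is the difference in completion rates significant?
z_stat, p_value = proportions_ztest(count=successes, nobs=visitors)

rate_a, rate_b = (s / n for s, n in zip(successes, visitors))
print(f"Variant A: {rate_a:.2%}, Variant B: {rate_b:.2%}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant; ship the better variant.")
else:
    print("No significant difference detected; keep testing.")
```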

How the Federal Government is thinking about Artificial Intelligence


Mohana Ravindranath at NextGov: “Since May, the White House has been exploring the use of artificial intelligence and machine learning for the public: that is, how the federal government should be investing in the technology to improve its own operations. The technologies, often modeled after the way humans take in, store and use new information, could help researchers find patterns in genetic data or help judges decide sentences for criminals based on their likelihood of reoffending, among other applications. …

Here’s a look at how some federal groups are thinking about the technology:

  • Police data: At a recent White House workshop, Office of Science and Technology Policy Senior Adviser Lynn Overmann said artificial intelligence could help police departments comb through hundreds of thousands of hours of body-worn camera footage, potentially identifying the police officers who are good at de-escalating situations. It could also help cities determine which individuals are likely to end up in jail or prison, so officials could rethink programs accordingly. For example, if there’s a large overlap between substance abuse and jail time, public health organizations might decide to focus their efforts on helping people reduce their substance abuse to keep them out of jail.
  • Explainable artificial intelligence: The Pentagon’s research and development agency is looking for technology that can explain to analysts how it makes decisions. If people can’t understand how a system works, they’re not likely to use it, according to a broad agency announcement from the Defense Advanced Research Projects Agency. Intelligence analysts who might rely on a computer for recommendations on investigative leads must “understand why the algorithm has recommended certain activity,” as do employees overseeing autonomous drone missions.
  • Weather detection: The Coast Guard recently posted its intent to sole-source a contract for technology that could autonomously gather information about traffic, crosswind, and aircraft emergencies. That system has built-in artificial intelligence so it can “provide only operational relevant information.”
  • Cybersecurity: The Air Force wants to make cyber defense operations as autonomous as possible, and is looking at artificial intelligence that could potentially identify or block attempts to compromise a system, among other tasks.

While there are endless applications in government, computers won’t completely replace federal employees anytime soon….(More)”

How Tech Giants Are Devising Real Ethics for Artificial Intelligence


For years, science-fiction moviemakers have been making us fear the bad things that artificially intelligent machines might do to their human creators. But for the next decade or two, our biggest concern is more likely to be that robots will take away our jobs or bump into us on the highway.

Now five of the world’s largest tech companies are trying to create a standard of ethics around the creation of artificial intelligence. While science fiction has focused on the existential threat of A.I. to humans, researchers at Google’s parent company, Alphabet, and those from Amazon, Facebook, IBM and Microsoft have been meeting to discuss more tangible issues, such as the impact of A.I. on jobs, transportation and even warfare.

Tech companies have long overpromised what artificially intelligent machines can do. In recent years, however, the A.I. field has made rapid advances in a range of areas, from self-driving cars and machines that understand speech, like Amazon’s Echo device, to a new generation of weapons systems that threaten to automate combat.

The specifics of what the industry group will do or say — even its name — have yet to be hashed out. But the basic intention is clear: to ensure that A.I. research is focused on benefiting people, not hurting them, according to four people involved in the creation of the industry partnership who are not authorized to speak about it publicly.

The importance of the industry effort is underscored in a report issued on Thursday by a Stanford University group funded by Eric Horvitz, a Microsoft researcher who is one of the executives in the industry discussions. The Stanford project, called the One Hundred Year Study on Artificial Intelligence, lays out a plan to produce a detailed report on the impact of A.I. on society every five years for the next century….The Stanford report attempts to define the issues that citizens of a typical North American city will face in computers and robotic systems that mimic human capabilities. The authors explore eight aspects of modern life, including health care, education, entertainment and employment, but specifically do not look at the issue of warfare…(More)”

The risks of relying on robots for fairer staff recruitment


Sarah O’Connor at the Financial Times: “Robots are not just taking people’s jobs away, they are beginning to hand them out, too. Go to any recruitment industry event and you will find the air is thick with terms like “machine learning”, “big data” and “predictive analytics”.

The argument for using these tools in recruitment is simple. Robo-recruiters can sift through thousands of job candidates far more efficiently than humans. They can also do it more fairly. Since they do not harbour conscious or unconscious human biases, they will recruit a more diverse and meritocratic workforce.

This is a seductive idea but it is also dangerous. Algorithms are not inherently neutral just because they see the world in zeros and ones.

For a start, any machine learning algorithm is only as good as the training data from which it learns. Take the PhD thesis of academic researcher Colin Lee, released to the press this year. He analysed data on the success or failure of 441,769 job applications and built a model that could predict with 70 to 80 per cent accuracy which candidates would be invited to interview. The press release plugged this algorithm as a potential tool to screen a large number of CVs while avoiding “human error and unconscious bias”.

But a model like this would absorb any human biases at work in the original recruitment decisions. For example, the research found that age was the biggest predictor of being invited to interview, with the youngest and the oldest applicants least likely to be successful. You might think it fair enough that inexperienced youngsters do badly, but the routine rejection of older candidates seems like something to investigate rather than codify and perpetuate. Mr Lee acknowledges these problems and suggests it would be better to strip the CVs of attributes such as gender, age and ethnicity before using them….(More)”
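A minimal sketch of the dynamic described above, under stated assumptions: a screening model trained on historical interview decisions will learn whatever biases those decisions contained, and stripping protected attributes only partly helps because other features can act as proxies. The column names and data file are hypothetical illustrations, not Mr Lee’s actual model.

```python
# Sketch: train a CV-screening model on historical decisions, as the
# article describes. All column names and data files are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("applications.csv")  # hypothetical historical applications

# Naive feature set: the model can learn that 'age' predicted past
# interview invitations, codifying the bias in the original decisions.
features_naive = ["age", "years_experience", "education_level", "num_prior_roles"]

# Mitigation suggested in the article: strip protected attributes first.
# Imperfect, since remaining features can proxy for age (for example,
# years_experience correlates strongly with it).
features_stripped = ["years_experience", "education_level", "num_prior_roles"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features_stripped], df["invited_to_interview"],
    test_size=0.2, random_state=0,
)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```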

Technology Is Monitoring the Urban Landscape


Big City is watching you.

It will do it with camera-equipped drones that inspect municipal power lines and robotic cars that know where people go. Sensor-laden streetlights will change brightness based on danger levels. Technologists and urban planners are working on a major transformation of urban landscapes over the next few decades.

Much of it involves the close monitoring of things and people, thanks to digital technology. To the extent that this makes people’s lives easier, the planners say, they will probably like it. But troubling and knotty questions of privacy and control remain.

A White House report published in February identified advances in transportation, energy and manufacturing, among other developments, that will bring on what it termed “a new era of change.”

Much of the change will also come from the private sector, which is moving faster to reach city dwellers, and is more skilled in collecting and responding to data. That is leading cities everywhere to work more closely than ever with private companies, which may have different priorities than the government.

One of the biggest changes that will hit a digitally aware city, it is widely agreed, is the seemingly prosaic issue of parking. Space given to parking is expected to shrink by half or more, as self-driving cars and drone deliveries lead an overall shift in connected urban transport. That will change or eliminate acres of urban space occupied by raised and underground parking structures.

Shared vehicles are not parked as much, and with more automation, they will know where parking spaces are available, eliminating the need to drive in search of a space.

“Office complexes won’t need parking lots with twice the footprint of their buildings,” said Sebastian Thrun, who led Google’s self-driving car project in its early days and now runs Udacity, an online learning company. “When we started on self-driving cars, we talked all the time about cutting the number of cars in a city by a factor of three,” or a two-thirds reduction.

In addition, police, fire, and even library services will seek greater responsiveness, partly by tracking their own assets and partly by looking at things like social media. Later, technologies like three-dimensional printing, new materials and robotic construction and demolition will be able to reshape skylines in a matter of weeks.

At least that is the plan. So much change afoot creates confusion….

The new techno-optimism is focused on big data and artificial intelligence. “Futurists used to think everyone would have their own plane,” said Erick Guerra, a professor of city and regional planning at the University of Pennsylvania. “We never have a good understanding of how things will actually turn out.”

He recently surveyed the 25 largest metropolitan planning organizations in the country and found that almost none have solid plans for modernizing their infrastructure. That may be the right way to approach the challenges of cities full of robots, but so far most clues are coming from companies that also sell the technology.

“There’s a great deal of uncertainty, and a competition to show they’re low on regulation,” Mr. Guerra said. “There is too much potential money for new technology to be regulated out.”

The big tech companies say they are not interested in imposing the sweeping “smart city” projects they used to push, in part because things are changing too quickly. But they still want to build big, and they view digital surveillance as an essential component…(More)”

Can mobile usage predict illiteracy in a developing country?


Pål Sundsøy at arXiv: “The present study provides the first evidence that illiteracy can be reliably predicted from standard mobile phone logs. By deriving a broad set of mobile phone indicators reflecting users’ financial, social and mobility patterns, we show how supervised machine learning can be used to predict individual illiteracy in an Asian developing country, externally validated against a large-scale survey. On average the model performs 10 times better than random guessing, with 70% accuracy. Further, we show how individual illiteracy can be aggregated and mapped geographically at cell tower resolution. Geographical mapping of illiteracy is crucial to know where the illiterate people are, and where to put in resources. In underdeveloped countries such mappings are often based on outdated household surveys with low spatial and temporal resolution. One in five people worldwide struggle with illiteracy, and it is estimated that illiteracy costs the global economy more than 1 trillion dollars each year. These results potentially enable cost-effective, questionnaire-free investigation of illiteracy-related questions on an unprecedented scale…(More)”.
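A minimal sketch of the general pipeline the abstract describes, under stated assumptions: aggregate raw phone events into per-user indicators, then train a supervised classifier against survey-derived literacy labels. The file names, columns and choice of gradient boosting are hypothetical illustrations, not the paper’s exact setup.

```python
# Sketch: derive per-user indicators from mobile phone logs, then predict
# survey-labeled illiteracy. All file and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

cdr = pd.read_csv("call_detail_records.csv")  # one row per phone event

# Aggregate events into indicators reflecting financial, social and
# mobility patterns (a small illustrative subset).
features = cdr.groupby("user_id").agg(
    top_up_amount_mean=("top_up_amount", "mean"),     # financial proxy
    distinct_contacts=("counterpart_id", "nunique"),  # social network size
    distinct_towers=("cell_tower_id", "nunique"),     # mobility range
    sms_share=("is_sms", "mean"),                     # texting vs. calling
)

labels = pd.read_csv("survey_labels.csv", index_col="user_id")["is_illiterate"]
X, y = features.align(labels, join="inner", axis=0)  # match users to labels

scores = cross_val_score(GradientBoostingClassifier(), X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```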

Enablers for Smart Cities


Book by Amal El Fallah Seghrouchni, Fuyuki Ishikawa, Laurent Hérault, and Hideyuki Tokuda: “Smart cities are a new vision for urban development. They integrate information and communication technology infrastructures – in the domains of artificial intelligence, distributed and cloud computing, and sensor networks – into a city, to facilitate quality of life for its citizens and sustainable growth. This book explores various concepts for the development of these new technologies (including agent-oriented programming, broadband infrastructures, wireless sensor networks, Internet-based networked applications, open data and open platforms), and how they can provide smart services and enablers in a range of public domains.

The most significant research, both established and emerging, is brought together to enable academics and practitioners to investigate the possibilities of smart cities, and to generate the knowledge and solutions required to develop and maintain them…(More)”

What Governments Can Learn From Airbnb And the Sharing Economy


In Fortune: “….Despite some regulators’ fears, the sharing economy may not result in the decline of regulation but rather in its opposite, providing a basis upon which society can develop more rational, ethical, and participatory models of regulation. But what regulation looks like, as well as who actually creates and enforces the regulation, is also bound to change.

There are three emerging models – peer regulation, self-regulatory organizations, and data-driven delegation – that promise a regulatory future for the sharing economy best aligned with society’s interests. In the adapted book excerpt that follows, I explain how the third of these approaches, of delegating enforcement of regulations to companies that store critical data on consumers, can help mitigate some of the biases Airbnb guests may face, and why this is a superior alternative to the “open data” approach of transferring consumer information to cities and state regulators.

Consider a different problem — collecting hotel occupancy taxes from hundreds of thousands of Airbnb hosts rather than from a handful of corporate hotel chains. The delegation of tax collection to Airbnb, something a growing number of cities are experimenting with, has a number of advantages. It is likely to yield higher tax revenues and greater compliance than a system where hosts are required to register directly with the government, something occasional hosts seem reluctant to do. It also sidesteps privacy concerns resulting from mandates that digital platforms like Airbnb turn over detailed user data to the government. There is also significant opportunity for the platform to build credibility as it starts to take on quasi-governmental roles like this.

There is yet another advantage, and the one I believe will be the most significant in the long run. It asks a platform to leverage its data to ensure compliance with a set of laws in a manner geared towards delegating responsibility to the platform. You might say that the task in question here — computing tax owed, collecting, and remitting it — is technologically trivial. True. But I like this structure because of the potential it represents. It could be a precursor for much more exciting delegated possibilities.
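The “technologically trivial” piece is easy to make concrete. Below is a minimal sketch of per-booking occupancy tax computation and aggregation for remittance; the cities, rates and field names are hypothetical, not Airbnb’s actual schema.

```python
# Sketch: compute occupancy tax per booking and aggregate amounts to remit
# to each city. Rates and fields are hypothetical.
from dataclasses import dataclass

@dataclass
class Booking:
    city: str
    nightly_rate: float
    nights: int

# Hypothetical per-city occupancy tax rates.
TAX_RATES = {"san_francisco": 0.14, "portland": 0.115, "chicago": 0.045}

def tax_owed(booking: Booking) -> float:
    """Tax the platform would collect at checkout and later remit."""
    rate = TAX_RATES.get(booking.city, 0.0)
    return round(booking.nightly_rate * booking.nights * rate, 2)

bookings = [Booking("san_francisco", 200.0, 3), Booking("portland", 120.0, 2)]
remittance: dict[str, float] = {}
for b in bookings:
    remittance[b.city] = remittance.get(b.city, 0.0) + tax_owed(b)
print(remittance)  # per-city totals owed to tax authorities
```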

For a couple of decades now, companies of different kinds have been mining the large sets of “data trails” customers provide through their digital interactions. This generates insights of business and social importance. One such effort we are all familiar with is credit card fraud detection. When an unusual pattern of activity is detected, you get a call from your bank’s security team. Sometimes your card is blocked temporarily. The enthusiasm of these digital security systems is sometimes a nuisance, but it stems from your credit card company using sophisticated machine learning techniques to identify patterns that prior experience has told it are associated with a stolen card. It saves billions of dollars in taxpayer and corporate funds by detecting and blocking fraudulent activity swiftly.
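The fraud-detection pattern the author describes can be sketched with an off-the-shelf anomaly detector: learn a cardholder’s usual behavior, then flag transactions that deviate from it. The features and numbers below are synthetic, and real systems are far more elaborate than this.

```python
# Sketch: flag card transactions that deviate from a cardholder's history.
# Features: [amount_usd, hour_of_day, km_from_home]; all data is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
history = np.column_stack([
    rng.normal(40, 15, 500),  # typical purchase amounts
    rng.normal(14, 3, 500),   # daytime hours
    rng.normal(5, 2, 500),    # close to home
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(history)

new_txns = np.array([
    [35.0, 13.0, 4.0],     # ordinary purchase
    [2400.0, 3.0, 900.0],  # large amount, 3 a.m., far from home
])
print(detector.predict(new_txns))  # 1 = looks normal, -1 = flag for review
```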

A more recent visible example of the power of mining large data sets of customer interaction came in 2008, when Google engineers announced that they could predict flu outbreaks using data collected from Google searches, and track the spread of flu outbreaks in real time, providing information that was well ahead of the information available using the Centers for Disease Control and Prevention’s (CDC) own tracking systems. The Google system’s performance deteriorated after a couple of years, but its impact on public perception of what might be possible using “big data” was immense.

It seems highly unlikely that such a system would have emerged if Google had been asked to hand over anonymized search data to the CDC. In fact, there would probably have been widespread public backlash to this on privacy grounds. Besides, the reason this capability emerged organically from within Google is partly that the company has one of the highest concentrations of computer science and machine learning talent in the world.

Similar approaches hold great promise as a regulatory approach for sharing economy platforms. Consider the issue of discriminatory practices. There has long been anecdotal evidence that some yellow cabs in New York discriminate against some nonwhite passengers. There have been similar concerns that such behavior may start to manifest on ridesharing platforms and in other peer-to-peer markets for accommodation and labor services.

For example, a 2014 study by Benjamin Edelman and Michael Luca of Harvard suggested that African American hosts might have lower pricing power than white hosts on Airbnb. While the study did not conclusively establish that the difference is due to guests discriminating against African American hosts, a follow-up study suggested that guests with “distinctively African American names” were less likely to receive favorable responses for their requests to Airbnb hosts. This research raises a red flag about the need for vigilance as the lines between personal and professional blur.

One solution would be to apply machine-learning techniques to identify patterns associated with discriminatory behavior. No doubt, many platforms are already using such systems….(More)”
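One starting point for the pattern-detection idea above is a simple disparity audit: compare host acceptance rates across guest groups before reaching for heavier machinery. The sketch below assumes a hypothetical request log; a real audit would also control for confounders such as listing type and trip dates, as the studies cited above take pains to do.

```python
# Sketch: test whether host acceptance rates differ across guest groups.
# The log file and its fields are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency

requests = pd.read_csv("booking_requests.csv")  # one row per guest request

table = pd.crosstab(requests["guest_group"], requests["accepted"])
chi2, p_value, dof, _ = chi2_contingency(table)

print(table)
print(f"chi2 = {chi2:.1f}, p = {p_value:.4f}")
if p_value < 0.01:
    print("Acceptance rates differ across groups; drill into host-level patterns.")
```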

What is Artificial Intelligence?


Report by Mike Loukides and Ben Lorica: “Defining artificial intelligence isn’t just difficult; it’s impossible, not the least because we don’t really understand human intelligence. Paradoxically, advances in AI will help more to define what human intelligence isn’t than what artificial intelligence is.

But whatever AI is, we’ve clearly made a lot of progress in the past few years, in areas ranging from computer vision to game playing. AI is making the transition from a research topic to the early stages of enterprise adoption. Companies such as Google and Facebook have placed huge bets on AI and are already using it in their products. But Google and Facebook are only the beginning: over the next decade, we’ll see AI steadily creep into one product after another. We’ll be communicating with bots, rather than scripted robo-dialers, and not realizing that they aren’t human. We’ll be relying on cars to plan routes and respond to road hazards. It’s a good bet that in the next decades, some features of AI will be incorporated into every application that we touch and that we won’t be able to do anything without touching an application.

Given that our future will inevitably be tied up with AI, it’s imperative that we ask: Where are we now? What is the state of AI? And where are we heading?

Capabilities and Limitations Today

Descriptions of AI span several axes: strength (how intelligent is it?), breadth (does it solve a narrowly defined problem, or is it general?), training (how does it learn?), capabilities (what kinds of problems are we asking it to solve?), and autonomy (are AIs assistive technologies, or do they act on their own?). Each of these axes is a spectrum, and each point in this many-dimensional space represents a different way of understanding the goals and capabilities of an AI system.

On the strength axis, it’s very easy to look at the results of the last 20 years and realize that we’ve made some extremely powerful programs. Deep Blue beat Garry Kasparov in chess; Watson beat the best Jeopardy champions of all time; AlphaGo beat Lee Sedol, arguably the world’s best Go player. But all of these successes are limited. Deep Blue, Watson, and AlphaGo were all highly specialized, single-purpose machines that did one thing extremely well. Deep Blue and Watson can’t play Go, and AlphaGo can’t play chess or Jeopardy, even on a basic level. Their intelligence is very narrow, and can’t be generalized. A lot of work has gone into using Watson for applications such as medical diagnosis, but it’s still fundamentally a question-and-answer machine that must be tuned for a specific domain. Deep Blue has a lot of specialized knowledge about chess strategy and an encyclopedic knowledge of openings. AlphaGo was built with a more general architecture, but a lot of hand-crafted knowledge still made its way into the code. I don’t mean to trivialize or undervalue their accomplishments, but it’s important to realize what they haven’t done.

We haven’t yet created an artificial general intelligence that can solve a multiplicity of different kinds of problems. We still don’t have a machine that can listen to recordings of humans for a year or two, and start speaking. While AlphaGo “learned” to play Go by analyzing thousands of games, and then playing thousands more against itself, the same software couldn’t be used to master chess. The same general approach? Probably. But our best current efforts are far from a general intelligence that is flexible enough to learn without supervision, or flexible enough to choose what it wants to learn, whether that’s playing board games or designing PC boards.

Toward General Intelligence

How do we get from narrow, domain-specific intelligence to more general intelligence? By “general intelligence,” we don’t necessarily mean human intelligence; but we do want machines that can solve different kinds of problems without being programmed with domain-specific knowledge. We want machines that can make human judgments and decisions. That doesn’t necessarily mean that AI systems will implement concepts like creativity, intuition, or instinct, which may have no digital analogs. A general intelligence would have the ability to follow multiple pursuits and to adapt to unexpected situations. And a general AI would undoubtedly implement concepts like “justice” and “fairness”: we’re already talking about the impact of AI on the legal system….

It’s easier to think of super-intelligence as a matter of scale. If we can create “general intelligence,” it’s easy to assume that it could quickly become thousands of times more powerful than human intelligence. Or, more precisely: either general intelligence will be significantly slower than human thought, and it will be difficult to speed it up either through hardware or software; or it will speed up quickly, through massive parallelism and hardware improvements. We’ll go from thousand-core GPUs to trillions of cores on thousands of chips, with data streaming in from billions of sensors. In the first case, when speedups are slow, general intelligence might not be all that interesting (though it will have been a great ride for the researchers). In the second case, the ramp-up will be very steep and very fast….(More) (Full Report)”

This text-message hotline can predict your risk of depression or stress


Clinton Nguyen for TechInsider: “When counselors are helping someone in the midst of an emotional crisis, they must not only know how to talk – they also must be willing to text.

Crisis Text Line, a non-profit text-message-based counseling service, operates a hotline for people who find it safer or easier to text about their problems than make a phone call or send an instant message. Over 1,500 volunteers are on hand 24/7 to lend support about problems including bullying, isolation, suicidal thoughts, bereavement, self-harm, or even just stress.

But in addition to providing a new outlet for those who prefer to communicate by text, the service is gathering a wellspring of anonymized data.

“We look for patterns in historical conversations that end up being higher risk for self harm and suicide attempts,” Liz Eddy, a Crisis Text Line spokesperson, tells Tech Insider. “By grounding in historical data, we can predict the risk of new texters coming in.”

According to Fortune, the organization is using machine learning to prioritize higher-risk individuals for quicker and more effective responses. But Crisis Text Line is also wielding the data it gathers in other ways – the company has published a page of trends that tells the public which hours or days people are more likely to be affected by certain issues, as well as which US states are most affected by specific crises or psychological states.
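A minimal sketch of the triage idea described above, under stated assumptions: score incoming messages with a classifier trained on labeled historical conversations, then serve the queue highest-risk first. The training examples are invented, and Crisis Text Line’s actual models and features are not described in this excerpt.

```python
# Sketch: rank an incoming queue of messages by predicted risk.
# Training data is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled history: 1 = conversation later judged high risk.
history = [
    ("i can't stop thinking about hurting myself", 1),
    ("i failed my exam and feel awful", 0),
    ("nobody would miss me if i was gone", 1),
    ("fight with my mom again, so stressed", 0),
]
texts, labels = zip(*history)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

incoming = ["i just feel stressed about school", "i want it all to end tonight"]
scores = model.predict_proba(incoming)[:, 1]  # estimated P(high risk)
for score, msg in sorted(zip(scores, incoming), reverse=True):
    print(f"{score:.2f}  {msg!r}")  # highest-risk messages served first
```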

According to the data, residents of Alaska reach out to the Text Line for LGBTQ issues more than those in other states, and Maine is one of the most stressed out states. Physical abuse is most commonly reported in North Dakota and Wyoming, while depression is more prevalent in texters from Kentucky and West Virginia.

The research comes at an especially critical time. According to studies from the National Center for Health Statistics, US suicide rates have surged to a 30-year high. The study noted a rise in suicide rates for all demographics except black men over the age of 75. Alarmingly, the suicide rate among 10- to 14-year-old girls has tripled since 1999….(More)”