How the Federal Government is thinking about Artificial Intelligence


Mohana Ravindranath at NextGov: “Since May, the White House has been exploring the use of artificial intelligence and machine learning for the public: that is, how the federal government should be investing in the technology to improve its own operations. The technologies, often modeled after the way humans take in, store and use new information, could help researchers find patterns in genetic data or help judges decide sentences for criminals based on their likelihood of ending up in prison again, among other applications. …

Here’s a look at how some federal groups are thinking about the technology:

  • Police data: At a recent White House workshop, Office of Science and Technology Policy Senior Adviser Lynn Overmann said artificial intelligence could help police departments comb through hundreds of thousands of hours of body-worn camera footage, potentially identifying the police officers who are good at de-escalating situations. It could also help cities determine which individuals are likely to end up in jail or prison, giving officials grounds to rethink the programs that serve them. For example, if there’s a large overlap between substance abuse and jail time, public health organizations might decide to focus their efforts on helping people reduce their substance abuse to keep them out of jail.
  • Explainable artificial intelligence: The Pentagon’s research and development agency is looking for technology that can explain to analysts how it makes decisions. If people can’t understand how a system works, they’re not likely to use it, according to a broad agency announcement from the Defense Advanced Research Projects Agency. Intelligence analysts who might rely on a computer for recommendations on investigative leads must “understand why the algorithm has recommended certain activity,” as do employees overseeing autonomous drone missions.
  • Weather detection: The Coast Guard recently posted its intent to sole-source a contract for technology that could autonomously gather information about traffic, crosswind, and aircraft emergencies. The system has built-in artificial intelligence so it can “provide only operationally relevant information.”
  • Cybersecurity: The Air Force wants to make cyber defense operations as autonomous as possible, and is looking at artificial intelligence that could potentially identify or block attempts to compromise a system, among other capabilities.

While there are endless applications in government, computers won’t completely replace federal employees anytime soon….(More)”

How Tech Giants Are Devising Real Ethics for Artificial Intelligence


For years, science-fiction moviemakers have been making us fear the bad things that artificially intelligent machines might do to their human creators. But for the next decade or two, our biggest concern is more likely to be that robots will take away our jobs or bump into us on the highway.

Now five of the world’s largest tech companies are trying to create a standard of ethics around the creation of artificial intelligence. While science fiction has focused on the existential threat of A.I. to humans, researchers at Google’s parent company, Alphabet, and those from Amazon, Facebook, IBM and Microsoft have been meeting to discuss more tangible issues, such as the impact of A.I. on jobs, transportation and even warfare.

Tech companies have long overpromised what artificially intelligent machines can do. In recent years, however, the A.I. field has made rapid advances in a range of areas, from self-driving cars and machines that understand speech, like Amazon’s Echo device, to a new generation of weapons systems that threaten to automate combat.

The specifics of what the industry group will do or say — even its name — have yet to be hashed out. But the basic intention is clear: to ensure that A.I. research is focused on benefiting people, not hurting them, according to four people involved in the creation of the industry partnership who are not authorized to speak about it publicly.

The importance of the industry effort is underscored in a report issued on Thursday by a Stanford University group funded by Eric Horvitz, a Microsoft researcher who is one of the executives in the industry discussions. The Stanford project, called the One Hundred Year Study on Artificial Intelligence, lays out a plan to produce a detailed report on the impact of A.I. on society every five years for the next century….The Stanford report attempts to define the issues that citizens of a typical North American city will face in computers and robotic systems that mimic human capabilities. The authors explore eight aspects of modern life, including health care, education, entertainment and employment, but specifically do not look at the issue of warfare….(More)”

The risks of relying on robots for fairer staff recruitment


Sarah O’Connor at the Financial Times: “Robots are not just taking people’s jobs away, they are beginning to hand them out, too. Go to any recruitment industry event and you will find the air is thick with terms like “machine learning”, “big data” and “predictive analytics”.

The argument for using these tools in recruitment is simple. Robo-recruiters can sift through thousands of job candidates far more efficiently than humans. They can also do it more fairly. Since they do not harbour conscious or unconscious human biases, they will recruit a more diverse and meritocratic workforce.

This is a seductive idea but it is also dangerous. Algorithms are not inherently neutral just because they see the world in zeros and ones.

For a start, any machine learning algorithm is only as good as the training data from which it learns. Take the PhD thesis of academic researcher Colin Lee, released to the press this year. He analysed data on the success or failure of 441,769 job applications and built a model that could predict with 70 to 80 per cent accuracy which candidates would be invited to interview. The press release plugged this algorithm as a potential tool to screen a large number of CVs while avoiding “human error and unconscious bias”.

But a model like this would absorb any human biases at work in the original recruitment decisions. For example, the research found that age was the biggest predictor of being invited to interview, with the youngest and the oldest applicants least likely to be successful. You might think it fair enough that inexperienced youngsters do badly, but the routine rejection of older candidates seems like something to investigate rather than codify and perpetuate. Mr Lee acknowledges these problems and suggests it would be better to strip the CVs of attributes such as gender, age and ethnicity before using them….(More)”
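The pitfall described above is easy to reproduce. Here is a minimal sketch in Python (using a hypothetical `applications.csv` of historical screening decisions; this is an illustration, not Mr Lee’s actual data or model) of how a classifier trained on past interview decisions inherits whatever biases those decisions contain, alongside the mitigation he suggests: stripping protected attributes before training.

```python
# Minimal sketch: a screening model trained on past human decisions
# learns whatever biases those decisions contain.
# 'applications.csv' is a hypothetical file of historical outcomes.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("applications.csv")
y = df["invited_to_interview"]  # past human decisions become the labels

# Naive model: trains on everything, including protected attributes.
X_all = pd.get_dummies(df.drop(columns=["invited_to_interview"]))

# Mitigation suggested in the article: strip gender, age and
# ethnicity before training.
X_stripped = pd.get_dummies(
    df.drop(columns=["invited_to_interview", "age", "gender", "ethnicity"])
)

for name, X in [("all features", X_all), ("stripped", X_stripped)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    print(f"{name}: accuracy = {model.score(X_te, y_te):.2f}")
```

Note that dropping the columns does not remove proxies for them (a graduation year still encodes age, a first name can encode gender), so blinded models still need bias audits.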

White House, Transportation Dept. want help using open data to prevent traffic crashes


Samantha Ehlinger in FedScoop: “The Transportation Department is looking for public input on how to better interpret and use data on fatal crashes after 2015 data revealed a startling 7.2 percent spike in traffic deaths that year.

Looking for new solutions that could prevent more deaths on the roads, the department released the 2015 open dataset on fatal crashes three months earlier than usual. With it, the department and the White House announced a call to action for people to use the data set as a jumping-off point for a dialogue on how to prevent crashes, as well as to understand what might be causing the spike.

“What we’re ultimately looking for is getting more people engaged in the data … matching this with other publicly available data, or data that the private sector might be willing to make available, to dive in and to tell these stories,” said Bryan Thomas, communications director for the National Highway Traffic Safety Administration, to FedScoop.

One striking statistic was that “pedestrian and pedalcyclist fatalities increased to a level not seen in 20 years,” according to a DOT press release. …

“We want folks to be engaged directly with our own data scientists, so we can help people through the dataset and help answer their questions as they work their way through, bounce ideas off of us, etc.,” Thomas said. “We really want to be accessible in that way.”

He added that as ideas “come to fruition,” there will be opportunities to present what people have learned.

“It’s a very, very rich data set, there’s a lot of information there,” Thomas said. “Our own ability is, frankly, limited to investigate all of the questions that you might have of it. And so we want to get the public really diving in as well.”…

Here are the questions “worth exploring,” according to the call to action:

  • How might improving economic conditions around the country change how Americans are getting around? What models can we develop to identify communities that might be at a higher risk for fatal crashes?
  • How might climate change increase the risk of fatal crashes in a community?
  • How might we use studies of attitudes toward speeding, distracted driving, and seat belt use to better target marketing and behavioral change campaigns?
  • How might we monitor public health indicators and behavior risk indicators to target communities that might have a high prevalence of behaviors linked with fatal crashes (drinking, drug use/addiction, etc.)? What countermeasures should we create to address these issues?”…(More)”
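For readers taking up the call to action, a first pass over the newly released data might look like the sketch below (Python with pandas; the file layout and code values are assumptions to check against NHTSA’s FARS documentation for the 2015 release, not a confirmed schema).

```python
# Starter sketch for exploring the 2015 fatal-crash open data.
# File name and code values are illustrative; verify them against
# the FARS analytical user's manual before relying on results.
import pandas as pd

person = pd.read_csv("FARS2015/person.csv")  # one row per person involved

# Assumed FARS codings: PER_TYP 5 = pedestrian, 6 = pedalcyclist;
# INJ_SEV 4 = fatal injury.
non_motorists = person[person["PER_TYP"].isin([5, 6])]
deaths = non_motorists[non_motorists["INJ_SEV"] == 4]

print("Pedestrian/pedalcyclist deaths in 2015:", len(deaths))
print(deaths.groupby("PER_TYP").size())
```

From here, the kind of story NHTSA is asking for comes from joining these records with outside data (weather, economic indicators, public health surveys) keyed on a crash’s county or date.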

Questioning Big Data: Crowdsourcing crisis data towards an inclusive humanitarian response


Femke Mulder, Julie Ferguson, Peter Groenewegen, Kees Boersma, and Jeroen Wolbers in Big Data & Society: “The aim of this paper is to critically explore whether crowdsourced Big Data enables an inclusive humanitarian response at times of crisis. We argue that all data, including Big Data, are socially constructed artefacts that reflect the contexts and processes of their creation. To support our argument, we qualitatively analysed the process of ‘Big Data making’ that occurred by way of crowdsourcing through open data platforms, in the context of two specific humanitarian crises, namely the 2010 earthquake in Haiti and the 2015 earthquake in Nepal. We show that the process of creating Big Data from local and global sources of knowledge entails the transformation of information as it moves from one distinct group of contributors to the next. The implication of this transformation is that locally based, affected people and often the original ‘crowd’ are excluded from the information flow, and from the interpretation process of crowdsourced crisis knowledge, as used by formal responding organizations, and are marginalized in their ability to benefit from Big Data in support of their own means. Our paper contributes a critical perspective to the debate on participatory Big Data, by explaining the process of inclusion and exclusion during data making, towards more responsive humanitarian relief….(More)”

Smart Economy in Smart Cities


Book edited by Vinod Kumar, T. M.: “The present book highlights studies that show how smart cities promote urban economic development. The book surveys the state of the art of smart city economic development through a literature review, and uses 13 in-depth city research case studies from 10 countries across North America, Europe, Africa and Asia to explain how a smart economy changes the urban spatial system and vice versa. The book focuses on exploratory city studies in different countries, which investigate how urban spatial systems adapt to the specific needs of a smart urban economy. The theory of smart city economic development is not yet entirely understood and applied in metropolitan regional plans. Smart urban economies are largely the result of the influence of ICT applications on all aspects of the urban economy, which in turn changes the land-use system. The book points out that the dynamics of smart city GDP creation take ‘different paths,’ which need further empirical study, hypothesis testing and mathematical modelling. Although there are hypotheses on how smart cities generate wealth and social benefits for nations, there are no significant empirical studies available on how they generate urban economic development through urban spatial adaptation. This book, with its 13 city research studies, is one attempt to fill this gap in the knowledge base….(More)”

Make Data Sharing Routine to Prepare for Public Health Emergencies


Jean-Paul Chretien, Caitlin M. Rivers, and Michael A. Johansson in PLOS Medicine: “In February 2016, Wellcome Trust organized a pledge among leading scientific organizations and health agencies encouraging researchers to release data relevant to the Zika outbreak as rapidly and widely as possible [1]. This initiative echoed a September 2015 World Health Organization (WHO) consultation that assessed data sharing during the recent West Africa Ebola outbreak and called on researchers to make data publicly available during public health emergencies [2]. These statements were necessary because the traditional way of communicating research results—publication in peer-reviewed journals, often months or years after data collection—is too slow during an emergency.

The acute health threat of outbreaks provides a strong argument for more complete, quick, and broad sharing of research data during emergencies. But the Ebola and Zika outbreaks suggest that data sharing cannot be limited to emergencies without compromising emergency preparedness. To prepare for future outbreaks, the scientific community should expand data sharing for all health research….

Open data deserves recognition and support as a key component of emergency preparedness. Initiatives to facilitate discovery of datasets and track their use [40–42]; provide measures of academic contribution, including data sharing that enables secondary analysis [43]; establish common platforms for sharing and integrating research data [44]; and improve data-sharing capacity in resource-limited areas [45] are critical to improving preparedness and response.

Research sponsors, scholarly journals, and collaborative research networks can leverage these new opportunities with enhanced data-sharing requirements for both nonemergency and emergency settings. A proposal to amend the International Health Regulations with clear codes of practice for data sharing warrants serious consideration [46]. Any new requirements should allow scientists to conduct and communicate the results of secondary analyses, broadening the scope of inquiry and catalyzing discovery. Publication embargo periods, such as one under consideration for genetic sequences of pandemic-potential influenza viruses [47], may lower barriers to data sharing but may also slow the timely use of data for public health.

Integrating open science approaches into routine research should make data sharing more effective during emergencies, but this evolution is more than just practice for emergencies. The cause and context of the next outbreak are unknowable; research that seems routine now may be critical tomorrow. Establishing openness as the standard will help build the scientific foundation needed to contain the next outbreak.

Recent epidemics were surprises—Zika and chikungunya sweeping through the Americas; an Ebola epidemic with more than 10,000 deaths; the emergence of severe acute respiratory syndrome and Middle East respiratory syndrome, and an influenza pandemic (influenza A[H1N1]pdm09) originating in Mexico—and we can be sure there are more surprises to come. Opening all research provides the best chance to accelerate discovery and development that will help during the next surprise….(More)”

Managing Federal Information as a Strategic Resource


White House: “Today the Office of Management and Budget (OMB) is releasing an update to the Federal Government’s governing document for the management of Federal information resources: Circular A-130, Managing Information as a Strategic Resource.

The way we manage information technology (IT), security, data governance, and privacy has rapidly evolved since A-130 was last updated in 2000.  In today’s digital world, we are creating and collecting large volumes of data to carry out the Federal Government’s various missions to serve the American people.  This data is duplicated, stored, processed, analyzed, and transferred with ease.  As government continues to digitize, we must ensure we manage data to not only keep it secure, but also allow us to harness this information to provide the best possible service to our citizens.

Today’s update to Circular A-130 gathers in one resource a wide range of policy updates for Federal agencies regarding cybersecurity, information governance, privacy, records management, open data, and acquisitions.  It also establishes general policy for IT planning and budgeting through governance, acquisition, and management of Federal information, personnel, equipment, funds, IT resources, and supporting infrastructure and services.  In particular, A-130 focuses on three key elements to help spur innovation throughout the government:

  • Real Time Knowledge of the Environment.  In today’s rapidly changing environment, threats and technology are evolving at previously unimagined speeds.  In such a setting, the Government cannot afford to authorize a system and not look at it again for years at a time.  In order to keep pace, we must move away from periodic, compliance-driven assessment exercises and, instead, continuously assess our systems and build in security and privacy with every update and re-design.  Throughout the Circular, we make clear the shift away from check-list exercises and toward the ongoing monitoring, assessment, and evaluation of Federal information resources.
  • Proactive Risk Management.  To keep pace with the needs of citizens, we must constantly innovate.  As part of such efforts, however, the Federal Government must modernize the way it identifies, categorizes, and handles risk to ensure both privacy and security.  Significant increases in the volume of data processed and utilized by Federal resources require new ways of storing, transferring, and managing it.  Circular A-130 emphasizes the need for strong data governance that encourages agencies to proactively identify risks, determine practical and implementable solutions to address said risks, and implement and continually test the solutions.  This repeated testing of agency solutions will help to proactively identify additional risks, starting the process anew.
  • Shared Responsibility.  Citizens are connecting with each other in ways never before imagined.  From social media to email, the connectivity we have with one another can lead to tremendous advances.  The updated A-130 helps to ensure everyone remains responsible and accountable for assuring privacy and security of information – from managers to employees to citizens interacting with government services. …(More)”

Democracy Is Getting A Reboot On The Blockchain


Adele Peters in FastCoExist: “In 2013, a group of activists in Buenos Aires attempted an experiment in what they called hacking democracy. Representatives from their new political party would promise to always vote on issues according to the will of citizens online. Using a digital platform, people could tell the legislator what to support, in a hybrid of a direct democracy and representation.

With 1.2% of the vote, the candidate they ran for a seat on the city council didn’t win. But the open-source platform they created for letting citizens vote, called Democracy OS, started getting attention around the world. In Buenos Aires, the government tried using it to get citizen feedback on local issues. Then, when the party attempted to run a candidate a second time, something happened that made them shift course. They were told they’d have to bribe a federal judge to participate.

“When you see that kind of corruption that you think happens in House of Cards—and you suddenly realize that House of Cards is happening all around you—it’s a very shocking thing,” says Santiago Siri, a programmer and one of the founders of the party, called Partido de la Red, or the Net Party. Siri started thinking about how technology could solve the fundamental problem of corruption—and about how democracy should work in the digital age.

The idea morphed into a Y Combinator-backed nonprofit called Democracy Earth Foundation. As the website explains:

The Internet transformed how we share culture, work together—and even fall in love—but governance has remained unchanged for over 200 years. With the rise of open-source software and peer-to-peer networks, political intermediation is no longer necessary. We are building a protocol with smart contracts that allows decentralized governance for any kind of organization.

Their new platform, which the team is working on now as part of the Fast Forward accelerator for tech nonprofits, starts by granting incorruptible identities to each citizen, and then records votes in a similarly incorruptible way.

“If you know anything about democracy, one of the simplest ways of subverting democracy is by faking identity,” says Siri. “This is about opening up the black box that can corrupt the system. In a democracy, that black box is who gets to count the votes, who gets to validate the identities that have the right to vote.”

While some experts argue that Internet voting isn’t secure enough to use yet, Democracy Earth’s new platform uses the blockchain—a decentralized, public ledger that uses encryption. Rather than recording votes in one place, everyone’s votes are recorded across a network of thousands of computers. The system can also validate identities in the same decentralized way….(More)”.
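The tamper-evidence Siri describes comes from hash-chaining: every recorded vote carries a cryptographic hash of the record before it, so altering any past vote invalidates every block that follows. Here is a toy sketch of that idea in Python (an illustration of the data structure only, not Democracy Earth’s actual protocol, which also layers on decentralized identity and smart contracts):

```python
# Toy hash-chained vote ledger: altering any past vote breaks the chain.
import hashlib
import json
import time

def block_hash(body):
    """Deterministic SHA-256 over a block's contents."""
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(vote, prev_hash):
    """Bundle a vote with the hash of the previous block."""
    body = {"vote": vote, "prev_hash": prev_hash, "ts": time.time()}
    return {**body, "hash": block_hash(body)}

def verify(chain):
    """Recompute each hash and check every link to the previous block."""
    prev = "0"
    for b in chain:
        body = {"vote": b["vote"], "prev_hash": b["prev_hash"], "ts": b["ts"]}
        if block_hash(body) != b["hash"] or b["prev_hash"] != prev:
            return False
        prev = b["hash"]
    return True

chain = [make_block({"voter": "id-001", "choice": "yes"}, prev_hash="0")]
chain.append(make_block({"voter": "id-002", "choice": "no"}, chain[-1]["hash"]))

print(verify(chain))               # True: ledger intact
chain[0]["vote"]["choice"] = "no"  # tamper with an earlier vote
print(verify(chain))               # False: the edit is detectable
```

On a real blockchain the same check runs independently on thousands of nodes, which is what removes the single “black box” that can corrupt the count.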

The ‘who’ and ‘what’ of #diabetes on Twitter


Mariano Beguerisse-Díaz, Amy K. McLennan, Guillermo Garduño-Hernández, Mauricio Barahona, and Stanley J. Ulijaszek at arXiv: “Social media are being increasingly used for health promotion. Yet the landscape of users and messages in such public fora is not well understood. So far, studies have typically focused either on people suffering from a disease, or on agencies that address it, but have not looked more broadly at all the participants in the debate and discussions. We study the conversation about diabetes on Twitter through the systematic analysis of a large collection of tweets containing the term ‘diabetes’, as well as the interactions between their authors. We address three questions: (1) what themes arise in these messages?; (2) who talks about diabetes and in what capacity?; and (3) which type of users contribute to which themes? To answer these questions, we employ a mixed-methods approach, using techniques from anthropology, network science and information retrieval. We find that diabetes-related tweets fall within broad thematic groups: health information, news, social interaction, and commercial. Humorous messages and messages with references to popular culture appear constantly over time, more than any other type of tweet in this corpus. Top ‘authorities’ are found consistently across time and comprise bloggers, advocacy groups and NGOs related to diabetes, as well as stockmarket-listed companies with no specific diabetes expertise. These authorities fall into seven interest communities in their Twitter follower network. In contrast, the landscape of ‘hubs’ is diffuse and fluid over time. We discuss the implications of our findings for public health professionals and policy makers. Our methods are generally applicable to investigations where similar data are available….(More)”
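The “hubs” and “authorities” in the abstract are the standard vocabulary of Kleinberg’s HITS link-analysis algorithm, which scores the nodes of a directed network. A minimal sketch of how such scores are computed on a follower network in Python with networkx (the accounts and edges below are invented for illustration; they are not the paper’s data):

```python
# Hub/authority scoring of a toy Twitter follower network with HITS.
import networkx as nx

# A directed edge u -> v means "account u follows account v".
G = nx.DiGraph([
    ("patient_1", "diabetes_ngo"), ("patient_2", "diabetes_ngo"),
    ("patient_1", "health_blogger"), ("patient_2", "health_blogger"),
    ("journalist", "diabetes_ngo"), ("journalist", "pharma_co"),
])

hubs, authorities = nx.hits(G)  # Kleinberg's HITS algorithm

# Authorities are accounts that many good hubs point to (here, the
# NGO and the blogger); hubs are accounts that point to many good
# authorities (here, the patients and the journalist).
for node, score in sorted(authorities.items(), key=lambda kv: -kv[1]):
    print(f"{node:15s} authority={score:.3f} hub={hubs[node]:.3f}")
```

In the paper’s terms, the top authorities (bloggers, advocacy groups, NGOs, listed companies) held their positions across time, while the hub side of the ranking churned.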