Artificial Intelligence can streamline public comment for federal agencies


John Davis at the Hill: “…What became immediately clear to me was that — although not impossible to overcome — the lack of consistency and shared best practices across all federal agencies in accepting and reviewing public comments was a serious impediment. The promise of Natural Language Processing and cognitive computing to make the public comment process light years faster and more transparent becomes that much more difficult without a consensus among federal agencies on what type of data is collected – and how.

“There is a whole bunch of work we have to do around getting government to be more customer friendly and making it at least as easy to file your taxes as it is to order a pizza or buy an airline ticket,” President Obama recently said in an interview with WIRED. “Whether it’s encouraging people to vote or dislodging Big Data so that people can use it more easily, or getting their forms processed online more simply — there’s a huge amount of work to drag the federal government and state governments and local governments into the 21st century.”

…expanding the discussion around Artificial Intelligence and regulatory processes to include how the technology should be leveraged to ensure fairness and responsiveness in the very basic processes of rulemaking – in particular public notices and comments. These technologies could also enable us to consider not just public comments formally submitted to an agency, but the entire universe of statements made through social media posts, blogs, chat boards — and conceivably every other electronic channel of public communication.

Obviously, an anonymous comment on the Internet should not carry the same credibility as a formally submitted, personally signed statement, just as sworn testimony in court holds far greater weight than a grapevine rumor. But so much public discussion today occurs on Facebook pages, in Tweets, on news website comment sections, etc. Anonymous speech enjoys explicit protection under the Constitution, based on a justified expectation that certain sincere statements of sentiment might result in unfair retribution from the government.

Should we simply ignore the valuable insights about actual public sentiment on specific issues made possible through the power of Artificial Intelligence, which can ascertain meaning from an otherwise unfathomable ocean of relevant public conversations? With certain qualifications, I believe Artificial Intelligence, or AI, should absolutely be employed in the critical effort to gain insights from public comments – signed or anonymous.

“In the criminal justice system, some of the biggest concerns with Big Data are the lack of data and the lack of quality data,” the NSTC report authors state. “AI needs good data. If the data is incomplete or biased, AI can exacerbate problems of bias.” As a former federal criminal prosecutor and defense attorney, I am well familiar with the absolute necessity to weigh the relative value of various forms of evidence – or in this case, data…(More)

When the Algorithm Itself is a Racist: Diagnosing Ethical Harm in the Basic Components of Software


Paper by Christian Sandvig et al. in a Special Issue of the International Journal of Communication on Automation, Algorithms, and Politics: “Computer algorithms organize and select information across a wide range of applications and industries, from search results to social media. Abuses of power by Internet platforms have led to calls for algorithm transparency and regulation. Algorithms have a particularly problematic history of processing information about race. Yet some analysts have warned that foundational computer algorithms are not useful subjects for ethical or normative analysis due to complexity, secrecy, technical character, or generality. We respond by investigating what it is an analyst needs to know to determine whether the algorithm in a computer system is improper, unethical, or illegal in itself. We argue that an “algorithmic ethics” can analyze a particular published algorithm. We explain the importance of developing a practical algorithmic ethics that addresses virtues, consequences, and norms: We increasingly delegate authority to algorithms, and they are fast becoming obscure but important elements of social structure…. (More)”

A decentralized web would give power back to the people online


At TechCrunch: “…The original purpose of the web and internet, if you recall, was to build a common neural network which everyone can participate in equally for the betterment of humanity. Fortunately, there is an emerging movement to bring the web back to this vision, and it even involves some of the key figures from the birth of the web. It’s called the Decentralised Web, or Web 3.0, and it describes an emerging trend to build services on the internet which do not depend on any single “central” organisation to function.

So what happened to the initial dream of the web? Much of the altruism faded during the first dot-com bubble, as people realised that an easy way to create value on top of this neutral fabric was to build centralised services which gather, trap and monetise information.

Search Engines (e.g. Google), Social Networks (e.g. Facebook) and Chat Apps (e.g. WhatsApp) have grown huge by providing centralised services on the internet. For example, Facebook’s future vision of the internet is to provide access only to the subset of centralised services it endorses (Internet.org and Free Basics).

Meanwhile, it disables fundamental internet freedoms such as the ability to link to content via a URL (forcing you to share content only within Facebook) or the ability for search engines to index its contents (other than the Facebook search function).

The Decentralised Web envisions a future world where services such as communication, currency, publishing, social networking, search, and archiving are provided not by centralised services owned by single organisations, but by technologies which are powered by the people: their own community. Their users.

The core idea of decentralisation is that the operation of a service is not blindly trusted to any single omnipotent company. Instead, responsibility for the service is shared: perhaps by running across multiple federated servers, or perhaps running across client-side apps in an entirely “distributed” peer-to-peer model.

Even though the community may be “byzantine” and not have any reason to trust or depend on each other, the rules that describe the decentralised service’s behaviour are designed to force participants to act fairly in order to participate at all, relying heavily on cryptographic techniques such as Merkle trees and digital signatures to allow participants to hold each other accountable.
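The Merkle trees mentioned above are the basic tool for this kind of mutual accountability: a participant can prove a record is part of a shared log without anyone trusting a central operator. A minimal sketch (the leaf contents and log entries here are invented for illustration):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash the leaves, then pairwise-hash each level up to a single root.
    An odd node at the end of a level is carried up unchanged."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            nxt.append(h(level[i] + level[i + 1]) if i + 1 < len(level) else level[i])
        level = nxt
    return level[0]

def inclusion_proof(leaves, index):
    """Collect the sibling hashes needed to recompute the root from one leaf."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        sib = index ^ 1
        if sib < len(level):
            proof.append((sib < index, level[sib]))  # (is sibling on the left?, hash)
        nxt = []
        for i in range(0, len(level), 2):
            nxt.append(h(level[i] + level[i + 1]) if i + 1 < len(level) else level[i])
        level = nxt
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the root from a leaf and its sibling hashes."""
    node = h(leaf)
    for on_left, sib in proof:
        node = h(sib + node) if on_left else h(node + sib)
    return node == root

# Hypothetical shared ledger entries
leaves = [b"alice:+5", b"bob:-2", b"carol:+9"]
root = merkle_root(leaves)          # everyone agrees on this one hash
proof = inclusion_proof(leaves, 1)  # compact proof that bob's entry is in the log
print(verify(b"bob:-2", proof, root))  # True
```

Because every participant can check any entry against the agreed root, tampering with the log is detectable without a trusted central server, which is exactly the property decentralised services rely on.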

There are three fundamental areas that the Decentralised Web necessarily champions: privacy, data portability and security.

  • Privacy: Decentralisation forces an increased focus on data privacy. Data is distributed across the network and end-to-end encryption technologies are critical for ensuring that only authorized users can read and write. Access to the data itself is entirely controlled algorithmically by the network, as opposed to more centralized networks where typically the owner of that network has full access to data, facilitating customer profiling and ad targeting.
  • Data Portability: In a decentralized environment, users own their data and choose with whom they share this data. Moreover, they retain control of it when they leave a given service provider (assuming the service even has the concept of service providers). This is important. If I want to move from General Motors to BMW today, why should I not be able to take my driving records with me? The same applies to chat platform history or health records.
  • Security: Finally, we live in a world of increased security threats. In a centralized environment, the bigger the silo, the bigger the honeypot is to attract bad actors. Decentralized environments are safer by their general nature against being hacked, infiltrated, acquired, bankrupted or otherwise compromised, as they have been built to exist under public scrutiny from the outset….(More)”

Facebook, World Bank and OECD Link Up to Gather Data


Paul Hannon in the Wall Street Journal: “Social media potentially offers a cheaper and more timely way to survey firms and gauge the economy…Facebook has teamed up with the World Bank and the OECD to launch a new measure of business sentiment based on questioning companies that use their Facebook pages to connect with customers.

The three partners on Wednesday launched a new measure of business sentiment based on questioning companies that use their Facebook pages to connect with customers. Known as the Future of Business Survey, the report has been in testing since February and received responses to 15 queries from a total of 90,000 small and midsize firms across 22 countries.

Its first public release shows that those businesses are more optimistic about their prospects than other companies surveyed by more traditional means.

But the real interest for the three partners is the potential to drill down into the factors that affect the growth of small businesses, a process that until now has involved great expense and time, since it involves face-to-face interviews by polling professionals that are carried out over many months and are infrequently updated. “What I feel is appealing about this particular survey is that it’s potentially a more powerful tool for getting information more quickly and at a fraction of the cost,” said Augusto Lopez-Claros, director of the Global Indicators Group at the World Bank.

Even in developed countries with well funded and equipped statistics offices, timely information on very small businesses is hard to come by. In developing countries, that scarcity can be more acute. The ability to connect with business owners via Facebook or other social-media platforms could make it possible to gather such information, and acquire a more complete picture of what is happening in those economies.

The new approach to data gathering could even enable some smaller developing countries to skip the process of enlarging their statistics agencies. That is an opportunity Mr. Lopez-Claros compares to the advent of mobile telephones, which enabled many African countries to skip the construction of expensive fixed-line infrastructure and improve communications at a fraction of that cost….(More)

See also Entrepreneurship at a Glance 2016 (OECD).

Social Machines: The Coming Collision of Artificial Intelligence, Social Networking, and Humanity


Book by James Hendler and Alice Mulvehill: “Will your next doctor be a human being—or a machine? Will you have a choice? If you do, what should you know before making it?

This book introduces the reader to the pitfalls and promises of artificial intelligence in its modern incarnation and the growing trend of systems to “reach off the Web” into the real world. The convergence of AI, social networking, and modern computing is creating an historic inflection point in the partnership between human beings and machines with potentially profound impacts on the future not only of computing but of our world.

AI experts and researchers James Hendler and Alice Mulvehill explore the social implications of AI systems in the context of a close examination of the technologies that make them possible. The authors critically evaluate the utopian claims and dystopian counterclaims of prognosticators. Social Machines: The Coming Collision of Artificial Intelligence, Social Networking, and Humanity is your richly illustrated field guide to the future of your machine-mediated relationships with other human beings and with increasingly intelligent machines.

What you’ll learn

• What the concept of a social machine is and how the activities of non-programmers are contributing to machine intelligence

• How modern artificial intelligence technologies, such as Watson, are evolving and how they process knowledge from both carefully produced information (such as Wikipedia or journal articles) and from big data collections

• The fundamentals of neuromorphic computing

• The fundamentals of knowledge graph search and linked data as well as the basic technology concepts that underlie networking applications such as Facebook and Twitter

• How the change in attitudes towards cooperative work on the Web, especially in the younger demographic, is critical to the future of Web applications…(More)”

Twitter, UN Global Pulse announce data partnership


PressRelease: “Twitter and UN Global Pulse today announced a partnership that will provide the United Nations with access to Twitter’s data tools to support efforts to achieve the Sustainable Development Goals, which were adopted by world leaders last year.

Every day, people around the world send hundreds of millions of Tweets in dozens of languages. This public data contains real-time information on many issues including the cost of food, availability of jobs, access to health care, quality of education, and reports of natural disasters. This partnership will allow the development and humanitarian agencies of the UN to turn these social conversations into actionable information to aid communities around the globe.

“The Sustainable Development Goals are first and foremost about people, and Twitter’s unique data stream can help us truly take a real-time pulse on priorities and concerns — particularly in regions where social media use is common — to strengthen decision-making. Strong public-private partnerships like this show the vast potential of big data to serve the public good,” said Robert Kirkpatrick, Director of UN Global Pulse.

“We are incredibly proud to partner with the UN in support of the Sustainable Development Goals,” said Chris Moody, Twitter’s VP of Data Services. “Twitter data provides a live window into the public conversations that communities around the world are having, and we believe that the increased potential for research and innovation through this partnership will further the UN’s efforts to reach the Sustainable Development Goals.”

Organizations and business around the world currently use Twitter data in many meaningful ways, and this unique data source enables them to leverage public information at scale to better inform their policies and decisions. These partnerships enable innovative uses of Twitter data, while protecting the privacy and safety of Twitter users.

UN Global Pulse’s new collaboration with Twitter builds on existing R&D that has shown the power of social media for social impact, like measuring the impact of public health campaigns, tracking reports of rising food prices, or prioritizing needs after natural disasters….(More)”

Living in the World of Both/And


Essay by Adene Sacks & Heather McLeod Grant  in SSIR: “In 2011, New York Times data scientist Jake Porway wrote a blog post lamenting the fact that most data scientists spend their days creating apps to help users find restaurants, TV shows, or parking spots, rather than addressing complicated social issues like helping identify which teens are at risk of suicide or creating a poverty index of Africa using satellite data.

That post hit a nerve. Data scientists around the world began clamoring for opportunities to “do good with data.” Porway—at the center of this storm—began to convene these scientists and connect them to nonprofits via hackathon-style events called DataDives, designed to solve big social and environmental problems. There was so much interest, he eventually quit his day job at the Times and created the organization DataKind to steward this growing global network of data science do-gooders.

At the same time, in the same city, another movement was taking shape—#GivingTuesday, an annual global giving event fueled by social media. In just five years, #GivingTuesday has reshaped how nonprofits think about fundraising and how donors give. And yet, many don’t know that 92nd Street Y (92Y)—a 140-year-old Jewish community and cultural center in Manhattan, better known for its star-studded speaker series, summer camps, and water aerobics classes—launched it.

What do these two examples have in common? One started as a loose global network that engaged data scientists in solving problems, and then became an organization to help support the larger movement. The other started with a legacy organization, based at a single site, and catalyzed a global movement that has reshaped how we think about philanthropy. In both cases, the founding groups have incorporated the best of both organizations and networks.

Much has been written about the virtues of thinking and acting collectively to solve seemingly intractable challenges. Nonprofit leaders are being implored to put mission above brand, build networks not just programs, and prioritize collaboration over individual interests. And yet, these strategies are often in direct contradiction to the conventional wisdom of organization-building: differentiating your brand, developing unique expertise, and growing a loyal donor base.

A similar tension is emerging among network and movement leaders. These leaders spend their days steering the messy process required to connect, align, and channel the collective efforts of diverse stakeholders. It’s not always easy: Those searching to sustain movements often cite the lost momentum of the Occupy movement as a cautionary note. Increasingly, network leaders are looking at how to adapt the process, structure, and operational expertise more traditionally associated with organizations to their needs—but without co-opting or diminishing the energy and momentum of their self-organizing networks…

Welcome to the World of “Both/And”

Today’s social change leaders—be they from business, government, or nonprofits—must learn to straddle the leadership mindsets and practices of both networks and organizations, and know when to use which approach. Leaders like Porway, and Henry Timms and Asha Curran of 92Y can help show us the way.

How do these leaders work with the “both/and” mindset?

First, they understand and leverage the strengths of both organizations and networks—and anticipate their limitations. As Timms describes it, leaders need to be “bilingual” and embrace what he has called “new power.” Networks can be powerful generators of new talent or innovation around complex multi-sector challenges. It’s useful to take a network approach when innovating new ideas, mobilizing and engaging others in the work, or wanting to expand reach and scale quickly. However, networks can dissipate easily without specific “handrails,” or some structure to guide and support their work. This is where they need some help from the organizational mindset and approach.

On the flip side, organizations are good at creating centralized structures to deliver products or services, manage risk, oversee quality control, and coordinate concrete functions like communications or fundraising. However, often that efficiency and effectiveness can calcify over time, becoming a barrier to new ideas and growth opportunities. When organizational boundaries are too rigid, it is difficult to engage the outside world in ideating or mobilizing on an issue. This is when organizations need an infusion of the “network mindset.”

 

…(More)

Beware of the gaps in Big Data


Edd Gent at E&T: “When the municipal authority in charge of Boston, Massachusetts, was looking for a smarter way to find which roads it needed to repair, it hit on the idea of crowdsourcing the data. The authority released a mobile app called Street Bump in 2011 that employed an elegantly simple idea: use a smartphone’s accelerometer to detect jolts as cars go over potholes and look up the location using the Global Positioning System. But the approach ran into a pothole of its own. The system reported a disproportionate number of potholes in wealthier neighbourhoods. It turned out it was oversampling the younger, more affluent citizens who were digitally clued up enough to download and use the app in the first place. The city reacted quickly, but the incident shows how easy it is to develop a system that can handle large quantities of data but which, through its own design, is still unlikely to have enough data to work as planned.
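The jolt-detection idea itself is simple enough to sketch in a few lines. Street Bump’s actual algorithm and thresholds are not public, so the readings, threshold, and coordinates below are all illustrative assumptions:

```python
# Toy sketch of accelerometer-based pothole detection, in the spirit of
# Street Bump. All numbers here are assumed, not the app's real values.
GRAVITY = 9.81           # m/s^2, baseline vertical acceleration
JOLT_THRESHOLD = 3.0     # m/s^2 deviation that counts as a jolt (assumed)

def detect_pothole_candidates(samples, threshold=JOLT_THRESHOLD):
    """samples: list of (vertical_accel_m_s2, lat, lon) readings.
    Flags the GPS location of any reading whose vertical acceleration
    deviates sharply from gravity, i.e. a jolt."""
    candidates = []
    for accel, lat, lon in samples:
        if abs(accel - GRAVITY) > threshold:
            candidates.append((lat, lon))
    return candidates

# A hypothetical drive through Boston
drive = [
    (9.8, 42.3601, -71.0589),   # smooth road
    (14.2, 42.3612, -71.0571),  # sharp jolt -> likely pothole
    (9.9, 42.3620, -71.0560),   # smooth road
]
print(detect_pothole_candidates(drive))  # [(42.3612, -71.0571)]
```

Note that nothing in this logic is wrong, which is the article’s point: the bias comes entirely from who runs the app, not from how the jolts are detected.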

As we entrust more of our lives to big data analytics, automation problems like this could become increasingly common, with their errors difficult to spot after the fact. Systems that ‘feel like they work’ are where the trouble starts.

Harvard University professor Gary King, who is also founder of social media analytics company Crimson Hexagon, recalls a project that used social media to predict unemployment. The model was built by correlating US unemployment figures with the frequency that people used words like ‘jobs’, ‘unemployment’ and ‘classifieds’. A sudden spike convinced researchers they had predicted a big rise in joblessness, but it turned out Steve Jobs had died and their model was simply picking up posts with his name. “This was an example of really bad analytics and it’s even worse because it’s the kind of thing that feels like it should work and does work a little bit,” says King.
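The confound King describes is easy to reproduce: a naive, case-insensitive keyword counter cannot tell “jobs” (employment) from “Jobs” (the person). A minimal sketch, with made-up posts:

```python
# Illustration of the keyword-correlation pitfall: counting posts that
# mention employment-related words, the way the model described above did.
# The posts are invented for this example.
import re

KEYWORDS = {"jobs", "unemployment", "classifieds"}

def keyword_hits(posts):
    """Count posts containing any keyword, case-insensitively."""
    hits = 0
    for post in posts:
        words = {w.lower() for w in re.findall(r"[a-z']+", post, re.IGNORECASE)}
        if words & KEYWORDS:
            hits += 1
    return hits

october_posts = [
    "Still looking for jobs in my area, nothing yet",
    "RIP Steve Jobs, a true visionary",
    "Jobs changed how we think about technology",
]
# All three posts register as 'unemployment chatter', but two are about
# Steve Jobs -- the spike says nothing about joblessness.
print(keyword_hits(october_posts))  # 3
```

Disambiguating would require looking at context (named-entity recognition, or at least neighbouring words) rather than raw term frequency, which is precisely the extra analytic care King is arguing for.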

Big data can shed light on areas with historic information deficits, and systems that seem to automatically highlight the best course of action can be seductive for executives and officials. “In the vacuum of no decision any decision is attractive,” says Jim Adler, head of data at Toyota Research Institute in Palo Alto. “Policymakers will say, ‘there’s a decision here let’s take it’, without really looking at what led to it. Was the data trustworthy, clean?”…(More)”

Infostorms. Why do we ‘like’? Explaining individual behavior on the social net.


Book by Vincent F. Hendricks and Pelle G. Hansen: “With points of departure in philosophy, logic, social psychology, economics, and choice and game theory, Infostorms shows how information may be used to improve the quality of personal decision and group thinking but also warns against the informational pitfalls which modern information technology may amplify: From science to reality culture and what it really is, that makes you buy a book like this.

The information society is upon us. New technologies have given us back-pocket libraries, online discussion forums, blogs, crowd-based opinion aggregators, social media and breaking news wherever, whenever. But are we more enlightened and rational because of it?

Infostorms provides the nuts and bolts of how irrational group behaviour may get amplified by social media and information technology. If we could be collectively dense before, now we can do it at light speed and with potentially global reach. That’s how things go viral, that is how cyberbullying, rude comments online, opinion bubbles, status bubbles, political polarisation and a host of other everyday unpleasantries start. Infostorms will give the story of the mechanics of these phenomena. This will help you to avoid them if you want or learn to start them if you must. It will allow you to stay sane in an insane world of information….(More)”

Artificial intelligence is hard to see


Kate Crawford and Meredith Whittaker on “Why we urgently need to measure AI’s societal impacts“: “How will artificial intelligence systems change the way we live? This is a tough question: on one hand, AI tools are producing compelling advances in complex tasks, with dramatic improvements in energy consumption, audio processing, and leukemia detection. There is extraordinary potential to do much more in the future. On the other hand, AI systems are already making problematic judgements that are producing significant social, cultural, and economic impacts in people’s everyday lives.

AI and decision-support systems are embedded in a wide array of social institutions, from influencing who is released from jail to shaping the news we see. For example, Facebook’s automated content editing system recently censored the Pulitzer-prize winning image of a nine-year old girl fleeing napalm bombs during the Vietnam War. The girl is naked; to an image processing algorithm, this might appear as a simple violation of the policy against child nudity. But to human eyes, Nick Ut’s photograph, “The Terror of War”, means much more: it is an iconic portrait of the indiscriminate horror of conflict, and it has an assured place in the history of photography and international politics. The removal of the image caused an international outcry before Facebook backed down and restored the image. “What they do by removing such images, no matter what good intentions, is to redact our shared history,” said the Prime Minister of Norway, Erna Solberg.

It’s easy to forget that these high-profile instances are actually the easy cases. As Tarleton Gillespie has observed, content reviews of Facebook images are occurring thousands of times per day, and rarely is there a Pulitzer prize to help determine lasting significance. Some of these reviews include human teams, and some do not. In this case, there is also considerable ambiguity about where the automated process ended and the human review began: which is part of the problem. And Facebook is just one player in a complex ecology of algorithmically-supplemented determinations with little external monitoring to see how decisions are made or what the effects might be.

The ‘Terror of War’ case, then, is the tip of the iceberg: a rare visible instance that points to a much larger mass of unseen automated and semi-automated decisions. The concern is that most of these ‘weak AI’ systems are making decisions that don’t garner such attention. They are embedded at the back-end of systems, working at the seams of multiple data sets, with no consumer-facing interface. Their operations are mainly unknown, unseen, and with impacts that take enormous effort to detect.

Sometimes AI techniques get it right, and sometimes they get it wrong. Only rarely will those errors be seen by the public: like the Vietnam war photograph, or when an AI ‘beauty contest’ held this month was called out as racist for selecting white women as the winners. We can dismiss this latter case as a problem of training data — they simply need a more diverse selection of faces to train their algorithm with, and now that 600,000 people have sent in their selfies, they certainly have better means to do so. But while a beauty contest might seem like a bad joke, or just a really good trick to get people to give up their photos to build a large training data set, it points to a much bigger set of problems. AI and decision-support systems are reaching into everyday life: determining who will be on a predictive policing ‘heat list’, who will be hired or promoted, which students will be recruited to universities, or seeking to predict at birth who will become a criminal by the age of 18. So the stakes are high…(More)”