Bangalore Taps Tech Crowdsourcing to Fix ‘Unruly’ Gridlock


Saritha Rai at Bloomberg Technology: “In Bangalore, tech giants and startups typically spend their days fiercely battling each other for customers. Now they are turning their attention to a common enemy: the Indian city’s infernal traffic congestion.

Cross-town commutes that can take hours have inspired the Gridlock Hackathon, a contest initiated by Flipkart Online Services Pvt. for technology workers to find solutions to the snarled roads that cost the economy billions of dollars. While the prize totals a mere $5,500, it’s attracting teams from global giants Microsoft Corp., Google and Amazon.com Inc. to local startups including Ola.

The online contest is crowdsourcing solutions for Bangalore, a city of more than 10 million, as it grapples with inadequate roads, unprecedented growth and overpopulation. The technology industry began booming there decades ago, and with its base of talent the city continues to attract companies. Just last month, Intel Corp. said it would invest $178 million and add more workers to expand its R&D operations.

The ideas put forward at the hackathon range from using artificial intelligence and big data on traffic flows to true moonshots, such as flying cars.

The gridlock remains a problem for a city dependent on its technology industry and seeking to attract new investment…(More)”.

Lessons from Airbnb and Uber to Open Government as a Platform


Interview by Marquis Cabrera with Sangeet Paul Choudary: “…Platform companies have a very strong core built around data, machine learning, and a central infrastructure. But they rapidly innovate around it to try and test new things in the market and that helps them open themselves for further innovation in the ecosystem. Governments can learn to become more modular and more agile, the way platform companies are. Modularity in architecture is a very fundamental part of being a platform company; both in terms of your organizational architecture, as well as your business model architecture.

The second thing that governments can learn from platform companies is that successful platforms are created with intent. They are not created by just opening out what you have available. If you look at the current approach to applying platform thinking in government, a common approach is simply to take data and open it out to the world. However, successful platform companies first create a shaping strategy that crafts a direction and vision for the ecosystem in terms of what participants can achieve by being on the platform. They then provision the right tools and services that serve that vision and enable success for the ecosystem. And only then do they open up their infrastructure. It’s really important that you craft the right shaping strategy and use it to define the right tools and services before you start pursuing a platform implementation.

In my work with governments, I regularly find myself stressing the importance of thinking as a market maker rather than as a service provider. Governments have always been market makers but when it comes to technology, they often take the service provider approach.

In your book, you used San Francisco City Government and Data.gov as examples of infusing platform thinking in government. But what are some global examples of governments, countries infusing platform thinking around the world?

One of the best examples is from my home country, Singapore, which has been at the forefront of converting the nation into a platform. It is now pursuing platform strategy both at the national level, by building a smart-nation platform, and within verticals. If you look particularly at mobility and transportation, it has worked to create a central core platform and then build greater autonomy around how mobility and transportation work in the country. Dubai, South Korea, and Barcelona are other countries and cities that have applied the concept of platforms very well to create smart-nation platforms. India is another example, applying platform thinking through the creation of the India Stack, though the implementation could benefit from better platform governance structures and more open regulation around participation….(More)”.

Volunteers teach AI to spot slavery sites from satellite images


This data will then be used to train machine learning algorithms to automatically recognise brick kilns in satellite imagery. If computers can pinpoint the location of such possible slavery sites, then the coordinates could be passed to local charities to investigate, says Kevin Bales, the project leader at the University of Nottingham, UK.

South Asian brick kilns are notorious as modern-day slavery sites. There are an estimated 5 million people working in brick kilns in South Asia, and of those nearly 70 per cent are thought to be working there under duress – often to pay off financial debts.

However, no one is quite sure how many of these kilns there are in the so-called “Brick Belt”, a region that stretches across parts of Pakistan, India and Nepal. Some estimates put the figure at 20,000, but it may be as high as 50,000.

Bales is hoping that his machine learning approach will produce a more accurate figure and help organisations on the ground know where to direct their anti-slavery efforts.

It’s great to have a tool for identifying possible forced labour sites, says Sasha Jesperson at St Mary’s University in London. But it is just a start – to really find out how many people are being enslaved in the brick kiln industry, investigators still need to visit every site and work out exactly what’s going on there, she says….

So far, volunteers have identified over 4000 potential slavery sites across 400 satellite images taken via Google Earth. Once these have been checked several times by volunteers, Bales plans to use these images to teach the machine learning algorithm what kilns look like, so that it can learn to recognise them in images automatically….(More)”.
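The pipeline the excerpt describes — crowdsourced labels on satellite tiles used to teach an algorithm what kilns look like, then scanning unseen imagery — can be sketched in miniature. Everything below is hypothetical and illustrative: the two-number "features" standing in for image tiles and the nearest-centroid classifier are placeholders, not the project's actual model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for labeled satellite tiles: each tile is reduced to
# two features (say, mean brightness and a red/brown color ratio). Kilns are
# assumed to look systematically different from background terrain.
kiln_tiles = rng.normal(loc=[0.8, 0.6], scale=0.05, size=(200, 2))
background_tiles = rng.normal(loc=[0.3, 0.2], scale=0.05, size=(200, 2))

X = np.vstack([kiln_tiles, background_tiles])
y = np.array([1] * 200 + [0] * 200)  # 1 = kiln, 0 = background

# A minimal nearest-centroid classifier: label a new tile by whichever
# class centroid its feature vector is closer to.
centroids = {label: X[y == label].mean(axis=0) for label in (0, 1)}

def classify(tile):
    dists = {label: np.linalg.norm(tile - c) for label, c in centroids.items()}
    return min(dists, key=dists.get)

# Scan unseen tiles; kiln-like ones would be flagged for human investigators.
new_tiles = rng.normal(loc=[0.8, 0.6], scale=0.05, size=(10, 2))
flags = [classify(t) for t in new_tiles]
print(sum(flags), "of", len(flags), "tiles flagged as possible kilns")
```

The real system would of course learn from pixel data with far richer models, but the shape of the workflow — human-verified labels in, automatic flags out, coordinates passed to investigators — is the same.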

AI and the Law: Setting the Stage


Urs Gasser: “Lawmakers and regulators need to look at AI not as a homogenous technology, but a set of techniques and methods that will be deployed in specific and increasingly diversified applications. There is currently no generally agreed-upon definition of AI. What is important to understand from a technical perspective is that AI is not a single, homogenous technology, but a rich set of subdisciplines, methods, and tools that bring together areas such as speech recognition, computer vision, machine translation, reasoning, attention and memory, robotics and control, etc. ….

Given the breadth and scope of application, AI-based technologies are expected to trigger a myriad of legal and regulatory issues not only at the intersections of data and algorithms, but also of infrastructures and humans. …

When considering (or anticipating) possible responses by the law vis-à-vis AI innovation, it might be helpful to differentiate between application-specific and cross-cutting legal and regulatory issues. …

Information asymmetries and high degrees of uncertainty pose particular difficulty to the design of appropriate legal and regulatory responses to AI innovations — and require learning systems. AI-based applications — which are typically perceived as “black boxes” — affect a significant number of people, yet there are nonetheless relatively few people who develop and understand AI-based technologies. ….Approaches such as regulation 2.0, which relies on dynamic, real-time, and data-driven accountability models, might provide interesting starting points.

The responses to a variety of legal and regulatory issues across different areas of distributed applications will likely result in a complex set of sector-specific norms, which are likely to vary across jurisdictions….

Law and regulation may constrain behavior yet also act as enablers and levelers — and are powerful tools as we aim for the development of AI for social good. …

Law is one important approach to the governance of AI-based technologies. But lawmakers and regulators have to consider the full potential of available instruments in the governance toolbox. ….

In a world of advanced AI technologies and new governance approaches towards them, the law, the rule of law, and human rights remain critical bodies of norms. …

As AI applies to the legal system itself, however, the rule of law might have to be re-imagined and the law re-coded in the longer run….(More)”.

A.I. experiments (with Google)


About: “With all the exciting A.I. stuff happening, there are lots of people eager to start tinkering with machine learning technology. A.I. Experiments is a showcase for simple experiments that let anyone play with this technology in hands-on ways, through pictures, drawings, language, music, and more.

Submit your own

We want to make it easier for any coder – whether you have a machine learning background or not – to create your own experiments. This site includes open-source code and resources to help you get started. If you make something you’d like to share, we’d love to see it and possibly add it to the showcase….(More)”

Big Data: A Twenty-First Century Arms Race


Report by Atlantic Council and Thomson Reuters: “We are living in a world awash in data. Accelerated interconnectivity, driven by the proliferation of internet-connected devices, has led to an explosion of data—big data. A race is now underway to develop new technologies and implement innovative methods that can handle the volume, variety, velocity, and veracity of big data and apply it smartly to provide decisive advantage and help solve major challenges facing companies and governments.

For policy makers in government, big data and associated technologies, like machine learning and artificial intelligence, have the potential to drastically improve their decision-making capabilities. How governments use big data may be a key factor in improved economic performance and national security. This publication looks at how big data can maximize the efficiency and effectiveness of government and business, while minimizing modern risks. Five authors explore big data across three cross-cutting issues: security, finance, and law.

Chapter 1, “The Conflict Between Protecting Privacy and Securing Nations,” Els de Busser
Chapter 2, “Big Data: Exposing the Risks from Within,” Erica Briscoe
Chapter 3, “Big Data: The Latest Tool in Fighting Crime,” Benjamin Dean, Fellow
Chapter 4, “Big Data: Tackling Illicit Financial Flows,” Tatiana Tropina
Chapter 5, “Big Data: Mitigating Financial Crime Risk,” Miren Aparicio….Read the Publication (PDF)

Teaching machines to understand – and summarize – text


In The Conversation: “We humans are swamped with text. It’s not just news and other timely information: Regular people are drowning in legal documents. The problem is so bad we mostly ignore it. Every time a person uses a store’s loyalty rewards card or connects to an online service, his or her activities are governed by the equivalent of hundreds of pages of legalese. Most people pay no attention to these massive documents, often labeled “terms of service,” “user agreement” or “privacy policy.”

These are just part of a much wider societal problem of information overload. There is so much data stored – exabytes of it, as much stored as has ever been spoken by people in all of human history – that it’s humanly impossible to read and interpret everything. Often, we narrow down our pool of information by choosing particular topics or issues to pay attention to. But it’s important to actually know the meaning and contents of the legal documents that govern how our data is stored and who can see it.

As computer science researchers, we are working on ways artificial intelligence algorithms could digest these massive texts and extract their meaning, presenting it in terms regular people can understand….

Examining privacy policies

A modern internet-enabled life more or less requires trusting for-profit companies with private information (like physical and email addresses, credit card numbers and bank account details) and personal data (photos and videos, email messages and location information).

These companies’ cloud-based systems typically keep multiple copies of users’ data as part of backup plans to prevent service outages. That means there are more potential targets – each data center must be securely protected both physically and electronically. Of course, internet companies recognize customers’ concerns and employ security teams to protect users’ data. But the specific and detailed legal obligations they undertake to do that are found in their impenetrable privacy policies. No regular human – and perhaps even no single attorney – can truly understand them.

In our study, we ask computers to summarize the terms and conditions regular users say they agree to when they click “Accept” or “Agree” buttons for online services. We downloaded the publicly available privacy policies of various internet companies, including Amazon AWS, Facebook, Google, HP, Oracle, PayPal, Salesforce, Snapchat, Twitter and WhatsApp….

Our software examines the text and uses information extraction techniques to identify key information specifying the legal rights, obligations and prohibitions identified in the document. It also uses linguistic analysis to identify whether each rule applies to the service provider, the user or a third-party entity, such as advertisers and marketing companies. Then it presents that information in clear, direct, human-readable statements….(More)”
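As a toy illustration of the kind of extraction the passage describes — not the researchers' actual system — one can tag a policy sentence as stating a right, obligation, or prohibition from its modal verbs, and guess which party it applies to from simple keyword cues. The cue lists below are invented for the sketch; a real system would use trained linguistic models, not keyword matching.

```python
import re

# Hypothetical cue patterns for rule types; checked in order so that
# "must not" is caught as a prohibition before "must" matches as an obligation.
MODALS = {
    "prohibition": re.compile(r"\b(may not|must not|will not|shall not)\b", re.I),
    "obligation": re.compile(r"\b(must|shall|will|agree to)\b", re.I),
    "right": re.compile(r"\b(may|can|is permitted to)\b", re.I),
}

# Hypothetical cue words for the party a rule applies to.
PARTIES = {
    "user": ("you", "user", "users"),
    "provider": ("we", "company", "service"),
    "third party": ("advertiser", "partner", "third party"),
}

def extract_rule(sentence):
    """Classify one policy sentence into (party, rule type)."""
    rule_type = next(
        (name for name, pat in MODALS.items() if pat.search(sentence)), "other"
    )
    lowered = sentence.lower()
    party = next(
        (p for p, cues in PARTIES.items() if any(c in lowered for c in cues)),
        "unknown",
    )
    return party, rule_type

print(extract_rule("You must not share your password."))   # → ('user', 'prohibition')
print(extract_rule("We may share data with advertisers."))  # → ('provider', 'right')
```

Presenting each extracted (party, rule type, sentence) triple as a plain-English statement is then a templating exercise; the hard part, as the authors note, is the linguistic analysis itself.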

Artificial intelligence can predict which congressional bills will pass


Other algorithms have predicted whether a bill will survive a congressional committee, or whether the Senate or House of Representatives will vote to approve it—all with varying degrees of success. But John Nay, a computer scientist and co-founder of Skopos Labs, a Nashville-based AI company focused on studying policymaking, wanted to take things one step further. He wanted to predict whether an introduced bill would make it all the way through both chambers—and precisely what its chances were.

Nay started with data on the 103rd Congress (1993–1995) through the 113th Congress (2013–2015), downloaded from a legislation-tracking website called GovTrack. This included the full text of the bills, plus a set of variables, including the number of co-sponsors, the month the bill was introduced, and whether the sponsor was in the majority party of their chamber. Using data on Congresses 103 through 106, he trained machine-learning algorithms—programs that find patterns on their own—to associate bills’ text and contextual variables with their outcomes. He then predicted how each bill would do in the 107th Congress. Then, he trained his algorithms on Congresses 103 through 107 to predict the 108th Congress, and so on.

Nay’s most complex machine-learning algorithm combined several parts. The first part analyzed the language in the bill. It interpreted the meaning of words by how they were embedded in surrounding words. For example, it might see the phrase “obtain a loan for education” and assume “loan” has something to do with “obtain” and “education.” A word’s meaning was then represented as a string of numbers describing its relation to other words. The algorithm combined these numbers to assign each sentence a meaning. Then, it found links between the meanings of sentences and the success of bills that contained them. Three other algorithms found connections between contextual data and bill success. Finally, an umbrella algorithm used the results from those four algorithms to predict what would happen. His program scored about 65% better than simply guessing that a bill wouldn’t pass, Nay reported last month in PLOS ONE…(More).
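The architecture described — word vectors averaged into a sentence meaning, combined with contextual sub-models under an umbrella model — can be sketched roughly as follows. The embeddings, weights, and coefficient values here are synthetic placeholders invented for illustration, not Nay's actual model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pretrained word embeddings: each word maps to a vector whose
# position reflects the words it tends to appear near.
vocab = ["obtain", "loan", "education", "tax", "defense", "repeal"]
embeddings = {w: rng.normal(size=4) for w in vocab}

def sentence_vector(sentence):
    """Average the embeddings of known words to represent sentence meaning."""
    vecs = [embeddings[w] for w in sentence.lower().split() if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(4)

def text_model_score(bill_text, weights):
    """Text sub-model: a linear score over the averaged sentence vector."""
    return float(sentence_vector(bill_text) @ weights)

def context_model_score(n_cosponsors, sponsor_in_majority):
    """Context sub-model: a toy score from contextual variables."""
    return 0.02 * n_cosponsors + (0.5 if sponsor_in_majority else -0.5)

def umbrella_probability(bill_text, n_cosponsors, sponsor_in_majority, weights):
    """Umbrella model: combine sub-model scores into a passage probability."""
    combined = text_model_score(bill_text, weights) + context_model_score(
        n_cosponsors, sponsor_in_majority
    )
    return 1.0 / (1.0 + np.exp(-combined))  # squash to a probability

# In the real system these weights would be learned from, say, Congresses
# 103 through 107 before predicting the 108th; here they are random.
weights = rng.normal(size=4)
p = umbrella_probability("obtain a loan for education", 40, True, weights)
print(f"predicted probability of passage: {p:.2f}")
```

The rolling-window evaluation in the article follows naturally: refit the weights on each expanding block of Congresses, then score the bills of the next one.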

AI software created for drones monitors wild animals and poachers


Springwise: “Artificial intelligence software installed in drones is to be used by US tech company Neurala to help protect endangered species from poachers. Working with the Lindbergh Foundation, Neurala is currently helping operations in South Africa, Malawi and Zimbabwe, and has had requests from Botswana, Mozambique and Zambia for assistance with combating poaching.

The software is designed to monitor video as it is streamed back to researchers from unmanned drones that can fly for up to five hours, identifying animals, vehicles and poachers in real time without any human input. It can then alert rangers via the mobile command center if anything out of the ordinary is detected. The software can analyze regular or infrared footage, and therefore works with video taken day or night.

The Lindbergh Foundation will be deploying the technology as part of Operation Air Shepherd, which is aimed at protecting elephants and rhinos in Southern Africa from poachers. According to the Foundation, elephants and rhinos are at risk of extinction in just 10 years if current poaching rates continue. The operation has logged 5,000 hours of drone flight time over the course of 4,000 missions to date.

The use of drones within business models is proving popular, with recent innovations including a drone painting system that created crowdfunded murals and two Swiss hospitals that used a drone to deliver lab samples between them….(More)”.

Big Data, Data Science, and Civil Rights


Paper by Solon Barocas, Elizabeth Bradley, Vasant Honavar, and Foster Provost:  “Advances in data analytics bring with them civil rights implications. Data-driven and algorithmic decision making increasingly determine how businesses target advertisements to consumers, how police departments monitor individuals or groups, how banks decide who gets a loan and who does not, how employers hire, how colleges and universities make admissions and financial aid decisions, and much more. As data-driven decisions increasingly affect every corner of our lives, there is an urgent need to ensure they do not become instruments of discrimination, barriers to equality, threats to social justice, and sources of unfairness. In this paper, we argue for a concrete research agenda aimed at addressing these concerns, comprising five areas of emphasis: (i) Determining if models and modeling procedures exhibit objectionable bias; (ii) Building awareness of fairness into machine learning methods; (iii) Improving the transparency and control of data- and model-driven decision making; (iv) Looking beyond the algorithm(s) for sources of bias and unfairness—in the myriad human decisions made during the problem formulation and modeling process; and (v) Supporting the cross-disciplinary scholarship necessary to do all of that well…(More)”.