Identifying Healthcare Fraud with Open Data


Paper by Xuan Zhang et al.: “Health care fraud is a serious problem that impacts every patient and consumer. This fraudulent behavior causes excessive financial losses every year and causes significant patient harm. Healthcare fraud includes health insurance fraud, fraudulent billing of insurers for services not provided, exaggeration of medical services, and more. Identifying healthcare fraud thus becomes an urgent task to avoid the abuse and waste of public funds. Existing methods in this research field usually use classified data from governments, which greatly compromises the generalizability and scope of application. This paper introduces a methodology to use publicly available data sources to identify potentially fraudulent behavior among physicians. The research involved data pairing of multiple datasets, selection of useful features, comparisons of classification models, and analysis of useful predictors. Our performance evaluation results clearly demonstrate the efficacy of the proposed method….(More)”.
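The pipeline the abstract describes (pairing public datasets on a shared identifier, deriving features, then classifying providers) can be sketched in miniature. The field names, provider IDs and the naive flagging rule below are illustrative assumptions, not the paper's actual data, features or models:

```python
# Illustrative sketch: pair two public datasets on a shared provider ID,
# derive features, and score providers. All fields and thresholds are
# hypothetical stand-ins for the paper's real feature selection and
# classifier comparison.

payments = [  # e.g. rows from a public claims/payments dataset
    {"npi": "100", "claims": 5200, "avg_charge": 900.0},
    {"npi": "101", "claims": 130,  "avg_charge": 120.0},
]
exclusions = {"100"}  # e.g. a public exclusion list, used as a label source

def pair_and_label(payments, exclusions):
    """Join the datasets on the shared provider ID and attach a label."""
    return [dict(row, excluded=row["npi"] in exclusions) for row in payments]

def features(row):
    """Toy feature vector: claim volume and average charge."""
    return (row["claims"], row["avg_charge"])

def flag(row, claims_cut=1000, charge_cut=500.0):
    """Naive stand-in for a trained classifier: flag high-volume, high-charge providers."""
    claims, avg_charge = features(row)
    return claims > claims_cut and avg_charge > charge_cut

labelled = pair_and_label(payments, exclusions)
flags = {row["npi"]: flag(row) for row in labelled}
print(flags)  # provider 100 is flagged, 101 is not
```

In the paper's actual setting, the hand-written `flag` rule would be replaced by the trained classifiers being compared, with the exclusion list supplying training labels.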

Open innovation and the evaluation of internet-enabled public services in smart cities


Krassimira Paskaleva and Ian Cooper in Technovation: This article is focused on public service innovation from an innovation management perspective. It presents research experience gained from a European project for managing social and technological innovation in the production and evaluation of citizen-centred internet-enabled services in the public sector.

It is based on six urban pilot initiatives, which sought to operationalise a new approach to co-producing and co-evaluating civic services in smart cities – commonly referred to as open innovation for smart city services. Research suggests that the evidence base underpinning this approach is not sufficiently robust to support claims being made about its effectiveness.

Instead, evaluation research of citizen-centred internet-enabled urban services is in its infancy, and there are no tested methods or tools in the literature for supporting this approach.

The paper reports on the development and trialling of a novel Co-evaluation Framework, indicators and reporting categories, used to support the co-production of smart city services in an EU-funded project. Our point of departure is that innovation of services is a sub-set of innovation management that requires effective integration of technological with social innovation, supported by the right skills and capacities. The main skill sets needed for effective co-evaluation of open innovation services are the integration of stakeholder management with evaluation capacities.”

Big Data: the End of the Scientific Method?


Paper by S. Succi and P.V. Coveney at arXiv: “We argue that the boldest claims of Big Data are in need of revision and toning-down, in view of a few basic lessons learned from the science of complex systems. We point out that, once the most extravagant claims of Big Data are properly discarded, a synergistic merging of Big Data with big theory offers considerable potential to spawn a new scientific paradigm capable of overcoming some of the major barriers confronted by the modern scientific method originating with Galileo. These obstacles are due to the presence of nonlinearity, nonlocality and hyperdimensions which one encounters frequently in multiscale modelling….(More)”.

We Need Transparency in Algorithms, But Too Much Can Backfire


Kartik Hosanagar and Vivian Jair at Harvard Business Review: “In 2013, Stanford professor Clifford Nass faced a student revolt. Nass’s students claimed that those in one section of his technology interface course received higher grades on the final exam than counterparts in another. Unfortunately, they were right: two different teaching assistants had graded the two different sections’ exams, and one had been more lenient than the other. Students with similar answers had ended up with different grades.

Nass, a computer scientist, recognized the unfairness and created a technical fix: a simple statistical model to adjust scores, where students got a certain percentage boost on their final mark when graded by a TA known to give grades that percentage lower than average. In the spirit of openness, Nass sent out emails to the class with a full explanation of his algorithm. Further complaints poured in, some even angrier than before. Where had he gone wrong?…
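The fix Nass describes can be read as a one-line adjustment: boost each mark by the percentage gap between a strict TA's section average and the overall average. The formula below is inferred from the article's wording, not taken from Nass's actual model:

```python
# Sketch of the adjustment the article describes: if a TA's section average
# sits some percentage below the overall average, marks from that section get
# the same percentage boost. The exact formula is an inference, not Nass's code.

def section_adjustment(section_avg, overall_avg):
    """Percentage by which a TA grades below the overall average (0 if above)."""
    gap = (overall_avg - section_avg) / overall_avg
    return max(gap, 0.0)

def adjust(mark, section_avg, overall_avg):
    """Boost a student's mark by the section's strictness gap."""
    return mark * (1 + section_adjustment(section_avg, overall_avg))

# A strict TA's section averaged 72 against an overall average of 80
# (a 10% gap), so a 70 in that section becomes 77.
print(round(adjust(70, 72, 80), 2))

# A lenient section (average above overall) gets no boost.
print(adjust(70, 85, 80))
```

The statistical fix is sound; as the article goes on to show, it was the full disclosure of the mechanism, not the mechanism itself, that provoked the angrier complaints.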

Kizilcec had in fact tested three levels of transparency: low and medium but also high, where the students got not only a paragraph explaining the grading process but also their raw peer-graded scores and how these were each precisely adjusted by the algorithm to get to a final grade. And this is where the results got more interesting. In the experiment, while medium transparency increased trust significantly, high transparency eroded it completely, to the point where trust levels were either equal to or lower than among students experiencing low transparency.

Making Modern AI Transparent: A Fool’s Errand?

What are businesses to take home from this experiment? It suggests that technical transparency – revealing the source code, inputs, and outputs of the algorithm – can build trust in many situations. But most algorithms in the world today are created and managed by for-profit companies, and many businesses regard their algorithms as highly valuable forms of intellectual property that must remain in a “black box.” Some lawmakers have proposed a compromise, suggesting that the source code be revealed to regulators or auditors in the event of a serious problem, and this adjudicator will assure consumers that the process is fair.

This approach merely shifts the burden of belief from the algorithm itself to the regulators. This may be a palatable solution in many arenas: for example, few of us fully understand financial markets, so we trust the SEC to take on oversight. But in a world where decisions large and small, personal and societal, are being handed over to algorithms, this becomes less acceptable.

Another problem with technical transparency is that it makes algorithms vulnerable to gaming. If an instructor releases the complete source code for an algorithm grading student essays, it becomes easy for students to exploit loopholes in the code: maybe, for example, the algorithm seeks evidence that the students have done research by looking for phrases such as “according to published research.” A student might then deliberately use this language at the start of every paragraph in her essay.
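The gaming risk is easy to demonstrate with a toy grader. The phrase-matching rule below is the article's own hypothetical; the scoring weights are assumptions:

```python
# Toy illustration of the gaming problem: a grader that, as in the article's
# hypothetical, awards research credit for a stock phrase. Once the source is
# public, pasting the phrase inflates the score without any real research.

RESEARCH_PHRASE = "according to published research"

def research_score(essay, per_hit=2, cap=10):
    """Award points per occurrence of the tell-tale phrase, up to a cap."""
    hits = essay.lower().count(RESEARCH_PHRASE)
    return min(hits * per_hit, cap)

honest = "Smith (2017) finds that transparency can erode trust."
gamed = " ".join(["According to published research, this is true."] * 5)

print(research_score(honest))  # 0: a real citation earns nothing here
print(research_score(gamed))   # 10: five pasted phrases max out the score
```

Any grader built on fixed, visible surface features invites exactly this exploit, which is why full source disclosure and robustness pull in opposite directions.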

But the biggest problem is that modern AI is making source code – transparent or not – less relevant compared with other factors in algorithmic functioning. Specifically, machine learning algorithms – and deep learning algorithms in particular – are usually built on just a few hundred lines of code. The algorithm’s logic is mostly learned from training data and is rarely reflected in its source code. Which is to say, some of today’s best-performing algorithms are often the most opaque. High transparency might involve getting our heads around reams and reams of data – and then still only being able to guess at what lessons the algorithm has learned from it.

This is where Kizilcec’s work becomes relevant – a way to embrace rather than despair over deep learning’s impenetrability. His work shows that users will not trust black box models, but they don’t need – or even want – extremely high levels of transparency. That means responsible companies need not fret over what percentage of source code to reveal, or how to help users “read” massive datasets. Instead, they should work to provide basic insights on the factors driving algorithmic decisions….(More)”

What top technologies should the next generation know how to use?


Lottie Waters at Devex: “Technology provides some great opportunities for global development, and a promising future. But for the next generation of professionals to succeed, it’s vital they stay up to date with the latest tech, innovations, and tools.

In a recent report produced by Devex in collaboration with the United States Agency for International Development and DAI, some 86 percent of survey respondents believe the technology, skills, and approaches development professionals will be using in 10 years’ time will be significantly different to today’s.

In fact, “technology for development” is regarded as the sector that will see the most development progress, but is also cited as the one that will see the biggest changes in skills required, according to the survey.

“As different technologies develop, new possibilities will open up that we may not even be aware of yet. These opportunities will bring new people into the development sector and require those in it to be more agile in adapting technologies to meet development challenges,” said one survey respondent.

While “blockchain,” “artificial intelligence,” and “drones” may be the current buzzwords surrounding tech in global development, geographical information systems, or GIS, and big data are actually the top technologies respondents believe the next generation of development professionals should learn how to utilize.

So, how are these technologies currently being used in development, how might this change in the near future, and what will their impact be in the next 10 years? Devex spoke with experts in the field who are already integrating these technologies into their work to find out….(More)”

How games can help craft better policy


Shrabonti Bagchi at LiveMint: “I have never seen economists having fun!” Anantha K. Duraiappah, director of Unesco-MGIEP (Mahatma Gandhi Institute of Education for Peace and Sustainable Development), was heard exclaiming during a recent conference. The academics in question were a group of environmental economists at an Indian Society for Ecological Economics conference in Thrissur, Kerala, and they were playing a game called Cantor’s World, in which each player assumes the role of the supreme leader of a country and gets to decide the fate of his or her nation.

Well, it’s not quite as simple as that (this is not Settlers of Catan!). Players have to take decisions on long-term goals like education and industrialization based on data such as GDP, produced capital, human capital, and natural resources, while adhering to the UN’s sustainable development goals. The game is probably the most accessible and enjoyable way of seeing how long-term policy decisions shape the future of countries.

That’s what Fields Of View does. The Bengaluru-based non-profit creates games, simulations and learning tools for the better understanding of policy and its impact. Essentially, their work is to make sure economists like the ones at the Thrissur conference actually have some fun while thrashing out crucial issues of public policy.

A screen grab from ‘Cantor’s World’.

Can policymaking be made more relevant to the lives of people affected by it? Can policymaking be more responsive to a dynamic social-economic-environmental context? Can we reduce the time taken for a policy to go from the drawing board to implementation? These were some of the questions the founders of Fields Of View, Sruthi Krishnan and Bharath M. Palavalli, set out to answer. “There are no binaries in policymaking. There are an infinite set of possibilities,” says Palavalli, who was named an Ashoka fellow in May for his work at the intersection of technology, social sciences and design.

Earlier this year, Fields Of View organized a session of one of its earliest games, City Game, for a group of 300 female college students in Mangaluru. City Game is a multiplayer offline game designed to explore urban infrastructure and help groups and individuals understand the dynamics of urban governance…(More)”.

Doing good data science


Mike Loukides, Hilary Mason and DJ Patil at O’Reilly: “(This post is the first in a series on data ethics) The hard thing about being an ethical data scientist isn’t understanding ethics. It’s the junction between ethical ideas and practice. It’s doing good data science.

There has been a lot of healthy discussion about data ethics lately. We want to be clear: that discussion is good, and necessary. But it’s also not the biggest problem we face. We already have good standards for data ethics. The ACM’s code of ethics, which dates back to 1993, is clear, concise, and surprisingly forward-thinking; 25 years later, it’s a great start for anyone thinking about ethics. The American Statistical Association has a good set of ethical guidelines for working with data. So, we’re not working in a vacuum.

And, while there are always exceptions, we believe that most people want to be fair. Data scientists and software developers don’t want to harm the people using their products. There are exceptions, of course; we call them criminals and con artists. Defining “fairness” is difficult, and perhaps impossible, given the many crosscutting layers of “fairness” that we might be concerned with. But we don’t have to solve that problem in advance, and it’s not going to be solved in a simple statement of ethical principles, anyway.

The problem we face is different: how do we put ethical principles into practice? We’re not talking about an abstract commitment to being fair. Ethical principles are worse than useless if we don’t allow them to change our practice, if they don’t have any effect on what we do day-to-day. For data scientists, whether you’re doing classical data analysis or leading-edge AI, that’s a big challenge. We need to understand how to build the software systems that implement fairness. That’s what we mean by doing good data science.

Any code of data ethics will tell you that you shouldn’t collect data from experimental subjects without informed consent. But that code won’t tell you how to implement “informed consent.” Informed consent is easy when you’re interviewing a few dozen people in person for a psychology experiment. Informed consent means something different when someone clicks on an item in an online catalog (hello, Amazon), and ads for that item start following them around ad infinitum. Do you use a pop-up to ask for permission to use their choice in targeted advertising? How many customers would you lose? Informed consent means something yet again when you’re asking someone to fill out a profile for a social site, and you might (or might not) use that data for any number of experimental purposes. Do you pop up a consent form in impenetrable legalese that basically says “we will use your data, but we don’t know for what”? Do you phrase this agreement as an opt-out, and hide it somewhere on the site where nobody will find it?…

To put ethical principles into practice, we need space to be ethical. We need the ability to have conversations about what ethics means, what it will cost, and what solutions to implement. As technologists, we frequently share best practices at conferences, write blog posts, and develop open source technologies—but we rarely discuss problems such as how to obtain informed consent.

There are several facets to this space that we need to think about.

First, we need corporate cultures in which discussions about fairness, about the proper use of data, and about the harm that can be done by inappropriate use of data can be considered. In turn, this means that we can’t rush products out the door without thinking about how they’re used. We can’t allow “internet time” to mean ignoring the consequences. Indeed, computer security has shown us the consequences of ignoring the consequences: many companies that have never taken the time to implement good security practices and safeguards are now paying with damage to their reputations and their finances. We need to do the same when thinking about issues like fairness, accountability, and unintended consequences….(More)”.

Democracy Is a Habit: Practice It


Melvin Rogers at the Boston Review: “After decades of triumph,” The Economist recently concluded, “democracy is losing ground.” But not, apparently, in the West, whose “mature democracies . . . are not yet in serious danger.” On this view, reports of the death of American democracy have been greatly exaggerated. “Donald Trump may scorn liberal norms,” the reasoning goes, “but America’s checks and balances are strong, and will outlast him.” The truly endangered societies are those where “institutions are weaker and democratic habits less ingrained.”

It has become a common refrain, even among those critical of Trump’s administration. “Our democracy is hard to kill,” Harvard political scientist Steven Levitsky said in an interview about his new book with Daniel Ziblatt, How Democracies Die. “We do still have very strong democratic institutions. We’re not Turkey, we’re not Hungary, we’re not Venezuela. We can behave quite recklessly and irresponsibly and probably still muddle through that.”

Is democracy in the United States really so robust? At the outset of World War II, American philosopher John Dewey cautioned against so easy a conclusion—and the simplistic picture of democratic society that it presumes. In Freedom and Culture (1939), he worried that democracy might succumb to the illusion of stability and endurance in the face of threats to liberty and norms of decency. According to Dewey, we must not believe

that democratic conditions automatically maintain themselves, or that they can be identified with fulfillment of prescriptions laid down in a constitution. Beliefs of this sort merely divert attention from what is going on, just as the patter of the prestidigitator enables him to do things that are not noticed by those whom he is engaged in fooling. For what is actually going on may be the formation of conditions that are hostile to any kind of democratic liberties.

Dewey’s was a warning to be wary not just of bad governance but of a more fundamental deformation of society. “This would be too trite to repeat,” he admits, “were it not that so many persons in the high places of business talk as if they believed or could get others to believe that the observance of formulae that have become ritualistic are effective safeguards of our democratic heritage.”…

Dewey may seem like an odd resource to recall in our current political climate. For if we stand in what Hannah Arendt once called “dark times,” Dewey’s optimistic faith in democracy—his unflinching belief in the reflective capacity of human beings to secure the good and avert the bad, and in the progressive character of American democracy—may look ill-equipped to address our current crisis.

Yet this faith was always shaped by an important insight regarding democracy that many seem to have ignored. For Dewey, democracy’s survival depends on a set of habits and dispositions—in short, a culture—to sustain it. …

“The democratic road is the hard one to take,” Dewey concluded in Freedom and Culture. “It is the road which places the greatest burden of responsibility on the greatest number of human beings.” Precisely for this reason, Dewey believed the culture of democracy—the habits and sensibilities of the citizenry—in greater need of scrutiny than its constitution and procedures. For what are constitutions and procedures once you have deformed the ground upon which their proper functioning depends?…(More)”.

How to be a public entrepreneur


Rowan Conway at the RSA: “Political theorist Elinor Ostrom was the first to coin the phrase “public entrepreneur” in her 1965 UCLA PhD thesis where she proposed that government actors should be the makers of purpose-driven businesses. She later went on to surprise the world of economics by winning a Nobel prize.

To the economic establishment, Ostrom was a social scientist, and her theories of common goods and public purpose enterprise ran counter to the economic orthodoxy. Forty-four years later, at the same time that she was taking the stage as the first (and only) woman to win a Nobel prize for economics, another California-based thinker was positing his own vision for entrepreneurship… “Move fast and break things” was famously Mark Zuckerberg’s credo for Silicon Valley entrepreneurs. “Unless you are breaking stuff,” he said in 2009, “you are not moving fast enough.” This phrase came to epitomise the “fail fast” start-up culture that has seeped into our consciousness and redefined modern life in the last decade.

Public vs Private entrepreneurs

So which of these two types of entrepreneurship should prevail? I’d say that they’re not playing on the same field and barely even playing the same game. While the Silicon Valley model glorifies the frat boys who dreamt up tech start-ups in their dorm rooms and took the “self-made” financial gains when big tech took off, public entrepreneurs are not cast from this mold. They are the government actors taking on the system to solve social and environmental problems and the idea of “breaking things” won’t appeal to them. “Moving fast”, however, speaks to their ambitions for an agile government that wants to make change in a digital world.

Public entrepreneurs are socially minded — but they differ from social entrepreneurs in that they carry out a public or state role. In a Centre for Public Impact briefing paper entitled “Enter the Public Entrepreneur” the difference is clear:

“While “social entrepreneurs” are people outside government, public entrepreneurs act within government and, at their heart, are a blend of two different roles: that of a public servant, and that of an entrepreneur. The underlying premise is that these roles are usually distinct but the skill sets they require need not be. Indeed, the future public servant will increasingly need to think and act like an entrepreneur — building new relationships, leveraging resources, working across sector lines and acting, and sometimes failing, fast.”

Today we publish an RSA Lab report entitled “Move Fast and Fix Things” in partnership with Innovate UK. The report examines the role of public entrepreneurs who want to find ways to move fast without leaving a trail of destruction. It builds on the literature that makes the case for public missions and entrepreneurship in government and acts as a kind of “how-to” guide for those in the public sector who want to think and act like entrepreneurs, but sometimes feel they are pushing up against an immovable bureaucratic system.

Acting entrepreneurially with procurement

A useful distinction of types of government innovation by the European Commission describes “innovation in government” as transforming public administration, such as the shift to digital service provision, and “innovation through government” as initiatives that “foster innovation elsewhere in society, such as the public procurement of innovation”. Our report looks at public procurement — specifically the Small Business Research Initiative (SBRI) — as a route for innovation through government.

Governments have catalytic spending power. The UK public sector alone spends over £251.5 billion annually procuring goods and services, which accounts for 33% of public sector spend and 13.7% of GDP. A profound shift in practice is required if government is to proactively use this power to stimulate innovation in the way that Mariana Mazzucato, author of The Entrepreneurial State, calls for. As Director of the UCL Institute for Innovation and Public Purpose, she advocates for “mission-oriented innovation”, which can enable speed as it has “not only a rate, but also a direction” — purposefully using government’s purchasing power to stimulate innovation for good.

But getting procurement professionals to understand how to be entrepreneurial with public funds is no mean feat….(More)”.

Making a 21st Century Constitution: Playing Fair in Modern Democracies


Book by Frank Vibert: “Democratic constitutions are increasingly unfit for purpose with governments facing increased pressures from populists and distrust from citizens. The only way to truly solve these problems is through reform. Within this important book, Frank Vibert sets out the key challenges to reform, the ways in which constitutions should be revitalised and provides the standards against which reform should be measured…

Democratic governments are increasingly under pressure from populists, and distrust of governmental authority is on the rise. Economic causes are often blamed. Making a 21st Century Constitution proposes instead that constitutions no longer provide the kind of support that democracies need in today’s conditions, and outlines ways in which reformers can rectify this.

Frank Vibert addresses key sources of constitutional obsolescence, identifies the main challenges for constitutional updating and sets out the ways in which constitutions may be made suitable for the 21st century. The book highlights the need for reformers to address the deep diversity of values in today’s urbanized societies, the blind spots and content-lite nature of democratic politics, and the dispersion of authority among new chains of intermediaries.

This book will be invaluable for students of political science, public administration and policy, law and constitutional economics. Its analysis of how constitutions can be made fit for purpose again will appeal to all concerned with governance, practitioners and reformers alike…(More)”.