Prisoners use VR programme as a rehabilitation tool


Springwise: “The global prison population currently totals 10.5 million, and while many countries, including the UK and US, have seen a steady decline in crime rates over the past decade, the rate of reoffending has increased. About two-thirds of released prisoners in the US are rearrested within three years of release, and about three-quarters are rearrested within five. Virtual Rehab is a new project that seeks to rehabilitate inmates using VR technology.

Virtual Rehab’s interactive tool includes education on a broad range of themes, from family violence and sexual offences to psychological challenges including mental and emotional disorders. The programme works by placing the prisoner into interactive role-play scenarios which reverse the aggressor/victim roles, propelling the prisoner into the skin of an assaulted person with the aim of developing empathy. The programme also includes formal education and vocational job training, developing professional skills to help ex-offenders thrive in the real world. …

Virtual reality has already had an impact in various areas related to rehabilitation, including in the treatment of conditions such as PTSD and anxiety. At Springwise, we recently covered two programmes (a glove and an online platform) that use online gaming to support the recovery of patients….(More)”

Beyond IRBs: Designing Ethical Review Processes for Big Data Research


Conference Proceedings by Future of Privacy Forum: “The ethical framework applying to human subject research in the biomedical and behavioral research fields dates back to the Belmont Report. Drafted in 1976 and adopted by the United States government in 1991 as the Common Rule, the Belmont principles were geared towards a paradigmatic controlled scientific experiment with a limited population of human subjects interacting directly with researchers and manifesting their informed consent. These days, researchers in academic institutions, as well as private-sector businesses not subject to the Common Rule, conduct analysis of a wide array of data sources, from massive commercial or government databases to individual tweets or Facebook postings publicly available online, with little or no opportunity to directly engage human subjects to obtain their consent or even inform them of research activities.

Data analysis is now used in multiple contexts, such as combatting fraud in the payment card industry, reducing the time commuters spend on the road, detecting harmful drug interactions, improving marketing mechanisms, personalizing the delivery of education in K-12 schools, encouraging exercise and weight loss, and much more. And companies deploy data research not only to maximize economic gain but also to test new products and services to ensure they are safe and effective. These data uses promise tremendous societal benefits but at the same time create new risks to privacy, fairness, due process and other civil liberties.

Increasingly, corporate officers find themselves struggling to navigate unsettled social norms and make ethical choices that are more befitting of philosophers than business managers or even lawyers. The ethical dilemmas arising from data analysis transcend privacy and trigger concerns about stigmatization, discrimination, human subject research, algorithmic decision making and filter bubbles.

The challenge of fitting the round peg of data-focused research into the square hole of existing ethical and legal frameworks will determine whether society can reap the tremendous opportunities hidden in the data exhaust of governments and cities, health care institutions and schools, social networks and search engines, while at the same time protecting privacy, fairness, equality and the integrity of the scientific process. One commentator called this “the biggest civil rights issue of our time.”…(More)”

Iran’s Civic Tech Sector


Leah Hunter at Forbes: “This is the story of Firuzeh Mahmoudi, founder of United4Iran and Irancubator, the first civic tech-focused startup incubator in Iran. She is also a creator of civil justice apps and a businessperson. Her business? Creating social good in a country she loves.

“Our mission is to improve civil liberties in Iran, and we do that in three ways,” says Mahmoudi, 45, who spent four years working for the United Nations in countries across the world as an international project coordinator before becoming a founder….

Mahmoudi realized that there wasn’t anyone focused on apps made for civic engagement inside Iran, so she built a team to create Irancubator. She works with 30 consultants and partners in the Iranian-American community. She also has a staff of 10 in her San Francisco Bay Area office—most of whom are Iranian, and were still in the country until 2009. “I really worked hard in bringing in resilient people…people who are smart, creative, kind. It’s so important to be kind. How you do the work, and how you show up, is that critical. If you try to make the world a better place, you’d better be nice. If you want to make the government be nicer, you’d better be nice, too.”

She and her team, based in the San Francisco Bay Area, are creating apps like the Iran Prison Atlas – a database of all the country’s political prisoners, the judges who sentenced them and the prisons where they’re held. “We believe how these people are treated is a litmus test for our country,” Mahmoudi explains.

They are building an app women can use to track their ovulation cycles and periods. It also acts as a Trojan horse; as users dig deeper, it includes all sorts of information on women’s rights, including how to have equal rights in a marriage. (In Iran, divorce rights for women—as well as the right to equal custody of their children afterward—require a document signed before the wedding ceremony.) “This one’s not specifically targeting the richer women who are living in Northern Tehran. It’s an app that aims to engage people who live in rural areas, or who are not as well-off or educated, or who are perhaps more conservative or religious,” Mahmoudi explains. “Once you get in the app, you realize there are other parts. They include information on one’s rights as a woman in a marriage. Or basic concepts that may be completely foreign to them. Like maybe say, ‘Hey, do you know there’s a concept called “marital rape”? Even if someone’s your husband, they can’t treat you this way.’”…

Right now, Irancubator is building a dozen apps. The first is launching in late January. Named RadiTo, this app works similarly to YouTube, but for radio instead of TV, allowing people in Iran to broadcast channels about the topics they care about. Someone can create a channel about LGBT rights or about children and education in their language. “Whatever they want—they can have a secure, safe platform to broadcast their message,” Mahmoudi explains.

From an operational perspective, this isn’t easy. Mahmoudi and her staff aren’t just building a startup. They’re operating from the other side of the world, working for users with whom they cannot directly communicate.  “Any startup is challenging and has so many hurdles. For us, it’s another level, working with so many security challenges,” says Mahmoudi….

The biggest challenge of all: they cannot go back to Iran. “The Islamic Republic coined me as an anti-revolutionary fugitive in one of their articles,” Mahmoudi says. “Half of my staff are refugees who got out.”…(More).

Artificial Intelligence Could Help Colleges Better Plan What Courses They Should Offer


Jeffrey R. Young at EdSurge: “Big data could help community colleges better predict how industries are changing so they can tailor their IT courses and other programs. After all, if Amazon can forecast what consumers will buy and prestock items in its warehouses to meet the expected demand, why can’t colleges do the same thing when planning their curricula, using predictive analytics to make sure new degree or certificate programs are started just in time for expanding job opportunities?

That’s the argument made by Gordon Freedman, president of the nonprofit National Laboratory for Education Transformation. He’s part of a new center that will do just that, by building a data warehouse that brings together up-to-date information on what skills employers need and what colleges currently offer—and then applying artificial intelligence to attempt to predict when sectors or certain employment needs might be expanding.

He calls the approach “opportunity engineering,” and the center boasts some heavy-hitting players to assist in the efforts, including the University of Chicago, the San Diego Supercomputing Center and Argonne National Laboratory. It’s called the National Center for Opportunity Engineering & Analysis.

Ian Roark, vice president of workforce development at Pima Community College in Arizona, is among those eager for this kind of “opportunity engineering” to emerge.

He explains that when colleges want to start new programs, they face a long haul—it takes time to develop a new curriculum, put it through an internal review, and then send it through an accreditor….

Other players are already trying to translate the job market into a giant data set to spot trends. LinkedIn sits on one of the biggest troves of data, with hundreds of millions of job profiles, and ambitions to create what it calls the “economic graph” of the economy. But not everyone is on LinkedIn, which attracts mainly those in white-collar jobs. And companies such as Burning Glass Technologies have scanned hundreds of thousands of job listings and attempt to provide real-time intelligence on what employers say they’re looking for. Even so, Freedman argues, those sources still don’t paint the full picture—for instance, what new jobs are forming at companies.

“We need better information from the employer, better information from the job seeker and better information from the college, and that’s what we’re going after,” Freedman says…(More)”.
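The “opportunity engineering” idea described above boils down to spotting which employment sectors are growing before the growth peaks. The article does not describe the center’s actual methods, so the sketch below is only a hypothetical illustration of the simplest version of that task: fit a trend line to monthly job-posting counts per sector and flag the sectors growing fastest. The sector names and numbers are invented.

```python
# Hypothetical sketch: flag "expanding" job sectors from monthly posting
# counts, as a college planner might before launching a new program.
# Sector names and figures are invented for illustration only.

def slope(counts):
    """Least-squares slope of counts over time (months indexed 0..n-1)."""
    n = len(counts)
    mean_x = (n - 1) / 2
    mean_y = sum(counts) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(counts))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

def expanding_sectors(postings_by_sector, threshold=5.0):
    """Return sectors whose postings grow faster than `threshold` per month."""
    return sorted(
        sector for sector, counts in postings_by_sector.items()
        if slope(counts) > threshold
    )

postings = {
    "cybersecurity": [120, 140, 165, 190, 220],   # steadily rising
    "data entry":    [300, 290, 285, 280, 275],   # declining
    "cloud ops":     [80, 85, 100, 130, 170],     # accelerating
}
print(expanding_sectors(postings))  # → ['cloud ops', 'cybersecurity']
```

A real system of the kind Freedman describes would of course draw on far richer signals (employer needs, college offerings, job-seeker data) than a single posting-count series, but the core forecasting question is the same.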

Solving some of the world’s toughest problems with the Global Open Policy Report


At Creative Commons: “Open Policy is when governments, institutions, and non-profits enact policies and legislation that make content, knowledge, or data they produce or fund available under a permissive license to allow reuse, revision, remix, retention, and redistribution. This promotes innovation, access, and equity in areas of education, data, software, heritage, cultural content, science, and academia.

For several years, Creative Commons has been tracking the spread of open policies around the world. And now, with the new Global Open Policy Report (PDF) by the Open Policy Network, we’re able to provide a systematic overview of open policy development.

The first-of-its-kind report gives an overview of open policies in 38 countries, across four sectors: education, science, data and heritage. The report includes an Open Policy Index and regional impact and local case studies from Africa, the Middle East, Asia, Australia, Latin America, Europe, and North America. The index measures open policy strength on two scales: policy strength and scope, and level of policy implementation. The index was developed by researchers from CommonSphere, a partner organization of CC Japan.

The Open Policy Index scores were used to classify countries as either Leading, Mid-Way, or Delayed in open policy development. The ten countries with the highest scores are Argentina, Bolivia, Chile, France, Kyrgyzstan, New Zealand, Poland, South Korea, Tanzania, and Uruguay…(More)

The data-driven social worker


NESTA: “Newcastle City Council has been using data to change the way it delivers long-term social work to vulnerable children and families.

Social workers have data analysts working alongside them. This is helping them to identify common factors among types of cases, understand the root causes of social problems and create more effective (and earlier) interventions.

What is it?

Social work teams have an embedded data analyst, whose role is to generate hypotheses to test and perform analyses that offer insight into how best to support families.

Their role is not purely quantitative; they are expected to identify patterns, and undertake deep-dive or case study analysis. The data analysts also test what works, measuring the success of externally commissioned services, along with cost information.

While each social worker only has knowledge of their own individual cases, data analysts have a bird’s-eye view of the whole team’s activity, enabling them to look across sets of families for common patterns.

How does it work?

Data analysts are responsible for maintaining ChildStat, a data dashboard that social workers use to help manage their caseloads. The data insights found by the embedded analysts can highlight the need to work in a different way.

For example, one unit works with children at risk of physical abuse. Case file analysis of the mental health histories of the parents found that 20% of children had parents with a personality disorder, while 60-70% had a parent who had themselves experienced sexual or physical abuse as a child.

Traditional social work methods may not have uncovered this insight, which led Newcastle to look for new responses to working with these types of families.

Data analysis has also helped to identify the factors that are most predictive of a child becoming NEET (not in education, employment or training), enabling the team to review their approach to working with families and focus on earlier intervention….(More)”
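The cross-caseload pattern-finding described above is, at its simplest, a prevalence count: how often does each recorded risk factor appear across a whole unit’s case files, rather than in any one worker’s caseload? The sketch below is a hypothetical illustration of that idea; the case records and factor names are invented, and a real analysis like Newcastle’s would work on far richer structured and free-text data.

```python
# Hypothetical sketch of cross-caseload pattern finding: compute how
# prevalent each recorded risk factor is across a unit's case files.
# All records and factor names below are invented for illustration.
from collections import Counter

def factor_prevalence(case_files):
    """Map each risk factor to the share of cases in which it appears."""
    counts = Counter(
        factor for case in case_files for factor in set(case["factors"])
    )
    total = len(case_files)
    return {factor: n / total for factor, n in counts.items()}

cases = [
    {"id": 1, "factors": ["parental_personality_disorder", "housing_instability"]},
    {"id": 2, "factors": ["parent_abused_as_child"]},
    {"id": 3, "factors": ["parent_abused_as_child", "housing_instability"]},
    {"id": 4, "factors": ["parent_abused_as_child"]},
    {"id": 5, "factors": ["parental_personality_disorder",
                          "parent_abused_as_child"]},
]
prevalence = factor_prevalence(cases)
# The bird's-eye view no single caseworker's caseload would reveal:
for factor, share in sorted(prevalence.items(), key=lambda kv: -kv[1]):
    print(f"{factor}: {share:.0%}")
```

Running this over the toy records surfaces, for instance, that a parent’s own childhood abuse appears in 80% of cases — exactly the kind of aggregate signal the article says individual caseworkers could not see on their own.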

Data Literacy – What is it and how can we make it happen?


Introduction by Mark Frank, Johanna Walker, Judie Attard, Alan Tygel of Special Issue on Data Literacy of The Journal of Community Informatics: “With the advent of the Internet and particularly Open Data, data literacy (the ability of non-specialists to make use of data) is rapidly becoming an essential life skill comparable to other types of literacy. However, it is still poorly defined and there is much to learn about how best to increase data literacy both amongst children and adults. This issue addresses both the definition of data literacy and current efforts on increasing and sustaining it. A feature of the issue is the range of contributors. While there are important contributions from the UK, Canada and other Western countries, these are complemented by several papers from the Global South where there is an emphasis on grounding data literacy in context and relating it to the issues and concerns of communities. (Full Text: PDF)

See also:

Creating an Understanding of Data Literacy for a Data-driven Society by Annika Wolff, Daniel Gooch, Jose J. Cavero Montaner, Umar Rashid, Gerd Kortuem

Data Literacy defined pro populo: To read this article, please provide a little information by David Crusoe

Data literacy conceptions, community capabilities by Paul Matthews

Urban Data in the primary classroom: bringing data literacy to the UK curriculum by Annika Wolff, Jose J Cavero Montaner, Gerd Kortuem

Contributions of Paulo Freire for a Critical Data Literacy: a Popular Education Approach by Alan Freihof Tygel, Rosana Kirsch

DataBasic: Design Principles, Tools and Activities for Data Literacy Learners by Catherine D’Ignazio, Rahul Bhargava

Perceptions of ICT use in rural Brazil: Factors that impact appropriation among marginalized communities by Paola Prado, J. Alejandro Tirado-Alcaraz, Mauro Araújo Câmara

Graphical Perception of Value Distributions: An Evaluation of Non-Expert Viewers’ Data Literacy by Arkaitz Zubiaga, Brian Mac Namee

How to Hold Algorithms Accountable


Nicholas Diakopoulos and Sorelle Friedler at MIT Technology Review: “Algorithms are now used throughout the public and private sectors, informing decisions on everything from education and employment to criminal justice. But despite the potential for efficiency gains, algorithms fed by big data can also amplify structural discrimination, produce errors that deny services to individuals, or even seduce an electorate into a false sense of security. Indeed, there is growing awareness that the public should be wary of the societal risks posed by over-reliance on these systems and work to hold them accountable.

Various industry efforts, including a consortium of Silicon Valley behemoths, are beginning to grapple with the ethics of deploying algorithms that can have unanticipated effects on society. Algorithm developers and product managers need new ways to think about, design, and implement algorithmic systems in publicly accountable ways. Over the past several months, we and some colleagues have been trying to address these goals by crafting a set of principles for accountable algorithms….

Accountability implies an obligation to report and justify algorithmic decision-making, and to mitigate any negative social impacts or potential harms. We’ll consider accountability through the lens of five core principles: responsibility, explainability, accuracy, auditability, and fairness.

Responsibility. For any algorithmic system, there needs to be a person with the authority to deal with its adverse individual or societal effects in a timely fashion. This is not a statement about legal responsibility but, rather, a focus on avenues for redress, public dialogue, and internal authority for change. This could be as straightforward as giving someone on your technical team the internal power and resources to change the system, making sure that person’s contact information is publicly available.

Explainability. Any decisions produced by an algorithmic system should be explainable to the people affected by those decisions. These explanations must be accessible and understandable to the target audience; purely technical descriptions are not appropriate for the general public. Explaining risk assessment scores to defendants and their legal counsel would promote greater understanding and help them challenge apparent mistakes or faulty data. Some machine-learning models are more explainable than others, but just because there’s a fancy neural net involved doesn’t mean that a meaningful explanation can’t be produced.

Accuracy. Algorithms make mistakes, whether because of data errors in their inputs (garbage in, garbage out) or statistical uncertainty in their outputs. The principle of accuracy suggests that sources of error and uncertainty throughout an algorithm and its data sources need to be identified, logged, and benchmarked. Understanding the nature of errors produced by an algorithmic system can inform mitigation procedures.

Auditability. The principle of auditability states that algorithms should be developed to enable third parties to probe and review the behavior of an algorithm. Enabling algorithms to be monitored, checked, and criticized would lead to more conscious design and course correction in the event of failure. While there may be technical challenges in allowing public auditing while protecting proprietary information, private auditing (as in accounting) could provide some public assurance. Where possible, even limited access (e.g., via an API) would allow the public a valuable chance to audit these socially significant algorithms.

Fairness. As algorithms increasingly make decisions based on historical and societal data, existing biases and historically discriminatory human decisions risk being “baked in” to automated decisions. All algorithms making decisions about individuals should be evaluated for discriminatory effects. The results of the evaluation and the criteria used should be publicly released and explained….(More)”
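The fairness principle above calls for evaluating algorithmic decisions for discriminatory effects. The authors don’t prescribe a specific metric, but one widely used, very simple check is the “80% rule” disparate-impact ratio: compare the rate of favorable outcomes across groups. The sketch below illustrates that check on invented decision records; group labels and numbers are hypothetical.

```python
# Hypothetical sketch of one simple fairness evaluation: the
# disparate-impact ratio (the "80% rule"). All decision records
# below are invented test data, not real algorithm output.

def favorable_rate(decisions, group):
    """Share of a group's decisions that were favorable."""
    outcomes = [d["favorable"] for d in decisions if d["group"] == group]
    return sum(outcomes) / len(outcomes)

def disparate_impact(decisions, group_a, group_b):
    """Ratio of favorable rates; values under 0.8 commonly flag concern."""
    return favorable_rate(decisions, group_a) / favorable_rate(decisions, group_b)

decisions = (
      [{"group": "A", "favorable": True}] * 30
    + [{"group": "A", "favorable": False}] * 70
    + [{"group": "B", "favorable": True}] * 60
    + [{"group": "B", "favorable": False}] * 40
)
ratio = disparate_impact(decisions, "A", "B")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.60 = 0.50
```

A ratio this far below 0.8 is the kind of result the authors argue should be publicly released and explained, alongside the criteria used in the evaluation.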

New Data Portal to analyze governance in Africa


Governance and Service Delivery: Practical Applications of Social Accountability Across Sectors


Book edited by Derick W. Brinkerhoff, Jana C. Hertz, and Anna Wetterberg: “…Historically, donors and academics have sought to clarify what makes sectoral projects effective and sustainable contributors to development. Among the key factors identified have been (1) the role and capabilities of the state and (2) the relationships between the state and citizens, phenomena often lumped together under the broad rubric of “governance.” Given the importance of a functioning state and positive interactions with citizens, donors have treated governance as a sector in its own right, with projects ranging from public sector management reform, to civil society strengthening, to democratization (Brinkerhoff, 2008). The link between governance and sectoral service delivery was highlighted in the World Bank’s 2004 World Development Report, which focused on accountability structures and processes (World Bank, 2004).

Since then, sectoral specialists’ awareness that governance interventions can contribute to service delivery improvements has increased substantially, and there is growing recognition that both technical and governance elements are necessary facets of strengthening public services. However, expanded awareness has not reliably translated into effective integration of governance into sectoral programs and projects in, for example, health, education, water, agriculture, or community development. The bureaucratic realities of donor programming offer a partial explanation…. Beyond bureaucratic barriers, though, lie ongoing gaps in practical knowledge of how best to combine attention to governance with sector-specific technical investments. What interventions make sense, and what results can reasonably be expected? What conditions support or limit both improved governance and better service delivery? How can citizens interact with public officials and service providers to express their needs, improve services, and increase responsiveness? Various models and compilations of best practices have been developed, but debates remain, and answers to these questions are far from settled. This volume investigates these questions and contributes to building understanding that will enhance both knowledge and practice. In this book, we examine six recent projects, funded mostly by the United States Agency for International Development and implemented by RTI International, that pursued several different paths to engaging citizens, public officials, and service providers on issues related to accountability and sectoral services…(More)”