How AI Is Crunching Big Data To Improve Healthcare Outcomes


PSFK: “The state of your health shouldn’t be a mystery, nor should patients or doctors have to wait long to find answers to pressing medical concerns. In PSFK’s Future of Health Report, we dig deep into the latest in AI, big data algorithms and IoT tools that are enabling a new, more comprehensive overview of patient data collection and analysis. Machine support, patient information from medical records and conversations with doctors are combined with the latest medical literature to help form a diagnosis without detracting from doctor-patient relations.

Improved AI is helping patients form a baseline for well-being and is driving change across the healthcare industry. AI not only streamlines intake processes and reduces processing volume at clinics; it also catches input and diagnostic errors within a patient record, allowing doctors to focus on patient care and communication rather than data entry. AI also improves pattern recognition and early diagnosis by learning from multiple patient data sets.

By utilizing deep learning algorithms and software, healthcare providers can connect various libraries of medical information and scan databases of medical records, spotting patterns that lead to more accurate detection and greater efficiency in medical diagnosis and research. IBM Watson, which has previously been used to help identify genetic markers and develop drugs, is applying its neural networks to help doctors correctly diagnose heart abnormalities from medical imaging tests. By scanning thousands of images and learning from correct diagnoses, Watson is able to increase diagnostic accuracy, supporting doctors’ cardiac assessments.
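
The core loop in that excerpt, learning from images paired with confirmed diagnoses, is ordinary supervised learning. As an editorial aside, here is a minimal sketch of that pattern on synthetic data; the model, sizes and data are illustrative assumptions and say nothing about how Watson is actually built.

```python
# Illustrative sketch of supervised learning from labeled scans.
# Everything here (architecture, sizes, synthetic data) is hypothetical;
# this is not IBM Watson's pipeline.
import torch
import torch.nn as nn

class TinyScanClassifier(nn.Module):
    """A deliberately small CNN: scan in, abnormality score out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Synthetic stand-in for an archive of imaging studies with confirmed labels.
scans = torch.randn(64, 1, 64, 64)             # 64 single-channel "images"
labels = torch.randint(0, 2, (64, 1)).float()  # 1 = confirmed abnormality

model = TinyScanClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):  # each pass nudges the model toward the labels
    optimizer.zero_grad()
    loss = loss_fn(model(scans), labels)
    loss.backward()
    optimizer.step()
```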

Outside of the doctor’s office, AI is also being used to monitor patient vitals to help create a baseline for well-being. By monitoring health on a day-to-day basis, AI systems can alert patients and medical teams to abnormalities or changes from the baseline in real time, increasing positive outcomes. Take xbird, a mobile platform that uses artificial intelligence to help diabetics anticipate when hypoglycemic attacks may occur. The AI combines personal and environmental data points from over 20 sensors within mobile and wearable devices to create an automated personal diary and cross-references it against blood sugar levels. Patients then share this data with their doctors in order to uncover their unique hypoglycemic triggers and better manage their condition.
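
The cross-referencing step xbird describes can be pictured as a time-aligned join between an activity diary and glucose readings. The sketch below is a hypothetical illustration of that idea only; the threshold, column names and data are invented for illustration and are not xbird’s actual design.

```python
# Hypothetical sketch: align a sensor-derived activity diary with glucose
# readings and surface the activities that most often precede low readings.
import pandas as pd

HYPO_THRESHOLD_MG_DL = 70  # a common clinical cutoff; not xbird's setting

diary = pd.DataFrame({
    "time": pd.to_datetime(["2017-06-01 07:00", "2017-06-01 09:30",
                            "2017-06-01 12:00", "2017-06-01 18:00"]),
    "activity": ["running", "commuting", "skipped_lunch", "walking"],
})
glucose = pd.DataFrame({
    "time": pd.to_datetime(["2017-06-01 08:00", "2017-06-01 10:00",
                            "2017-06-01 13:00", "2017-06-01 19:00"]),
    "mg_dl": [65, 95, 62, 88],
})

# For each glucose reading, find the most recent diary entry before it.
merged = pd.merge_asof(glucose.sort_values("time"),
                       diary.sort_values("time"),
                       on="time", direction="backward")

# Count which activities precede low readings: candidate "triggers".
triggers = (merged[merged["mg_dl"] < HYPO_THRESHOLD_MG_DL]
            .groupby("activity").size()
            .sort_values(ascending=False))
print(triggers)
```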

In China, meanwhile, web provider Baidu has debuted Melody, a chat-based medical assistant that helps individuals communicate their symptoms, learn of possible diagnoses and connect to medical experts….(More)”.

From binoculars to big data: Citizen scientists use emerging technology in the wild


Interview by Rebecca Kondos: “For years, citizen scientists have trekked through local fields, rivers, and forests to observe, measure, and report on species and habitats with notebooks, binoculars, butterfly nets, and cameras in hand. It’s a slow process, and the gathered data isn’t easily shared. It’s a system that has worked to some degree, but one that’s in need of a technology and methodology overhaul.

Thanks to the team behind Wildme.org and their Wildbook software, both citizen and professional scientists are becoming active participants in using AI, computer vision, and big data. Wildbook is working to transform the data collection process, and citizen scientists who use the software have more transparency into conservation research and the impact it’s making. As a result, engagement levels have increased; scientists can more easily share their work; and, most important, endangered species like the whale shark benefit.
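
For readers wondering what automated re-sighting can look like at its simplest, the toy sketch below matches a new sighting’s feature vector (standing in for an encoding of, say, a whale shark’s spot pattern) against a catalog of known individuals. Every name and number is hypothetical; Wildbook’s actual matching algorithms are far more sophisticated.

```python
# Toy sketch of catalog matching, not Wildbook's real implementation.
import numpy as np

catalog = {
    "shark_A": np.array([0.9, 0.1, 0.4]),
    "shark_B": np.array([0.2, 0.8, 0.5]),
}  # invented feature vectors standing in for real pattern encodings

def best_match(sighting, catalog, max_distance=0.5):
    """Return the closest catalogued individual, or None if likely new."""
    name, dist = min(
        ((n, np.linalg.norm(sighting - v)) for n, v in catalog.items()),
        key=lambda pair: pair[1],
    )
    return name if dist <= max_distance else None

new_sighting = np.array([0.85, 0.15, 0.35])
print(best_match(new_sighting, catalog))  # -> "shark_A"
```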

In this interview, Colin Kingen, a software engineer for Wildbook (with assistance from his colleagues Jason Holmberg and Jon Van Oast), discusses Wildbook’s work, explains classic problems in field observation science, and shares how Wildbook is working to solve some of the big problems that have plagued wildlife research. He also addresses something I’ve wondered about: why isn’t there an “uberdatabase” to share the work of scientists across all global efforts? The work Kingen and his team are doing exemplifies what can be accomplished when computer scientists with big hearts apply their talents to saving wildlife….(More)”.

Open data on universities – New fuel for transformation


François van Schalkwyk at University World News: “Accessible, usable and relevant open data on South African universities makes it possible for a wide range of stakeholders to monitor, advise and challenge the transformation of South Africa’s universities from an informed perspective.

Some describe data as the new oil, while others suggest it is a new form of capital or compare it to electricity. Either way, there appears to be a groundswell of interest in the potential of data to fuel development.

Whether the proliferation of data is skewing development in favour of globally networked elites or disrupting existing asymmetries of information and power is the subject of ongoing debate. Certainly, there are those who will claim that open data, from a development perspective, could catalyse disruption and redistribution.

Open data is data that is free to use without restriction. Governments and their agencies, universities and their researchers, non-governmental organisations and their donors, and even corporations, are all potential sources of open data.

Open government data, as a public rather than a private resource, embedded in principles of universal access, participation and transparency, is touted as being able to restore the deteriorating levels of trust between citizens and their governments.

Open data promises to do so by making the decisions and processes of the state more transparent and inclusive, empowering citizens to participate and to hold public institutions to account for the distribution of public services and resources.

Benefits of open data

Open data has other benefits over its more cloistered cousins (data in private networks, big data, etc.). By democratising access, open data makes possible the use of data on, for example, health services, crime, the environment, procurement and education by a range of different users, each bringing their own perspective to bear on the data. This can expose bias in the data or may improve the quality of the data by surfacing data errors. Both are important when data is used to shape government policies.

By removing barriers to reusing data, such as copyright or licence fees, open data lets tech-savvy entrepreneurs develop applications that help the public make more informed decisions by making available easy-to-understand information on medicine prices, crime hot-spots, air quality, beneficial ownership, school performance, etc. And access to open research data can improve quality and efficiency in science.
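
As a toy illustration of the lightweight reuse that removing such barriers permits, the sketch below turns an invented open CSV of school results into a plain-language ranking; the dataset, columns and figures are assumptions, not real data.

```python
# Illustrative only: summarizing a hypothetical open dataset into
# easy-to-understand rankings. The "CSV" and its columns are invented.
import csv
import io

open_csv = io.StringIO(
    "school,pass_rate\n"
    "Northside High,0.91\n"
    "Riverside High,0.74\n"
    "Hillcrest High,0.83\n"
)

rows = list(csv.DictReader(open_csv))
rows.sort(key=lambda r: float(r["pass_rate"]), reverse=True)
for rank, r in enumerate(rows, start=1):
    print(f"{rank}. {r['school']}: {float(r['pass_rate']):.0%} pass rate")
```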

Scientists can check and confirm the data on which important discoveries are based if the data is open, and, in some cases, researchers can reuse open data from other studies, saving them the cost and effort of collecting the data themselves.

‘Open washing’

But access alone is not enough for open data to realise its potential. Open data must also be used. And data is used if it holds some value for the user. Governments have been known to publish server rooms full of data that no one is interested in, simply to support claims of transparency and of contributing to the knowledge economy. That practice is called ‘open washing’. …(More)”

Bangalore Taps Tech Crowdsourcing to Fix ‘Unruly’ Gridlock


Saritha Rai at Bloomberg Technology: “In Bangalore, tech giants and startups typically spend their days fiercely battling each other for customers. Now they are turning their attention to a common enemy: the Indian city’s infernal traffic congestion.

Cross-town commutes that can take hours have inspired the Gridlock Hackathon, a contest initiated by Flipkart Online Services Pvt. for technology workers to find solutions to the snarled roads that cost the economy billions of dollars. While the prize totals a mere $5,500, it’s attracting teams from global giants Microsoft Corp., Google and Amazon.com Inc. to local startups including Ola.

The online contest is crowdsourcing solutions for Bangalore, a city of more than 10 million, as it grapples with inadequate roads, unprecedented growth and overpopulation. The technology industry began booming there decades ago and, with the city’s base of talent, it continues to attract companies. Just last month, Intel Corp. said it would invest $178 million and add more workers to expand its R&D operations.

The ideas put forward at the hackathon range from using artificial intelligence and big data on traffic flows to true moonshots, such as flying cars.

The gridlock remains a problem for a city dependent on its technology industry and seeking to attract new investment…(More)”.

Open data: Accountability and transparency


Paper at Big Data and Society: “The movements by national governments, funding agencies, universities, and research communities toward “open data” face many difficult challenges. In high-level visions of open data, researchers’ data and metadata practices are expected to be robust and structured. The integration of the internet into scientific institutions amplifies these expectations. When examined critically, however, the data and metadata practices of scholarly researchers often appear incomplete or deficient. The concepts of “accountability” and “transparency” provide insight into these perceived gaps. Researchers’ primary accountabilities are related to meeting the expectations of research competency, not to external standards of data deposition or metadata creation. Likewise, making data open in a transparent way can involve a significant investment of time and resources with no obvious benefits. This paper uses differing notions of accountability and transparency to conceptualize “open data” as the result of ongoing achievements, not one-time acts….(More)”.

Justice in Algorithmic Robes


Editorial by Joseph Savirimuthu of a Special Issue of the International Review of Law, Computers & Technology: “The role and impact of algorithms have attracted considerable interest in the media. Their impact is already being reflected in adjustments made in a number of sectors – entertainment, travel, transport, cities and financial services. From an innovation point of view, algorithms enable new knowledge to be created and identify solutions to problems. The emergence of smart sensing technologies, 3D printing, automated systems and robotics is seamlessly being interwoven into discourses such as ‘the collaborative economy’, ‘governance by platforms’ and ‘empowerment’. Innovations such as body-worn cameras, fitness trackers, 3D printing, smart meters, robotics and Big Data hold out the promise of a new algorithmic future. However, the shift in focus from natural and scarce resources towards information also makes individuals its objects, and the mediated construction of access and knowledge infrastructures now provides the conditions for harnessing value from data. The increasing role of algorithms in environments mediated by technology also coincides with growing inter-disciplinary scholarship voicing concerns about the vulnerability of the values we associate with fundamental freedoms and how these are being algorithmically reconfigured or dismantled in a systematic manner.

The themed issue, Justice in Algorithmic Robes, is intended to initiate a dialogue on both the challenges and opportunities as digitalization ushers in a period of transformation that has no immediate parallels in terms of scale, speed and reach. The articles provide different perspectives on the transformation taking place in the digital environment. The contributors offer an inter-disciplinary view of how the digital economy is being invigorated and evaluate the regulatory responses – in particular, how these transformations interact with law. The different spheres covered in Justice in Algorithmic Robes – the relations between the State and individuals, autonomous technology, designing human–computer interactions, infrastructures of trust, accountability in the age of Big Data, and health and wearables – not only reveal the problem of defining spheres of economic, political and social activity, but also highlight how these contexts evolve into structures for dominance, power and control. Re-imagining the role of law does not mean that technology is the problem; the central idea from the contributions is that how we critically interpret and construct Justice in Algorithmic Robes is probably the first step we must take, always mindful of the fact that law may actually reinforce power structures….(Full Issue)”.

Data and the City


Book edited by Rob Kitchin, Tracey P. Lauriault, and Gavin McArdle: “There is a long history of governments, businesses, science and citizens producing and utilizing data in order to monitor, regulate, profit from and make sense of the urban world. Recently, we have entered the age of big data, and now many aspects of everyday urban life are being captured as data and city management is mediated through data-driven technologies.

Data and the City is the first edited collection to provide an interdisciplinary analysis of how this new era of urban big data is reshaping how we come to know and govern cities, and the implications of such a transformation. This book looks at the creation of real-time cities and data-driven urbanism and considers the relationships at play. By taking a philosophical, political, practical and technical approach to urban data, the authors analyse the ways in which data is produced and framed within socio-technical systems. They then examine the constellation of existing and emerging urban data technologies. The volume concludes by considering the social and political ramifications of data-driven urbanism, questioning whom it serves and for what ends.

This book, the companion volume to 2016’s Code and the City, offers the first critical reflection on the relationship between data, data practices and the city, and how we come to know and understand cities through data. It will be crucial reading for those who wish to understand and conceptualize urban big data, data-driven urbanism and the development of smart cities….(More)”

Children and the Data Cycle: Rights And Ethics in a Big Data World


Gabrielle Berman and Kerry Albright at UNICEF: “In an era of increasing dependence on data science and big data, the voices of one set of major stakeholders – the world’s children and those who advocate on their behalf – have been largely absent. A recent paper estimates one in three global internet users is a child, yet there has been little rigorous debate or understanding of how to adapt traditional, offline ethical standards for research involving data collection from children, to a big data, online environment (Livingstone et al., 2015). This paper argues that due to the potential for severe, long-lasting and differential impacts on children, child rights need to be firmly integrated onto the agendas of global debates about ethics and data science. The authors outline their rationale for a greater focus on child rights and ethics in data science and suggest steps to move forward, focusing on the various actors within the data chain including data generators, collectors, analysts and end-users. The paper concludes by calling for a much stronger appreciation of the links between child rights, ethics and data science disciplines and for enhanced discourse between stakeholders in the data chain, and those responsible for upholding the rights of children, globally….(More)”.

Using Collaboration to Harness Big Data for Social Good


Jake Porway at SSIR: “These days, it’s hard to get away from the hype around “big data.” We read articles about how Silicon Valley is using data to drive everything from website traffic to autonomous cars. We hear speakers at social sector conferences talk about how nonprofits can maximize their impact by leveraging new sources of digital information like social media data, open data, and satellite imagery.

Braving this world can be challenging, we know. Creating a data-driven organization can require big changes in culture and process. Some nonprofits, like Crisis Text Line and Watsi, started off boldly by building their own data science teams. But for the many other organizations wondering how to best use data to advance their mission, we’ve found that one ingredient works better than all the software and tech that you can throw at a problem: collaboration.

As a nonprofit dedicated to applying data science for social good, DataKind has run more than 200 projects in collaboration with other nonprofits worldwide by connecting them to teams of volunteer data scientists. What do the most successful ones have in common? Strong collaborations on three levels: with data science experts, within the organization itself, and across the nonprofit sector as a whole.

1. Collaborate with data science experts to define your project. As we often say, finding problems can be harder than finding solutions. ….

2. Collaborate across your organization to “build with, not for.” Our projects follow the principles of human-centered design and the philosophy pioneered in the civic tech world of “design with, not for.” ….

3. Collaborate across your sector to move the needle. Many organizations think about building data science solutions for unique challenges they face, such as predicting the best location for their next field office. However, most of us are fighting common causes shared by many other groups….

By focusing on building strong collaborations on these three levels—with data experts, across your organization, and across your sector—you’ll go from merely talking about big data to making big impact…(More)”.

Big Data: A Twenty-First Century Arms Race


Report by Atlantic Council and Thomson Reuters: “We are living in a world awash in data. Accelerated interconnectivity, driven by the proliferation of internet-connected devices, has led to an explosion of data—big data. A race is now underway to develop new technologies and implement innovative methods that can handle the volume, variety, velocity, and veracity of big data and apply it smartly to provide decisive advantage and help solve major challenges facing companies and governments.

For policy makers in government, big data and associated technologies like machine learning and artificial intelligence have the potential to drastically improve their decision-making capabilities. How governments use big data may be a key factor in improved economic performance and national security. This publication looks at how big data can maximize the efficiency and effectiveness of government and business, while minimizing modern risks. Five authors explore big data across three cross-cutting issues: security, finance, and law.

Chapter 1, “The Conflict Between Protecting Privacy and Securing Nations,” Els de Busser
Chapter 2, “Big Data: Exposing the Risks from Within,” Erica Briscoe
Chapter 3, “Big Data: The Latest Tool in Fighting Crime,” Benjamin Dean
Chapter 4, “Big Data: Tackling Illicit Financial Flows,” Tatiana Tropina
Chapter 5, “Big Data: Mitigating Financial Crime Risk,” Miren Aparicio…. Read the Publication (PDF)