New mathematical model can help save endangered species


Blogpost by Majken Brahe Ellegaard Christensen: “What does the blue whale have in common with the Bengal tiger and the green turtle? They share the risk of extinction and are classified as endangered species. There are multiple reasons for species to die out, and climate change is among the main ones.

The risk of extinction varies from species to species, depending on how individuals in a population reproduce and how long each animal survives. Understanding the dynamics of survival and reproduction can support management actions to improve a species’ chances of surviving.

Mathematical and statistical models have become powerful tools to help explain these dynamics. However, the quality of the information we use to construct such models is crucial to improve our chances of accurately predicting the fate of populations in nature.

Colchero’s research focuses on mathematically recreating population dynamics by better understanding a species’ demography. He works on constructing and exploring stochastic population models that predict how a certain population (for example, an endangered species) will change over time.

These models include mathematical factors to describe how the species’ environment, survival rates, and reproduction determine the population’s size and growth. For practical reasons, some assumptions are necessary.

Two commonly accepted assumptions are that survival and reproduction are constant with age, and that high survival goes hand in hand with high reproduction across all age groups within a species. Colchero challenged these assumptions by accounting for age-specific survival and reproduction, and for trade-offs between the two: sometimes conditions that favor survival are unfavorable for reproduction, and vice versa.

For his work, Colchero used statistics, mathematical derivations, and computer simulations with data from wild populations of 24 vertebrate species. The outcome was a significantly improved model that made more accurate predictions of a species’ population growth.
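The kind of age-structured, stochastic projection described above can be illustrated with a toy simulation. Everything here is invented for illustration: the age classes, the survival and fecundity values, and the function names are hypothetical, not Colchero's actual model.

```python
import math
import random

# Hypothetical age-specific rates (illustrative only, not real data):
SURVIVAL = [0.5, 0.8, 0.7, 0.4]   # P(an individual of age class i survives the year)
FECUNDITY = [0.0, 0.3, 1.2, 0.8]  # expected offspring per individual of age class i

def binomial(n, p, rng):
    """Number of successes in n independent trials, each with probability p."""
    return sum(1 for _ in range(n) if rng.random() < p)

def poisson(lam, rng):
    """Knuth's Poisson sampler; fine for the small means used here."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def project_one_year(counts, rng):
    """Advance an age-structured population by one stochastic year.

    counts[i] is the number of individuals in age class i. Newborns enter
    class 0, survivors of class i move to class i+1, and the oldest class
    absorbs its own survivors.
    """
    births = poisson(sum(n * f for n, f in zip(counts, FECUNDITY)), rng)
    survivors = [binomial(n, s, rng) for n, s in zip(counts, SURVIVAL)]
    new_counts = [births] + survivors[:-1]
    new_counts[-1] += survivors[-1]
    return new_counts

rng = random.Random(42)
population = [50, 30, 20, 10]
for year in range(20):
    population = project_one_year(population, rng)
print(population, sum(population))
```

Running many such replicates from the same starting population yields a distribution of outcomes, which is how a stochastic model can attach a probability to decline or extinction rather than a single deterministic trajectory.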

Despite the technical nature of Fernando’s work, this type of model can have very practical implications, as it provides qualified explanations of the underlying causes of extinction. These can inform management actions and may help prevent the extinction of endangered species….(More)”

Data Was Supposed to Fix the U.S. Education System. Here’s Why It Hasn’t.


Simon Rodberg at Harvard Business Review: “For too long, the American education system failed too many kids, including far too many poor kids and kids of color, without enough public notice or accountability. To combat this, leaders of all political persuasions championed the use of testing to measure progress and drive better results. Measurement has become so common that in school districts from coast to coast you can now find calendars marked “Data Days,” when teachers are expected to spend time not on teaching but on analyzing data: end-of-year and mid-year exams, interim assessments, science and social studies tests, teacher-created and computer-adaptive tests, surveys, and attendance and behavior notes. It’s been this way for more than 30 years, and it’s time to try a different approach.

The big numbers are necessary, but the more they proliferate, the less value they add. Data-based answers lead to further data-based questions, testing, and analysis; and the psychology of leaders and policymakers means that the hunt for data gets in the way of actual learning. The drive for data responded to a real problem in education, but bad thinking about testing and data use has made the data cure worse than the disease….

The leadership decision at stake is how much data to collect. I’ve heard variations on “In God we trust; all others bring data” at any number of conferences and beginning-of-school-year speeches. But the mantra “we believe in data” is actually only shorthand for “we believe our actions should be informed by the best available data.” In education, that mostly means testing. In other fields, the kind of process is different, but the issue is the same. The key question is not “Will the data be useful?” (of course it can be) or “Will the data be interesting?” (yes, again). The proper question for leaders to ask is: will the data help us make better-enough decisions to be worth the cost of getting and using it? So far, the answer is “no.”

Nationwide data suggests that the growth of data-driven schooling hasn’t worked even by its own lights. Harvard professor Daniel Koretz says “The best estimate is that test-based accountability may have produced modest gains in elementary-school mathematics but no appreciable gains in either reading or high-school mathematics — even though reading and mathematics have been its primary focus.”

We wanted data to help us get past the problem of too many students learning too little, but it turns out that data is an insufficient, even misleading answer. It’s possible that all we’ve learned from our hyper-focus on data is that better instruction won’t come from more detailed information, but from changing what people do. That’s what data-driven reform is meant for, of course: convincing teachers of the need to change and focusing where they need to change….(More)”.

All of Us Research Program Expands Data Collection Efforts with Fitbit


NIH Press Release: “The All of Us Research Program has launched the Fitbit Bring-Your-Own-Device (BYOD) project. Now, in addition to providing health information through surveys, electronic health records, and biosamples, participants can choose to share data from their Fitbit accounts to help researchers make discoveries. The project is a key step for the program in integrating digital health technologies for data collection.

Digital health technologies, like mobile apps and wearable devices, can gather data outside of a hospital or clinic. This data includes information about physical activity, sleep, weight, heart rate, nutrition, and water intake, which can give researchers a more complete picture of participants’ health. The All of Us Research Program is now gathering this data in addition to surveys, electronic health record information, physical measurements, and blood and urine samples, working to make the All of Us resource one of the largest and most diverse data sets of its kind for health research.

“Collecting real-world, real-time data through digital technologies will become a fundamental part of the program,” said Eric Dishman, director of the All of Us Research Program. “This information, in combination with many other data types, will give us an unprecedented ability to better understand the impact of lifestyle and environment on health outcomes and, ultimately, develop better strategies for keeping people healthy in a very precise, individualized way.”…

All of Us is developing additional plans to incorporate digital health technologies. A second project with Fitbit is expected to launch later in the year. It will include providing devices to a limited number of All of Us participants who will be randomly invited to take part, to enable them to share wearable data with the program. And All of Us will add connections to other devices and apps in the future to further expand data collection efforts and engage participants in new ways….(More)”.

The Internet of Bodies: A Convenient—and, Yes, Creepy—New Platform for Data Discovery


David Horrigan at ALM: “In the Era of the Internet of Things, we’ve become (at least somewhat) comfortable with our refrigerators knowing more about us than we know about ourselves and our Apple watches transmitting our every movement. The Internet of Things has even made it into the courtroom in cases such as the hot tub saga of Amazon Echo’s Alexa in State v. Bates and an unfortunate wife’s Fitbit in State v. Dabate.

But the Internet of Bodies?…

“The Internet of Bodies refers to the legal and policy implications of using the human body as a technology platform,” said Northeastern University law professor Andrea Matwyshyn, who also serves as co-director of Northeastern’s Center for Law, Innovation, and Creativity (CLIC).

“In brief, the Internet of Things (IoT) is moving onto and inside the human body, becoming the Internet of Bodies (IoB),” Matwyshyn added….


The Internet of Bodies is not merely a theoretical discussion of what might happen in the future. It’s happening already.

Former U.S. Vice President Dick Cheney revealed in 2013 that his physicians ordered the wireless capabilities of his heart implant disabled out of concern for potential assassin hackers, and in 2017, the U.S. Food and Drug Administration recalled almost half a million pacemakers over security issues requiring a firmware update.

It’s not just former vice presidents and heart patients becoming part of the Internet of Bodies. Northeastern’s Matwyshyn notes that so-called “smart pills” with sensors can report back health data from your stomach to smartphones, and a self-tuning brain implant is being tested to treat Alzheimer’s and Parkinson’s.

So, what’s not to like?

Better with Bacon?

“We are attaching everything to the Internet whether we need to or not,” Matwyshyn said, calling it the “Better with Bacon” problem, noting that—as bacon has become a popular condiment in restaurants—chefs are putting it on everything from drinks to cupcakes.

“It’s great if you love bacon, but not if you’re a vegetarian or if you just don’t like bacon. It’s not a bonus,” Matwyshyn added.

Matwyshyn’s bacon analogy raises interesting questions: Do we really need to connect everything to the Internet? Do the data privacy and data protection risks outweigh the benefits?

The Northeastern Law professor divides these IoB devices into three generations: 1) “body external” devices, such as Fitbits and Apple watches; 2) “body internal” devices, including Internet-connected pacemakers, cochlear implants, and digital pills; and 3) “body embedded” devices, hardwired technology in which the human brain melds with external devices and the body has a real-time connection to a remote machine with live updates.

Chip Party for Chipped Employees

A Wisconsin company, Three Square Market, made headlines in 2017—including an appearance on The Today Show—when the company microchipped its employees, not unlike what veterinarians do with the family pet. Not surprisingly, the company touted the benefits of implanting microchips under the skin of employees, including being able to wave one’s hand at a door instead of having to carry a badge or use a password….(More)”.

The Age of Surveillance Capitalism


Book by Shoshana Zuboff: “The challenges to humanity posed by the digital future, the first detailed examination of the unprecedented form of power called “surveillance capitalism,” and the quest by powerful corporations to predict and control our behavior.

Shoshana Zuboff’s interdisciplinary breadth and depth enable her to come to grips with the social, political, business, and technological meaning of the changes taking place in our time. We are at a critical juncture in the confrontation between the vast power of giant high-tech companies and government, the hidden economic logic of surveillance capitalism, and the propaganda of machine supremacy that threaten to shape and control human life. Will the brazen new methods of social engineering and behavior modification threaten individual autonomy and democratic rights and introduce extreme new forms of social inequality? Or will the promise of the digital age be one of individual empowerment and democratization?

The Age of Surveillance Capitalism is neither a hand-wringing narrative of danger and decline nor a digital fairy tale. Rather, it offers a deeply reasoned and evocative examination of the contests over the next chapter of capitalism that will decide the meaning of information civilization in the twenty-first century. The stark issue at hand is whether we will be the masters of information and machines or their slaves. …(More)”.

Mining the Social Web: Data Mining Facebook, Twitter, LinkedIn, Instagram, GitHub, and More


Book (New 3rd Edition) by Matthew A. Russell and Mikhail Klassen:  “Mine the rich data tucked away in popular social websites such as Twitter, Facebook, LinkedIn, and Instagram. With the third edition of this popular guide, data scientists, analysts, and programmers will learn how to glean insights from social media—including who’s connecting with whom, what they’re talking about, and where they’re located—using Python code examples, Jupyter notebooks, or Docker containers.

In part one, each standalone chapter focuses on one aspect of the social landscape, including each of the major social sites, as well as web pages, blogs and feeds, mailboxes, GitHub, and a newly added chapter covering Instagram. Part two provides a cookbook with two dozen bite-size recipes for solving particular issues with Twitter….(More)”.

China will now officially try to extend its Great Firewall to blockchains


Mike Orcutt at Technology Review: “China’s crackdown on blockchain technology has taken another step: the country’s internet censorship agency has just approved new regulations aimed at blockchain companies. 

Hand over the data: The Cyberspace Administration of China (CAC) will require any “entities or nodes” that provide “blockchain information services” to collect users’ real names and national ID or telephone numbers, and allow government officials to access that data.

It will ban companies from using blockchain technology to “produce, duplicate, publish, or disseminate” any content that Chinese law prohibits. Last year, internet users evaded censors by recording the content of two banned articles on the Ethereum blockchain. The rules, first proposed in October, will go into effect next month.

Defeating the purpose? For more than a year, China has been cracking down on cryptocurrency trading and its surrounding industry while also singing the praises of blockchain. It appears its goal is to take advantage of the resiliency and tamper-proof nature of blockchains while canceling out their most radical attribute: censorship resistance….(More)”.
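The tamper-evidence that makes on-chain content hard to censor comes from chaining cryptographic hashes: each block's hash covers the previous block's hash, so editing any recorded entry invalidates every later link. A minimal sketch in Python (illustrative only, not any real blockchain implementation):

```python
import hashlib

GENESIS = "0" * 64  # placeholder "previous hash" for the first block

def block_hash(prev_hash, content):
    """Hash a block's content together with the previous block's hash."""
    return hashlib.sha256((prev_hash + content).encode("utf-8")).hexdigest()

def build_chain(entries):
    """Build a list of (content, hash) pairs, each hash chained to the last."""
    chain, prev = [], GENESIS
    for content in entries:
        h = block_hash(prev, content)
        chain.append((content, h))
        prev = h
    return chain

def verify_chain(chain):
    """Recompute every hash; an edited entry breaks its own link and all later ones."""
    prev = GENESIS
    for content, h in chain:
        if block_hash(prev, content) != h:
            return False
        prev = h
    return True
```

This is why the banned articles recorded on Ethereum were effectively permanent: removing or rewriting them would require redoing the work behind every subsequent block, which no single party controls.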

Your old tweets give away more location data than you think


Issie Lapowsky at Wired: “An international group of researchers has developed an algorithmic tool that uses Twitter to automatically predict exactly where you live in a matter of minutes, with more than 90 percent accuracy. It can also predict where you work, where you pray, and other information you might rather keep private, like, say, whether you’ve frequented a certain strip club or gone to rehab.

The tool, called LPAuditor (short for Location Privacy Auditor), exploits what the researchers call an “invasive policy” Twitter deployed after it introduced the ability to tag tweets with a location in 2009. For years, users who chose to geotag tweets with any location, even something as geographically broad as “New York City,” also automatically gave their precise GPS coordinates. Users wouldn’t see the coordinates displayed on Twitter. Nor would their followers. But the GPS information would still be included in the tweet’s metadata and accessible through Twitter’s API.

Twitter didn’t change this policy across its apps until April of 2015. Now, users must opt in to share their precise location—and, according to a Twitter spokesperson, a very small percentage of people do. But the GPS data people shared before the update remains available through the API to this day.

The researchers developed LPAuditor to analyze those geotagged tweets and infer detailed information about people’s most sensitive locations. They outline this process in a new, peer-reviewed paper that will be presented at the Network and Distributed System Security Symposium next month. By analyzing clusters of coordinates, as well as timestamps on the tweets, LPAuditor was able to suss out where tens of thousands of people lived, worked, and spent their private time…(More)”.
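The clustering-and-timestamp idea behind LPAuditor can be illustrated with a toy sketch (this is not the paper's actual algorithm; the data, grid size, and nighttime heuristic below are all invented): snap GPS points to coarse grid cells and treat the cell with the most late-night activity as the likely home.

```python
from collections import defaultdict

# Toy geotagged tweets: (latitude, longitude, hour_of_day). Invented data.
TWEETS = [
    (40.7501, -73.9903, 23), (40.7502, -73.9901, 2), (40.7500, -73.9902, 22),
    (40.7589, -73.9851, 10), (40.7588, -73.9852, 14), (40.7590, -73.9850, 11),
]

def grid_key(lat, lon, cell=0.001):
    """Snap a coordinate to a coarse grid cell (~100 m) so nearby points cluster."""
    return (round(lat / cell), round(lon / cell))

def infer_home(tweets, night=(22, 6)):
    """Return the grid cell with the most tweets posted during nighttime hours."""
    start, end = night
    night_counts = defaultdict(int)
    for lat, lon, hour in tweets:
        if hour >= start or hour < end:
            night_counts[grid_key(lat, lon)] += 1
    return max(night_counts, key=night_counts.get) if night_counts else None
```

Even this crude heuristic separates the two clusters in the sample data, picking out the one visited late at night; the real system layers on much more careful clustering and timing analysis, which is what pushes its accuracy above 90 percent.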

High-performance medicine: the convergence of human and artificial intelligence


Eric Topol in Nature: “The use of artificial intelligence, and the deep-learning subtype in particular, has been enabled by the use of labeled big data, along with markedly enhanced computing power and cloud storage, across all sectors. In medicine, this is beginning to have an impact at three levels: for clinicians, predominantly via rapid, accurate image interpretation; for health systems, by improving workflow and the potential for reducing medical errors; and for patients, by enabling them to process their own data to promote health. The current limitations, including bias, privacy and security, and lack of transparency, along with the future directions of these applications will be discussed in this article. Over time, marked improvements in accuracy, productivity, and workflow will likely be actualized, but whether that will be used to improve the patient–doctor relationship or facilitate its erosion remains to be seen….(More)”.

Paying Users for Their Data Would Exacerbate Digital Inequality


Blog post by Eline Chivot: “Writing ever more complicated and intrusive rules about data processing and data use has become the new fad in policymaking. Many are lending an ear to tempting yet ill-advised proposals to treat personal data as a traditional finite resource. The latest example can be found in an article, A Blueprint for a Better Digital Society, by Glen Weyl, an economist at Microsoft Research, and Jaron Lanier, a computer scientist and writer. Not content with Internet users being able to access many online services like Bing and Twitter for free, they want online users to be paid in cash for the data they provide. To say that this proposal is flawed is an understatement. It’s flawed for three main reasons: 1) consumers would lose significant shared value in exchange for minimal cash compensation; 2) higher-income individuals would benefit at the expense of the poor; and 3) transaction costs would increase substantially, further reducing value for consumers and limiting opportunities for businesses to innovate with the data.

Weyl and Lanier’s argument is motivated by the belief that because Internet users are getting so many valuable services—like search, email, maps, and social networking—for free, they must be paying with their data. Therefore, they argue, if users are paying with their data, they should get something in return. Never mind that they do get something in return: valuable digital services that they do not pay for monetarily. But Weyl and Lanier say this is not enough, and consumers should get more.

While this idea may sound good on paper, in practice, it would be a disaster.

…Weyl and Lanier’s self-declared objective is to ensure digital dignity, but in practice this proposal would disrupt the equal treatment users receive from digital services today by valuing users based on their net worth. In this techno-socialist nirvana, to paraphrase Orwell, some pigs would be more equal than others. The French Data Protection Authority, CNIL, itself raised concerns about treating data as a commodity, warning that doing so would jeopardize society’s humanist values and fundamental rights which are, in essence, priceless.

To ensure “a better digital society,” companies should continue to be allowed to decide the best Internet business models based on what consumers demand. Data is neither cash nor a commodity, and pursuing policies based on this misconception will damage the digital economy and make the lives of digital consumers considerably worse….(More)”.