Twitter might have a better read on floods than NOAA


Interview by Justine Calma: “Frustrated tweets led scientists to believe that tidal floods along the East Coast and Gulf Coast of the US are more annoying than official tide gauges suggest. Half a million geotagged tweets showed researchers that people were talking about disruptively high waters even when government gauges hadn’t recorded tide levels high enough to be considered a flood.

Capturing these reactions on social media can help authorities better understand and address the more subtle, insidious ways that climate change is playing out in people’s daily lives. Coastal flooding is becoming a bigger problem as sea levels rise, but a study published recently in the journal Nature Communications suggests that officials aren’t doing a great job of recording that.

The Verge spoke with Frances Moore, lead author of the new study and a professor at the University of California, Davis. This isn’t the first time that she’s turned to Twitter for her climate research. Her previous research also found that people tend to stop reacting to unusual weather after dealing with it for a while — sometimes in as little as two years. Similar data from Twitter has been used to study how people coped with earthquakes and hurricanes…(More)”.
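The core comparison described above — flood-related tweet activity versus official gauge thresholds — can be sketched in a few lines. This is purely illustrative: the threshold values, baseline, and function names are assumptions for the sake of the sketch, not the study’s actual methodology or code.

```python
# Hypothetical sketch: flag days when flood-related tweet activity spikes
# even though the official gauge stays below its "minor flood" threshold.
# All names, thresholds, and data below are illustrative assumptions.

FLOOD_THRESHOLD_M = 0.55   # assumed gauge height for an "official" flood
TWEET_BASELINE = 20        # assumed normal daily count of flood-related tweets

def unrecorded_flood_days(daily_gauge_m, daily_flood_tweets):
    """Return day indices where tweets suggest flooding the gauge missed."""
    flagged = []
    for day, (gauge, tweets) in enumerate(zip(daily_gauge_m, daily_flood_tweets)):
        # A spike in flood chatter without an official flood reading
        if tweets > 2 * TWEET_BASELINE and gauge < FLOOD_THRESHOLD_M:
            flagged.append(day)
    return flagged

gauge = [0.30, 0.52, 0.60, 0.40]
tweets = [15, 70, 90, 18]
# Day 1 is flagged: many flood tweets, but the gauge sits below threshold.
print(unrecorded_flood_days(gauge, tweets))
```

The point of the sketch is the mismatch the researchers exploited: social-media signal and instrument signal can disagree, and the disagreement itself is informative.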

The many perks of using critical consumer user data for social benefit


Sushant Kumar at LiveMint: “Business models that thrive on user data have created profitable global technology companies. For comparison, the combined market capitalization of just three tech companies, Google (Alphabet), Facebook and Amazon, is higher than the total market capitalization of all listed firms in India. Almost 98% of Facebook’s revenue and 84% of Alphabet’s come from serving targeted advertising powered by data collected from users. No doubt, these tech companies provide valuable services to consumers. It is also true that profits are concentrated with private corporations, while the societal value for the contributors of that data, that is, the users, can be much more significant….

In the existing economic construct, private firms are able to deploy top scientists and sophisticated analytical tools to collect data, derive value and monetize the insights.

Imagine if personalization at this scale was available for more meaningful outcomes, such as for administering personalized treatment for diabetes, recommending crop patterns, optimizing water management and providing access to credit to the unbanked. These socially beneficial applications of data can generate undisputedly massive value.

However, handling critical data with accountability to prevent misuse is a complex and expensive task. What’s more, private sector players do not have any incentives to share the data they collect. These challenges can be resolved by setting up specialized entities that can manage data—collect, analyse, provide insights, manage consent and access rights. These entities would function as a trusted intermediary with public purpose, and may be named “data stewards”….(More)”.

See also: http://datastewards.net/ and https://datacollaboratives.org/

An Algorithm That Grants Freedom, or Takes It Away


Cade Metz and Adam Satariano at The New York Times: “…In Philadelphia, an algorithm created by a professor at the University of Pennsylvania has helped dictate the experience of probationers for at least five years.

The algorithm is one of many making decisions about people’s lives in the United States and Europe. Local authorities use so-called predictive algorithms to set police patrols, prison sentences and probation rules. In the Netherlands, an algorithm flagged welfare fraud risks. A British city rates which teenagers are most likely to become criminals.

Nearly every state in America has turned to this new sort of governance algorithm, according to the Electronic Privacy Information Center, a nonprofit dedicated to digital rights. Algorithm Watch, a watchdog in Berlin, has identified similar programs in at least 16 European countries.

As the practice spreads into new places and new parts of government, United Nations investigators, civil rights lawyers, labor unions and community organizers have been pushing back.

They are angered by a growing dependence on automated systems that are taking humans and transparency out of the process. It is often not clear how the systems are making their decisions. Is gender a factor? Age? ZIP code? It’s hard to say, since many states and countries have few rules requiring that algorithm-makers disclose their formulas.

They also worry that the biases — involving race, class and geography — of the people who create the algorithms are being baked into these systems, as ProPublica has reported. In San Jose, Calif., where an algorithm is used during arraignment hearings, an organization called Silicon Valley De-Bug interviews the family of each defendant, takes this personal information to each hearing and shares it with defenders as a kind of counterbalance to algorithms.

Two community organizers, the Media Mobilizing Project in Philadelphia and MediaJustice in Oakland, Calif., recently compiled a nationwide database of prediction algorithms. And Community Justice Exchange, a national organization that supports community organizers, is distributing a 50-page guide that advises organizers on how to confront the use of algorithms.

The algorithms are supposed to reduce the burden on understaffed agencies, cut government costs and — ideally — remove human bias. Opponents say governments haven’t shown much interest in learning what it means to take humans out of the decision making. A recent United Nations report warned that governments risked “stumbling zombie-like into a digital-welfare dystopia.”…(More)”.

Federal Agencies Use Cellphone Location Data for Immigration Enforcement


Byron Tau and Michelle Hackman at the Wall Street Journal: “The Trump administration has bought access to a commercial database that maps the movements of millions of cellphones in America and is using it for immigration and border enforcement, according to people familiar with the matter and documents reviewed by The Wall Street Journal.

The location data is drawn from ordinary cellphone apps, including those for games, weather and e-commerce, for which the user has granted permission to log the phone’s location.

The Department of Homeland Security has used the information to detect undocumented immigrants and others who may be entering the U.S. unlawfully, according to these people and documents.

U.S. Immigration and Customs Enforcement, a division of DHS, has used the data to help identify immigrants who were later arrested, these people said. U.S. Customs and Border Protection, another agency under DHS, uses the information to look for cellphone activity in unusual places, such as remote stretches of desert that straddle the Mexican border, the people said.

The federal government’s use of such data for law enforcement purposes hasn’t previously been reported.

Experts say the information amounts to one of the largest known troves of bulk data being deployed by law enforcement in the U.S.—and that the use appears to be on firm legal footing because the government buys access to it from a commercial vendor, just as a private company could, though its use hasn’t been tested in court.

“This is a classic situation where creeping commercial surveillance in the private sector is now bleeding directly over into government,” said Alan Butler, general counsel of the Electronic Privacy Information Center, a think tank that pushes for stronger privacy laws.

According to federal spending contracts, a division of DHS that creates experimental products began buying location data in 2017 from Venntel Inc. of Herndon, Va., a small company that shares several executives and patents with Gravy Analytics, a major player in the mobile-advertising world.

In 2018, ICE bought $190,000 worth of Venntel licenses. Last September, CBP bought $1.1 million in licenses for three kinds of software, including Venntel subscriptions for location data. 

The Department of Homeland Security and its components acknowledged buying access to the data, but wouldn’t discuss details about how they are using it in law-enforcement operations. People familiar with some of the efforts say it is used to generate investigative leads about possible illegal border crossings and for detecting or tracking migrant groups.

CBP has said it has privacy protections and limits on how it uses the location information. The agency says that it accesses only a small amount of the location data and that the data it does use is anonymized to protect the privacy of Americans….(More)”

If China valued free speech, there would be no coronavirus crisis


Verna Yu in The Guardian: “…Despite the flourishing of social media, information is more tightly controlled in China than ever. In 2013, an internal Communist party edict known as Document No 9 ordered cadres to tackle seven supposedly subversive influences on society. These included western-inspired notions of press freedom, “universal values” of human rights, civil rights and civic participation. Even within the Communist party, cadres are threatened with disciplinary action for expressing opinions that differ from the leadership.

Compared with 17 years ago, Chinese citizens enjoy even fewer rights of speech and expression. A few days after 34-year-old Li posted a note in his medical school alumni social media group on 30 December, stating that seven workers from a local live-animal market had been diagnosed with an illness similar to Sars and were quarantined in his hospital, he was summoned by police. He was made to sign a humiliating statement saying he understood if he “stayed stubborn and failed to repent and continue illegal activities, (he) will be disciplined by the law”….

Unless Chinese citizens’ freedom of speech and other basic rights are respected, such crises will only happen again. In a more globalised world, the magnitude may become even greater – the death toll from the coronavirus outbreak is already comparable to the total Sars death toll.

Human rights in China may appear to have little to do with the rest of the world but as we have seen in this crisis, disaster could occur when China thwarts the freedoms of its citizens. Surely it is time the international community takes this issue more seriously….(More)”.

Re-thinking Public Innovation, Beyond Innovation in Government


Jocelyne Bourgon at Dubai Policy Review: “The situation faced by public servants and public sector leaders today may not be more challenging in absolute terms than in previous generations, but it is certainly different. The problems societies face today stem from a world characterised by increasing complexity, hyper-connectivity and a high level of uncertainty. In this context, the public sector’s role in developing innovative solutions is critical.

Despite the need for public innovation, public servants (when asked to discuss the challenges they face in New Synthesis labs and workshops) tend to present a narrow perspective, rarely going beyond the boundary of their respective units. While recent public sector reforms have encouraged a drive for efficiency and productivity, they have also generated a narrow and sometimes distorted view of the scale of the role of government in society.

Ideas and principles matter. The way one thinks has a direct impact on the solutions that will be found and the results that will be achieved. Innovation in government has received much attention over the years. For the most part, the focus has been introspective, giving special attention to the modernisation of public sector systems and practices as well as the service delivery functions of government. The focus of attention in these conversations is on innovation in government and as a result may have missed the most important contributions of government to public innovation….

I define public innovation as “innovative solutions serving a public purpose that require the use of public means”. What distinguishes public innovation from social innovation is the intimate link to government actions and the use of instruments of the State. From this perspective, far from being risk averse, the State is the ultimate risk taker in society. Government takes risks on a scale that no other sector or agent in society could take on and intervenes in areas where the forces of the market or the capacity of civil society would be unable to go. This broader perspective reveals some of the distinctive characteristics of public innovation….(More)”

Astroturfing Is Bad But It's Not the Whole Problem


Beth Noveck at NextGov: “In November 2019, Securities and Exchange Commission Chairman Jay Clayton boasted that draft regulations requiring proxy advisors to run their recommendations past the companies they are evaluating before giving that advice to their clients received dozens of letters of support from ordinary Americans. But the letters he cited turned out to be fakes, sent by corporate advocacy groups and signed with the names of people who never saw the comments or who do not exist at all.

When interest groups manufacture the appearance that comments come from the “ordinary public,” it’s known as astroturfing. The practice is the subject of today’s House Committee on Financial Services Subcommittee on Oversight and Investigations hearing, entitled “Fake It till They Make It: How Bad Actors Use Astroturfing to Manipulate Regulators, Disenfranchise Consumers, and Subvert the Rulemaking Process.” 

Of course, commissioners who cherry-pick from among the public comments looking for the information to prove themselves right should be called out and it is tempting to use the occasion to embarrass those who do, especially when they are from the other party. But focusing on astroturfing distracts attention away from the more salient and urgent problem: the failure to obtain the best possible evidence by creating effective public participation opportunities in federal rulemaking. 

Thousands of federal regulations are enacted every year that touch every aspect of our lives, and under the 1946 Administrative Procedure Act, the public has a right to participate.

Participation in rulemaking advances both the legitimacy and the quality of regulations by enabling agencies—and the congressional committees that oversee them—to obtain information from a wider audience of stakeholders, interest groups, businesses, nonprofits, academics and interested individuals. Participation also provides a check on the rulemaking process, helping to ensure public scrutiny.

But the shift over the last two decades to a digital process, where people submit comments via regulations.gov has made commenting easier yet also inadvertently opened the floodgates to voluminous, duplicative and, yes, even “fake” comments, making it harder for agencies to extract the information needed to inform the rulemaking process.

Although many agencies receive only a handful of comments, some receive voluminous responses, thanks to this ease of digital commenting. In 2017, when the Federal Communications Commission sought to repeal an earlier Obama-era rule requiring internet service providers to observe net neutrality, the agency received 22 million comments in response. 

There is a remedy. Tools have evolved to make quick work of large data stores….(More)”. See also https://congress.crowd.law/
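One such tool might, for example, collapse the duplicative form-letter comments mentioned above by normalizing text and grouping on a hash. The sketch below is purely illustrative — real comment-analysis pipelines use richer similarity measures (shingling, embeddings) than exact-match-after-normalization, and all names here are assumptions.

```python
# Illustrative sketch: group duplicate form-letter comments so an agency
# can see how many submissions share essentially the same text.

import hashlib
import re
from collections import Counter

def normalize(comment: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace."""
    text = re.sub(r"[^a-z0-9\s]", "", comment.lower())
    return re.sub(r"\s+", " ", text).strip()

def group_duplicates(comments):
    """Count how many submissions share each normalized text."""
    return Counter(
        hashlib.sha256(normalize(c).encode()).hexdigest() for c in comments
    )

comments = [
    "I support this rule!",
    "i SUPPORT this rule",
    "This rule hurts small business.",
]
groups = group_duplicates(comments)
print(sorted(groups.values(), reverse=True))  # → [2, 1]
```

Even this crude pass separates one organic comment from a two-copy form letter; the harder problem the article points to is extracting substantive information from what remains.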

Why It’s So Hard for Users to Control Their Data


Bhaskar Chakravorti at the Harvard Business Review: “A recent IBM study found that 81% of consumers say they have become more concerned about how their data is used online. But most users continue to hand over their data online and tick consent boxes impatiently, giving rise to a “privacy paradox,” where users’ concerns aren’t reflected in their behaviors. It’s a daunting challenge for regulators and companies alike to navigate the future of data governance.

In my view, we’re missing a system that defines and grants users “digital agency” — the ability to own the rights to their personal data, manage access to this data and, potentially, be compensated fairly for such access. This would make data similar to other forms of personal property: a home, a bank account or even a mobile phone number. But before we can imagine such a state, we need to examine three central questions: Why don’t users care enough to take actions that match their concerns? What are the possible solutions? Why is this so difficult?

Why don’t users’ actions match their concerns?

To start, data is intangible. We don’t actively hand it over. As a byproduct of our online activity, it is easy to ignore or forget about. A lot of data harvesting is invisible to the consumer — they see the results in marketing offers, free services, customized feeds, tailored ads, and beyond.

Second, even if users wanted to negotiate more data agency, they have little leverage. Normally, in well-functioning markets, customers can choose from a range of competing providers. But this is not the case if the service is a widely used digital platform. For many, leaving a platform like Facebook feels like it would come at a high cost in terms of time and effort and that they have no other option for an equivalent service with connections to the same people. Plus, many people use their Facebook logins on numerous apps and services. On top of that, Facebook has bought up many of its natural alternatives, like Instagram. It’s equally hard to switch away from other major platforms, like Google or Amazon, without a lot of personal effort.

Third, while a majority of American users believe more regulation is needed, they are not as enthusiastic about broad regulatory solutions being imposed. Instead, they would prefer to have better data management tools at their disposal. However, managing one’s own data would be complex – and that would deter users from embracing such an option….(More)”.

Change of heart: how algorithms could revolutionise organ donations


Tej Kohli at TheNewEconomy: “Artificial intelligence (AI) and biotechnology are both on an exponential growth trajectory, with the potential to improve how we experience our lives and even to extend life itself. But few have considered how these two frontier technologies could be brought together symbiotically to tackle global health and environmental challenges…

For example, combination technologies could tackle a global health issue such as organ donation. According to the World Health Organisation, an average of around 100,800 solid organ transplants were performed each year as of 2008. Yet, in the US, there are nearly 113,000 people waiting for a life-saving organ transplant, while thousands of good organs are discarded each year. For years, those in need of a kidney transplant had limited options: they either had to find a willing and biologically viable living donor, or wait for a viable deceased donor to show up in their local hospital.

But with enough patients and willing donors, big data and AI make it possible to facilitate far more matches than this one-to-one system allows, through a system of paired kidney donation. Patients can now procure a donor who is not a biological fit and still receive a kidney, because AI can match donors to recipients across a massive array of patient-donor relationships. In fact, a single person who steps forward to donate a kidney – to a loved one or even to a stranger – can set off a domino effect that saves dozens of lives by resolving the missing link in a long chain of pairings….
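The chain effect described above can be pictured as a graph problem: each incompatible patient-donor pair is a node, and an edge from pair i to pair j means pair i’s donor is compatible with pair j’s patient. The sketch below greedily extends a chain from one altruistic donor. It is a toy illustration under assumed names and data — real paired-donation programs solve a much harder optimization over cycles and chains, not a greedy walk.

```python
# Toy sketch of a kidney donation chain over a compatibility graph.
# Nodes are incompatible patient-donor pairs; an edge i -> j means
# pair i's donor can donate to pair j's patient.

def donation_chain(altruistic_compatible, edges, pairs):
    """Greedily extend a chain from an altruistic donor.

    altruistic_compatible: pairs whose patient the altruistic donor can serve.
    edges: dict mapping pair -> list of pairs its donor is compatible with.
    pairs: the set of all pair identifiers.
    """
    chain, used = [], set()
    frontier = [p for p in altruistic_compatible if p in pairs]
    while frontier:
        current = frontier[0]        # greedily pick the first compatible pair
        chain.append(current)
        used.add(current)
        # That pair's donor now donates forward to any unused compatible pair.
        frontier = [p for p in edges.get(current, []) if p not in used]
    return chain

edges = {"A": ["B"], "B": ["C"], "C": []}
print(donation_chain(["A"], edges, {"A", "B", "C"}))  # → ['A', 'B', 'C']
```

Here one altruistic kidney unlocks three transplants: A’s donor frees B’s patient, whose donor frees C’s — the domino effect the article describes.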

The moral and ethical implications of today’s frontier technologies are far-reaching. Fundamental questions have not been adequately addressed. How will algorithms weigh the needs of poor and wealthy patients? Should a donor organ be sent to a distant patient – potentially one in a different country – with a low rejection risk or to a nearby patient whose rejection risk is only slightly higher?

These are important questions, but I believe we should get combination technologies up and working, and then decide on the appropriate controls. The matching power of AI means that eight lives could be saved by just one deceased organ donor; innovations in biotechnology could ensure that organs are never wasted. The faster these technologies advance, the more lives we can save…(More)”.

Do you trust your fellow citizens more than your leaders?


Domhnall O’Sullivan at swissinfo.ch: “Voting up to four times a year, as the Swiss do, is a nice democratic right, but it also means keeping up with a lot of topics.

Usually this means following the media, talking to family and friends, watching what political parties and campaigners are saying, and wading through information sent out by authorities before vote day.

Last week, in advance of the next national ballot on February 9, 21,000 voters in the town of Sion got something new in the post: an informational sheet, drafted by a group of 20 randomly selected locals, giving a citizen’s take on what’s at stake.

The document, written by the citizen panel over two weekends last November, is the first output of ‘demoscan’: a project aiming to spur participation in a country where turnout rates are low and electoral issues sometimes complex.

On the front side, the issue (a proposed increase in the building of social housing) is presented in eight key points, listed in order of perceived importance; on the back, there are three arguments for and three arguments against the proposal.

At first reading, it’s not clear how different or more digestible the information is compared with what’s sent out by federal authorities, aside from the fact that unlike in the government’s package, there is no recommendation on how to vote. (Official materials include the position of parliament and government on each issue).

Demoscan project leader Nenad Stojanović says, however, that the main added value is that the document presents a “filtering” and “prioritising” of information – ultimately giving an overview of the most pertinent points as seen through the eyes of 20 “normal” citizens.

He also reckons that the process was as important as the output.

By selecting the participants randomly and representatively, the project included social groups not normally involved in the political debate, he says. Four days of research and deliberation were like a “democracy school”, teaching them about the functioning of previously distant institutions….(More)”.