Transparency Trumps Technology: Reconciling Open Meeting Laws with Modern Technology


Note by Cassandra B. Roeder in Wm. & Mary L. Rev: “As technological advances revolutionize communication patterns in the private and public sectors, government actors must consider their reactions carefully. Public representatives may take advantage of modern technology to improve communications with constituents and to operate more efficiently. However, this progress must be made with an eye to complying with certain statutory restrictions placed on public bodies…
This Note will argue that, in order to comply with the spirit and the letter of open meeting laws, public bodies should limit use of modern technology to: (1) providing information and soliciting public feedback through noninteractive websites, and (2) enabling remote participation of public body members at meetings. This Note will then contend that public bodies should not utilize interactive online forums or group e-mails. Although these technologies may offer certain obvious benefits, this Note argues that: (1) they do not comply with current open meeting law requirements, and (2) legislatures should not alter open meeting laws to allow for their use. It concludes that although more permissive statutes might lead to an increase in civic participation and government efficiency, these potential gains must be sacrificed in order to preserve transparency, the primary purpose of open meeting laws…”


The Emerging Science of Superspreaders (And How to Tell If You're One Of Them)


Emerging Technology From the arXiv: “Who are the most influential spreaders of information on a network? That’s a question that marketers, bloggers, news services and even governments would like answered. Not least because the answer could provide ways to promote products quickly, to boost the popularity of political parties above their rivals and to seed the rapid spread of news and opinions.
So it’s not surprising that network theorists have spent some time thinking about how best to identify these people and to check how the information they receive might spread around a network. Indeed, they’ve found a number of measures that spot so-called superspreaders, people who spread information, ideas or even disease more efficiently than anybody else.
But there’s a problem. Social networks are so complex that network scientists have never been able to test their ideas in the real world—it has always been too difficult to reconstruct the exact structure of Twitter or Facebook networks, for example. Instead, they’ve created models that mimic real networks in certain ways and tested their ideas on those.
But there is growing evidence that information does not spread through real networks in the same way as it does through these idealised ones. People tend to pass on information only when they are interested in a topic and when they are active, factors that are hard to take into account in a purely topological model of a network.
So the question of how to find the superspreaders remains open. That looks set to change thanks to the work of Sen Pei at Beihang University in Beijing and a few pals who have performed the first study of superspreaders on real networks.
These guys have studied the way information flows around various networks, ranging from the LiveJournal blogging network to the network of scientific publishing at the American Physical Society, as well as subsets of the Twitter and Facebook networks. And they’ve discovered the key indicator that identifies superspreaders in these networks.
In the past, network scientists have developed a number of mathematical tests to measure the influence that individuals have on the spread of information through a network. For example, one measure is simply the number of connections a person has to other people in the network, a property known as their degree. The thinking is that the most highly connected people are the best at spreading information.
Another measure uses the famous PageRank algorithm that Google developed for ranking webpages. This works by ranking somebody more highly if they are connected to other highly ranked people.
Then there is ‘betweenness centrality’, a measure of how many of the shortest paths across a network pass through a specific individual. The idea is that these people are more able to inject information into the network.
And finally there is a property of nodes in a network known as their k-core. This is determined by iteratively pruning the peripheries of a network to see what is left. The k-core is the step at which that node or person is pruned from the network. Obviously, the most highly connected survive this process the longest and have the highest k-core score.
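For readers who want to experiment, a minimal sketch of these four measures, written in Python against the open-source networkx library and a toy graph, looks like this (it is purely illustrative and is not the paper’s code):

```python
# An illustrative sketch (not the paper's code) of the four influence
# measures described above, computed with networkx on a small toy graph.
import networkx as nx

G = nx.karate_club_graph()  # a small, well-studied social network

degree = dict(G.degree())                   # number of connections per node
pagerank = nx.pagerank(G)                   # PageRank, adapted from web pages to people
betweenness = nx.betweenness_centrality(G)  # share of shortest paths through each node
k_core = nx.core_number(G)                  # pruning step at which each node is removed

# Compare which node each measure nominates as its top "superspreader" candidate.
for name, scores in [("degree", degree), ("PageRank", pagerank),
                     ("betweenness", betweenness), ("k-core", k_core)]:
    print(f"{name:12s} top node: {max(scores, key=scores.get)}")
```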
The question that Sen and co set out to answer was which of these measures best picked out superspreaders of information in real networks.
They began with LiveJournal, a network of blogs in which individuals maintain lists of friends that represent social ties to other LiveJournal users. This network allows people to repost information from other blogs and to use a reference that links back to the original post. This allows Sen and co to recreate not only the network of social links between LiveJournal users but also the way in which information is spread between them.
Sen and co collected all of the blog posts from February 2010 to November 2011, a total of more than 56 million posts. Of these, some 600,000 contain links to other posts published by LiveJournal users.
The data reveals two important properties of information diffusion. First, only some 250,000 users are actively involved in spreading information. That’s a small fraction of the total.
More significantly, they found that information did not always diffuse across the social network: information could spread between two LiveJournal users even when they had no social connection.
That’s probably because they find this information outside of the LiveJournal ecosystem, perhaps through web searches or via other networks. “Only 31.93% of the spreading posts can be attributed to the observable social links,” they say.
That’s in stark contrast to the assumptions behind many social network models. These simulate the way information flows by assuming that it travels directly through the network from one person to another, like a disease spread by physical contact.
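That contact-style assumption is easy to state in code. The following toy simulation, again in Python with networkx, passes information only along network edges, each informed node informing each uninformed neighbour with some probability; the graph and the probability are illustrative assumptions, not values from the paper:

```python
# A toy simulation of the idealised assumption described above: information
# spreads only along edges, like a disease. All parameters are illustrative.
import random
import networkx as nx

def simulate_spread(G, seed, p=0.1, steps=10):
    """Each step, every informed node informs each uninformed neighbour with probability p."""
    informed = {seed}
    for _ in range(steps):
        newly = {v for u in informed for v in G.neighbors(u)
                 if v not in informed and random.random() < p}
        if not newly:
            break
        informed |= newly
    return informed

G = nx.karate_club_graph()
reach = simulate_spread(G, seed=0)
print(f"Node 0 reached {len(reach)} of {G.number_of_nodes()} nodes")
```

The LiveJournal finding above shows the limits of this picture: with only about a third of reposts traceable to a social link, a simulation of this kind misses most of the real diffusion.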
The work of Sen and co suggests that influences outside the network are crucial too. In practice, information often spreads via several seemingly independent sources within the network at the same time. This has important implications for the way superspreaders can be spotted.
Sen and co say that a person’s degree, the number of other people he or she is connected to, is not as good a predictor of information diffusion as theorists have thought. “We find that the degree of the user is not a reliable predictor of influence in all circumstances,” they say.
What’s more, the PageRank algorithm is often ineffective in this kind of network as well. “Contrary to common belief, although PageRank is effective in ranking web pages, there are many situations where it fails to locate superspreaders of information in reality,” they say….
Ref: arxiv.org/abs/1405.1790 : Searching For Superspreaders Of Information In Real-World Social Media”

Open Source Intelligence in the Twenty-First Century


New book by Christopher Hobbs, Matthew Moran and Daniel Salisbury: “This edited volume takes a fresh look at the subject of open source intelligence (OSINT), exploring both the opportunities and the challenges that this emergent area offers at the beginning of the twenty-first century. In particular, it explores the new methodologies and approaches that technological advances have engendered, while at the same time considering the risks associated with the pervasive nature of the Internet.
Drawing on a diverse range of experience and expertise, the book begins with a number of chapters devoted to exploring the uses and value of OSINT in a general sense, identifying patterns, trends and key areas of debate. The focus of the book then turns to the role and influence of OSINT in three key areas of international security – nuclear proliferation; humanitarian crises; and terrorism. The book offers a timely discussion on the merits and failings of OSINT and provides readers with an insight into the latest and most original research being conducted in this area.”
Table of contents:
PART I: OPEN SOURCE INTELLIGENCE: NEW METHODS AND APPROACHES
1. Exploring the Role and Value of Open Source Intelligence; Stevyn Gibson
2. Towards the Discipline of Social Media Intelligence ‘SOCMINT’; David Omand, Carl Miller and Jamie Bartlett
3. The Impact of OSINT on Cyber-Security; Alastair Paterson and James Chappell
PART II: OSINT AND PROLIFERATION
4. Armchair Safeguards: The Role of OSINT in Proliferation Analysis; Christopher Hobbs and Matthew Moran
5. OSINT and Proliferation Procurement: Combating Illicit Trade; Daniel Salisbury
PART III: OSINT AND HUMANITARIAN CRISES
6. Positive and Negative Noise in Humanitarian Action: The OSINT Dimension; Randolph Kent
7. Human Security Intelligence: Towards a Comprehensive Understanding of Humanitarian Crises; Fred Bruls and Walter Dorn
PART IV: OSINT AND COUNTER-TERRORISM
8. Detecting Events from Twitter: Situational Awareness in the Age of Social Media; Simon Wibberley and Carl Miller
9. Jihad Online: What Militant Groups Say about Themselves and What it Means for Counterterrorism Strategy; John Amble
Conclusion; Christopher Hobbs, Matthew Moran and Daniel Salisbury

Health plan giants to make payment data accessible to public


Paul Demko in ModernHealthCare: “A new initiative by three of the country’s largest health plans has the potential to transform the accessibility of claims payment data, according to healthcare finance experts. UnitedHealthcare, Aetna and Humana announced a partnership on Wednesday with the Health Care Cost Institute to create a payment database that will be available to the public for free. …The database will be created by HCCI, a not-for-profit group established in 2011, from information provided by the insurers. HCCI expects the database to be available in 2015 and anticipates that more health plans will join the initiative before its launch.
UnitedHealthcare is the largest insurer in the country in terms of the number of individuals covered through its products. All three participating plans are publicly traded, for-profit companies.
Stephen Parente, chair of HCCI’s board, said the organization was approached by the insurance companies about the initiative. “I’m not quite sure what the magic trigger was,” said Parente, who is a professor at the University of Minnesota and advised John McCain’s 2008 presidential campaign on healthcare issues. “We’ve kind of proven as a nonprofit and an independent group that we can be trustworthy in working with their data.”
Experts say cost transparency is being spurred by a number of developments in the healthcare sector. The trend towards high-deductible plans is giving consumers a greater incentive to understand how much healthcare costs and to utilize it more efficiently. In addition, the launch of the exchanges under the Patient Protection and Affordable Care Act has brought unprecedented attention to the difficulties faced by individuals in shopping for insurance coverage.
“There’s so many things that are kind of pushing the industry toward this more transparent state,” Hempstead said. “There’s just this drumbeat that people want to have this information.”
Insurers may also be realizing they aren’t likely to have a choice about sharing payment information. In recent years, more and more states have passed laws requiring the creation of claims databases. Currently, 11 states have all payer claims databases, and six other states are in the process of creating such a resource, according to the All-Payer Claims Database Council….”

Rethinking Personal Data: A New Lens for Strengthening Trust


New report from the World Economic Forum: “As we look at the dynamic change shaping today’s data-driven world, one thing is becoming increasingly clear. We really do not know that much about it. Polarized along competing but fundamental principles, the global dialogue on personal data is inchoate and pulled in a variety of directions. It is complicated, conflated and often fueled by emotional reactions more than informed understandings.
The World Economic Forum’s global dialogue on personal data seeks to cut through this complexity. A multi-year initiative with global insights from the highest levels of leadership from industry, governments, civil society and academia, this work aims to articulate an ascendant vision of the value a balanced and human-centred personal data ecosystem can create.
Yet despite these aspirations, there is a crisis in trust. Concerns are voiced from a variety of viewpoints at a variety of scales. Industry, government and civil society are all uncertain on how to create a personal data ecosystem that is adaptive, reliable, trustworthy and fair.
The shared anxieties stem from the overwhelming challenge of transitioning into a hyperconnected world. The growth of data, the sophistication of ubiquitous computing and the borderless flow of data are all outstripping the ability to effectively govern on a global basis. We need the means to effectively uphold fundamental principles in ways fit for today’s world.
Yet despite the size and scope of the complexity, it cannot become a reason for inaction. The need for pragmatic and scalable approaches which strengthen transparency, accountability and the empowerment of individuals has become a global priority.
Tools are needed to answer fundamental questions: Who has the data? Where is the data? What is being done with it? All of these uncertainties need to be addressed for meaningful progress to occur.
Objectives need to be set. The benefits and harms of using personal data need to be more precisely defined. The ambiguity surrounding privacy needs to be demystified and placed into a real-world context.
Individuals need to be meaningfully empowered. Better engagement over how data is used by third parties is one opportunity for strengthening trust. Supporting the ability for individuals to use personal data for their own purposes is another area for innovation and growth. But combined, the overall lack of engagement is undermining trust.
Collaboration is essential. The need for interdisciplinary collaboration between technologists, business leaders, social scientists, economists and policy-makers is vital. The complexities for delivering a sustainable and balanced personal data ecosystem require that these multifaceted perspectives are all taken into consideration.
With a new lens for using personal data, progress can occur.

Figure 1: A new lens for strengthening trust. Source: World Economic Forum

Obama Signs Nation's First 'Open Data' Law


William Welsh in Information Week: “President Barack Obama enacted the nation’s first open data law, signing into law on May 9 bipartisan legislation that requires federal agencies to publish their spending data in a standardized, machine-readable format that the public can access through USASpending.gov.
The Digital Accountability and Transparency Act of 2014 (S. 994) amends the eight-year-old Federal Funding Accountability and Transparency Act to make available to the public specific classes of federal agency spending data “with more specificity and at a deeper level than is currently reported,” a White House statement said….
Advocacy groups applauded the bipartisan legislation, which is being heralded as the nation’s first open data law and furnishes a legislative mandate for Obama’s one-year-old Open Data Policy.
“The DATA Act will unlock a new public resource that innovators, watchdogs, and citizens can mine for valuable and unprecedented insight into federal spending,” said Hudson Hollister, executive director of the Data Transparency Coalition. “America’s tech sector already has the tools to deliver reliable, standardized, open data. [The] historic victory will put our nation’s open data pioneers to work for the common good.”
The DATA Act requires agencies to establish government-wide standards for financial data, adopt accounting approaches developed by the Recovery Act’s Recovery Accountability and Transparency Board (RATB), and streamline agency reporting requirements.
The DATA Act empowers the Secretary of the Treasury to establish a data analytics center, which is modeled on the successful Recovery Operations Center. The new center will support inspectors general and law enforcement agencies in criminal and other investigations, as well as agency program offices in the prevention of improper payments. Assets of the RATB related to the Recovery Operations Center would transfer to the Treasury Department when the board’s authorization expires.
The treasury secretary and the Director of the White House’s Office of Management and Budget are jointly tasked with establishing the standards required to achieve the goals and objectives of the new statute.
To ensure that agencies comply with the reporting requirements, agency inspectors general will report on the quality and accuracy of the financial data provided to USASpending.gov. The Government Accountability Office also will report on the data quality and accuracy and create a Government-wide assessment of the financial data reported…”

New crowdsourcing site like ‘Yelp’ for philanthropy


Vanessa Small in the Washington Post: “Billionaire investor Warren Buffett once said that there is no market test for philanthropy. Foundations with billions in assets often hand out giant grants to charity without critique. One watchdog group wants to change that.
The National Committee for Responsive Philanthropy has created a new Web site that posts public feedback about a foundation’s giving. Think Yelp for the philanthropy sector.
Along with public critiques, the new Web site, Philamplify.org, uploads a comprehensive assessment of a foundation conducted by researchers at the National Committee for Responsive Philanthropy.
The assessment includes a review of the foundation’s goals, strategies, partnerships with grantees, transparency, diversity in its board and how any investments support the mission.
The site also posts recommendations on what would make the foundation more effective in the community. The public can agree or disagree with each recommendation and then provide feedback about the grantmaker’s performance.
People who post to the site can remain anonymous.
NCRP officials hope the site will stir debate about the giving practices of foundations.
“Foundation leaders rarely get honest feedback because no one wants to get on the wrong side of a foundation,” said Lisa Ranghelli, a director at NCRP. “There’s so much we need to do as a society that we just want these philanthropic resources to be used as powerfully as possible and for everyone to feel like they have a voice in how philanthropy operates.”
With nonprofit rating sites such as Guidestar and Charity Navigator, Philamplify is just one more move to create more transparency in the nonprofit sector. But the site might be one of the first to force transparency and public commentary exclusively about the organizations that give grants.
Foundation leaders are open to the site, but say that some grantmakers already use various evaluation methods to improve their strategies.
Groups such as Grantmakers for Effective Organizations and the Center for Effective Philanthropy provide best practices for foundation giving.
The Council on Foundations, an Arlington-based membership organization of foundation groups, offers a list of tools and ideas for foundations to make their giving more effective.
“We will be paying close attention to Philamplify and new developments related to it as the project unfolds,” said Peter Panepento, senior vice president of community and knowledge at the Council on Foundations.
Currently there are three foundations up for review on the Web site: the William Penn Foundation in Philadelphia, which focuses on improving the Greater Philadelphia community; the Robert W. Woodruff Foundation in Atlanta, which gives grants in science and education; and the Lumina Foundation for Education in Indianapolis, which focuses on access to higher learning….”
Officials say Philamplify will focus on the top 100 largest foundations to start. Large foundations would include groups such as the Bill and Melinda Gates Foundation, the Robert Wood Johnson Foundation and Silicon Valley Community Foundation, and the foundations of companies such as Wal-Mart, Wells Fargo, Johnson & Johnson and GlaxoSmithKline.
Although there are concerns about the site’s ability to keep comments objective, grantees hope it will start a dialogue that has been absent in philanthropy.

Believe the hype: Big data can have a big social impact


Annika Small at the Guardian: “Given all the hype around so-called big data at the moment, it would be easy to dismiss it as nothing more than the latest technology buzzword. This would be a mistake, given that the application and interpretation of huge – often publicly available – data sets is already supporting new models of creativity, innovation and engagement.
To date, stories of big data’s progress and successes have tended to come from government and the private sector, but we’ve heard little about its relevance to social organisations. Yet big data can fuel big social change.
It’s already playing a vital role in the charitable sector. Some social organisations are using existing open government data to better target their services, to improve advocacy and fundraising, and to support knowledge sharing and collaboration between different charities and agencies. Crowdsourcing of open data also offers a new way for not-for-profits to gather intelligence, and there is a wide range of freely available online tools to help them analyse the information.
However, realising the potential of big and open data presents a number of technical and organisational challenges for social organisations. Many don’t have the required skills, awareness and investment to turn big data to their advantage. They also tend to lack the access to examples that might help demystify the technicalities and focus on achievable results.
Overcoming these challenges can be surprisingly simple: Keyfund, for example, gained insight into what made for a successful application to their scheme by using a free online tool to create word clouds out of all the text in their application forms. Many social organisations could use this same technique to better understand the large volume of unstructured text that they accumulate – in doing so, they would be “doing big data” (albeit in a small way). At the other end of the scale, Global Giving has developed its own sophisticated set of analytical tools to better understand the 57,000+ “stories” gathered from its network.
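The Keyfund-style analysis amounts to little more than counting words. A minimal Python sketch, assuming a folder of plain-text application forms (the file layout and stop-word list here are hypothetical), might look like this:

```python
# A minimal word-frequency sketch of the word-cloud analysis described above.
# The "applications" folder and the stop-word list are illustrative assumptions.
import re
from collections import Counter
from pathlib import Path

STOP_WORDS = {"the", "and", "to", "of", "a", "in", "we", "our", "for", "is", "that"}

counts = Counter()
for path in Path("applications").glob("*.txt"):  # one file per application form
    words = re.findall(r"[a-z']+", path.read_text().lower())
    counts.update(w for w in words if w not in STOP_WORDS and len(w) > 2)

# The most common terms hint at what the applications talk about most.
for word, n in counts.most_common(20):
    print(f"{word:15s} {n}")
```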
Innovation often happens when different disciplines collide and it’s becoming apparent that most value – certainly most social value – is likely to be created at the intersection of government, private and social sector data. That could be the combination of data from different sectors, or better “data collaboration” within sectors.
The Housing Association Charitable Trust (HACT) has produced two original tools that demonstrate this. Its Community Insight tool combines data from different sectors, allowing housing providers easily to match information about their stock to a large store of well-maintained open government figures. Meanwhile, its Housing Big Data programme is building a huge dataset by combining stats from 16 different housing providers across the UK. While Community Insight allows each organisation to gain better individual understanding of their communities (measuring well-being and deprivation levels, tracking changes over time, identifying hotspots of acute need), Housing Big Data is making progress towards a much richer network of understanding, providing a foundation for the sector to collaboratively identify challenges and quantify the impact of their interventions.
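To make the Community Insight idea concrete, here is an illustrative sketch, not HACT’s actual pipeline, of the kind of join involved: matching a provider’s housing stock to open government deprivation statistics by area code. All file and column names are assumptions:

```python
# An illustrative join (not HACT's pipeline) of housing stock against open
# government deprivation data. File and column names are assumptions.
import pandas as pd

stock = pd.read_csv("housing_stock.csv")      # provider's stock, one row per property
imd = pd.read_csv("deprivation_by_area.csv")  # open deprivation index, one row per area

merged = stock.merge(imd, on="area_code", how="left")
hotspots = (merged.groupby("area_code")["imd_decile"]
                  .agg(["count", "mean"])
                  .sort_values("mean"))  # lowest deciles (most deprived areas) first
print(hotspots.head(10))
```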
Alongside this specific initiative from HACT, it’s also exciting to see programmes such as 360giving, which forge connections between a range of private and social enterprises and lay foundations for UK social investors to be a significant source of information over the next decade. Certainly, The Big Lottery Fund’s publication of open data late last year is a milestone which also highlights how far we have to travel as a sector before we are truly “data-rich”.
At Nominet Trust, we have produced the Social Tech Guide to demonstrate the scale and diversity of social value being generated internationally – much of which is achieved through harnessing the power of big data. From Knewton creating personally tailored learning programmes, to Cellslider using the power of the crowd to advance cancer research, there is no shortage of inspiration. The UN’s Global Pulse programme is another great example, with its focus on how we can combine private and public sources to pin down the size and shape of a social challenge, and calibrate our collective response.
These examples of data-driven social change demonstrate the huge opportunities for social enterprises to harness technology to generate insights, to drive more effective action and to fuel social change. If we are to realise this potential, we need to continue to stretch ourselves as social enterprises and social investors.”

The solutions to all our problems may be buried in PDFs that nobody reads


Christopher Ingraham at the Washington Post: “What if someone had already figured out the answers to the world’s most pressing policy problems, but those solutions were buried deep in a PDF, somewhere nobody will ever read them?
According to a recent report by the World Bank, that scenario is not so far-fetched. The bank is one of those high-minded organizations — Washington is full of them — that release hundreds, maybe thousands, of reports a year on policy issues big and small. Many of these reports are long and highly technical, and just about all of them get released to the world as a PDF report posted to the organization’s Web site.
The World Bank recently decided to ask an important question: Is anyone actually reading these things? They dug into their Web site traffic data and came to the following conclusions: Nearly one-third of their PDF reports had never been downloaded, not even once. Another 40 percent of their reports had been downloaded fewer than 100 times. Only 13 percent had seen more than 250 downloads in their lifetimes. Since most World Bank reports have a stated objective of informing public debate or government policy, this seems like a pretty lousy track record.
Now, granted, the bank isn’t Buzzfeed. It wouldn’t be reasonable to expect thousands of downloads for reports with titles like “Detecting Urban Expansion and Land Tenure Security Assessment: The Case of Bahir Dar and Debre Markos Peri-Urban Areas of Ethiopia.” Moreover, downloads aren’t the be-all and end-all of information dissemination; many of these reports probably get some distribution by e-mail, or are printed and handed out at conferences. Still, it’s fair to assume that many big-idea reports with lofty goals to elevate the public discourse never get read by anyone other than the report writer and maybe an editor or two. Maybe the author’s spouse. Or mom.
I’m not picking on the World Bank here. In fact, they’re to be commended, strongly, for not only taking a serious look at the question but making their findings public for the rest of us to learn from. And don’t think for a second that this is just a World Bank problem. PDF reports are basically the bread and butter of Washington’s huge think tank industry, for instance. Every single one of these groups should be taking a serious look at their own PDF analytics the way the bank has.
Government agencies are also addicted to the PDF. As The Washington Post’s David Fahrenthold reported this week, federal agencies spend thousands of dollars and employee-hours each year producing Congressionally mandated reports that nobody reads. And let’s not even get started on the situation in academia, where the country’s best and brightest compete for the honor of seeing their life’s work locked away behind some publisher’s paywall.
Not every policy report is going to be a game-changer, of course. But the sheer numbers dictate that there are probably a lot of really, really good ideas out there that never see the light of day. This seems like an inefficient way for the policy community to do business, but what’s the alternative?
One final irony to ponder: You know that World Bank report, about how nobody reads its PDFs? It’s only available as a PDF. Given the attention it’s receiving, it may also be one of their most-downloaded reports ever.”

Working Together in a Networked Economy


Yochai Benkler at MIT Technology Review on Distributed Innovation and Creativity, Peer Production, and Commons in a Networked Economy: “A decade ago, Wikipedia and open-source software were treated as mere curiosities in business circles. Today, these innovations represent a core challenge to how we have thought about property and contract, organization theory and management, over the past 150 years.
For the first time since before the Industrial Revolution, the most important inputs into some of the most important economic sectors are radically distributed in the population, and the core capital resources necessary for these economic activities have become widely available in wealthy countries and among the wealthier populations of emerging economies. This technological feasibility of social production generally, and peer production — the kind of network collaboration of which Wikipedia is the most prominent example — more specifically, is interacting with the high rate of change and the escalating complexity of global innovation and production systems.
Increasingly, in the business literature and practice, we see a shift toward a range of open innovation and models that allow more fluid flows of information, talent, and projects across organizations.
Peer production, the most significant organizational innovation that has emerged from Internet-mediated social practice, is large-scale collaborative engagement by groups of individuals who come together to produce products more complex than they could have produced on their own. Organizationally, it combines three core characteristics: decentralization of conception and execution of problems and solutions; harnessing of diverse motivations; and separation of governance and management from property and contract.
These characteristics make peer production highly adept at experimentation, innovation, and adaptation in changing and complex environments. If the Web was innovation on a commons-based model — allocating access and use rights in resources without giving anyone exclusive rights to exclude anyone else — Wikipedia’s organizational innovation is in problem-solving.
Wikipedia’s user-generated content model incorporates knowledge that simply cannot be managed well, either because it is tacit knowledge (possessed by individuals but difficult to communicate to others) or because it is spread among too many people to contract for. The user-generated content model also permits organizations to explore a space of highly diverse interests and tastes that was too costly for traditional organizations to explore.
Peer production allows a diverse range of people, regardless of affiliation, to dynamically assess and reassess available resources, projects, and potential collaborators and to self-assign to projects and collaborations. By leaving these elements to self-organization dynamics, peer production overcomes the lossiness of markets and bureaucracies, and its benefits are sufficient that the practice has been widely adopted by firms and even governments.
In a networked information economy, commons-based practices and open innovation provide an evolutionary model typified by repeated experimentation and adoption of successful adaptation rather than the more traditional, engineering-style approaches to building optimized systems.
Commons-based production and peer production are edge cases of a broader range of openness strategies that trade off the freedom of these two approaches and the manageability and appropriability that many more-traditional organizations seek to preserve. Some firms are using competitions and prizes to diversify the range of people who work on their problems, without ceding contractual control over the project. Many corporations are participating in networks of firms engaging in a range of open collaborative innovation practices with a more manageable set of people, resources, and projects to work with than a fully open-to-the-world project. And the innovation clusters anchored around universities represent an entrepreneurial model at the edge of academia and business, in which academia allows for investment in highly uncertain innovation, and the firms allow for high-risk, high-reward investment models.
