Samantha Horton at WFYI: “Though many websites offer non-scientific ratings on a number of services, two Indiana University scientists say judging hospitals that way likely isn’t fair.
Their recently released study compares the federal government’s Hospital Compare and crowdsourced sites such as Facebook, Yelp and Google. The research finds it’s difficult for people to accurately understand everything a hospital does, which leads to biased ratings.
Patient experiences with food, amenities and bedside manner often align with federal government ratings. But IU professor Victoria Perez says judging quality of care and safety is much more nuanced, and people often get it wrong.
“About 20 percent of the hospitals rated best within a local market on social media were rated worst in that market by Hospital Compare in terms of patient health outcomes,” she says.
For the crowdsourced ratings to be more useful, Perez says people would have to know how to cross-reference them with a more reliable data source, such as Hospital Compare. But even that site can be challenging to navigate depending on what the consumer is looking for.
“If you have a condition-specific concern and you can see the clinical measure for a hospital, that may be helpful,” says Perez. “But if your particular medical concern is not listed there, it might be hard to extrapolate from the ones that are listed or to know which ones you should be looking at.”
She says consumers would need more information about patient outcomes and other quality metrics to be able to reliably crowdsource a hospital on a site such as Google…(More)”.