Data on hospital performance often shows wide variation, which raises the question of whether it should be made available to the public or kept confidential.
The government wants more transparency and announced that surgeons will have their performance data published – including mortality rates – and will name and shame those who refuse. One hospital trust in Manchester also said it would be publishing performance information on all its consultants for patients to see. The Care Quality Commission has announced Ofsted-style inspections for hospitals.
If a hospital is shown to be performing poorly compared to others – because it has high levels of hospital-acquired infections, for example – it needs the right incentives to improve. Change takes considerable effort: its managers have to understand why it isn’t performing and how it can improve, and then take significant steps to change.
If information about badly performing hospitals isn’t made public, can we assume that altruism within hospitals will generate sufficient energy and resources to make the necessary changes? Or will outing the hospital’s bad performance provide enough incentive to overcome possible inertia?
There is a further question about what will happen now that the NHS system in England is based on more choice and competition between hospitals. Will more patients switch from poorly performing hospitals to better ones, so that some hospitals lose money and others gain? The “invisible hand” of patient choice could be a huge incentive for hospitals to do better.
The Wisconsin experiment
In England we have had only limited evidence of the effects of making information publicly available on hospital performance. But in the US this began in the 1990s with the publication of hospital report cards. The US is of course a market-driven system but this revolution in publishing performance has been the subject of a few evaluations.
A consistent finding across these studies has been that opening up information has little impact on the finances of hospitals. But publication has resulted in improved performance. For example, after the publication of mortality rates (adjusted for risk) for cardiac surgery in the New York state, patients continued to go to those hospitals that had been identified as statistical outliers with high mortality rates, but those hospitals still took action to improve.
The Wisconsin experiment was based on the way data on quality was supplied to three sets of hospitals, with performance ranked across all of them: a public-report set, for which everything was published; a private-report set, where each hospital was given its data confidentially; and a non-report set, where no data was supplied or published.
This study had two principal findings. First, only the hospitals in the public-report set made strenuous efforts to use the data to improve performance. Second, the reason was that their managers sought to repair damage to their public reputation. They didn’t believe the information would affect their market shares, and a follow-up study showed that belief to have been correct.
This suggests that neither relying on trust alone (through private reporting) nor the threat of financial losses and gains from public reporting will generate enough incentive for hospitals to take steps to improve.
Instead, the study’s authors argued that for a system of public reporting to generate improvements, it needs to be designed to satisfy the following four requirements: a hospital ranking system; information that is published and widely disseminated; information that is easily understood by the public (so that they can see which providers are performing well and poorly); and all this to be followed up by future reports (that show whether performance has actually improved or not).
Hospital gaming is a problem
The annual “star rating” system that applied to the NHS in England between 2000 and 2005 satisfied these requirements, and various studies have shown that it was highly effective in generating strong incentives for improvements in reported performance. But on the flipside it also led to gaming – where hospitals (and surgeons) are more likely to pick particular, and potentially lower-risk, patients.
Gaming is well known to be a generic problem in such systems. In the New York study there was evidence of improved outcomes in reported mortality rates but also of gaming, as a consequence of the “incentive to decline to treat more difficult and complicated patients”.
The problem is that there is no perfect system of measuring the risks of patients. As one heart surgeon, who was subject to public reporting of his mortality rate, remarked – when he is referred a high-risk patient, he asks himself, “Do I feel lucky?” If he had had a good run (for example no one had died for two years), he would take that patient. But, if a patient of his died the week before, he would be tempted to refuse to take that high-risk patient.
Without public reporting, the evidence suggests there will be a lack of incentive to act on poor performance; but publication is likely to make surgeons reluctant to operate on more difficult and complicated patients. And we have yet to work out that paradox.
Source: The Conversation, story by Gwyn Bevan