Objective
To determine whether a large set of care performance indicators (‘Intelligent Monitoring’) can be used to predict the Care Quality Commission’s acute hospital trust provider ratings.
Design
The Intelligent Monitoring dataset and first-inspection ratings were used to build linear and ordered logistic regression models for the whole dataset (all trusts). The models were then refitted on subsets of the trusts and used to predict the inspection ratings of the remaining, held-out trusts.
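The hold-out scheme described above can be sketched as follows. This is a minimal illustration with invented synthetic data, not the study's actual indicators or ratings; a plain least-squares fit stands in for the linear model, and ratings are coded 1 to 4 (Inadequate to Outstanding) as an assumed coding.

```python
# Hypothetical sketch of the validation design: fit a linear model of ratings
# on indicator scores for a training subset of trusts, then predict the
# ratings of the held-out trusts. All data here are random, for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_trusts, n_indicators = 156, 10                  # 156 acute trusts, as in the study
X = rng.normal(size=(n_trusts, n_indicators))     # invented indicator scores
y = rng.integers(1, 5, size=n_trusts)             # assumed coding: 1-4 rating scale

train = np.arange(100)                            # arbitrary training subset
test = np.arange(100, n_trusts)                   # remaining held-out trusts

A = np.column_stack([np.ones(len(train)), X[train]])    # add intercept column
beta, *_ = np.linalg.lstsq(A, y[train], rcond=None)     # ordinary least squares

A_test = np.column_stack([np.ones(len(test)), X[test]])
pred = np.clip(np.rint(A_test @ beta), 1, 4).astype(int)  # round to rating scale
pct_correct = (pred == y[test]).mean() * 100
```

With random data the percentage correct hovers around chance, which is the benchmark against which the study's reported accuracy should be read.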
Setting
The United Kingdom Department of Health and Social Care’s Care Quality Commission is the regulator for all health and social care services in England. We consider their first-inspection cycle of acute hospital trusts (2013 to 2016).
Participants
All 156 English NHS acute hospital trusts.
Main Outcome Measure(s)
Percentage of correct predictions and weighted kappa.
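Weighted kappa measures agreement between two ordinal ratings while penalising larger disagreements more heavily. A minimal sketch, assuming linear disagreement weights (the weighting scheme is not specified here) and the four CQC rating categories:

```python
# Linearly weighted Cohen's kappa for ordinal ratings (a sketch; linear
# weights w = |i - j| are an assumption, and the function name is invented).
from collections import Counter

def weighted_kappa(actual, predicted, categories):
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(actual)
    # Observed confusion matrix of actual vs predicted categories
    O = [[0.0] * k for _ in range(k)]
    for a, p in zip(actual, predicted):
        O[idx[a]][idx[p]] += 1
    # Expected counts under independence, from the marginal frequencies
    ra, rp = Counter(actual), Counter(predicted)
    num = den = 0.0
    for i, ci in enumerate(categories):
        for j, cj in enumerate(categories):
            w = abs(i - j)                 # linear disagreement weight
            num += w * O[i][j]
            den += w * ra[ci] * rp[cj] / n
    return 1 - num / den

ratings = ["Inadequate", "Requires improvement", "Good", "Outstanding"]
```

Perfect agreement yields 1, chance-level agreement yields 0, so a value near 0, as reported below, indicates agreement no better than chance.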
Results
Only 24% of the predicted Overall ratings for the test sample were correct, and the weighted kappa of 0.01 indicates very poor agreement between predicted and actual ratings. This lack of predictive power was also found for each of the rating domains.
Conclusions
While hospital inspections draw on a much wider set of information, the poor power of the performance indicators to predict subsequent inspection ratings may call into question the validity of the indicators, the ratings, or both. We conclude that a number of changes to the way performance indicators are collected and used could improve their predictive value, and suggest that regulators should assess predictive power prospectively when designing and selecting sets of indicators.