This is a perfect example of the statistical adage that data collected for one purpose shouldn't be used for another. Mathematical manipulation forms only one part of statistical analysis; equally important is a deep understanding of the provisos behind the raw data, and the inferences that can (or more importantly, cannot) be drawn.
If the incidence or prevalence of a disease has changed, is it because of different methods of testing? Has the topic under analysis been redefined during the investigation, perhaps to include or exclude certain conditions? And crucially, are there hidden psychological drivers to explain why reporting of data may be subject to bias?
A good example of the effect of such bias is the recent request from the European Centre for Disease Prevention and Control (ECDC) asking GPs to stop prescribing antibiotics for coughs and colds. Could the over-prescribing of antibiotics be related to GPs' anxiety that pushy patients will complain if they don't get what they want? Will GPs who heed the ECDC risk being downgraded on the new, government-supported practice assessment websites, or red-carded on their 'balanced scorecards' by their PCT, simply because they have received too many complaints?
This is the difficulty with micromanaging the NHS through targets. Most targets are simplistic, and many can be gamed - hospital waiting times in particular, which is the underlying reason why certain hospitals reject so many Choose and Book appointments and ask GPs to fax in referral letters instead.
As the observation often attributed to Einstein puts it, not everything that can be counted matters, and not everything that matters can be counted. Even when data is apparently complete and accurate to two decimal places, its relevance may be zero and its very presence counterproductive. In medicine the circumstances of data collection must be thoroughly understood before any analysis can mean anything.