Diagnosis Codes: Do They Skew Statistics?

I was just looking over my lab results and charting from my recent visit to my new endo. It is a large endo group. I was surprised to see how my diagnosis was categorized:

Diabetes Mellitus uncomplicated type I uncontrolled E10.65 (ICD-10-CM)

What surprised me is this: with a long history of diabetes, no complications, and an A1c of 5.6, how could I possibly get a diagnosis code labeling me as uncontrolled?

Then, when I thought about it, I remembered that when I first got to San Antonio and went to a GP at the UTSA faculty practice, I was diagnosed as T2!

These diagnosis codes are how statistics are developed, and probably more importantly, how an institution is funded in various ways. Could it be that my new endo practice can gather more resources if it reports a higher number of uncontrolled T1’s? Is it that UTSA needs a certain number of T2’s to fund a research project?

I really don’t care if they call me uncontrolled, and if it helps them out then that’s fine with me. But what effect does this have on the overall statistics on diabetes in our country? Are the oft-quoted statistics a case of garbage in, garbage out?

What do you think, and does anyone have facts relating to my observations?

It may be that they just do that to make it easier to get things approved for insurance.


Yes, that certainly is a possibility.

But do you think this misreporting is widespread enough to influence the statistics that are gleaned from it?

I don’t know how widespread it is, but I do know my Endo has made notes when trying to get some tests done, and I asked him about it, and he told me he wrote it down so it is easier to get through insurance…

I kinda respect that. He wants to make the decision on getting blood tests, and not leave it up to insurance. So he did what he had to in order for it to go through easier.

The obvious best solution would be to leave it to the doctors so this sort of stuff wasn’t necessary, but I don’t want to hijack your thread on an entirely old discussion about insurance… :smile:


Doc, I think that would be true for reviews that rely on all of the Medicare data based on coding; the misreporting would carry through there. However, most groups don’t rely on those codes when they do their studies. Teaching hospitals with research groups collect much more detailed information on their patients and use their own databases when doing retrospective analyses. So I would say: when you read a paper, see what its source was and take it with a grain of salt.


Hijack away! I’m sure insurance companies play a big part in the over or under use of diagnostic codes.

This. At most, those codes might be used in very broad studies to estimate the population incidence of a diagnosis, but I doubt they’re used to assess how many people are in good control, since they make for bad data on that question for several reasons. First, “uncontrolled” relies on an arbitrary, subjective cut-point (compared to objective lab values). Second, it dichotomizes (in control/not in control) what’s better represented as a continuous variable (level of control, such as A1c), and you have much greater statistical power with continuous variables. Third, even if you thought physicians were generally accurate in their diagnostic labeling, in EMRs it’s very easy for diagnostic codes to go stale: they don’t get updated, or they’re carried forward out of laziness or for efficiency.
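The dichotomization point can be illustrated with a quick simulation. This is a minimal, stdlib-only Python sketch under entirely made-up assumptions (A1c normally distributed with means 7.2 vs. 6.9 and SD 1.0 in two hypothetical groups, a 7.0 “in control” cut-point, 100 patients per group); it estimates how often a simple two-sided z-test detects the group difference using the raw A1c values versus the dichotomized in-control flag.

```python
import math
import random

random.seed(42)

def two_sample_z(xs, ys):
    """Two-sample z statistic for a difference in means (Welch-style SE)."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

def prop_z(xs, ys, cut):
    """Z statistic for a difference in proportions below the cut-point."""
    nx, ny = len(xs), len(ys)
    px = sum(x < cut for x in xs) / nx
    py = sum(y < cut for y in ys) / ny
    p = (px * nx + py * ny) / (nx + ny)  # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / nx + 1 / ny))
    return (px - py) / se if se > 0 else 0.0

def simulate(n_per_group=100, n_sims=2000, crit_z=1.96):
    """Estimate power of the continuous vs. dichotomized comparison."""
    hits_cont = hits_bin = 0
    for _ in range(n_sims):
        a = [random.gauss(7.2, 1.0) for _ in range(n_per_group)]  # group A
        b = [random.gauss(6.9, 1.0) for _ in range(n_per_group)]  # group B
        if abs(two_sample_z(b, a)) > crit_z:
            hits_cont += 1
        if abs(prop_z(b, a, cut=7.0)) > crit_z:
            hits_bin += 1
    return hits_cont / n_sims, hits_bin / n_sims

power_continuous, power_dichotomized = simulate()
print(f"power, continuous A1c:    {power_continuous:.2f}")
print(f"power, 'in control' flag: {power_dichotomized:.2f}")
```

With these invented numbers, the test on raw A1c detects the difference noticeably more often than the test on the dichotomized flag, which is the power loss the reasoning above refers to.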

Also, when writing grants, what tends to matter more are published studies demonstrating incidence rates (often national ones done by the CDC), and your lab/research group’s track record of recruiting from the available population. You sometimes report hospital stats to demonstrate that you’ll be able to recruit enough people, but unless you’re studying something rare (not diabetes), or in a small place, or your study has a ton of exclusion criteria, it probably won’t be a big deal. So fudging the numbers within an institution probably won’t make a big difference.