Results of a cohort study on incidence of complications


Just skimmed the article, but some takeaways:

  1. Why is an A1c of 6.5% regarded as the magic low end? Why not 5.5% or 6.0%?

  2. I noted on one of the charts that the SD averaged within each cohort level was >70 (I think). I don’t think that would translate into a very good Time in Range.

  3. In the conclusions, the authors recommend that clinicians watch for hypoglycemia at A1c levels as low as 6.5%. Perhaps a CGM with appropriate alarms would be indicated?
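On point 2, here’s a back-of-envelope sketch of why a large SD hurts Time in Range. It (unrealistically) assumes glucose readings are normally distributed and uses the ADAG eAG conversion; all the numbers are illustrative only, not from the paper:

```python
from math import erf, sqrt

def normal_cdf(x, mean, sd):
    # Normal CDF, computed exactly via the error function.
    return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

def estimated_tir(mean_mgdl, sd_mgdl, low=70, high=180):
    """Fraction of readings between low and high under a normal assumption."""
    return normal_cdf(high, mean_mgdl, sd_mgdl) - normal_cdf(low, mean_mgdl, sd_mgdl)

# ADAG formula: eAG (mg/dL) = 28.7 * A1c - 46.7; an A1c of 6.5% -> ~140 mg/dL.
eag = 28.7 * 6.5 - 46.7

print(round(estimated_tir(eag, 70), 2))  # with an SD of 70 mg/dL
print(round(estimated_tir(eag, 35), 2))  # tighter SD of 35 mg/dL for comparison
```

Under those assumptions, an SD of 70 around an average of ~140 mg/dL leaves only a bit over half the time in range, versus roughly 85% with an SD of 35.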

It seems that these studies are focused on poorly controlled T1 diabetics.


Am I missing something here?

Mean age of participants was 14.7 years (43.4% female), mean duration of diabetes was 1.3 years

Study cohort

We included children and adults with a diagnosis of diabetes for five or less years when first recorded in the registries.

Complications are a concern years down the road. Seems like an odd group to be studying.

An A1c of 6.5% was just the upper end of the low range, so that group would have included those with an A1c of 6.0% or 5.5%. My guess is that the number of people who hit an A1c below 6.5% is already pretty low, so narrowing the range even further would leave even fewer participants to compare.

Having said that, I do agree it would be interesting to look at lower A1c levels. But the reality is that actually achieving lower A1c levels is very difficult for most people even with pumps and CGMs and all the other tools available. So I’m not sure there’s much motivation (or funding) to look at an A1c of, say, 5.5% when most people may never be able to achieve that. I read somewhere that the lower target for the DCCT was actually supposed to be 6.0%, but 7.0% was what that cohort actually achieved (on average).

I’m pretty sure these were just the characteristics of the participants at the start of the study. The study followed them for up to 20 years.


Thanks for posting - I find it very interesting.


That makes more sense, thanks.

I would say it is related to this statement halfway down page 6:
Moreover, we observed an increased risk of severe hypoglycaemia with HbA1c levels <6.5% (<48 mmol/mol) compared with 6.5-6.9% (48-52 mmol/mol).

I’m confused about the results. Based on the chart (Table 1) below, it looks like 0.6% of those with 16-20 years’ duration and an A1c of <6.5% have albuminuria, whereas 4.9% of those with 16-20 years’ duration and an A1c between 6.5% and 6.9% have albuminuria.

Is this an accurate interpretation? I understand the difference may not be statistically significant, but the difference in percentages seems consistent across duration categories. [Edit: these percentages are merely the percentage of people with retinopathy/albuminuria who have that A1c, not the percentage I thought they were. They don’t give the N value for each duration, which I find annoying, because by the time 20 years have passed we might only have 10 people left in the <6.5% group.]
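For what it’s worth, here’s a toy illustration of the two different percentages you can read off a table like that: the share of all albuminuria cases falling in an A1c band versus the risk of albuminuria within that band. All counts here are completely made up; only the arithmetic is the point.

```python
# Invented counts -- events and group sizes per A1c band.
albuminuria = {"<6.5%": 3, "6.5-6.9%": 25, ">=7.0%": 472}
group_sizes = {"<6.5%": 75, "6.5-6.9%": 455, ">=7.0%": 5000}

total_events = sum(albuminuria.values())

for band, events in albuminuria.items():
    share_of_events = 100 * events / total_events       # % of all cases in this band
    risk_in_band = 100 * events / group_sizes[band]     # % of this band with albuminuria
    print(f"{band}: {share_of_events:.1f}% of all cases, {risk_in_band:.1f}% risk within band")
```

With these made-up numbers, the lowest band holds 0.6% of all cases even though 4% of the people in it have albuminuria, so the two readings can diverge a lot.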


“Participants were categorised by glycaemic control based on area under the curve, a weighted mean value calculated using the trapezoidal method that takes into consideration the interval between HbA1c…”

Are they saying that participants were kept in the same A1c cohort throughout time and they’ve sort of averaged the A1c values with the trapezoidal method?


Also, the odds ratios in the chart below (Table 2) are confusing:


The unadjusted and adjusted odds ratios of any retinopathy vs none for someone with an A1c <6.5% are well under 1.0 (though the p-value shows this difference isn’t necessarily meaningful)

However, somehow the odds ratios increase significantly when each form of retinopathy is evaluated individually. That doesn’t really make any sense.


Look at the n-values. The total N for <6.5% is 75. The Ns for the two specific kinds of retinopathy shown in your example are 7 and 6. So the other 62 folks must have had a much lower incidence of retinopathy for the overall odds ratio to come out below 1.
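For anyone following along, this is how an odds ratio falls out of a 2×2 table. It’s a generic sketch with invented counts, not the paper’s actual data:

```python
from math import exp, log, sqrt

def odds_ratio(a, b, c, d):
    """OR for group 1 (a events, b non-events) vs group 2 (c events, d non-events),
    with a 95% CI from the usual log-OR normal approximation."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, exp(log(or_) - 1.96 * se), exp(log(or_) + 1.96 * se)

# Invented example: 7 events out of 75 people vs 40 events out of 500 people.
or_, lo, hi = odds_ratio(7, 75 - 7, 40, 500 - 40)
print(f"OR {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Note how the CI width blows up when any cell count is small, which is exactly the situation in these subgroups.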

Are they saying that participants were kept in the same A1c cohort throughout time and they’ve sort of averaged the A1c values with the trapezoidal method?

Looks to me like they calculated each subject’s “average” A1c over the 20-year period and put them into that group, so their results don’t distinguish those who improved their A1c from those who didn’t.
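If that’s right, the grouping would be based on something like a time-weighted (trapezoidal) average. Here’s a sketch of what that calculation might look like; the details are my assumption, not taken from the paper:

```python
def trapezoidal_mean(times_years, hba1c_values):
    """Time-weighted mean A1c: area under the piecewise-linear A1c curve
    divided by total follow-up time."""
    area = 0.0
    for i in range(len(times_years) - 1):
        dt = times_years[i + 1] - times_years[i]
        area += (hba1c_values[i] + hba1c_values[i + 1]) / 2 * dt  # one trapezoid
    return area / (times_years[-1] - times_years[0])

# Someone measured yearly while improving from 8.0% down to 6.4%:
print(round(trapezoidal_mean([0, 1, 2, 3], [8.0, 7.2, 6.8, 6.4]), 2))
```

In this example, someone who improved steadily from 8.0% to 6.4% still averages about 7.07% and would land in the 7.0-7.4% cohort, which is exactly the “doesn’t distinguish improvers” problem.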


Yes, but there is no reason the odds ratio should increase dramatically when each form of retinopathy is evaluated individually versus any form of retinopathy at all.

I don’t think 75 represents the number of people in that category. I think it represents the number of people with “Any retinopathy” in that group (<6.5%) since that value falls under the “No. of participants with events” column.

Yes, that was my understanding as well.

"The risk of any retinopathy (simplex or worse)
for participants with an HbA1c level <6.5% (<48 mmol/
mol) compared with 6.5-6.9% (48-52 mmol/mol) did
not differ: unadjusted odds ratio 0.85 (95% confidence
interval 0.63 to 1.13, P=0.25) and adjusted odds ratio
0.77 (0.56 to 1.05, P=0.10)…

A different pattern was observed for preproliferative
diabetic retinopathy or worse, with risk increases in
participants with low HbA1c levels (<6.5%, <48 mmol/
mol) in unadjusted analysis (odds ratio 3.74, 1.19 to
11.76, P=0.02) and in adjusted analysis (3.29, 0.99 to
10.96, P=0.05)…

For the most advanced
stage (proliferative diabetic retinopathy or earlier
laser photocoagulation), risk increases were observed
for participants with low HbA1c levels (<6.5%, <48
mmol/mol): odds ratio 4.18 (1.17 to 14.90, P=0.03) in
unadjusted analysis and 2.48 (0.71 to 8.62, P=0.15)
in adjusted analysis."

I didn’t look at the original paper, and just guessed that the two categories of retinopathy that they showed were “optical” outliers. With small numbers in the subgroups it seems reasonable to expect lots of spread, so some subgroups will look extra bad or extra good just because of the spread, even if that doesn’t accurately reflect the underlying reality that we’d find in a large population.
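That intuition is easy to demonstrate with a toy simulation (group sizes and risk invented): even when the true odds are identical in both groups, subgroups with only a handful of events produce odds ratios scattered far from 1.

```python
import random

random.seed(0)
true_risk = 0.05             # identical underlying event risk in both groups
n_small, n_large = 75, 2000  # invented group sizes

def simulated_or():
    a = sum(random.random() < true_risk for _ in range(n_small))  # events, small group
    c = sum(random.random() < true_risk for _ in range(n_large))  # events, large group
    a = max(a, 0.5)          # keep the OR defined when the small group has 0 events
    return (a / (n_small - a)) / (c / (n_large - c))

ors = sorted(simulated_or() for _ in range(1000))
# The middle of the distribution sits near the true OR of 1, but the
# 2.5th and 97.5th percentiles scatter far from it.
print(round(ors[25], 2), round(ors[500], 2), round(ors[975], 2))
```

So a subgroup odds ratio of 3 or 4 on ~7 events is well within what pure chance can produce, which is consistent with the wide confidence intervals the paper reports.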

I guess they’re saying the odds of simplex are better (though not necessarily significantly) in the <6.5% group, but that the odds of the worse retinopathies are higher? The only three categories of retinopathy are simplex, PPDR, and PDR.

Still seems like a strange conclusion.