We have sometimes seen an out-of-range Dexcom calibration result in a mediocre track once we get back in range. The calibration wiki definitely recommends calibrating in range whenever possible:
Yet, sometimes, we just have to calibrate when out of range. A few days ago, we started a new sensor in the middle of a major hormonal peak. Since we knew the results might be iffy, we went strictly by the book: super clean hands, warm blood, no milking, a huge blood drop. We calibrated around 275 (ouch). But here is the track; have a look at the first BG test back in range:
The in-range BG meter measurement at 12:27am is straddled by the two consecutive CGM measurements! I felt like saying “Alright, Dexcom!” and giving it a high five.
Some of it, I think, is due to the fact that my son did a careful technical job with his calibration measurements. But much of it, I am sure, is luck. Still, this made me wonder: could conventional wisdom be wrong on this? Is it possible that out-of-range calibration works well? Common sense tells me no, but I wonder:
what is your experience with out-of-range calibration?
Why? I do not see anything wrong with calibrating at an out-of-range bg, as long as bg is flat around the time the meter reading is taken. In fact, it may be better to calibrate at a variety of flat in-range and out-of-range values to get the best overall results.
The purpose of calibration is to determine the parameters of a relationship between the measured signal (current through the sensor) and the bg value. Assuming this relationship is a straight line (which I think Dexcom does), calibration amounts to estimating the slope and the intercept of that line. If calibrations are always performed when bg is in range, errors in estimating the slope and intercept are likely to be higher, not lower, than when calibrations are performed at (flat) bg levels farther apart.
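To make that argument concrete, here is a toy simulation (all numbers invented; the real Dexcom algorithm is surely more sophisticated) of two-point slope/intercept estimation. With a fixed amount of meter noise, calibration points that are far apart pin down the slope much better than points that are close together:

```python
import random

def estimate_line(points):
    # Two-point calibration: slope and intercept from two (signal, bg) pairs.
    (x1, y1), (x2, y2) = points
    slope = (y2 - y1) / (x2 - x1)
    intercept = y1 - slope * x1
    return slope, intercept

def mean_slope_error(bg_pair, true_slope=2.0, true_intercept=10.0,
                     meter_sd=10.0, trials=10000):
    # Average absolute error in the estimated slope, over many noisy trials.
    errors = []
    for _ in range(trials):
        points = []
        for bg in bg_pair:
            signal = (bg - true_intercept) / true_slope  # noiseless sensor current
            meter_bg = random.gauss(bg, meter_sd)        # noisy meter reading
            points.append((signal, meter_bg))
        slope, _ = estimate_line(points)
        errors.append(abs(slope - true_slope))
    return sum(errors) / trials

random.seed(42)
close = mean_slope_error((150, 180))  # calibration points close together
far = mean_slope_error((80, 300))     # calibration points far apart
print(f"mean slope error, close pair: {close:.2f}")
print(f"mean slope error, far pair:   {far:.2f}")
```

The same meter noise divided by a wider signal spread yields a much smaller slope error, which is the geometric intuition behind calibrating at values farther apart.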
I think the reason for the in-range calibration request is that the bg meters we all carry around have much more variability in the above-normal range, so the calibration may not give the algorithm the information it needs, i.e. the meter reading might not be close to the actual bg. With that said, if you need to calibrate, you need to calibrate.
Also, I think it’s harder to calibrate with a flat bg when you’re not in range. I’m always trying to get myself back in range so it’s unlikely I’d have a flat line while out of range unless my methods of correcting my blood sugar weren’t working properly.
Sorry I don’t have any helpful graphs to share @Michel . I generally wouldn’t calibrate out of range unless I inconveniently had to change my sensor. In that case, I’ve always calibrated while out of range and then calibrated extra once I was back in range and flat. My sensors tend to be less reliable the first day, so I personally wouldn’t think too much of it.
Couldn’t you circumvent this variability by taking more samples? In other words, do four finger sticks rather than two, and average them into two numbers? It’s not ideal, but presumably you would then get some reduction in the variability of the glucose meter?
I find that I have better results when calibrating in range, or at least not when really high or low, I believe for the reason @Chris said. If I test before starting a new sensor and my value is really out of range, I’ve delayed the start until I’m closer to in range. If I didn’t want to do that, I would calibrate, but then recheck once I seemed closer to in range and flat (after whatever correction I did had worn off), and if the sensor was substantially off, I might just double-enter that meter value to jump-start it from a better point.
If you think of a calibration curve, it is weighted more heavily at the bottom (biased, because that is where we ideally want to be). As you incorporate a larger gap into your curve, you allow for a wider range of error as the curve is spread out. I would guess that a person whose BG is typically high and seldom low would have the inverse problem.
In theory this makes some sense. It would be an interesting test: go into your endo appointment on the high end, do their venous draw, and compare that to one, two, and four finger sticks you do right before or after the venous draw.
Note: if it isn’t clear, you just do 4 finger sticks, record them in order, and see if their average gets closer to the actual value.
Mapping from the actual BG to the Dexcom measurement is a problem of linear approximation. It is unlikely that the relationship is linear across the whole 40-400 domain. More likely, it is a somewhat non-linear but 1-to-1 (bijective) mapping.
When you calibrate, you find a slope and intercept for the range that you are sampling. That gives you a linear (straight-line) relationship between the two. But where you pick the samples will make a difference to which (slope, intercept) pair you get.
In this imagined example, you can see that calibrating in different ranges will give you different results, and that, when you are close to range, you would generally prefer the linear approximation obtained by calibrating in range.
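A small sketch of that idea in code, using a hypothetical, mildly non-linear sensor response (invented for illustration, not Dexcom's actual curve): a two-point linear calibration done in range reads back in-range values accurately, while the same procedure done at high values does not:

```python
def sensor_signal(bg):
    # Hypothetical, mildly non-linear sensor response (invented for illustration).
    return 0.5 * bg + 0.0005 * bg ** 2

def two_point_calibration(bg1, bg2):
    # Fit a straight line mapping signal back to bg through two reference points.
    s1, s2 = sensor_signal(bg1), sensor_signal(bg2)
    slope = (bg2 - bg1) / (s2 - s1)
    intercept = bg1 - slope * s1
    return lambda s: slope * s + intercept

in_range_cal = two_point_calibration(90, 140)  # both calibration bgs in range
high_cal = two_point_calibration(250, 300)     # both calibration bgs high

true_bg = 110.0                    # an in-range value to read back
signal = sensor_signal(true_bg)
err_in = abs(in_range_cal(signal) - true_bg)
err_high = abs(high_cal(signal) - true_bg)
print(f"in-range calibration error at bg=110:   {err_in:.1f} mg/dL")
print(f"high-range calibration error at bg=110: {err_high:.1f} mg/dL")
```

Each calibration is a tangent-like line to the same curved response; it is accurate near where its reference points were taken and drifts off elsewhere.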
At least, that’s how I see it in theory. But I wonder how it actually works in practice!
Yeah, mine was made up and over-simplified, but the idea is the same. Your r-value will start to creep as you add calibration values far apart from the rest. Not to mention the design of the sensor itself. Nice graph!
Sure. That’s exactly why Dexcom asks for 2 samples at the sensor start, when there is no prior history to rely on. They could ask for more samples at any calibration point, and accuracy would improve, but they know people would find that inconvenient, so their follow-up calibration algorithm works with single samples and probably uses some form of averaging over multiple calibrations to mitigate noisy meter measurements.
Yes: I think you could drastically reduce variability that way. I don’t quite remember the theory, but I think that, for a Gaussian distribution, an approximation is that you could divide SD by 3 if you take 3 samples (doesn’t work for other numbers though). Someone may correct me with a better approximation here?
Based on Dexcom patents, which I looked at some time back, I think the relationship is in fact surprisingly linear over wide bg range, but there are couple of problems: (1) slope and the intercept may drift over time, which is why follow-up calibrations are needed, and (2) current measured is tiny and very noisy, which is why they do heavy averaging over 5 min and even then there is (as we all know) noise in the readings. It’s been some time, so I do not remember technical details from the patents - they are interesting to read.
Assuming a Gaussian distribution, the variance of an average of n samples is reduced by a factor of n, which means the standard deviation is reduced by the square root of n. So, averaging n=2 meter samples reduces SD by about 1.4 times. If you take and average 3 samples, you reduce SD by about 1.7 times, etc.
This correlates well with my experience, but I have read on this forum that others see a lot more variation between the CGM and meter readings at higher values. This would indicate that, for whatever reason, the relationship may be less linear for those people.
There may be factors at play here that haven’t been determined yet. Calibrating in range may be more important for some people than for others.
I’m sure we can all agree that calibrating with a flat bg is always necessary.