It would be nice to get a better understanding of the internal definition of the IAQ accuracy.
The following happens:
1) Starting the sensor: IAQ=0, accuracy=0
2) Some time later: IAQ=50, accuracy=0 (sample values)
3) Some time later: IAQ=55, accuracy=1 (sample values)
4) Some time later: IAQ=87, accuracy=1 (sample values)
1000) Some time later: IAQ=76, accuracy=2 (sample values)
2000) Some time later: IAQ=155, accuracy=2 (sample values)
3000) Some time later: IAQ=125, accuracy=3 (sample values)
3000) Some time later: IAQ=87, accuracy=2 (sample values)
3005) Some time later: IAQ=25, accuracy=2 (sample values)
3010) Some time later: IAQ=5, accuracy=2 (sample values)
3020) Some time later: IAQ=0, accuracy=2 (sample values)
3021) Some time later: IAQ=0, accuracy=2 (sample values)
3022) Some time later: IAQ=0, accuracy=2 (sample values)
....... for a long time
xxxx) Some time later: IAQ=0, accuracy=2 (sample values)
As far as I understand, the sensor collects data, and once it has decided that enough data is available, the accuracy is set to 1/2/3.
So my questions are:
1) How long should this process take, and what kind of data is needed?
(One sensor here has already been collecting data for more than 10 hours (every 3 seconds) and still has accuracy=1.)
2) What are the conditions for accuracy=2 and accuracy=3? Can somebody describe the internal workflow?
3) How can it be correct that accuracy=2 but IAQ=0 for a long time?
I think in such a case there could be an internal "reset" of the collected data (or a recalculation/shift), and the accuracy should be set back to 0, right?
The current accuracy=2 then means: do not trust these values, they could be completely wrong, right?
But if this is right, after what time can we expect accuracy=3?
Maybe somebody from development can tell us a bit more about this?
Your description of the IAQ accuracy is not totally correct.
The IAQ accuracy actually reflects the current state of the background calibration process, i.e. (per the BSEC documentation):
accuracy=0: the sensor is still stabilizing or the calibration has just (re)started,
accuracy=1: the recent history shows only low stimuli, so the estimate is uncertain,
accuracy=2: the calibration is ongoing with medium accuracy,
accuracy=3: the calibration has completed successfully (high accuracy).
Therefore to answer your questions:
Thank you for your explanation.
The fact is that in such a situation IAQ=0 with accuracy=2 can persist for hours (the same was reported on GitHub before the issue was closed).
The only effective remedy is to restart the sensor (it will deliver valid values again very quickly).
Of course this only makes sense if the saved state data is still valid and was not overwritten by the time-based save procedure (see link: other topic).
On the other hand, one of my sensors has been running (as expected in the normal case) in a working room for more than 36 hours and still has accuracy=1.
The IAQ was between 25 and 100 during this time.
So accuracy=1 may simply be "stable", right?
I think it would be helpful to write a bit more about this in the documentation.
BTW: I like this sensor a lot; the results are much better than those of other air quality sensors.
For example, it seems to be a good sensor (maybe the best one) for detecting bad outside air quality 😉
As mentioned in the other topic you linked, the initially reported observation of the IAQ dropping to 0 for an unexpectedly long duration could be linked to a glitch in the state file, if one was used for the output you shared.
Since an accuracy=1 reflects that only low stimuli have been observed over BSEC's history, it is indeed possible that it remained at this value if the air quality itself in the room was stable for that same duration. This also seems to be confirmed as no significant events were seen in the IAQ output during this period.
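Given those semantics, an application reading BSEC's outputs would typically gate its use of the IAQ value on the reported accuracy. A minimal sketch (the threshold of 2 and the function name are design choices for illustration, not something mandated by the library):

```python
def trusted_iaq(iaq: float, accuracy: int, min_accuracy: int = 2) -> "float | None":
    """Return the IAQ reading only if the calibration status (0..3)
    says it is meaningful; otherwise None, so callers can show a
    'calibrating...' placeholder instead of a misleading number."""
    if not 0 <= accuracy <= 3:
        raise ValueError(f"accuracy out of range: {accuracy}")
    return iaq if accuracy >= min_accuracy else None
```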
Sorry to interrupt,
I also have a question about IAQ calibration and IAQ accuracy.
I do understand what IAQ accuracy=1 means.
If I reset the whole device (including the NVS where the device state is stored), it obviously starts from IAQ accuracy=0, and once the sensor has stabilized it turns to 1. But it takes around 2 days to reach 3. It is the same with other devices.
Whereas once it has reached 3, even if I restart the device (without erasing the state data), it reaches 3 again in about 10-30 minutes.
So here are my questions:
1. Is the calibration algorithm triggered not just by deviations in IAQ, but does it also have to accumulate a certain amount of data (which is pretty much the same as its running time since it was first turned on) if the whole state has been reset?
2. In order to make a new device reach accuracy 3, would it be fine to save the state of a device that has already reached accuracy 3 and then load it onto the newly reset device?
(I am curious because, first, I am not sure the sensor state is the factor that causes this, and second, I guess the calibration should be device-specific.)
Sorry about the long question, and thank you so much!
Looking forward to an answer!