@piconomix wrote:
Every 300 seconds it takes a reading and reports calculated values.
This seems to indicate that BSEC_SAMPLE_RATE_ULP was successfully used in bsec_update_subscription().
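For reference, an ULP subscription would typically look like this minimal sketch; the exact list of virtual sensor outputs is just an example, adapt it to the outputs you actually need:

```c
/*
 * Minimal ULP subscription sketch, assuming the standard BSEC headers.
 * The selection of virtual sensor outputs below is illustrative.
 */
#include "bsec_interface.h"
#include "bsec_datatypes.h"

bsec_library_return_t subscribe_ulp(void)
{
    bsec_sensor_configuration_t requested[4];
    bsec_sensor_configuration_t required[BSEC_MAX_PHYSICAL_SENSOR];
    uint8_t n_required = BSEC_MAX_PHYSICAL_SENSOR;

    const uint8_t outputs[4] = {
        BSEC_OUTPUT_IAQ,
        BSEC_OUTPUT_SENSOR_HEAT_COMPENSATED_TEMPERATURE,
        BSEC_OUTPUT_SENSOR_HEAT_COMPENSATED_HUMIDITY,
        BSEC_OUTPUT_RAW_PRESSURE,
    };

    for (uint8_t i = 0; i < 4; i++) {
        requested[i].sensor_id   = outputs[i];
        requested[i].sample_rate = BSEC_SAMPLE_RATE_ULP; /* one sample every 300 s */
    }

    return bsec_update_subscription(requested, 4, required, &n_required);
}
```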
@piconomix wrote:
The BSEC library still wants to wake up every 3 seconds, but only wastes 3 ms each time.
But this seems to indicate that generic_33v_300s_4d was possibly not loaded properly, or there is some configuration mismatch somewhere.
If I try to reproduce your setup:
- generic_33v_300s_4d configuration,
- BSEC_SAMPLE_RATE_ULP used for all outputs in bsec_update_subscription(),
- calling bsec_sensor_control() at the expected 300 s interval runs successfully,
- calling bsec_sensor_control() after a delay of only 3 seconds triggers a warning:
bsec_sensor_control() called at 6004ms, next call expected at 306004ms.
bsec_sensor_control() called at 306004ms, next call expected at 606004ms.
In bsec_sensor_control() at 309004ms:
BSEC WARNING: 100. Difference between actual and defined sampling intervals of bsec_sensor_control() greater than allowed.
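For reference, the call cadence I would expect looks like this minimal sketch, where get_timestamp_ns() and sleep_until_ns() are placeholders for whatever your platform provides:

```c
/*
 * Minimal scheduling sketch: only call bsec_sensor_control() once the
 * timestamp returned in next_call has been reached. get_timestamp_ns()
 * and sleep_until_ns() are hypothetical platform helpers.
 */
#include "bsec_interface.h"

extern int64_t get_timestamp_ns(void);        /* placeholder */
extern void    sleep_until_ns(int64_t t_ns);  /* placeholder */

void bsec_loop(void)
{
    bsec_bme_settings_t settings = { 0 };

    for (;;) {
        int64_t now_ns = get_timestamp_ns();
        bsec_library_return_t status = bsec_sensor_control(now_ns, &settings);
        if (status != BSEC_OK) {
            /* status < 0: error, status > 0: warning (e.g. 100 above) */
        }

        /* ... trigger a forced measurement if settings.trigger_measurement
           is set, read the BME680 and feed the raw signals to
           bsec_do_steps() according to settings.process_data ... */

        sleep_until_ns(settings.next_call); /* 300 s later in ULP mode */
    }
}
```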
@piconomix wrote:
Why does the BSEC library want to wake up every 3 seconds? Can I safely ignore and only call it once every 300 seconds?
Are there BSEC library settings to improve battery life, i.e. sleep for a long time, wake up to take a measurement and perform a calculation quickly and then go back to sleep?
Please review your implementation: check that you are importing a valid config string from the correct file, and that you are making use of the next_call and process_data elements of the structure returned by bsec_sensor_control(), etc. Feel free to share which reference code you've been using, or your relevant source code snippets.
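For the configuration part, loading usually looks like the sketch below. Here bsec_config_iaq is assumed to be the serialized config array shipped in the generic_33v_300s_4d folder of your BSEC release; double-check the actual array name and length in your package:

```c
/*
 * Minimal config-loading sketch. bsec_config_iaq and bsec_config_iaq_len
 * are assumed symbols from the config folder of your BSEC release; verify
 * the names against the generated source file you imported.
 */
#include "bsec_interface.h"
#include "bsec_datatypes.h"

extern const uint8_t  bsec_config_iaq[];    /* from generic_33v_300s_4d */
extern const uint32_t bsec_config_iaq_len;  /* assumed length symbol    */

bsec_library_return_t load_config(void)
{
    uint8_t work_buffer[BSEC_MAX_PROPERTY_BLOB_SIZE];

    /* Must be called after bsec_init() and before bsec_update_subscription() */
    return bsec_set_configuration(bsec_config_iaq, bsec_config_iaq_len,
                                  work_buffer, sizeof(work_buffer));
}
```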
For optimal power savings, it is even possible to completely turn off (or deep-sleep) your MCU/system. In this case BSEC is operated slightly differently: mainly, you would need to add the extra steps of saving/restoring BSEC's state to/from NVM between samples, and keep track of an absolute timestamp. This approach was for instance discussed in another thread, but for another platform.
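A minimal sketch of that save/restore step, where nvm_write()/nvm_read() are hypothetical stand-ins for your storage driver:

```c
/*
 * Minimal sketch of saving/restoring BSEC state across deep sleep.
 * nvm_write()/nvm_read() are hypothetical NVM helpers.
 */
#include "bsec_interface.h"
#include "bsec_datatypes.h"

extern void     nvm_write(const uint8_t *buf, uint32_t len); /* placeholder */
extern uint32_t nvm_read(uint8_t *buf, uint32_t max_len);    /* placeholder */

void save_state_before_sleep(void)
{
    uint8_t  state[BSEC_MAX_STATE_BLOB_SIZE];
    uint8_t  work[BSEC_MAX_PROPERTY_BLOB_SIZE];
    uint32_t state_len = 0;

    if (bsec_get_state(0, state, sizeof(state), work, sizeof(work),
                       &state_len) == BSEC_OK) {
        nvm_write(state, state_len);
    }
}

void restore_state_after_wakeup(void)
{
    uint8_t  state[BSEC_MAX_STATE_BLOB_SIZE];
    uint8_t  work[BSEC_MAX_PROPERTY_BLOB_SIZE];
    uint32_t state_len = nvm_read(state, sizeof(state));

    if (state_len > 0) {
        /* Call after bsec_init(); skip if no state has been stored yet */
        (void)bsec_set_state(state, state_len, work, sizeof(work));
    }
}
```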
@piconomix wrote:
Why must the time stamps be in 64-bit nanosecond resolution? It feels like overkill and a waste to me on a 32-bit architecture. Why would a 32-bit millisecond time stamps not suffice?
I believe that is just a matter of handling overflows somewhere, and I guess BSEC expects it to be handled by the host. BSEC relies on accurate timings for optimal performance, and with absolute/continuous timestamps it can raise appropriate errors/warnings if violations are detected (as seen above). With an unsigned 32-bit integer, assuming the timestamp is in milliseconds, one can only count up to (2^32 - 1)/1000/3600/24 ≈ 49.7 days, meaning the counter would overflow before two months of continuous operation.
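As an illustration, handling that rollover on the host could look like the following sketch, where get_tick_ms() is a placeholder for your platform's 32-bit millisecond tick (this also implements the get_timestamp_ns() helper assumed earlier):

```c
/*
 * Minimal sketch of building a monotonic 64-bit nanosecond timestamp
 * from a 32-bit millisecond tick, handling the ~49.7-day rollover on
 * the host. get_tick_ms() is a hypothetical platform function.
 */
#include <stdint.h>

extern uint32_t get_tick_ms(void); /* placeholder: wraps every 2^32 ms */

int64_t get_timestamp_ns(void)
{
    static uint32_t last_tick = 0;
    static uint64_t overflows = 0;

    uint32_t tick = get_tick_ms();
    if (tick < last_tick) {    /* 32-bit counter wrapped around */
        overflows++;
    }
    last_tick = tick;

    uint64_t ms = (overflows << 32) | tick;
    return (int64_t)(ms * UINT64_C(1000000)); /* ms -> ns */
}
```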