Previous Abstract Return to Session C5

**Sensor Scheduling by Greedily Maximizing Shannons-per-Joule**

*Maxwell Lichtenstein and Gabriel Elkaim, University of California Santa Cruz*

**Location:** Atrium Ballroom

Alternate Number 2

In certain types of wireless sensor nodes (WSNs), such as animal tracking tags and certain medical sensors, batteries cannot be replaced, so the lifetime of the WSN is limited. The designers of these devices, and their users, face many difficult decisions in crafting a sensor scheduling policy. These decisions negotiate tradeoffs between power and the resolution of the resulting data, since sensors generally use less power when generating data at lower frequencies. In theory, an experiment designer could determine the various physical signals they wished to measure (such as geographic location or blood pressure), select a target resolution for each signal, and then choose the lowest-power scheduling policy that produces data meeting those resolutions.

However, choosing such a scheduling policy can be complicated by a variety of issues. First, there may be mutual information between types of sensors. For example, accelerometry data contains information about changes in geographic location. In this case, an optimized sensor policy must account for the behavior of both types of sensor, and must incorporate a model describing the relationship between the two signals. A second complicating factor is that the power-vs-information tradeoff for a given sensor may not be clear.

Doubling the sampling rate of an accelerometer doubles the bitrate that sensor yields, but it does not necessarily double the rate of useful information, as the new measurements will be somewhat redundant. So is it worth the extra power? Third, in most systems, the information relationships between the sensor signals are non-linear, so different sampling rates may be appropriate in different situations.
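As a toy illustration of this redundancy (our sketch, not part of the abstract): if two consecutive samples are modeled as jointly Gaussian with correlation coefficient rho, their mutual information in shannons is -0.5*log2(1 - rho^2), so highly correlated samples mostly repeat information the policy has already paid for:

```python
import math

def mutual_information_gaussian(rho: float) -> float:
    """Mutual information (in shannons) between two jointly Gaussian
    samples with correlation coefficient rho."""
    return -0.5 * math.log2(1.0 - rho * rho)

# The more correlated consecutive samples are, the more of each new
# sample's nominal bits are redundant with the previous sample.
for rho in (0.0, 0.9, 0.99):
    print(f"rho={rho}: shared information = "
          f"{mutual_information_gaussian(rho):.2f} sh")
```

Under this assumed Gaussian model, independent samples (rho = 0) share no information, while samples at rho = 0.99 share nearly 3 shannons each, which is exactly the effect that makes naive rate-doubling a poor use of energy.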

Crafting an optimal policy would require a model of the full system, including the power costs of each sensor, the useful information each sensor yields at each power rate, and an information model relating the sensors to each other. In this paper, we outline a method for building such a model on a wildlife tracking tag, using an Extended Kalman Filter (EKF) and simple, datasheet-derived power models. We also use this model to build a greedy sensor scheduling algorithm based on the metric of shannons-per-joule, which we intend to test in the field. We first offer some motivation for this algorithm. Next, we develop a formal notation for the algorithm and its underlying models. We then describe some models appropriate to the application of a wildlife tracking tag. Finally, we describe a proposed field test of the resulting algorithm.

To motivate this discussion, we consider the problem of selecting a target sensing resolution for a GPS unit on a wildlife sensing tag. The simple approach is to pick a fixed sampling rate, but this is costly in conditions where that target rate has an exceptionally high energy cost, or an exceptionally low information yield. As an example of the former, consider a GPS unit that is occluded from satellite view. A rigid sensor scheduling algorithm might enable this GPS unit, insisting on a fix as soon as possible, and leave it enabled until the device happens to leave the occlusion. As an example of the latter, consider a GPS unit attached to a sleeping animal. A rigid sensor policy will conduct measurements at the same rate as during the animal's wakeful motion. In the first case, a measurement would be thoroughly informative, but prohibitively costly. In the second case, a measurement has an ordinary energy cost, but yields very little information (provided some sensor can detect that the animal is sleeping).

Intuitively, then, an effective design attempts to maximize the ratio of the information yielded by each measurement to its energy cost. To formalize this into a useful policy, we must quantify the information yielded by each measurement. Shannon entropy is a natural quantity to use for this purpose, describing average information content rather than pure bitrate. The unit of entropy is the "shannon": if a stream of data has all redundancy removed, the length of the result is its information content in shannons. A perfect scheduling policy would schedule measurements such that the stream of data from all sensors is minimally redundant. However, zero redundancy can be achieved by the undesirable policy of scheduling sensing events arbitrarily far apart, so some sort of target resolution is required.
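For a discrete distribution, the quantity measured in shannons is the familiar Shannon entropy. A minimal illustration of the unit (ours, not the abstract's):

```python
import math

def entropy_shannons(probs):
    """Shannon entropy of a discrete distribution, in shannons (bits)."""
    return -sum(p * math.log2(p) for p in probs if p > 0.0)

# A uniform choice among 4 outcomes carries exactly 2 shannons; a
# heavily skewed distribution carries far less, since its outcomes
# are largely predictable (i.e. redundant).
print(entropy_shannons([0.25] * 4))               # exactly 2.0
print(entropy_shannons([0.97, 0.01, 0.01, 0.01]))  # well under 1
```

This is the sense in which a predictable measurement (the sleeping animal above) yields few shannons even though it produces the same number of raw bits.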

To make these quantities well-defined, we need to establish three kinds of models. Shannon entropy is only well-defined in the context of a probabilistic model, which we call the "information model." Additionally, we must predict the information a given measurement would yield before it is conducted. Given an information model, this can be achieved with a model that predicts the result of a measurement on a given sensor, which we call the "sensing model." Finally, we must predict the energy required to perform a given measurement, which we call the "energy model." In formally describing these models, we assume our system is well-represented by some EKF. Our device's state, then, consists of the EKF's state vector, X, and its covariance matrix, P. We also assume that there is a set of continuous sensors {s_0, s_1, ..., s_n} on the device, where the sampling rate of each sensor can be adjusted. For the animal tracking tag discussed here, these sensors are an accelerometer, a gyroscope, and a magnetometer. We also assume a single discrete sensor, d, which in a tracking tag is the GPS (GPS units do operate in continuous mode, but in tracking tags they are generally turned off moments after acquiring an accurate fix, and so can be modeled as discrete sensors).

Each of these models takes as input some estimate of the device's state, which we denote T. For our purposes, T contains the EKF's state vector, X, its covariance matrix, P, and the time since the most recent GPS fix. The information model is expressed as a function I(T), which returns the magnitude of Shannon information contained in that state (in other words, the length of a fully compressed bitstring that can represent that state, according to a compression scheme whose properties we describe shortly). The sensing model for a sensor s_i is S(i, T), which estimates the most probable result of a measurement by sensor s_i. For the discrete sensor, the energy model is E(T), which estimates the most probable energy required to perform a discrete measurement. For continuous sensors, the energy model is P(i, r), the power required to run sensor i at rate r (for simplicity, we assume these sensors' behavior does not depend on T, which is reasonable for the IMU sensors in an animal collar). Finally, we define Tpost(S(i,T), i), which describes the X and P the EKF would have after updating with the measurement from sensor i.
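These interfaces can be made concrete with a deliberately tiny sketch. Everything below is our illustration, not the paper's implementation: the state is reduced to a scalar Gaussian, the observation function is the identity, and the constants (P0, the staleness coefficient, R) are invented for the example.

```python
import math
from dataclasses import dataclass

@dataclass
class TState:
    """Device state estimate T: EKF mean X and covariance P (scalars
    here for illustration), plus seconds since the last GPS fix."""
    x: float
    P: float
    t_since_fix: float

def info_model(T):
    """I(T): information in shannons, as relative entropy against an
    assumed minimum-information variance P0 (see the text below)."""
    P0 = 1e6  # broad "knows nothing" bound; an assumption of this sketch
    return 0.5 * math.log2(P0 / T.P)

def sensing_model(i, T):
    """S(i, T): most probable measurement, i.e. the EKF observation
    function h(X) evaluated at the mean (identity observation here)."""
    return T.x

def energy_model(T):
    """E(T): joules for one discrete (GPS) fix; assumed to grow
    linearly with fix staleness."""
    return 1.0 + 0.1 * T.t_since_fix

def t_post(z, i, T, R=4.0):
    """Tpost(S(i,T), i): state after a scalar Kalman update with
    measurement z and measurement noise variance R."""
    K = T.P / (T.P + R)
    return TState(T.x + K * (z - T.x), (1.0 - K) * T.P, 0.0)
```

Note how the pieces compose: t_post(sensing_model(i, T), i, T) shrinks P, which raises info_model, and the difference is the predicted information yield of the measurement.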

If these three models are established, we can estimate the shannons-per-joule yield of a measurement by the discrete sensor d: SPJ(T) = [I(Tpost(S(d,T), d)) - I(T)] / E(T).
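As a toy numeric check of this quantity (our illustration, using a scalar-Gaussian state and assumed constants, not the paper's models):

```python
import math

def info(P, P0=1e6):
    """I(T) for a scalar Gaussian state: relative entropy in shannons
    against an assumed broad minimum-information variance P0."""
    return 0.5 * math.log2(P0 / P)

def spj(P, R, energy_joules):
    """Shannons-per-joule of one discrete measurement: information
    gained by a scalar Kalman update with measurement noise variance R,
    divided by the measurement's energy cost."""
    P_post = (1.0 - P / (P + R)) * P  # posterior variance after update
    return (info(P_post) - info(P)) / energy_joules

# An uncertain state (large P) yields far more shannons per joule than
# a well-known one, so a greedy policy prefers to fix when uncertain.
print(spj(P=100.0, R=4.0, energy_joules=2.0))
print(spj(P=1.0, R=4.0, energy_joules=2.0))
```

Note that SPJ depends only on the *change* in I(T), so the arbitrary reference P0 cancels out of the comparison.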

For a continuous sensor, we note that the quantity shannons-per-joule is dimensionally equivalent to bandwidth-per-watt. We can then estimate the bandwidth-per-watt BPW(i, r, T) of a sensor i operating at rate r as follows: we propagate T forward by time 1/r using the ordinary propagation procedure of the EKF, calculate [I(Tpost(S(i,T),i)) - I(T)], and add the result to a running sum. After repeating up to a certain interval, say 1 second, we have an estimate of the information that would be yielded over 1 second of running at rate r. We then divide this by the power P(i, r) required to run the sensor at that rate.

Now we can write the following algorithm, which would run periodically:

1. For each sensor s_i in {s}, set its rate r_i to the r that maximizes BPW(i, r, T).
2. If SPJ(T) is greater than the largest BPW(i, r_i, T), enable sensor d; otherwise, disable sensor d.

This algorithm somewhat greedily selects the power policy that maximizes average shannons-per-joule at each policy update period. It is entirely dependent on the ability to create usefully predictive information, sensing, and energy models; we now describe the models we intend to use. We assume that there exists an effective EKF that performs sensor fusion to track the tag's position. The continuous power models P() are generally given in datasheets, and should be straightforward. The discrete energy model E() is more challenging, but the literature provides two simple models for estimating the energy cost of a GPS measurement. One notes that the time required to achieve a GPS fix is correlated with the time since the last fix, since both the GPS ephemeris and the location estimate degrade with time; if a constant power draw is assumed during fixing, this relationship yields a straightforward model. A second model divides the range of the tag into a grid, and records in each grid cell the average time required to obtain a fix.
Then, again assuming a constant power draw, the energy cost can be modeled from this average. We borrow these methods from the literature, though the parameters of these models should be calibrated to our specific device.

The EKF itself provides the information and sensing models, since X and P together describe a multidimensional probability density function (PDF) over the observable state space of the device. The mode of this PDF is simply X, so S(i, T) is equivalent to the observation function of the Kalman filter (usually denoted h(X)). If the PDF described by the Kalman filter were discrete, calculating I(T) would be a straightforward application of fundamental information theory formulas. Unfortunately, the Kalman filter represents a continuous PDF, for which Shannon entropy is undefined. However, the relative entropy between two PDFs on the same state space is defined, so we cast I(T) as the relative entropy between T and some ground state T_0. (For the Gaussian PDFs an EKF maintains, this relative entropy is available in closed form in terms of X, P, and the corresponding quantities of T_0.) To keep the definition of I() as intuitive as possible, we choose T_0 as a minimum-information bound, where, for example, the covariance matrix diagonals have magnitudes on the scale of the surface of the earth.

At this point, we have loosely described the full intended implementation of a greedy shannons-per-joule-maximizing algorithm for sensor scheduling. This approach is novel, and should be tested. In other work submitted to ION PLANS 2020, we describe an EKF for a tracking tag which is suitable for this algorithm. We intend to test the scheduling policy described here, and compare overall energy usage under this policy against more established approaches. This approach relies on a variety of assumptions, and represents a significant increase in complexity over existing methods.
However, it also allows a greater degree of in-field scheduling flexibility, which might give tags and other non-rechargeable WSNs the ability to make choices that significantly extend their lifetimes. In addition, it contributes to a somewhat sparse literature on modeling sensor scheduling using concepts from information theory.
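As a loose closing sketch of the periodic policy update described above (ours, not the paper's implementation; bpw and spj are toy stand-ins for the EKF-based estimates, with invented constants):

```python
import math

def bpw(i, r, T):
    """Toy BPW(i, r, T): diminishing information return with rate,
    against a roughly linear power cost. Stand-in for the EKF-based
    propagate-and-sum estimate described in the text."""
    return math.log2(1.0 + r) / (0.001 * r + 0.01)

def spj(T):
    """Toy SPJ(T): an assumed constant, for illustration only."""
    return 50.0

def update_policy(sensors, candidate_rates, T):
    """One periodic pass of the greedy shannons-per-joule policy:
    pick each continuous sensor's best rate, then enable the discrete
    sensor d only if a fix beats the best continuous option."""
    new_rates = {}
    best_bpw = 0.0
    for i in sensors:
        r_best = max(candidate_rates, key=lambda r: bpw(i, r, T))
        new_rates[i] = r_best
        best_bpw = max(best_bpw, bpw(i, r_best, T))
    enable_d = spj(T) > best_bpw
    return new_rates, enable_d

rates, enable_d = update_policy([0, 1], [1.0, 10.0, 100.0], None)
print(rates, enable_d)
```

With these toy models the middle rate wins for every sensor, and the discrete sensor stays disabled because its yield does not beat the best continuous option; a real deployment would recompute both estimates from the current EKF state at each update period.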
