Guest post by Faris Alfarhan*
Channel-dependent scheduling is commonly used in cellular systems. In LTE, orthogonal frequency division multiple access (OFDMA) in the downlink and single carrier frequency division multiple access (SC-FDMA) in the uplink allow scheduling to be performed orthogonally in both the frequency and time domains. Rather than simply averaging out fading through frequency diversity, frequency selective scheduling exploits the channel’s time and frequency selectivity to allocate valuable radio resources where they are most effective. The OFDMA and SC-FDMA shared channel transmissions in LTE offer great flexibility for integrating adaptive scheduling strategies. The minimum allocation unit is a resource block spanning 180 kHz in frequency and 0.5 ms in time. Downlink resource allocation relies on the channel quality indicator (CQI) reported by the user. For frequency selective scheduling to be applicable, the CQI must be reported for all of the carrier’s resource blocks.
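For concreteness, the short sketch below lists the standard resource block counts for the LTE channel bandwidths (values from 3GPP TS 36.101) and the per-resource-block CQI granularity that frequency selective scheduling relies on. It is only an illustrative reference, not part of the simulator described later.

```python
# Sketch: resource-block counts for standard LTE channel bandwidths (3GPP TS 36.101)
# and the CQI granularity that frequency selective scheduling relies on.
RB_BANDWIDTH_KHZ = 180            # 12 subcarriers x 15 kHz
SLOT_DURATION_MS = 0.5            # one resource block spans one 0.5 ms slot

RBS_PER_BANDWIDTH = {             # channel bandwidth (MHz) -> resource blocks
    1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100,
}

def cqi_values_per_report(bandwidth_mhz, per_rb=True):
    """CQI values the scheduler receives per report: one per RB, or a single wideband value."""
    return RBS_PER_BANDWIDTH[bandwidth_mhz] if per_rb else 1

print(cqi_values_per_report(10))  # 50 per-RB CQI values on a 10 MHz carrier
```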
The aim of this article is to evaluate the gain achieved from frequency selective scheduling (FSS) in LTE networks by means of statistical models and simulations. Many scheduling algorithms exist, each with its own objective; this article focuses on the widely known proportional fair scheduling policy. Proportional fairness strikes a balance between increasing the cell capacity and improving the cell-edge user experience. The algorithm favors two groups of users equally: cell-edge users with the worst radio and interference conditions, and cell-center users who offer the largest increase in spectral efficiency. The remaining users, i.e. cell-edge users with decent radio conditions and cell-center users with limited potential for spectral efficiency gains, are assigned the remaining resource blocks. The algorithm identifies the cell-center users offering the largest spectral efficiency increase by their potential to reach higher-order LTE modulation and coding schemes when assigned better quality resource blocks.
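As a reference for readers, the snippet below sketches the textbook per-resource-block proportional fair metric: each resource block is granted to the user maximizing the ratio of its instantaneous achievable rate to its exponentially averaged throughput. It is a minimal illustration of the policy, not the exact logic of the simulator; the rate matrix, averaging time constant, and initial throughputs are illustrative assumptions.

```python
import numpy as np

def pf_schedule(inst_rate, avg_thr, tc=1000.0):
    """
    Minimal per-RB proportional fair pass.
    inst_rate: (n_users, n_rbs) achievable rate of each user on each resource block
    avg_thr:   (n_users,) exponentially averaged throughput per user
    Returns the user chosen for each RB and the updated throughput averages.
    """
    n_users, n_rbs = inst_rate.shape
    metric = inst_rate / avg_thr[:, None]      # PF metric: r_k / R_k per RB
    chosen = metric.argmax(axis=0)             # best user on each resource block

    served = np.zeros(n_users)
    for rb, user in enumerate(chosen):
        served[user] += inst_rate[user, rb]    # rate actually granted this TTI

    avg_thr = (1 - 1 / tc) * avg_thr + (1 / tc) * served
    return chosen, avg_thr

# Example: 4 users, 6 resource blocks, illustrative random rates
rng = np.random.default_rng(0)
rates = rng.exponential(1.0, size=(4, 6))
avg = np.full(4, 0.1)
chosen, avg = pf_schedule(rates, avg)
print(chosen)
```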
In order to model the statistical gain from frequency selective scheduling accurately, the gain is evaluated using an LTE-FDD system-level simulator over multiple simulation runs. Within each run, users are dropped randomly and uniformly across the simulation area, and a realization of the channel between each user and each cell is generated; the channel gain encompasses both small-scale and large-scale fading components. The FSS gains are evaluated by comparing the users’ carrier to interference plus noise ratios (CINRs) under frequency selective resource allocation to the CINRs under a random (round-robin) scheduler, with both schedulers allocating the same number of resource blocks. Both the downlink and the uplink were evaluated and show similar FSS gains, although the simulation results presented in this article are limited to the downlink. A number of factors contribute to the amount of gain or loss obtained from frequency selective scheduling: mainly the FSS scheduling algorithm, the cell resource load, the traffic characteristics and the number of users, the user’s speed, and the radio environment. To examine these factors efficiently, each one is analyzed separately under the proportional fair FSS algorithm while keeping the other factors fixed.
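To give a feel for the kind of comparison performed, the toy Monte Carlo sketch below contrasts the CINR of a channel-aware pick of one resource block with the CINR of a randomly assigned one, under i.i.d. Rayleigh fading and a fixed mean CINR. It ignores multi-user contention, pathloss, and interference geometry, so it overstates the achievable gain; the system-level simulator described above models all of these effects.

```python
import numpy as np

# Toy comparison: best-RB selection vs. a randomly assigned RB for a single user.
# Assumptions (mine, for illustration): i.i.d. Rayleigh fading per RB, 5 dB mean CINR,
# one RB per user, no contention between users and no interference geometry.
rng = np.random.default_rng(1)
n_drops, n_rbs, mean_cinr_db = 10_000, 50, 5.0

fading = rng.exponential(1.0, size=(n_drops, n_rbs))       # |h|^2 per resource block
cinr_db = mean_cinr_db + 10 * np.log10(fading)

fss_cinr = cinr_db.max(axis=1)                              # channel-aware pick
rr_cinr = cinr_db[np.arange(n_drops), rng.integers(0, n_rbs, n_drops)]  # random pick

print(f"median selection gain: {np.median(fss_cinr - rr_cinr):.1f} dB")
```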
A. Speed and CQI Accuracy
Frequency selective scheduling relies on the channel state information fed back to the eNode-B scheduler; the more accurate this information is, the more effectively the scheduler can meet its objective. The subscriber’s speed determines the degree of time selectivity in the channel, i.e. the channel coherence time. As the speed increases, the Doppler spread increases and the channel’s coherence time decreases. Consequently, fast fading causes the user’s CQI and interference conditions to change during the scheduling delay associated with the reported CQI, so the usefulness of the reported CQI estimate is limited to short scheduling delays. As the scheduling delay increases, the frequency selective scheduling process converges to a random scheduling scheme. LTE measurements indicate a typical delay of about 10 to 15 ms between measuring the CQI and applying it. Figure 1 shows the FSS CINR gains in dB achieved with the proportional fair FSS algorithm for three mobile speeds: 0, 3, and 30 km/h. Note that perfect CQI reporting is assumed for stationary users, so the shape of their gains directly follows the algorithm’s objective. The other simulation factors are fixed: a cell load of 100% and one resource block per user. The scheduling delay is not varied in these simulations and is fixed at 15 ms.
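A quick back-of-the-envelope check shows why the reported CQI ages so quickly with speed relative to the 15 ms delay. The sketch assumes a 2 GHz carrier (not stated in the article) and the common coherence-time approximation Tc ≈ 9 / (16·π·fd); the stationary 0 km/h case is omitted since its Doppler shift is zero and perfect CQI is assumed for it anyway.

```python
# Rough coherence-time check against the ~15 ms CQI delay quoted above.
# Assumes a 2 GHz carrier frequency (illustrative, not from the article).
from math import pi

C = 3e8               # speed of light, m/s
FC = 2.0e9            # assumed carrier frequency, Hz
CQI_DELAY_MS = 15.0

for speed_kmh in (3, 30):
    v = speed_kmh / 3.6                    # speed in m/s
    fd = v * FC / C                        # maximum Doppler shift, Hz
    tc_ms = 1e3 * 9 / (16 * pi * fd)       # coherence time approximation, ms
    verdict = "still useful" if tc_ms > CQI_DELAY_MS else "largely stale"
    print(f"{speed_kmh:3d} km/h: fd = {fd:5.1f} Hz, Tc ≈ {tc_ms:5.1f} ms -> CQI {verdict}")
```

At 3 km/h the coherence time (roughly 30 ms under these assumptions) still exceeds the reporting delay, while at 30 km/h it drops to a few milliseconds, which is consistent with the degradation seen in Figure 1.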
B. Cell Resource Load
A fully loaded cell implies that all available resource blocks are used, good and bad quality alike. If the cell is not fully loaded, the scheduler can avoid allocating the poor-quality resource blocks. Simulation results show that reducing the cell load mitigates the FSS losses that, in fully loaded cells, are borne by the users the FSS algorithm does not favor.
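The sketch below illustrates this effect under the same illustrative i.i.d. Rayleigh assumption as before: at partial load the scheduler simply leaves the worst resource blocks unused, which raises the average quality of the blocks that are actually allocated.

```python
import numpy as np

# Sketch of the load effect: keep only the best `load`-fraction of resource blocks.
# Assumptions (mine): i.i.d. Rayleigh fading per RB and a 5 dB mean CINR.
rng = np.random.default_rng(2)
n_drops, n_rbs = 5_000, 50
cinr_db = 5.0 + 10 * np.log10(rng.exponential(1.0, size=(n_drops, n_rbs)))

for load in (1.0, 0.75, 0.5):
    n_used = int(load * n_rbs)
    best = np.sort(cinr_db, axis=1)[:, -n_used:]          # drop the worst RBs first
    print(f"load {load:4.0%}: mean CINR of used RBs = {best.mean():.1f} dB")
```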
C. Allocated Bandwidth per User
The number of users within the service area of the cell and the characteristics of the traffic demand determine the bandwidth allocated per user, expressed as a number of resource blocks. These parameters have a considerable impact on the gains achieved from FSS. FSS gains decrease as the number of resource blocks per user increases, because users assigned narrower slices of bandwidth offer more potential for multi-user frequency selectivity. As per-user allocations widen, the channel power averaged over each allocation becomes less selective, and the scheduler is less likely to find the user with the best scheduling opportunity. For this reason, narrowband services such as voice over IP have greater potential for FSS gains when scheduled dynamically.
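This averaging effect can be sketched as follows, again assuming i.i.d. Rayleigh fading per resource block and simple power-domain averaging over each allocation (real effective-SINR mappings such as EESM behave similarly): as the per-user allocation widens, the gain of the best allocation over an arbitrary one shrinks.

```python
import numpy as np

# Sketch of the bandwidth effect: wider per-user allocations average over more RBs,
# so selecting the best allocation yields less gain. Assumptions (mine): i.i.d.
# Rayleigh fading per RB and linear power-domain averaging over each allocation.
rng = np.random.default_rng(3)
n_drops, n_rbs = 5_000, 50
power = rng.exponential(1.0, size=(n_drops, n_rbs))        # per-RB channel power

for rbs_per_user in (1, 4, 12, 25):
    n_groups = n_rbs // rbs_per_user
    groups = power[:, :n_groups * rbs_per_user].reshape(n_drops, n_groups, rbs_per_user)
    avg = groups.mean(axis=2)                              # average power per allocation
    gain_db = 10 * np.log10(avg.max(axis=1) / avg[:, 0])   # best group vs an arbitrary one
    print(f"{rbs_per_user:2d} RBs/user: median selection gain = {np.median(gain_db):.1f} dB")
```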
Frequency selective scheduling benefits both FDD and TDD LTE networks. FDD has the advantage of more up-to-date CQI feedback from the subscriber, so it copes better with severe multipath or high user mobility. TDD splits the LTE radio frame in time between the uplink and downlink, which results in a longer CQI reporting delay that depends on the TDD UL/DL frame configuration. On the other hand, TDD carries the uplink and downlink on the same center frequency, so channel reciprocity gives it a slight edge over FDD in the accuracy of the reported channel state. To conclude, frequency selective scheduling offers great potential in LTE, but it is constrained by the limiting factors discussed in this article. In all cases, user mobility degrades FSS performance because of the CQI reporting delay.
* Faris is a wireless systems engineer in the research and specifications team at InfoVista. His domains of interest and expertise include radio access network design and optimization, performance simulations, and advanced technologies.
Nice article. Is there any difference for the results depending on indoor and outdoor channels?
The simulation was run for an outdoor cellular network. The results would be somewhat different for indoor channels where the channel model and the fading distribution would differ from the Rayleigh model used in the simulation. //Faris