**Applicable Versions:** NetSim Standard, NetSim Pro

NetSim calculates the PHY rate per the 3GPP formula, which is explained in the infographic below.

A simple, approximate way to think of the above formula is

*Data Rate = BW * Q * R * N * (1 - OH) ... (1)*

where Data Rate is the per-carrier PHY rate, BW is the bandwidth allocated to the particular UE, Q is the modulation order, R is the code rate, N is the number of MIMO layers, and OH is the overhead. In NetSim, OH is usually taken as 2/14, since there are 2 control symbols in a slot spanning 14 symbols.
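As a sketch, equation (1) can be coded up directly. The parameter values below are illustrative, not taken from the article's scenario:

```python
# Approximate per-carrier PHY data rate per equation (1):
# Data Rate = BW * Q * R * N * (1 - OH)

def phy_rate_mbps(bw_mhz, q_bits, code_rate, layers, overhead=2/14):
    """Per-carrier PHY rate [Mbps]; overhead defaults to 2 control
    symbols out of a 14-symbol slot, as assumed in the text."""
    return bw_mhz * q_bits * code_rate * layers * (1 - overhead)

# Illustrative inputs: 100 MHz, 64QAM (Q=6), R=0.75, 2 MIMO layers
rate = phy_rate_mbps(100, 6, 0.75, 2)
print(round(rate, 2))  # 771.43
```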

How is the formula in (1) equivalent to the 3GPP formula? Let's examine the variables.

- Q, R, N, and (1-OH) are common to both
- The scaling factor in NetSim is assumed as 1
- The data rate in (1) is per carrier and needs to be summed over multiple carriers

When operating in TDD mode, the above computation would give the two-way (downlink + uplink) data rate. Therefore the downlink data rate would be

*DL-Rate = Data Rate * DL-Fraction ... (2)*

While BW, OH, and N are based on user inputs in NetSim, Q and R depend on the modulation and coding scheme (MCS). The MCS, i.e., Q and R, is chosen by looking up the 3GPP spectral-efficiency-to-MCS table, assuming the ideal Shannon rate whereby

*Spectral-Efficiency = log2(1+SINR[linear]) ... (3)*

The expression thus becomes

*DL/UL Data Rate [Mbps] = BW [MHz] * Q [bits/symbol] * R * N * (1 - OH) * (DL/UL fraction) ... (4)*

Now, in 5G, the transmitter adapts its PHY layer MCS depending on the receiver's SINR. The SINR in turn depends on the received power, which is transmit-power less path loss. In NetSim users can record the radio measurements to obtain the SINR and MCS (for each UE) over time if the channel is time-varying.
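The SINR-to-MCS mapping can be sketched as follows. The table here is a hypothetical four-row excerpt in the shape of 3GPP TS 38.214 Table 5.1.3.1-1, not the full table NetSim uses; the selection rule is the Shannon-rate lookup of equation (3):

```python
import math

# Abridged (MCS index, Q [bits/symbol], code rate R, spectral efficiency)
# rows in the shape of 3GPP TS 38.214 Table 5.1.3.1-1; NetSim uses the
# complete standard table.
MCS_TABLE = [
    (7,  2, 0.5137, 1.0273),
    (13, 4, 0.4785, 1.9141),
    (20, 6, 0.5537, 3.3223),
    (28, 6, 0.9258, 5.5547),
]

def select_mcs(sinr_db):
    """Pick the highest MCS whose spectral efficiency does not exceed
    the ideal Shannon efficiency log2(1 + SINR[linear])."""
    sinr_linear = 10 ** (sinr_db / 10)
    shannon_se = math.log2(1 + sinr_linear)
    chosen = None
    for mcs, q, r, se in MCS_TABLE:  # table is sorted by efficiency
        if se <= shannon_se:
            chosen = (mcs, q, r)
    return chosen

print(select_mcs(6.0))   # -> (13, 4, 0.4785): moderate SINR, 16QAM
print(select_mcs(20.0))  # -> (28, 6, 0.9258): high SINR, 64QAM
```

At low SINR the lookup falls back to a low-order, low-rate MCS; as SINR improves the transmitter steps up the table, which is the link adaptation behavior described above.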

**Example**

Consider the following scenario where we have downlink traffic from the 1 gNB to 2 UEs. The pathloss models are set such that UE1 sees MCS13 and UE2 sees MCS7. Additionally, UE1 has tx-rx antennas 1x1 while UE2 has 2x2; the gNB is 2x2. Thus UE1 sees 1 MIMO stream (layer) while UE2 sees 2 MIMO streams (layers). The scheduling algorithm is round-robin.

*The PHY data rate calculations for UE 1.*
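A hedged worked calculation for UE 1, following equations (1), (2), and (4): the total bandwidth, TDD downlink fraction, and round-robin share below are assumed values for illustration only. The article's table was produced with NetSim's actual configuration, so this number need not match it.

```python
# Assumed inputs (NOT the article's exact NetSim configuration):
BW_TOTAL_MHZ = 100        # assumed total channel bandwidth
RR_SHARE     = 0.5        # round robin between 2 UEs -> half the resources
Q_MCS13      = 4          # 16QAM modulation order for MCS 13
R_MCS13      = 490 / 1024 # code rate for MCS 13 (TS 38.214 Table 5.1.3.1-1)
LAYERS_UE1   = 1          # 1x1 antennas -> single MIMO layer
OH           = 2 / 14     # 2 control symbols per 14-symbol slot
DL_FRACTION  = 0.75       # assumed TDD downlink fraction

ue1_rate = (BW_TOTAL_MHZ * RR_SHARE * Q_MCS13 * R_MCS13
            * LAYERS_UE1 * (1 - OH) * DL_FRACTION)
print(f"UE 1 DL PHY rate ~ {ue1_rate:.2f} Mbps")  # ~ 61.52 Mbps
```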

**Results and discussion**

We run a simulation in NetSim per the above scenario and obtain the throughput values tabulated below.

| | Application throughput (simulation) [Mbps] | PHY Data Rate (Analytical) [Mbps] |
|---|---|---|
| UE 1 | 95.50 | 101.69 |
| UE 2 | 109.85 | 116.80 |

The application layer throughput would be

*DL-App-Throughput = DL-Data-Rate * (App-Layer-Packet-Size / PHY-Layer-Packet-Size) ... (5)*
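The application-throughput relation above is a one-line scaling. The packet sizes below are illustrative assumptions, not the sizes NetSim computed for this scenario:

```python
# Application throughput as the PHY rate scaled by the payload ratio.

def dl_app_throughput(dl_phy_rate_mbps, app_bytes, phy_bytes):
    """DL-App-Throughput = DL-Data-Rate * (app size / PHY size)."""
    return dl_phy_rate_mbps * app_bytes / phy_bytes

# E.g. a 1460 B application payload carried in ~1550 B at the PHY (assumed)
print(round(dl_app_throughput(101.69, 1460, 1550), 2))  # 95.79
```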

The computation of the PHY layer packet size is complex, as several layers add overhead. The transport layer (UDP) contributes 8 B and the network layer (IP) adds 20 B. Above the MAC, the SDAP header contributes 1 B and the PDCP header adds 16 B. At this point, the packet size is the application layer packet plus 45 B.

The MAC layer in 5G further processes these packets, fitting them into transport blocks (TBs). The TBs are then divided into code blocks (CBs), which are grouped into code block groups (CBGs) for transmission over the air. The sizes of the TB and CB depend on various parameters, and additional overheads are incurred during this process.

As a result, it is difficult to give a simple analytical formula for the PHY layer packet size. A reasonable estimate is a 5-10% reduction from the PHY rate to the application throughput, which is what we observe when comparing the simulation results with the analytical predictions in the table above.
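The fixed header overheads above the MAC can be tallied directly; the TB/CB/CBG segmentation overheads described above are deliberately not modeled here:

```python
# Header bytes added above the application payload, per the text.
HEADERS_B = {
    "UDP":  8,
    "IP":   20,
    "SDAP": 1,
    "PDCP": 16,
}

app_payload_b = 1460  # illustrative application packet size
pre_mac_size_b = app_payload_b + sum(HEADERS_B.values())
print(pre_mac_size_b)  # 1505 = app payload + 45 B of headers
```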

The above discussion assumes a conservative MCS is selected, ensuring a Block Error Rate (BLER) of zero. However, if a more aggressive MCS is chosen, which typically yields higher throughput but also a higher target BLER, or t-BLER (e.g., 5% or 10%), the computation must account for this increased BLER.

**Useful links**

1. Overview of NetSim 5G library: https://tetcos.com/5g.html

2. NetSim 5G documentation (v14.0): https://www.tetcos.com/downloads/v14/5G-NR.pdf