Abstract
This document provides information on how Host Policy Maps work in both Symmetric and Asymmetric transmission rate implementations.
Terminology and Acronyms
• AR – Available Rate
• BITW – Bump in the Wire
• CAC – Call Admission Control
• DPDK – Data Plane Development Kit
• DPI – Deep Packet Inspection
• EFC – Egress Flow Class
• EMAP – Egress Policy Map
• FACL – Flow Access Control List
• FR – Fixed Rate
• HE – Host Equalization
• HMAP – Host Policy Map
• IFD – Intelligent Flow Delivery
• IFC – Ingress Flow Class
• IMAP – Ingress Policy Map
• NPE – Network Performance Enforcement
• P2P – Peer to Peer
• QoS – Quality of Service
• STM – Saisei Traffic Manager
• TR – Total Rate
• VoIP – Voice over IP
• VM – Virtual Machine, a software-based virtual guest or virtual appliance
• Hypervisor – a hardware abstraction layer enabling multiple VM guests to share the resources of a hardware platform, generally an x86 COTS server.
Introduction
The intention of this document is to explain the fundamental usage of Host Policy Maps within a defined configuration for the Saisei Traffic Manager.
Host Policy Maps are merely an Extension of Egress Policy Maps and act upon an individual host while the policies within Egress Policy Maps act upon a group of hosts.
Egress Policies and Policy Maps
The process of Classification takes all Ingress Packets on a particular interface, filters those packets according to a user-defined set of filters known as Ingress Flow Classes, and channels the resulting packets into one or more bandwidth partitions that exist within the Egress Policy Map assigned to the Egress Interface. This process allows a user to define different bandwidth partitions to handle traffic types appropriately, whether Best Effort or Guaranteed, with or without Priorities.
The STM allows several different types of bandwidth partition to be defined based on two core principles. The core definitions can be viewed as “Best Effort” and “Guaranteed” and can be modified by weighting bandwidth allocation by Priority or in STM terms through the Rate Multiplier value. Additionally bandwidth in all Bandwidth Partitions can be shared according to Flows, Hosts and Children enabling an almost infinite level of flexibility to share the available bandwidth.
By default ALL Bandwidth Partitions will start life as Best Effort and Flow Equalized.
What does Best Effort mean?
A single Best Effort Partition can only use the Configured Bandwidth
A Best Effort Bandwidth Partition defines the absolute maximum bandwidth that ALL flows using the partition can use. So if the partition is created with 50Mbps Upstream and Downstream, then no matter what bandwidth is available on an interface, once the 50Mbps limit is reached the users are capped at this Maximum value. More flows arriving to use this partition will cause the actual rate for individual flows to be reduced accordingly.
Multiple Best Effort Partitions sharing the Configured Bandwidth Equally
Multiple Best Effort Bandwidth Partitions can be created, each with its own limit. Here again the partitions share the available bandwidth equally if the Rate Multiplier is left at the default value of 1. Ultimately the bandwidth is actually shared by flow count per partition, so if one partition has more flows than another, the one with the most flows will get a higher overall share of the available bandwidth when the multiplier is set to 1.
The next step when using Best Effort Bandwidth Partitions is to set different Priorities using the Rate Multiplier setting. As an example, and assuming a similar number of flows or hosts in each partition, we could set 4 priority levels using 1, 2, 4 and 8 as the multipliers, which would mean that on a 1Gbps Interface the partitions would receive approximately 67Mbps, 133Mbps, 267Mbps and 533Mbps when each partition has a similar number of flows.
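Assuming purely proportional sharing by multiplier (a simplification of the STM scheduler, for illustration only), the split above can be computed as:

```python
def share_by_multiplier(link_mbps, multipliers):
    """Split a link's bandwidth in proportion to each partition's Rate
    Multiplier, assuming every partition carries a similar number of flows."""
    total = sum(multipliers)
    return [link_mbps * m / total for m in multipliers]

# Four priority levels (1, 2, 4, 8) on a 1Gbps interface
shares = share_by_multiplier(1000, [1, 2, 4, 8])
print([round(s) for s in shares])  # -> [67, 133, 267, 533]
```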
Multiple Best Effort Partitions sharing the Configured Bandwidth by Priority
Now that we have covered Best Effort, what does "Guaranteed" mean from a Bandwidth Partition definition viewpoint, and more specifically how does it differ from Best Effort? You will recall that a Best Effort partition, when defined, sets an Absolute Maximum bandwidth cap on how much bandwidth can be used by the flows and hosts allocated to it.
Guaranteed Bandwidth Partitions on the other hand define a Guaranteed Minimum amount of bandwidth that can be used. This means that whenever such a partition is able to, it can use all of its defined bandwidth, with the additional benefit that when extra bandwidth is available the flows and hosts are allowed to use it.
This allows the maximum use of all bandwidth at all times: when more Best Effort flows arrive to use the excess or free bandwidth, the guaranteed group of flows will be slowed to their guaranteed level. If on the other hand the flows and hosts in the guaranteed partition cannot use the assigned bandwidth, it will be redirected to the Best Effort partition until more flows or hosts enter the guaranteed partition, thereby maximizing the overall throughput once again.
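The give-and-take described above can be sketched as follows. This is a simplified model with hypothetical numbers, not the STM's actual scheduler: the guaranteed partition keeps at least its configured floor whenever it can use it, and hands any idle bandwidth to the best-effort pool:

```python
def allocate(link, guaranteed_min, guaranteed_demand, best_effort_demand):
    """Return (guaranteed_share, best_effort_share) for one link.

    The guaranteed partition keeps at least its minimum when it can use it;
    any bandwidth it leaves idle is redirected to the best-effort pool."""
    g = min(guaranteed_demand, max(guaranteed_min, link - best_effort_demand))
    be = min(best_effort_demand, link - g)
    return g, be

# Guaranteed flows mostly idle -> best effort borrows the spare bandwidth
print(allocate(100, 40, 10, 95))  # -> (10, 90)
# Both sides busy -> guaranteed is held at its 40Mbps floor
print(allocate(100, 40, 80, 95))  # -> (40, 60)
```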
Guaranteed and Best Effort Partitions sharing the Configured Bandwidth
Bandwidth Partitions can be modified by using other attributes such as: -
- Host Equalization – when this is configured it converts an individual partition from handling individual flows to handling hosts. This causes the host count to be tracked and changes how bandwidth is allocated to the hosts and flows using the partition. As the name suggests, bandwidth is shared equally between all hosts using the partition regardless of flow count, and where hosts do not use their allotted bandwidth it is reallocated to the remaining hosts to maximize the utilization of the available bandwidth. Host Equalization should be considered the default operational setting.
- Child Equalization – is used to treat ALL child policies equally: if a parent policy has 5 children and there is 200Mbps available, it is offered to the children equally by allocating 40Mbps per child regardless of how many flows or hosts are active in each child. As always, when the STM recognizes that bandwidth being offered is not being used, it is offered to the other children to once again maximize throughput. Child equalization is typically used to share bandwidth between sub-interfaces such as VLANs.
- Parent – by default an Egress Policy Map is given the Interface as its parent, since each interface is given its own bandwidth value which may differ from the actual link speed. This makes it necessary for all Egress Policies to be controlled by the interface configuration, in effect setting up "Parental" control; in this situation the interface rate is channelled through a policy entry defined as the "default" Egress Flow Class. This feature allows a configuration hierarchy to be defined, enabling many other avenues of control.
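The Child Equalization behaviour described above – equal offers, with unused bandwidth reoffered to children that can still use it – can be sketched as a simple water-filling loop (an illustration, not the STM's internal algorithm):

```python
def child_equalize(parent_mbps, demands):
    """Offer the parent's bandwidth equally to every child, then keep
    reallocating any unused share to children that can still use it."""
    alloc = [0.0] * len(demands)
    remaining = float(parent_mbps)
    active = [i for i, d in enumerate(demands) if d > 0]
    while remaining > 1e-9 and active:
        share = remaining / len(active)
        remaining = 0.0
        still_hungry = []
        for i in active:
            take = min(share, demands[i] - alloc[i])
            alloc[i] += take
            remaining += share - take           # unused offer goes back in the pool
            if demands[i] - alloc[i] > 1e-9:
                still_hungry.append(i)
        active = still_hungry
    return alloc

# 200Mbps parent, 5 children: two light users free up bandwidth for the rest
print(child_equalize(200, [10, 20, 100, 100, 100]))
```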
Host Policies & Policy Maps
From an overall usage viewpoint we can see that Egress Policies within Egress Policy Maps act upon groups of hosts and flows, with the available bandwidth shared according to the bandwidth allocated to the policies in use, according to guarantees and priorities. It is worth stressing that there are no hard-defined maximums for any specific host or user within the Egress Policy mechanism.
This is where Host Policy Maps come in: they are specific to individual hosts, and therefore users, and can in fact apply maximum upstream and downstream rate limits to each host.
The main difference in the creation of a Host Policy Map is that it does in fact contain vital attributes that are set to create limits for individual hosts. Once configured Host Policies are added using Egress Flow Classes in the same way that Egress Policies are added to Egress Policy Maps, except in this case the definitions are for how flows should be handled on a per host basis.
The actual policies behave in the same way as defined above for Egress Policies in that they can be Best Effort or Guaranteed and weighted using the Rate Multiplier field. In reality it now becomes possible to guarantee VoIP for an individual host while limiting, say, P2P or Streaming traffic.
Multiple Host Policy Maps can be created, each with different host rate caps, and once created they can be allocated to hosts through several different methods, the most useful being automatic application by the underlying STM processes.
One key fact to understand prior to building a Host Policy Map is that, like all other control functions within the STM, bandwidth MUST be allocated to the Host Policy Map before it can be allocated to the Host Policies contained within it.
The bandwidth partition for this is known as the root efc partition and MUST exist in the Egress Policy Map assigned to each physical interface. This root partition MUST be Host Equalized to effectively share the available bandwidth between ALL hosts using a Host Policy Map.
Multiple root efc partitions can be created to define congestion points within the managed network; these congestion points, perhaps known as Access Points, will each have a collection of hosts attached to them. These bandwidth partitions MUST also be contained in the Egress Policy Map assigned to a physical or sub-interface and MUST be Host Equalized, with NO traffic type classified to them directly.
All Host Policy Maps should be created to use a default root efc bandwidth partition with the actual root being configured to the host when the Host Policy Map is assigned to the host.
The key attributes contained within a host's parameter set are the policy and the root efc.
Bandwidth Partitions and Control Feedback
Building a Configuration that uses Host Policy Maps
To use Host Policy Maps it is necessary to configure a number of items before turning on Host Policy Usage.
WARNING: - Host Policy Maps are enabled under the “running” entry in the GUI Navigation tree and once enabled the entire STM will run in Host Policy Mode ONLY – EVERY host will at the very least be given the default Host Policy Map if not otherwise configured. Once configured the configuration MUST be saved and the STM RELOADED. To revert to Egress Policy Map mode it is necessary to disable Host Policies, remove the default host policy, save the configuration and RELOAD the STM. Failure to reload after enabling or disabling the feature will have unpredictable effects on overall throughput, flow and host rates.
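The document enables the mode through the GUI; the same change could in principle be scripted against the STM's REST interface. The path and field names below ("use_host_policies", "default_host_policy") are illustrative assumptions only – check the STM REST documentation before relying on them:

```python
import json
import urllib.request

def host_policy_payload(enable, default_policy_map=None):
    """Build the attribute set to PUT against the 'running' object.
    Field names 'use_host_policies'/'default_host_policy' are ASSUMED."""
    body = {"use_host_policies": enable}
    if enable:
        # EVERY host gets at least this plan once the mode is on
        body["default_host_policy"] = default_policy_map
    return body

def apply_running(base_url, payload):
    """PUT the payload; afterwards the configuration MUST still be saved
    and the STM RELOADED, exactly as the warning above states."""
    req = urllib.request.Request(
        base_url + "/configurations/running",   # assumed path
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    return urllib.request.urlopen(req)

print(host_policy_payload(True, "hp2"))
# -> {'use_host_policies': True, 'default_host_policy': 'hp2'}
```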
The best way to see how Host Policy Maps work is through an example. This one is based on ISP networks where individual users purchase a Rate Plan and access the provider network through Access Points.
For this example we will assume that the congestion points to be controlled in the network are the Access Points, the Uplink to the Internet and the User Rate Plans, which in Host Policy Map terms means the users attach to the Access Points, and the Access Points thus become the root partitions.
So for the example we have:
- Internet link – 100Mbps
- Three Access Points
- ap1 – Max throughput 20Mbps
- ap2 – Max throughput 30Mbps
- ap3 – Max throughput 100Mbps
- Three Host Policy Maps – Rate Plans
- hp1 – Max upstream 1.5Mbps, downstream 3Mbps
- hp2 – Max upstream 2.5Mbps, downstream 5Mbps
- hp3 – Max upstream 5Mbps, downstream 10Mbps
- Three policies per plan
- p2p – no more than 10% of the Maximum plan rate
- streaming – no more than 50% of the Maximum plan rate
- data – runs at 100% of Maximum plan rate
- Default Host Policy Map will use the hp2 Host Policy Map
All Host Policy Maps or Rate Plans are initially created to use a bandwidth partition or EFC policy labeled as the Base Root EFC which is created with the same bandwidth as the Internet link rate. Once created the Host Policy Maps can be reassigned to the Access Point through which a host passes when the host is allocated its policy.
The Users or Hosts of the Host Policy Maps or Rate Plans are linked to the Access Points, each of which can handle a specific Bandwidth Allocation which must be shared between the attached users/hosts. In this case a host will be allocated a policy and also the root efc associated with the Access Point. The allocation of policy and root efc is normally done by a script that links the STM to a backend database. When a user/host policy and Access Point are not known, the default Host Policy Map is used, which uses the base root efc.
For this example we will begin as though the STM does not have a configuration at all, in which case we must build ALL required elements, which means we also need to create the classification rules for data, p2p and streaming and use the IP ANY ANY ACL.
We will need a layer2 ACL for filtering all layer 2 traffic, so we create this first. To do this we need to be in Expert Mode.

This is because the attribute that needs to be configured – EtherType – is only visible in this mode.
With the ACL name created we can now add an ACL Entry for layer2; this requires the "Special Case" value of ZERO set in the EtherType field.
The next step is to create all of the required Egress Flow Classes, which are:
- data – this bandwidth partition (EFC) will be used for ALL traffic other than p2p and streaming
- p2p – this bandwidth partition will be used for peer to peer traffic
- streaming – this bandwidth partition will be used for streaming traffic
- base-root-efc – this bandwidth partition will be used by the default Rate Plan (Host Policy Map)
- ap1 – this bandwidth partition will be used by ALL hosts accessing through Access Point 1
- ap2 – this bandwidth partition will be used by ALL hosts accessing through Access Point 2
- ap3 – this bandwidth partition will be used by ALL hosts accessing through Access Point 3
- layer2 – this is the bandwidth partition used by ALL layer 2 packets without IP headers
This step is repeated for ALL Egress Flow Classes.
With the Egress Flow Classes (Bandwidth Partition names) created we can build an Egress Policy Map containing the policies and therefore bandwidth partitions as follows.
- data – this is a Best Effort partition with 100Mbps up and down – set to the internet interface limit
- p2p – this is a Best Effort partition set to 10% of the internet limit – 10Mbps up and down
- streaming – this is a Best Effort partition set to 50% of the internet limit – 50Mbps up and down
- base-root-efc – this is a Best Effort partition with 100Mbps up and down – set to the internet limit
- ap1 – this is a Best Effort partition with 20Mbps up and down
- ap2 – this is a Best Effort partition with 30Mbps up and down
- ap3 – this is a Best Effort partition with 100Mbps up and down
- layer2 – this is a Guaranteed partition with 10Mbps up and down
Create the Egress Policy Map.
Then add the policies named above with the defined rates; each should have the Host Equalization checkbox set, except layer2, which should have the Assured checkbox set instead.
The following sequence would be repeated for ALL of the named partitions.
The primary attributes to set are:
- Name
- Downstream Rate
- Egress Flow Class
- Host Equalization
- Upstream Rate
These are set for ALL of the policies to the defined values above except the layer2 entry which must set the Assured checkbox and unset the Host Equalization checkbox.
Next we must create the three Host Policy Maps: -
Note:- ALL Host Policy Maps must be created before any associated Ingress Flow Classes or Ingress Policy Map policy entries.
- hp1 – with a max up rate of 1.5Mbps and down rate of 3Mbps using the default root EFC and includes three policies: -
- data – max up rate of 1.5Mbps and down rate of 3Mbps
- p2p – 10% of plan maximums i.e. 150Kbps up rate and 300Kbps down rate
- streaming – 50% of plan maximums i.e. 750Kbps up rate and 1.5Mbps down rate
- hp2 – with a max up rate of 2.5Mbps and down rate of 5Mbps using the default root EFC and includes three policies: -
- data – max up rate of 2.5Mbps and down rate of 5Mbps
- p2p – 10% of plan maximums i.e. 250Kbps up rate and 500Kbps down rate
- streaming – 50% of plan maximums i.e. 1.25Mbps up rate and 2.5Mbps down rate
- hp3 – with a max up rate of 5Mbps and down rate of 10Mbps using the default root EFC and includes three policies: -
- data – max up rate of 5Mbps and down rate of 10Mbps
- p2p – 10% of plan maximums i.e. 500Kbps up rate and 1Mbps down rate
- streaming – 50% of plan maximums i.e. 2.5Mbps up rate and 5Mbps down rate
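The per-policy rates in each plan are straightforward percentages of the plan maximums (data 100%, p2p 10%, streaming 50%). A small helper makes the arithmetic explicit; rates are in Kbps and the plan names are those of this example:

```python
def plan_policies(up_kbps, down_kbps):
    """Derive the data/p2p/streaming policy rates from a plan's maximums:
    data runs at 100%, p2p is capped at 10%, streaming at 50%."""
    return {
        "data":      (up_kbps,             down_kbps),
        "p2p":       (up_kbps * 10 // 100, down_kbps * 10 // 100),
        "streaming": (up_kbps * 50 // 100, down_kbps * 50 // 100),
    }

plans = {"hp1": plan_policies(1500, 3000),
         "hp2": plan_policies(2500, 5000),
         "hp3": plan_policies(5000, 10000)}
print(plans["hp1"]["p2p"])  # -> (150, 300), i.e. 150Kbps up / 300Kbps down
```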

This creates the Host Policy Map name and sets the maximum rates a user or host is allowed to use, the required fields being: -
- Name
- Downstream Rate
- Root EFC
- Upstream Rate
With the policy map created the individual policies can be added.
This creates a new policy in the “default” Host Policy map for the data partition, the essential fields here are: -
- Name
- Downstream Rate
- Egress Flow Class
- Upstream Rate
This will be repeated for the p2p and streaming policies to complete the first Host Policy Map or Rate Plan; then the whole process is repeated for each of the other two Host Policy Maps listed above.
With the Egress requirements set we now need to create the Ingress Flow Classes or Classification Filters, for this we need four IFCs as follows: -
- data – uses the inbuilt IP ANY ANY ACL and data EFC
- p2p – uses the inbuilt ACL, p2p EFC and a Required Group of p2p
- streaming – uses the inbuilt ACL, streaming EFC and a Required Group of streaming
- layer2 – uses the layer2 ACL we created and the layer2 EFC
Notice when using the inbuilt IP ANY ANY ACL the ACL field in the New Ingress Flow Class form is left blank which indicates to the configuration logic that the inbuilt ACL is to be used.
The above is repeated for each IFC as specified. For the streaming and p2p IFCs we use the inbuilt application group names, which have already been linked to the applications to which they apply. To create these we again use the default ACL as well as the named EFC we created, leaving the final piece: scroll to the bottom of the form and set the name in the "Required Groups" field – streaming for the first IFC and p2p for the second. The layer2 entry will need to select the layer2 ACL.
Moving on with the configuration, we now build the Ingress Policy Map and the Policy entries. Here we need to remember that the Classification Rules (IFCs) are evaluated in sequence number order, starting with the lowest sequence number and working to the highest, making the policy requirements as follows: -
- data – this handles ALL IP traffic and needs to have the highest sequence number of the IP traffic entries, at 10000
- p2p – this handles ALL traffic classified as p2p and should have a sequence of 1000
- streaming – this handles ALL traffic classified as streaming and should have a sequence of 2000
- layer2 – this handles ALL traffic with NO IP Header and should be the highest entry with a sequence of 11000
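Because IFCs are evaluated in ascending sequence order with the first match winning, the ordering above can be modelled as a simple first-match lookup (the match predicates here are stand-ins for the real ACL and group tests):

```python
def classify(flow, ifcs):
    """Return the EFC of the first IFC, in sequence-number order, whose
    match predicate accepts the flow; non-matching flows fall through."""
    for seq, name, matches, efc in sorted(ifcs):
        if matches(flow):
            return efc
    return None

# (sequence, name, match predicate, target EFC) for the example's four IFCs
ifcs = [
    (10000, "data",      lambda f: f.get("ip", False),            "data"),
    (1000,  "p2p",       lambda f: f.get("group") == "p2p",       "p2p"),
    (2000,  "streaming", lambda f: f.get("group") == "streaming", "streaming"),
    (11000, "layer2",    lambda f: not f.get("ip", False),        "layer2"),
]

print(classify({"ip": True, "group": "p2p"}, ifcs))  # -> p2p (seq 1000 wins over 10000)
print(classify({"ip": True}, ifcs))                  # -> data
print(classify({"ip": False}, ifcs))                 # -> layer2
```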
And now to add an Ingress Policy.
The only entries required in an Ingress Policy are:
- Name
- Ingress Flow Class
- Sequence
Now that we have the basics of the configuration done we must configure the STM to run in Host Policy
Map mode, this is done by “Modifying” the running level configuration: -
In the Modify window you will need to scroll to the bottom of the list to reveal the "Use Host Policies" option. Check this; once set, the system will run in Host Policy Mode only, which means you also need to set the Default Host Policy field at the top of the window to the "hp2" Host Policy Map we created.
Remember that once you are in this mode, to revert to normal Egress Policy Map operation you will need to uncheck the "Use Host Policies" checkbox and remove the Default Host Policy, then Modify, then save the configuration and finally reload the STM to remove the special elements created to track Host Policies. Failure to do this will cause rate control to be less than perfect.
The final step to complete this example – from scratch – configuration is to configure the “Bump In The Wire” (BITW) pair of interfaces and add the Ingress and Egress Policy Maps.
A prerequisite here is that you know the System Interface that will be the External Interface – the one connected to the Internet – and the one that will be the Internal Interface – the one pointing to the Users.
You can use the "Flash LED" function for each System Interface to determine its location in the server and to decide which interface to use for each direction. The Flash LED function flashes the Link LED of the selected interface.
For this example System Interface stm2 will be the External Interface and stm3 will be the Internal Interface.
It should also be noted that if there is more than one BITW pair to be configured, it is necessary to create ALL interfaces in the "Disabled" state and then enable them one after the other, as this is a requirement of the underlying software. If at a future date a new BITW pair is added, the requirement is to disable the existing interfaces, save and reload, then add the new interfaces and then enable ALL interfaces.
The required fields to be filled are: -
- Name – note that the name needs to be the same as the System Interface to be used
- Type – will default to Ethernet
- Direction (requested) – set to External for stm2 and Internal for stm3
- Egress Policy Map – epm which is the one we created
- Ingress Policy Map – ipm again the one we created
- Link Speed – 1g or 10g depending on the type of NIC
- Rate – the actual control rate which we defined earlier as 100Mbps
- State – Disabled until ALL interfaces have been created
- System Interface – the same as the Name
For this example the previous step would be repeated for Interface stm3; when creating the stm3 interface you MUST set its "peer" interface to "stm2". With each interface defined, each Interface can then be "Modified" to change the State from Disabled to Enabled.
Once both Interfaces are enabled the system becomes fully functional and traffic will begin to pass, all controlled by the default Host Policy Map.
Once the above configuration is running, every host added to the Hosts collection will be configured to use the default policy and be controlled according to that Host Policy Map's policies.
So how would the hosts be allocated to their actual Access Points and be given their actual Host Policy Maps?
This is normally done by a self-contained script designed to reconfigure the STM automatically as hosts become active. Typically the script creates a username based on RADIUS, DNS or Active Directory and then links the user to the host by modifying the host; at the same time the actual policy is configured on the host along with the actual Access Point to which the user is connected, which is set as the Root EFC value.
It is worth noting that the Hosts collection will show the Internal Hosts, while FIBs -> fib0 -> Hosts will show ALL hosts, Internal and External; when setting Host Policy Maps we are setting the "Internal" hosts.
For internal hosts therefore the automated script or external configuration manager must set the following:
- Policy – sets the actual Rate Plan to use, hp1 (3Mbps), hp2 (5Mbps) or hp3 (10Mbps)
- Root EFC – which in this example would be ap1, ap2 or ap3
- User – the username for the actual user
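A provisioning script of the kind described above might look like the sketch below. The REST path and attribute spellings ("policy", "root_efc", "user") are assumptions for illustration – consult the STM REST documentation; the essential point is that the script sets the Rate Plan, the Access Point partition and the username on each internal host:

```python
import json
import urllib.request

def host_update(policy, root_efc, user):
    """Attributes to set on an internal host: the Rate Plan (policy),
    the Access Point partition (root_efc) and the resolved username.
    Attribute names are ASSUMED for illustration."""
    return {"policy": policy, "root_efc": root_efc, "user": user}

def provision(base_url, host_ip, update):
    """PUT the update against the host object (the path is an assumption)."""
    req = urllib.request.Request(
        f"{base_url}/fibs/fib0/hosts/{host_ip}",
        data=json.dumps(update).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    return urllib.request.urlopen(req)

# A user on Access Point 2 who purchased the 10Mbps plan:
print(host_update("hp3", "ap2", "alice"))
```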