Chapter 1. About Saisei Traffic Manager
This chapter contains an overview of Saisei Traffic Manager and how it works.
What is Saisei Traffic Manager?
The Saisei Traffic Manager (STM) is a real-time traffic monitor and controller for Internet
traffic. It collects real-time, fine-grained statistics about all traffic flowing on a link. At the
same time it controls traffic using powerful, flexible, user-defined policies.
STM is a software product that is distributed either as a virtual machine image to run on
a hypervisor, or packaged on a hardware system suitable for the target network speed (1
Gb/s or 10 Gb/s). It is also available as an RPM file for CentOS or Ubuntu.
STM can be accessed via a graphical user interface (GUI), a command line interface (CLI) or
a REST API.
STM is incorporated into a network as a "bump in the wire" (BITW). All network traffic
that is being managed enters on an ingress interface and exits on its corresponding egress
interface.
STM can be configured to run in monitor mode or control mode.
- In monitor mode, STM provides real-time visibility into the network, but does not apply
any control to traffic.
- In control mode, STM monitors the network, but it also manages flows by applying the
configured control policies.
How Saisei Traffic Manager works
STM reads the data packet headers of incoming network traffic and assigns each packet to a
flow. The flow is then managed according to flow control rules that are defined during
configuration.
STM operates in the manner shown in the figures below:
How Saisei Traffic Manager works
How STM operates is dictated by configuration settings as illustrated in the image below.
Configuration settings define how Saisei Traffic Manager operates
Chapter 2. Flow Concepts
The concept of network traffic 'flow' is central to understanding the operation of STM.
The first thing that the Saisei Traffic Manager does when it receives a packet is to associate
it with a flow. A flow is a stream of packets corresponding to the same TCP or UDP session,
identified by their IP addresses and port numbers.
A flow corresponds to traffic in just a single direction, such as from client to server. A TCP
session has one flow in each direction: from client to server and from server to client. STM
understands and makes use of the relationship between these two flows, but manages them
separately.
If the protocol in the data packet header is TCP or UDP, STM identifies unique flows based
on the ’five-tuple’:
- Source IP address
- Destination IP address
- Source port
- Destination port
- Layer 4 protocol
If the protocol used for the flow is neither TCP nor UDP, flow identification is limited to
the 'three-tuple':
- Source IP address
- Destination IP address
- Layer 4 protocol
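The tuple rules above can be sketched in Python. This is an illustrative model, not STM code; the field names are assumptions.

```python
# Sketch: deriving a flow key from parsed packet-header fields, following the
# rules described above. TCP (protocol 6) and UDP (17) flows are keyed on the
# full five-tuple; any other layer-4 protocol is keyed on the three-tuple only.

def flow_key(src_ip, dst_ip, protocol, src_port=None, dst_port=None):
    """Return the tuple STM-style flow identification would use."""
    if protocol in (6, 17):  # TCP or UDP: full five-tuple
        return (src_ip, dst_ip, src_port, dst_port, protocol)
    return (src_ip, dst_ip, protocol)  # e.g. ICMP: three-tuple only

# A TCP session yields one flow per direction:
fwd = flow_key("10.1.2.3", "198.51.100.7", 6, 51000, 443)
rev = flow_key("198.51.100.7", "10.1.2.3", 6, 443, 51000)
assert fwd != rev  # distinct flows, one per direction
```

Note that the reversed key models the text's point that a TCP session produces two distinct, separately managed flows.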
When STM receives the first packet for a new flow, it classifies the flow to determine the
way the flow will be handled. This classification is repeated frequently during the life of a
flow.
Flows can be classified according to several criteria, including:
- The application it is serving (for example, a specific website or a protocol such as VoIP,
Skype, or BitTorrent)
- The external geographic location it is serving
- The internal and external hosts it is connecting
- The internal user it is serving (if an address-to-user database such as Microsoft Active Directory or OpenLDAP is available)
- Groups (for example, a group could consist of all countries where a company has business partners, or all applications whose network usage is to be tightly controlled)
- Behavioral characteristics, such as number of packets and duration
STM manages every flow in real-time, constantly monitoring each flow to make sure
that it stays within its allocated bandwidth. This eliminates the problems that occur with traditional IP routers, where the algorithms might randomly drop packets to manage
congestion, causing flows to time out, reset, or stall.
STM maintains detailed state information for the flows it services. As each packet is
received, it is associated with a flow. The state of the flow (for example, current bandwidth
in use and the number of bytes, packets and any anomalous events) is updated as each
packet is received.
Flow classification allows STM to manage flows based on higher-level characteristics,
such as behavior (for example, flow duration) or type of application (for example, Skype
or BitTorrent). STM can detect whether a flow represents voice traffic, video traffic, or
bulk data transfers. This allows STM to enable different levels of service for different
applications and users.
Management of TCP Flows
When the allotted bandwidth for a flow is approached, STM manages each flow according to the policy that applies to it. If possible, it delays packets as an
indication to the hosts that transmission should slow down. If packet delay is not configured or proves ineffective, packets are selectively dropped. The TCP
standard ensures that hosts react correctly to this, and also that dropped packets are
retransmitted, ensuring the integrity of the data.
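The delay-then-drop behavior described above can be sketched as a per-packet decision. This is hypothetical logic illustrating the described policy, not STM source; the parameter names are assumptions.

```python
# Sketch: when a TCP flow approaches its allocated bandwidth, first try
# delaying the packet as back-pressure; drop only if the needed delay exceeds
# the configured maximum (or delay is not configured at all).

def shape_packet(current_rate, allocated_rate, needed_delay_ms, max_delay_ms):
    """Return 'forward', 'delay', or 'drop' for one packet of a TCP flow."""
    if current_rate <= allocated_rate:
        return "forward"                      # flow within its allocation
    if max_delay_ms and needed_delay_ms <= max_delay_ms:
        return "delay"                        # sender's TCP stack will slow down
    return "drop"                             # TCP retransmits; data integrity kept

assert shape_packet(800, 1000, 5, 20) == "forward"
assert shape_packet(1200, 1000, 5, 20) == "delay"
assert shape_packet(1200, 1000, 50, 20) == "drop"
```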
Management of Non-TCP Flows
Many application protocols that use UDP rather than TCP have their own
congestion management mechanisms that work in ways similar to TCP. Some kinds
of real-time traffic (such as VoIP and video) react badly to packet loss in a very
noticeable way, such as voice or image breakup. STM can be configured to protect
such traffic from packet loss.
Chapter 3. Saisei Traffic Manager Components
STM’s policies and attributes must be configured before STM can perform monitoring or
control.
The components needed for a basic STM configuration are:
- Access control list (ACL) and ACL entries (ACE)
- Ingress flow class (IFC)
- Ingress policy map (IPM) and ingress policies
- Egress flow class (EFC)
- Egress policy map (EPM) and egress policies
Overview of Saisei Traffic Manager Components
The configuration components of STM select other components to establish the
relationships that will be used to create the rule set used for flow control.
The diagram below shows the configuration components of STM and their relationships.
Solid lines with arrows indicate that a component is specifying another component during
configuration.
- In the diagram, there are two interfaces, Eth1 and Eth2. Each interface selects two policy
maps; an egress policy map (EPM) and an ingress policy map (IPM)
- Each policy map contains policies. Egress policies select an egress flow class (EFC) and
ingress policies select an ingress flow class (IFC)
- The IFC selects an EFC and an access control list (ACL)
- As the same EFC is selected by the egress policy and by the IFC, a link is created between the ingress policy and the egress policy. These two linked policies are then
used by STM to create the rule set for implementing flow control
Relationships between configuration components of Saisei Traffic Manager
Interfaces
Interfaces are STM's connection to the rest of the network.
STM defines an interface as a place where packets can be received and transmitted. There are two kinds of interface: physical and tunnel.
- Physical interfaces correspond to physical devices. Their type is always 'ethernet'.
- Tunnel interfaces correspond to protocols such as VLANs and MPLS, which run over a physical interface.
Every STM system has at least two physical Ethernet interfaces. Additional interfaces (for
example, VLANS) may be defined that make use of the physical Ethernet interfaces.
STM is usually operated as a “bump in the wire” (BITW), meaning that two interfaces pass
traffic between each other without making a routing decision. Two interfaces paired in this
way are called “peer interfaces”.
Packets – and hence flows – are received on a particular interface. This becomes the ingress
interface for the flow. An egress interface is selected, which is normally the peer of the
ingress interface. Depending upon the flow direction, the interface will act as an ingress or
egress interface for a particular flow, as depicted below.
Ingress and egress flows on an interface
An interface specifies a rate, which is the maximum total bandwidth that will be used
for all flows that use the interface as an egress interface. For a physical interface this rate
will normally be the physical bandwidth (either 10G or 1G), though a lower value can be
configured.
Access Control Lists
An ACL is used to match the five-tuple addressing information in a flow. An ACL contains
one or more access control list entries (ACE). Each ACE matches a specific set of addressing
information, taken from the five-tuple in the packet header.
The configuration attributes of an ACL and its ACEs are the same in the CLI, GUI, and
REST API. All screenshots are from the GUI.
ACL in STM GUI
Multiple ingress flow classes (IFCs) can select the same ACL. The rules in the ACL (as
defined in the ACEs) and the classification information in the IFC are used to match the
packet with a flow class.
The following screen capture of the STM GUI shows an ACE rule called 10.1.2.10-24.
ACE in STM GUI
An ACE has the attributes listed below:
Name:
The name of the ACE.
Destination port:
The port or port range that the flow's destination port must match in this ACE. A
port is a number in the range 0 to 65535. A range is written, for example, “20-21”. If
no destination port is specified, any port will match the ACE.
Destination subnet:
The subnet that must include the flow's destination address. It is written in the form of
an IPv4 or IPv6 address, followed by a prefix length up to 32 for IPv4, or 128 for
IPv6; for example: “126.96.36.199/16”, or “fe00:1234:5678::/48”. If no destination address is
specified, the default setting "any" will be used, meaning any address will match the
ACE.
DSCP:
The Differentiated Services Code Point (DSCP) value in the IP header that must
match the DSCP value of the packets in the flow.
Ethertype:
The Layer 2 ethertype that is required to match this ACE. If it is specified as 0, any
ethertype will match. By default, the ethertype must be either 0x0800 (IPv4) or
0x86DD (IPv6).
Protocol:
The layer 4 protocol required to match this ACE. This can be “tcp”, “udp”, “icmp”,
“any”, or a protocol number in the range 0-255. If 'ip' or blank, any IP protocol will
match the entry.
Reply only:
If this attribute is selected, then the ACE will only match a flow which is being
created automatically in response to a flow in the opposite direction. This provides
a way of blocking new flows in one direction, while allowing flows that are in
response to flows initiated in the other direction.
Source port:
The port or port range that the flow's source port must match in this ACE. A port
is a number in the range 0 to 65535. A range is written, for example, “20-21”. If no
source port is specified, any port will match the ACE.
Source subnet:
The subnet that must include the flow's source address. It is written in the form of an
IPv4 or IPv6 address, followed by a prefix length up to 32 for IPv4, or 128 for IPv6;
for example: “188.8.131.52/16”, or “fe00:1234:5678::/48”. If no source address is specified,
the default setting "any" will be used, meaning any address will match the ACE.
Symmetric:
If this attribute is selected, the ACE will also match a flow in which the source and
destination are reversed; that is, if the entry says the source subnet and port are
184.108.40.206/24 port 80, it will match a flow in which these are the destination subnet and
port.
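The matching semantics of these attributes can be sketched in Python. This is an illustrative model under assumed field names (`source_subnet`, `symmetric`, and so on), not STM's actual configuration schema.

```python
import ipaddress

# Sketch of ACE matching: blank (None) fields match anything, and the
# 'symmetric' option also matches the flow with source and destination swapped.

def ace_matches(ace, flow):
    def port_ok(spec, port):
        if spec is None:
            return True
        lo, _, hi = str(spec).partition("-")   # single port or "20-21" range
        return int(lo) <= port <= int(hi or lo)

    def subnet_ok(spec, addr):
        return spec is None or ipaddress.ip_address(addr) in ipaddress.ip_network(spec)

    def direct(src, dst, sport, dport):
        return (subnet_ok(ace.get("source_subnet"), src)
                and subnet_ok(ace.get("destination_subnet"), dst)
                and port_ok(ace.get("source_port"), sport)
                and port_ok(ace.get("destination_port"), dport))

    src, dst, sport, dport = flow
    if direct(src, dst, sport, dport):
        return True
    return bool(ace.get("symmetric")) and direct(dst, src, dport, sport)

ace = {"source_subnet": "10.1.2.0/24", "source_port": "80", "symmetric": True}
assert ace_matches(ace, ("10.1.2.5", "203.0.113.9", 80, 51000))   # direct match
assert ace_matches(ace, ("203.0.113.9", "10.1.2.5", 51000, 80))   # reversed match
```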
Ingress Flow Classes
IFCs are the heart of the classification method of STM. An IFC specifies all the
characteristics that a flow must have in order to be selected as belonging to a class. These
characteristics are defined in the ACL selected in the IFC and in the IFC itself.
The configuration attributes of an IFC are the same in the CLI, GUI, and REST API. All
screenshots are from the GUI.
Ingress Flow Class in STM GUI
When creating an IFC, you must include an ACL. If the packet header information does not
match the ACL, the IFC will not be selected.
If an EFC is not specified, matching flows will be dropped.
An IFC may select flows that belong to it in many different ways, using the following
attributes, which are specified at its creation.
ACL:
An ACL allows flows to be selected based on the basic IP five-tuple of source and
destination IP address, layer 4 protocol, and layer 4 source and destination ports, if
applicable. For more information, see Access Control Lists (ACL).
Application:
If an application is specified, only flows that have been identified as belonging to
that application will be considered. For more information, see Applications.
Geolocation:
If a geolocation is specified, only flows associated with that geolocation will be
selected. For more information, see Geolocations.
Group memberships:
Group memberships can represent collections of applications, geolocations, users,
or other groups. Any flows associated with these collections will be required or
excluded, based on group membership. Group memberships can influence IFC
selection in two ways:
- Required groups: this attribute can specify one or more groups to which the flow
must belong in order for this IFC to be selected.
- Excluded groups: this attribute can specify one or more groups to which the flow
must not belong in order for this IFC to be selected.
Traffic characteristics:
Traffic characteristics are specified as a range between a minimum and a maximum.
Each of these traffic characteristic attributes can be specified or left blank. By
default, each of them is blank, meaning that it does not apply.
A flow can be selected based on any of the following traffic characteristics:
- Minimum or maximum duration of a flow
- Minimum or maximum number of bytes for a flow
- Minimum or maximum number of packets for a flow
- Minimum or maximum rate over flow lifetime
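The blank-means-open range semantics can be sketched as follows. The attribute names in the dictionaries are illustrative, not STM's.

```python
# Sketch: a flow matches an IFC's traffic characteristics when each measured
# value falls within the configured (min, max) range; None means "no constraint".

def ifc_traffic_match(flow, criteria):
    """flow: measured values; criteria: {name: (min, max)} with None = open end."""
    for name, (lo, hi) in criteria.items():
        value = flow[name]
        if lo is not None and value < lo:
            return False
        if hi is not None and value > hi:
            return False
    return True

flow = {"duration_s": 120, "bytes": 5_000_000, "packets": 4000}
assert ifc_traffic_match(flow, {"duration_s": (60, None)})        # long-lived flows
assert not ifc_traffic_match(flow, {"bytes": (None, 1_000_000)})  # too many bytes
```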
Ingress Policy Maps and Ingress Policies
Each interface specifies an ingress policy map (IPM) that contains the policies to be applied
to a flow based on the matching IFC. The same IPM can be attached to multiple interfaces,
or if different policies are required (for example, for different VLANs), separate IPMs can
be used.
The configuration attributes of an IPM are the same in the CLI, GUI, and REST API. All
screenshots are from the GUI.
IPM in STM GUI
An IPM contains one or more policies, each of which is associated with a single IFC. An
IPM also specifies a sequence number. If a flow matches more than one IFC (which will
often be the case), the ingress policy with the lowest sequence number is selected. In effect,
STM scans the policies beginning with the lowest sequence number and works toward the
highest sequence number.
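The lowest-sequence-number rule can be sketched in a few lines. The data layout is illustrative, not STM's internal representation.

```python
# Sketch: among the policies whose IFC matches the flow, the policy with the
# lowest sequence number wins.

def select_policy(ipm, matching_ifcs):
    """ipm: list of (sequence, ifc_name, policy). Return winning policy or None."""
    candidates = [(seq, policy) for seq, ifc, policy in ipm if ifc in matching_ifcs]
    if not candidates:
        return None
    return min(candidates)[1]  # lowest sequence number

ipm = [(30, "bulk", "limit-1M"), (10, "voice", "assured"), (20, "video", "protect")]
assert select_policy(ipm, {"video", "bulk"}) == "protect"   # seq 20 beats seq 30
assert select_policy(ipm, {"bulk"}) == "limit-1M"
assert select_policy(ipm, set()) is None
```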
Different policies may be selected over the duration of a flow, as its characteristics may
change and cause a different classification method to be chosen.
The figure below shows the ingress policy ‘base’. The name of an ingress policy also
indicates the name of the IFC that contains it.
Ingress Policy in STM GUI
The following attributes are defined in the ingress policy associated with the IFC:
Rate:
This attribute sets a maximum rate that will be allowed for the flow, even if
conditions at the egress interface would permit a higher value.
Assured:
If this attribute is set, matching flows will only be passed if the specified bandwidth
is available at the egress interface. If not, the flow will be dropped. This is useful
for managing flows such as streaming video, which are of no use if any data is
lost.
Drop:
If selected, packets for this flow will not be forwarded.
Policy Routing Address:
If set, it diverts the flow to a different routing address than would otherwise be
used.
Policy Routing Interface:
If set, it diverts the flow to a different interface than would otherwise be used.
Maximum Delay:
If this is specified, packets will be delayed up to the given time interval. This can be
useful for managing traffic that occasionally exceeds its rate limit.
Fixed Delay:
If this is specified, all packets in the flow will be delayed by the given amount. This
can be useful for testing or to deliberately impose delay for certain traffic.
There are further attributes of specialized use, which are listed in the CLI documentation.
Egress Flow Classes
An egress flow class creates a link between one or more ingress flow classes, and the egress
policy associated with it for a particular interface. An egress flow class must be created
before it can be associated with an ingress flow class or have an egress policy configured
for it.
The configuration attributes of an EFC are the same in the CLI, GUI, and REST API. All
screenshots are from the GUI.
EFC in STM GUI
For ease of management, the EFC and the egress policy should have the same name, for
example, ‘mobile’. The 'EFC-egress policy' pair can be selected by multiple IFCs. An EFC
must be applied to an interface in order to allocate bandwidth for an egress flow on that
interface. If no EFCs are selected by an IFC, the IFC performs an automatic drop of ALL
packets in any flows associated with the IFC.
Egress Policy Maps and Egress Policies
Each interface specifies an EPM that contains the policies to be applied to a flow based on
the matching EFC, as specified in the selected IFC. The same EPM can be attached to more
than one interface, or if different policies are required (for example, for different VLANs),
separate EPMs can be used.
The configuration attributes of an egress policy map and an egress policy are the same
in the CLI, GUI, and REST API. All screenshots are from the GUI.
EPM in STM GUI
An EPM contains one or more egress policies, each of which is associated with a single
EFC.
Egress policy in STM GUI
An egress policy specifies how a flow is to be handled during its lifetime. Egress policies
specify aspects of flow handling that relate to collections of flows of the same kind, rather
than individual flows. In particular, they allow the bandwidth allocated to a collection of
flows to be managed for the combination of all the flows.
Normally, an EPM will contain a policy for every defined EFC. If a flow specifies an EFC
that does not exist in the EPM for its egress interface, it will be dropped.
The bandwidth attributes are:
Rate:
Sets the maximum rate in Kbit/sec (but also see Assured) that the collection of flows in
the EFC will be allowed to achieve.
Assured:
If the assured attribute is selected for a policy, then the set rate becomes the assured
minimum bandwidth for the collection of flows, even if competing traffic would
otherwise reduce the rate available.
Host Equalization:
Host equalization can be used to distribute bandwidth equally among all the
hosts within an egress flow class. Host equalization prevents invasive traffic users
with a large number of flows from consuming a disproportionate amount of the
bandwidth and placing users with a small number of flows at a disadvantage.
Other attributes are used to manage hierarchies of EFCs. For more information, see the CLI
documentation.
Chapter 4. Additional Concepts
This section provides more information about applications, geolocations, groups,
configuration, and dynamic bandwidth management.
Applications
An application is, generally, a service that is provided over the network.
Examples of applications include YouTube, Skype, Facebook, BitTorrent and specific web
services identified by their URL, such as www.bbc.co.uk. Many policies are intended to
apply to specific applications, so identifying the application in use is of great importance in
the configuration of STM.
Applications are represented in STM by application objects. These can be created through
the CLI, GUI, or REST API. They can also be created automatically.
Applications are recognized by STM in several ways. When the first packet of a flow is
received, the TCP/UDP port numbers are matched against well-known ports from the
Internet Assigned Numbers Authority (IANA) database. Heuristic identification is
performed based on prior DNS requests from the hosts and on the hosts' past behavior.
Once the flow is established, the payload of TCP or UDP messages is examined, using a
deep packet inspection (DPI) engine. When application recognition is enabled, packets
of a flow are examined until a positive identification of the application has been made.
Subsequent packets are not examined, saving computing resources.
Several thousand applications are built into the system and can be recognized
automatically. Some of these are distinct protocols, while others are identified by
examination of the URI in the HTTP headers. If a website does not correspond to a known
application, an application object is automatically created, based on the host HTTP header.
Using the ‘server’ attribute, it is possible to create applications corresponding to a pattern
that matches one or more URLs, for example, '*bbc.*' to match any URL containing an
element that contains 'bbc.’.
An application object keeps statistics that track its usage, for example the current traffic
rates and the total traffic for the application.
Configuration and Partitions
Although only one configuration can be running at a time, several configurations can be
stored and activated as required. All manageable elements of the software are contained
within a configuration.
Configurations are held in permanent (disk) storage. When the software is started, any
configurations held on disk are loaded and can be edited.
By default, when STM is simply started and configured, the running configuration is called
"default_config". Whenever the configuration is saved, the corresponding file is updated.
It is not possible to switch to a different configuration in a running system. The only way
to switch is to specify the configuration to be used at the next restart, and then to restart
STM.
In addition to named configurations, STM supports the notion of a current partition and
an alternate partition. This is primarily intended for use when upgrading software. The
new software is installed into the alternate partition, the configuration is saved there, and then the system is restarted from the alternate partition, by setting the 'boot_partition'
attribute to 'alternate' in the CLI or by selecting System > Reload System > Boot Partition
> Alternate in the STM software.
Boot partition in STM GUI
When the system restarts, the former alternate partition becomes current and the former
current partition becomes alternate. If the upgrade has been successful, the now alternate
partition can also be upgraded. But if there is a problem, it's possible to revert to the former
current partition.
Dynamic Bandwidth Management
The bandwidth allotment for each flow is not static. The amount of bandwidth available
will vary as network flows are started and stopped.
STM constantly monitors traffic behavior and network usage so as to reallocate available
bandwidth in real-time. This means that when more bandwidth is available, every flow on
the network gets an increased allotment in line with the policy set for the flow.
This dynamic allotment of bandwidth combined with the constant management of each
flow in accordance with policy rules has many benefits:
- STM maintains the bandwidth used at an egress interface at the specified rate, by
controlling the individual flows assigned to the EFC.
- Delay is minimized so that the time delay from ingress to egress for flows that are
allowed to proceed without dropped packets is less than 10 microseconds.
- Uncontrolled packet loss is eliminated.
- Voice, video and other critical applications can be allocated a guaranteed bandwidth.
Geolocations
A geolocation is a geographic location that is associated with one or more IP address ranges.
In normal use, a geolocation object is created for each country of interest. The STM
distribution contains a geolocation database which contains a current mapping of address
ranges to countries. Currently, it identifies about 130,000 distinct address ranges.
The database can be installed by setting the configuration attribute 'load_geolocations' to
'true’ in the CLI or by selecting System Load > geodata in the STM software. It typically
takes 30-60 seconds to load.
When geolocations have been installed, every non-internal host is associated with a
country. This can be used to set policy. For example, you can apply a different policy to
traffic originating from "friendly" countries than is applied to other countries.
A geolocation keeps information about current and historical traffic rates and flows.
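An address-range-to-country database of this kind is typically searched by binary search over ranges sorted by starting address. The sketch below illustrates the idea; the two ranges are documentation prefixes, not entries from the shipped database.

```python
import bisect
import ipaddress

RANGES = sorted([
    # (first address, last address, country code) -- illustrative entries only
    (int(ipaddress.IPv4Address("192.0.2.0")), int(ipaddress.IPv4Address("192.0.2.255")), "AA"),
    (int(ipaddress.IPv4Address("198.51.100.0")), int(ipaddress.IPv4Address("198.51.100.255")), "BB"),
])
STARTS = [first for first, _, _ in RANGES]

def country_for(ip):
    """Binary-search the sorted ranges for the one containing ip, if any."""
    n = int(ipaddress.IPv4Address(ip))
    i = bisect.bisect_right(STARTS, n) - 1
    if i >= 0 and RANGES[i][0] <= n <= RANGES[i][1]:
        return RANGES[i][2]
    return None  # address not covered by any known range

assert country_for("198.51.100.40") == "BB"
assert country_for("203.0.113.1") is None
```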
Groups
Groups allow a policy to be applied to a number of entities (applications, users, or
geolocations) without configuring each one explicitly.
Groups collect usage information (for example, number of flows and bandwidth) for the
aggregate of all entities they contain. They are also used to select the policy to apply to
flows, through the ingress flow class.
For example, a group could be defined for all users in a particular department, or all
applications corresponding to peer-to-peer protocols. A single object can be associated with
up to three groups.
Groups can also be nested, to a depth of one. For example, a group could be defined for
each department in an organization, with users assigned to those groups. Then a group
could be defined for each executive-level function, with the departments assigned a
function. This would allow policies to be assigned at either the department or function
level, and will collect usage information at both levels.
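The depth-one nesting and two-level usage accounting can be sketched as follows. The group names and rates are invented for illustration.

```python
# Sketch: usage recorded against a department group also rolls up into its
# parent (function-level) group, so both levels collect aggregate statistics.

parents = {"engineering": "operations", "support": "operations"}  # dept -> function
usage = {"engineering": 0.0, "support": 0.0, "operations": 0.0}   # Mbit/s

def record(group, mbps):
    usage[group] += mbps
    if group in parents:          # nesting is limited to a depth of one
        usage[parents[group]] += mbps

record("engineering", 40.0)
record("support", 10.0)
assert usage["operations"] == 50.0    # function-level aggregate
assert usage["engineering"] == 40.0   # department-level aggregate
```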
STM manages bandwidth separately for flows and for hosts. It can also manage bandwidth
for flows associated with each host, using host equalization (HE) and host policy maps
(HPM).
Host Equalization
The simplest form of host management is host equalization (HE). When host equalization
is enabled for an egress flow class (EFC), its bandwidth is divided equally among all hosts.
Then, for each host, the bandwidth is divided equally among the flows.
This is a powerful way to control “greedy” hosts, which try to get the maximum
bandwidth by using a very large number of flows. Suppose there are two hosts, one with just a single flow (for example, a Netflix download) and another with 100 flows (for
example, a BitTorrent download). Without HE each flow will get equal bandwidth, so the
host with 100 flows gets virtually all the available bandwidth. But with HE, the bandwidth
is divided equally between the two hosts. Each of the 100 BitTorrent flows gets just one
hundredth of the bandwidth given to the Netflix download.
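The arithmetic behind this example can be sketched directly: bandwidth is split equally per host first, then equally among each host's flows.

```python
# Sketch of host equalization: equal split per host, then per flow within a host.

def per_flow_rates(total_bw, flows_per_host):
    """flows_per_host: {host: flow_count}. Returns {host: rate of one flow}."""
    per_host = total_bw / len(flows_per_host)
    return {h: per_host / n for h, n in flows_per_host.items()}

# One Netflix-style flow vs. 100 BitTorrent flows on a 100 Mbit/s class:
rates = per_flow_rates(100.0, {"netflix-host": 1, "bittorrent-host": 100})
assert rates["netflix-host"] == 50.0    # the single flow gets the host's full half
assert rates["bittorrent-host"] == 0.5  # each of 100 flows gets 1/100 of the other half
```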
Using the Rate Multiplier
In some cases, STM may be required to share bandwidth among hosts unequally, while
still taking advantage of the benefits of host equalization. This can be done using the
rate_multiplier attribute of the host. For example, if there are three hosts, A, B and C, with
rate multipliers of 2, 1, and 0.5 respectively, then A will get double the bandwidth of B,
which will get double the bandwidth of C, regardless of the absolute amount of bandwidth
available.
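Weighted sharing with rate multipliers reduces to proportional division, as this sketch shows (host names and figures are illustrative):

```python
# Sketch: each host's share is proportional to its rate multiplier.

def weighted_shares(total_bw, multipliers):
    weight_sum = sum(multipliers.values())
    return {h: total_bw * m / weight_sum for h, m in multipliers.items()}

shares = weighted_shares(350.0, {"A": 2.0, "B": 1.0, "C": 0.5})
assert shares["A"] == 200.0   # double B's share
assert shares["B"] == 100.0   # double C's share
assert shares["C"] == 50.0
```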
Host Policy Maps
Generally, many hosts will be treated the same way. For example, there may be a few “rate
plans” with different properties, and all hosts will be assigned to one of these rate plans.
The host policy map (HPM) allows each of these plans to be defined just once, and applied
to individual hosts as appropriate.
An HPM has several attributes which control its assigned hosts:
- rate_multiplier: sets a rate multiplier, as described above
- rate_limit: sets an absolute limit to the bandwidth that will be made available to this host, even if more is available
- flow_limit: sets a limit to the number of simultaneous flows that this host is allowed to have
- flow_rate_cap: sets a limit to the maximum bandwidth available for a single flow for this host
The HPM assigned to a host is determined by the host's 'policy' attribute. There are several
ways this can be set:
- by an explicit management command, through the CLI, GUI or configuration script.
This is only practical for a small number of hosts
- by a Python script, which (for example) interrogates a provisioning database whenever
a new host is detected
- automatically when a new host is detected, using the 'initial_host_policy_map' attribute
of an ingress flow class
The third option can be used in a very simple way, to ensure that a host always has a
policy before a script or manual configuration has a chance to assign the right HPM. It can
also be used in a more elaborate way, to assign a different HPM based on the ingress flow
class (IFC) that the host matches.
For example, suppose that within an organization, desktop users are assigned to one
subnet: 10.1.0.0/16. Externally accessible servers are assigned to another: 10.2.0.0/16. IFCs
can be created that automatically assign hosts in the first group to a policy appropriate to
desktop users, and hosts in the second group to a policy appropriate to servers.
Using Host Policy Maps to Determine Flow Policy
A host's HPM assignment can also be used to influence the policy applied to flows for that
host. This is done using the 'match_host_policy_map' attribute of an ingress flow class.
For example, this could be used to allow incoming sessions only for designated servers. If
servers are assigned to the HPM 'server', while desktop users are assigned to 'desktop', an
IFC and ingress policy can be defined to block such flows unless the HPM is 'server'.
Using Host Policy Maps to Implement Per-Host Flow Classes
Sometimes it is required to manage bandwidth for different traffic types within a host. For
example, it may be required to protect voice and video, to give preferred service to some
applications, and to restrict others to a limited bandwidth regardless of what is available.
This is exactly analogous to using egress flow classes to manage different traffic classes,
and the solution is very similar.
Within a host policy map, different host policies can be defined corresponding to the
different types of traffic. A host policy is essentially the same as an egress policy, with
the same attributes. In particular, a host policy can be assigned a maximum available
bandwidth (the 'rate' attribute), or a minimum guaranteed bandwidth (the 'rate' attribute
combined with the 'assured' attribute). It can also be given a rate multiplier, to give
bandwidth preferentially to one class over another.
Traffic is associated with a particular policy based on its egress flow class. If a host has a
host policy map, and the flow's egress flow class has an entry in the policy map, then the
flow will be assigned to the host policy rather than an egress policy.
Host policies apply separately to each host, within the total bandwidth available to
that host. This figure varies continuously depending on the number of hosts and traffic
load.
As an example, the policy outlined above could be implemented with the following set of
host policies:
- video: assured, maximum bandwidth (rate) 6 Mbit/sec
- voice: assured, maximum bandwidth 250 kbit/sec
- preferred: no rate limit (rate set to something very large), rate_multiplier=4
- normal: no rate limit, rate_multiplier=1
- restricted: rate limit 2 Mbit/sec, rate_multiplier=0.5
Traffic is associated with the appropriate policy by the definition of suitable ingress flow
classes, in exactly the same way as for egress flow classes.
Setting Limits on Host Policies
Host policies have two additional attributes, compared to egress policies, which allow
limits to be set on flows associated with those policies.
- flow_limit: sets a maximum number of flows that will be permitted for each host in this
class. For example, the number of video flows could be limited to 3, ensuring that each
of them receives a minimum of 2 Mbit/sec
- flow_rate_cap: sets the maximum rate for a single flow in this class
Sharing Bandwidth Between Different Host Policy Maps
By default, a host policy map gets its bandwidth share from the total bandwidth at the
interface. For example, if an interface has a configured bandwidth of 1 Gbit/sec and has
100 active hosts, each host will receive at least 10 Mbit/sec. In practice, not all hosts will be
using their available bandwidth, and the remainder will be shared amongst those that are
able to use it.
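The redistribution described above resembles a max-min fair ("water-filling") allocation: lightly loaded hosts keep only what they can use, and the surplus is shared among the rest. The sketch below illustrates the principle; it is not STM's scheduler.

```python
# Sketch: max-min fair sharing. Hosts demanding less than an equal share are
# fully satisfied; leftover bandwidth is re-shared among the remaining hosts.

def distribute(total_bw, demands):
    """demands: {host: offered load}. Returns {host: allocated bandwidth}."""
    alloc = {}
    remaining_bw, remaining = total_bw, dict(demands)
    while remaining:
        fair = remaining_bw / len(remaining)
        satisfied = {h: d for h, d in remaining.items() if d <= fair}
        if not satisfied:
            for h in remaining:          # everyone can use a full fair share
                alloc[h] = fair
            return alloc
        for h, d in satisfied.items():   # light hosts keep just their demand
            alloc[h] = d
            remaining_bw -= d
            del remaining[h]
    return alloc

# 1 Gbit/s interface with three hosts of unequal demand (Mbit/s):
alloc = distribute(1000.0, {"h1": 100.0, "h2": 800.0, "h3": 900.0})
assert alloc["h1"] == 100.0                  # light host fully satisfied
assert alloc["h2"] == alloc["h3"] == 450.0   # heavy hosts split the remainder
```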
Sometimes, it may be required to control the way the interface bandwidth is shared
amongst different classes of hosts. For example:
- an upper limit (less than the interface bandwidth) is to be made available to the class of
hosts
- available bandwidth is to be explicitly distributed among different classes of host
This can be achieved by first creating egress policies for the interface, describing how the
bandwidth is to be partitioned. Then, the root_efc attribute of the host policy map is set to
indicate which EFC's bandwidth is to be distributed among the hosts in that policy.
For example, suppose that there are two types of host. The first, called 'sp-abc', is to be
limited collectively to 300 Mbit/sec, while the second, 'sp-def', is to be limited to 500 Mbit/
sec. The steps involved to implement this are:
- Create EFCs called efc-sp-abc and efc-sp-def
- Create egress policies for these, with 'rate' set to 300 and 500 Mbit/sec respectively
- Create a host policy map hpm-sp-abc, with root_efc set to efc-sp-abc
- Create a host policy map hpm-sp-def, with root_efc set to efc-sp-def
Now, the 'abc' hosts will all share 300 Mbit/sec, while the 'def' hosts will all share 500
Mbit/sec.
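The four configuration steps can be represented as plain data to show how the pieces link together. This is a hypothetical representation; the real objects are created through the CLI, GUI, or REST API.

```python
# Sketch: two EFCs with collective rate caps, and two host policy maps whose
# root_efc points at the EFC that supplies their hosts' shared bandwidth.

egress_policies = {
    "efc-sp-abc": {"rate_mbps": 300},   # collective cap for 'abc' hosts
    "efc-sp-def": {"rate_mbps": 500},   # collective cap for 'def' hosts
}

host_policy_maps = {
    "hpm-sp-abc": {"root_efc": "efc-sp-abc"},
    "hpm-sp-def": {"root_efc": "efc-sp-def"},
}

# Each HPM draws its hosts' shared bandwidth from its root EFC's egress policy:
for hpm, cfg in host_policy_maps.items():
    assert cfg["root_efc"] in egress_policies
```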
All of the egress policy capabilities can be used to manage sharing bandwidth in this way.
For example, if the 'abc' hosts are to receive 300 Mbit/sec regardless of any other traffic on
the interface, the corresponding policy can be set to 'assured'.