Hardware and System Verification Tools and Commands including Certified Systems list

Terminology and Acronyms

  • AR – Available Rate
  • BITW – Bump in the Wire
  • CAC – Call Admission Control
  • DPDK – Data Plane Development Kit
  • DPI – Deep Packet Inspection
  • EFC – Egress Flow Class
  • EMAP – Egress Policy Map
  • FACL – Flow Access Control List
  • FR – Fixed Rate
  • HE – Host Equalization
  • HMAP – Host Policy Map
  • IFD – Intelligent Flow Delivery
  • IFC – Ingress Flow Class
  • IMAP – Ingress Policy Map
  • P2P – Peer to Peer
  • QoS – Quality of Service
  • STM – Saisei Traffic Manager
  • TR – Total Rate
  • VoIP – Voice over IP
  • VM – A software-based virtual guest or virtual appliance
  • Hypervisor – a hardware abstraction layer enabling multiple VM guests to share the resources of a hardware platform, generally an x86 COTS server.

Hardware Requirement

This document lists the expectations of any hardware platform on which the NPE software package can run. At the time of writing, Saisei will only support software running on certified systems as defined in this document. Should users deploy on hardware other than a system from the certified list, this is at the user's own risk and Saisei will be unable to support that installation.

Recommended Hardware

 

Model | CPU | Memory | Storage | NIC | Bypass | Max Throughput
Dell PowerEdge R200 Series | Intel® Xeon® 2.6GHz or higher, 4 Cores | 16GB DDR4 | 500 GB SSD or higher | Intel i350 / Silicom Dual Port Copper / Silicom Dual Port Fiber | Yes, with Silicom NIC | Up to 1 Gbps
Dell PowerEdge R200 Series | Intel® Xeon® 2.6GHz or higher, 6 Cores | 16GB DDR4 | 500 GB SSD or higher | Intel i350 / Silicom Quad Port Copper / Silicom Quad Port Fiber | Yes, with Silicom NIC | Up to 2x1 Gbps
Dell PowerEdge R400 Series | Intel® Xeon® 2.5GHz or higher, 10 Cores or more | 32GB DDR4 | 1 TB SSD or higher | Intel X710 / Silicom Dual Port Copper / Silicom Dual Port Fiber | Yes, with Silicom NIC | Up to 5 Gbps
Dell PowerEdge R400 Series | Intel® Xeon® 2.3GHz or higher, 14 Cores or more | 64GB DDR4 | 1 TB SSD or higher | Intel X710 / Silicom Dual Port Copper / Silicom Dual Port Fiber | Yes, with Silicom NIC | Up to 10 Gbps
Lanner 7573 (only available through Saisei) | Intel® Atom® C2000 8 Core | 16GB DDR3 | 500 GB SSD | Lanner Quad Port Copper | Yes, built-in | Up to 2x1 Gbps

Storage Requirements

While it is recommended that a Solid State Drive (SSD) be used, it is possible to run with a 7200rpm HDD. An SSD will improve the performance of the Historical Data subsystem, especially when producing historical charts. At the current time there is NO requirement to use RAID based systems.

The drive should be GREATER than 400GB in size.

Collecting historical data requires that the SSD be a write-intensive device capable of sustaining a very high number of write cycles, such as the Samsung 863 or the Dell 800GB SSD SATA Write Intensive MLC 6Gbps 2.5in Hot-plug Drive.

An enterprise grade hard drive may also be used, although historical data reporting will not be as responsive. 
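As a quick sanity check, the kernel reports whether a drive is rotational. Assuming the system drive is sda (adjust the device name to suit your system), the following command can be used: -
  • cat /sys/block/sda/queue/rotational
An output of 0 indicates an SSD, while 1 indicates a spinning HDD.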

Supported NIC Controllers

The requirements of the DPDK interface software mean that only specific Intel based controllers are fully functional for DPDK operation. At the time of writing this document there are NO Broadcom based controllers that meet the requirements defined for successful operation in a DPDK environment. The current list of Intel based controllers is: -

Intel 8254x
Intel 82571..82576
Intel 82580
Intel 82583
Intel 82598..82599
Intel DH89xxcc
Intel ich8..ich10
Intel pch..pch2
Intel i210
Intel i211
Intel i350
Intel i354
Intel x520
Intel x540
Intel x710

Intel 82599 10Gbps should be used for high capacity requirements.
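To cross-check an installed controller against this list before installation, the PCI vendor and device IDs can be displayed. The exact output varies by system, but all of the controllers above report the Intel vendor ID 8086: -
  • lspci -nn | grep -i ethernet
Compare the controller names in the output (for example 82599, i350, X540 or X710) against the list above.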

Supported SFPs and Optics

Intel based NIC controllers usually only support Intel SFP and SFP+ modules. Please see the specific datasheet for the chosen Intel based NIC for full interoperability options, including whether non-Intel SFP(+) modules are supported.

Bypass Options

Both Internal and External bypass options are supported by the NPE software. In some cases these options are built into the base system (see the Small Model definitions above), in other cases bypass support is built into the NIC module itself, and for the most comprehensive bypass support the NPE has been certified to run with certain External Bypass units.
Internal Bypass NICs
 
PE2G4BPi35LA-SD – Quad port 1G copper bypass with i350 chip (dual segment)
 
PE210G2BPi40-T-SD – Dual port 10G copper bypass with X540 chip (single segment) 
 
PE210G2BPi9-SR-SD – Dual port 10G fiber SR bypass with i82599 chip (single segment)

External Bypass

The NPE has been tested and approved to work with Interface Masters Niagara 1G and 10G models. These systems typically offer more options for controlling the bypass function, including dynamic handshakes between both sides of the links, and require additional configuration options which are beyond the scope of this document. Please refer to the specific user manuals for configuration options.

Other manufacturers' bypass switches may indeed work with the NPE, but these have not been tested by Saisei. Typically they will require the default configuration "ethertype 0" classifier to pass any handshake messages; if they do not work with the basic-config.sh configuration file it can be assumed that the bypass is not compatible with the NPE's operational modes.

System and Software Verification

While it is expected that all users purchase supported systems as defined above, it is clearly possible for potential customers to build or purchase their own systems for deployment, either due to company purchasing policies or because there is no desire to commit to a full system until after an evaluation period during which an existing in-house system would be utilized as the base hardware platform.

Additionally, once a hardware platform has been certified for operation it is also necessary to ensure that the correct Operating System version has been loaded prior to the installation of the NPE software package.


Hardware Verification

Prior to loading the NPE software package it would be prudent to verify that the base hardware functionality conforms to the system requirements for the selected model. All verification commands require sudo access rights, so it is recommended that the user switches to the super user using the command: -
  • sudo su
 
If the initial user created during installation of the base Operating System was not entered into the sudoers file then it will be necessary to give the password to gain full access.
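If required, the user can be added to the sudo group from a root session; the username below is illustrative only: -
  • usermod -aG sudo username
After the next login the user will be able to execute sudo su as described above.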
 
Hardware checklist: -
  1. Hyper Threading
  2. Core Count
  3. CPU Model and Flags
  4. Memory
  5. Hard Drive Size
  6. NIC Cards
  7. Physical Port Determination

Hyper Threading and Core Counts

Hyper Threading is NOT allowed for stm operation.
To check for the current state of Hyper Threading and Core Counts in a bare metal system use the command: -
  • lscpu
The output of the command will be as follows or similar when Hyper Threading is turned on: -

root@FlowCommand5-1:/home/malc# lscpu

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                16
On-line CPU(s) list:   0-15
Thread(s) per core:    2
Core(s) per socket:    8
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 45
Stepping:              7
CPU MHz:               2600.000
BogoMIPS:              5200.00
Hypervisor vendor:     VMware
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              20480K
NUMA node0 CPU(s):     0-7

The core count is explicit in the output.

When Hyper Threading is enabled the "CPU(s)" field will be twice the "Core(s) per socket" field multiplied by the number of "Socket(s)", and an additional indicator is that the "Thread(s) per core" field will be 2. In the previous output, where there is 1 socket with 8 cores, the thread count is 2 and CPU(s) is 16, which indicates that Hyper Threading is enabled.
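As a quick filter showing only the relevant fields (a sketch; field names may vary slightly between lscpu versions), the following can be used: -
  • lscpu | grep -E "^CPU\(s\)|Thread\(s\) per core|Core\(s\) per socket|Socket\(s\)"
Hyper Threading is enabled when "Thread(s) per core" reports 2 and disabled when it reports 1.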

To disable Hyper Threading in a Bare Metal environment it will be necessary to reboot the system and change the BIOS option that controls the feature.
To disable Hyper Threading in a Dell system's BIOS, look for and disable the following: -
  1. Virtualization Technology - disable
  2. Logical Processor - disable
Once turned off the output will be of the form: -
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                8
On-line CPU(s) list:   0-7
Thread(s) per core:    1
Core(s) per socket:    8
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 45
Stepping:              7
CPU MHz:               2600.000
BogoMIPS:              5200.00
Hypervisor vendor:     VMware
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              20480K
NUMA node0 CPU(s):     0-7
If using ESXi there are two places to check the current state of Hyper Threading. To access these you must connect to the ESXi host using vSphere and select the ESXi host IP address in the Navigation pane; once selected you can choose the "Summary" tab, which will result in the following display: -

The second location to view the Hyper Threading status is found under the "Configuration" tab, which reveals two additional selection panes, one for Hardware and a second for Software. To verify Hyper Threading here, select the "Processors" entry in the Hardware list, which results in the following display: -

In this display you will notice a "Properties" option in the top left corner. If any installed VM is active this option will be greyed out, making it necessary to shut down ALL active and powered-on VMs. Once they are powered down the "Properties" option becomes active; select this option to change the state of the Hyper Threading setting, which results in the following display: -

Unselect the "Enabled" option and select OK. The window clearly indicates that a system restart is required to fully disable this feature.
CPU Models and Flags
Small and Large systems rely on Intel Xeon processors to reach maximum throughput; in any particular system a higher speed socket with a higher core count will be the most effective for successful operation.
In all cases the primary requirement for every CPU is that it supports the sse4_2 instruction set flag; without this flag, operation of the Saisei NPE software is impossible.
To view the CPU information for the CPUs in use in a system, execute the following command, which will display an entry for every CPU in the system: -
cat /proc/cpuinfo

And the output is: -
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 45
model name      : Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
stepping        : 7
microcode       : 0x70d
cpu MHz         : 2600.000
cache size      : 20480 KB
physical id     : 0
siblings        : 8
core id         : 0
cpu cores       : 8
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc aperfmperf eagerfpu pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 popcnt aes xsave avx hypervisor lahf_lm ida arat epb xsaveopt pln pts dtherm
bogomips        : 5200.00
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:

In this example, note the CPU model in the "model name" field and the sse4_2 entry in the "flags" line.
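To confirm the sse4_2 requirement without reading the full output, a simple count of matching lines can be used (one line is reported per CPU): -
  • grep -c sse4_2 /proc/cpuinfo
A non-zero count confirms the flag is present; an output of 0 means the CPU is unsuitable for the NPE software.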

Memory

The memory requirements for each model have already been defined and, as expected, higher speed memory will enhance the throughput capabilities of the NPE software. Currently 1800MHz DDR3 memory is the most readily available; DDR4 is now coming online and could be used should the system motherboard support it, although it does come at a price premium.

To determine the amount of memory installed the following command can be used: -
  • dmidecode -t 19
Which will produce the following output: -
# dmidecode 2.12
SMBIOS 2.4 present.
 
Handle 0x0124, DMI type 19, 15 bytes
Memory Array Mapped Address
        Starting Address: 0x00000000000
        Ending Address: 0x003FFFFFFFF
        Range Size: 16 GB
        Physical Array Handle: 0x00E2
        Partition Width: 64
This command will work for both bare metal and an ESXi installation via an SSH login. ESXi also allows a user to verify the configured status of an active VM by selecting the VM and choosing the "Getting Started" tab and "Edit virtual machine settings". Example windows follow: -
First the VM Status: -

And now the virtual machine settings: -

Where you can clearly see the configured memory – in this case the VM is configured for a Small model.
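A quick alternative view of the installed memory (the figures reported will differ slightly from the raw DIMM sizes due to kernel and firmware reservations) is available with: -
  • free -g
  • dmidecode -t memory | grep -i size
The first command shows total usable memory in gigabytes and the second lists the size of each installed DIMM.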

Hard Drive

An SSD of size 400GB or greater is required, as specified in the Storage Requirements section above. To check the hard drive size use the command: -
df -h
Which results in the following output: -
Filesystem                            Size  Used Avail Use% Mounted on
/dev/mapper/NPE--Controller--vg-root   83G  4.0G   75G   6% /
none                                  4.0K     0  4.0K   0% /sys/fs/cgroup
udev                                  7.9G  4.0K  7.9G   1% /dev
tmpfs                                 1.6G  584K  1.6G   1% /run
none                                  5.0M     0  5.0M   0% /run/lock
none                                  7.9G     0  7.9G   0% /run/shm
none                                  100M     0  100M   0% /run/user
/dev/sda1                             236M   37M  187M  17% /boot
The NPE start up scripts actually check for a single partition to be at least 400GB in size before starting and enabling the internal Historical Data collector.
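A rough way to mirror that check from the shell (a sketch; it simply lists any mounted filesystem of 400GB or larger) is: -
  • df -BG | awk '$2+0 >= 400'
If nothing is printed, no partition currently meets the Historical Data size requirement.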

ESXi systems from Release 5.1 onwards will be built as an installable OVA/OVF file set which already encodes a 500GB hard drive requirement. If this space is unavailable on the ESXi system the installation will fail, which ensures that the basic hard drive size requirements are met.

NIC Cards and Interfaces

As mentioned earlier in this document any NIC that is used in an NPE system must conform to the Intel specification as already defined.

To determine the installed hardware use the command: -
  • lspci (Display ALL PCI components)
  • lspci | grep Eth (Filtered to display only Ethernet controllers)

Resulting in the following output: -

02:00.0 Ethernet controller: Intel Corporation 82545EM Gigabit Ethernet Controller (Copper) (rev 01)
02:01.0 Ethernet controller: Intel Corporation 82545EM Gigabit Ethernet Controller (Copper) (rev 01)
02:02.0 Ethernet controller: Intel Corporation 82545EM Gigabit Ethernet Controller (Copper) (rev 01)
02:04.0 Ethernet controller: Intel Corporation 82545EM Gigabit Ethernet Controller (Copper) (rev 01)
02:05.0 Ethernet controller: Intel Corporation 82545EM Gigabit Ethernet Controller (Copper) (rev 01)
02:06.0 Ethernet controller: Intel Corporation 82545EM Gigabit Ethernet Controller (Copper) (rev 01)
02:07.0 Ethernet controller: Intel Corporation 82545EM Gigabit Ethernet Controller (Copper) (rev 01)
02:08.0 Ethernet controller: Intel Corporation 82545EM Gigabit Ethernet Controller (Copper) (rev 01)
02:09.0 Ethernet controller: Intel Corporation 82545EM Gigabit Ethernet Controller (Copper) (rev 01)
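It can also be useful to confirm which kernel driver is currently bound to each controller; for the Intel families listed earlier this is typically e1000e, igb, ixgbe or i40e (the output format varies with the lspci version): -
  • lspci -k | grep -A 3 -i ethernet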


Physical Port Discovery

Clearly in a single system there may be several NIC interfaces installed; the lspci command above will show all interfaces, both DPDK capable and not. Once the interfaces are known it will be necessary to discover the 2 interfaces to be used for the basic NPE deployment.

From Release 5.1 onwards the NPE software will discover and display ALL DPDK capable interfaces and will also eliminate the Management interface. Tools have also been developed to enable the discovery of each interface by physical location, whereby each interface's link LED will be flashed.

There are of course Linux commands that can be used prior to the installation and deployment of the NPE software to determine the list of interfaces, their actual names and where they are located in the system; the following commands may be useful: -

  • ifconfig -a (Displays ALL Ethernet interfaces, whether up or down, including the internal name)
  • ip link show (Displays a condensed list of information per link, including the internal name)
  • ethtool -p interface-name (Flashes the link LED, allowing correct determination of the interface when viewed directly)

The more natural way to discover which interface is which, and which should be used, is to install the NPE software package and follow the steps to start it. Once running you can connect to the Graphical User Interface and expand the "System Interfaces" collection in the Navigation tree. Each specific interface can be selected and a "FLASH LED" option used, or a Link can be added to each interface, to determine the physical location in a chassis.
Typical ifconfig output would be: -
eth0      Link encap:Ethernet  HWaddr 00:0c:29:e7:c4:6b
          inet addr:10.1.10.204  Bcast:10.1.10.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fee7:c46b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6336 errors:0 dropped:0 overruns:0 frame:0
          TX packets:664 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:663695 (663.6 KB)  TX bytes:128957 (128.9 KB)
 
eth1      Link encap:Ethernet  HWaddr 00:0c:29:e7:c4:75
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
 
eth2      Link encap:Ethernet  HWaddr 00:0c:29:e7:c4:7f
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

While the output from the “ip link show” command would be: -
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000 link/ether 00:0c:29:e7:c4:6b brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether 00:0c:29:e7:c4:75 brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether 00:0c:29:e7:c4:7f brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether 00:0c:29:e7:c4:89 brd ff:ff:ff:ff:ff:ff
6: eth4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether 00:0c:29:e7:c4:93 brd ff:ff:ff:ff:ff:ff
7: eth5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether 00:0c:29:e7:c4:9d brd ff:ff:ff:ff:ff:ff
8: eth6: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether 00:0c:29:e7:c4:a7 brd ff:ff:ff:ff:ff:ff
9: eth7: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether 00:0c:29:e7:c4:b1 brd ff:ff:ff:ff:ff:ff
10: eth8: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether 00:0c:29:e7:c4:bb brd ff:ff:ff:ff:ff:ff
 
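Where physical identification of several ports is needed, a small shell loop (a sketch; it must be run as root and not all drivers support the identify option) can flash each interface LED in turn for 5 seconds: -
  for i in $(ls /sys/class/net | grep -v lo); do echo $i; ethtool -p $i 5; done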

Software Verification

Prior to installing the NPE software package it would be prudent to verify the version of Ubuntu that has been installed on the system, whether Bare Metal or ESXi.
For VMware ESXi operation the NPE is certified to run on ESXi 5.1 and 5.5 only.

It is vitally important that the version of Ubuntu installed is 14.04 LTS as first released, with absolutely NO upgrades applied. The installation file for this version is available from the Saisei Support web site and can be downloaded by all who have requested and been granted a user account.

To determine the base Operating system installed use the following command: -
  • lsb_release -r
And the output will be: -

Release:        14.04

Prior to installing any Saisei software it would be wise to check the base kernel version, which can be accomplished with the following command: -
uname -r

Whose output should be: -

3.13.0-24-generic
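Both checks can be combined into a single pre-flight test (a minimal sketch, assuming the release and kernel versions quoted above): -
  lsb_release -r | grep -q 14.04 && uname -r | grep -q ^3.13 && echo "Base OS OK" || echo "Check OS/kernel version"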

During the installation of the NPE software package the installer will connect to the internet and upgrade ALL relevant software to the appropriate version number without user input.