Configuring and Monitoring Virtual Machines for Vertica 7.2.x

Overview: Running Vertica in a Virtual Environment

The Vertica Analytics Platform software runs on a shared-nothing Massively Parallel Processing (MPP) cluster of peer nodes. Each peer node is independent of the others. A Vertica node is a physical (hardware) or virtual (software) host configured to run an instance of Vertica.

This document provides recommendations for configuring a group of virtual machines (VMs) as a Vertica cluster. These recommendations describe how to create a cluster that is configured for optimal Vertica software performance. 

Although the primary focus is on VMware, these recommendations also apply to other virtualization platforms, most of which offer similar configuration options and technologies. Wherever possible, this document explains why you should implement the recommended configuration.

Vertica does not perform as fast in a virtualized environment as it does in a bare-metal environment. This happens primarily because of the overhead and resource constraints imposed by the virtualization software. For best performance, use bare-metal environments wherever possible.

Limitations

Vertica does not support suspending a virtual machine while Vertica is running on it.

Unsupported Features

Vertica does not support the following feature:

  • VMware vMotion

Minimum Software Recommendations

The recommendations in this document assume that, once configured, your environment runs at least the following software versions:

  • VMware ESX 5.5 Hypervisor, to virtualize the Vertica Analytics Platform, with VMware Tools installed on each virtual machine.
  • Vertica 7.2.x Enterprise Edition. Vertica 7.2.x implements a new I/O profile that leverages the enhanced capabilities of modern hardware platforms.
  • For information about which Linux platforms Vertica supports, see Vertica Supported Platforms.

Hardware Recommendations for Hosts

All physical hosts that are hosting a virtualized Vertica cluster must have the same hardware configuration and same ESX version. Configure the physical (host) machine as follows:

  • One, two, or four sockets clocked at or above 2.2 GHz
  • 8 GB of RAM per physical core in each socket, populated evenly across memory channels so that memory access is not slowed
  • Sufficient disk throughput to power the number of virtual machines running on the host
  • Geographic co-location of all nodes

Vertica comes with the validation tools vnetperf, vioperf, and vcpuperf. The virtual servers should meet at least the following performance goals:

  • Networking:
    • 100 MB/s of UDP network traffic per node on the private network (as measured by vnetperf)
    • 20 MB/s per core of TCP network traffic on the private network (as measured by vnetperf)
    • Independent public network
  • I/O:
    • Measured by vioperf concurrently on all Vertica nodes:
      • 25 MB/s per core of write
      • 20+20 MB/s per core of rewrite
      • 40 MB/s per core of read
      • 150 seeks per second (SkipRead)
    • Thick provisioned disk, or pass-through storage

For best performance:

  • Disable CPU scaling on the physical hosts.
  • Configure the disk blocks to align with the blocks that ESX creates. Unaligned blocks may cause reduced I/O performance during high load.

VMware provides recommendations for aligning VMFS partitions in its Recommendations for Aligning VMFS Partitions document. Follow these recommendations for the best I/O throughput. In more recent VMware versions, the vSphere Client automatically aligns VMFS partitions along the 64 KB boundary. If you need to align VMFS partitions manually, consult that document for instructions.
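
Inside each guest, you can also spot-check that the data-volume partition starts on an aligned boundary. The following is a minimal sketch that assumes the Vertica data volume is /dev/sdb with a single partition; adjust the device name for your environment.

# Report whether partition 1 on /dev/sdb is optimally aligned
parted /dev/sdb align-check optimal 1
# Alternatively, print the partition table in sectors; a starting sector that is
# a multiple of 2048 (1 MiB) also satisfies the 64 KB boundary
parted /dev/sdb unit s print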

Configuring the Virtual Machines for Your Hardware

All VMs in a virtualized Vertica cluster must be configured with the same specifications. Configure your virtual machine as follows:

  • One socket per VM and 4 GB of memory per core in that socket
  • Configure all volumes attached to each VM as:
    • Thick Provisioned Eager Zeroed
    • Independent
    • Persistent

If shared storage is used, run the I/O tests concurrently on all nodes. Doing so allows you to get an accurate picture of available throughput per node.

Advanced Memory and CPU Configuration

The VMware document Best Practices for Performance Tuning of Latency-Sensitive Workloads in vSphere VMs provides most of the recommendations described in this section.

Before you configure the advanced memory and CPU settings, consider your latency requirements: if you run short queries that must return with low latency, disable C1E and C-states in the BIOS.

How you disable C1E and C-states depends on the make and model of your server. For HPE ProLiant servers, do the following:

  • Set the Power Regulator Mode to Static High Mode.
  • Disable Processor C-State Support.
  • Disable Processor C1E Support.
  • Disable QPI Power Management.
  • Enable Intel Turbo Boost.

In addition, you must configure several settings on the Resources tab of each VM. These settings accomplish the following:

  • Configure VMs with no oversubscription.
  • Bind each VM to a specific socket and NUMA node (memory dedicated to that socket).
  • Reserve memory so that guest memory is not swapped to disk. (When the host swaps Vertica guest memory to disk, Vertica performance can be significantly degraded.)

Right-click each VM, and select Edit Settings. Make the following changes in the Resources tab:

  • Advanced CPU:
    • Under Hyper-threaded Core Sharing, set Mode to None.
    • Under Scheduling Affinity, specify the cores that correspond to the socket that the VM is using. On an 8-core socket with hyper-threading, specify 0-15 for the first socket, 16-31 for the second socket, and so on.
  • Advanced Memory: Under Use memory from nodes, select the socket that the VM will be using. For example, if you selected 0-15 for the first socket, specify 0.
  • Memory: Check Reserve All Guest Memory (All locked).
  • Verify that virtual memory ballooning is disabled for each VM. Never use virtual memory ballooning on a system that hosts Vertica nodes.
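
After you apply these settings, you can verify the CPU and NUMA topology that each guest actually sees. This is a quick sketch using standard Linux tools; numactl may need to be installed from your distribution's repositories.

# Show the vCPU count, socket count, and NUMA layout visible to the guest
lscpu
# Show the NUMA nodes and the memory available on each (requires the numactl package)
numactl --hardware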

Linux I/O Subsystem Tuning

To support the maximum performance of your VM, review the following recommendations.

Vertica Data Location Volumes

Configure the following settings for the volumes that store your database files:

  • The recommended Linux file system is ext4.
  • The recommended Linux I/O Scheduler is deadline.
  • The recommended Linux readahead setting is 4096 512-byte sectors (4 MB).

Use more than one volume for better organization. For example, you might have one volume for the operating system and the Vertica software, and another volume for the database catalog and data. In a virtualized environment, this recommendation has no performance implications unless the volumes are on different physical disks.
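
For example, to create and mount a dedicated ext4 data volume, you might run commands similar to the following sketch. It assumes the data disk appears in the guest as /dev/sdb with a single partition and that the mount point is /vertica/data; adjust both for your environment.

# Create an ext4 file system on the data partition and mount it
mkfs.ext4 /dev/sdb1
mkdir -p /vertica/data
mount /dev/sdb1 /vertica/data
# Mount the volume automatically at boot
echo "/dev/sdb1  /vertica/data  ext4  defaults  0 0" >> /etc/fstab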

Configure the deadline scheduler and the readahead settings for the Vertica data volume so that they persist each time the server restarts. To make these settings persistent, add the following lines to the /etc/rc.local file. (The last three lines also disable transparent huge pages.)

# Use the deadline I/O scheduler for the Vertica data volume (assumes /dev/sdb)
echo deadline > /sys/block/sdb/queue/scheduler
# Set readahead to 4096 512-byte sectors (4 MB)
blockdev --setra 4096 /dev/sdb
# Disable transparent huge pages (Red Hat kernel path shown)
echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
echo no > /sys/kernel/mm/redhat_transparent_hugepage/khugepaged/defrag
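
After the next reboot, confirm that the settings took effect. Again, this assumes /dev/sdb is the data volume:

# The active scheduler appears in square brackets, for example: noop [deadline] cfq
cat /sys/block/sdb/queue/scheduler
# Should print 4096 (512-byte sectors, 4 MB)
blockdev --getra /dev/sdb
# Both should report [never]
cat /sys/kernel/mm/redhat_transparent_hugepage/enabled
cat /sys/kernel/mm/redhat_transparent_hugepage/defrag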

Caution: Failure to follow the recommended Linux I/O subsystem settings can adversely affect Vertica performance.

I/O Performance

After you configure the storage, run vioperf concurrently on all nodes. Doing so allows you to accurately determine the I/O throughput of each VM.
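
One way to start vioperf at the same time on every node is to launch it over ssh from a single host. This is a minimal sketch; the host names are placeholders, and it assumes passwordless ssh as dbadmin, vioperf installed at /opt/vertica/bin/vioperf, and a directory on the data volume at /home/dbadmin/data.

# Launch vioperf concurrently on all nodes and save each node's output locally
for host in node01 node02 node03; do
    ssh dbadmin@$host /opt/vertica/bin/vioperf /home/dbadmin/data > vioperf_$host.log 2>&1 &
done
wait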

The minimum required I/O is 20 MB/s read and write per physical processor core on each node.

This value is based on running in full duplex, reading and writing at this rate simultaneously, concurrently on all nodes of the cluster.

The recommended I/O is 40 MB/s per physical core on each node. For example, a server node with two hyper-threaded six-core CPUs (12 physical cores) requires a minimum of 240 MB/s; for best performance, aim for 480 MB/s.

For more information, see vioperf in the Vertica product documentation.

Network Configuration

After you have configured your network, run vnetperf to understand the baseline network performance. For more information, see vnetperf in the Vertica documentation.

Network Latency

The maximum recommended RTT (round-trip time) latency is 1000 microseconds. The ideal RTT latency is 200 microseconds or less. Keep clock skew under 1 second. The minimum recommended throughput is 500 MB/s. Ideal throughput is 800 MB/s or more.
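
You can spot-check round-trip latency and clock synchronization from each node with standard tools. The host name below is a placeholder; 200 microseconds corresponds to 0.2 ms in the ping summary.

# Average RTT on the private network; read the avg value in the rtt summary line
ping -c 100 -q node02-private
# Clock offset against the configured NTP peers (on chrony-based systems, use: chronyc tracking)
ntpq -p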

Tuning the TCP/IP Stack

Depending on your workload, number of connections, and client connection rates, you may need to tune the Linux TCP/IP stack to achieve adequate network performance and throughput.

The following settings are the recommended network (TCP/IP) tuning parameters for a Vertica cluster. Other network characteristics may affect how much these parameters improve your throughput.

Add the following parameters to the /etc/sysctl.conf file. The changes take effect after the next reboot.

##### /etc/sysctl.conf
# Increase the number of incoming connections
net.core.somaxconn = 1024
# Sets the send socket buffer maximum size in bytes
net.core.wmem_max = 16777216
# Sets the receive socket buffer maximum size in bytes
net.core.rmem_max = 16777216
# Sets the send socket buffer default size in bytes
net.core.wmem_default = 262144
# Sets the receive socket buffer default size in bytes
net.core.rmem_default = 262144
# Sets the maximum number of packets allowed to queue when a particular
# interface receives packets faster than the kernel can process them
# (increases the length of the processor input queue)
net.core.netdev_max_backlog = 100000
# TCP and UDP memory and per-socket buffer limits
net.ipv4.tcp_mem = 16777216 16777216 16777216
net.ipv4.tcp_wmem = 8192 262144 8388608
net.ipv4.tcp_rmem = 8192 262144 8388608
net.ipv4.udp_mem = 16777216 16777216 16777216
net.ipv4.udp_rmem_min = 16384
net.ipv4.udp_wmem_min = 16384
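
If you prefer not to wait for a reboot, you can load the new values into the running kernel immediately and then spot-check one of them:

# Apply the settings from /etc/sysctl.conf to the running kernel
sysctl -p /etc/sysctl.conf
# Verify a single value
sysctl net.core.rmem_max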

Configuring Vertica on Your Virtualized Cluster

To maintain high availability of your virtualized Vertica cluster, place the nodes of a cluster with three or fewer nodes on different physical hosts.

For virtual clusters with more than three nodes, distribute the nodes evenly onto three or more physical hosts. Make sure to configure those nodes for fault groups, as described in the next section.

Vertica K-safety

Verify that your K-safety setting is greater than 0 for all databases running on a virtual or bare-metal environment.

Vertica cluster K-safety is designed to provide high availability and fault tolerance in the event that a physical node fails. When using VMs, K-safety provides such protection against the failure of virtual machines. K-safety does not provide protection if the ESX host or hosting infrastructure fails.
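
You can confirm the designed and current K-safety of a database from any node with vsql. The query below uses the SYSTEM system table; the commented line shows how you would raise the designed K-safety and is a sketch to adapt rather than a command to run as-is.

# Report the designed and current fault tolerance (K-safety) of the database
vsql -c "SELECT designed_fault_tolerance, current_fault_tolerance FROM system;"
# To raise the designed K-safety to 1, you would run (on a cluster of three or more nodes):
# vsql -c "SELECT MARK_DESIGN_KSAFE(1);"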

For best results, configure Vertica fault groups. Fault groups may provide a level of resilience against the failure of the underlying infrastructure. For detailed information, see Fault Groups in the Vertica product documentation.

You can also use VMware High Availability and VMware Fault Tolerance. These features minimize downtime and data loss if you experience hardware failure.

Monitoring Your Virtualized Cluster

Perform the following steps regularly to monitor your virtualized Vertica cluster.

Linux Utilities

To monitor your Vertica virtual machines, run the following utilities on them weekly:

  • vnetperf
  • vcpuperf
  • vioperf. Run vioperf concurrently on all the Vertica virtual machines in your cluster.

Save and review the results. For example, if you run vioperf and the %IO Wait value is greater than 20%, that may indicate high disk latency. This can negatively impact Vertica database performance.

Important: The v*perf utilities are hardware validation tools for when you set up a Vertica cluster. They can also provide useful information after the cluster is up and operating. However, these tools generate artificial and possibly heavy workloads on a cluster, which may interfere with normal database operations.

Monitor CPU Ready and CPU Usage

The vSphere CPU ready value indicates the percentage of time that the VM could not run on the physical CPU. The CPU usage value indicates the amount of time the virtual CPU was used as a percentage of the available physical CPU. If the CPU ready values are consistently greater than 20%, or the CPU usage values are consistently greater than 90%, the VM CPUs may be overcommitted or CPU starvation may be occurring.
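
As a cross-check at the host level, you can watch the equivalent counters interactively with esxtop on the ESXi host; in the CPU panel, %RDY roughly corresponds to CPU ready and %USED to CPU usage.

# On the ESXi host (for example, over SSH): press c for the CPU panel, then watch %RDY and %USED per VM
esxtop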

For more information about vSphere CPU metrics, see CPU Usage.

Monitor Disk Latency

To monitor disk I/O for your host and your virtual machines, use the vSphere Client disk performance charts. To monitor disk latency, use the Advanced performance charts.

For more information about the vSphere disk metrics, see Disk (KBps).

Monitor Memory Utilization on Each Host

To monitor memory usage on your hosts, use the Linux free or the vmstat command. Alternatively, to monitor memory usage, you can use vSphere, as described in Monitoring Inventory Objects with Performance Charts.

Memory usage on the Vertica virtual machine should never reach 100%. In addition, there should be no memory swapping on the Vertica virtual machine. Run free to see information about memory usage. In the following example, only about 25% of memory is actually in use (985324 of 3924632 KB, excluding buffers and cache), and swap usage is less than 1%:

$ free
             total       used       free     shared    buffers     cached
Mem:       3924632    3762692     161940         16     181504    2595864
-/+ buffers/cache:     985324    2939308
Swap:      2097148       6248    2090900

In addition, verify that memory usage never reaches 100% and that there is minimal swapping for the Linux virtual machines running on each VMware host.
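
To watch for swapping over time rather than at a single point, you can sample vmstat; nonzero values in the si and so columns indicate pages being swapped in and out.

# Sample memory and swap activity every 5 seconds, 12 times (one minute)
vmstat 5 12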

For more information about vSphere memory metrics, see Memory (MB).

Monitor Network Usage

Make sure to monitor the following network usage characteristics for both the host and the virtual machine:

  • Network usage, KBps send/receive
  • Broadcast receives and transmits
  • Number of receive packets dropped and number of send packets dropped
  • Packet receive errors

For more information about vSphere network metrics, see Networks (Mbps).

VMware Monitoring Tools

VMware provides several system-health and performance-monitoring metrics. For details, see vSphere Monitoring and Performance (PDF) or About vSphere Monitoring and Performance (HTML).