Vertica Hardware Guide

Selecting and Configuring a Physical Server as a Vertica Node

The Vertica Analytics Platform software runs on a shared-nothing MPP cluster of peer nodes. Each peer node is independent, and processing is massively parallel. A Vertica node is a hardware host configured to run an instance of Vertica.

This document provides recommendations for selecting and configuring an individual physical server as a Vertica node. These recommendations help guide you in a discussion with your hardware vendor to create a cluster with the highest possible Vertica software performance.

Recommended Software

This document assumes that your servers, after you configure them, are running the following minimum software versions:

  • Vertica 9.x Enterprise Edition. Vertica recommends using Vertica 9.3.1 (or later) Enterprise Edition.
  • Red Hat Enterprise Linux 7.6. Recommending Red Hat Linux is not an indication that Vertica has a preference for this product over other supported Linux distributions. The purpose of this recommendation is solely to enhance discussion points in this document.

For more information, see Supported Operating Systems and Operating System Versions in the Vertica documentation.

Selecting a Server Model

Through a combined process of lab and customer testing, Vertica has determined that for most customers, the best server model for individual Vertica nodes is a 2-socket server that supports at least 256 GB of RAM and at least 10 (preferably more than 20) internal disk drives.

Many vendors offer server models that fit into this recommendation. Engage with your hardware provider to select the exact model and parts required for your deployment.

Selecting a Processor

Vertica recommends Intel-based processors to provide the best overall performance for your cluster, and Intel offers many CPU options. For the best price/performance tradeoff, consider the following CPU models:

  • Intel Xeon Gold 6246 Processor 3.3 GHz: 12 cores
  • Intel Xeon Platinum 8268 Processor 2.9 GHz: 24 cores

Intel currently offers CPU options with more cores. However, those higher core-count parts typically run at lower clock speeds, and some database operations in Vertica cannot take advantage of the extra cores, so overall performance can suffer. A faster clock speed directly improves Vertica database response time, while additional cores improve the cluster's ability to execute multiple MPP queries and data loads simultaneously. Based on real-life customer deployments and average usage models, Vertica has determined that 12 physical cores per CPU is the optimal balance. Among the options available from Intel at the time of this writing (April 2018), the processors listed above allow Vertica to deliver fast response times across a wide spectrum of concurrent database workloads.

Base the decision to enable Hyper-Threading on your individual workload. Hyper-Threading increases the number of logical cores per CPU by allowing each physical core to process two threads simultaneously. This can be very effective for short, fast operations, but detrimental for long-running operations, because a long-running thread can force the second thread on the same core to wait.

Vertica recommends that you perform your own testing to determine the best settings for your data and workloads.
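To check the CPU layout and whether Hyper-Threading is active on a candidate or existing server, a quick inspection with lscpu is usually sufficient (a minimal sketch; field names can vary slightly between lscpu versions):

# Show sockets, physical cores, threads per core, model, and clock speed.
# "Thread(s) per core: 2" indicates that Hyper-Threading is enabled.
lscpu | grep -E '^(Socket|Core|Thread|Model name|CPU MHz)'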

Selecting Memory

For maximum performance, Vertica nodes should include at least 256 GB of RAM. A good rule of thumb is 8–12 GB of RAM per physical core in the server. Check with your hardware vendor, but it is common for a physical server to provide multiple memory channels, each of which accepts several memory DIMMs. This allows you to configure server memory in a variety of ways to optimize cost. Typically, the most cost-effective approach is to fill the available memory slots and channels with many smaller DIMMs. However, on many servers, especially older models, expanding memory beyond the second channel can reduce the speed at which the memory runs. This decrease in memory speed can adversely affect Vertica performance.

Vertica recommends that you check with your hardware vendor and select DIMMs in appropriate sizes to avoid a situation where the memory speed may degrade.
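To confirm how the installed DIMMs are populated and whether they are running at their rated speed, dmidecode can report this information (a quick check, run as root; output labels differ between dmidecode versions):

# List each populated DIMM's size, rated speed, and configured (actual) speed.
dmidecode -t memory | grep -E 'Size:|Speed:' | grep -v 'No Module'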

Many vendors now implement additional memory redundancy and correction technologies in their hardware. As of this writing (April 2018), Vertica has not evaluated these advanced memory technologies. Technologies such as mirroring or sparing add system redundancy and can increase hardware reliability, but in doing so may increase the cost of the server or decrease its performance. If you plan to implement any of these technologies, discuss the pros and cons in detail with your hardware vendor, and check with your operating system vendor to verify that they are supported.

Selecting and Configuring Storage

As stated earlier in "Selecting a Server Model", Vertica recommends that you deploy at least 10 disks, preferably 20 or more, per Vertica node. This recommendation is based on the performance advantages of having multiple disks servicing the I/O requirements of typical Vertica workloads.

Hardware vendors offer a wide choice of hard drive types. However, based on testing and usage in existing customer deployments, Vertica recommends enterprise-class 12 Gb/s SAS drives with a rotational speed of 10K RPM or higher. SSDs can also be used, but they tend to raise the cost of a Vertica node without offering significant performance gains for Vertica.

Additionally, deploy a disk array controller that

  • Supports at least four (4) 12 Gb/s SAS lanes.
  • Has a minimum of 1 GB of cache.
  • Can support RAID 1, RAID 5, RAID 10, or RAID 50.

A Vertica installation requires at least two storage locations: one for the operating system and catalog, and one for data. Place each of these locations on a dedicated, contiguous storage volume.
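A quick way to confirm that the operating system/catalog volume and the data volume are separate devices is to list the block devices and their mount points (device names and layout are illustrative):

# The OS/catalog RAID 1 device and the data RAID 10 device should appear
# as separate block devices with separate mount points.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT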

Vertica is a multithreaded application. The Vertica data location I/O profile is best characterized as large block random I/O.

The following specifications are a generic example of a storage hardware configuration for maximum performance of a Vertica node:

  • 2x 300 GB 12 Gb/s SAS 10K enterprise-class drives (configured as RAID 1 for the operating system and the Vertica catalog location)
  • 22x 1.2 TB 12 Gb/s SAS 10K enterprise-class drives (configured as one RAID 10 device for the Vertica data location, for approximately 13 TB of total formatted storage capacity per Vertica node)

You can configure a Vertica node with less storage capacity:

  • Substitute the 22x 1.2 TB 12 Gb/s SAS 10K enterprise-class drives with 22x 600 GB 12 Gb/s SAS 15K enterprise-class drives.
  • Configure the drives as one RAID 10 device for the Vertica data location, for approximately 6 TB of total data storage capacity per Vertica node.

If your disk array controller supports it, you can also configure additional disks as hot spares for added redundancy. However, hot spares are generally unnecessary with a RAID 10 configuration.

Vertica can operate on any storage type: internal storage, a SAN array, a NAS storage unit, or a DAS enclosure. In each case, the storage appears to the host as a file system and should be capable of providing sufficient I/O bandwidth. Internal storage in a RAID configuration offers the best price/performance/availability characteristics at the lowest TCO. If you are considering using SAN- or NAS-based storage for Vertica, make sure to consider the overall load created on the storage device and connecting pathways by the cluster as a whole. In cases where a shared storage device is deployed to support Vertica, test I/O throughput by running vioperf on all the nodes connecting to the shared storage simultaneously to understand the baseline I/O performance. This testing provides a much more accurate picture of overall I/O performance.

The minimum required I/O is 20 MB/s read and write per physical processor core on each node. Vertica recommends 40 MB/s read and write per physical core or more. These values assume full-duplex operation, with every node in the cluster reading and writing at this rate simultaneously.
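As a rough sizing aid, you can translate these per-core targets into a per-node figure from the local core count (an illustrative sketch; the arithmetic simply multiplies the number of physical cores by the per-core rates given above):

# Count unique physical cores (ignoring Hyper-Threading), then print the
# minimum (20 MB/s per core) and recommended (40 MB/s per core) rates.
PHYS_CORES=$(lscpu -p=CORE,SOCKET | grep -v '^#' | sort -u | wc -l)
echo "Minimum I/O per node:     $((PHYS_CORES * 20)) MB/s read and write"
echo "Recommended I/O per node: $((PHYS_CORES * 40)) MB/s read and write"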

For more information, see vioperf in the Vertica documentation.

Data RAID Configuration

For best performance, configure all data drives as one RAID 10 device with a default strip size of 512 KB. However, if you are willing to trade some disk performance for additional usable storage space, you can choose RAID 5 or RAID 50 instead.

The controller cache ratio setting should heavily favor writes over reads (for example, 10% read / 90% write, if the controller supports it).

The logical drive should be partitioned with a single primary partition spanning the entire drive.

Place the Vertica data location on a dedicated physical storage volume. Do not co-locate the Vertica data location with the Vertica catalog location. The Vertica catalog location on a Vertica node should be either co-located with the operating system drive, or configured on an additional drive.

For more information, read Before You Install Vertica in the product documentation, particularly the discussion of Vertica storage locations.

Note Vertica supports the Linux Logical Volume Manager (LVM) in the I/O path only with LVM version 2.02.66 or later and device-mapper version 1.02.48 or later. This limitation applies to all Vertica storage locations, including the catalog, which is typically placed on the operating system drive.
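To confirm that the installed LVM and device-mapper versions meet these minimums, you can query them directly (a quick check; requires the LVM tools to be installed):

# Report the LVM tools version and the device-mapper library/driver versions.
lvm version
dmsetup --version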

Linux I/O Subsystem Tuning

To support maximum performance on a Vertica node, Vertica recommends the following Linux I/O configuration settings for the Vertica data location volumes:

  • The recommended Linux file system is ext4.
  • The recommended Linux I/O Scheduler is deadline.
  • The recommended Linux readahead setting is 8192 512-byte sectors (4 MB).

System administrators should configure the deadline scheduler and the readahead settings for the Vertica data volume so that these settings persist across server restarts.
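To verify the active settings for the data volume before and after making them persistent, you can inspect them directly (assuming, as in the example later in this document, that sdb is the data drive):

# The scheduler currently in use is shown in square brackets.
cat /sys/block/sdb/queue/scheduler
# Readahead is reported in 512-byte sectors; 8192 sectors equals 4 MB.
blockdev --getra /dev/sdb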

Caution Failing to use the recommended Linux I/O subsystem settings adversely affects the performance of your Vertica database.

Data RAID Configuration Example

The following configuration and tuning instructions pertain to the Vertica data storage location after the disk array has been created using the appropriate tools for your specific controller.

Note The following steps are provided as an example, and may not be correct for your machine. Verify the drive numbers and population for your machine before running these commands.

  1. Partition and format the RAID 10 data drive:

    # parted -s /dev/sdb mklabel gpt mkpart primary ext4 0% 100%
    # mkfs.ext4 /dev/sdb1
  2. Create a /data mount point, add a line to the /etc/fstab file, and mount the Vertica data volume:

    # mkdir /data
    [add line to /etc/fstab]: /dev/sdb1 /data ext4 defaults,noatime 0 0
    # mount /data
  3. So that the Linux I/O scheduler, Linux readahead, and hugepage defragmentation settings persist across system restarts, add the following lines to the /etc/rc.local file. Apply these steps to every drive in your system.

    Note The following commands assume that sdb is the data drive, and sda is the operating system/catalog drive.

    echo deadline > /sys/block/sdb/queue/scheduler
    blockdev --setra 8192 /dev/sdb
    echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
    echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
    echo no > /sys/kernel/mm/redhat_transparent_hugepage/khugepaged/defrag
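    # Note: on Red Hat Enterprise Linux 7.x, the transparent hugepage settings
    # are located under /sys/kernel/mm/transparent_hugepage rather than
    # /sys/kernel/mm/redhat_transparent_hugepage; adjust the paths above accordingly.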
    echo deadline > /sys/block/sda/queue/scheduler
    blockdev --setra 2048 /dev/sda
  4. After you have configured the storage, run vioperf to understand the baseline I/O performance.
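    For example, assuming the data location is mounted at /data as in step 2 (options vary by Vertica version; run vioperf --help to confirm), start vioperf on all nodes simultaneously when testing shared storage:

    # /opt/vertica/bin/vioperf /data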

The following numbers are good target goals for disk I/O performance where the data disk count is greater than 20:

  • 2,200 MB/s read and write when using 15K RPM drives
  • 2,200 MB/s write and 1,500 MB/s read when using 10K RPM drives
  • 800+800 MB/s for rewrite
  • 7,000+ seeks per second

If your results are significantly lower, review the preceding steps to verify that you have configured your data storage location and disk array controller correctly.

Selecting a Network Adapter

To support maximum-performance MPP cluster operations, Vertica nodes should include at least two 10 Gigabit Ethernet ports, bonded together for performance and redundancy.

A Vertica cluster is formed with Vertica nodes, associated network switches, and Vertica software.

Vertica recommends that you consider connecting the Vertica nodes to two separate Ethernet networks:

  • The private network, such as a cluster interconnect, is used exclusively for internal cluster communications. It must be a dedicated 10 Gb Ethernet network on its own subnet, dedicated switch, or VLAN. Vertica performs TCP peer-to-peer communications and UDP broadcasts on this network. Assign static IP addresses to the private network interfaces, and do not allow external traffic over the private cluster network.
  • The public network is used for database client (i.e., application) connectivity, and it should be 10 Gb Ethernet. Vertica has no rigid requirements for public network configuration. However, Vertica recommends that you assign static IP addresses for the public network interfaces.

The private network interconnect should have Ethernet redundancy. Otherwise, the interconnect (specifically, the switch) becomes a single point of failure for the entire cluster.

Cluster operations are not affected even by a complete failure of the public network, so public network redundancy is not technically required. However, a public network failure does affect application connectivity to the database. Therefore, consider public network redundancy for continuous availability of the entire environment.

To achieve redundancy on both the private and public networks:

  1. Take the two ports from the Ethernet card on the server and run one to each of the two top-of-rack switches (which are bonded together in an IRF).
  2. Bond the links together using LACP (a sample bonding configuration follows this list).
  3. Using VLANs, divide the links into public and private networks.
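The exact procedure depends on your NIC names, switch model, and operating system tooling. The following is a rough sketch using Red Hat Enterprise Linux 7 ifcfg files; the interface name ens1f0, the VLAN IDs, and the IP address are placeholders only:

##### /etc/sysconfig/network-scripts/ifcfg-bond0
# LACP (802.3ad) bond carrying both the public and private VLANs.
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"
ONBOOT=yes
BOOTPROTO=none

##### /etc/sysconfig/network-scripts/ifcfg-ens1f0  (repeat for the second port)
DEVICE=ens1f0
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none

##### /etc/sysconfig/network-scripts/ifcfg-bond0.100  (private cluster VLAN)
DEVICE=bond0.100
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.100.11
NETMASK=255.255.255.0

Create a second VLAN interface (for example, bond0.200) for the public network in the same way, and restart the network service for the changes to take effect.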

Configuring the Network

The following figure illustrates a typical network setup that achieves high throughput and high availability. (This figure is for demonstration purposes only.)

This figure shows that the bonding of the adapters allows for one adapter to fail without the connection failing. This bonding provides high availability of the network ports. Bonding the adapters also doubles the throughput.

Furthermore, this configuration allows for high availability with respect to the switch. If a switch fails, the cluster does not go down. However, it may reduce the network throughput by 50%.

Tuning the TCP/IP Stack

Depending on your workload, the number of connections, and client connect rates, tune the Linux TCP/IP stack to provide adequate network performance and throughput.

The following script represents the recommended network (TCP/IP) tuning parameters for a Vertica cluster. Other network characteristics may affect how much these parameters optimize your throughput.

Add the following parameters to the /etc/sysctl.conf file. The changes take effect after the next reboot.

##### /etc/sysctl.conf
# Increase number of incoming connections
net.core.somaxconn = 1024
# Sets the send socket buffer maximum size in bytes.
net.core.wmem_max = 16777216
# Sets the receive socket buffer maximum size in bytes.
net.core.rmem_max = 16777216
# Sets the send socket buffer default size in bytes.
net.core.wmem_default = 262144
# Sets the receive socket buffer default size in bytes.
net.core.rmem_default = 262144
# Sets the maximum number of packets allowed to queue on the input side when
# an interface receives packets faster than the kernel can process them.
net.core.netdev_max_backlog = 2000
net.ipv4.tcp_mem = 16777216 16777216 16777216
net.ipv4.tcp_wmem = 8192 262144 8388608
net.ipv4.tcp_rmem = 8192 262144 8388608
net.ipv4.udp_mem = 16777216 16777216 16777216
net.ipv4.udp_rmem_min = 16384
net.ipv4.udp_wmem_min = 16384
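If you prefer to apply the settings without waiting for a reboot, you can reload /etc/sysctl.conf into the running kernel (standard sysctl behavior on Red Hat Enterprise Linux):

# Load the parameters from /etc/sysctl.conf into the running kernel.
sudo sysctl -p /etc/sysctl.conf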

If you have a high-concurrency workload and Vertica is CPU bound, you can increase the length of the processor input queue by using the following command:

sudo sysctl -w net.core.netdev_max_backlog=2000

Additional (BIOS) Settings

Vertica performs best when the hardware is tuned for a high-performance, low-latency application. Intel-based processors offer a wide array of power-saving technologies. These technologies can reduce the overall power consumption of the server, but can also affect high-end performance.

To obtain the very best performance, follow the hardware manufacturer’s guide to disable CPU scaling and power-saving features in the BIOS of the Vertica nodes. Many hardware vendors offer documented guides for tuning their specific servers for high-performance, low-latency applications like Vertica.

Additionally, you must modify the max_cstate value. Set it to 0 to disable CPU C-states.

For Red Hat Enterprise Linux 7.x, /etc/grub.conf has become /boot/efi/EFI/[centos or redhat]/grub.conf. Append the following arguments to the kernel command line:

intel_idle.max_cstate=0 processor.max_cstate=1 intel_pstate=disable
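On Red Hat Enterprise Linux 7 systems that boot with GRUB 2, a common way to apply these arguments persistently is to add them to GRUB_CMDLINE_LINUX in /etc/default/grub and regenerate the GRUB configuration. The following is a sketch; the output path assumes a UEFI Red Hat system, so adjust it for your boot mode and distribution:

# 1. Edit /etc/default/grub so that GRUB_CMDLINE_LINUX also contains:
#      intel_idle.max_cstate=0 processor.max_cstate=1 intel_pstate=disable
# 2. Regenerate the GRUB configuration and reboot.
sudo grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
sudo reboot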

Vertica recommends that you deploy on Red Hat Enterprise Linux 7.x. However, if you are deploying a non-current version of Vertica, check the product documentation for that Vertica version to determine the supported operating systems.

For Red Hat Enterprise Linux versions prior to 7.0, modify the kernel entry in the /etc/grub.conf file by appending the following to the kernel command:

intel_idle.max_cstate=0 processor.max_cstate=0

Vertica Validation Utilities

Included with every Vertica installation is a set of validation utilities, typically located in the /opt/vertica/bin directory. These tools, as mentioned previously, can help you determine the overall performance of your Vertica nodes and cluster.

  • Use the vcpuperf utility to measure your Vertica node's CPU processing speed and to compare the low-load and high-load times, which indicate whether CPU throttling is enabled.
  • The vioperf utility tests the performance of your Vertica nodes’ input and output subsystems, allowing you to identify I/O bottlenecks.
  • The vnetperf utility allows you to measure the network performance between your Vertica nodes, providing an indication of networking issues that may be impacting your cluster performance.

Vertica recommends that you use these tools during initial deployment and also run them periodically to determine whether performance has changed over time. Periodic runs help you identify whether any system changes are adversely affecting Vertica performance.
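For example, the utilities can be run directly from the installation directory (options vary by Vertica version; run each utility with --help for details, and substitute your own data location and host names):

# Measure CPU speed and compare low-load and high-load times.
/opt/vertica/bin/vcpuperf
# Measure I/O throughput for the Vertica data location.
/opt/vertica/bin/vioperf /data
# List the options for measuring network performance between nodes.
/opt/vertica/bin/vnetperf --help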

For More Information

If you follow the hardware recommendations in this documentation, you should experience excellent performance from your Vertica database. Additional resources for tuning your Vertica database can be found in the Vertica Knowledge Base.

If you have problems, please contact Vertica support.