ODBC Connection Failover
If a client application attempts to connect to a host in the Vertica Analytic Database cluster that is down, the connection attempt fails when using the default connection configuration. This failure usually returns an error to the user. The user must either wait until the host recovers and retry the connection or manually edit the connection settings to choose another host.
Due to Vertica Analytic Database's distributed architecture, you usually do not care which database host handles a client application's connection. You can use the client driver's connection failover feature to prevent users from receiving connection errors when the host specified in the connection settings is unreachable. The feature gives you two ways to let the client driver automatically attempt to connect to a different host if the one specified in the connection parameters is unreachable:
- Configure your DNS server to return multiple IP addresses for a host name. When you use this host name in the connection settings, the client attempts to connect to the first IP address from the DNS lookup. If the host at that IP address is unreachable, the client tries to connect to the second IP, and so on until it either manages to connect to a host or it runs out of IP addresses.
- Supply a list of backup hosts for the client driver to try if the primary host you specify in the connection parameters is unreachable.
For both methods, the process of failover is transparent to the client application (other than specifying the list of backup hosts, if you choose to use the list method of failover). If the primary host is unreachable, the client driver automatically tries to connect to other hosts.
Failover only applies to the initial establishment of the client connection. If the connection breaks, the driver does not automatically try to reconnect to another host in the database.
Choosing a Failover Method
You usually choose one of the two failover methods. However, they do work together: if your DNS server returns multiple IP addresses and you supply a list of backup hosts, the client first tries all of the IP addresses returned by the DNS server, then the hosts in the backup list.
If a host name in the backup host list resolves to multiple IP addresses, the client does not try all of them. It just tries the first IP address in the list.
The DNS method of failover centralizes the configuration of client failover. As you add new nodes to your Vertica Analytic Database cluster, you can choose to add them to the failover list by editing the DNS server settings. All client systems that use the DNS server to connect to Vertica Analytic Database automatically use connection failover without having to change any settings. However, this method requires administrative access to the DNS server that all clients use to connect to the Vertica Analytic Database cluster, which may not be possible in your organization.
Using the backup server list is easier than editing the DNS server settings. However, it decentralizes the failover feature. You may need to update the application settings on each client system if you make changes to your Vertica Analytic Database cluster.
Using DNS Failover
To use DNS failover, you need to change your DNS server's settings to map a single host name to multiple IP addresses of hosts in your Vertica Analytic Database cluster. You then have all client applications use this host name to connect to Vertica Analytic Database.
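For example, a BIND-style zone file fragment that maps one host name to three cluster nodes might look like the following sketch. The host name and the addresses are placeholders; adapt them to your own DNS infrastructure.

```
; Hypothetical zone file fragment: vertica.example.com resolves to
; three nodes of the cluster. Clients use this single name to connect.
vertica.example.com.    IN  A   192.0.2.10
vertica.example.com.    IN  A   192.0.2.11
vertica.example.com.    IN  A   192.0.2.12
```

With this configuration, a client that specifies vertica.example.com in its connection settings receives all three addresses from the lookup and tries them in order.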
You can have your DNS server return as many IP addresses for the host name as you want. For smaller clusters, you may choose to have it return the IP addresses of all of the hosts in your cluster. For larger clusters, however, consider returning only a subset of the hosts. Otherwise, when the database is down, there can be a long delay as the client driver unsuccessfully tries to connect to each host in turn.
Using the Backup Host List
To enable backup list-based connection failover, your client application must specify at least one IP address or host name in the BackupServerNode parameter. The host name or IP address can optionally be followed by a colon and a port number. If no port is supplied, the driver defaults to the standard Vertica port number (5433). To list multiple hosts, separate them with commas.
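For instance, a DSN entry with a two-host backup list might look like the following sketch. The host names and driver path are placeholders; the second backup entry shows the optional colon-and-port syntax.

```
[VMart]
Description=VMart Vertica Database
Driver=/opt/vertica/lib64/libverticaodbc.so
Database=VMart
Servername=vertica01.example.com
# Tried in order if vertica01 is unreachable; the second entry
# demonstrates an explicit port.
BackupServerNode=vertica02.example.com,vertica03.example.com:5433
```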
The following example demonstrates setting the BackupServerNode connection parameter to specify additional hosts for the connection attempt. The connection string intentionally names a non-existent node, so that the initial connection fails and the client driver must fall back to the backup hosts to establish a connection to Vertica.
// Demonstrate using connection failover.
// Standard i/o library
#include <stdlib.h>
#include <iostream>
#include <assert.h>
// Only needed for Windows clients
// #include <windows.h>
// SQL include files that define data types and ODBC API
// functions
#include <sql.h>
#include <sqlext.h>
#include <sqltypes.h>

using namespace std;

int main()
{
    SQLRETURN ret;  // Stores return value from ODBC API calls
    SQLHENV hdlEnv; // Handle for the SQL environment object
    // Allocate a SQL environment object
    ret = SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &hdlEnv);
    assert(SQL_SUCCEEDED(ret));
    // Set the ODBC version we are going to use to 3.
    ret = SQLSetEnvAttr(hdlEnv, SQL_ATTR_ODBC_VERSION,
        (SQLPOINTER) SQL_OV_ODBC3, SQL_IS_UINTEGER);
    assert(SQL_SUCCEEDED(ret));
    // Allocate a database handle.
    SQLHDBC hdlDbc;
    ret = SQLAllocHandle(SQL_HANDLE_DBC, hdlEnv, &hdlDbc);
    assert(SQL_SUCCEEDED(ret));
    /* DSN for this connection specifies a bad node, and good
       backup nodes:

       [VMartBadNode]
       Description=VMart Vertica Database
       Driver=/opt/vertica/lib64/libverticaodbc.so
       Database=VMart
       Servername=badnode.example.com
       BackupServerNode=v_vmart_node0002.example.com,v_vmart_node0003.example.com
    */
    // Connect to the database using SQLConnect
    cout << "Connecting to database." << endl;
    const char *dsnName = "VMartBadNode"; // Name of the DSN
    const char *userID = "ExampleUser";   // Username
    const char *passwd = "password123";   // Password
    ret = SQLConnect(hdlDbc, (SQLCHAR*)dsnName, SQL_NTS,
        (SQLCHAR*)userID, SQL_NTS, (SQLCHAR*)passwd, SQL_NTS);
    if(!SQL_SUCCEEDED(ret)) {
        cout << "Could not connect to database." << endl;
        exit(EXIT_FAILURE);
    } else {
        cout << "Connected to database." << endl;
    }

    // We're connected. Query the v_monitor.current_session table to
    // find the name of the node we've connected to.

    // Set up a statement handle
    SQLHSTMT hdlStmt;
    ret = SQLAllocHandle(SQL_HANDLE_STMT, hdlDbc, &hdlStmt);
    assert(SQL_SUCCEEDED(ret));
    ret = SQLExecDirect(hdlStmt,
        (SQLCHAR*)"SELECT node_name FROM v_monitor.current_session;",
        SQL_NTS);
    if(SQL_SUCCEEDED(ret)) {
        // Bind a variable to the column in the result set.
        SQLTCHAR node_name[256];
        ret = SQLBindCol(hdlStmt, 1, SQL_C_TCHAR, (SQLPOINTER)node_name,
            sizeof(node_name), NULL);
        while(SQL_SUCCEEDED(ret = SQLFetchScroll(hdlStmt,
            SQL_FETCH_NEXT, 1))) {
            // Print the bound variable, which now contains the value
            // from the fetched row.
            cout << "Connected to node " << node_name << endl;
        }
    }
    cout << "Disconnecting." << endl;
    ret = SQLDisconnect(hdlDbc);
    assert(SQL_SUCCEEDED(ret));
    // When done, free all of the handles to close them
    // in an orderly fashion.
    cout << endl << "Freeing handles..." << endl;
    SQLFreeHandle(SQL_HANDLE_STMT, hdlStmt);
    SQLFreeHandle(SQL_HANDLE_DBC, hdlDbc);
    SQLFreeHandle(SQL_HANDLE_ENV, hdlEnv);
    cout << "Done!" << endl;
    exit(EXIT_SUCCESS);
}
When run, the example's output on the system console is similar to the following:
Connecting to database.
Connected to database.
Connected to node v_vmart_node0002
Disconnecting.

Freeing handles...
Done!
Notice that the connection was made to the first node in the backup list (node 2).
When native connection load balancing is enabled, the additional servers specified in the BackupServerNode connection parameter are used only for the initial connection to a Vertica host. If the host redirects the client to another host in the database cluster to handle its connection request, the second connection does not use the backup host list. This is rarely an issue, because native connection load balancing is aware of which nodes are currently up in the database. See Enabling Native Connection Load Balancing in ODBC.
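For instance, a DSN that combines the two features might look like the following sketch. The host names are placeholders, and you should verify the ConnectionLoadBalance parameter against your driver version's documentation; the backup list here protects only the initial connection, before any server-side redirect.

```
[VMartBalanced]
Driver=/opt/vertica/lib64/libverticaodbc.so
Database=VMart
Servername=vertica01.example.com
# Backup hosts apply only to the initial connection attempt.
BackupServerNode=vertica02.example.com,vertica03.example.com
# Ask the server to redirect the connection to a node of its choosing;
# the redirected connection does not consult BackupServerNode.
ConnectionLoadBalance=1
```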