Vertica Analytics Platform Version 9.2.x Documentation
Creates a storage location where Vertica can store data. After you create the location, you create storage policies that assign the storage location to the database objects that will store data in the location.
While no technical issue prevents you from using
CREATE LOCATION to add one or more Network File System (NFS) storage locations, Vertica does not support NFS data or catalog storage except for MapR mount points. You will be unable to run queries against any other NFS data. When creating locations on MapR file systems, you must specify
ALL NODES SHARED.
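For example, a statement creating a location on a MapR mount point might look like the following (the mount-point path is a hypothetical example; substitute your cluster's mount point):

=> CREATE LOCATION '/mapr/my.cluster.com/vertica' ALL NODES SHARED USAGE 'data';

Omitting ALL NODES SHARED on a MapR file system causes the statement to be rejected.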
If you use HDFS storage locations, the HDFS data must be available when you start Vertica. Your HDFS cluster must be operational, and the ROS files must be present. If you moved data files, or they are corrupted, or your HDFS cluster is not responsive, Vertica cannot start.
CREATE LOCATION 'path' [NODE 'nodename' | ALL NODES] [SHARED] [USAGE 'use-type'] [LABEL 'label'] [LIMIT 'max-size']
Specifies where to store this location's data. The type of file system on which the location is based determines the path format. See Location Paths below.
Specifies the node or nodes on which the storage location is defined, one of the following:
- NODE 'nodename': Creates the storage location on the specified node only.
- ALL NODES: Creates the storage location on every node. To create a single location that all nodes use, also specify SHARED.
If you omit this argument, Vertica creates the storage location on the initiator node only.
Indicates the location set by the path is shared (used by all nodes) rather than local to each node. For details, see Shared Versus Local Storage.
If path is set to S3 communal storage, the location is always shared.
The type of data the storage location can hold, where use-type is one of the following:
- DATA,TEMP (default): The location can store both persistent data and temporary files.
- DATA: The location stores only persistent data.
- TEMP: The location stores only temporary files that are produced during loads, reorganizations, and sorts.
- USER: Users with access privileges can store data in this location.
- DEPOT: The location stores the depot in Eon Mode. Valid only for local Linux paths.
A label for the storage location. You use this label later when assigning the storage location to data objects.
Maximum size of the storage location. You can only set this parameter for locations with the USAGE type of DEPOT.
The max-size value is an integer followed by a size unit. Valid size units are kilobytes (K), megabytes (M), gigabytes (G), and terabytes (T).
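For example, the following statement creates a depot location capped at 100 gigabytes on every node (the path and size are illustrative):

=> CREATE LOCATION '/home/dbadmin/depot' ALL NODES USAGE 'DEPOT' LABEL 'depotloc' LIMIT '100G';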
File System Permissions
The Vertica process must have read and write permissions to the location where data will be stored. Each type of file system has its own requirements:
|Linux||Database superuser account (usually named dbadmin) must have full read and write access to the directory in the path argument.|
|HDFS without Kerberos||Hadoop user whose username matches the Vertica database administrator username (usually dbadmin). This Hadoop user must have read and write access to the HDFS directory specified in the path argument.|
|HDFS with Kerberos||Hadoop user whose username matches the principal in the keytab file on each Vertica node. This is not the same as the database administrator username. This Hadoop user must have read and write access to the HDFS directory specified in the path argument.|
Location path formats vary, depending on the storage location's file system:
|Linux||Absolute path to the directory where Vertica can write the storage location's data.|
|HDFS||URL in the hdfs scheme, such as hdfs://hadoopNS/path.|
|S3||URLs with the following form: s3://bucket/path|
|Google Cloud Storage||URLs with the following form: gs://bucket/path|
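As a sketch of the S3 path format, the following statement creates a user-accessible location (the bucket name and prefix are hypothetical):

=> CREATE LOCATION 's3://mybucket/vertica/userdata' ALL NODES SHARED USAGE 'user' LABEL 's3user';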
Create a storage location in the local Linux file system for temporary data storage.
=> CREATE LOCATION '/home/dbadmin/testloc' USAGE 'TEMP' LABEL 'tempfiles';
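To confirm the new location, you can query the STORAGE_LOCATIONS system table (shown here as a sketch; column selection is illustrative):

=> SELECT node_name, location_path, location_usage, location_label
   FROM v_monitor.storage_locations;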
Create a shared storage location on HDFS for cold data. The HDFS cluster does not use Kerberos.
=> CREATE LOCATION 'hdfs://hadoopNS/vertica/colddata' ALL NODES SHARED USAGE 'data' LABEL 'coldstorage';
Create the same storage location, but on a Hadoop cluster that uses Kerberos. Note the output that reports the principal being used.
=> CREATE LOCATION 'hdfs://hadoopNS/vertica/colddata' ALL NODES SHARED USAGE 'data' LABEL 'coldstorage';
NOTICE 0: Performing HDFS operations using kerberos principal [vertica/hadoop.example.com]
CREATE LOCATION
Create a location for user data, grant access to it, and use it to create an external table.
=> CREATE LOCATION '/tmp' ALL NODES USAGE 'user';
CREATE LOCATION
=> GRANT ALL ON LOCATION '/tmp' to Bob;
GRANT PRIVILEGE
=> CREATE EXTERNAL TABLE ext1 (x integer) AS COPY FROM '/tmp/data/ext1.dat' DELIMITER ',';
CREATE TABLE
For an example of a USER storage location using S3, see Browsing S3 Data Using External Tables in Using Eon Mode.
- Managing Storage Locations
- Using HDFS Storage Locations