EXPORT TO PARQUET

Exports a table, columns from a table, or query results to files in the Parquet format. You can use an OVER() clause to partition the data before export. Partitioning data can improve query performance by enabling partition pruning; see Improving Query Performance.

You can export data stored in Vertica in ROS format and data from external tables.

EXPORT TO PARQUET returns the number of rows written.

During an export to HDFS or an NFS mount point, Vertica writes files to a temporary directory in the same location as the destination and renames the directory when the export is complete. Do not attempt to use the files in the temporary directory. During an export to S3, Vertica writes files directly to the destination path, so you must wait for the export to finish before reading the files. (For more about special S3 considerations, see Exporting to S3.)

Syntax

EXPORT TO PARQUET ( directory = 'path'
		    [, compression = 'compression-type' ]
		    [, rowGroupSizeMB = size ]
		    [, fileSizeMB = file-size-hint ] )
	[ OVER ( over-clause ) ]
	AS SELECT query-expression;

Parameters

path

The destination directory for the Parquet files. The directory must not exist, and the current user must have permission to write to it. The destination can be in HDFS, S3, or an NFS mount point on the local file system.

compression-type

Column compression type, one of Snappy or Uncompressed. The default is Snappy.

size

The uncompressed size of exported row groups, in MB (integer). The maximum value is 512 and the default is 64. The row groups in the exported files are smaller because Parquet files are compressed on write. For best performance when exporting to HDFS, set size to be smaller than the HDFS block size.

file-size-hint

The maximum size, in MB (integer), of a single output Parquet file. By default, Vertica limits exports to a file size of 10GB. This value is a hint, not a hard limit. A value of 0 means there is no limit.
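
For illustration, the following sketch sets all three optional parameters in one statement; the destination path and table name are placeholders:

=> EXPORT TO PARQUET(directory = 'hdfs:///user1/data_tuned',
                     compression = 'Uncompressed',
                     rowGroupSizeMB = 128,
                     fileSizeMB = 1024)
   AS SELECT * FROM public.T1;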

Arguments

over-clause

Specifies how to partition table data using PARTITION BY. Within partitions, you can sort using ORDER BY. See SQL Analytics. This clause, if present, can contain only column references, not expressions.

If you partition data, Vertica creates a Hive-style partition directory structure. See Using Partition Columns for a description of the structure.
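
For example, partitioning on a hypothetical region column yields one key=value subdirectory per partition value, with file names following the pattern described under Output Files:

/user1/data/region=East/[8-character hash]-[node name]-[thread_id].parquet
/user1/data/region=West/[8-character hash]-[node name]-[thread_id].parquet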

If you omit this clause, Vertica optimizes for maximum parallelism.

query-expression

Specifies the data to be exported. See SELECT for the syntax.

You must provide an alias column label for selected column targets that are expressions.

Privileges

Data Types

EXPORT TO PARQUET places the following restrictions on data types:

Decimal precision must be <= 38.

Vertica does not convert TIMESTAMP values to UTC. To avoid problems arising from time zones, use TIMESTAMPTZ instead of TIMESTAMP.
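
For example, the following sketch casts a hypothetical TIMESTAMP column to TIMESTAMPTZ during export; because the target is an expression, it requires an alias:

=> EXPORT TO PARQUET(directory = 'hdfs:///user1/events')
   AS SELECT event_id, event_ts::TIMESTAMPTZ AS event_tstz
   FROM public.events;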

The exported Hive types might not be identical to the Vertica types. For example, a Vertica INT is exported as a Hive BIGINT. When defining Hive external tables to read exported data, you might have to adjust column definitions.
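
As an illustration, a Hive external table reading exported data from a Vertica table with an INT column might declare that column as BIGINT; the table name, columns, and location here are hypothetical:

CREATE EXTERNAL TABLE sales_export (
    sale_id BIGINT,        -- Vertica INT arrives as Hive BIGINT
    price   DECIMAL(10,2),
    region  STRING
)
STORED AS PARQUET
LOCATION '/user1/data';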

This operation exports raw Flex columns as binary data.

Output Files

You must use a shared file location for output. If you use a directory in the local file system, it must be an NFS-mounted directory.

For output to the local file system, you must have a USER storage location.

Output file names follow the pattern: [8-character hash]-[node name]-[thread_id].parquet.

When exporting to S3, the maximum size of all output is 5TB. You might have to divide large exports into more than one piece, as in the sketch below.
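
One way to divide an export is to run separate statements with complementary WHERE clauses, each writing to its own new directory; the table and column here are hypothetical:

=> EXPORT TO PARQUET(directory = 's3://DataLake/sales_2019H1')
   AS SELECT * FROM public.sales WHERE date < '2019-07-01';
=> EXPORT TO PARQUET(directory = 's3://DataLake/sales_2019H2')
   AS SELECT * FROM public.sales WHERE date >= '2019-07-01';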

Vertica does not support simultaneous exports to the same directory in HDFS or S3. The results are undefined.

When exporting to the local file system, the permission mode is 700 for directories and 600 for files. You cannot override these values.

Parquet files exported to a local file system by any Vertica user are owned by the Vertica superuser. Parquet files exported to HDFS or S3 are owned by the Vertica user who exported the data.

Examples

The following example demonstrates exporting all columns from the T1 table in the public schema, using Snappy compression (the default).

=> EXPORT TO PARQUET(directory = 'hdfs:///user1/data') 
  AS SELECT * FROM public.T1;

 Rows Exported
---------------
         87436
(1 row)

The following example demonstrates exporting the results of a query that joins two tables.

=> EXPORT TO PARQUET(directory='s3://DataLake/sales_by_region') 
   AS SELECT sale.price, sale.date, store.region 
   FROM public.sales sale 
   JOIN public.vendor store ON sale.distribID = store.ID;
 Rows Exported
---------------
         23301
(1 row)

The following example demonstrates partitioning and exporting data. EXPORT TO PARQUET first partitions the data on column b and then, within each partition, sorts by column d.

=> EXPORT TO PARQUET(directory = 's3://DataLake/user2/data') 
  OVER(PARTITION BY b ORDER BY d) AS SELECT b, d FROM public.T2;

 Rows Exported
---------------
        120931
(1 row)

The following example uses an alias column label for a selected column target that is an expression.

=> EXPORT TO PARQUET(directory = 'hdfs:///user3/data') 
  OVER(ORDER BY col1) AS SELECT col1 + col1 AS A, col2
  FROM public.T3;
 Rows Exported
---------------
         14336
(1 row)