Loading JSON Data

Use the FJSONPARSER to load data in JSON format. This parser supports both columnar and Flex tables.

The schema for JSON data is the set of property names in the property:value pairs. When you load JSON data into a columnar table, the property names in the data must match the column names in the table. You do not need to load all of the columns in the data.
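
For example, a record like the following one could be loaded into a columnar table whose column names match its property names. The table here is a hypothetical sketch; the color property has no matching column and is simply not loaded:

{"id": 1, "name": "Fuji", "color": "red"}

=> CREATE TABLE apples(id INT, name VARCHAR);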

If you load JSON data into a Flex table, Vertica loads all data into the __raw__ (VMap) column, including any complex types found in the data. If you load JSON data into a columnar table, you specify individual columns, but you can load a JSON complex type into a Vertica flexible complex type without fully specifying its schema. You define such columns in the table as LONG VARBINARY, and you can use Flex functions to extract values from them. See Using Flexible Complex Types.

In the COPY statement, use the PARSER parameter to specify the JSON parser as in the following example:

=> COPY sales FROM '/data/2019.json' PARSER fjsonparser();

This parser has several optional parameters, some of which are specific to use with Flex tables and flexible complex types. See FJSONPARSER (Parser).

Before loading JSON data, consider using a tool such as JSONLint to verify that the data is valid.

Using flatten_maps and flatten_arrays Parameters

The JSON parser uses the flatten_maps and flatten_arrays parameters to control how the parser handles the data it is loading. Here are the default settings for these two parameters: 

 Parameter      | Default                       | To change the default
----------------+-------------------------------+-----------------------
 flatten_maps   | TRUE: Flatten all maps.       | flatten_maps=FALSE
 flatten_arrays | FALSE: Do not flatten arrays. | flatten_arrays=TRUE
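
For example, to change both defaults in a single load (the table and file names here are illustrative):

=> COPY sample_table FROM '/data/sample.json' PARSER fjsonparser(flatten_maps=FALSE, flatten_arrays=TRUE);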

For JSON maps, the parser flattens all submaps, separating the levels with a period (.). Consider the following input data with a submap:

 { grade: { level: 4 } }  

The default parser behavior results in the following map:

{ "grade.level" -> "4" } 

To use the bracket operators ([]) to access deeply nested JSON in VMap data, including when loading complex columns into columnar tables using LONG VARBINARY, you must load the data with flatten_maps=FALSE, as described in Querying Nested Data.

For JSON arrays, the parser maintains the array. Consider the following input data containing a 2-element array, with values 1 and 2:

{ grade: [ 1, 2 ] }

The default parser behavior results in the following array:

{ "grade": { "0" -> "1", "1" -> "2" } }

The flatten_maps and flatten_arrays parameters are recursive and affect all of the data being loaded.
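
For example, if you load the grade array shown above with flatten_arrays=TRUE, the array indices are expected to be folded into the key names with the same period separator, so the resulting map would look like this sketch:

{ "grade.0" -> "1", "grade.1" -> "2" }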

Flexible Types

The following example demonstrates the use of flexible complex types. Consider a JSON file containing the following data:

{
    "name" : "Bob's pizzeria",
    "cuisine" : "Italian",
    "location_city" : ["Cambridge", "Pittsburgh"],
    "menu" : [{"item" : "cheese pizza", "price" : "$8.25"},
              {"item" : "spinach pizza", "price" : "$10.50"}]
}
{
    "name" : "Bakersfield Tacos",
    "cuisine" : "Mexican",
    "location_city" : ["Pittsburgh"],
    "menu" : [{"item" : "veggie taco", "price" : "$9.95"},
              {"item" : "steak taco", "price" : "$10.95"}]
}

Create a table, specifying the location_city and menu columns (the complex types) as LONG VARBINARY:

=> CREATE TABLE restaurant(name VARCHAR, cuisine VARCHAR, location_city LONG VARBINARY, menu LONG VARBINARY);

Load the data using the JSON parser:

=> COPY restaurant FROM '/data/restaurant.json' PARSER FJSONPARSER (flatten_maps=false, flatten_arrays=false);

Because we loaded the complex columns as LONG VARBINARY, directly querying the columns produces binary results:

=> SELECT * FROM restaurant;
       name        | cuisine |                                                                     location_city                                                                     |                                                                                                                                                                                                                         menu                                                                                                                                                                                                                         
-------------------+---------+-------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Bob's pizzeria    | Italian | \001\000\000\000\037\000\000\000\002\000\000\000\014\000\000\000\025\000\000\000CambridgePittsburgh\002\000\000\000\014\000\000\000\015\000\000\00001 | \001\000\000\000\202\000\000\000\002\000\000\000\014\000\000\000F\000\000\000\001\000\000\000\035\000\000\000\002\000\000\000\014\000\000\000\030\000\000\000cheese pizza$8.25\002\000\000\000\014\000\000\000\020\000\000\000itemprice\001\000\000\000\037\000\000\000\002\000\000\000\014\000\000\000\031\000\000\000spinach pizza$10.50\002\000\000\000\014\000\000\000\020\000\000\000itemprice\002\000\000\000\014\000\000\000\015\000\000\00001
 Bakersfield Tacos | Mexican | \001\000\000\000\022\000\000\000\001\000\000\000\010\000\000\000Pittsburgh\001\000\000\000\010\000\000\0000                                           | \001\000\000\000~\000\000\000\002\000\000\000\014\000\000\000E\000\000\000\001\000\000\000\034\000\000\000\002\000\000\000\014\000\000\000\027\000\000\000veggie taco$9.95\002\000\000\000\014\000\000\000\020\000\000\000itemprice\001\000\000\000\034\000\000\000\002\000\000\000\014\000\000\000\026\000\000\000steak taco$10.95\002\000\000\000\014\000\000\000\020\000\000\000itemprice\002\000\000\000\014\000\000\000\015\000\000\00001
(2 rows)

However, you can use Flex functions and direct access (through indices) to return readable values:

=> SELECT MAPTOSTRING(location_city), MAPTOSTRING(menu) FROM restaurant;
                   maptostring                   |             maptostring       
-------------------------------------------------+--------------------------------------------------------
 {
    "0": "Cambridge",
    "1": "Pittsburgh"
} | {
    "0": {
        "item": "cheese pizza",
        "price": "$8.25"
    },
    "1": {
        "item": "spinach pizza",
        "price": "$10.50"
    }
}
 {
    "0": "Pittsburgh"
}                       | {
    "0": {
        "item": "veggie taco",
        "price": "$9.95"
    },
    "1": {
        "item": "steak taco",
        "price": "$10.95"
    }
}
(2 rows)
 
=> SELECT menu['0']['item'] FROM restaurant;
     menu    
--------------
 cheese pizza
 veggie taco
(2 rows)

The COPY statement shown in this example sets flatten_maps to false. Without that change, the keys for the complex columns would not work as expected, because record and array keys would be "flattened" at the top level. Querying menu['0']['item'] would produce no results. Instead, you would have to query the flattened values, as in the following example:

=> SELECT menu['0.item'] FROM restaurant;
     menu
--------------
 veggie taco
 cheese pizza
(2 rows)

Flattening directives apply to the entire COPY statement. You cannot flatten some columns and not others, or prevent flattening values in a complex column that is itself within a flattened flex table.

If a key (field name) is missing in the JSON data, by default the JSON parser loads it anyway (as an empty string). You can use the omit_empty_keys and reject_on_empty_key parameters to change this behavior, as described in Rejecting or Omitting Empty Rows.

Loading from a Specific Start Point

You can use the fjsonparser start_point parameter to load JSON data beginning at a specific key, rather than at the beginning of a file. Data is parsed from after the start_point key until the end of the file, or to the end of the first start_point's value. The fjsonparser ignores any subsequent instance of the start_point, even if that key appears multiple times in the input file. If the input data contains only one copy of the start_point key, and that value is a list of JSON elements, the parser loads each element in the list as a row.

This section uses the following sample JSON data, saved to a file (alphanums.json):

 { "A": { "B": { "C": [ { "d": 1, "e": 2, "f": 3 }, { "g": 4, "h": 5, "i": 6 }, 
{ "j": 7, "k": 8, "l": 9 } ] } } }
  1. Create a flex table, start_json:
     => CREATE FLEX TABLE start_json();
     CREATE TABLE

  2. Load alphanums.json into start_json using the fjsonparser without any parameters:
     => COPY start_json FROM '/home/dbadmin/data/flex/alphanums.json' PARSER fjsonparser();
      Rows Loaded
     -------------
                1
     (1 row)

  3. Use maptostring to see the results of loading all of alphanums.json:
     => SELECT maptostring(__raw__) FROM start_json;
                             maptostring
     -------------------------------------------------------------------------------
      {
        "A.B.C" : {
           "0.d" : "1",
           "0.e" : "2",
           "0.f" : "3",
           "1.g" : "4",
           "1.h" : "5",
           "1.i" : "6",
           "2.j" : "7",
           "2.k" : "8",
           "2.l" : "9"
        }
     }

     (1 row)

  4. Truncate start_json and load alphanums.json with the start_point parameter:
     => TRUNCATE TABLE start_json;
     TRUNCATE TABLE
     => COPY start_json FROM '/home/dbadmin/data/flex/alphanums.json' PARSER
     -> fjsonparser(start_point='B');
      Rows Loaded
     -------------
                1
     (1 row)

  5. Call maptostring again to compare the results of loading alphanums.json from start_point='B':
     => SELECT maptostring(__raw__) FROM start_json;
                              maptostring
     --------------------------------------------------------------------------------
      {
        "C" : {
           "0.d" : "1",
           "0.e" : "2",
           "0.f" : "3",
           "1.g" : "4",
           "1.h" : "5",
           "1.i" : "6",
           "2.j" : "7",
           "2.k" : "8",
           "2.l" : "9"
        }
     }

     (1 row)


Parsing From a Start Point Occurrence

If a start_point value occurs in multiple locations in your JSON data, you can use the start_point_occurrence integer parameter to specify the occurrence at which to start parsing. By defining start_point_occurrence, fjsonparser begins at the nth occurrence of start_point.
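
For example, the following sketch starts parsing at the second occurrence of the key 'B' in a hypothetical file that contains that key more than once:

=> COPY start_json FROM '/home/dbadmin/data/flex/multiple_b.json' PARSER fjsonparser(start_point='B', start_point_occurrence=2);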

Controlling Column Name Separators

By default, when loading into a flex table fjsonparser produces column names by concatenating JSON field names with a period (.). You can change the default separator by specifying a different character with the key_separator parameter.
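
For example, the following sketch uses an underscore instead of a period, so that nested keys such as grade.level in the earlier example would instead be named grade_level (the table and file names are illustrative):

=> COPY sample_table FROM '/data/sample.json' PARSER fjsonparser(key_separator='_');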

Handling Special Characters

Some input JSON data can have special characters in field names. You can replace these characters by setting the suppress_nonalphanumeric_key_chars parameter to TRUE. With this setting, all non-alphanumeric characters in key names are converted to an underscore (_) character.
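
For example, the following sketch enables this replacement (the table and file names are illustrative):

=> COPY sample_table FROM '/data/sample.json' PARSER fjsonparser(suppress_nonalphanumeric_key_chars=true);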

Dealing with Invalid JSON Records

If your JSON data is not perfectly formatted, your load may fail due to invalid records. If your JSON records are consistently delimited by a character such as a line break, you can use the RECORD_TERMINATOR parameter to skip those invalid records. With a record terminator set, the FJSONPARSER skips over invalid records and continues parsing the rest of the data.

If your records are not consistently delimited, you can use the COPY parameter ERROR TOLERANCE instead. ERROR TOLERANCE skips entire source files that contain invalid JSON records, while RECORD_TERMINATOR skips individual malformed JSON records. You can use both ERROR TOLERANCE and RECORD_TERMINATOR in the same statement, but if your records are consistently delimited, RECORD_TERMINATOR alone is usually sufficient.
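
The numbered example that follows demonstrates RECORD_TERMINATOR on its own. As a sketch, a load that combines both options might look like the following (the file path is illustrative):

=> COPY fruits FROM '/data/fruits.json' PARSER FJSONPARSER(RECORD_TERMINATOR=E'\n') ERROR TOLERANCE;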

  1. Create a flex table named "fruits".
    => CREATE FLEX TABLE fruits();
    CREATE TABLE
  2. Use the FJSONPARSER and specify the RECORD_TERMINATOR parameter with a value of E'\n' (a newline). Insert records, including an invalid record.
    => COPY fruits FROM STDIN PARSER FJSONPARSER(RECORD_TERMINATOR=E'\n');
    Enter data to be copied followed by a newline.
    End with a backslash and a period on a line by itself
    >> {"name": "orange", "type": "fruit", "color": "orange", "rating": 5 }
    >> {"name": "apple", "type": "fruit", "color": "green" }
    >> {"name": "blueberry", "type": "fruit", "color": "blue", "rating": 10 }
    >> "type": "fruit", "rating": 7 }
    >> {"name": "banana", "type" : "fruit", "color": "yellow", "rating": 3 }
    >> \.
    
  3. View the flex table using MAPTOSTRING to confirm that the invalid record was skipped while the rest of the records were successfully loaded.
    => SELECT MAPTOSTRING(__raw__) FROM fruits;
                                        maptostring
    --------------------------------------------------------------------------------------------
     {
        "color" : "orange",
        "name" : "orange",
        "rating" : "5",
        "type" : "fruit"
     }
     {
        "color" : "green",
        "name" : "apple",
        "type" : "fruit"
     }
     {
        "color" : "blue",
        "name" : "blueberry",
        "rating" : "10",
        "type" : "fruit"
     }
     {
        "color" : "yellow",
        "name" : "banana",
        "rating" : "3",
        "type" : "fruit"
     }
    (4 rows)

Rejecting Duplicate Values

You can reject duplicate values by using the reject_on_duplicate=true option with the fjsonparser. The next example uses this option while loading data and then displays the specified exception and rejected data files. Saving rejected data to a table, rather than a file, includes both the data and its exception.

=> CREATE FLEX TABLE json_dupes();
CREATE TABLE
=> COPY json_dupes FROM stdin PARSER fjsonparser(reject_on_duplicate=true) 
exceptions '/home/dbadmin/load_errors/json_e.out' 
rejected data '/home/dbadmin/load_errors/json_r.out';
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> {"a":"1","a":"2","b":"3"}
>> \.
=>  \!cat /home/dbadmin/load_errors/json_e.out
COPY: Input record 1 has been rejected (Rejected by user-defined parser).  
Please see /home/dbadmin/load_errors/json_r.out, record 1 for the rejected record.
COPY: Loaded 0 rows, rejected 1 rows.

Rejecting Data on Materialized Column Type Errors

Both the fjsonparser and fdelimitedparser parsers have a Boolean parameter, reject_on_materialized_type_error. Setting this parameter to true causes the parser to reject a row if the input data both:

  • Includes keys matching an existing materialized column
  • Has a key value that cannot be coerced into the materialized column's data type

The following examples illustrate setting this parameter.

  1. Create a flex table, reject_true_false, with two real columns:
     => CREATE FLEX TABLE reject_true_false(one VARCHAR, two INT);
     CREATE TABLE

  2. Load JSON data into the table (from STDIN), using the fjsonparser with reject_on_materialized_type_error=false. While false is the default value, the following example specifies it explicitly for illustration:
     => COPY reject_true_false FROM stdin PARSER
     -> fjsonparser(reject_on_materialized_type_error=false);
     Enter data to be copied followed by a newline.
     End with a backslash and a period on a line by itself.
     >> {"one": 1, "two": 2}
     >> {"one": "one", "two": "two"}
     >> {"one": "one", "two": 2}
     >> \.

  3. Invoke maptostring to display the table values after loading data:
     => SELECT maptostring(__raw__), one, two FROM reject_true_false;
                   maptostring              | one | two
     ---------------------------------------+-----+-----
      {
        "one" : "one",
        "two" : "2"
     }
      | one |   2
      {
        "one" : "1",
        "two" : "2"
     }
        | 1   |   2
      {
        "one" : "one",
        "two" : "two"
     }
      | one |
     (3 rows)

  4. Truncate the table:
     => TRUNCATE TABLE reject_true_false;

  5. Reload the same data, but this time set reject_on_materialized_type_error=true:
     => COPY reject_true_false FROM stdin PARSER fjsonparser(reject_on_materialized_type_error=true);
     Enter data to be copied followed by a newline.
     End with a backslash and a period on a line by itself.
     >> {"one": 1, "two": 2}
     >> {"one": "one", "two": "two"}
     >> {"one": "one", "two": 2}
     >> \.

  6. Call maptostring to display the table contents. Only two rows were loaded, whereas the previous results had three rows:
     => SELECT maptostring(__raw__), one, two FROM reject_true_false;
                   maptostring              | one | two
     ---------------------------------------+-----+-----
      {
        "one" : "1",
        "two" : "2"
     }
        | 1   |   2
      {
        "one" : "one",
        "two" : "2"
     }
      | one |   2
     (2 rows)

Rejecting or Omitting Empty Rows

Valid JSON files can include empty key and value pairs, such as this one:

{"": 1 }

Such rows are invalid for SQL. To prevent this situation, you can control the behavior for empty rows, either rejecting or omitting them, by using two Boolean parameters of the FJSONPARSER or FDELIMITEDPARSER parsers:

  • reject_on_empty_key
  • omit_empty_keys
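
For example, the following sketches either reject rows that contain an empty key, or silently omit the empty keys instead (the table and file names are illustrative):

=> COPY sample_table FROM '/data/sample.json' PARSER fjsonparser(reject_on_empty_key=true);
=> COPY sample_table FROM '/data/sample.json' PARSER fjsonparser(omit_empty_keys=true);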
