PostgreSQL connector
Zipstack Cloud features a powerful SQL query engine on top of many types of connectors: native connectors from Trino, custom connectors, and connectors from the open source Airbyte project. The PostgreSQL connector described here is one of Trino's native connectors, and parts of this documentation have been adapted from the connector documentation in the open source Trino project.
Please reach out to [email protected] if you need PostgreSQL with keystore-based authentication. This requires provisioning Zipstack Cloud with extra modules and properties.
The PostgreSQL connector allows querying and creating tables in an external PostgreSQL database. This can be used to join data between different systems like PostgreSQL and Hive, or between different PostgreSQL instances.
Requirements
To connect to PostgreSQL, you need:
PostgreSQL 10.x or higher.
Network access from Zipstack Cloud to PostgreSQL. Port 5432 is the default port.
Configuration
The connector can query a database on a PostgreSQL server. Create a data source with the following minimum configuration, replacing the connection properties as appropriate for your setup:

```properties
connection-url=jdbc:postgresql://example.net:5432/database
connection-user=root
connection-password=secret
```
The connection-url defines the connection information and parameters to pass to the PostgreSQL JDBC driver. The parameters for the URL are available in the PostgreSQL JDBC driver documentation. Some parameters can have adverse effects on the connector behavior or not work with the connector.

The connection-user and connection-password are typically required and determine the user credentials for the connection, often a service user. You can use secrets to avoid storing actual values in the catalog properties files.
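For example, assuming your Zipstack Cloud deployment supports Trino-style environment variable secrets (an assumption worth confirming with support), the password can reference an environment variable instead of appearing as a literal value:

```properties
connection-url=jdbc:postgresql://example.net:5432/database
connection-user=root
# ${ENV:...} is Trino's secrets syntax; POSTGRES_PASSWORD is a hypothetical
# environment variable that must be set on the cluster
connection-password=${ENV:POSTGRES_PASSWORD}
```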
Connection security
If you have TLS configured with a globally-trusted certificate installed
on your data source, you can enable TLS between your cluster and the
data source by appending a parameter to the JDBC connection string set
in the connection-url catalog configuration property.
For example, with version 42 of the PostgreSQL JDBC driver, enable TLS by appending the ssl=true parameter to the connection-url configuration property:

```properties
connection-url=jdbc:postgresql://example.net:5432/database?ssl=true
```

For more information on TLS configuration options, see the PostgreSQL JDBC driver documentation.
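The driver also supports stricter verification modes. A minimal sketch, assuming a CA certificate is available on the Zipstack Cloud host at the hypothetical path shown:

```properties
# verify-full validates both the certificate chain and the server host name
connection-url=jdbc:postgresql://example.net:5432/database?ssl=true&sslmode=verify-full&sslrootcert=/etc/ssl/certs/postgresql-ca.pem
```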
Data source authentication
The connector can provide credentials for the data source connection in multiple ways:
- inline, in the connector configuration file
- in a separate properties file
- in a key store file
- as extra credentials set when connecting to Trino

You can use secrets to avoid storing sensitive values in the catalog properties files.
The following table describes configuration properties for connection credentials:
| Property name | Description |
|---|---|
| credential-provider.type | Type of the credential provider. Must be one of INLINE, FILE, or KEYSTORE; defaults to INLINE. |
| connection-user | Connection user name. |
| connection-password | Connection password. |
| user-credential-name | Name of the extra credentials property whose value to use as the user name. See extraCredentials in Parameter reference. |
| password-credential-name | Name of the extra credentials property whose value to use as the password. |
| connection-credential-file | Location of the properties file where credentials are present. It must contain the connection-user and connection-password properties. |
| keystore-file-path | The location of the Java keystore file from which to read credentials. |
| keystore-type | File format of the keystore file, for example JKS or PEM. |
| keystore-password | Password for the keystore. |
| keystore-user-credential-name | Name of the keystore entity to use as the user name. |
| keystore-user-credential-password | Password for the user name keystore entity. |
| keystore-password-credential-name | Name of the keystore entity to use as the password. |
| keystore-password-credential-password | Password for the password keystore entity. |
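For example, a sketch of the FILE credential provider, using a hypothetical path for the separate properties file:

```properties
credential-provider.type=FILE
# This file must contain the connection-user and connection-password properties
connection-credential-file=/etc/zipstack/postgresql-credentials.properties
```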
Multiple PostgreSQL databases or servers
The PostgreSQL connector can only access a single database within a PostgreSQL server. Thus, if you have multiple PostgreSQL databases, or want to connect to multiple PostgreSQL servers, you must configure multiple data sources.
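For instance, to expose two databases from the same server, create two data sources whose connection-url values differ only in the database name (sales and analytics are hypothetical names):

```properties
# Data source 1
connection-url=jdbc:postgresql://example.net:5432/sales

# Data source 2
connection-url=jdbc:postgresql://example.net:5432/analytics
```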
General configuration properties
The following table describes general catalog configuration properties for the connector:
| Property name | Description | Default value |
|---|---|---|
| case-insensitive-name-matching | Support case insensitive schema and table names. | false |
| case-insensitive-name-matching.cache-ttl | Duration for which the mapping of case insensitive schema and table names is cached. This value should be a duration. | 1m |
| case-insensitive-name-matching.config-file | Path to a name mapping configuration file in JSON format that allows Trino to disambiguate between schemas and tables with similar names in different cases. | null |
| case-insensitive-name-matching.config-file.refresh-period | Frequency with which Trino checks the name matching configuration file for changes. This value should be a duration. | (refresh disabled) |
| metadata.cache-ttl | The duration for which metadata, including table and column statistics, is cached. | 0s (caching disabled) |
| metadata.cache-missing | Cache the fact that metadata, including table and column statistics, is not available. | false |
| metadata.cache-maximum-size | Maximum number of objects stored in the metadata cache. | 10000 |
| write.batch-size | Maximum number of statements in a batched execution. Do not change this setting from the default. Non-default values may negatively impact performance. | 1000 |
| dynamic-filtering.enabled | Push down dynamic filters into JDBC queries. | true |
| dynamic-filtering.wait-timeout | Maximum duration for which Trino waits for dynamic filters to be collected from the build side of joins before starting a JDBC query. A large timeout can potentially result in more detailed dynamic filters, but can also increase latency for some queries. | 20s |
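As an illustration, a catalog snippet that turns on case insensitive matching and metadata caching; the durations shown are arbitrary examples, not recommendations:

```properties
case-insensitive-name-matching=true
metadata.cache-ttl=10m
metadata.cache-missing=true
```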
Domain compaction threshold
Pushing down a large list of predicates to the data source can compromise performance. Trino compacts large predicates into a simpler range predicate by default to ensure a balance between performance and predicate pushdown. If necessary, the threshold for this compaction can be increased to improve performance when the data source is capable of taking advantage of large predicates. Increasing this threshold may improve pushdown of large dynamic filters.

The domain-compaction-threshold catalog configuration property or the domain_compaction_threshold catalog session property can be used to adjust the default value of 32 for this threshold.
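For example, to raise the threshold for a single session, assuming the catalog is named example as in the rest of this page (512 is an arbitrary illustrative value):

```sql
SET SESSION example.domain_compaction_threshold = 512;
```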
Procedures
system.flush_metadata_cache()

Flush JDBC metadata caches. For example, the following system call flushes the metadata caches for all schemas in the example catalog:

```sql
USE example.example_schema;
CALL system.flush_metadata_cache();
```
Case insensitive matching
When case-insensitive-name-matching is set to true, Trino is able to
query non-lowercase schemas and tables by maintaining a mapping of the
lowercase name to the actual name in the remote system. However, if two
schemas and/or tables have names that differ only in case (such as
\"customers\" and \"Customers\") then Trino fails to query them due to
ambiguity.
In these cases, use the case-insensitive-name-matching.config-file
catalog configuration property to specify a configuration file that maps
these remote schemas/tables to their respective Trino schemas/tables:
```json
{
  "schemas": [
    {
      "remoteSchema": "CaseSensitiveName",
      "mapping": "case_insensitive_1"
    },
    {
      "remoteSchema": "cASEsENSITIVEnAME",
      "mapping": "case_insensitive_2"
    }
  ],
  "tables": [
    {
      "remoteSchema": "CaseSensitiveName",
      "remoteTable": "tablex",
      "mapping": "table_1"
    },
    {
      "remoteSchema": "CaseSensitiveName",
      "remoteTable": "TABLEX",
      "mapping": "table_2"
    }
  ]
}
```
Queries against one of the tables or schemas defined in the mapping
attributes are run against the corresponding remote entity. For example,
a query against tables in the case_insensitive_1 schema is forwarded
to the CaseSensitiveName schema and a query against case_insensitive_2
is forwarded to the cASEsENSITIVEnAME schema.
At the table mapping level, a query on case_insensitive_1.table_1 as
configured above is forwarded to CaseSensitiveName.tablex, and a query
on case_insensitive_1.table_2 is forwarded to
CaseSensitiveName.TABLEX.
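For example, given the mapping above, a query such as the following is forwarded to the remote table CaseSensitiveName.tablex (assuming the catalog is named example):

```sql
SELECT * FROM example.case_insensitive_1.table_1;
```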
By default, when a change is made to the mapping configuration file, Trino must be restarted to load the changes. Optionally, you can set the case-insensitive-name-matching.config-file.refresh-period to have Trino refresh the properties without requiring a restart:

```properties
case-insensitive-name-matching.config-file.refresh-period=30s
```
Non-transactional INSERT
The connector supports adding rows using INSERT statements. By default, data insertion is performed by writing data to a temporary table. You can skip this step to improve performance and write directly to the target table. Set the insert.non-transactional-insert.enabled catalog property or the corresponding non_transactional_insert catalog session property to true.
Note that with this property enabled, data can be corrupted in rare cases where exceptions occur during the insert operation. With transactions disabled, no rollback can be performed.
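A minimal sketch of enabling this per session, assuming the catalog is named example:

```sql
SET SESSION example.non_transactional_insert = true;
```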
Type mapping
Because Trino and PostgreSQL each support types that the other does not, this connector modifies some types when reading or writing data. Data types may not map the same way in both directions between Trino and the data source. Refer to the following sections for type mapping in each direction.
PostgreSQL type to Trino type mapping
The connector maps PostgreSQL types to the corresponding Trino types following this table:
| PostgreSQL type | Trino type | Notes |
|---|---|---|
| BIT | BOOLEAN | |
| BOOLEAN | BOOLEAN | |
| SMALLINT | SMALLINT | |
| INTEGER | INTEGER | |
| BIGINT | BIGINT | |
| REAL | REAL | |
| DOUBLE | DOUBLE | |
| NUMERIC(p, s) | DECIMAL(p, s) | DECIMAL(p, s) is an alias of NUMERIC(p, s). See Decimal type handling for more information. |
| CHAR(n) | CHAR(n) | |
| VARCHAR(n) | VARCHAR(n) | |
| ENUM | VARCHAR | |
| BYTEA | VARBINARY | |
| DATE | DATE | |
| TIME(n) | TIME(n) | |
| TIMESTAMP(n) | TIMESTAMP(n) | |
| TIMESTAMPTZ(n) | TIMESTAMP(n) WITH TIME ZONE | |
| MONEY | VARCHAR | |
| UUID | UUID | |
| JSON | JSON | |
| JSONB | JSON | |
| HSTORE | MAP(VARCHAR, VARCHAR) | |
| ARRAY | Disabled, ARRAY, or JSON | See Array type handling for more information. |
No other types are supported.
Trino type to PostgreSQL type mapping
The connector maps Trino types to the corresponding PostgreSQL types following this table:
| Trino type | PostgreSQL type | Notes |
|---|---|---|
| BOOLEAN | BOOLEAN | |
| SMALLINT | SMALLINT | |
| TINYINT | SMALLINT | |
| INTEGER | INTEGER | |
| BIGINT | BIGINT | |
| DOUBLE | DOUBLE | |
| DECIMAL(p, s) | NUMERIC(p, s) | DECIMAL(p, s) is an alias of NUMERIC(p, s). See Decimal type handling for more information. |
| CHAR(n) | CHAR(n) | |
| VARCHAR(n) | VARCHAR(n) | |
| VARBINARY | BYTEA | |
| DATE | DATE | |
| TIME(n) | TIME(n) | |
| TIMESTAMP(n) | TIMESTAMP(n) | |
| TIMESTAMP(n) WITH TIME ZONE | TIMESTAMPTZ(n) | |
| UUID | UUID | |
| JSON | JSONB | |
| ARRAY | ARRAY | See Array type handling for more information. |
No other types are supported.
Decimal type handling
DECIMAL types with unspecified precision or scale are mapped to a Trino DECIMAL with a default precision of 38 and default scale of 0. The scale can be changed by setting the decimal-mapping configuration property or the decimal_mapping session property to allow_overflow. The scale of the resulting type is controlled via the decimal-default-scale configuration property or the decimal_default_scale session property. The precision is always 38.

By default, values that require rounding or truncation to fit will cause a failure at runtime. This behavior is controlled via the decimal-rounding-mode configuration property or the decimal_rounding_mode session property, which can be set to UNNECESSARY (the default), UP, DOWN, CEILING, FLOOR, HALF_UP, HALF_DOWN, or HALF_EVEN (see RoundingMode).
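As an illustration, the following catalog snippet maps NUMERIC columns with unspecified precision or scale to DECIMAL(38, 8) and rounds values that do not fit; the scale of 8 and the HALF_UP mode are arbitrary example choices:

```properties
decimal-mapping=allow_overflow
# Resulting type is DECIMAL(38, 8); values with more than 8 decimal digits are rounded
decimal-default-scale=8
decimal-rounding-mode=HALF_UP
```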
Array type handling
The PostgreSQL array implementation does not support fixed dimensions, whereas Trino supports only arrays with fixed dimensions. You can configure how the PostgreSQL connector handles arrays with the postgresql.array-mapping configuration property in your catalog file or the array_mapping session property. The following values are accepted for this property:

- DISABLED (default): array columns are skipped.
- AS_ARRAY: array columns are interpreted as the Trino ARRAY type, for array columns with fixed dimensions.
- AS_JSON: array columns are interpreted as the Trino JSON type, with no constraint on dimensions.
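For example, to interpret arrays as the Trino ARRAY type for a single session, assuming the catalog is named example:

```sql
SET SESSION example.array_mapping = 'AS_ARRAY';
```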
Type mapping configuration properties
The following properties can be used to configure how data types from the connected data source are mapped to Trino data types and how the metadata is cached in Trino.
| Property name | Description | Default value |
|---|---|---|
| unsupported-type-handling | Configure how unsupported column data types are handled: IGNORE (column is not accessible), or CONVERT_TO_VARCHAR (column is converted to unbounded VARCHAR). The respective catalog session property is unsupported_type_handling. | IGNORE |
| jdbc-types-mapped-to-varchar | Allow forced mapping of comma separated lists of data types to convert to unbounded VARCHAR. | |
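A sketch combining both properties; the type names passed to jdbc-types-mapped-to-varchar are hypothetical examples, so substitute the types reported as unsupported in your environment:

```properties
unsupported-type-handling=CONVERT_TO_VARCHAR
# Hypothetical: force PostgreSQL interval and point columns to unbounded VARCHAR
jdbc-types-mapped-to-varchar=interval,point
```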
Querying PostgreSQL
The PostgreSQL connector provides a schema for every PostgreSQL schema.
You can see the available PostgreSQL schemas by running SHOW SCHEMAS:

```sql
SHOW SCHEMAS FROM example;
```

If you have a PostgreSQL schema named web, you can view the tables in this schema by running SHOW TABLES:

```sql
SHOW TABLES FROM example.web;
```

You can see a list of the columns in the clicks table in the web schema using either of the following:

```sql
DESCRIBE example.web.clicks;
SHOW COLUMNS FROM example.web.clicks;
```

Finally, you can access the clicks table in the web schema:

```sql
SELECT * FROM example.web.clicks;
```
If you used a different name for your catalog properties file, use that
catalog name instead of example in the above examples.
SQL support
The connector provides read access and write access to data and metadata in PostgreSQL. In addition to the globally available and read operation statements, the connector supports the following features:

- INSERT
- DELETE
- TRUNCATE
- Schema and table management
SQL DELETE
If a WHERE clause is specified, the DELETE operation only works if
the predicate in the clause can be fully pushed down to the data source.
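For example, assuming the clicks table from earlier has a numeric visitor_id column (a hypothetical column for illustration), the following DELETE works because the predicate can be fully pushed down:

```sql
DELETE FROM example.web.clicks WHERE visitor_id = 42;
```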
ALTER TABLE
The connector does not support renaming tables across multiple schemas. For example, the following statement is supported:

```sql
ALTER TABLE example.schema_one.table_one RENAME TO example.schema_one.table_two
```

The following statement attempts to rename a table across schemas, and therefore is not supported:

```sql
ALTER TABLE example.schema_one.table_one RENAME TO example.schema_two.table_two
```
ALTER SCHEMA
The connector supports renaming a schema with the ALTER SCHEMA RENAME
statement. ALTER SCHEMA SET AUTHORIZATION is not supported.
Table functions
The connector provides specific table functions to access PostgreSQL.
query(varchar) -> table
The query function allows you to query the underlying database
directly. It requires syntax native to PostgreSQL, because the full
query is pushed down and processed in PostgreSQL. This can be useful for
accessing native features which are not available in Trino or for
improving query performance in situations where running a query natively
may be faster.
Note: Polymorphic table functions may not preserve the order of the query result. If the table function contains a query with an ORDER BY clause, the function result may not be ordered as expected.
As a simple example, query the example catalog and select an entire
table:
```sql
SELECT
  *
FROM
  TABLE(
    example.system.query(
      query => 'SELECT * FROM tpch.nation'
    )
  );
```
As a practical example, you can leverage frame exclusion from PostgreSQL when using window functions:

```sql
SELECT
  *
FROM
  TABLE(
    example.system.query(
      query => 'SELECT
        *,
        array_agg(week) OVER (
          ORDER BY week
          ROWS BETWEEN 2 PRECEDING AND 2 FOLLOWING
          EXCLUDE GROUP
        ) AS week,
        array_agg(week) OVER (
          ORDER BY day
          ROWS BETWEEN 2 PRECEDING AND 2 FOLLOWING
          EXCLUDE GROUP
        ) AS all
      FROM
        test.time_data'
    )
  );
```
Performance
The connector includes a number of performance improvements, detailed in the following sections.
Table statistics
The PostgreSQL connector can use table and column statistics for cost based optimizations, to improve query processing performance based on the actual data in the data source.

The statistics are collected by PostgreSQL and retrieved by the connector. To collect statistics for a table, execute the following statement in PostgreSQL:

```sql
ANALYZE table_schema.table_name;
```

Refer to the PostgreSQL documentation for additional ANALYZE options.
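Once statistics have been collected, you can verify what Trino retrieves with SHOW STATS, here using the illustrative clicks table from earlier:

```sql
SHOW STATS FOR example.web.clicks;
```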
Pushdown
The connector supports pushdown for a number of operations:

- Join pushdown
- Limit pushdown
- Top-N pushdown

Aggregate pushdown is supported for the following functions:

- avg()
- count()
- max()
- min()
- sum()
- stddev()
- stddev_pop()
- stddev_samp()
- variance()
- var_pop()
- var_samp()
- covar_pop()
- covar_samp()
- corr()
- regr_intercept()
- regr_slope()
The connector performs pushdown where performance may be improved, but in order to preserve correctness an operation may not be pushed down. When pushdown of an operation may result in better performance but risks correctness, the connector prioritizes correctness.
Cost-based join pushdown
The connector supports cost-based join pushdown to make intelligent decisions about whether to push down a join operation to the data source.
When cost-based join pushdown is enabled, the connector only pushes down join operations if the available table statistics suggest that doing so improves performance. Note that if no table statistics are available, join operation pushdown does not occur, to avoid a potential decrease in query performance.
The following table describes catalog configuration properties for join pushdown:
| Property name | Description | Default value |
|---|---|---|
| join-pushdown.enabled | Enable join pushdown. Equivalent catalog session property is join_pushdown_enabled. | true |
| join-pushdown.strategy | Strategy used to evaluate whether join operations are pushed down. Set to AUTOMATIC to enable cost-based join pushdown, or EAGER to push down joins whenever possible. Note that EAGER can push down joins even when table statistics are unavailable, which may result in degraded query performance. Because of this, EAGER is only recommended for testing and troubleshooting purposes. | AUTOMATIC |
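For example, a catalog snippet that forces eager join pushdown for troubleshooting, using only the properties from the table above:

```properties
join-pushdown.enabled=true
# EAGER pushes joins down even without table statistics; use for testing only
join-pushdown.strategy=EAGER
```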
Predicate pushdown support
Predicates are pushed down for most types, including UUID and temporal types, such as DATE.

The connector does not support pushdown of range predicates, such as >, <, or BETWEEN, on columns with character string types like CHAR or VARCHAR. Equality predicates, such as IN or =, and inequality predicates, such as !=, on columns with textual types are pushed down. This ensures correctness of results, since the remote data source may sort strings differently than Trino.
In the following example, the predicate of the first query is not pushed
down since name is a column of type VARCHAR and > is a range
predicate. The other queries are pushed down.
```sql
-- Not pushed down
SELECT * FROM nation WHERE name > 'CANADA';
-- Pushed down
SELECT * FROM nation WHERE name != 'CANADA';
SELECT * FROM nation WHERE name = 'CANADA';
```
There is experimental support for pushdown of range predicates on columns with character string types. It can be enabled by setting the postgresql.experimental.enable-string-pushdown-with-collate catalog configuration property or the corresponding enable_string_pushdown_with_collate session property to true. With this configuration enabled, the predicates of all the queries in the above example are pushed down.
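A sketch of enabling this per session, assuming the catalog is named example:

```sql
SET SESSION example.enable_string_pushdown_with_collate = true;
-- With the property enabled, this range predicate is also pushed down
SELECT * FROM nation WHERE name > 'CANADA';
```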