
Redshift connector

Zipstack Cloud features a powerful SQL query engine on top of many types of connectors, including Trino's connectors, custom connectors, and connectors from the open source Airbyte project. The underlying native connector for Redshift is Trino's Redshift connector, and parts of this documentation have been adapted from the connector documentation in the open source Trino project.

info

Please reach out to [email protected] if you need Redshift with keystore-based authentication. This requires provisioning Zipstack Cloud with extra modules/properties.

The Redshift connector allows querying and creating tables in an external Amazon Redshift cluster. This can be used to join data between different systems like Redshift and Hive, or between two different Redshift clusters.

Requirements

To connect to Redshift, you need:

  • Network access from Zipstack Cloud to Redshift. Port 5439 is the default port.

Configuration

To configure the Redshift connector, create a data source with the following minimum properties. Replace the connection properties as appropriate for your setup:

connection-url=jdbc:redshift://example.net:5439/database
connection-user=root
connection-password=secret

The connection-user and connection-password properties are typically required and determine the user credentials for the connection, often a service user. You can use secrets to avoid storing actual values in the catalog properties files.
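For example, assuming Zipstack Cloud resolves environment variables as secrets (standard Trino secrets behavior), the password can be referenced instead of written inline; the variable name is illustrative:

connection-url=jdbc:redshift://example.net:5439/database
connection-user=redshift_service_user
connection-password=${ENV:REDSHIFT_PASSWORD}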

Connection security

If you have TLS configured with a globally-trusted certificate installed on your data source, you can enable TLS between your cluster and the data source by appending a parameter to the JDBC connection string set in the connection-url catalog configuration property.

For example, on version 2.1 of the Redshift JDBC driver, TLS/SSL is enabled by default with the SSL parameter. You can disable or further configure TLS by appending parameters to the connection-url configuration property:

connection-url=jdbc:redshift://example.net:5439/database;SSL=TRUE;

For more information on TLS configuration options, see the Redshift JDBC driver documentation.

Data source authentication

The connector can provide credentials for the data source connection in multiple ways:

  • inline, in the connector configuration file

  • in a separate properties file

  • in a key store file

  • as extra credentials set when connecting to Trino

You can use secrets to avoid storing sensitive values in the catalog properties files.

The following configuration properties are available for connection credentials:

  • credential-provider.type: Type of the credential provider. Must be one of INLINE, FILE, or KEYSTORE; defaults to INLINE.

  • connection-user: Connection user name.

  • connection-password: Connection password.

  • user-credential-name: Name of the extra credentials property whose value is used as the user name. See extraCredentials in the parameter reference.

  • password-credential-name: Name of the extra credentials property whose value is used as the password.

  • connection-credential-file: Location of the properties file containing the credentials. It must contain the connection-user and connection-password properties.

  • keystore-file-path: Location of the Java keystore file from which to read credentials.

  • keystore-type: File format of the keystore file, for example JKS or PEM.

  • keystore-password: Password for the keystore.

  • keystore-user-credential-name: Name of the keystore entity to use as the user name.

  • keystore-user-credential-password: Password for the user name keystore entity.

  • keystore-password-credential-name: Name of the keystore entity to use as the password.

  • keystore-password-credential-password: Password for the password keystore entity.
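For example, to read the credentials from a separate properties file instead of keeping them inline, a configuration along these lines can be used; the file location is illustrative:

credential-provider.type=FILE
connection-credential-file=etc/redshift-credentials.properties

The referenced file must contain the connection-user and connection-password properties.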

Multiple Redshift databases or clusters

The Redshift connector can only access a single database within a Redshift cluster. Thus, if you have multiple Redshift databases, or want to connect to multiple Redshift clusters, you must configure multiple data sources.
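As a sketch, two data sources pointing at different databases in the same cluster could be configured as follows; the database names are illustrative:

# data source "sales"
connection-url=jdbc:redshift://example.net:5439/sales

# data source "marketing"
connection-url=jdbc:redshift://example.net:5439/marketing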

General configuration properties

The following general catalog configuration properties are available for the connector:

  • case-insensitive-name-matching: Support case insensitive schema and table names. Defaults to false.

  • case-insensitive-name-matching.cache-ttl: Duration for which case insensitive name matching information is cached. Defaults to 1m.

  • case-insensitive-name-matching.config-file: Path to a name mapping configuration file in JSON format that allows Trino to disambiguate between schemas and tables with similar names in different cases. Defaults to null.

  • case-insensitive-name-matching.config-file.refresh-period: Frequency with which Trino checks the name matching configuration file for changes. This value should be a duration. By default, the refresh is disabled.

  • metadata.cache-ttl: Duration for which metadata, including table and column statistics, is cached. Defaults to 0s (caching disabled).

  • metadata.cache-missing: Cache the fact that metadata, including table and column statistics, is not available. Defaults to false.

  • metadata.cache-maximum-size: Maximum number of objects stored in the metadata cache. Defaults to 10000.

  • write.batch-size: Maximum number of statements in a batched execution. Do not change this setting from the default, as non-default values may negatively impact performance. Defaults to 1000.

  • dynamic-filtering.enabled: Push down dynamic filters into JDBC queries. Defaults to true.

  • dynamic-filtering.wait-timeout: Maximum duration for which Trino waits for dynamic filters to be collected from the build side of joins before starting a JDBC query. A large timeout can potentially result in more detailed dynamic filters, but can also increase latency for some queries. Defaults to 20s.
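For example, to enable case insensitive matching and cache metadata for ten minutes, settings along these lines could be added to the catalog properties; the values are illustrative:

case-insensitive-name-matching=true
metadata.cache-ttl=10m
metadata.cache-missing=true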

Domain compaction threshold

Pushing down a large list of predicates to the data source can compromise performance. By default, Trino compacts large predicates into a simpler range predicate to balance performance and predicate pushdown. If necessary, the threshold for this compaction can be increased to improve performance when the data source is capable of taking advantage of large predicates; increasing it may also improve pushdown of large dynamic filters. The domain-compaction-threshold catalog configuration property or the domain_compaction_threshold catalog session property can be used to adjust the default value of 32 for this threshold.
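For example, the threshold can be raised for a single session; the value is illustrative, and example is the catalog name:

SET SESSION example.domain_compaction_threshold = 1000;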

Procedures

  • system.flush_metadata_cache()

    Flush JDBC metadata caches. For example, the following system call flushes the metadata caches for all schemas in the example catalog:

    USE example.example_schema;
    CALL system.flush_metadata_cache();

Case insensitive matching

When case-insensitive-name-matching is set to true, Trino is able to query non-lowercase schemas and tables by maintaining a mapping of the lowercase name to the actual name in the remote system. However, if two schemas and/or tables have names that differ only in case (such as "customers" and "Customers"), Trino fails to query them due to ambiguity.

In these cases, use the case-insensitive-name-matching.config-file catalog configuration property to specify a configuration file that maps these remote schemas/tables to their respective Trino schemas/tables:

{
  "schemas": [
    {
      "remoteSchema": "CaseSensitiveName",
      "mapping": "case_insensitive_1"
    },
    {
      "remoteSchema": "cASEsENSITIVEnAME",
      "mapping": "case_insensitive_2"
    }
  ],
  "tables": [
    {
      "remoteSchema": "CaseSensitiveName",
      "remoteTable": "tablex",
      "mapping": "table_1"
    },
    {
      "remoteSchema": "CaseSensitiveName",
      "remoteTable": "TABLEX",
      "mapping": "table_2"
    }
  ]
}

Queries against one of the tables or schemas defined in the mapping attributes are run against the corresponding remote entity. For example, a query against tables in the case_insensitive_1 schema is forwarded to the CaseSensitiveName schema, and a query against case_insensitive_2 is forwarded to the cASEsENSITIVEnAME schema.

At the table mapping level, a query on case_insensitive_1.table_1 as configured above is forwarded to CaseSensitiveName.tablex, and a query on case_insensitive_1.table_2 is forwarded to CaseSensitiveName.TABLEX.
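To enable the mapping, reference the configuration file from the catalog properties; the file path is illustrative:

case-insensitive-name-matching=true
case-insensitive-name-matching.config-file=etc/redshift-name-mapping.json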

By default, when a change is made to the mapping configuration file, Trino must be restarted to load the changes. Optionally, you can set the case-insensitive-name-matching.config-file.refresh-period property to have Trino refresh the file without requiring a restart:

case-insensitive-name-matching.config-file.refresh-period=30s

Non-transactional INSERT

The connector supports adding rows using INSERT statements </sql/insert>. By default, data insertion is performed by writing data to a temporary table. You can skip this step to improve performance and write directly to the target table. Set the insert.non-transactional-insert.enabled catalog property or the corresponding non_transactional_insert catalog session property to true.

Note that with this property enabled, data can be corrupted in rare cases where exceptions occur during the insert operation. With transactions disabled, no rollback can be performed.
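For example, direct writes to the target table can be enabled for a single session before running an INSERT; example is the catalog name:

SET SESSION example.non_transactional_insert = true;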

Querying Redshift

The Redshift connector provides a schema for every Redshift schema. You can see the available Redshift schemas by running SHOW SCHEMAS:

SHOW SCHEMAS FROM example;

If you have a Redshift schema named web, you can view the tables in this schema by running SHOW TABLES:

SHOW TABLES FROM example.web;

You can see a list of the columns in the clicks table in the web schema using either of the following:

DESCRIBE example.web.clicks;
SHOW COLUMNS FROM example.web.clicks;

Finally, you can access the clicks table in the web schema:

SELECT * FROM example.web.clicks;

If you used a different name for your catalog properties file, use that catalog name instead of example in the above examples.

Type mapping

Type mapping configuration properties

The following properties can be used to configure how data types from the connected data source are mapped to Trino data types and how the metadata is cached in Trino.

  • unsupported-type-handling: Configure how unsupported column data types are handled. With IGNORE, the column is not accessible; with CONVERT_TO_VARCHAR, the column is converted to unbounded VARCHAR. Defaults to IGNORE. The corresponding catalog session property is unsupported_type_handling.

  • jdbc-types-mapped-to-varchar: Allow forced mapping of a comma separated list of data types to unbounded VARCHAR.
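For example, to expose otherwise unsupported columns as unbounded VARCHAR, and to additionally force a specific type to VARCHAR, the catalog properties might include the following; the forced type name is illustrative:

unsupported-type-handling=CONVERT_TO_VARCHAR
jdbc-types-mapped-to-varchar=interval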

SQL support

The connector provides read access and write access to data and metadata in Redshift. In addition to the globally available and read operation statements, the connector supports the following features:

  • INSERT

  • DELETE

  • TRUNCATE

  • Schema and table management

SQL DELETE

If a WHERE clause is specified, the DELETE operation only works if the predicate in the clause can be fully pushed down to the data source.
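For example, the following DELETE works because the simple equality predicate can be fully pushed down to Redshift; the table and column names are illustrative:

DELETE FROM example.web.clicks WHERE user_id = 42;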

ALTER TABLE

The connector does not support renaming tables across multiple schemas. For example, the following statement is supported:

ALTER TABLE example.schema_one.table_one RENAME TO example.schema_one.table_two

The following statement attempts to rename a table across schemas, and therefore is not supported:

ALTER TABLE example.schema_one.table_one RENAME TO example.schema_two.table_two

ALTER SCHEMA

The connector supports renaming a schema with the ALTER SCHEMA RENAME statement. ALTER SCHEMA SET AUTHORIZATION is not supported.
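For example, assuming a schema named web in the example catalog:

ALTER SCHEMA example.web RENAME TO web_archive;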

Table functions

The connector provides specific table functions to access Redshift.

query(varchar) -> table

The query function allows you to query the underlying database directly. It requires syntax native to Redshift, because the full query is pushed down and processed in Redshift. This can be useful for accessing native features which are not implemented in Trino or for improving query performance in situations where running a query natively may be faster.

note

Polymorphic table functions may not preserve the order of the query result. If the table function contains a query with an ORDER BY clause, the function result may not be ordered as expected.

For example, query the example catalog and select the top 10 nations by population:

SELECT
  *
FROM
  TABLE(
    example.system.query(
      query => 'SELECT TOP 10 *
                FROM tpch.nation
                ORDER BY population DESC'
    )
  );