
ClickHouse connector

Zipstack Cloud features a powerful SQL query engine on top of many types of connectors, including Trino's connectors, custom connectors, and connectors from the open source Airbyte project. The underlying native connectors are Trino's connectors, and parts of the documentation for these connectors have been adapted from the connector documentation in the Trino open source project.

The ClickHouse connector allows querying tables in an external ClickHouse server. This can be used to query data in the databases on that server, or combine it with other data from different catalogs accessing ClickHouse or any other supported data source.

Requirements

To connect to a ClickHouse server, you need:

  • ClickHouse (version 21.8 or higher) or Altinity (version 20.8 or higher).

  • Network access from Zipstack Cloud to the ClickHouse server. Port 8123 is the default port.

Configuration

To use the connector, create a catalog properties file that contains the connection properties, for example:

connection-url=jdbc:clickhouse://host1:8123/
connection-user=exampleuser
connection-password=examplepassword

The connection-url defines the connection information and parameters to pass to the ClickHouse JDBC driver. The supported parameters for the URL are available in the ClickHouse JDBC driver configuration.

The connection-user and connection-password are typically required and determine the user credentials for the connection, often a service user. You can use secrets to avoid storing actual values in the catalog properties files.
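For example, assuming the standard Trino secrets syntax with the ${ENV:...} placeholder is available, the catalog file can reference environment variables instead of literal values (the variable names here are illustrative):

connection-user=${ENV:CLICKHOUSE_USER}
connection-password=${ENV:CLICKHOUSE_PASSWORD}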

Connection security

If you have TLS configured with a globally-trusted certificate installed on your data source, you can enable TLS between your cluster and the data source by appending a parameter to the JDBC connection string set in the connection-url catalog configuration property.

For example, with version 2.6.4 of the ClickHouse JDBC driver, enable TLS by appending the ssl=true parameter to the connection-url configuration property:

connection-url=jdbc:clickhouse://host1:8443/?ssl=true

For more information on TLS configuration options, see the ClickHouse JDBC driver documentation.

Data source authentication

The connector can provide credentials for the data source connection in multiple ways:

  • inline, in the connector configuration file

  • as extra credentials set when connecting to Trino

The following table describes configuration properties for connection credentials:

Property name | Description
credential-provider.type | Type of the credential provider. Must be one of INLINE, FILE, or KEYSTORE; defaults to INLINE.
connection-user | Connection user name.
connection-password | Connection password.
user-credential-name | Name of the extra credentials property, whose value to use as the user name. See extraCredentials in Parameter reference.
password-credential-name | Name of the extra credentials property, whose value to use as the password.
connection-credential-file | Location of the properties file where credentials are present. It must contain the connection-user and connection-password properties.
keystore-file-path | The location of the Java keystore file, from which to read credentials.
keystore-type | File format of the keystore file, for example JKS or PEM.
keystore-password | Password for the keystore.
keystore-user-credential-name | Name of the keystore entity to use as the user name.
keystore-user-credential-password | Password for the user name keystore entity.
keystore-password-credential-name | Name of the keystore entity to use as the password.
keystore-password-credential-password | Password for the password keystore entity.
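
As a minimal sketch of the FILE credential provider described above, the catalog properties file points to a separate properties file that holds the credentials (the file path is an example):

credential-provider.type=FILE
connection-credential-file=etc/clickhouse-credentials.properties

The referenced file must contain the connection-user and connection-password properties.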

Multiple ClickHouse servers

If you have multiple ClickHouse servers, you need to configure one catalog (data source) for each server.

General configuration properties

The following table describes general catalog configuration properties for the connector:

Property name | Description | Default value
case-insensitive-name-matching | Support case insensitive schema and table names. | false
case-insensitive-name-matching.cache-ttl | Duration for which case insensitive schema and table name mappings are cached. This value should be a duration. | 1m
case-insensitive-name-matching.config-file | Path to a name mapping configuration file in JSON format that allows Trino to disambiguate between schemas and tables with similar names in different cases. | null
case-insensitive-name-matching.config-file.refresh-period | Frequency with which Trino checks the name matching configuration file for changes. This value should be a duration. | (refresh disabled)
metadata.cache-ttl | Duration for which metadata, including table and column statistics, is cached. | 0s (caching disabled)
metadata.cache-missing | Cache the fact that metadata, including table and column statistics, is not available. | false
metadata.cache-maximum-size | Maximum number of objects stored in the metadata cache. | 10000
write.batch-size | Maximum number of statements in a batched execution. Do not change this setting from the default. Non-default values may negatively impact performance. | 1000
dynamic-filtering.enabled | Push down dynamic filters into JDBC queries. | true
dynamic-filtering.wait-timeout | Maximum duration for which Trino waits for dynamic filters to be collected from the build side of joins before starting a JDBC query. A large timeout can potentially result in more detailed dynamic filters, but can also increase latency for some queries. | 20s

Domain compaction threshold

Pushing down a large list of predicates to the data source can compromise performance. By default, Trino compacts large predicates into a simpler range predicate to balance performance and predicate pushdown. If the data source is capable of taking advantage of large predicates, the threshold for this compaction can be increased to improve performance; raising it may also improve pushdown of large dynamic filters. Use the domain-compaction-threshold catalog configuration property or the domain_compaction_threshold catalog session property to adjust the default value of 1000 for this threshold.
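
For example, assuming a catalog named example, you can raise the threshold for the current session with the catalog session property:

SET SESSION example.domain_compaction_threshold = 5000;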

Procedures

  • system.flush_metadata_cache()

    Flush JDBC metadata caches. For example, the following system call flushes the metadata caches for all schemas in the example catalog:

    USE example.example_schema;
    CALL system.flush_metadata_cache();

Case insensitive matching

When case-insensitive-name-matching is set to true, Trino is able to query non-lowercase schemas and tables by maintaining a mapping of the lowercase name to the actual name in the remote system. However, if two schemas and/or tables have names that differ only in case (such as "customers" and "Customers") then Trino fails to query them due to ambiguity.

In these cases, use the case-insensitive-name-matching.config-file catalog configuration property to specify a configuration file that maps these remote schemas/tables to their respective Trino schemas/tables:

{
  "schemas": [
    {
      "remoteSchema": "CaseSensitiveName",
      "mapping": "case_insensitive_1"
    },
    {
      "remoteSchema": "cASEsENSITIVEnAME",
      "mapping": "case_insensitive_2"
    }
  ],
  "tables": [
    {
      "remoteSchema": "CaseSensitiveName",
      "remoteTable": "tablex",
      "mapping": "table_1"
    },
    {
      "remoteSchema": "CaseSensitiveName",
      "remoteTable": "TABLEX",
      "mapping": "table_2"
    }
  ]
}

Queries against one of the tables or schemas defined in the mapping attributes are run against the corresponding remote entity. For example, a query against tables in the case_insensitive_1 schema is forwarded to the CaseSensitiveName schema and a query against case_insensitive_2 is forwarded to the cASEsENSITIVEnAME schema.

At the table mapping level, a query on case_insensitive_1.table_1 as configured above is forwarded to CaseSensitiveName.tablex, and a query on case_insensitive_1.table_2 is forwarded to CaseSensitiveName.TABLEX.
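
For example, with the mapping above configured for a catalog named example, the following query reads from CaseSensitiveName.tablex on the remote ClickHouse server:

SELECT * FROM example.case_insensitive_1.table_1;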

By default, when a change is made to the mapping configuration file, Trino must be restarted to load the changes. Optionally, you can set the case-insensitive-name-matching.config-file.refresh-period property to have Trino refresh the mappings without requiring a restart:

case-insensitive-name-matching.config-file.refresh-period=30s

Non-transactional INSERT

The connector supports adding rows using INSERT statements. By default, data insertion is performed by writing data to a temporary table. You can skip this step to improve performance and write directly to the target table by setting the insert.non-transactional-insert.enabled catalog property or the corresponding non_transactional_insert catalog session property to true.
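
For example, assuming a catalog named example, enable the behavior for the current session with:

SET SESSION example.non_transactional_insert = true;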

Note that with this property enabled, data can be corrupted in rare cases where exceptions occur during the insert operation. With transactions disabled, no rollback can be performed.

Querying ClickHouse

The ClickHouse connector provides a schema for every ClickHouse database. Run SHOW SCHEMAS to see the available ClickHouse databases:

SHOW SCHEMAS FROM example;

If you have a ClickHouse database named web, run SHOW TABLES to view the tables in this database:

SHOW TABLES FROM example.web;

Run DESCRIBE or SHOW COLUMNS to list the columns in the clicks table in the web database:

DESCRIBE example.web.clicks;
SHOW COLUMNS FROM example.web.clicks;

Run SELECT to access the clicks table in the web database:

SELECT * FROM example.web.clicks;

Note: If you used a different name for your catalog properties file, use that catalog name instead of example in the above examples.

Table properties

Table property usage example:

CREATE TABLE default.trino_ck (
    id int NOT NULL,
    birthday DATE NOT NULL,
    name VARCHAR,
    age BIGINT,
    logdate DATE NOT NULL
)
WITH (
    engine = 'MergeTree',
    order_by = ARRAY['id', 'birthday'],
    partition_by = ARRAY['toYYYYMM(logdate)'],
    primary_key = ARRAY['id'],
    sample_by = 'id'
);

The following ClickHouse table properties, described in the engine documentation at https://clickhouse.tech/docs/en/engines/table-engines/mergetree-family/mergetree/, are supported:

Property name | Default value | Description
engine | Log | Name and parameters of the engine.
order_by | (none) | Array of columns or expressions to concatenate to create the sorting key. Required if engine is MergeTree.
partition_by | (none) | Array of columns or expressions to use as nested partition keys. Optional.
primary_key | (none) | Array of columns or expressions to concatenate to create the primary key. Optional.
sample_by | (none) | An expression to use for sampling. Optional.

Currently, the connector only supports the Log and MergeTree table engines in CREATE TABLE statements. The ReplicatedMergeTree engine is not yet supported.
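
As a minimal sketch based on the property table above, a table using the Log engine needs no sorting key, so only the engine property is required (and it can even be omitted, since Log is the default):

CREATE TABLE default.trino_log (
    id int NOT NULL,
    message VARCHAR
)
WITH (engine = 'Log');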

Type mapping

Because Trino and ClickHouse each support types that the other does not, this connector modifies some types when reading or writing data. Data types may not map the same way in both directions between Trino and the data source. Refer to the following sections for type mapping in each direction.

ClickHouse type to Trino type mapping

The connector maps ClickHouse types to the corresponding Trino types according to the following table:

ClickHouse type | Trino type | Notes
Int8 | TINYINT | TINYINT, BOOL, BOOLEAN, and INT1 are aliases of Int8
Int16 | SMALLINT | SMALLINT and INT2 are aliases of Int16
Int32 | INTEGER | INT, INT4, and INTEGER are aliases of Int32
Int64 | BIGINT | BIGINT is an alias of Int64
UInt8 | SMALLINT
UInt16 | INTEGER
UInt32 | BIGINT
UInt64 | DECIMAL(20,0)
Float32 | REAL | FLOAT is an alias of Float32
Float64 | DOUBLE | DOUBLE is an alias of Float64
Decimal | DECIMAL
FixedString | VARBINARY | Enabling the clickhouse.map-string-as-varchar config property changes the mapping to VARCHAR
String | VARBINARY | Enabling the clickhouse.map-string-as-varchar config property changes the mapping to VARCHAR
Date | DATE
DateTime[(timezone)] | TIMESTAMP(0) [WITH TIME ZONE]
IPv4 | IPADDRESS
IPv6 | IPADDRESS
Enum8 | VARCHAR
Enum16 | VARCHAR
UUID | UUID


No other types are supported.
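
For example, to map the ClickHouse String and FixedString types to VARCHAR instead of VARBINARY, as noted in the table above, set the following in the catalog properties file:

clickhouse.map-string-as-varchar=true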

Trino type to ClickHouse type mapping

The connector maps Trino types to the corresponding ClickHouse types according to the following table:

Trino type | ClickHouse type | Notes
BOOLEAN | UInt8
TINYINT | Int8 | TINYINT, BOOL, BOOLEAN, and INT1 are aliases of Int8
SMALLINT | Int16 | SMALLINT and INT2 are aliases of Int16
INTEGER | Int32 | INT, INT4, and INTEGER are aliases of Int32
BIGINT | Int64 | BIGINT is an alias of Int64
REAL | Float32 | FLOAT is an alias of Float32
DOUBLE | Float64 | DOUBLE is an alias of Float64
DECIMAL(p,s) | Decimal(p,s)
VARCHAR | String
CHAR | String
VARBINARY | String | Enabling the clickhouse.map-string-as-varchar config property changes the mapping to VARCHAR
DATE | Date
TIMESTAMP(0) | DateTime
UUID | UUID


No other types are supported.

Type mapping configuration properties

The following properties can be used to configure how data types from the connected data source are mapped to Trino data types and how the metadata is cached in Trino.

Property name | Description | Default value
unsupported-type-handling | Configure how unsupported column data types are handled: IGNORE, the column is not accessible; CONVERT_TO_VARCHAR, the column is converted to unbounded VARCHAR. The respective catalog session property is unsupported_type_handling. | IGNORE
jdbc-types-mapped-to-varchar | Allow forced mapping of comma separated lists of data types to convert to unbounded VARCHAR. | (none)
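
For example, to expose unsupported columns as unbounded VARCHAR instead of hiding them, set the following in the catalog properties file (the type names passed to jdbc-types-mapped-to-varchar are illustrative):

unsupported-type-handling=CONVERT_TO_VARCHAR
jdbc-types-mapped-to-varchar=Int256,UInt256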

SQL support

The connector provides read and write access to data and metadata in a ClickHouse catalog. In addition to the globally available and read operation statements, the connector supports the following features:

  • INSERT

  • TRUNCATE

  • Schema and table management

ALTER SCHEMA

The connector supports renaming a schema with the ALTER SCHEMA RENAME statement. ALTER SCHEMA SET AUTHORIZATION is not supported.
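
For example, assuming a catalog named example containing a schema named web:

ALTER SCHEMA example.web RENAME TO web_archive;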

Performance

The connector includes a number of performance improvements, detailed in the following sections.

Pushdown

The connector supports pushdown for a number of operations, which you can verify with EXPLAIN as shown after this list:

  • Limit pushdown

  • Aggregate pushdown for the following functions:

      • avg

      • count

      • max

      • min

      • sum
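
One way to check whether an operation is pushed down is to inspect the query plan with EXPLAIN: a pushed-down aggregation appears as part of the table scan rather than as a separate Trino aggregation operator. For example, using the clicks table from earlier sections:

EXPLAIN SELECT count(*) FROM example.web.clicks;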

Note: The connector performs pushdown where it may improve performance, but in order to preserve correctness an operation may not be pushed down. When pushdown of an operation would result in better performance but risks correctness, the connector prioritizes correctness.

Predicate pushdown support

The connector does not support pushdown of any predicates on columns with textual types like CHAR or VARCHAR. This ensures correctness of results, since the data source may compare strings case-insensitively.

In the following example, the predicate is not pushed down for either query since name is a column of type VARCHAR:

SELECT * FROM nation WHERE name > 'CANADA';
SELECT * FROM nation WHERE name = 'CANADA';