
Druid connector

Zipstack Cloud provides a powerful SQL query engine on top of many types of connectors, including Trino's native connectors, custom connectors, and connectors from the open source Airbyte project. The underlying native connectors are Trino's connectors, and parts of the documentation for these connectors have been adapted from the connector documentation in the open source Trino project.

The Druid connector allows querying an Apache Druid database from Trino.

Requirements

To connect to Druid, you need:

  • Druid version 0.18.0 or higher.

  • Network access from Zipstack Cloud to your Druid broker. The default port is 8082.

Configuration

To configure the Druid connector, create a catalog properties file that sets connector.name to druid and configures connection-url with the JDBC string for connecting to Druid.

For example, to make a Druid database available as the example catalog, create the file etc/catalog/example.properties. Replace BROKER:8082 with the correct host and port of your Druid broker.

connector.name=druid
connection-url=jdbc:avatica:remote:url=http://BROKER:8082/druid/v2/sql/avatica/

You can add authentication details to connect to a Druid deployment that is secured by basic authentication by updating the URL and adding credentials:

connection-url=jdbc:avatica:remote:url=http://BROKER:port/druid/v2/sql/avatica/;authentication=BASIC
connection-user=root
connection-password=secret

Now you can access your Druid database in Trino with the example catalog name from the properties file.
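For instance, Druid datasources are typically exposed in the druid schema of the catalog. A quick sketch, using a hypothetical datasource named my_datasource:

SHOW TABLES FROM example.druid;

SELECT * FROM example.druid.my_datasource LIMIT 10;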

The connection-user and connection-password properties are typically required and determine the user credentials for the connection, often for a service user. You can use secrets to avoid storing actual values in the catalog properties files.
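For example, a minimal sketch that reads the password from an environment variable, assuming a DRUID_PASSWORD variable is set on the cluster:

connection-user=root
connection-password=${ENV:DRUID_PASSWORD}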

Data source authentication

The connector can provide credentials for the data source connection in multiple ways:

  • inline, in the connector configuration file

  • in a separate properties file

  • in a key store file

  • as extra credentials set when connecting to Trino

You can use secrets to avoid storing sensitive values in the catalog properties files.

The following properties configure connection credentials:

credential-provider.type
    Type of the credential provider. Must be one of INLINE, FILE, or KEYSTORE; defaults to INLINE.

connection-user
    Connection user name.

connection-password
    Connection password.

user-credential-name
    Name of the extra credentials property whose value to use as the user name. See extraCredentials in the parameter reference.

password-credential-name
    Name of the extra credentials property whose value to use as the password.

connection-credential-file
    Location of the properties file containing the credentials. It must contain the connection-user and connection-password properties.

keystore-file-path
    Location of the Java keystore file from which to read credentials.

keystore-type
    File format of the keystore file, for example JKS or PEM.

keystore-password
    Password for the keystore.

keystore-user-credential-name
    Name of the keystore entity to use as the user name.

keystore-user-credential-password
    Password for the user name keystore entity.

keystore-password-credential-name
    Name of the keystore entity to use as the password.

keystore-password-credential-password
    Password for the password keystore entity.
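For example, a sketch of the FILE credential provider, assuming the credentials live in a separate (hypothetical) etc/catalog/druid-credentials.properties file that contains the connection-user and connection-password properties:

credential-provider.type=FILE
connection-credential-file=etc/catalog/druid-credentials.properties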

General configuration properties

The following general catalog configuration properties apply to the connector:

case-insensitive-name-matching
    Support case insensitive schema and table names. Default: false.

case-insensitive-name-matching.cache-ttl
    Duration for which case insensitive schema and table names are cached. Default: 1m.

case-insensitive-name-matching.config-file
    Path to a name mapping configuration file in JSON format that allows Trino to disambiguate between schemas and tables with similar names in different cases. Default: null.

case-insensitive-name-matching.config-file.refresh-period
    Frequency with which Trino checks the name matching configuration file for changes, expressed as a duration. Default: refresh disabled.

metadata.cache-ttl
    Duration for which metadata, including table and column statistics, is cached. Default: 0s (caching disabled).

metadata.cache-missing
    Cache the fact that metadata, including table and column statistics, is not available. Default: false.

metadata.cache-maximum-size
    Maximum number of objects stored in the metadata cache. Default: 10000.

write.batch-size
    Maximum number of statements in a batched execution. Do not change this setting from the default; non-default values may negatively impact performance. Default: 1000.

dynamic-filtering.enabled
    Push down dynamic filters into JDBC queries. Default: true.

dynamic-filtering.wait-timeout
    Maximum duration for which Trino waits for dynamic filters to be collected from the build side of joins before starting a JDBC query. A large timeout can produce more detailed dynamic filters, but can also increase latency for some queries. Default: 20s.
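For example, a sketch that enables case insensitive matching and metadata caching in the catalog properties file; the values shown are illustrative, not recommendations:

case-insensitive-name-matching=true
metadata.cache-ttl=10m
metadata.cache-missing=true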

Domain compaction threshold

Pushing down a large list of predicates to the data source can compromise performance. By default, Trino compacts large predicates into a simpler range predicate to balance performance and predicate pushdown. If necessary, you can increase the threshold for this compaction to improve performance when the data source is capable of taking advantage of large predicates; increasing it may also improve pushdown of large dynamic filters. Use the domain-compaction-threshold catalog configuration property or the domain_compaction_threshold catalog session property to adjust the default threshold of 32.
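For example, a sketch that raises the threshold in the catalog properties file (the value 1000 is illustrative):

domain-compaction-threshold=1000

The same adjustment can be made for a single session with the catalog session property:

SET SESSION example.domain_compaction_threshold = 1000;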

Procedures

  • system.flush_metadata_cache()

    Flush JDBC metadata caches. For example, the following system call flushes the metadata caches for all schemas in the example catalog:

    USE example.example_schema;
    CALL system.flush_metadata_cache();

Case insensitive matching

When case-insensitive-name-matching is set to true, Trino is able to query non-lowercase schemas and tables by maintaining a mapping of the lowercase name to the actual name in the remote system. However, if two schemas and/or tables have names that differ only in case (such as "customers" and "Customers"), Trino fails to query them due to ambiguity.

In these cases, use the case-insensitive-name-matching.config-file catalog configuration property to specify a configuration file that maps these remote schemas/tables to their respective Trino schemas/tables:

{
  "schemas": [
    {
      "remoteSchema": "CaseSensitiveName",
      "mapping": "case_insensitive_1"
    },
    {
      "remoteSchema": "cASEsENSITIVEnAME",
      "mapping": "case_insensitive_2"
    }
  ],
  "tables": [
    {
      "remoteSchema": "CaseSensitiveName",
      "remoteTable": "tablex",
      "mapping": "table_1"
    },
    {
      "remoteSchema": "CaseSensitiveName",
      "remoteTable": "TABLEX",
      "mapping": "table_2"
    }
  ]
}

Queries against one of the tables or schemas defined in the mapping attributes are run against the corresponding remote entity. For example, a query against tables in the case_insensitive_1 schema is forwarded to the CaseSensitiveName schema and a query against case_insensitive_2 is forwarded to the cASEsENSITIVEnAME schema.

At the table mapping level, a query on case_insensitive_1.table_1 as configured above is forwarded to CaseSensitiveName.tablex, and a query on case_insensitive_1.table_2 is forwarded to CaseSensitiveName.TABLEX.
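For example, with the mapping above, the following Trino queries reach the remote entities noted in the comments:

SELECT * FROM example.case_insensitive_1.table_1; -- CaseSensitiveName.tablex
SELECT * FROM example.case_insensitive_1.table_2; -- CaseSensitiveName.TABLEX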

By default, when a change is made to the mapping configuration file, Trino must be restarted to load the changes. Optionally, you can set the case-insensitive-name-matching.config-file.refresh-period property to have Trino refresh the mapping without requiring a restart:

case-insensitive-name-matching.config-file.refresh-period=30s

Type mapping

Because Trino and Druid each support types that the other does not, this connector modifies some types when reading data.

Druid type to Trino type mapping

The connector maps Druid types to the corresponding Trino types as follows:

  • STRING maps to VARCHAR.

  • FLOAT maps to REAL.

  • DOUBLE maps to DOUBLE.

  • LONG maps to BIGINT, except for the special _time column, which is mapped to TIMESTAMP.

  • TIMESTAMP maps to TIMESTAMP. This only applies to the special _time column.

No other data types are supported.

Druid does not have a real NULL value for any data type. By default, Druid treats NULL as the default value for a data type. For example, LONG would be 0, DOUBLE would be 0.0, STRING would be an empty string '', and so forth.
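This has practical consequences for filtering. A hypothetical sketch (my_datasource and my_string_col are illustrative names): rows ingested with a NULL string are stored as the empty string, so under the default behavior described above they match an equality comparison rather than IS NULL:

-- Counts rows ingested with NULL in my_string_col, because Druid
-- stores them as the empty string by default.
SELECT count(*)
FROM example.druid.my_datasource
WHERE my_string_col = '';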

Type mapping configuration properties

The following properties can be used to configure how data types from the connected data source are mapped to Trino data types and how the metadata is cached in Trino.

unsupported-type-handling
    Configure how unsupported column data types are handled:

      • IGNORE: the column is not accessible.

      • CONVERT_TO_VARCHAR: the column is converted to unbounded VARCHAR.

    The respective catalog session property is unsupported_type_handling. Default: IGNORE.

jdbc-types-mapped-to-varchar
    Allow forced mapping of comma separated lists of data types to convert to unbounded VARCHAR.
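For example, a sketch that exposes unsupported columns as unbounded VARCHAR instead of hiding them:

unsupported-type-handling=CONVERT_TO_VARCHAR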

SQL support

The connector provides globally available and read operation statements to access data and metadata in the Druid database.

Table functions

The connector provides specific table functions to access Druid.

query(varchar) -> table

The query function allows you to query the underlying database directly. It requires syntax native to Druid, because the full query is pushed down and processed in Druid. This can be useful for accessing native features which are not available in Trino or for improving query performance in situations where running a query natively may be faster.

Note: Polymorphic table functions may not preserve the order of the query result. If the table function contains a query with an ORDER BY clause, the function result may not be ordered as expected.

As an example, query the example catalog and use STRING_TO_MV and MV_LENGTH from Druid SQL's multi-value string functions to split and then count the number of comma-separated values in a column:

SELECT
  num_reports
FROM
  TABLE(
    example.system.query(
      query => 'SELECT
        MV_LENGTH(
          STRING_TO_MV(direct_reports, '','')
        ) AS num_reports
      FROM company.managers'
    )
  );
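As noted above, the ordering of the result is not guaranteed. If you need a deterministic order, a sketch that sorts in the outer Trino query instead of inside the pushed-down Druid query:

SELECT
  num_reports
FROM
  TABLE(
    example.system.query(
      query => 'SELECT
        MV_LENGTH(
          STRING_TO_MV(direct_reports, '','')
        ) AS num_reports
      FROM company.managers'
    )
  )
ORDER BY num_reports;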