SQL Server connector
Zipstack Cloud features a powerful SQL querying engine on top of many types of connectors, including Trino's native connectors, custom connectors, and connectors from the open source Airbyte project. The SQL Server connector is based on Trino's native connector, and parts of this documentation have been adapted from the connector documentation in the open source Trino project.
Please reach out to [email protected] if you need MSSQL with keystore-based authentication, which requires provisioning Zipstack Cloud with extra modules and properties.
The SQL Server connector allows querying and creating tables in an external Microsoft SQL Server database. This can be used to join data between different systems like SQL Server and Hive, or between two different SQL Server instances.
Requirements
To connect to SQL Server, you need:
SQL Server 2012 or higher, or Azure SQL Database.
Network access from Zipstack Cloud to SQL Server. Port 1433 is the default port.
Configuration
The connector can query a single database on a given SQL Server instance. Create a data source with the following minimum properties. Replace the connection properties as appropriate for your setup:
connection-url=jdbc:sqlserver://<host>:<port>;databaseName=<databaseName>;encrypt=false
connection-user=root
connection-password=secret
The connection-url defines the connection information and parameters
to pass to the SQL Server JDBC driver. The supported parameters for the
URL are available in the SQL Server JDBC driver
documentation.
The connection-user and connection-password are typically required
and determine the user credentials for the connection, often a service
user. You can use secrets to avoid storing actual values
in the catalog properties files.
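For example, assuming the credentials are exposed to the platform as environment variables named SQLSERVER_USER and SQLSERVER_PASSWORD (hypothetical names), a minimal sketch using Trino-style ${ENV:...} secret references:

connection-user=${ENV:SQLSERVER_USER}
connection-password=${ENV:SQLSERVER_PASSWORD}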
Connection security
The JDBC driver, and therefore the connector, automatically use Transport Layer Security (TLS) encryption and certificate validation. This requires a suitable TLS certificate configured on your SQL Server database host.
If you do not have the necessary configuration established, you can
disable encryption in the connection string with the encrypt property:
connection-url=jdbc:sqlserver://<host>:<port>;databaseName=<databaseName>;encrypt=false
Further parameters like trustServerCertificate,
hostNameInCertificate, trustStore, and trustStorePassword are
detailed in the TLS section of the SQL Server JDBC driver
documentation.
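For example, a sketch of a connection URL that keeps encryption enabled but skips certificate validation against a trust store, suitable only for testing:

connection-url=jdbc:sqlserver://<host>:<port>;databaseName=<databaseName>;encrypt=true;trustServerCertificate=true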
Data source authentication
The connector can provide credentials for the data source connection in multiple ways:
- inline, in the connector configuration file
- in a separate properties file
- in a key store file
- as extra credentials set when connecting to Trino
You can use secrets to avoid storing sensitive
values in the catalog properties files.
The following table describes configuration properties for connection credentials:
| Property name | Description |
|---|---|
| credential-provider.type | Type of the credential provider. Must be one of INLINE, FILE, or KEYSTORE; defaults to INLINE. |
| connection-user | Connection user name. |
| connection-password | Connection password. |
| user-credential-name | Name of the extra credentials property, whose value to use as the user name. See extraCredentials in Parameter reference. |
| password-credential-name | Name of the extra credentials property, whose value to use as the password. |
| connection-credential-file | Location of the properties file where credentials are present. It must contain the connection-user and connection-password properties. |
| keystore-file-path | The location of the Java keystore file from which to read credentials. |
| keystore-type | File format of the keystore file, for example JKS or PEM. |
| keystore-password | Password for the key store. |
| keystore-user-credential-name | Name of the key store entity to use as the user name. |
| keystore-user-credential-password | Password for the user name key store entity. |
| keystore-password-credential-name | Name of the key store entity to use as the password. |
| keystore-password-credential-password | Password for the password key store entity. |
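For example, a minimal sketch of the FILE provider, assuming a hypothetical credentials file at etc/sqlserver-credentials.properties that contains the connection-user and connection-password properties:

credential-provider.type=FILE
connection-credential-file=etc/sqlserver-credentials.properties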
Multiple SQL Server databases or servers
The SQL Server connector can only access a single SQL Server database within a single catalog. Thus, if you have multiple SQL Server databases, or want to connect to multiple SQL Server instances, you must configure multiple instances of the SQL Server connector.
To add another catalog, simply add another properties file to
etc/catalog with a different name, making sure it ends in
.properties. For example, if you name the property file
sales.properties, Trino creates a catalog named sales using the
configured connector.
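For example, a sketch of a second catalog file etc/catalog/sales.properties, using a hypothetical host and database name:

# hypothetical second catalog: etc/catalog/sales.properties
connector.name=sqlserver
connection-url=jdbc:sqlserver://sales-db.example.com:1433;databaseName=sales;encrypt=false
connection-user=root
connection-password=secret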
General configuration properties
The following table describes general catalog configuration properties for the connector:
| Property name | Description | Default value |
|---|---|---|
| case-insensitive-name-matching | Support case insensitive schema and table names. | false |
| case-insensitive-name-matching.cache-ttl | This value should be a duration. | 1m |
| case-insensitive-name-matching.config-file | Path to a name mapping configuration file in JSON format that allows Trino to disambiguate between schemas and tables with similar names in different cases. | null |
| case-insensitive-name-matching.config-file.refresh-period | Frequency with which Trino checks the name matching configuration file for changes. This value should be a duration. | (refresh disabled) |
| metadata.cache-ttl | The duration for which metadata, including table and column statistics, is cached. | 0s (caching disabled) |
| metadata.cache-missing | Cache the fact that metadata, including table and column statistics, is not available. | false |
| metadata.cache-maximum-size | Maximum number of objects stored in the metadata cache. | 10000 |
| write.batch-size | Maximum number of statements in a batched execution. Do not change this setting from the default. Non-default values may negatively impact performance. | 1000 |
| dynamic-filtering.enabled | Push down dynamic filters into JDBC queries. | true |
| dynamic-filtering.wait-timeout | Maximum duration for which Trino waits for dynamic filters to be collected from the build side of joins before starting a JDBC query. Using a large timeout can potentially result in more detailed dynamic filters. However, it can also increase latency for some queries. | 20s |
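For example, a sketch that caches metadata for ten minutes, including the fact that metadata is missing (values are illustrative):

metadata.cache-ttl=10m
metadata.cache-missing=true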
Domain compaction threshold
Pushing down a large list of predicates to the data source can
compromise performance. Trino compacts large predicates into a simpler
range predicate by default to ensure a balance between performance and
predicate pushdown. If necessary, the threshold for this compaction can
be increased to improve performance when the data source is capable of
taking advantage of large predicates. Increasing this threshold may
improve pushdown of large dynamic filters.
The domain-compaction-threshold catalog configuration property or the
domain_compaction_threshold catalog session property can be used
to adjust the default value of 32 for this threshold.
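For example, a sketch that raises the threshold for a single session, assuming the catalog is named example and an illustrative value of 512:

SET SESSION example.domain_compaction_threshold = 512;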
Specific configuration properties
The SQL Server connector supports additional catalog properties to configure the behavior of the connector and the queries it issues to the database.
| Property name | Description |
|---|---|
| sqlserver.snapshot-isolation.disabled | Control the automatic use of snapshot isolation for transactions issued by Trino in SQL Server. Defaults to false, which means that snapshot isolation is enabled. |
Procedures
system.flush_metadata_cache()

Flush JDBC metadata caches. For example, the following system call flushes the metadata caches for all schemas in the example catalog:

USE example.example_schema;
CALL system.flush_metadata_cache();
Case insensitive matching
When case-insensitive-name-matching is set to true, Trino is able to
query non-lowercase schemas and tables by maintaining a mapping of the
lowercase name to the actual name in the remote system. However, if two
schemas and/or tables have names that differ only in case (such as
"customers" and "Customers") then Trino fails to query them due to
ambiguity.
In these cases, use the case-insensitive-name-matching.config-file
catalog configuration property to specify a configuration file that maps
these remote schemas/tables to their respective Trino schemas/tables:
{
"schemas": [
{
"remoteSchema": "CaseSensitiveName",
"mapping": "case_insensitive_1"
},
{
"remoteSchema": "cASEsENSITIVEnAME",
"mapping": "case_insensitive_2"
}],
"tables": [
{
"remoteSchema": "CaseSensitiveName",
"remoteTable": "tablex",
"mapping": "table_1"
},
{
"remoteSchema": "CaseSensitiveName",
"remoteTable": "TABLEX",
"mapping": "table_2"
}]
}
Queries against one of the tables or schemas defined in the mapping
attributes are run against the corresponding remote entity. For example,
a query against tables in the case_insensitive_1 schema is forwarded
to the CaseSensitiveName schema and a query against case_insensitive_2
is forwarded to the cASEsENSITIVEnAME schema.
At the table mapping level, a query on case_insensitive_1.table_1 as
configured above is forwarded to CaseSensitiveName.tablex, and a query
on case_insensitive_1.table_2 is forwarded to
CaseSensitiveName.TABLEX.
By default, when a change is made to the mapping configuration file,
Trino must be restarted to load the changes. Optionally, you can set the
case-insensitive-name-matching.config-file.refresh-period property to have
Trino refresh the mapping without requiring a restart:
case-insensitive-name-matching.config-file.refresh-period=30s
Non-transactional INSERT
The connector supports adding rows using
INSERT statements. By default, data insertion is
performed by writing data to a temporary table. You can skip this step
to improve performance and write directly to the target table. Set the
insert.non-transactional-insert.enabled catalog property or the
corresponding non_transactional_insert catalog session property to
true.
Note that with this property enabled, data can be corrupted in rare cases where exceptions occur during the insert operation. With transactions disabled, no rollback can be performed.
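For example, a sketch that enables non-transactional inserts for a single session, assuming the catalog is named example:

SET SESSION example.non_transactional_insert = true;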
Querying SQL Server
The SQL Server connector provides access to all schemas visible to the
specified user in the configured database. For the following examples,
assume the SQL Server catalog is example.
You can see the available schemas by running SHOW SCHEMAS:
SHOW SCHEMAS FROM example;
If you have a schema named web, you can view the tables in this schema
by running SHOW TABLES:
SHOW TABLES FROM example.web;
You can see a list of the columns in the clicks table in the web
database using either of the following:
DESCRIBE example.web.clicks;
SHOW COLUMNS FROM example.web.clicks;
Finally, you can query the clicks table in the web schema:
SELECT * FROM example.web.clicks;
If you used a different name for your catalog properties file, use that
catalog name instead of example in the above examples.
Type mapping
Because Trino and SQL Server each support types that the other does not,
this connector modifies some types when
reading or writing data. Data types may not map the same way in both
directions between Trino and the data source. Refer to the following
sections for type mapping in each direction.
SQL Server type to Trino type mapping
The connector maps SQL Server types to the corresponding Trino types following this table:
| SQL Server database type | Trino type | Notes |
|---|---|---|
| BIT | BOOLEAN | |
| TINYINT | SMALLINT | SQL Server TINYINT is actually unsigned tinyint |
| SMALLINT | SMALLINT | |
| INTEGER | INTEGER | |
| BIGINT | BIGINT | |
| DOUBLE PRECISION | DOUBLE | |
| FLOAT[(n)] | REAL or DOUBLE | See Numeric type mapping |
| REAL | REAL | |
| DECIMAL[(p[, s])], NUMERIC[(p[, s])] | DECIMAL(p, s) | |
| CHAR[(n)] | CHAR(n) | 1 <= n <= 8000 |
| NCHAR[(n)] | CHAR(n) | 1 <= n <= 4000 |
| VARCHAR[(n \| max)], NVARCHAR[(n \| max)] | VARCHAR(n) | |
| TEXT | VARCHAR(2147483647) | |
| NTEXT | VARCHAR(1073741823) | |
| VARBINARY[(n \| max)] | VARBINARY | |
| DATE | DATE | |
| TIME[(n)] | TIME(n) | 0 <= n <= 7 |
| DATETIME2[(n)] | TIMESTAMP(n) | 0 <= n <= 7 |
| SMALLDATETIME | TIMESTAMP(0) | |
| DATETIMEOFFSET[(n)] | TIMESTAMP(n) WITH TIME ZONE | 0 <= n <= 7 |
Trino type to SQL Server type mapping
The connector maps Trino types to the corresponding SQL Server types following this table:
| Trino type | SQL Server type | Notes |
|---|---|---|
| BOOLEAN | BIT | |
| TINYINT | TINYINT | Trino only supports writing values belonging to [0, 127] |
| SMALLINT | SMALLINT | |
| INTEGER | INTEGER | |
| BIGINT | BIGINT | |
| REAL | REAL | |
| DOUBLE | DOUBLE PRECISION | |
| DECIMAL(p, s) | DECIMAL(p, s) | |
| CHAR(n) | NCHAR(n) or NVARCHAR(max) | See Character type mapping |
| VARCHAR(n) | NVARCHAR(n) or NVARCHAR(max) | See Character type mapping |
| VARBINARY | VARBINARY(max) | |
| DATE | DATE | |
| TIME(n) | TIME(n) | 0 <= n <= 7 |
| TIMESTAMP(n) | DATETIME2(n) | 0 <= n <= 7 |
See the SQL Server documentation for a complete list of SQL Server data types.
Numeric type mapping
For SQL Server FLOAT[(n)]:
- If n is not specified, maps to Trino DOUBLE.
- If 1 <= n <= 24, maps to Trino REAL.
- If 24 < n <= 53, maps to Trino DOUBLE.
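As an illustration, a hypothetical table created in SQL Server with both precisions is exposed by the connector with one REAL and one DOUBLE column:

CREATE TABLE example_schema.measurements (
    short_value FLOAT(24), -- 1 <= n <= 24: read as REAL in Trino
    long_value FLOAT(53)   -- 24 < n <= 53: read as DOUBLE in Trino
);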
Character type mapping
For Trino CHAR(n):
- If 1 <= n <= 4000, maps to SQL Server NCHAR(n).
- If n > 4000, maps to SQL Server NVARCHAR(max).
For Trino VARCHAR(n):
- If 1 <= n <= 4000, maps to SQL Server NVARCHAR(n).
- If n > 4000, maps to SQL Server NVARCHAR(max).
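For example, a hypothetical table created through Trino in the example catalog lands in SQL Server with column types determined by these rules:

CREATE TABLE example.web.page_titles (
    short_title VARCHAR(200), -- 1 <= n <= 4000: created as NVARCHAR(200)
    long_title VARCHAR(8000)  -- n > 4000: created as NVARCHAR(max)
);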
Type mapping configuration properties
The following properties can be used to configure how data types from the connected data source are mapped to Trino data types and how the metadata is cached in Trino.
| Property name | Description | Default value |
|---|---|---|
| unsupported-type-handling | Configure how unsupported column data types are handled: IGNORE, column is not accessible; CONVERT_TO_VARCHAR, column is converted to unbounded VARCHAR. The respective catalog session property is unsupported_type_handling. | IGNORE |
| jdbc-types-mapped-to-varchar | Allow forced mapping of comma separated lists of data types to convert to unbounded VARCHAR. | |
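For example, a sketch that exposes unsupported columns as unbounded VARCHAR instead of hiding them:

unsupported-type-handling=CONVERT_TO_VARCHAR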
SQL support
The connector provides read access and write access to data and metadata
in SQL Server. In addition to the globally available and read operation
statements, the connector supports the following features:
- INSERT
- DELETE
- TRUNCATE
- Schema and table management
SQL DELETE
If a WHERE clause is specified, the DELETE operation only works if
the predicate in the clause can be fully pushed down to the data source.
ALTER TABLE
The connector does not support renaming tables across multiple schemas. For example, the following statement is supported:
ALTER TABLE example.schema_one.table_one RENAME TO example.schema_one.table_two
The following statement attempts to rename a table across schemas, and therefore is not supported:
ALTER TABLE example.schema_one.table_one RENAME TO example.schema_two.table_two
Table functions
The connector provides specific table functions to
access SQL Server.
query(varchar) -> table
The query function allows you to query the underlying database
directly. It requires syntax native to SQL Server, because the full
query is pushed down and processed in SQL Server. This can be useful for
accessing native features which are not implemented in Trino or for
improving query performance in situations where running a query natively
may be faster.
Note: Polymorphic table functions may not preserve the order of the query
result. If the table function contains a query with an ORDER BY
clause, the function result may not be ordered as expected.
For example, query the example catalog and select the top 10 percent
of nations by population:
SELECT
*
FROM
TABLE(
example.system.query(
query => 'SELECT
TOP(10) PERCENT *
FROM
tpch.nation
ORDER BY
population DESC'
)
);
Performance
The connector includes a number of performance improvements, detailed in the following sections.
Table statistics
The SQL Server connector can use
table and column statistics for
cost based optimizations, to
improve query processing performance based on the actual data in the
data source.
The statistics are collected by SQL Server and retrieved by the connector.
The connector can use information stored in single-column statistics. SQL Server can create column statistics for certain columns automatically. If column statistics are not created automatically for a certain column, you can create them by executing the following statement in SQL Server:
CREATE STATISTICS example_statistics_name ON table_schema.table_name (column_name);
SQL Server routinely updates the statistics. In some cases, you may want to force a statistics update (for example, after defining new column statistics or after changing data in the table). You can do that by executing the following statement in SQL Server:
UPDATE STATISTICS table_schema.table_name;
Refer to SQL Server documentation for information about options, limitations and additional considerations.
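To inspect the statistics Trino has retrieved for a table, you can run SHOW STATS in Trino, assuming the catalog is named example:

SHOW STATS FOR example.web.clicks;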
Pushdown
The connector supports pushdown for a number of operations:
- Join pushdown
- Limit pushdown
- Top-N pushdown
Aggregate pushdown is supported for the following functions: avg, count,
max, min, sum, stddev, stddev_pop, stddev_samp, variance, var_pop, and
var_samp.
Note: The connector performs pushdown where performance may be improved,
but in order to preserve correctness an operation may not be pushed down.
When pushdown of an operation may result in better performance but risks
correctness, the connector prioritizes correctness.
Cost-based join pushdown
The connector supports cost-based join pushdown to make intelligent
decisions about whether to push down a join operation to the data
source.
When cost-based join pushdown is enabled, the connector only pushes down
join operations if the available table statistics suggest that
doing so improves performance. Note that if no table statistics are
available, join operation pushdown does not occur to avoid a potential
decrease in query performance.
The following table describes catalog configuration properties for join pushdown:
| Property name | Description | Default value |
|---|---|---|
| join-pushdown.enabled | Enable join pushdown. Equivalent catalog session property is join_pushdown_enabled. | true |
| join-pushdown.strategy | Strategy used to evaluate whether join operations are pushed down. Set to AUTOMATIC to enable cost-based join pushdown, or EAGER to push down joins whenever possible. Note that EAGER can push down joins even when table statistics are unavailable, which may result in degraded query performance. Because of this, EAGER is only recommended for testing and troubleshooting purposes. | AUTOMATIC |
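For example, a sketch that turns off join pushdown for a single session, assuming the catalog is named example:

SET SESSION example.join_pushdown_enabled = false;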
Predicate pushdown support
The connector does not support pushdown of any predicates on columns
with textual types like CHAR or VARCHAR. This
ensures correctness of results since the data source may compare strings
case-insensitively.
In the following example, the predicate is not pushed down for either
query since name is a column of type VARCHAR:
SELECT * FROM nation WHERE name > 'CANADA';
SELECT * FROM nation WHERE name = 'CANADA';
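Predicates on non-textual columns remain eligible for pushdown. For example, the following predicate on the numeric nationkey column can be pushed down to SQL Server:

SELECT * FROM nation WHERE nationkey = 19;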
Bulk insert
You can optionally use the bulk copy API to drastically speed up write operations.
Enable bulk copying and a lock on the destination table to meet minimal logging requirements.
The following table shows the relevant catalog configuration properties and their default values:
| Property name | Description | Default |
|---|---|---|
| sqlserver.bulk-copy-for-write.enabled | Use the SQL Server bulk copy API for writes. The corresponding catalog session property is bulk_copy_for_write. | false |
| sqlserver.bulk-copy-for-write.lock-destination-table | Obtain a bulk update lock on the destination table for write operations. The corresponding catalog session property is bulk_copy_for_write_lock_destination_table. Setting is only used when bulk-copy-for-write.enabled=true. | false |
Limitations:
- Column names with leading and trailing spaces are not supported.
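For example, a sketch that enables both settings for a single session, assuming the catalog is named example:

SET SESSION example.bulk_copy_for_write = true;
SET SESSION example.bulk_copy_for_write_lock_destination_table = true;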
Data compression
You can specify the data compression policy for SQL Server tables
with the data_compression table property. Valid policies are NONE,
ROW, or PAGE.
Example:
CREATE TABLE example_schema.scientists (
recordkey VARCHAR,
name VARCHAR,
age BIGINT,
birthday DATE
)
WITH (
data_compression = 'ROW'
);