SingleStore connector
Zipstack Cloud features a powerful SQL querying engine on top of many types of connectors, including Trino connectors, custom connectors, and connectors from the open source Airbyte project. The underlying native connectors are Trino's connectors, and some parts of the documentation for these connectors have been adapted from the connector documentation in the open source Trino project.
The SingleStore (formerly known as MemSQL) connector allows querying and creating tables in an external SingleStore database.
Requirements
To connect to SingleStore, you need:
SingleStore version 7.1.4 or higher.
Network access from Zipstack Cloud to SingleStore. Port 3306 is the default port.
Configuration
To configure the SingleStore connector, create a data source with the MemSQL/SingleStore connector type. The following minimum configuration is required; replace the connection properties as appropriate for your setup:
connection-url=jdbc:singlestore://example.net:3306
connection-user=root
connection-password=secret
The connection-url defines the connection information and parameters
to pass to the SingleStore JDBC driver. The supported parameters for the
URL are available in the SingleStore JDBC driver
documentation.
The connection-user and connection-password are typically required
and determine the user credentials for the connection, often a service
user. You can use secrets to avoid actual values
in the catalog properties files.
Connection security
If you have TLS configured with a globally-trusted certificate installed
on your data source, you can enable TLS between your cluster and the
data source by appending a parameter to the JDBC connection string set
in the connection-url catalog configuration property.
Enable TLS between your cluster and SingleStore by appending the
useSsl=true parameter to the connection-url configuration property:
connection-url=jdbc:singlestore://example.net:3306/?useSsl=true
For more information on TLS configuration options, see the JDBC driver documentation.
Multiple SingleStore servers
You can have as many catalogs as you need, so if you have additional SingleStore servers, simply add more data sources with the SingleStore/MemSQL type.
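For example, a second data source pointing at another SingleStore server might use connection properties like the following (the hostname and credentials are placeholders for illustration):
connection-url=jdbc:singlestore://sales.example.net:3306
connection-user=reporting
connection-password=secret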
General configuration properties
The following table describes general catalog configuration properties for the connector:
| Property name | Description | Default value |
|---|---|---|
| case-insensitive-name-matching | Support case insensitive schema and table names. | false |
| case-insensitive-name-matching.cache-ttl | Duration for which case insensitive schema and table name matching results are cached. This value should be a duration. | 1m |
| case-insensitive-name-matching.config-file | Path to a name mapping configuration file in JSON format that allows Trino to disambiguate between schemas and tables with similar names in different cases. | null |
| case-insensitive-name-matching.config-file.refresh-period | Frequency with which Trino checks the name matching configuration file for changes. This value should be a duration. | (refresh disabled) |
| metadata.cache-ttl | The duration for which metadata, including table and column statistics, is cached. | 0s (caching disabled) |
| metadata.cache-missing | Cache the fact that metadata, including table and column statistics, is not available. | false |
| metadata.cache-maximum-size | Maximum number of objects stored in the metadata cache. | 10000 |
| write.batch-size | Maximum number of statements in a batched execution. Do not change this setting from the default. Non-default values may negatively impact performance. | 1000 |
| dynamic-filtering.enabled | Push down dynamic filters into JDBC queries. | true |
| dynamic-filtering.wait-timeout | Maximum duration for which Trino will wait for dynamic filters to be collected from the build side of joins before starting a JDBC query. Using a large timeout can potentially result in more detailed dynamic filters. However, it can also increase latency for some queries. | 20s |
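For illustration, a minimal sketch of catalog properties that enable case-insensitive name matching and metadata caching, using the properties above (the values are arbitrary examples, not recommendations):
case-insensitive-name-matching=true
metadata.cache-ttl=10m
metadata.cache-missing=true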
Domain compaction threshold
Pushing down a large list of predicates to the data source can
compromise performance. Trino compacts large predicates into a simpler
range predicate by default to ensure a balance between performance and
predicate pushdown. If necessary, the threshold for this compaction can
be increased to improve performance when the data source is capable of
taking advantage of large predicates. Increasing this threshold may
improve pushdown of large dynamic filters.
The domain-compaction-threshold catalog configuration property or the
domain_compaction_threshold catalog session property can be used
to adjust the default value of 32 for this threshold.
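For example, a sketch of raising the threshold either in the catalog configuration or for a single session, assuming the data source is exposed as a catalog named example and using an illustrative value of 256:
domain-compaction-threshold=256
SET SESSION example.domain_compaction_threshold = 256;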
Procedures
system.flush_metadata_cache()
Flush JDBC metadata caches. For example, the following system call flushes the metadata caches for all schemas in the example catalog:
USE example.example_schema;
CALL system.flush_metadata_cache();
Case insensitive matching
When case-insensitive-name-matching is set to true, Trino is able to
query non-lowercase schemas and tables by maintaining a mapping of the
lowercase name to the actual name in the remote system. However, if two
schemas and/or tables have names that differ only in case (such as
\"customers\" and \"Customers\") then Trino fails to query them due to
ambiguity.
In these cases, use the case-insensitive-name-matching.config-file
catalog configuration property to specify a configuration file that maps
these remote schemas/tables to their respective Trino schemas/tables:
{
"schemas": [
{
"remoteSchema": "CaseSensitiveName",
"mapping": "case_insensitive_1"
},
{
"remoteSchema": "cASEsENSITIVEnAME",
"mapping": "case_insensitive_2"
}],
"tables": [
{
"remoteSchema": "CaseSensitiveName",
"remoteTable": "tablex",
"mapping": "table_1"
},
{
"remoteSchema": "CaseSensitiveName",
"remoteTable": "TABLEX",
"mapping": "table_2"
}]
}
Queries against one of the tables or schemas defined in the mapping
attributes are run against the corresponding remote entity. For example,
a query against tables in the case_insensitive_1 schema is forwarded
to the CaseSensitiveName schema and a query against case_insensitive_2
is forwarded to the cASEsENSITIVEnAME schema.
At the table mapping level, a query on case_insensitive_1.table_1 as
configured above is forwarded to CaseSensitiveName.tablex, and a query
on case_insensitive_1.table_2 is forwarded to
CaseSensitiveName.TABLEX.
By default, when a change is made to the mapping configuration file,
Trino must be restarted to load the changes. Optionally, you can set the
case-insensitive-name-matching.config-file.refresh-period to have Trino refresh
the mapping without requiring a restart:
case-insensitive-name-matching.config-file.refresh-period=30s
Non-transactional INSERT
The connector supports adding rows using
INSERT statements. By default, data insertion is
performed by writing data to a temporary table. You can skip this step
to improve performance and write directly to the target table. Set the
insert.non-transactional-insert.enabled catalog property or the
corresponding non_transactional_insert catalog session property to
true.
Note that with this property enabled, data can be corrupted in rare cases where exceptions occur during the insert operation. With transactions disabled, no rollback can be performed.
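For example, a minimal sketch of enabling direct writes for a single session, assuming the data source is exposed as a catalog named example:
SET SESSION example.non_transactional_insert = true;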
Querying SingleStore
The SingleStore connector provides a schema for every SingleStore
database. You can see the available SingleStore databases by running
SHOW SCHEMAS:
SHOW SCHEMAS FROM example;
If you have a SingleStore database named web, you can view the tables
in this database by running SHOW TABLES:
SHOW TABLES FROM example.web;
You can see a list of the columns in the clicks table in the web
database using either of the following:
DESCRIBE example.web.clicks;
SHOW COLUMNS FROM example.web.clicks;
Finally, you can access the clicks table in the web database:
SELECT * FROM example.web.clicks;
If you used a different name for your catalog properties file, use that
catalog name instead of example in the above examples.
Type mapping
Because Trino and SingleStore each support types that the other does
not, this connector modifies some types when
reading or writing data. Data types may not map the same way in both
directions between Trino and the data source. Refer to the following
sections for type mapping in each direction.
SingleStore to Trino type mapping
The connector maps SingleStore types to the corresponding Trino types following this table:
| SingleStore type | Trino type | Notes |
|---|---|---|
| BIT | BOOLEAN | |
| BOOLEAN | BOOLEAN | |
| TINYINT | TINYINT | |
| TINYINT UNSIGNED | SMALLINT | |
| SMALLINT | SMALLINT | |
| SMALLINT UNSIGNED | INTEGER | |
| INTEGER | INTEGER | |
| INTEGER UNSIGNED | BIGINT | |
| BIGINT | BIGINT | |
| BIGINT UNSIGNED | DECIMAL(20, 0) | |
| DOUBLE | DOUBLE | |
| REAL | DOUBLE | |
| DECIMAL(p, s) | DECIMAL(p, s) | See Decimal type handling |
| CHAR(n) | CHAR(n) | |
| TINYTEXT | VARCHAR(255) | |
| TEXT | VARCHAR(65535) | |
| MEDIUMTEXT | VARCHAR(16777215) | |
| LONGTEXT | VARCHAR | |
| VARCHAR(n) | VARCHAR(n) | |
| LONGBLOB | VARBINARY | |
| DATE | DATE | |
| TIME | TIME(0) | |
| TIME(6) | TIME(6) | |
| DATETIME | TIMESTAMP(0) | |
| DATETIME(6) | TIMESTAMP(6) | |
| JSON | JSON | |
No other types are supported.
Trino to SingleStore type mapping
The connector maps Trino types to the corresponding SingleStore types following this table:
| Trino type | SingleStore type | Notes |
|---|---|---|
| BOOLEAN | BOOLEAN | |
| TINYINT | TINYINT | |
| SMALLINT | SMALLINT | |
| INTEGER | INTEGER | |
| BIGINT | BIGINT | |
| DOUBLE | DOUBLE | |
| REAL | FLOAT | |
| DECIMAL(p, s) | DECIMAL(p, s) | See Decimal type handling |
| CHAR(n) | CHAR(n) | |
| VARCHAR(65535) | TEXT | |
| VARCHAR(16777215) | MEDIUMTEXT | |
| VARCHAR | LONGTEXT | |
| VARCHAR(n) | VARCHAR(n) | |
| VARBINARY | LONGBLOB | |
| DATE | DATE | |
| TIME(0) | TIME | |
| TIME(6) | TIME(6) | |
| TIMESTAMP(0) | DATETIME | |
| TIMESTAMP(6) | DATETIME(6) | |
| JSON | JSON | |
No other types are supported.
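As an illustration of the write mapping, consider the following sketch, which assumes a catalog named example and a web schema; per the table above, the unbounded VARCHAR column is created as LONGTEXT and the TIMESTAMP(6) column as DATETIME(6) in SingleStore:
CREATE TABLE example.web.events (
  id BIGINT,
  payload VARCHAR,
  created_at TIMESTAMP(6)
);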
Decimal type handling
DECIMAL types with unspecified precision or scale are mapped to a
Trino DECIMAL with a default precision of 38 and default scale of 0.
The scale can be changed by setting the decimal-mapping configuration
property or the decimal_mapping session property to allow_overflow.
The scale of the resulting type is controlled via the
decimal-default-scale configuration property or the
decimal_default_scale session property. The precision is always 38.
By default, values that require rounding or truncation to fit will cause
a failure at runtime. This behavior is controlled via the
decimal-rounding-mode configuration property or the
decimal_rounding_mode session property, which can be set to
UNNECESSARY (the default), UP, DOWN, CEILING, FLOOR,
HALF_UP, HALF_DOWN, or HALF_EVEN (see
RoundingMode).
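For illustration, a hedged sketch of catalog properties that allow unbounded DECIMAL values to be read with a fixed scale and rounding, using the properties described above (the values are arbitrary examples):
decimal-mapping=allow_overflow
decimal-default-scale=10
decimal-rounding-mode=HALF_UP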
Type mapping configuration properties
The following properties can be used to configure how data types from the connected data source are mapped to Trino data types and how the metadata is cached in Trino.
| Property name | Description | Default value |
|---|---|---|
| unsupported-type-handling | Configure how unsupported column data types are handled: IGNORE, column is not accessible; CONVERT_TO_VARCHAR, column is converted to unbounded VARCHAR. The respective catalog session property is unsupported_type_handling. | IGNORE |
| jdbc-types-mapped-to-varchar | Allow forced mapping of comma separated lists of data types to convert to unbounded VARCHAR. | |
SQL support
The connector provides read access and write access to data and metadata
in a SingleStore database. In addition to the
globally available and
read operation statements, the connector
supports the following features:
INSERT, DELETE, TRUNCATE, CREATE TABLE, CREATE TABLE AS, DROP TABLE, ALTER TABLE, CREATE SCHEMA, and DROP SCHEMA
SQL DELETE
If a WHERE clause is specified, the DELETE operation only works if
the predicate in the clause can be fully pushed down to the data source.
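For instance, the following sketch assumes a clicks table in the web schema with a numeric visitor_id column (hypothetical names); the equality predicate on a numeric column can be fully pushed down, so the DELETE succeeds:
DELETE FROM example.web.clicks WHERE visitor_id = 42;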
ALTER TABLE
The connector does not support renaming tables across multiple schemas. For example, the following statement is supported:
ALTER TABLE example.schema_one.table_one RENAME TO example.schema_one.table_two
The following statement attempts to rename a table across schemas, and therefore is not supported:
ALTER TABLE example.schema_one.table_one RENAME TO example.schema_two.table_two
Performance
The connector includes a number of performance improvements, detailed in the following sections.
Pushdown
The connector supports pushdown for a number of operations:
Join pushdown
Limit pushdown
Top-N pushdown
Note: The connector performs pushdown where performance may be improved, but in order to preserve correctness an operation may not be pushed down. When pushdown of an operation may result in better performance but risks correctness, the connector prioritizes correctness.
Join pushdown
The join-pushdown.enabled catalog configuration property or the
join_pushdown_enabled catalog session property controls
whether the connector pushes down join operations. The property defaults
to false, and enabling join pushdown may negatively impact
performance for some queries.
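For example, a minimal sketch of enabling join pushdown for a single session, assuming the data source is exposed as a catalog named example:
SET SESSION example.join_pushdown_enabled = true;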
Predicate pushdown support
The connector does not support pushdown of any predicates on columns
with textual types like CHAR or VARCHAR. This
ensures correctness of results since the data source may compare strings
case-insensitively.
In the following example, the predicate is not pushed down for either
query since name is a column of type VARCHAR:
SELECT * FROM nation WHERE name > 'CANADA';
SELECT * FROM nation WHERE name = 'CANADA';