
Add credentials
- Create a new pipeline or open an existing pipeline.
- Expand the left side of your screen to view the file browser.
- Scroll down and click the file named `io_config.yaml`.
- Enter the following keys and values under the key named `default` (you can have multiple profiles; add them under whichever profile is relevant to you).
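For reference, the ClickHouse entries in `io_config.yaml` typically look like the following sketch. The key names follow Mage's ClickHouse template; the values shown are placeholders, so adjust them to match your ClickHouse deployment:

```yaml
version: 0.1.1
default:
  CLICKHOUSE_DATABASE: default
  CLICKHOUSE_HOST: host.docker.internal
  CLICKHOUSE_INTERFACE: http
  CLICKHOUSE_PASSWORD: null
  CLICKHOUSE_PORT: 8123
  CLICKHOUSE_USERNAME: null
```

Port `8123` is ClickHouse's default HTTP interface port; use your server's native or HTTPS port if you connect differently.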
Using SQL block
- Create a new pipeline or open an existing pipeline.
- Add a data loader, transformer, or data exporter block.
- Select `SQL`.
- Under the `Data provider` dropdown, select `ClickHouse`.
- Under the `Profile` dropdown, select `default` (or the profile you added credentials under).
- Enter the optional name of the table to write to.
- Under the `Write policy` dropdown, select `Replace` or `Append` (see the SQL blocks guide for more information on write policies).
- Enter this test query: `SELECT 1`.
- Run the block.
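Once the test query above succeeds, the block can run any ClickHouse SQL. A slightly more realistic example might look like this (the table and column names are hypothetical):

```sql
-- Verify connectivity first:
SELECT 1;

-- Then query your own data, e.g.:
SELECT user_id, event_type, created_at
FROM events
WHERE created_at >= now() - INTERVAL 7 DAY
```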
Using Python block
- Create a new pipeline or open an existing pipeline.
- Add a data loader, transformer, or data exporter block (the code snippet below is for a data loader).
- Select `Generic (no template)`.
- Enter this code snippet (note: change `config_profile` from `default` if you use a different profile):
- Run the block.
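A minimal data loader sketch is shown below, modeled on Mage's generic loader templates. It assumes the block runs inside a Mage project (so `mage_ai` and its decorators are available) and that credentials were added under the `default` profile as described above:

```python
from os import path

from mage_ai.data_preparation.repo_manager import get_repo_path
from mage_ai.io.clickhouse import ClickHouse
from mage_ai.io.config import ConfigFileLoader

if 'data_loader' not in globals():
    from mage_ai.data_preparation.decorators import data_loader


@data_loader
def load_data_from_clickhouse(*args, **kwargs):
    """Load a DataFrame from ClickHouse using credentials in io_config.yaml."""
    query = 'SELECT 1'  # replace with your query
    config_path = path.join(get_repo_path(), 'io_config.yaml')
    config_profile = 'default'  # change this if you use a different profile

    return ClickHouse.with_config(
        ConfigFileLoader(config_path, config_profile),
    ).load(query)
```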
Destination table in Data Exporter
If the destination table does not exist and the `Write policy` is set
to `Replace`, the data exporter will automatically create a table in ClickHouse
with `Engine = Memory` and a default schema inferred from the data.
However, this may not be optimal for many use cases. Since table creation
in ClickHouse involves numerous details (engine, sorting key, partitioning,
and so on), it is strongly advised to create the destination table manually
before loading data to ensure it meets your specific requirements.
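For example, a manually created destination table with an explicit engine and sorting key might look like this (the table name, columns, and engine choice are illustrative; pick whatever fits your workload):

```sql
CREATE TABLE IF NOT EXISTS events (
    user_id    UInt64,
    event_type LowCardinality(String),
    created_at DateTime
)
ENGINE = MergeTree
ORDER BY (user_id, created_at);
```

A `MergeTree`-family engine with a sensible `ORDER BY` key is usually a better default for analytical data than `Memory`, which keeps all rows in RAM and loses them on server restart.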