Snowflake is a cloud data platform that separates storage and compute, offering elastic scalability and strong security. It is widely used for analytics and data warehousing, making it a great source to bring operational data into Nekt.

Configuring Snowflake as a Source

In the Sources tab, click the “Add source” button at the top right of your screen. Then select the Snowflake option from the list of connectors. Click Next and you’ll be prompted to add your credentials.

1. Add account access

You’ll need to provide your Snowflake connection details to authorize Nekt to access your data. The following configurations are available:
  • Account identifier (account): Your Snowflake account identifier. You can use the format <account_locator>.<cloud_region_id>.<cloud>. Check Snowflake’s documentation for more details. This field is required.
  • Warehouse (warehouse): Your Snowflake warehouse to execute queries. This field is required.
  • User (user): The login name for your Snowflake user. This field is required.
  • Password (password): The password for your Snowflake user. Do not use this field if you want to log in with a private key.
  • Private key (private_key): The private key for your Snowflake user. Do not use this field if you want to log in with a password.
  • Database name (database): The Snowflake database if you want to filter the discovered tables.
  • Schema (schema): The schema from your database if you want to filter the discovered tables.
  • Role (role): The role to use when fetching data.
  • Chunk size (chunk_size): The number of rows to fetch at a time. If set to 0, the connector fetches all rows at once (no chunking). Default is 25000.
  • SSH Tunnel (ssh_tunnel): SSH Tunnel to be used in your database connection.
    • Enable SSH Tunnel (enable): Whether to use an SSH tunnel for your database connection (default: false).
    • SSH host (host): The host for accessing your SSH tunnel.
    • SSH username (username): The username for accessing your SSH tunnel.
    • SSH port (port): The port used for connecting to your SSH tunnel. The default SSH port is 22, but your port might be different.
    • SSH password (password): The password for accessing your SSH tunnel. Use either password or private key.
    • SSH private key (private_key): The private key for accessing your SSH tunnel. Use either password or private key.
Once you’re done, click Next.
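To see how the fields above fit together, here is a minimal sketch of assembling them into a set of connection parameters. The helper name and its validation rules are assumptions for illustration only, not Nekt's actual implementation; note how it enforces the rule that exactly one of password or private key must be provided.

```python
from typing import Optional

def build_connection_params(
    account: str,
    warehouse: str,
    user: str,
    password: Optional[str] = None,
    private_key: Optional[str] = None,
    database: Optional[str] = None,
    schema: Optional[str] = None,
    role: Optional[str] = None,
    chunk_size: int = 25000,
) -> dict:
    """Assemble keyword arguments for a Snowflake connection.

    Exactly one of `password` or `private_key` must be set, mirroring
    the rule described in the field list above.
    """
    if (password is None) == (private_key is None):
        raise ValueError("Provide exactly one of password or private_key")
    if chunk_size < 0:
        raise ValueError("chunk_size must be 0 (no chunking) or positive")

    params = {"account": account, "warehouse": warehouse, "user": user}
    if password is not None:
        params["password"] = password
    else:
        params["private_key"] = private_key
    # Optional filters: only include them when set.
    for key, value in {"database": database, "schema": schema, "role": role}.items():
        if value is not None:
            params[key] = value
    return params

# Example account identifier in the <account_locator>.<cloud_region_id>.<cloud>
# format mentioned above (the values themselves are made up):
params = build_connection_params(
    account="xy12345.us-east-1.aws",
    warehouse="ANALYTICS_WH",
    user="NEKT_USER",
    password="***",
    database="SALES",
)
```

In practice, a dictionary like this would be passed to a Snowflake client; the chunk size would then control how many rows are fetched per batch rather than loading the whole table into memory at once.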

2. Select streams

Choose which data streams you want to sync. You can select entire groups of streams or pick specific ones.
Tip: You can find a stream faster by typing its name.
Select the streams and click Next.

3. Configure data streams

Customize how your data will appear in your catalog. Select the layer where the data will be placed, a name for each table (which will contain the fetched data), and the type of sync.
  • Layer: companies on the Growth plan can choose which layer the tables with the extracted data will be placed in.
  • Table name: we suggest a name, but feel free to customize it. You can also add a prefix to all tables at once to speed this up!
  • Sync Type: depending on the data you are bringing to the lake, you can choose between INCREMENTAL and FULL_TABLE. Read more about Sync Types here.
Once you are done configuring, click Next.

4. Configure data source

Describe your data source for easy identification within your organization. You can note things like which data it brings, which team it belongs to, etc. To define your Trigger, consider how often you want data to be extracted from this source. This usually depends on how frequently you need the table data refreshed (every day, once a week, or only at specific times). Optionally, you can define some additional settings (if available):
  • Configure Delta Log Retention and determine for how long we should store old states of this table as it gets updated.
  • Determine when to execute an Additional Full Sync. This will complement the incremental data extractions, ensuring that your data is completely synchronized with your source every once in a while.
Once you are ready, click Next to finalize the setup.

5. Check your new source

You can view your new source on the Sources page. To see it in your Catalog, you need at least one successful source run, so wait for the pipeline to finish. You can monitor its execution and completion on the Sources page and, if needed, trigger the pipeline manually by clicking the refresh icon. Once it has run successfully, your new table will appear in the Catalog section.
If you encounter any issues, reach out to us via Slack, and we’ll gladly assist you!

Skills for agents

Download Snowflake skills file

Snowflake connector documentation as plain markdown, for use in AI agent contexts.