Connect Streamlit to Nekt using the Nekt SDK and a scoped access token. The token grants read access to the specific tables you choose, and can be used across all cloud deployments.
Available on: Nekt Express, GCP, and AWS.

Generate a token

1

Create new access token

In the Nekt platform, go to the Access Tokens page. Click Create a token.
2

Select tables

Select the tables you want to give access to. You can select tables from any layer of your Lakehouse.
The generated token grants access only to the tables selected here. You can create multiple tokens to give access to different sets of tables — useful for scoping access per application or user.
Click Create token.
3

Copy the token

A success message will confirm the token was created. Go to the Access Tokens page to view it and copy your token.

Connect with the Nekt SDK

The Nekt SDK lets your Streamlit app load tables directly from your Lakehouse as Spark DataFrames.

Project setup

Your Streamlit project needs three files alongside your app code: requirements.txt, packages.txt, and .streamlit/secrets.toml (covered below).
requirements.txt — Python dependencies:
streamlit
git+https://github.com/nektcom/nekt-sdk-py.git#egg=nekt-sdk
packages.txt — System packages required by the SDK’s Spark runtime:
openjdk-17-jdk
The packages.txt file is used by Streamlit Community Cloud to install system-level dependencies. If you deploy elsewhere, install Java 17 through your platform’s package manager or Docker image.
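If you deploy with Docker, a Dockerfile along these lines installs Java 17 for the Spark runtime. This is a sketch, not an official image: the base image, port, and entrypoint are assumptions you should adjust to your setup.

```dockerfile
FROM python:3.11-slim

# Java 17 is required by the SDK's Spark runtime
RUN apt-get update && apt-get install -y --no-install-recommends openjdk-17-jdk \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
EXPOSE 8501
CMD ["streamlit", "run", "streamlit_app.py", "--server.address=0.0.0.0"]
```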

Store the token as a secret

Never hardcode your token. Use Streamlit secrets management to store it securely. Create a .streamlit/secrets.toml file in your project root:
DATA_ACCESS_TOKEN = "your-nekt-token-here"
Add .streamlit/secrets.toml to your .gitignore to keep the token out of version control.
When deploying to Streamlit Community Cloud, add the same key-value pair in your app’s Secrets settings.
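For local runs without a secrets.toml file, it can be convenient to fall back to an environment variable. A minimal sketch — the helper name `resolve_token` is ours, not part of the SDK:

```python
import os

def resolve_token(secrets, env=None):
    """Pick the Nekt token from Streamlit secrets, else from the environment."""
    env = os.environ if env is None else env
    token = secrets.get("DATA_ACCESS_TOKEN") or env.get("DATA_ACCESS_TOKEN")
    if not token:
        raise RuntimeError("DATA_ACCESS_TOKEN is not configured")
    return token
```

In the app this would be used as `nekt.data_access_token = resolve_token(st.secrets)`.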

Load data in your app

Initialize the SDK with the token from secrets, then call nekt.load_table() to fetch tables from your Lakehouse:
import streamlit as st
import nekt

nekt.data_access_token = st.secrets["DATA_ACCESS_TOKEN"]

df = nekt.load_table(layer_name="Raw", table_name="orders")

st.dataframe(df.toPandas())
load_table returns a Spark DataFrame. Call .toPandas() to convert it for use with Streamlit’s display components. You can also select specific columns before converting:
df = nekt.load_table(
    layer_name="Raw",
    table_name="orders"
).select("id", "customer_name", "total")

st.dataframe(df.toPandas())
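Streamlit reruns the whole script on every interaction, so repeated `nekt.load_table` calls can be slow. In a real app you would typically wrap the load in Streamlit's `@st.cache_data`; the pattern it implements is plain memoization, sketched here with only the standard library (`load_fn` is a stand-in for the actual SDK call):

```python
from typing import Callable, Dict, Tuple

def cached_loader(load_fn: Callable[[str, str], object]):
    """Memoize table loads by (layer_name, table_name), as st.cache_data would."""
    cache: Dict[Tuple[str, str], object] = {}

    def load(layer_name: str, table_name: str):
        key = (layer_name, table_name)
        if key not in cache:  # only fetch on the first call
            cache[key] = load_fn(layer_name, table_name)
        return cache[key]

    return load
```

In the app itself this is simply `@st.cache_data` on a function that calls `nekt.load_table`. Note that `st.cache_data` serializes return values, so convert with `.toPandas()` inside the cached function rather than caching the Spark DataFrame directly.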

Full example

import streamlit as st
import nekt

nekt.data_access_token = st.secrets["DATA_ACCESS_TOKEN"]

st.title("Sales Dashboard")

orders_df = nekt.load_table(
    layer_name="Raw",
    table_name="orders"
).select("id", "customer_name", "total")

st.metric("Total orders", orders_df.count())
st.dataframe(orders_df.toPandas())
For a complete working project you can fork and deploy, see the streamlit-demo repository.

Deploy to Streamlit Community Cloud

1

Push your code to GitHub

Your repository should contain at least streamlit_app.py, requirements.txt, and packages.txt. Do not commit .streamlit/secrets.toml.
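The minimal layout can be scaffolded like so (a sketch using the filenames from this guide, run from an empty directory):

```shell
# Create the minimal project layout for deployment
mkdir -p .streamlit
touch streamlit_app.py requirements.txt packages.txt .streamlit/secrets.toml
# Keep the token file out of version control
printf '.streamlit/secrets.toml\n' > .gitignore
ls -A
```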
2

Create a new app in Streamlit Community Cloud

Go to share.streamlit.io, click New app, and select your repository, branch, and main file path.
3

Add your Nekt token to Secrets

In the app’s Advanced settings, paste your secret:
DATA_ACCESS_TOKEN = "your-nekt-token-here"
4

Deploy

Click Deploy. Streamlit will install the system packages from packages.txt, the Python dependencies from requirements.txt, and start your app.
Once deployed, you’re ready to go!

SDK reference

The following SDK methods are available for use in your Streamlit app. See the full SDK documentation for details on all methods.
Method | Description
nekt.load_table(layer_name, table_name) | Load a table from your Lakehouse as a Spark DataFrame
nekt.get_spark_session() | Access the Spark session for advanced operations

Need help?

Contact our support team if you encounter issues during setup.