Configuring Zendesk as a Source
In the Sources tab, click the “Add source” button at the top right of your screen. Then select the Zendesk option from the list of connectors. Click Next and you’ll be prompted to add your access.
1. Add account access
Authenticate Nekt against your Zendesk Support account using an API token. See Zendesk API token authentication for how to create a token. The following configurations are available:
- API Token: The token used to authenticate against the Zendesk Support API.
- User email: The email of the Zendesk user associated with the token.
- Company subdomain: The subdomain of your Zendesk Support URL, https://{subdomain}.zendesk.com. See Zendesk endpoint conventions.
- Start Date: The earliest date and time from which incremental streams should begin on the first sync (combined with saved state on later runs).
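Zendesk API token authentication uses HTTP Basic auth with `{email}/token` as the username and the API token as the password. A minimal sketch of how these three settings fit together (the email, token, and subdomain below are placeholder values, not real credentials):

```python
import base64

def zendesk_auth_header(email: str, api_token: str) -> str:
    # API token auth: HTTP Basic with "{email}/token" as the username
    # and the API token as the password.
    raw = f"{email}/token:{api_token}".encode()
    return "Basic " + base64.b64encode(raw).decode()

def api_url(subdomain: str, path: str) -> str:
    # Endpoint convention: https://{subdomain}.zendesk.com/api/v2/...
    return f"https://{subdomain}.zendesk.com/api/v2/{path}"

# Placeholder values for illustration only:
header = zendesk_auth_header("agent@example.com", "abc123")
url = api_url("mycompany", "tickets.json")
```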
2. Select streams
Choose which data streams you want to sync. For faster extractions, select only the streams that are relevant to your analysis. You can select entire groups of streams or pick specific ones.
Tip: You can find a stream more quickly by typing its name.
Select the streams and click Next.
3. Configure data streams
Customize how your data will appear in your catalog. Select the layer where the data will be placed, a folder to organize it inside the layer, a name for each table (which will contain the fetched data), and the type of sync.
- Layer: choose between the existing layers in your catalog. This is where you will find your new extracted tables once the extraction runs successfully.
- Folder: a folder can be created inside the selected layer to group all tables being created from this new data source.
- Table name: we suggest a name, but feel free to customize it. You have the option to add a prefix to all tables at once and make this process faster!
- Sync Type: you can choose between INCREMENTAL and FULL_TABLE.
- Incremental: each extraction fetches only new or updated records - useful if, for example, you want to keep every record ever fetched, even ones later deleted at the source.
- Full table: each extraction fetches the current state of the data - useful if, for example, you don’t want deleted records lingering in your catalog.
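The difference between the two sync types can be sketched with plain dictionaries (illustrative only; records are assumed to be keyed by id):

```python
def full_table_sync(source_records):
    # FULL_TABLE: the catalog mirrors the current source state, so records
    # deleted upstream disappear from the catalog table.
    return {r["id"]: r for r in source_records}

def incremental_sync(catalog, new_or_updated):
    # INCREMENTAL: only new or changed records are fetched and upserted,
    # so records deleted upstream remain in the catalog table.
    merged = dict(catalog)
    merged.update({r["id"]: r for r in new_or_updated})
    return merged

# Illustrative records:
catalog = {1: {"id": 1, "status": "open"}, 2: {"id": 2, "status": "solved"}}
source_now = [{"id": 1, "status": "closed"}]  # record 2 was deleted upstream
incremental = incremental_sync(catalog, source_now)
snapshot = full_table_sync(source_now)
```

After these two calls, `incremental` still contains record 2 while `snapshot` does not, which is exactly the trade-off described above.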
4. Configure data source
Describe your data source for easy identification within your organization, in 140 characters or fewer. To define your Trigger, consider how often you want data to be extracted from this source. This usually depends on how frequently you need the new table data updated (every day, once a week, or only at specific times). Optionally, you can define some additional settings:
- Configure Delta Log Retention to determine how long we should store old states of this table as it gets updated. Read more about this resource here.
- Determine when to execute an Additional Full Sync. This complements the incremental extractions, periodically ensuring that your data is fully synchronized with your source.
5. Check your new source
You can view your new source on the Sources page. If needed, manually trigger the source extraction by clicking the arrow button. Once executed, your data will appear in your Catalog.
Streams and Fields
Below you’ll find all available data streams from Zendesk and their corresponding fields:
Tickets
Stream for support tickets: status, routing, requester and assignee, custom fields, tags, satisfaction, and channel metadata. Incremental sync uses the cursor-based ticket export and generated_timestamp.
Key Fields:
- id - Unique identifier for the ticket
- generated_timestamp - Unix timestamp used for the incremental sync cursor
- status, priority, type - Workflow fields
- custom_status_id - Custom ticket status when enabled
- subject, raw_subject, description - Subject and first public comment body
- external_id, encoded_id - External system and encoded identifiers
- requester_id, submitter_id, assignee_id - User identifiers
- organization_id, group_id - Organization and group
- collaborator_ids, follower_ids, email_cc_ids - Collaboration
- sharing_agreement_ids - Sharing agreements on the ticket
- custom_fields - Array of { id, value } for custom fields (values stored as serialized strings)
- fields - All field id/value pairs from the API (values stored as serialized strings)
- due_at - Due date and time
- satisfaction_rating - Nested comment, id, score
- tags - Ticket tags
- ticket_form_id, brand_id, forum_topic_id - Form, brand, and legacy forum link
- is_public, allow_channelback, allow_attachments - Channel behavior
- from_messaging_channel - Whether the ticket originated from messaging
- has_incidents, problem_id, followup_ids - Problem and incident links
- created_at, updated_at - Record timestamps
- url - API URL of the ticket resource
- via.channel, via.source - How the ticket was created (from, to, rel, addresses, subject, related ids)
Ticket Metrics
Stream for per-ticket operational metrics from the Ticket Metrics API. Join to Tickets on ticket_id.
Key Fields:
- id - Unique identifier for the metrics row
- ticket_id - Related ticket
- assigned_at, initially_assigned_at, assignee_updated_at - Assignment timeline
- assignee_stations, group_stations - Count of assignee or group changes
- latest_comment_added_at, requester_updated_at, status_updated_at, custom_status_updated_at - Activity timestamps
- first_resolution_time_in_minutes, full_resolution_time_in_minutes, on_hold_time_in_minutes, agent_wait_time_in_minutes, requester_wait_time_in_minutes, reply_time_in_minutes - Objects with business and/or calendar minute counts
- reply_time_in_seconds - Object with calendar seconds where applicable
- reopens, replies - Counts
- solved_at - When the ticket reached solved
- created_at, updated_at, url
Ticket Fields
Stream for ticket field definitions (system and custom), including options and linked custom statuses.
Key Fields:
- id, type, title, raw_title, key - Identity and field type
- active, required, removable, position - Behavior and sort order
- description, raw_description, agent_description - Help text
- regexp_for_validation - Input validation pattern
- visible_in_portal, editable_in_portal, required_in_portal, title_in_portal, raw_title_in_portal, collapsed_for_agents
- system_field_options - Array of { name, value }
- custom_field_options - Array of { id, name, raw_name, value, default }
- custom_statuses - Custom status definitions when linked (id, labels, status_category, active, default, timestamps, url)
- sub_type_id, tag
- created_at, updated_at, url
Users
Stream for agents and end users. Incremental sync uses the cursor-based user export on updated_at.
Key Fields:
- id - Unique identifier for the user
- name, email, phone, alias
- external_id, active, suspended, verified
- organization_id, default_group_id, role, role_type, custom_role_id
- locale, locale_id, time_zone, iana_time_zone
- details, notes, signature, tags
- photo - Attachment metadata (url, id, file_name, content_url, size, dimensions, thumbnails, etc.)
- moderator, ticket_restriction, restricted_agent, only_private_comments
- shared, shared_agent, shared_phone_number
- two_factor_auth_enabled, last_login_at, report_csv
- user_fields - Array of { key, value } (values serialized as strings)
- created_at, updated_at, url
Organizations
Stream for customer organizations. Incremental sync uses the incremental organizations API on updated_at.
Key Fields:
- id, name, details, notes
- external_id, domain_names, tags
- group_id - Associated group
- shared_tickets, shared_comments - Visibility flags
- created_at, updated_at, deleted_at
- organization_fields - Array of { key, value } (values serialized as strings)
- url
Organization Fields
Stream for organization custom field definitions.
Key Fields:
- id, key, type, title, raw_title
- active, system, position
- description, raw_description, regexp_for_validation
- system_field_options, custom_field_options - Same shapes as Ticket Fields where applicable
- created_at, updated_at, url
User Fields
Stream for user custom field definitions.
Key Fields:
- id, key, type, title, raw_title
- active, system, position
- description, raw_description, regexp_for_validation
- created_at, updated_at, url
Data Model
The core data streams are related through a few join keys: Ticket Metrics joins to Tickets on ticket_id; Tickets join to Users via requester_id, submitter_id, or assignee_id; and Tickets and Users join to Organizations via organization_id.
Use Cases for Data Analysis
This guide outlines valuable business intelligence use cases for consolidated Zendesk data, along with ready-to-use SQL queries that you can run on Explorer.
1. Ticket Detail and Resolution Metrics
Join tickets to ticket metrics to analyze resolution time, replies, and reopens.
Business Value:
- Track full and first resolution time by priority or group
- Identify tickets with high reopen counts
- Combine with Users and Organizations for owner- or account-level views
SQL query
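As a minimal, runnable sketch of this join, here is the equivalent query run through Python's sqlite3. The table names (tickets, ticket_metrics) are hypothetical, and the resolution-time columns are flattened to plain integers for illustration; in the lake they are nested objects with business/calendar counts, and your actual table names and SQL dialect depend on your catalog and warehouse:

```python
import sqlite3

# Join Tickets to Ticket Metrics on ticket_id, mirroring the sample result below.
QUERY = """
SELECT t.id AS ticket_id, t.subject, t.status, t.priority, t.organization_id,
       m.full_resolution_time_in_minutes AS full_resolution_calendar_mins,
       m.first_resolution_time_in_minutes AS first_resolution_calendar_mins,
       m.replies, m.reopens, m.solved_at
FROM tickets t
LEFT JOIN ticket_metrics m ON m.ticket_id = t.id
ORDER BY t.id DESC
"""

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tickets (id INTEGER, subject TEXT, status TEXT,
                      priority TEXT, organization_id INTEGER);
CREATE TABLE ticket_metrics (ticket_id INTEGER,
                             full_resolution_time_in_minutes INTEGER,
                             first_resolution_time_in_minutes INTEGER,
                             replies INTEGER, reopens INTEGER, solved_at TEXT);
INSERT INTO tickets VALUES (10042, 'Billing question', 'solved', 'normal', 501);
INSERT INTO tickets VALUES (10041, 'Login failure', 'open', 'urgent', 502);
INSERT INTO ticket_metrics VALUES (10042, 180, 45, 4, 0, '2024-11-20 14:22:00');
""")
rows = conn.execute(QUERY).fetchall()
```

The LEFT JOIN keeps tickets that have no metrics row yet (such as the open ticket above), which produces the NULL metric columns seen in the sample result.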
Sample Result
| ticket_id | subject | status | priority | organization_id | full_resolution_calendar_mins | first_resolution_calendar_mins | replies | reopens | solved_at |
|---|---|---|---|---|---|---|---|---|---|
| 10042 | Billing question | solved | normal | 501 | 180 | 45 | 4 | 0 | 2024-11-20 14:22:00 |
| 10041 | Login failure | open | urgent | 502 | NULL | NULL | 1 | 0 | NULL |
| 10040 | Refund request | solved | high | 501 | 1440 | 120 | 6 | 1 | 2024-11-19 09:05:00 |
2. Ticket Volume by Organization
Aggregate recent ticket activity by organization for account health and workload reporting.
Business Value:
- Compare ticket volume across customers
- Spot organizations with rising open or urgent ticket counts
- Prioritize CSM or support outreach
SQL query
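As a minimal, runnable sketch of this rollup, here is the equivalent query run through Python's sqlite3. The table names (organizations, tickets) are hypothetical; adapt the names and SQL dialect to your catalog and warehouse:

```python
import sqlite3

# Per-organization ticket rollup, mirroring the sample result below.
QUERY = """
SELECT o.id AS organization_id, o.name AS organization_name,
       COUNT(t.id) AS ticket_count,
       SUM(CASE WHEN t.status = 'open' THEN 1 ELSE 0 END) AS open_ticket_count,
       SUM(CASE WHEN t.priority = 'urgent' THEN 1 ELSE 0 END) AS urgent_ticket_count
FROM organizations o
LEFT JOIN tickets t ON t.organization_id = o.id
GROUP BY o.id, o.name
ORDER BY ticket_count DESC
"""

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE organizations (id INTEGER, name TEXT);
CREATE TABLE tickets (id INTEGER, organization_id INTEGER,
                      status TEXT, priority TEXT);
INSERT INTO organizations VALUES (501, 'Acme Corp');
INSERT INTO organizations VALUES (502, 'Northwind');
INSERT INTO tickets VALUES (1, 501, 'open', 'urgent');
INSERT INTO tickets VALUES (2, 501, 'solved', 'normal');
INSERT INTO tickets VALUES (3, 502, 'open', 'normal');
""")
rows = conn.execute(QUERY).fetchall()
```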
Sample Result
| organization_id | organization_name | ticket_count | open_ticket_count | urgent_ticket_count |
|---|---|---|---|---|
| 501 | Acme Corp | 128 | 22 | 3 |
| 502 | Northwind | 94 | 18 | 1 |
| 503 | Contoso | 61 | 9 | 0 |
- Prioritize accounts with the highest recent volume or urgent load
- Balance team workload across segments
- Follow up when open or urgent counts trend upward
Implementation Notes
Data Quality Considerations
- Custom field payloads (custom_fields, fields, user_fields, organization_fields) may arrive as serialized strings in the lake; parse them as JSON when you need typed values
- For trend reporting on tickets, ensure Start Date and triggers cover the window you analyze (for example, at least 30 days)
- Join Ticket Metrics to Tickets for SLA-style metrics; metrics rows are keyed by ticket_id
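The JSON-parsing note above can be sketched as follows. The payload shown is hypothetical, and the exact serialization in your lake may differ; this assumes a JSON string holding the { id, value } array described in the Tickets stream:

```python
import json

def parse_custom_fields(payload):
    # Map field id -> value for easy lookup; values stay as stored (strings).
    return {f["id"]: f["value"] for f in json.loads(payload)}

# Hypothetical serialized custom_fields payload ({ id, value } array):
raw = '[{"id": 360001, "value": "refund"}, {"id": 360002, "value": "true"}]'
fields = parse_custom_fields(raw)
```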
API Limits & Performance
- Zendesk returns HTTP 429 when rate limits are reached; the connector retries with backoff using response headers when available
- Selecting only the streams you need reduces extraction time and API usage
- Incremental streams (Tickets, Users, Organizations) advance from Start Date on first sync, then from saved replication state
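The 429 handling described above can be sketched like this; it is illustrative only, not the connector's actual implementation:

```python
import time

def request_with_backoff(send, max_retries=5):
    # `send` performs one HTTP request and returns (status, headers, body).
    delay = 1.0
    for _ in range(max_retries):
        status, headers, body = send()
        if status != 429:
            return status, body
        # Prefer the server-suggested wait when a Retry-After header is present.
        time.sleep(float(headers.get("Retry-After", delay)))
        delay = min(delay * 2, 60)  # exponential fallback, capped
    raise RuntimeError("rate limited: retries exhausted")

# Simulated responses: two 429s (Retry-After: 0 keeps the demo instant), then 200.
responses = iter([(429, {"Retry-After": "0"}, ""),
                  (429, {"Retry-After": "0"}, ""),
                  (200, {}, "ok")])
status, body = request_with_backoff(lambda: next(responses))
```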
Skills for agents
Download Zendesk skills file
Zendesk connector documentation as plain markdown, for use in AI agent contexts.