# Add snapshots to your DAG
## What are snapshots?
Analysts often need to "look back in time" at previous data states in their mutable tables. While some source data systems are built in a way that makes accessing historical data possible, this is not always the case. dbt provides a mechanism, snapshots, which records changes to a mutable table over time.
Snapshots implement type-2 Slowly Changing Dimensions over mutable source tables. These Slowly Changing Dimensions (or SCDs) identify how a row in a table changes over time. Imagine you have an `orders` table where the `status` field can be overwritten as the order is processed.
| id | status | updated_at |
|---|---|---|
| 1 | pending | 2024-01-01 |
Now, imagine that the order goes from "pending" to "shipped". That same record will now look like:
| id | status | updated_at |
|---|---|---|
| 1 | shipped | 2024-01-02 |
This order is now in the "shipped" state, but we've lost the information about when the order was last in the "pending" state. This makes it difficult (or impossible) to analyze how long it took for an order to ship. dbt can "snapshot" these changes to help you understand how values in a row change over time. Here's an example of a snapshot table for the previous example:
| id | status | updated_at | dbt_valid_from | dbt_valid_to |
|---|---|---|---|---|
| 1 | pending | 2024-01-01 | 2024-01-01 | 2024-01-02 |
| 1 | shipped | 2024-01-02 | 2024-01-02 | null |
## Configuring snapshots
It is not possible to "preview data" or "compile sql" for snapshots in dbt Cloud. Instead, run the `dbt snapshot` command in the IDE.
## How snapshots work
When you run the `dbt snapshot` command:

- **On the first run:** dbt will create the initial snapshot table. This will be the result set of your `select` statement, with additional columns including `dbt_valid_from` and `dbt_valid_to`. All records will have a `dbt_valid_to = null`.
- **On subsequent runs:** dbt will check which records have changed or if any new records have been created:
  - The `dbt_valid_to` column will be updated for any existing records that have changed.
  - The updated record and any new records will be inserted into the snapshot table. These records will now have `dbt_valid_to = null`.
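Conceptually, a subsequent run is an invalidate-then-insert pass. The following is a simplified, Postgres-flavored sketch only; the SQL dbt actually generates is adapter-specific and more involved, and the table and column names here are illustrative:

```sql
-- Step 1: invalidate the current snapshot row for any source row that changed.
update snapshots.orders_snapshot as snap
set dbt_valid_to = src.updated_at
from raw.orders as src
where snap.id = src.id
  and snap.dbt_valid_to is null
  and src.updated_at > snap.updated_at;

-- Step 2: insert the new version of changed rows, plus any brand-new rows.
insert into snapshots.orders_snapshot (id, status, updated_at, dbt_valid_from, dbt_valid_to)
select src.id, src.status, src.updated_at, src.updated_at, null
from raw.orders as src
where not exists (
    select 1
    from snapshots.orders_snapshot as snap
    where snap.id = src.id
      and snap.dbt_valid_to is null
);
```

After step 1, a changed row no longer has a "current" version (`dbt_valid_to is null`), so step 2 inserts its new version; unchanged rows still have a current version and are skipped.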
Note, these column names can be customized to your team or organizational conventions using the `snapshot_meta_column_names` config.

Snapshots can be referenced in downstream models the same way as models, by using the `ref` function.
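For example, a downstream model can compute durations from a snapshot's validity window. A minimal sketch, assuming a snapshot named `orders_snapshot` (hypothetical name; timestamp arithmetic syntax varies by warehouse):

```sql
-- models/order_status_history.sql (hypothetical downstream model)
-- How long did each order spend in each status?
select
    id,
    status,
    dbt_valid_from,
    dbt_valid_to,
    -- still-current rows have dbt_valid_to = null, so fall back to "now"
    coalesce(dbt_valid_to, current_timestamp) - dbt_valid_from as time_in_status
from {{ ref('orders_snapshot') }}
```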
## Detecting row changes
Snapshot "strategies" define how dbt knows if a row has changed. There are two strategies built into dbt:

- **Timestamp**: Uses an `updated_at` column to determine if a row has changed.
- **Check**: Compares a list of columns between their current and historical values to determine if a row has changed.
### Timestamp strategy (recommended)
The `timestamp` strategy uses an `updated_at` field to determine if a row has changed. If the configured `updated_at` column for a row is more recent than the last time the snapshot ran, then dbt will invalidate the old record and record the new one. If the timestamps are unchanged, then dbt will not take any action.

The `timestamp` strategy requires the following configuration:
| Config | Description | Example |
|---|---|---|
| `updated_at` | A column which represents when the source row was last updated | `updated_at` |
Example usage:
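A sketch of a snapshot block using the `timestamp` strategy (the snapshot, source, and schema names are illustrative):

```sql
-- snapshots/orders_snapshot.sql
{% snapshot orders_snapshot %}

{{
    config(
      target_schema='snapshots',
      unique_key='id',
      strategy='timestamp',
      updated_at='updated_at'
    )
}}

select * from {{ source('jaffle_shop', 'orders') }}

{% endsnapshot %}
```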
### Check strategy
The `check` strategy is useful for tables which do not have a reliable `updated_at` column. This strategy works by comparing a list of columns between their current and historical values. If any of these columns have changed, then dbt will invalidate the old record and record the new one. If the column values are identical, then dbt will not take any action.

The `check` strategy requires the following configuration:
| Config | Description | Example |
|---|---|---|
| `check_cols` | A list of columns to check for changes, or `all` to check all columns | `["name", "email"]` |
The `check` snapshot strategy can be configured to track changes to all columns by supplying `check_cols = 'all'`. It is better to explicitly enumerate the columns that you want to check. Consider using a surrogate key to condense many columns into a single column.
Example usage:
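A sketch of a snapshot block using the `check` strategy (snapshot, source, and schema names are illustrative):

```sql
-- snapshots/customers_snapshot.sql
{% snapshot customers_snapshot %}

{{
    config(
      target_schema='snapshots',
      unique_key='id',
      strategy='check',
      check_cols=['name', 'email']
    )
}}

select * from {{ source('jaffle_shop', 'customers') }}

{% endsnapshot %}
```

If many columns need checking, a single surrogate key column (for example, one built with the `dbt_utils.generate_surrogate_key` macro from the dbt_utils package) can stand in for the full list.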
### Hard deletes (opt-in)
Rows that are deleted from the source query are not invalidated by default. With the config option `invalidate_hard_deletes`, dbt can track rows that no longer exist. This is done by left joining the snapshot table with the source table and filtering the rows that are still valid at that point but can no longer be found in the source table. `dbt_valid_to` will be set to the current snapshot time.

This configuration is not a different strategy as described above, but an additional opt-in feature. It is not enabled by default since it alters the previous behavior.
For this configuration to work with the `timestamp` strategy, the configured `updated_at` column must be of timestamp type. Otherwise, queries will fail due to mixing data types.
Example usage:
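A sketch of the previous `timestamp` example with hard-delete tracking enabled (names remain illustrative):

```sql
-- snapshots/orders_snapshot.sql
{% snapshot orders_snapshot %}

{{
    config(
      target_schema='snapshots',
      unique_key='id',
      strategy='timestamp',
      updated_at='updated_at',
      invalidate_hard_deletes=True
    )
}}

select * from {{ source('jaffle_shop', 'orders') }}

{% endsnapshot %}
```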
## Snapshot query best practices
This section outlines some best practices for writing snapshot queries:

- **Snapshot source data.** Your models should then select from these snapshots, treating them like regular data sources. As much as possible, snapshot your source data in its raw form and use downstream models to clean up the data.
- **Use the `source` function in your query.** This helps when understanding data lineage in your project.
- **Include as many columns as possible.** In fact, go for `select *` if performance permits! Even if a column doesn't feel useful at the moment, it might be better to snapshot it in case it becomes useful – after all, you won't be able to recreate the column later.
- **Avoid joins in your snapshot query.** Joins can make it difficult to build a reliable `updated_at` timestamp. Instead, snapshot the two tables separately, and join them in downstream models.
- **Limit the amount of transformation in your query.** If you apply business logic in a snapshot query, and this logic changes in the future, it can be impossible (or, at least, very difficult) to apply the change in logic to your snapshots.

Basically – keep your query as simple as possible! Some reasonable exceptions to these recommendations include:
- Selecting specific columns if the table is wide.
- Doing light transformation to get data into a reasonable shape, for example, unpacking a JSON blob to flatten your source data into columns.
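As a sketch of that second exception, a snapshot query might flatten a JSON payload before snapshotting. The source and column names here are hypothetical, and `json_extract_scalar` is BigQuery-style syntax; other warehouses use different JSON functions:

```sql
-- Light transformation only: unpack the JSON blob into typed columns
-- so the snapshot stores a stable, queryable shape.
select
    id,
    json_extract_scalar(payload, '$.status') as status,
    cast(json_extract_scalar(payload, '$.updated_at') as timestamp) as updated_at
from {{ source('jaffle_shop', 'raw_orders') }}
```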
## Snapshot meta-fields
Snapshot tables will be created as a clone of your source dataset, plus some additional meta-fields*.

Starting in dbt v1.9 or with dbt Cloud Versionless, these column names can be customized to your team or organizational conventions via the `snapshot_meta_column_names` config.
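A minimal sketch of that config in a snapshot's YAML definition (the snapshot name and the replacement column names are illustrative):

```yaml
snapshots:
  - name: orders_snapshot
    config:
      # Map each default meta-field to a custom column name
      snapshot_meta_column_names:
        dbt_valid_from: start_date
        dbt_valid_to: end_date
        dbt_scd_id: scd_id
        dbt_updated_at: last_updated
```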
| Field | Meaning | Usage |
|---|---|---|
| `dbt_valid_from` | The timestamp when this snapshot row was first inserted | This column can be used to order the different "versions" of a record. |
| `dbt_valid_to` | The timestamp when this row became invalidated. | The most recent snapshot record will have `dbt_valid_to` set to `null`. |
| `dbt_scd_id` | A unique key generated for each snapshotted record. | This is used internally by dbt. |
| `dbt_updated_at` | The `updated_at` timestamp of the source record when this snapshot row was inserted. | This is used internally by dbt. |
*The timestamps used for each column are subtly different depending on the strategy you use:

For the `timestamp` strategy, the configured `updated_at` column is used to populate the `dbt_valid_from`, `dbt_valid_to` and `dbt_updated_at` columns.
#### Details for the timestamp strategy
Snapshot query results at `2024-01-01 11:00`:

| id | status | updated_at |
|---|---|---|
| 1 | pending | 2024-01-01 10:47 |

Snapshot results (note that `11:00` is not used anywhere):

| id | status | updated_at | dbt_valid_from | dbt_valid_to | dbt_updated_at |
|---|---|---|---|---|---|
| 1 | pending | 2024-01-01 10:47 | 2024-01-01 10:47 | null | 2024-01-01 10:47 |
Query results at `2024-01-01 11:30`:

| id | status | updated_at |
|---|---|---|
| 1 | shipped | 2024-01-01 11:05 |

Snapshot results (note that `11:30` is not used anywhere):

| id | status | updated_at | dbt_valid_from | dbt_valid_to | dbt_updated_at |
|---|---|---|---|---|---|
| 1 | pending | 2024-01-01 10:47 | 2024-01-01 10:47 | 2024-01-01 11:05 | 2024-01-01 10:47 |
| 1 | shipped | 2024-01-01 11:05 | 2024-01-01 11:05 | null | 2024-01-01 11:05 |
For the `check` strategy, the current timestamp is used to populate each column. If configured, the `check` strategy uses the `updated_at` column instead, as with the timestamp strategy.
#### Details for the check strategy
Snapshot query results at `2024-01-01 11:00`:

| id | status |
|---|---|
| 1 | pending |

Snapshot results:

| id | status | dbt_valid_from | dbt_valid_to | dbt_updated_at |
|---|---|---|---|---|
| 1 | pending | 2024-01-01 11:00 | null | 2024-01-01 11:00 |
Query results at `2024-01-01 11:30`:

| id | status |
|---|---|
| 1 | shipped |

Snapshot results:

| id | status | dbt_valid_from | dbt_valid_to | dbt_updated_at |
|---|---|---|---|---|
| 1 | pending | 2024-01-01 11:00 | 2024-01-01 11:30 | 2024-01-01 11:00 |
| 1 | shipped | 2024-01-01 11:30 | null | 2024-01-01 11:30 |