Data Model Pipelines are the core components Cosmos uses to extract data from your data sources, clean the data using transformations, and then load the data into your reporting database. As changes are made to extensions, you can publish those changes and refresh the data in your reporting database so they are available to your users. This article covers how to publish and run pipelines in the Data Model section of Cosmos.
The Publish Pipeline and Run Pipeline actions are in the Manage section of the Data Model.
Publishing a Data Model Pipeline
Clicking Publish Pipeline packages everything that has been built into the data model and generates a set of Azure Data Factory pipelines, including all data sets and transformations specified in the Cosmos data model. Until Publish Pipeline has been clicked, any changes made in the data model will not be reflected in pipeline runs.
Manually Running a Data Model Pipeline
Clicking Run Pipeline manually runs an update of the data model pipeline, so you do not need to wait for the next scheduled refresh of the data. If you clicked Publish Pipeline, meaning changes were made to the data model, a full load of the data will be run. Otherwise, an incremental load of the data will take place.
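The full-versus-incremental decision described above can be sketched as a small helper. This is an illustrative sketch only; the function name and flag are hypothetical, and the real logic runs inside Cosmos:

```python
def choose_load_type(published_since_last_run: bool) -> str:
    """Hypothetical helper mirroring the behavior described above.

    Publishing the pipeline means the data model changed, which
    invalidates the previously loaded data, so the next manual run
    performs a full load; otherwise an incremental load suffices.
    """
    return "full" if published_since_last_run else "incremental"
```

For example, a manual run immediately after publishing reloads everything, while a manual run with no intervening publish only picks up new and changed records.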
Purging a Pipeline from Azure Data Factory
Clicking Purge Pipeline removes all the pipelines, data sets, and data flows in Azure Data Factory. There should be no need to do this on a regular basis, and it is recommended that you only do this when working with a member of the Cosmos Support team. The button may need to be clicked multiple times to ensure that all objects are successfully removed from Azure Data Factory before republishing the Cosmos data model pipeline.
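The "click multiple times" guidance amounts to a purge-and-verify loop: purge, check whether any objects remain, and repeat if so. A minimal sketch, assuming hypothetical purge_once and list_remaining callables (the actual deletion is performed by Cosmos against Azure Data Factory):

```python
def purge_until_empty(purge_once, list_remaining, max_attempts=5):
    """Call purge_once until list_remaining() reports no leftover
    pipelines, data sets, or data flows; return the attempt count.

    purge_once and list_remaining are hypothetical stand-ins for
    one press of the Purge Pipeline button and a check of the
    objects still present in Azure Data Factory.
    """
    for attempt in range(1, max_attempts + 1):
        purge_once()                # one press of the Purge Pipeline button
        if not list_remaining():
            return attempt          # factory is clean; safe to republish
    raise RuntimeError(
        "Objects still present after %d purge attempts" % max_attempts
    )
```

The loop stops as soon as the factory is empty, matching the instruction to verify that all objects are gone before republishing the Cosmos data model pipeline.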