Currently, the only way to update a dataset is to delete it, then recreate it.

If our production code is accessing a dataset, deleting that dataset could cause service disruption until it is recreated.

It would be preferable to be able to update a dataset in place, with no disruption to results. The next build could simply include the new data, perhaps? Or maybe the update could immediately trigger a new build ahead of the scheduled one?

An alternative might be to put some sort of alias in front of the dataset: the new dataset could be created, then the alias switched to point at it. If this is the chosen solution, it would be nice to be able to manage the alias via the API rather than a UI, since we script our dataset updates.
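To illustrate the alias idea, here is a minimal sketch of the intended behavior. Everything here is hypothetical (the `Catalog` class, dataset and alias names, and method signatures are all illustrative, not a real API); the point is only that readers resolve through the alias, so the swap is a single atomic repoint and reads are never disrupted.

```python
# Hypothetical model of alias-based dataset swapping.
# All names are illustrative; no real platform API is assumed.

class Catalog:
    def __init__(self):
        self.datasets = {}   # dataset name -> rows
        self.aliases = {}    # alias name -> dataset name

    def create_dataset(self, name, rows):
        self.datasets[name] = list(rows)

    def set_alias(self, alias, dataset):
        # A single assignment: readers see either the old or the
        # new dataset, never a missing one.
        self.aliases[alias] = dataset

    def read(self, alias):
        return self.datasets[self.aliases[alias]]


catalog = Catalog()
catalog.create_dataset("sales_v1", [1, 2, 3])
catalog.set_alias("sales", "sales_v1")

# Production reads always go through the alias.
before = catalog.read("sales")

# Build the replacement dataset, then repoint the alias. The old
# dataset stays readable until the swap, and can be deleted after.
catalog.create_dataset("sales_v2", [1, 2, 3, 4])
catalog.set_alias("sales", "sales_v2")
after = catalog.read("sales")
del catalog.datasets["sales_v1"]
```

This is the same pattern used by, e.g., symlink-swap deployments: the expensive build happens off to the side, and the cutover is one cheap, atomic operation.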