Ability to update a dataset
under review
James Spath
Currently, the only way to update a dataset is to delete it, then recreate it.
If our production code is accessing a dataset, deleting that dataset could cause service disruption until it is recreated.
It would be preferable to be able to update a dataset in place, with no disruption to results. The next build could simply include the new data, perhaps? Or maybe the update could immediately trigger a new build ahead of the scheduled one?
An alternative might be to put some sort of alias in front of the dataset: the new dataset could be created, then the alias updated to point to it. If this is the chosen solution, it would be nice to be able to manage the alias via the API rather than a UI, as we script our dataset updates.
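For illustration only, an alias flip might be scripted roughly like this. The alias endpoint and payload below are entirely hypothetical, since no such API exists today, and the project/key values are placeholders:

```python
import requests

# Hypothetical sketch -- Keen has no dataset alias endpoint today.
# PROJECT_ID, MASTER_KEY, and the URL shape below are placeholders.
PROJECT_ID = "PROJECT_ID"
MASTER_KEY = "MASTER_KEY"
BASE = f"https://api.keen.io/3.0/projects/{PROJECT_ID}"
HEADERS = {"Authorization": MASTER_KEY, "Content-Type": "application/json"}

def point_alias_at(alias_name, dataset_name):
    """Repoint an alias to an already-created dataset (imaginary endpoint)."""
    resp = requests.put(
        f"{BASE}/dataset_aliases/{alias_name}",
        headers=HEADERS,
        json={"dataset_name": dataset_name},
    )
    resp.raise_for_status()
    return resp.json()

# Example: after creating "purchases_by_country_v2", flip the alias our
# production code queries, with no window where results are unavailable.
# point_alias_at("purchases_by_country", "purchases_by_country_v2")
```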
James Spath
An update on this: I don't think we'd end up using a dataset update or an alias solution.
When we do a release that requires a dataset change, the old code must continue using the original dataset definition, and the new code must use the new dataset definition. The only way for this to happen is for both datasets to exist during the release period, with different names.
So, I think our new release flow will be as follows (sketched in code after the list):
* Create updated dataset under a new name in Keen.
* The new code being released will point to the new dataset.
* Old code will continue pointing to the original dataset.
* Deploy the new code. During this time both datasets may be accessed.
* Once the new code is fully deployed, delete the original dataset.
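A minimal sketch of how we'd script that flow, assuming Keen's cached-dataset endpoints take a PUT to create and a DELETE to remove a dataset, with the master key in the Authorization header. The project, key, dataset names, and definition payload here are placeholders:

```python
import requests

# Placeholders: project, key, dataset names, and the definition payload
# are examples only; the PUT/DELETE endpoint shape is an assumption.
PROJECT_ID = "PROJECT_ID"
MASTER_KEY = "MASTER_KEY"
BASE = f"https://api.keen.io/3.0/projects/{PROJECT_ID}/datasets"
HEADERS = {"Authorization": MASTER_KEY, "Content-Type": "application/json"}

def create_dataset(name, definition):
    """Step 1: create the updated dataset under a new name."""
    resp = requests.put(f"{BASE}/{name}", headers=HEADERS, json=definition)
    resp.raise_for_status()

def delete_dataset(name):
    """Last step: remove the original dataset once the deploy is complete."""
    resp = requests.delete(f"{BASE}/{name}", headers=HEADERS)
    resp.raise_for_status()

# Before the deploy:
#   create_dataset("purchases_by_country_v2", new_definition)
# During the deploy, old code keeps reading "purchases_by_country" while
# new code reads "purchases_by_country_v2" -- both datasets exist.
# After the deploy is fully rolled out:
#   delete_dataset("purchases_by_country")
```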
Florian Hines
under review