Datasets delete records


"PUT" replaces the data in a dataset
"POST" appends data to a dataset
"DELETE" deletes the entire dataset and its schema
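The three verbs above can be sketched as request builders. This is a minimal illustration in Python; the endpoint URL and payload shape are placeholders I've invented, not the real API, so check the Datasets documentation for the actual format.

```python
import json
import urllib.request

# Hypothetical endpoint for illustration only.
BASE_URL = "https://api.example.com/datasets/sales.by_day"

def build_request(method, records=None):
    """Build (but don't send) an HTTP request against the dataset endpoint."""
    body = None
    if records is not None:
        body = json.dumps({"data": records}).encode("utf-8")
    req = urllib.request.Request(BASE_URL, data=body, method=method)
    req.add_header("Content-Type", "application/json")
    return req

# PUT replaces everything, POST appends, DELETE removes dataset and schema.
replace_all = build_request("PUT", [{"date": "2024-01-01", "amount": 100}])
append_some = build_request("POST", [{"date": "2024-01-02", "amount": 50}])
drop_dataset = build_request("DELETE")
```

Sending any of these (e.g. with `urllib.request.urlopen`) would perform the corresponding operation against the real endpoint.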

Is there, either existing or in the works, a means to delete records from a dataset?
I’d like to append to a dataset based on an action rather than a timer, but in the use case I’m considering, an appended record may effectively be a replacement for a previously appended record rather than just a new one. That would mean deleting the original record as well as appending the new one.
Is this possible?


Hi @ausmarkb POST can also merge, so it can be used to overwrite existing records as well as to add new ones.

The key here is the id: when you POST data with a new id it will be appended, but if you POST data with an existing id, that record will be overwritten.


Thanks @luis,

I’ll get some clarification from you first. Are you saying that a record can have an id as well?
From the documentation I took the id to refer to the dataset itself. I didn’t notice anything that said you can have a record id as well. I might have just missed it, but I certainly haven’t used it.

In response to your suggestion: if records have an id then yes, this will be helpful in some cases, but in most cases I would still need a delete option. In this system, many edits to the data result in a change to the id, not just to a component of the record, so records are deleted and new ones are created even though it may appear to be just an edit.
The system works like this: a new record is created; the user modifies it in a way that affects the record id; the old record is deleted; a new, updated record is created. This can happen multiple times in a conceptual record’s existence.
So if I’ve sent the original record to a dataset and the user then edits it, I want the system to modify the dataset accordingly as well as doing its usual work. I can do this if the dataset records have an id and there is a means to delete a record.

Otherwise it’s collate all records and always “PUT”, but that is impractical on anything other than a timer event, thus losing its ‘live’ feel.


Hi @ausmarkb I’m afraid I was actually confused :blush:

It’s the unique_by, and not the id, that determines whether a record is merged or simply appended.


To use unique_by you need to specify it in your metadata when the dataset is first created. In other words, you won’t be able to use it now if it wasn’t specified originally.
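As a rough sketch of what that metadata might look like, here is a schema declaring unique_by at creation time. The field names and the exact payload format are invented for this example; the real schema format depends on the Datasets API documentation.

```python
import json

# Hypothetical dataset metadata; field names are made up for illustration.
schema = {
    "id": "appointments",
    "fields": {
        "person_id": {"type": "string", "name": "Person"},
        "start": {"type": "datetime", "name": "Start"},
        "finish": {"type": "datetime", "name": "Finish"},
        "notes": {"type": "string", "name": "Notes"},
    },
    # Records POSTed with the same (person_id, start, finish) combination
    # are merged; any other combination is appended as a new record.
    "unique_by": ["person_id", "start", "finish"],
}

payload = json.dumps(schema)
```

The important point is that `unique_by` lives in the schema, so it has to be decided before the first record is ever sent.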

Here you’ll find more information:


No problem @luis. I’m confused most of the time.

I did specify unique_by and this does reflect the id I need to work with so that’s all good.
I’ll give a simplified specific example to better explain my use case.
The data I’m referring to relates to schedules or appointments, whose id is multipart, made up of who and when.
So the record id, or ‘unique_by’, contains (but is not limited to) the id of the person plus the start and finish times.
An appointment is created and I send the relevant data to the dataset. What if the appointment changes, though? Either it gets given to someone else or it’s moved to a different time-slot. It’s a common edit, but it also changes the unique_by value. If I just append, I get the new data, but the old data remains in the dataset even though it is no longer valid. So I need a means to delete the old record, either before or after posting the new data.
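To make the problem concrete, here is a small local simulation (not an API call) of merge-on-POST keyed by a unique_by-style key. The field names are invented; the point is just to show how a reschedule leaves a stale record behind.

```python
# The fields the hypothetical dataset is unique_by.
UNIQUE_BY = ("person_id", "start")

def post_merge(dataset, records):
    """Simulate POST-with-merge: records sharing the unique_by key overwrite,
    records with a new key are appended."""
    index = {tuple(r[k] for k in UNIQUE_BY): i for i, r in enumerate(dataset)}
    for rec in records:
        key = tuple(rec[k] for k in UNIQUE_BY)
        if key in index:
            dataset[index[key]] = rec   # same key: overwrite in place
        else:
            index[key] = len(dataset)
            dataset.append(rec)         # new key: append as a new record

dataset = []
post_merge(dataset, [{"person_id": "alice", "start": "09:00", "room": "A"}])

# Editing a non-key field merges cleanly: still one record.
post_merge(dataset, [{"person_id": "alice", "start": "09:00", "room": "B"}])

# Rescheduling changes the key, so the old 09:00 record is never replaced:
# the dataset now holds both the stale 09:00 and the new 10:00 appointment.
post_merge(dataset, [{"person_id": "alice", "start": "10:00", "room": "B"}])
```

With no per-record DELETE, the stale 09:00 row stays in the dataset until the whole thing is replaced with a PUT.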

I understand this would be a less frequent requirement if the record ids were just ids and only the data were modified, but it is what it is. That said, even with sequential ids there are times when records are legitimately deleted, so I think the option is still worth considering. :grin:


Thanks for clarifying @ausmarkb. I can see the whole picture now.

Unfortunately, deleting individual records in that way isn’t something our Datasets API supports at present. Needless to say, I’ll let our product team know about your use case.