#314 Standard Op for modifying recs

Shawn Jacobson Thu 27 Aug 2015

I would like to propose that standard Ops be defined for all CRUD operations for dealing with recs. The Read Op takes care of the R; unfortunately, the C, U, and D methods have not been standardized. In building an application outside of SkySpark, I have had to write three different implementations for syncing records between Haystack servers. This could be alleviated, and Haystack would become an easier API to integrate with, if the undefined CRUD operations were defined as part of the Haystack standard APIs. Thanks.

Brian Frank Fri 28 Aug 2015

The problem with create, update, and delete is that they get into transaction/concurrency semantics.

You could probably safely ignore those for a simple create op that just took a list of dicts and returned the newly created records/entities with assigned ids. But then consider that SkySpark is a flat database, while Niagara is a tree-based database.

Delete is a little tricky. For example, in SkySpark you really move a rec to the trash by adding the trash tag (although you can force a delete).

Update is probably the trickiest because you often want it to work with some sort of concurrency mechanism. Some databases would use a transaction (which can be difficult to do right over a network protocol). SkySpark uses an optimistic concurrency model, but you would have to pass back the versioning/mod tag to do it correctly.

So I think this is a good goal to try to achieve, but it will take a bit of design work. And because it's a nasty problem, that is why we haven't done it yet :)

Mark Oellermann Sun 30 Aug 2015

Hi Shawn, we're building on the Node Haystack server and will be implementing CRUD in the coming weeks, so we would definitely like to do something that is aligned with project efforts. We're relatively new to Haystack, so we will likely be of more use to you doing "the legwork" on the NodeJS server implementation than adding critical input to the calling/response conventions.

Thanks for your insights, Brian. While much of what you describe strikes me as complexity that would lie in implementation details behind the REST interface for those specific products, your points certainly helped inform (me anyway) about:

  1. the sorts of scenarios that need to be handled (success/fail codes etc.)
  2. the need to pass back haystack entities on successful operations - so any implementation-specific tags (e.g. versioning) can be exposed
  3. the need for two possible variants on delete (or perhaps a flag passed to the delete op) indicating whether to completely delete the object

The guy at our end who will be implementing the Node Haystack stuff is Alex; I'll get him to pop in here and introduce himself so we can get started.

Shawn Jacobson Thu 3 Sep 2015

Mark, awesome to hear that our code base is finding its way into other projects; I look forward to seeing any contributions or advice your team may have for the project. I would hope that we can come up with a proposal for some APIs that work while leaving the implementation complexity up to the different servers that have been created to date and in the future. Quite honestly, most, if not all, of the implementations currently have a way to do every aspect of CRUD; they are simply not standardized, and this causes issues in developing real-world applications that PUSH data to a Haystack server.

As far as the variants of delete go, I believe this to be unneeded complexity that can easily be remedied by adding a "trash" tag to the defined tag list. This would mean that from a client perspective, sending something to the trash is actually just an Update operation that adds that tag to the rec. This would also allow the open source code bases to ignore those records in read operations at that level, rather than having it be an implementation-specific item. We have also gone with using a trash tag for our product implementation, but chose to do it at the product level, instead of in NodeHaystack, to stay consistent with the other open source options out there.
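
To make that concrete, here is a rough sketch of what trashing a rec could look like under that approach; it assumes a trash marker tag has been added to the standard tag list and simply reuses the proposed Update op:

// hypothetical "soft delete" via the proposed Update op, assuming a standard trash tag
ver:"2.0"
id,trash
@2345,M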

Delete can then do as it says and actually delete the rec. The main complexity I see with the Delete operation is how to handle the potential parentless children that could be created by deleting a site or equip rec that has child recs. I will work on a proposed API so we can at least begin the discussion.

Shawn Jacobson Thu 3 Sep 2015

Went ahead and wrote up a proposed spec for a Create Op; I'll add the other Ops sometime during the next week.

// Create Op

Create

The create op is used to post new recs to the Haystack Server.

Request: a grid of records containing all recs to be added to the server. If an "id" column is passed, it should be transformed by the server into an "hsid" tag so the server knows the client-specific id for potential mapping operations.

Response: an order-consistent grid containing all recs that were added to the server, with an "id" column added to represent the server-specific id of each rec. An "err" column should also be added so any rows that encountered an error during creation can report that error. It is assumed that rows with a defined "err" entity would not return an "id" entity.

Example:

Here is an example which posts some new recs to the server:

// request
ver:"2.0"
id,dis,equip,equipRef,his,kind,point,site,siteRef,vav
@1234,"Site 1234",,,,,,,,
@2345,"Equip 2345",M,,,,,,@1234,M
@3456,"Point 3456",,@2345,M,"Number",M,,@1234,

// response
ver:"2.0"
id,dis,equip,equipRef,his,hsid,kind,point,site,siteRef,vav
@4321,"Site 1234",,,,@1234,,,,,
@5432,"Equip 2345",equip,,,@2345,,,,@4321,M
@6543,"Point 3456",,@5432,M,@3456,"Number",M,,@4321,

Questions:

  1. Is hsid the proper tag to use? If so, it should be added to the tag definitions so it is reserved.
  2. Is err the proper column to return for errors? If so, it should be added to the tag definitions so it is reserved for response grids.
  3. Should the server be allowed to add a rec and return an "err", therefore return both an "id" and an "err" on the record added? My opinion is no.
  4. Should the server be required to automatically map created records to parent entities, or should this overhead be placed on the client?

Alex Afflick Fri 4 Sep 2015

Hi Shawn, it is great to hear that we are heading down the same path. Everything you have described for the Create Op looks great. As Mark mentioned, we are relatively new to the project but are keen to help out. In regard to your questions:

Q2 - Check out the Error Grid responses provided by the subscription operations. It may be an idea to include the err marker tag in the grid metadata in addition to your suggested err column (see the sketch below). It seems the err marker tag is used in those ops already, so it should definitely be added to the documentation.

Q3 - Agree, no.

Q4 - I don't think so.
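
For reference, my understanding is that a failed request in the existing ops comes back as an error grid, i.e. a grid with the err marker in its metadata, something along these lines (the dis and errTrace values here are purely illustrative):

// illustrative error grid response; the err marker lives in the grid metadata
ver:"2.0" err dis:"Cannot create rec: missing siteRef" errTrace:"..."
empty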

We look forward to getting more involved in the project.

Brian Frank Fri 4 Sep 2015

Shawn,

For create, I would suggest this behavior:

  1. Client never specifies an id; it is generated by the server
  2. That means you can't add a whole site and its cross-references at one time; you would have to add your sites, then your equips, then your points. But it keeps things far simpler. If you do allow cross-references, then maybe we could add a flag for swizzling the refs; you could then match up server-generated ids by ordering the rows between request and response
  3. All operations should complete or fail atomically - a single modification request should be treated as an encapsulated transaction

I still think the fundamental problem with create is how it would work in a system like Niagara. Niagara isn't a flat database, so create would be required to add those entities as components somewhere in the config tree.

Shawn Jacobson Fri 4 Sep 2015

Brian,

  1. I agree an id should not be required; the only question here is whether it serves any benefit for the server to have a reference to the client's id, even if it is only for future use. In my current implementation for syncing with a Haystack server, I use haystackHis with SkySpark and hsid for Connexxion, since those are the tags that appear to have been decided on within each solution to keep a remote reference.
  2. I think this is fine and is how I had to implement syncing to this point. This also answers my own question 4 in that the client should be responsible for maintaining the remote references.
  3. What is the benefit of having all operations fail atomically? I don't necessarily disagree, but I am curious about your opinion on this matter.

I agree that Niagara poses some unique challenges and may need its own rules, in addition to the standard rules, to handle component creation. That being said, knowing where to add the component shouldn't be the issue, since NHaystack has a well-defined method for transforming paths into ids. I think NHaystack would simply need to define its own extended requirement for how to pass that information. The bigger concern, in my opinion, is what type will be added and how it receives any configuration information it may require. That being said, I am not aware of anyone pushing data to NHaystack, which is not the case with SkySpark or Connexxion. Therefore, I would not want to hold off on standardizing something because of that. I do agree, however, that if we can come up with something that works for NHaystack as well, it would be nice to include.

Shawn Jacobson Sat 5 Sep 2015

// Delete Op

Delete

The delete op is used to remove recs from the Haystack Server.

Request (by filter): a grid with a single row and the following column:

  • filter: required Str encoding of filter

Request (by id): a grid of one or more rows and one column:

  • id: a Ref identifier

Response: a grid with a row for each entity removed. For a delete by filter where no matches were found, this will be an empty grid with no rows. For a delete by id, each row corresponds to the request grid and its respective row ordering. If an id from the request was not found, the response includes a row of all null cells.

Example of a delete-by-filter request:

ver:"2.0"
filter
"point and siteRef==@siteA"

Example of a delete by id with three identifiers:

ver:"2.0"
id
@vav101.zoneTemp
@vav102.zoneTemp
@vav103.zoneTemp

Example of a delete response where an id is not found:

ver:"2.0"
id,dis,curVal
@vav101.zoneTemp,"VAV-101 ZoneTemp",74.2°F
N,N,N
@vav103.zoneTemp,"VAV-103 ZoneTemp",73.8°F

Questions:

  1. How should we handle the possibility of removing a referenced rec? i.e. a site is removed that has equip and point records that reference that site.

Shawn Jacobson Sat 5 Sep 2015

// Update Op

Update

The update op is used to modify existing recs on the Haystack Server.

Request: a grid of records containing all recs to be updated on the server. Prior to performing an Update, the client should do a Read to ensure it has the latest version of the rec to be modified.

Response: an order-consistent grid containing all recs that were updated on the server. An "err" column should also be added so any rows that encountered an error during the update can report that error.

Example:

Here is an example which posts some updated recs to the server:

// request
ver:"2.0"
id,dis,equip,equipRef,his,kind,point,site,siteRef,vav
@1234,"Site 1234",,,,,,,,
@2345,"Equip 2345",M,,,,,,@1234,M
@3456,"Point 3456",,@2345,M,"Number",M,,@1234,

// response
ver:"2.0"
id,dis,equip,equipRef,his,hsid,kind,point,site,siteRef,vav
@1234,"Site 1234",,,,@1234,,,,,
@2345,"Equip 2345",equip,,,@1234,,,,@4321,M
@3456,"Point 3456",,@2345,M,@3456,"Number",M,,@1234,

Questions:

  1. Should we add to the standard the concept of versioning, or should this be left as a server specific implementation? My opinion is that it should be left as a server specific implementation.
  2. Per Brian's suggestions with the proposed Create Op, should all operations complete or fail atomically?

Shawn Jacobson Sat 5 Sep 2015

// Create Op

Create

The create op is used to post new recs to the Haystack Server. It is the client's responsibility to maintain the remote references. Therefore, the client should post new recs of different types in different requests. In other words, the client should post new site recs prior to posting new equip recs that belong to those sites, and post new equip recs prior to posting new point recs that belong to those equips. This will allow the client to maintain and transform remote references. The client should maintain those remote references by adding the remote server id as an hsid property.

Request: a grid of records containing all recs to be added to the server. The client may pass its own internal id as an hsid so the server contains a reference to the remote record, but this should not be required.

Response: an order-consistent grid containing all recs that were added to the server, with an "id" column added to represent the server-specific id of each rec. An "err" column should also be added so any rows that encountered an error during creation can report that error. It is assumed that rows with a defined "err" entity would not return an "id" entity.

Example:

Here is an example which posts some new recs to the server:

// request - site
ver:"2.0"
hsid,dis,site
@1234,"Site 1234",M

// response - site
ver:"2.0"
id,dis,hsid,site
@4321,"Site 1234",@1234,M

// request - equip
ver:"2.0"
hsid,dis,equip,siteRef,vav
@2345,"Equip 2345",M,@4321,M

// response - equip
ver:"2.0"
id,dis,equip,hsid,siteRef,vav
@5432,"Equip 2345",M,@2345,@4321,M

// request - point
ver:"2.0"
hsid,dis,equipRef,his,kind,point,siteRef
@3456,"Point 3456",@5432,M,"Number",M,@4321

// response - point
ver:"2.0"
id,dis,equipRef,his,hsid,kind,point,siteRef
@6543,"Point 3456",@5432,M,@3456,"Number",M,@4321

Questions:

  1. Is hsid the proper tag to use? If so, it should be added to the tag definitions so it is reserved.
  2. Is err the proper column to return for errors? If so, it should be added to the tag definitions so it is reserved for response grids.
  3. Should the server be allowed to add a rec and return an "err", therefore return both an "id" and an "err" on the record added? My opinion is no.
  4. Should all operations complete or fail atomically?

Shawn Jacobson Sat 5 Sep 2015

Should we create separate topics for each proposed Op for cleaner commenting?

Jason Briggs Sat 5 Sep 2015

Yes let's break each op into its own forum discussion.

For delete I would have an optional parameter for deleting children as well. Again the problem is for things like Niagara. If you deleted a site, would you want the points to be deleted too? And if you deleted the points, do you also want to delete the device?

Maybe even optional parameters for each record type: floor, equip, point, and device as parameters would delete all of those records too. I think this gets much more complex and maybe should be handled per server.
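
Purely to illustrate the idea, a cascade could be an opt-in flag in the request's grid metadata; the deleteChildren flag below is made up for this sketch and is not a proposed standard tag:

// hypothetical delete-by-id request asking the server to also delete child recs
ver:"2.0" deleteChildren
id
@siteA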

Brian Frank Mon 7 Sep 2015

I personally don't think we should standardize anything unless it can also work with NHaystack, since that is a very important aspect of the community, and it's also a good test case for how to handle things with a different database. So I'd definitely want Mike J to chime in.

Before discussing the individual ops, I think we should focus on some high-level concepts and philosophy:

  1. Is there such a thing as partial failures? I argued previously that I believe that is a bad design path because partial failures are more complex to handle from both a client and a server perspective. So I would like to see atomic success or atomic failure for each request.
  2. There has to be some notion of transactions or concurrency handling that would work across different systems. For example, in SkySpark we use a mod tag to implement record versioning and optimistic concurrency. So in our system, both delete and update would require passing in both the id tag and the mod tag (see the sketch below). But that is very much an implementation detail for our system, not necessarily something I would push to standardize.
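
For illustration only (this is the SkySpark-specific model, not a proposal for the standard), a delete-by-id request under that scheme would carry the rec's mod timestamp alongside its id, and the server would reject the request if the rec had been modified since that timestamp:

// illustrative delete-by-id carrying a mod timestamp for optimistic concurrency (SkySpark-style)
ver:"2.0"
id,mod
@2345,2015-09-05T12:00:00Z UTC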

Mike Jarmy Tue 8 Sep 2015

Shawn, can you post separate discussions for each of the Create, Update and Delete ops? I think what you have done in writing them up is really great, but I have comments about each op, so I am in agreement that they each deserve their own topic.

Generally speaking, I am in agreement with Brian that we will want to at least try to define ops whose semantics will work for nhaystack too (this might be quite difficult to do).

Mike Jarmy Tue 8 Sep 2015

Shawn, here are a couple of comments on one of your earlier posts. You wrote:

I agree that Niagara poses some unique challenges and may need its own rules, in addition to the standard rules, to handle component creation. That being said, knowing where to add the component shouldn't be the issue, since NHaystack has a well-defined method for transforming paths into ids.

Except that the component id isn't visible to the outside if the component in question is part of a Site/Equip/Point Nav Tree. The way IDs are handled internally in nhaystack is quite complex.

The bigger concern, in my opinion, is what type will be added and how it receives any configuration information it may require.

Yeah, we don't want to put that sort of thing into a generic standard haystack op.

That being said, I am not aware of anyone pushing data to NHaystack, which is not the case with SkySpark or Connexxion.

We've done it quite a bit, but with non-standard ops.

I would not want to hold off on standardizing something because of that.

I agree with this completely. It's worth having the discussion about whether we can make nhaystack implement these ops, but it would also be perfectly OK not to implement the new ops in Niagara.

I do agree however, that if we can come up with something that works for NHaystack as well, it would be nice to include.

Let's give it a try.

Shawn Jacobson Tue 8 Sep 2015

I have created three new topics, one for each proposed Op: Create, Update, and Delete. Moving forward, let's use this thread for the higher-level concepts that pertain to all the Ops. So far, Brian has brought up atomicity and concurrency. As far as concurrency goes, this is why the Update Op says the client should perform a Read prior to running an Update. This should ensure that the client has the latest version of the rec from the server, so that when the Update is called, the server can validate that there is not a conflict.
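
As a rough sketch of that flow (the mod column is only an example of a server-specific version tag, following Brian's description; it is not part of the proposal):

// 1) read the rec to get its latest version
ver:"2.0"
id
@2345

// read response (the server includes its version tag, e.g. mod)
ver:"2.0"
id,dis,equip,siteRef,mod
@2345,"Equip 2345",M,@1234,2015-09-08T09:30:00Z UTC

// 2) update request echoing the version tag so the server can detect a conflict
ver:"2.0"
id,dis,equip,siteRef,mod
@2345,"Equip 2345 Renamed",M,@1234,2015-09-08T09:30:00Z UTC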

Pun Mum Wed 7 Mar 2018

Thanks Brian for pointing to this discussion. It is a wonderful discussion indeed.

We are implementing a Haystack server which will represent BACnet devices. From this discussion, I have a few queries:

  1. The Create operation as described here would allow a Haystack client to create Haystack equips and points, but how would these be mapped to BACnet devices?

    If required at all, I believe such mapping has to be handled at the Haystack server level only.

  2. If someone creates a Haystack equip and point without a proper BACnet mapping, then data will not be available for such points. In that case, the Haystack client who originally created these entities will have to keep updating the values of the created equip and point.

    Only then can other Haystack clients get the latest value.

    Why would a Haystack client create and keep updating an equip and point? In such a case, the "creator" Haystack client is acting as a data source for the "Haystack server" which allows creation of equips and points.

  3. In BACnet, there are CreateObject and DeleteObject services which are normally used to create logical BACnet objects like Schedule, TrendLog, etc. The CreateObject and DeleteObject services are not used to create or delete physical objects like Analog-Input.

    I understand that a Haystack point normally represents (or is mapped to) a physical end-point/object. There is no concept of a logical Haystack point or logical Haystack equip.

    Please let me know if I am missing anything.

Thanks.

Brian Frank Wed 7 Mar 2018

I believe all those questions are really outside the scope of Haystack. In general how you model your system and what points actually map to is a black box from Haystack's perspective.

Pun Mum Thu 8 Mar 2018

I agree with you, Brian. And precisely because of this, I tend to think that a Haystack client (a black box) creating an equip and device may further complicate things if mapping is involved.

If a device has native support for Haystack, then creating an equip/point in such a device from the outside via a Haystack client again looks complex to me.

Can you please shed some light on probable use cases for supporting these features?
