Brian Frank Tue 8 Mar 2016
I would like to propose a standardized set of ops to handle authentication. Today authentication is an ad hoc set of conventions based on HTTP status codes, usually hacked around redirects to a browser login page. It is a huge mess and isn't sustainable.
This proposal is based on two simple requirements:
authentication should use standard HTTP Haystack ops
authentication algorithms should be pluggable
For the initial proposal, I suggest standardization of two algorithms:
PBKDF2 with SHA-256
Plaintext
Authentication is handled with two new ops:
authHello: request to ask the server which algorithm should be used for a given username, along with any data needed such as a nonce, user salt, etc.
authVerify: request to have the server verify the client's user credentials and return a token for subsequent requests
These two ops would be special because they are required to be publicly accessible.
Successful authentication returns an authToken; failure returns a standard error grid. Plaintext is also supported (although it should only be used over HTTPS).
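As a rough illustration of the PBKDF2 leg, here is a minimal sketch of the client-side computation, assuming the authHello response carries a per-user salt and an iteration count (the variable names and values are hypothetical, not part of the proposal):

    import base64
    import hashlib

    # Hypothetical values taken from an authHello response
    salt = base64.b64decode("c2FsdHNhbHRzYWx0")  # per-user salt
    iterations = 10000                           # server-chosen iteration count
    password = "secret"

    # PBKDF2 with SHA-256, as suggested above
    derived = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)

    # The client would send a value like this back in the authVerify request
    verification = base64.b64encode(derived).decode("ascii")
    print(verification)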
Once authentication is successful, all subsequent HTTP requests specify their authentication token using an HTTP header:
Haystack-Auth-Token: E0z5-dzlywdoHcekGq
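For example, a client that has already authenticated might attach the token to later ops like this (a sketch using Python's requests library; the server URL is made up):

    import requests

    # Token returned by a successful authVerify exchange
    auth_token = "E0z5-dzlywdoHcekGq"

    # Every subsequent op carries the token in the proposed header
    resp = requests.get(
        "http://server/haystack/read",
        params={"filter": "site"},
        headers={"Haystack-Auth-Token": auth_token},
    )
    print(resp.status_code)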
What does everybody think?
Craig Gemmill Tue 8 Mar 2016
I think the idea has merit. I would like to see SCRAMSHA256 (RFC-5802) added to the list. The exact parameters could be adjusted here, but it allows for sticking to the RFC. Also, what about the allowance for the “authToken” to be a cookie, etc. Would/should that be in the proposal as a part of the authHello response?
If the goal is to have pluggable authentication schemes (e.g. scramsha), the request/response format should be designed to accommodate multi-pass authentication as well as client validation of the server’s response.
The hello/verify doesn't quite fit that model. Maybe something like the exchange sketched below.
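A hypothetical multi-pass exchange (the op and message names here are invented purely for illustration):

    client -> authMsg {username}        server -> {scheme, nonce, salt, params}
    client -> authMsg {clientProof}     server -> {serverProof}
    client -> authMsg {final}           server -> {authToken}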
There could be some final message type where the client is essentially saying that this is the last message they’re sending, and want a token back.
The client should have some mechanism for breaking off the authentication if they’re not happy with what they see, but that could be client specific, I guess.
Brian Frank Tue 8 Mar 2016
I think SCRAM would be good, although really if we are going to force client support of that, then that is really just a more sophisticated version of what I proposed for PBKDF2. So I think we should pick one that all clients must support.
However, I think it needs to be adapted to the existing Haystack formats (Zinc, JSON) so that we don't have some odd format embedded into a single string value.
Also, the three round trips seem excessive for what they're buying us. Can you provide some background info on why this is more secure than what a simple hash with PBKDF2 would provide in two round trips?
Also, what about the allowance for the “authToken” to be a cookie, etc
I'm sort of thinking for subsequent requests we pick one mechanism and stick with it. I've seen a lot of problems with application client libraries handling cookies well versus a straight HTTP header (which is super simple, it either works or doesn't work). Usually where cookies start to introduce subtle bugs is handling multiple cookies, expecting clients to handle expiry correctly, etc. So I'm thinking that we should leave cookies for browser agents and just pick another HTTP header for this. But I can see pros and cons of both, so definitely something to discuss and figure out.
Christian Tremblay Wed 9 Mar 2016
Would that be mandatory? I'm thinking about the nHaystack case, for example... which relies on a specific platform for authentication (Niagara Web Server).
Brian Frank Wed 9 Mar 2016
Would that be mandatory
Absolutely this would be mandatory - that is the whole goal: to get away from vendor-specific, ad hoc authentication mechanisms. As part of Haystack 3.0 we would switch over to standardized authentication and support for nested data structures.
Christian Tremblay Wed 9 Mar 2016
Would it mean that it would be impossible to run Haystack on an existing web server? Let's say someone wants to implement a Haystack server on an IIS system... or an existing Django app?
Don't you think that could stop people from using Haystack in their products?
Brian Frank Wed 9 Mar 2016
Would it mean that it would be impossible to run haystack on existing web server
I don't see why not - you can implement Haystack easily on any web server; what it means is that you need to implement the authentication yourself. But that is the reality of HTTP - compared to every other protocol, authentication in HTTP is very fragmented and unstandardized. So even if you did want to use something like Django's, it's pretty much a guarantee that no existing Haystack client will know how to authenticate with it (unless it's something very simple like Basic Authorization). At this point authentication is the #1 issue affecting compatibility, so we have to standardize it one way or the other.
Richard McElhinney Thu 10 Mar 2016
I'm going to speak to the Niagara side of things a bit, which I think is where Christian and Craig are going: it would be a bit of a pain to replace the Niagara authentication with a Haystack standard.
I've had a quick look and I don't know if it's even possible in Niagara AX (although I would love to be corrected), and whilst it looks like in N4 the authentication service can accept pluggable authentication schemes, I can't imagine it being a small amount of work to set this up in nhaystack.
Craig: perhaps you could confirm or deny all that?
We currently have working authentication for Niagara AX and Niagara 4. Yes, they can be a bit finicky, but until I can get my head around the amount of work required to do this in Niagara AX and 4, I have some doubts about it for nhaystack.
Perhaps you could go ahead and implement the standard ops but keep the existing support we currently have. I'm sure that's what you had in mind anyway; backwards compatibility would be important.
I agree that standardisation is a good aim in general, but you said it yourself that authentication over HTTP is already very fragmented, so why would we add another authentication mechanism into the mix? Yes, it might be standardised, but it's only standardised for us; it doesn't mean anyone else will support it.
Steve Eynon Thu 10 Mar 2016
All this talk of standards reminds me of this (!):
http://imgs.xkcd.com/comics/standards.png
Christian Tremblay Thu 10 Mar 2016
Would an existing standard fit the needs of Haystack? OAuth, for example, or something already widely used?
I mean... project-haystack should be targeted at becoming the best reference in data semantics... I don't think we should try to force a new authentication standard into the HTTP protocol.
The user base is too small compared to all the other services already available over HTTP. I really think we should choose something already existing and widely used.
Alex Afflick Sat 12 Mar 2016
I agree with Christian, using something like OAuth 2.0 makes more sense.
Rav Panchalingam Sun 13 Mar 2016
Agree with using OAuth.
Shawn Jacobson Sun 13 Mar 2016
While I 100% agree that introducing a new authentication protocol is overkill, I'm not sure how OAuth solves the problem either. Sure, it's the perfect auth mechanism if we're simply talking about things like SkySpark talking to Niagara (or vice versa) on behalf of a user, but its entire design is based on that specific premise... a 3rd-party application communicating with the source (Haystack) application. It doesn't deal with local authentication or how the user logs in directly to the source application. Any realistic "standard" for Haystack should handle both methodologies.
My honest opinion is that we should settle on the least common denominator as the required standard that all Haystack servers must implement. Anything else can be "highly recommended" or even a caveat for specific interactions (like 3rd-party access). While I understand wanting to fit this into a "standard Haystack Op", it feels like trying to do something just to do it. I can think of no real benefit in not using an existing standard just so we can communicate with Grids. I strongly suggest we do not go down that path.
I recommend that the only "required" authentication protocol should be Digest authentication. It is widely accepted and almost all clients and servers can already use it. I would also support having a special addition or clause which states that in order to communicate with 3rd party Haystack applications, that OAuth2 must also be implemented. Just my 2 cents.
Brian Frank Thu 17 Mar 2016
Okay, good comments. I met with Tridium today, and they are on board with working towards an interoperable authentication solution which would work for both oBIX and Haystack (and really any other HTTP protocol).
So first off, let's clarify the key problem: what we need is interoperable authentication between two machines/servers/devices. While human authentication is nice too, that is mostly handled by system-specific login screens already. The problem currently creating serious interoperability issues is the use case of making two servers/devices/applications talk to one another.
Should we try and use some existing standard? Of course, but the reality is that HTTP authentication is a huge mess. If there were an obvious set of standards, we would already be using it. Protocols like SMTP have very clean standards like SASL as a standard way to negotiate authentication. Anything like that in HTTP is fairly ad hoc.
The only practical, widely used standard HTTP authentication mechanism is Basic Authentication, which is probably what 95% of Haystack traffic is using today. Basic is absolutely terrible because it requires the password to be stored in the client application as plaintext. Furthermore, if not using TLS (probably the common case), the password is passed over the network in plaintext. Pro: extremely standard; con: probably the worst option possible (yet probably the most widely used).
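To see just how weak Basic is: the credentials are only base64-encoded, so anyone who observes the header recovers the password (a quick Python sketch with made-up credentials):

    import base64

    # What the client sends: Authorization: Basic <base64(user:password)>
    header_value = base64.b64encode(b"alice:hunter2").decode("ascii")
    print("Authorization: Basic " + header_value)

    # Anyone who sees the header trivially reverses it
    print(base64.b64decode(header_value))  # b'alice:hunter2'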
Shawn mentioned Digest Authentication, which is fairly standardized. But I'd have a hard time recommending that as our required mechanism. It still requires the password to be stored in plaintext, uses MD5, and was really designed almost two decades ago, before techniques like key stretching were in use. So fairly standard, but it doesn't seem like the one to pick in 2016.
Others have mentioned OAuth. The biggest problem with this is that it requires TLS, and if you require TLS then there are simpler, better options. Furthermore, although it can be made to work machine-to-machine, it isn't really a suitable design. Its design center is three parties: giving a third-party app permission to access your Facebook/Twitter data.
So after meeting with Tridium, we would like to propose the following two-pronged strategy:
We believe the ideal solution for server-to-server authentication is to use TLS client certificates. This is a standard aspect of TLS which allows both endpoints to identify themselves with a PKI certificate. The beauty of this approach is that clients don't need to store any sensitive secrets like a password or password hash. For systems to support this mechanism, servers would be required to have the tooling for configuring a "user" identity as a certificate and its public key. While awkward for humans compared to a username/password, it's a very good solution for binding two machines together. And it's already something built into the TLS standards and TLS libraries.
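As a rough sketch of what requiring client certificates looks like on the server side, using Python's ssl module (the file names are hypothetical):

    import ssl

    # TLS context for a server that authenticates clients by certificate
    context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    context.load_cert_chain(certfile="server.pem", keyfile="server.key")

    # Require the client to present a certificate and verify it against
    # the set of trusted "user" certificates configured on the server
    context.verify_mode = ssl.CERT_REQUIRED
    context.load_verify_locations(cafile="trusted-clients.pem")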
But before we get the ecosystem fully using TLS, we also still need something that works decently without TLS. What we propose is to leverage existing conventions in HTTP via the WWW-Authenticate and Authorization headers as much as possible. The mechanism will be pluggable, but we only require one mechanism for interoperability, and that is SCRAM as specified in RFC 5802. There isn't a standard way to do SCRAM in HTTP, although there is an expired draft RFC that seems like a pretty good basis to start from. What we would do is formalize the gaps in exactly how the HTTP status codes and headers are used.
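For reference, the core client-side computation in SCRAM (RFC 5802, here with SHA-256 as discussed in this thread) looks roughly like this; the HTTP framing around it is exactly what the proposal would need to pin down:

    import hashlib
    import hmac

    def scram_client_proof(password: bytes, salt: bytes, iterations: int,
                           auth_message: bytes) -> bytes:
        # SaltedPassword := Hi(password, salt, i), i.e. PBKDF2-HMAC-SHA-256
        salted = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
        client_key = hmac.new(salted, b"Client Key", hashlib.sha256).digest()
        stored_key = hashlib.sha256(client_key).digest()
        signature = hmac.new(stored_key, auth_message, hashlib.sha256).digest()
        # ClientProof := ClientKey XOR ClientSignature
        return bytes(a ^ b for a, b in zip(client_key, signature))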
Christian Tremblay Tue 29 Mar 2016
I think I better understand where you want to go. What would be the next step?
Matthew Giannini Thu 14 Apr 2016
We just posted a draft proposal (v0.1.0) for standardizing Haystack authentication over HTTP. Here's the forum post. Let's move discussion to that thread for now.