Matthew Giannini Sun 7 Oct 2012
The Zinc BNF indicates nanosecond precision for encoding:
However, the HTime constructors enforce millisecond precision. Is the BNF incorrect or the reference implementation?
Brian Frank Tue 9 Oct 2012
On one hand I'm somewhat of the opinion that we should allow arbitrary precision, and let implementations decide how precisely they want to store the value. But on the other hand, if we don't specify and require a given precision, then round-tripping a timestamp might cause loss of precision. So maybe it would be best to require a given precision to avoid weird interop problems. In that case, milliseconds are actually a little too coarse for some power quality applications. So my proposal would be that we stick with nanosecond-level precision.
Andy Frank Tue 9 Oct 2012
I would specify it: +1 on ns precision.
Matthew Giannini Tue 9 Oct 2012
I can see the argument for nanosecond precision.
However, using nanoseconds means that we can never turn an HTime or HDateTime back into a standard Java object (Calendar, Date, or JodaTime), because those only have millisecond precision. This seems really unfortunate. There are a lot of existing libraries with useful date/time functions, e.g. adding days/minutes/years, testing for before/after, etc. We wouldn't be able to make use of them without essentially truncating back to millisecond precision.
But maybe that's not a big deal. Any thoughts on this?
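The truncation concern above can be sketched with plain java.util.Date; the nanosecond timestamp below is a made-up illustrative value, not anything from the Haystack API:

```java
import java.util.Date;

public class PrecisionDemo {
    public static void main(String[] args) {
        // Hypothetical nanosecond timestamp: 1 second plus 123,456,789 ns past the epoch
        long nanosSinceEpoch = 1_123_456_789L;

        // java.util.Date only stores milliseconds, so the conversion
        // integer-divides away everything below 1 ms
        Date d = new Date(nanosSinceEpoch / 1_000_000L);
        System.out.println(d.getTime()); // prints 1123

        // Converting back gives 1,123,000,000 ns: the trailing 456,789 ns are lost
        long roundTripped = d.getTime() * 1_000_000L;
        System.out.println(roundTripped == nanosSinceEpoch); // prints false
    }
}
```

Going one way (nanos to Date, or Date to nanos) is always fine; it is only the round trip that silently drops the sub-millisecond digits.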
Brian Frank Sun 14 Oct 2012
I would probably restate it this way: you can never "round trip" an HDateTime to Java millis and back again, since there would be a loss of precision. But you can easily go one way or the other, for example converting from a java.util.Date to an HDateTime or vice versa.
This is definitely a topic I gave a lot of thought to years ago when designing Fantom's date/time classes. One of the main reasons we decided to make Fantom's precision nanosecond-based rather than millisecond-based was that it was clear the Java APIs were suffering from their original decision to use millis. This is why you have all sorts of awkward hacks, like java.sql.Timestamp subclassing java.util.Date to add nanosecond precision, and the overloading of Thread.sleep with separate millis and nanos parameters.
So given a clean slate, I think standardizing on nanosecond precision is a lot more future-proof. Not to mention, as I said, that some domains like power quality really do require this precision. So I definitely think nanoseconds is the way to go here.
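The java.sql.Timestamp hack mentioned above is easy to demonstrate: the class is a java.util.Date subclass that keeps the sub-second part in a separate nanos field, so the same instant has two overlapping representations in one object:

```java
import java.sql.Timestamp;

public class TimestampHack {
    public static void main(String[] args) {
        Timestamp ts = new Timestamp(1_000L); // 1 second past the epoch
        ts.setNanos(123_456_789);             // replaces the sub-second part

        // The nanosecond view keeps full precision...
        System.out.println(ts.getNanos()); // prints 123456789

        // ...but the inherited millisecond view truncates it
        System.out.println(ts.getTime()); // prints 1123
    }
}
```

This split-field design is exactly the kind of awkwardness that a nanosecond-based representation avoids from the start.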