Blogged: Announcing time.cincura.net – Stratum 1 NTP and NTS time server located in the Czech Republic, Europe
A list of known time formatting and storage bugs, as documented by Wikipedia:
Note the 2036 NTP rollover, which occurs before the better-known 2038 Unix date "Epochalypse".
Apparently the Xbox 360 has a known issue where the system date can only be set manually up to 23:59 on 31 December 2025.
The clock itself will keep running into 2026 and beyond; users just cannot set the date past that point.
https://en.wikipedia.org/wiki/Time_formatting_and_storage_bugs
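For the curious, the arithmetic behind those two dates: NTP's 64-bit timestamp spends 32 bits on seconds since 1900-01-01, so it wraps after 2^32 s ≈ 136 years, on 7 February 2036; a signed 32-bit Unix time_t counts seconds since 1970-01-01 and overflows after 2^31 s ≈ 68 years, on 19 January 2038.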
I just spent probably two weeks implementing an #NTP server as a #unikernel in #OCaml. It was pretty... tough! My latest result is that my "skew" is ~3.5e-7, where chrony's skew is around 1e-6. In other words, the error is overall smaller in my implementation than in chrony's.
I will continue to compare metrics, but it is quite satisfying to confirm the suitability of unikernels for this type of service.
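For anyone comparing against chrony the same way: the "skew" chrony reports is its error bound on the frequency estimate, in ppm, and you can read it straight from the tracking status (assuming a stock chrony install):

  # Frequency estimate and its error bound ("skew"), in ppm
  chronyc tracking | grep -E 'Frequency|Skew'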
Okay, it's time for the big #ntp and #ptp wrap-up post. My week-long timing project spiraled out of control and turned into a two-month monster, complete with 7 (ish?) GPS timing devices, 14 different test NICs, and a dozen different test systems.
What'd I learn along the way? See https://scottstuff.net/posts/2025/06/10/timing-conclusions/ for the full list (and links to measurements and experimental results), but the top few are:
1. It's *absolutely* possible to get single-digit nanosecond time syncing with NTP between a pair of Linux systems with Chrony in a carefully-constructed test environment. Outside of a lab, 100-500 ns is probably more reasonable with NTP on a real network, and even that requires carefully selected NICs. But single-digit nanoseconds *are* possible. NTP isn't just for millisecond-scale time syncing.
2. Generally, PTP on the same hardware shows similar performance to NTP in a lab setting, with a bit less jitter. I'd expect it to scale *much* better in a real network, though. However, PTP mostly requires higher-end hardware (especially switches) and a bit more engineering work. Plus many older NICs just aren't very good at PTP (especially ConnectX-3s).
3. Intel's NICs, *especially* the E810 and to a lesser extent the i210, are very good at time accuracy. Unfortunately their X710 isn't as good, and the i226 is mixed. Mellanox is less accurate in my tests, with 200 ns of skew, but still far better than Realtek and other consumer NICs. (A quick capability check is sketched after this list.)
4. GPS receivers aren't really *that* accurate. Even good receivers "wander" around 5-30 ns from second to second.
5. Antennas are critical. The cheap, flat window ones aren't a good choice for timing work. (Also, they're not actually supposed to be used in windows; they generally want a ground plane.)
6. Your network probably has more paths with asymmetrical timing in it than you'd have expected. ECMP, LACP, and 2.5G/5G/10Gbase-T probably all negatively impact your ability to get extremely accurate time.
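Speaking of NIC support: a quick way to check whether a NIC can hardware-timestamp at all (the prerequisite for any of the numbers above) is `ethtool`, with `eth0` standing in for your actual interface name:

  # List timestamping capabilities and the PTP hardware clock index
  ethtool -T eth0

If the output lists hardware-transmit/hardware-receive capabilities and a PTP Hardware Clock other than "none", the NIC can at least play the game; how *well* it plays is what the measurements above are about.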
Anyway, it's been a fun journey. I had a good #time.
My overnight tests finished!
In my environment, I get the best #NTP accuracy with #Chrony when using `minpoll -2 maxpoll -2` and not applying any filtering. That is, have the client poll the NTP server 4 times per second. Anything between `minpoll -4` (16x/second) and `minpoll 0` (1x/second) should have similar offsets, but the jitter increases with fewer than 4 polls per second.
https://scottstuff.net/posts/2025/06/03/measuring-ntp-accuracy-with-an-oscilloscope-2/
Chrony has a `filter` option that applies a median filter to measurements; the manual claims that it's useful for high-update rate local servers. I don't see any consistent advantage to `filter` in my testing and larger filter values (8 or 16) consistently make everything worse.
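For reference, the client-side chrony.conf this boils down to is tiny (the server name is a stand-in, and the `filter` line is shown only because it's discussed above):

  # Poll the server 4x per second (2^-2 s); pinning minpoll == maxpoll fixes the rate
  server ntp1.example.net minpoll -2 maxpoll -2
  # Optional median filter over N samples; in my tests it gave no consistent
  # advantage, and larger values (8 or 16) made everything worse:
  #server ntp1.example.net minpoll -2 maxpoll -2 filter 4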
When polling 4x/second on a carefully constructed test network, NTP on the client machine is less than 2 ns away from #PTP with 20 ns of jitter. I know that PTP on the client is 4 ns away from PTP on the server (w/ 2 ns of jitter), as measured via oscilloscope.
So, you could argue that this counts as single-digit nanosecond NTP error, although with 20 ns of jitter that's probably a bit optimistic. In any case, that's *well* into the range where cable lengths are a major factor in accuracy. It's a somewhat rigged test environment, but it's still much better than I'd have expected from NTP.
Amid all the sad news and all the nonsense, here's something nice from the #NASA "cut list": "Nuclear Thermal Propulsion and Nuclear Electric Propulsion are cancelled." So, setting aside the use of nuclear reactors for rocket propulsion: good.
So, here's something I'm working on this weekend.
I'm trying to self-host my own NTP server. NTP services keep the internet's clocks synced. The biggest public NTP servers are the NTP Pool and NIST.
The NTP Pool is a community-led project, which I try to use, but it caused my TrueNAS box to desync for some reason.
NIST is, uh, run by the US Government. Which I don't want to rely on if possible lol.
So what I have here is a Raspberry Pi 3 and a GNSS receiver that connects to multiple satellites in the sky, the kind typically used for navigation. Those satellites carry super-accurate clocks; I can pick up their time signal and then sync my server and other stuff at home to it. That way I always have time sync even if the internet goes down or the public NTP servers become unreliable for whatever reason.
The antenna is the tiny rectangular ceramic in the middle of a square of aluminum foil. Because the satellites are way out in space, the antennas have to be built to fairly exacting designs.
This isn't the final hardware; if it works, I'll be using smaller, less powerful Pi Zeros instead. I'd have two of them in a 3D-printed case with proper layouts and hard-wired cables.
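The software side, for anyone following along, is usually gpsd feeding chrony: coarse time from the NMEA sentences plus a precise PPS edge. A minimal sketch, assuming gpsd is already talking to the receiver and the pulse shows up as /dev/pps0 (paths, offsets, and the subnet are illustrative):

  # /etc/chrony/chrony.conf
  # Coarse NMEA time from gpsd via shared memory
  refclock SHM 0 refid NMEA offset 0.2 delay 0.2
  # Precise pulse-per-second edge, locked to the NMEA source
  refclock PPS /dev/pps0 lock NMEA
  # Serve time to the rest of the house
  allow 192.168.0.0/16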
Okay, hopefully that's it for #NTP for now:
https://scottstuff.net/posts/2025/05/19/ntp-limits/
I'm seeing up to 200 ns of difference between the various GPS devices on my desk (one is an outlier; they should really all be much closer than that), plus 200-300 ns of network-induced variability on NTP clients, giving me somewhere between 200 and 500 ns of total error, depending on how I measure it.
So, it's higher than I'd really expected to see when I started, but *well* under my goal of 10 μs.
Aha! Here we go: a reasonably fundamental limit to #NTP accuracy on my network.
I'm starting to think that ~300 ns is about the limit of time accuracy on my network, and even that's probably a bit optimistic.
Here's one solid example. I have 2 identical NTP servers (plus several non-identical ones that I'm ignoring here) with their own antennas, connected at different points on my network. Then I have 8 identical servers syncing their time via NTP once per second using Chrony.
This is a graph of the delta between NTP1's 1-hour median offset and NTP2's 1-hour median offset, showing one line for each server.
Notice that half of them think that NTP1 is faster and half think that NTP2 is faster.
This is almost certainly due to ECMP: each server is attached to 2 L3 switches, each NTP server is connected to a different L2 switch, and each of those L2 switches is connected to both L3 switches via MLAG.
For some reason, one ECMP path seems to be faster than the other, so server-NTP pairs that hash onto the fast path run 200-400 ns faster than pairs that take the other path.
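If you want to spot this effect on your own network, chrony exposes roughly the per-source statistics the graph above was built from; comparing the offset column for the two servers across several clients is enough to see a split like this. On any one client:

  # Estimated offset, frequency, and jitter for each configured source
  chronyc sourcestats -v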
Today I decided to try the DCF77 clock on my gateway machine again after a long hiatus.
It still doesn't work as it should:
1. the GUDEADS Expert mouseCLOCK USB II is recognised and the LED on it is green (i.e. "signal"):
Apr 28 11:24:51 gw-gva /bsd: udcf0 at uhub0 port 4 configuration 1 interface 0 "GUDEADS Expert mouseCLOCK USB II" rev 2.00/6.00 addr 2
2. ntpctl -s all shows it is seen but never syncs:
sensor
   wt gd st  next  poll          offset  correction
udcf0
    1  0  0    2s   15s   - sensor not valid -
I don't desperately need it to work; it was more of a "I would like to see it work, at least once in my life" thing, as I do have a GPS/PPS clock on my network.
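For completeness, hooking the sensor into ntpd(8) is a one-liner; it's the driver flagging the sensor as valid that's failing here, not the config. A minimal /etc/ntpd.conf sketch:

  # Use the DCF77 sensor, with network servers as backup
  sensor udcf0
  servers pool.ntp.org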
Any #proxmox and #ntp connoisseurs around here?
My Proxmox server's clock is wrong:
root@pve:~# timedatectl
Local time: Sat 2025-04-05 00:07:08 CEST
Universal time: Fri 2025-04-04 22:07:08 UTC
RTC time: Fri 2025-04-04 22:07:08
Time zone: Europe/Paris (CEST, +0200)
System clock synchronized: no
NTP service: active
RTC in local TZ: no
And yet the NTP service is indeed "active", so what's wrong?
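For what it's worth, "NTP service: active" only means a time-sync client is enabled, not that it has actually synchronised. A quick way to dig further, assuming a recent Proxmox release where chrony does the syncing (older setups used systemd-timesyncd):

  # Which daemon is doing the work, and can it reach any sources?
  systemctl status chrony
  chronyc sources -v
  # On systemd-timesyncd setups instead:
  timedatectl timesync-status

Often the culprit is simply an unreachable NTP server or blocked outbound UDP port 123.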
Transparent, Trustworthy Time with NTP and NTS (2021-12)
Number two is an old-timer, but still going strong.
The first article in the #NTS series, it introduces the need for a secure time mechanism, and why it is important that Internet time also makes the (easy!) move from #NTP to NTS. It is almost as painless as moving from #HTTP to #HTTPS is today, thanks to @letsencrypt and the #ACME protocol (Automated Certificate Management Environment)
#LetsEncrypt
https://netfuture.ch/2021/12/transparent-trustworthy-time-with-ntp-and-nts/
Configuring an NTS-capable NTP server (2022-01)
NTP (Network Time Protocol) is probably how your computer or mobile device knows the current time, often accurate to within a few milliseconds.
Keeping accurate time is of the essence for many applications, including checking whether a security certificate has expired.
#NTS is to #NTP what #HTTPS is to #HTTP: it provides authenticity of the source and prevents modification in transit. And it is easy to upgrade.
https://netfuture.ch/2022/01/configuring-an-nts-capable-ntp-server/
https://netfuture.ch/public-nts-server-list/
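To give a sense of how small the delta really is, here is roughly what NTS looks like in chrony on both ends (hostnames and certificate paths are illustrative; the articles above have the full details):

  # Server side (chrony.conf): serve NTS using an ACME/Let's Encrypt certificate (example paths)
  ntsservercert /etc/letsencrypt/live/ntp.example.org/fullchain.pem
  ntsserverkey /etc/letsencrypt/live/ntp.example.org/privkey.pem

  # Client side (chrony.conf): just add the nts option
  server ntp.example.org iburst nts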
Today the transport agencies delivered their proposed priorities for the next #NTP (Norway's National Transport Plan). It will be exciting to see how the government prioritises within a tighter scope for action; many will no doubt be disappointed. #TutOgKjør #NorskTut https://www.regjeringen.no/no/dokumenter/ntp-20252036-prioriteringsoppdrag-svar-fra-transportvirksomhetene/id2969831/