snabelen.no is one of many independent Mastodon servers you can use to take part in the decentralized social web.
A Norwegian home for the decentralized microblogging platform.

#ntp

5 posts · 3 participants · 2 posts today

A list of known Time formatting and storage bugs as documented by Wikipedia:

Note the 2036 NTP rollover (NTP's 32-bit seconds-since-1900 timestamp wraps in February 2036), which occurs before the better-known 2038 Unix "Epochalypse" 🤔

Apparently the Xbox 360 has a known issue where the console's system time can only be set forward as far as 23:59 on 31 December 2025.

The clock will continue to advance into 2026 and beyond; however, users cannot manually set the system date past this point.

en.wikipedia.org/wiki/Time_for

en.wikipedia.org · Time formatting and storage bugs - Wikipedia

I just spent probably two weeks implementing an #NTP server as a #unikernel in #OCaml. It was pretty... tough! My latest result is that my "skew" is ~3.5e-7, where chrony's skew is around 1e-6. In other words, the error is overall smaller in my implementation than in chrony's.

I will continue to compare metrics, but it is quite satisfying to confirm the suitability of unikernels for this type of service.
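For the chrony side of that comparison, one quick way to read its own frequency and skew estimates (a hedged example, assuming chronyd is running locally and chronyc is installed):

chronyc tracking | grep -E 'Frequency|Skew'   # chrony reports both values in ppm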

Okay, it's time for the big #ntp and #ptp wrap-up post. My week-long timing project spiraled out of control and turned into a two-month monster, complete with 7 (ish?) GPS timing devices, 14 different test NICs, and a dozen different test systems.

What'd I learn along the way? See scottstuff.net/posts/2025/06/1 for the full list (and links to measurements and experimental results), but the top few are:

1. It's *absolutely* possible to get single-digit nanosecond time syncing with NTP between a pair of Linux systems with Chrony in a carefully-constructed test environment. Outside of a lab, 100-500 ns is probably more reasonable with NTP on a real network, and even that requires carefully selected NICs. But single-digit nanoseconds *are* possible. NTP isn't just for millisecond-scale time syncing.
2. Generally, PTP on the same hardware shows similar performance to NTP in a lab setting, with a bit less jitter. I'd expect it to scale *much* better in a real network, though. However, PTP mostly requires higher-end hardware (especially switches) and a bit more engineering work. Plus many older NICs just aren't very good at PTP (especially ConnectX-3s).
3. Intel's NICs, *especially* the E810 and to a lesser extent the i210, are very good at time accuracy. Unfortunately their X710 isn't as good, and the i226 is mixed. Mellanox is less accurate in my tests, with 200 ns of skew, but still far better than Realtek and other consumer NICs.
4. GPS receivers aren't really *that* accurate. Even good receivers "wander" around 5-30 ns from second to second.
5. Antennas are critical. The cheap, flat window ones aren't a good choice for timing work. (Also, they're not actually supposed to be used in windows; they generally want a ground plane.)
6. Your network probably has more paths with asymmetrical timing in it than you'd have expected. ECMP, LACP, and 2.5G/5G/10Gbase-T probably all negatively impact your ability to get extremely accurate time.

Anyway, it's been a fun journey. I had a good #time.
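For anyone wanting to reproduce point 1, here is a minimal sketch of the client-side Chrony ingredients it typically implies; the directives and the server name "ntp1.lan" are illustrative assumptions rather than the author's actual config, and hwtimestamp only helps on NICs/drivers that support hardware timestamping:

# client-side chrony.conf sketch (not a drop-in config)
server ntp1.lan iburst minpoll -2 maxpoll -2 xleave   # nearby server, ~4 polls/s, interleaved mode
hwtimestamp *                                         # use NIC hardware timestamps where available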

scottstuff.net · Timing Conclusions

This is the 13th article that I've written lately on NTP and PTP timing with Linux. I set out to answer a couple questions for myself and ended up spending two months swimming in an ocean of nanosecond-scale measurements. When I started, I saw a lot of misinformation about NTP and PTP online. Things like:

Conventional wisdom said that NTP was good for millisecond-scale timing accuracy. I expected that to be rather pessimistic, and expected to see low microsecond to high nanosecond-range syncing with Chrony, at least under controlled circumstances. In a lab environment, it's possible to get single-digit nanosecond time skew out of Chrony. With a less-contrived setup, 500 ns is probably a better goal. In any case "milliseconds" is grossly underselling what's possible.

Conventional wisdom also said that PTP was better than NTP when you really cared about time, but that it was more difficult to use and made more requirements on hardware. You know, conventional wisdom is actually right sometimes. PTP is somewhat more difficult to set up and really wants to have hardware support from every switch and every NIC, but once you have that it's pretty solid.

Along the way I tested NTP and PTP "in the wild" on my network, built a few new GPS-backed NTP (and PTP) servers, collected a list of all known NICs with timing features (specifically GNSS modules or PPS inputs), built a testing environment for measuring time-syncing accuracy to within a few nanoseconds, tested the impact of various Chrony polling settings, tested 14 different NICs for time accuracy, and tested how much added latency PTP-aware switches add. I ran into problems with PTP on Mellanox/nVidia ConnectX-4 and Intel X710 NICs (weird stuff: the X710 doesn't seem to like PTP v2.1, and it doesn't like it when you ask it to timestamp packets too frequently). I fought with Raspberry Pis. I tested NICs until my head hurt. I fought with statistics. This little project that I'd expected to last most of a week has now dragged on for two months. It's finally time to summarize what I've learned and celebrate The End Of Time.
Continued thread

My overnight tests finished!

In my environment, I get the best #NTP accuracy with #Chrony when using `minpoll -2 maxpoll -2` and not applying any filtering. That is, have the client poll the NTP server 4 times per second. Anything between `minpoll -4` (16x/second) and `minpoll 0` (1x/second) should have similar offsets, but the jitter increases with fewer than 4 polls per second.

scottstuff.net/posts/2025/06/0

Chrony has a `filter` option that applies a median filter to measurements; the manual claims that it's useful for high-update rate local servers. I don't see any consistent advantage to `filter` in my testing and larger filter values (8 or 16) consistently make everything worse.
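As a concrete (hypothetical) illustration of the two setups being compared here, the variants differ only in the per-source options; "ntp1.lan" stands in for a server you control:

# 4 polls per second, no median filter (the best performer in these tests)
server ntp1.lan iburst minpoll -2 maxpoll -2

# same polling rate with chrony's per-source median filter enabled
server ntp1.lan iburst minpoll -2 maxpoll -2 filter 4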

When polling 4x/second on a carefully constructed test network, NTP on the client machine is less than 2 ns away from #PTP with 20 ns of jitter. I know that PTP on the client is 4 ns away from PTP on the server (w/ 2 ns of jitter), as measured via oscilloscope.

So, you could argue that this counts as single-digit nanosecond NTP error, although with 20 ns of jitter that's probably a bit optimistic. In any case, that's *well* into the range where cable lengths are a major factor in accuracy. It's a somewhat rigged test environment, but it's still much better than I'd have expected from NTP.

scottstuff.net · Measuring NTP and PTP Accuracy With An Oscilloscope (part 2: Chrony's poll and filter settings)

In part 1 yesterday I went through all of the work needed to measure NTP and PTP accuracy between two computers on my desk using an oscilloscope. I demonstrated that PTP was accurate to a mean of 4 ns with 2 ns of standard deviation. Under ideal circumstances NTP was only slightly worse at 8–10 ns with a SD of 12–20 ns, depending on the test setup. I measured these with extremely high NTP polling rates, thousands of times more frequent than Chrony's defaults. I ran a few tests with slower polling rates but had a hard time getting stable results due to issues with my test environment. I went back and rethought a few things, and I was able to do a bunch more testing overnight.

I wanted answers to these two questions: What is the best polling rate for Chrony on an extremely low-latency, low-jitter LAN? Does Chrony's per-source filter setting improve accuracy?

After running tests overnight, I have my answers: For the best accuracy, use something between minpoll -4 and minpoll -1 (please only poll this aggressively when you control the NTP server that you're polling; don't DoS public servers). Above 1 second or so, error starts increasing exponentially. Very high polling rates (1/128th and sometimes 1/64th of a second) show added error as well. The filter keyword is never a win in my environment. Small values (filter=2 and filter=4) don't make a huge difference in results; larger values add increasing amounts of error.

[Chart: "NTP Clock offset by effective polling rate and filter" - offset in nanoseconds vs. effective seconds between polls (including filtering period), one line per setting from filter=1 to filter=16]

[Chart: "NTP Clock jitter by effective polling rate and filter" - jitter in nanoseconds on the same axes and filter settings]

Read on for details on how I measured these.

So, here's something I'm working on this weekend.

I'm trying to self-host my own NTP server. NTP services keep the internet's clocks synced. The biggest public NTP services are the NTP Pool and NIST.

NTP Pool is a community-led project that I try to use, but it caused my TrueNAS box to desync for some reason.

NIST is, uh, run by the US Government. Which I don't want to rely on if possible lol.

So what I have here is a Raspberry Pi 3 and a GNSS receiver that picks up signals from multiple satellites, the kind normally used for navigation. Those satellites carry super-accurate clocks, so I can take their time signal and sync my server and other stuff at home to it. That way I always have time sync even if the internet goes down or the public NTP servers become unreliable for whatever reason.

The antenna is the tiny ceramic rectangle in the middle of a square of aluminum foil. Because the signal comes from satellites out in space, the antenna design is quite particular.

This isn't the final hardware; if it works, I'll be using smaller, less powerful Pi Zeros instead. I'd have two of them in a 3D-printed case with proper layouts and hard cables.
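One common way to wire up the time-server side of a build like this is gpsd plus chrony; a rough sketch, assuming gpsd exposes NMEA time via SHM segment 0 and the receiver's PPS pin appears as /dev/pps0 (the offset is a placeholder that needs calibrating, and none of this is taken from the post itself):

# chrony.conf sketch on the Pi
refclock SHM 0 refid NMEA offset 0.200 noselect   # coarse NMEA time from gpsd, labels the PPS seconds
refclock PPS /dev/pps0 lock NMEA refid PPS        # the precise pulse-per-second signal
allow 192.168.0.0/16                              # serve time to the local network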

Okay, hopefully that's it for #NTP for now:

scottstuff.net/posts/2025/05/1

I'm seeing up to 200 ns of difference between the various GPS devices on my desk (one is an outlier; they should really all be closer than that), plus 200-300 ns of network-induced variability on NTP clients, giving me somewhere between 200 and 500 ns of total error, depending on how I measure it.

So, it's higher than I'd really expected to see when I started, but *well* under my goal of 10 μS.

scottstuff.net · The Limits of NTP Accuracy on Linux

Lately I've been trying to find (and understand) the limits of time syncing between Linux systems. How accurate can you get? What does it take to get that? And what things can easily add measurable amounts of time error? After most of a month (!), I'm starting to understand things. This is kind of a follow-on to a previous post, where I walked through my setup and goals, plus another post where I discussed time syncing in general.

I'm trying to get the clocks on a bunch of Linux systems on my network synced as closely as possible so I can trust the timestamps on distributed tracing records that occur on different systems. My local network round-trip times are in the 20–30 microsecond (μS) range and I'd like clocks to be less than 1 RTT apart from each other. Ideally, they'd be within 1 μS, but 10 μS is fine.

It's easy to fire up Chrony against a local GPS-backed time source (technically GNSS, which covers multiple satellite-backed navigation systems, not just the US GPS system, but I'm going to keep saying "GPS" for short) and see it claim to be within X nanoseconds of GPS, but it's tricky to figure out if Chrony is right or not. Especially once it's claiming to be more accurate than the network's round-trip time (20 μS or so), the amount of time needed for a single CPU cache miss (50-ish nanoseconds), or even the amount of time that light would take to span the gap between the server and the time source (about 5 ns per meter).

I've spent way too much time over the past month digging into time, and specifically the limits of what you can accomplish with Linux, Chrony, and GPS. I'll walk through all of that here eventually, but let me spoil the conclusion and give some limits:

GPSes don't return perfect time. I routinely see up to 200 ns differences between the 3 GPSes on my desk when viewing their output on an oscilloscope. The time gap between the 3 sources varies every second, and it's rare to see all three within 20 ns of each other. Even the best GPS timing modules that I've seen list ~5 ns of jitter on their datasheets. I'd be surprised if you could get 3-5 GPS receivers to agree within 50 ns or so without careful management of consistent antenna cable length, etc.

Even small amounts of network complexity can easily add 200-300 ns of systemic error to your measurements.

Different NICs and their drivers vary widely on how good they are for sub-microsecond timing. From what I've seen, Intel E810 NICs are great, Intel X710s are very good, Mellanox ConnectX-5 are okay, Mellanox ConnectX-3 and ConnectX-4 are borderline, and everything from Realtek is questionable.

A lot of Linux systems are terrible at low-latency work. There are a lot of causes for this, but one of the biggest is random "stalls" due to the system's SMBIOS running to handle power management or other activities, and "pausing" the observable computer for hundreds of microseconds or longer. In general, there's no good way to know if a given system (especially cheap systems) will be good or bad for timing without testing them. I have two cheap mini PC systems that have inexplicably bad time syncing behavior (1300-2000 ns), and two others with inexplicably good time syncing (20-50 ns). Dedicated server hardware is generally more consistent.

All in all, I'm able to sync clocks to within 500 ns or so on the bulk of the systems on my network. That's good enough for my purposes, but it's not as good as I'd expected to see.

Ah ha! Here we go, a reasonably fundamental limit to #NTP accuracy on my network.

I'm starting to think that ~300 ns is about the limit of time accuracy on my network, and even that's probably a bit optimistic.

Here's one solid example. I have 2 identical NTP servers (plus several non-identical ones that I'm ignoring here) with their own antennas connected at different points on my network. Then I have 8 identical servers syncing their time from NTP once per second using Chrony.

This is a graph of the delta between NTP1's 1-hour median offset and NTP2's 1-hour median offset, showing one line for each server.

Notice that half of them think that NTP1 is faster and half think that NTP2 is faster.

This is almost certainly due to ECMP; each server is attached to 2 L3 switches. Each NTP server is connected to a different L2 switch, and each of those L2 switches is connected to both L3 switches via MLAG.

For some reason, one ECMP path seems to be faster than the other, so server-NTP pairs that hash onto the fast path go 200-400ns faster than server-NTP pairs that take the other path.
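For reference, the per-source offsets behind a graph like this can be collected on each client with chrony's reporting commands; a hedged example, assuming chrony is the client on every box:

chronyc -c sourcestats   # CSV per-source offset/frequency statistics, easy to log and diff
chronyc sources -v       # human-readable view of the same sources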

Today I decided to try the DCF77 clock on my gateway machine again after a long hiatus.

It still doesn't work as it should :flan_disappointed::

1. the GUDEADS Expert mouseCLOCK USB II is recognised and the LED on it is green (i.e. "signal"):

Apr 28 11:24:51 gw-gva /bsd: udcf0 at uhub0 port 4 configuration 1 interface 0 "GUDEADS Expert mouseCLOCK USB II" rev 2.00/6.00 addr 2

2. ntpctl -s all shows it is seen but never syncs:

sensor
   wt gd st  next  poll          offset  correction
udcf0
    1  0  0    2s   15s      - sensor not valid -
I don't desperately need to make it work; it's more of a "I would like to see it work, at least once in my life" thing, as I do have a GPS/PPS clock on my network :flan_XD:
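For anyone attempting the same thing, the usual way to declare such a sensor in OpenBSD's ntpd is sketched below; the correction value is only an illustration, and it won't fix a receiver that ntpd never marks valid, which is the problem here:

# /etc/ntpd.conf sketch
sensor udcf0 correction 0    # DCF77 receiver; correction is in microseconds
servers pool.ntp.org         # keep network time as a fallback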

I've been providing public #NTP systems to the NTPPool project for more than 10 years, and with the BoxyBSD locations I'm bringing them all into the NTPPool as well - starting now with #Milan (IT), Kansas (US) & Amsterdam (NL).

Any #proxmox and #ntp experts here?
My Proxmox server isn't keeping the right time.

root@pve:~# timedatectl
Local time: Sat 2025-04-05 00:07:08 CEST
Universal time: Fri 2025-04-04 22:07:08 UTC
RTC time: Fri 2025-04-04 22:07:08
Time zone: Europe/Paris (CEST, +0200)
System clock synchronized: no
NTP service: active
RTC in local TZ: no

And yet the NTP service shows as "active" - so what's wrong?
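A hedged first round of checks for a symptom like this, assuming a stock Proxmox install (newer releases ship chrony, older ones used systemd-timesyncd):

systemctl status chrony systemd-timesyncd    # which time-sync daemon is actually running?
chronyc sources -v                           # if chrony: is any server reachable and selected?
chronyc tracking
timedatectl timesync-status                  # if systemd-timesyncd is in use instead
journalctl -u chrony -u systemd-timesyncd --since "1 hour ago"   # look for DNS/firewall/UDP 123 errors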

Continued thread

2️⃣ Transparent, Trustworthy Time with NTP and NTS (2021-12)
Number two is an old-timer, but still going strong.

The first article in the #NTS series, it introduces the need for a secure time mechanism and why it is important that Internet time also makes the (easy!) move from #NTP to NTS. It is almost as painless as moving from #HTTP to #HTTPS today, thanks to @letsencrypt and the #ACME protocol (Automated Certificate Management Environment).
#LetsEncrypt
netfuture.ch/2021/12/transpare

Netfuture: The future is networked · Transparent, Trustworthy Time with NTP and NTS
More from Marcel Waldvogel
Continued thread

8️⃣ Configuring an NTS-capable NTP server (2022-01)
NTP (Network Time Protocol) is probably how your computer or mobile device knows the current time, often accurate to within a few milliseconds.

Keeping accurate time is of the essence for many applications, including deciding whether a security certificate has expired.

#NTS is to #NTP what #HTTPS is to #HTTP: it provides authenticity of the source and prevents modification in transit. And it's easy to upgrade.

netfuture.ch/2022/01/configuri
netfuture.ch/public-nts-server
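As a concrete taste of how small the jump is, here is a hedged sketch of the chrony 4.x syntax on both sides; the hostname and certificate paths are placeholders, and the linked article may well use different software:

# NTS client: one extra word on the server line
server ntp.example.net iburst nts

# NTS-enabled server: point chronyd at an ACME/Let's Encrypt certificate
ntsservercert /etc/letsencrypt/live/ntp.example.net/fullchain.pem
ntsserverkey /etc/letsencrypt/live/ntp.example.net/privkey.pem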

Netfuture: The future is networked · Configuring an NTS-capable NTP server
More from Marcel Waldvogel