We are indebted to South Pender and Catalin for developing this post. Since this is a sticky, please confine follow-on posts to those germane to the topic.

Methods of Determining the Accuracy of a Watch
WUS HEQ Members, March, 2010
There are several ways of determining just how much your watch is gaining or losing time against an absolute standard. These methods differ in sophistication and in the accuracy of your results. Let’s consider three methods here. There are undoubtedly more than this, but these three are all we need to get the job done.
Method 1: The Naked-Eye Offset Determination Method
That’s a fancy title we made up for this, the simplest and least precise of the methods we’ll consider. You will need:
- access to a precise representation of the time, based on an atomic clock and referred to below as the reference clock;
- your watch.
One source that is readily available as the reference clock is the time representation on the website:
The time zone can be changed easily once on this website. The site displays the time in the form Hr:Min:Sec as XX:XX:XX. The accuracy of the time display at any given moment is shown just below the time as "Accuracy within 0.X seconds," where 0.X can be as low as 0.1 and can occasionally exceed one minute; most of the time it is around 0.2. It is best to use this reference clock only when it is showing 0.1- or 0.2-second accuracy. This suggestion applies to Method 2 below as well.
Procedure. With the 2-digit seconds (in the display XX:XX:XX) from the reference clock changing on your computer screen, hold your watch close to the display on the screen and, keeping an eye on both the reference clock and your watch, note the minute marker your watch's second hand lands on when the reference clock hits XX:XX:00 (or some other easy reference point). Let's say you are doing this at around 9:30. When the reference clock shows exactly 09:30:00, on what minute marker has the second hand of your watch just landed? Say the second hand has landed at the 9-minute mark (this could be seen as a "second marker" in the present context), while the minute and hour hands are registering 9:30; in that case, your watch is running 9 seconds fast. Or, if your watch's second hand landed at the 54-minute mark when the minute and hour hands were registering 9:29, your watch would be running 6 seconds slow.
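For readers who like to keep a log, the offset arithmetic just described can be sketched in a few lines of Python (the function name is ours, purely for illustration; a positive result means the watch is fast):

```python
from datetime import datetime

def naked_eye_offset(reference: str, watch: str) -> int:
    """Offset of the watch from the reference clock, in whole seconds.
    Positive means the watch is running fast; negative means slow."""
    fmt = "%H:%M:%S"
    ref = datetime.strptime(reference, fmt)
    wat = datetime.strptime(watch, fmt)
    return int((wat - ref).total_seconds())

# The two examples from the text:
print(naked_eye_offset("09:30:00", "09:30:09"))  # -> 9   (9 seconds fast)
print(naked_eye_offset("09:30:00", "09:29:54"))  # -> -6  (6 seconds slow)
```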
A good way to track the time-keeping accuracy of your watch would be to set it to coincide precisely with the reference clock. That is, stop the watch and have it ready to restart at the second the reference clock hits XX:XX:00 (with, of course, the hour and minute correct). At that point, your watch will be absolutely precise, or at the identical time point as the atomic clock from which the reference clock is getting its data. Some time later, you can do the time check described in the above paragraph, and any displacement from what the reference clock is showing will index the gain or loss taking place in your watch over that time period. Thus, say that you set your watch to be exactly on with the atomic-clock reading, and 30 days later check how your watch is doing with respect to the atomic clock. If now (30 days later) it is indicating a time that is 3 seconds ahead of that on the reference clock, your watch is running fast at the rate of 3 seconds in 30 days. We often express watch accuracy in seconds per year (or spy). Our result of 3 seconds in 30 days could be transformed into seconds per year by taking (365/30) x 3.0 or 36.5 spy.
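The seconds-per-year extrapolation described above can be sketched in Python (the function name is ours, purely illustrative):

```python
def seconds_per_year(offset_seconds: float, elapsed_days: float) -> float:
    """Extrapolate a measured gain/loss over some number of days
    to a yearly rate (spy), using the (365/days) x offset formula."""
    return (365.0 / elapsed_days) * offset_seconds

# The example from the text: 3 seconds gained over 30 days.
print(f"{seconds_per_year(3.0, 30):.1f} spy")  # -> 36.5 spy
```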
Accuracy of the Method. With practice, one can easily stay within 1-second accuracy, and can probably get to 0.5-second accuracy. By the latter, we mean that you will be able to determine that the watch is more than, say, 2 seconds off but less than 3 seconds off, and so would register it as 2.5 seconds off perfect time. For short time periods between setting the watch to absolutely correct time and checking its accuracy, this represents a relatively poor level of accuracy, but for a longer period (e.g., 6 months) it is adequate to give a sufficiently accurate estimate of seconds-per-year accuracy.
Method 2: The Stopwatch Method
The provenance of this method is likely lost in the mists of time, but we are indebted to WUS Forum Member Oldtimer2 for bringing it to our attention recently. This method produces more accurate results than does the simpler approach in Method 1 above. What follows is what appeared in Oldtimer2’s post, but edited to make it read consistently with the rest of this piece.
- reference clock as above;
- stopwatch (and this can be a very inexpensive sports quartz stopwatch);
- your watch.
Procedure. With the 2-digit seconds (in the display XX:XX:XX) from the reference clock changing on your computer screen, it's easy to judge the accuracy offset in a test watch by eye to within a fraction of a second. First start the stopwatch the instant the reference clock reaches a convenient point, for example, XX:XX:00. Next, glance at your watch and stop the stopwatch at precisely the point the second hand of your watch lands on a minute marker that represents some arbitrary number of seconds ahead of the starting point, for example, XX:XX:10.
Example: We start the stopwatch the precise instant the reference clock shows 10:30:00. Now glancing at our watch, we stop the stopwatch at the exact instant the second hand of the watch lands on the 10-minute (or second) marker with the hour and minute at 10:30. We now read the elapsed time on the stopwatch. If it reads 11.50 seconds, for example, this would indicate that the watch is running 1.50 seconds slow, or behind the reference clock (11.50 – 10.00). On the other hand, if the stopwatch reads 8.25 seconds, then we can conclude that the watch is running 1.75 seconds fast, or ahead of the reference clock (8.25 – 10.00). We can start the stopwatch at any convenient point on the reference clock and stop it at any convenient point on our watch some number of seconds beyond the starting time. For example, we could start the stopwatch when the reference clock is at 10:30:20 and stop it when the watch second hand lands precisely on 10:30:30. Since these times are 10 seconds apart, any deviation from 10.00 seconds on the stopwatch indicates an offset from perfect time (readings larger than 10.00 indicating that the watch is slow, and readings smaller than 10.00 that it is fast).
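The stopwatch arithmetic in the example above can be sketched in Python (names are ours, for illustration only; as before, a positive offset means the watch is fast):

```python
def stopwatch_offset(elapsed: float, interval: float = 10.0) -> float:
    """Watch offset from one stopwatch timing.
    elapsed:  the stopwatch reading in seconds
    interval: nominal seconds between the reference start point and
              the watch marker used to stop (10 s in the examples)
    Positive result = watch running fast; negative = slow."""
    return interval - elapsed

# The two examples from the text:
print(stopwatch_offset(11.50))  # -> -1.5   (watch 1.50 s slow)
print(stopwatch_offset(8.25))   # -> 1.75   (watch 1.75 s fast)
```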
Accuracy of the Method. With this method, you should make several estimates using the same procedure each time; these can be made in rapid succession, taking overall no more than 2 to 3 minutes (for maybe 10 estimates). You would then average these estimates. This averaged result should be within about 0.10 seconds of the true offset, though this level of accuracy will, of course, depend to some extent on the number of single estimates averaged. As Oldtimer2 summarized this method: “…any reaction time delay errors in starting/stopping the stopwatch tend to cancel.
Plus timebase errors in the stopwatch are completely negligible over timescales of a few tens of seconds, [true for] even the most awful quartz movement ever made! I find that with practice five readings seems to easily give 0.10-second accuracy or better (and if you doubt this, you can always test the method by starting and stopping the stopwatch against the reference time source alone).”
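The averaging Oldtimer2 describes can be sketched in Python; the five readings below are hypothetical, and the function name is ours:

```python
from statistics import mean, stdev

def averaged_offset(readings, interval=10.0):
    """Average several stopwatch readings so that reaction-time errors
    tend to cancel. Returns (mean offset, sample standard deviation);
    positive offset = watch running fast."""
    offsets = [interval - r for r in readings]
    return mean(offsets), stdev(offsets)

# Five hypothetical stopwatch readings against a 10-second interval:
readings = [10.12, 9.95, 10.08, 9.98, 10.02]
off, spread = averaged_offset(readings)
print(f"offset {off:+.3f} s, spread {spread:.3f} s")  # offset -0.030 s, spread 0.070 s
```

The spread gives a rough idea of how consistent your hand timing is; more readings tighten the average.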
The earlier note about the time interval between setting the watch and checking it applies to this method as well. To get a good estimate of the seconds-per-year accuracy of your watch, set it exactly to the reference-clock reading first and then, some time later, use the above method to determine the offset over that interval. As before, longer intervals will yield more precise spy values.
Method 3: The Video Method
This method is the most accurate of the three, albeit somewhat more equipment-intensive as well. It appears that the notion of capturing on video the closeness of a watch second hand to the readout of a reference clock has been considered and described, over the years, by several of the more technically-expert members of the WUS HEQ Forum. The specific program described herein, however, is due to forum member Catalin, and this software addition brings this method into the realm of possibility for many watch enthusiasts. What follows is Catalin’s own description of this method.
- A fast (>1 GHz) PC connected to the Internet over a decent connection and synchronized to one of the major Internet time servers just before the tests (this should keep the error in the computer's time to better than about 10 milliseconds (ms). I use a freeware program called AboutTime to sync my computer, since the program also reports the error after each sync, and having two consecutive syncs with very small errors is a good guarantee that your computer is set very, very close to the actual atomic time);
- A decent 'watch program' on the computer that will display the time, including fractions of a second, in a very careful way (so as to always have very small and constant delays). Unfortunately, none of the major operating systems around is even soft-realtime, but I have modified one of my own programs (see below) on Windows, and I believe the errors are in the same 10-20 ms interval and, more important, very constant (so that they cancel out for practical purposes when you calculate the difference between two such similar measurements);
- A decent (LCD) display with a 60 Hz refresh rate or better; the newer 120 Hz displays (some of which are also '3D ready') are even better;
- A decent camera that can record movies at 25/30 (maybe even 50/60, or 100-200 if you have a 120 Hz monitor) frames per second; ideally it should also record those movies in 'macro mode';
- A program that can display the movie from the above camera 'frame by frame' (VLC, even BSPlayer); and, of course
- Your watch.
The program I use can now be found at:
You will note that the 'main window' normally stays hidden and can be shown either by clicking on it or just by 'hovering' the mouse over it (there is a setting to configure that). Normally the milliseconds are not shown, but if either SHIFT or CTRL is pressed on activation, the 'millisecond mode' is activated. If both SHIFT and CTRL are pressed, the 'seconds beep' mode is also activated. See below a post with pictures from the mini-movies I am using for timing tests.
Procedure. Once you have taken a few (2-3) short (10-15 second) movies, open them in the video player and 'hunt' for the milliseconds interval in which the seconds-hand advances. Most often you will see one frame with one time displayed on the monitor and the seconds-hand in its first position, then a second frame with a later time displayed and the seconds-hand already in its new position. In that scenario, the actual tick time is the average of the times in the first and second frames, and by looking at a number of such frame pairs you can narrow the interval for statistically better precision. Even better, IMHO, is to 'catch' a frame in which the seconds-hand is actually in motion; in those cases I take the milliseconds time of that frame as the actual tick time.
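The midpoint rule just described (averaging the on-screen time of the last frame before the tick and the first frame after it) can be sketched in Python; the function name and sample times are ours, purely illustrative:

```python
def tick_time(before_ms: float, after_ms: float) -> float:
    """Estimate when the seconds-hand actually ticked, given the
    on-screen millisecond times of the last frame before the tick
    and the first frame after it (the midpoint of the two)."""
    return (before_ms + after_ms) / 2.0

# e.g. hand still at frame time 720 ms, already moved by frame time 760 ms:
print(tick_time(720, 760))  # -> 740.0
```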
At the end of this description there is a sequence of images like that described above. Note in these images that the milliseconds are at .743, and by looking at the full time and the time on the watch we can calculate that the watch is at about +24.257 seconds from the Internet atomic time (i.e., ahead).
That time is written down (I placed it initially into a TXT file and now into an Excel file), and some time later (I suggest 1 or 2 weeks for regular longer-term measurements) you take the next similar measurement. In my case it was +26.744 after two weeks; the difference was therefore +2.487 seconds over two weeks, which works out to a rate of about +64.840 seconds over one year (our seconds-per-year, or spy, value discussed earlier). All those numbers are in one of my more recent PDF files, for instance:
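The two-measurement rate calculation above can be sketched in Python using Catalin's own figures (the function name is ours, purely illustrative):

```python
def yearly_rate(offset1: float, offset2: float, days: float) -> float:
    """Rate in seconds per year (spy) from two offset measurements taken
    `days` apart (positive offsets = watch ahead of atomic time)."""
    return (offset2 - offset1) * 365.0 / days

# Catalin's figures: +24.257 s, then +26.744 s two weeks later.
print(f"{yearly_rate(24.257, 26.744, 14):+.3f} spy")  # -> +64.840 spy
```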
Accuracy of the Method. With the above equipment and procedure, I believe you can easily measure with a precision clearly better than 100 milliseconds, and possibly even down to the time of an individual frame on the monitor (though probably nothing much better than 5-10 ms). Even at 50-100 ms, the results will be more than 10 times better than what we normally get from a quick single measurement by 'the human eye' (Method 1 above).