12 October 2020

Timex "Expedition Field Chronograph 43mm" watch

Photo of watch

It's about a week since I bought one of these so I thought I'd jot down some of my first impressions of the timepiece.

My overall impression is positive. There are a couple of negative points that I'll deal with but, on the whole, I like this watch.

The build strikes me as being sturdy. There is no play in the crown or in the buttons and their operation is met with a reassuring amount of smooth friction.

The mechanism for setting the watch is slightly unconventional but it actually makes sense. Most analogue watches with a date display allow you to set the date by pulling the crown out to a first position, and the time by pulling it out to a second position. This watch does things slightly differently because it also has to let you calibrate the chronograph before you use it for the first time. With the crown in the first position, you can zero the chronograph's minutes hand (10 o'clock dial) and set the date by rotating the hours hand independently of the minutes hand. With the crown pulled out to the second position, you can zero the chronograph's seconds and tenths-of-a-second (2 o'clock dial) hands.

When pulled out to its second position, rotation of the crown sets the hours and minutes hands of the watch. The gearing is such that the hands rotate slowly enough to allow accurate positioning of the minutes hand, taking into account the seconds past the minute. For example, you can pull the crown out when the seconds hand points to 0/60, which stops the watch. You then set the minutes hand to point precisely to the next minute and, when that time is reached, push the crown back in, starting the watch at precisely that moment.

The markings on the face are accurate. There are so many watches in the same price range as the Expedition Field Chronograph (ca. £110) that have irregularly spaced markings: when the seconds hand points to 0/60, the minutes hand points directly at a given minute on one side of the dial but halfway between two minutes on the other side. Not so for this watch.

The face itself is actually quite readable, more so than the above photograph would lead you to believe. Legibility decreases with the level of ambient light and there are no phosphorescent "glow in the dark" markings on the watch face, but this is one of Timex's "Indiglo" watches. The face can be illuminated by an electroluminescent panel emitting blue light although, like any form of lighting, this drains the battery far more quickly than normal operation of the watch does.

I have to admit that I wasn't a fan of the khaki-coloured fabric strap that came with this watch. Yes, it does match the watch face but I just didn't like it. I replaced it with a black leather strap matching the body of the watch. It's an easy and inexpensive "fix".

The chronograph needs calibrating before first use so that the hands actually point to zero when the chronograph is reset. The procedure is simple enough and only needs to be done once unless the watch is subjected to unusually high G-forces. Three hands indicate the elapsed chronograph time. The chronograph's seconds hand is the long hand on the central axle (pointing to 36 in the photo), the minutes hand is on the dial at 10 o'clock, reading from 0 to 30, and the tenths-of-a-second hand is on the dial at 2 o'clock and actually has a resolution of a twentieth of a second (0.05s). This last hand does not move during chronograph operation; it is moved to point to the correct reading when a split time is displayed or the chronograph is stopped.

The seconds hand for timekeeping is on the dial at 6 o'clock. It may seem unusual to have it there rather than on the same axle as the hours and minutes hands but, once again, the decision makes practical sense. This dial is partially obscured by the hours hand between about 4:30 and 7:30 and by the minutes hand from about 20 minutes past the hour to 40 minutes past. If this were the chronograph's seconds hand, you could have to wait up to three hours to see how many seconds the chronograph is displaying. With it being the timekeeping seconds hand, you only have to wait a few seconds for it to reappear, and the chronograph's seconds hand is never obscured.

Having only owned this watch for about a week, it's far too early to judge its accuracy. This said, it's looking good. As soon as I unboxed the watch, I synchronised it to a radio-controlled clock that I have. Today, the watch is running maybe 0.2s ahead of the clock. That equates to an inaccuracy of around one second per month, or somewhere between 10 and 12 seconds a year.

Would I recommend this timepiece? It matches more or less what I expect from a watch in this price range, so yes. I was pleasantly surprised by most things that I wasn't expecting and not really bothered by the rest.

22 September 2020

I just passed an impossible test on Facebook!

We've all seen those status updates from friends stating that they got 9 out of 10 answers right to a so-called impossible test claiming that "Nobody will get more than 60% of these right!" or something similar, haven't we? Let's look at how these things work, what they do and why they do it. Before going any further, help yourself to a decent dollop of cynicism or you're going to be very disappointed in social media.

Remember the golden rule: if it looks too good to be true, then the chances are that it is not true. If there is the remotest possibility that something happening on social media could be something other than what it looks like, then it is not what it looks like and someone is trying to pull a fast one for profit in some way. Are the people setting these quizzes doing it out of the goodness of their hearts for the enrichment of the human race? Of course not. They're doing it to make money. So how do they make their money? What is their business model?

They all rely on deception to trick users into visiting their pages. One way to do this is to issue a challenge. "Nobody will get more than 60% of these questions right!"

"Really? Hold my beer..."

Another way to tempt people in is to tell them about how a friend did so well, which actually requires said friend's collaboration.

So, you end up on the quiz page and you see one question with a difficulty level that you could probably tackle blindfolded with both arms and a leg tied behind your back and a broomstick up where the sun don't shine. But you only just see it, because it's almost drowned out by a sea of advertisements, many of which are animated, distract you from the question and tie your computer's CPU up with a 90% load. You answer the question and attempt to move on to the next, but your computer takes forever because the CPU is overloaded trying to display and animate those ads. You get there eventually, though. The next question is displayed somewhere in another sea of ads that tie your machine's CPU up just as much, and you realise you don't need to remove the blindfold, the ties or the broomstick to answer it just as ably.

This goes on until you reach the end of the questionnaire, by which time you've been subjected to approximately 2000 ads, your computer is running out of memory and you're wondering where the catch was. You are given your score, which is always at least 80%, but if it isn't 100% then you might be offered the option to re-sit the test (and get shown another 2000 ads). Either way, you also get the option to post the result on Facebook, which is the "collaboration" required by these systems to tempt your friends into taking the test.

What really happened here?

Firstly, the site owners were paid by the advertisers to show you as many ads as humanly possible.

Secondly, high on your success, you wanted to share your achievement on social media. To this end, more often than not, you can deliberately answer all the questions incorrectly and you will still get a score of 80% or more because the aim here is not to test your knowledge in any way, shape or form (the questions wouldn't be that easy if it was) but to make you want to share your achievement. Why? Because it will encourage others to take the test and get bombarded with ads that the site owners are paid to show them.

Thirdly, when you granted the website permission to post your achievement on Facebook, you gave them your e-mail address as part of the authorisation process. Think about that. You gave your e-mail address to an organisation that uses deception to make money out of advertising.

Feel free to remove the blindfold, the ties and the broomstick now, and please burn the broomstick...

21 September 2020

Using random numbers to compute the value of π

This is a little foray into number theory and presents one method of calculating the value of \(\pi\). It involves using a program to perform iterative calculations, so in order to minimise the impact of compounded rounding errors, it makes sense to use a computing device that operates with sufficient precision. I chose the SwissMicros DM42 for this because it uses Thomas Okken's Free42 Decimal under the hood, which offers around 34 digits of precision. The DM42 is also extremely fast, particularly so if connected to a USB port while operating because it more than triples the CPU clock compared to battery-powered operation.

Back in 1735, the Swiss mathematician, astronomer (and a few other things besides) Leonhard Euler stated, and proved in 1741, that as the number \(n\) tends towards \(\infty\), the probability that two natural numbers \(a\) and \(b\) less than or equal to \(n\) are coprime, i.e. that \(a\) and \(b\) share no common divisor other than \(1\), tends towards \(\frac 6{\pi^2} \approx 0.607927\) (Wikipedia).

So, what's to stop us from writing a program to sample pairs of natural numbers, count how many of them are coprime and examine the proportion of coprime pairs to the total number of pairs? I'll call this proportion that we find experimentally \(p\).

\(p\) should reflect the probability demonstrated by Euler and should therefore be close to our value of \(\frac 6{\pi^2}\). All we have to do then is calculate \(\sqrt {\frac 6p}\) and that will give us our approximation of \(\pi\).
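To make the idea concrete before turning to the DM42 program, here's a minimal sketch of the same experiment in Python (my own illustration, not a translation of the calculator program; the values of \(n\), the sample size and the seed are arbitrary):

```python
import math
import random

def estimate_pi(n=1000, samples=20000, seed=42):
    """Estimate pi from the proportion p of coprime pairs among
    `samples` pairs drawn uniformly at random from 1..n."""
    rng = random.Random(seed)
    coprime = sum(
        1
        for _ in range(samples)
        if math.gcd(rng.randint(1, n), rng.randint(1, n)) == 1
    )
    p = coprime / samples    # experimental proportion of coprime pairs
    return math.sqrt(6 / p)  # Euler: p ~ 6/pi^2, so pi ~ sqrt(6/p)

print(estimate_pi())  # lands close to 3.14, give or take sampling noise
```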

A few caveats first. Firstly, this only gets close to reality for values of \(n\) (the highest random natural number that we work with) that approach \(\infty\). However, we get an error of only about 0.3% if we stick to \(n=200\) or about 0.06% for \(n=1000\). Why? Because the probability of choosing two coprimes from the first \(n\) natural numbers is: $$\frac 1{\sum_{k=1}^n \frac 1{k^2}}$$ Only if \(n=\infty\) do we get: $$\frac 1{\sum_{k=1}^\infty \frac 1{k^2}} = \frac 1{\zeta(2)} = \frac 6{\pi^2}$$ Ref: "Basel" problem, Riemann Zeta function.
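The 0.3% and 0.06% figures quoted above are easy to verify numerically; this little check (mine, not part of the calculator program) computes the relative difference between the truncated sum and \(6/\pi^2\):

```python
import math

def finite_n_error(n):
    """Relative error, in percent, between 1/sum_{k=1}^n 1/k^2
    and the limiting value 6/pi^2."""
    p_n = 1 / sum(1 / k**2 for k in range(1, n + 1))
    p_inf = 6 / math.pi**2
    return abs(p_n - p_inf) / p_inf * 100

print(finite_n_error(200))   # roughly 0.3
print(finite_n_error(1000))  # roughly 0.06
```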

It's not going to take any longer to generate random numbers up to 1000 than it is to generate random numbers up to 200, so I'll stick with 1000 for the purposes of this experiment.

Secondly, for the sample of data points to be significant, it must be as large as possible. The DM42 has plenty of computing power under the hood, especially if hooked up to USB power, so sample sizes of 1000, 10000 and even 100000 are not a problem. The higher this number, the better, but the longer it'll take to run.

Thirdly, the random number generator in Free42 is already pretty darn good but it'll never be perfect, so that will introduce a few errors as well.

Given the three sources of error above, I'll be very happy if I get results within one percent, i.e. if the experiment gives me a value of \(\pi\) somewhere between, say, 3.1 and 3.2.

Now, that really great screen on the DM42 should not be allowed to go to waste while this is running, so the program I wrote includes instructions to plot the value of \(p\) computed after each iteration of the loop.

First, here's the initial setup. It displays its current parameters and provides a menu for the user to specify the parameters of the experiment. \(n\) gets stored in R00 and the sample size in R01.

00 { 267-Byte Prgm }
01▸LBL "PI2"
02 ALL
03 6
04 1
07 6
08 PI
09 X↑2
10 ÷
11 STO 02
13 CF 29
14▸LBL 10
15 "HIGHEST: "
16 ARCL 00
17 ├"[LF]SAMP SZ: "
18 ARCL 01
21 "HIGH"
22 KEY 1 GTO 21
23 "SAMP"
24 KEY 2 GTO 22
25 KEY 9 GTO 12
26 RCL 00
27 100
28 X>Y?
29 GTO 11
30 RCL 01
31 1ᴇ3
32 X>Y?
33 GTO 11
34 "GO→"
35 KEY 6 GTO 30
36▸LBL 11
37 MENU
38 STOP
39 GTO 10
40▸LBL 12
42 RTN
43▸LBL 21
44 ABS
45 IP
46 STO 00
47 GTO 10
48▸LBL 22
49 ABS
50 IP
51 STO 01
52 GTO 10

No doubt you'll notice that the "GO→" menu entry is only displayed if the highest random number to use, \(n\), is at least 100 and if the sample size is at least 1000. But if it's there and you hit it, we go off to LBL 30 and start the run.

53▸LBL 30
54 RCL 01
55 STO 03
56 3
57 STO "GrMod"
59 -120
60 1

This initialises the counter for the number of iterations to perform, sets the display to the highest resolution of 400x240 and draws a horizontal line halfway down the screen. This line shows the level that we want the proportion \(p\) to converge on during the course of the test run.

The main loop runs here:

62▸LBL 06
63 XEQ 04
64 XEQ 04
65 XEQ 01
66 1
67 X≠Y?
68 GTO 07
69 STO+ 04
70▸LBL 07
71 RCL 01
72 RCL- 03
73 1
74 +
75 RCL ST X
76 RCL 04
77 X<>Y
78 ÷
79 STO 05
80 RCL- 02
81 20
82 ×
83 120
84 ×
85 RCL+ ST L
86 X<0?
87 GTO 08
88 X<>Y
89 RCL÷ 01
90 400
91 ×
92 PIXEL
93▸LBL 08
94 DSE 03
95 GTO 06

The subroutine at LBL 04 just generates a random number from 1 to \(n\).

The subroutine at LBL 01 computes the greatest common divisor of the numbers in X and Y using the Euclidean algorithm. If that number is \(1\) then we know that the two random numbers generated were coprime and we increment the number of coprime pairs found. We then compute the new \(p\), apply some scaling, plot the point on the graph and go back for the next iteration.
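For reference, the same logic in Python (the function names are mine; the RPN subroutine works directly on the stack):

```python
def gcd(a, b):
    """Greatest common divisor via the Euclidean algorithm:
    repeatedly replace (a, b) with (b, a mod b) until b is 0."""
    while b != 0:
        a, b = b, a % b
    return a

def is_coprime(a, b):
    # Two numbers are coprime iff their gcd is 1.
    return gcd(a, b) == 1
```

So `is_coprime(8, 9)` is true, while `is_coprime(6, 9)` is not, since 6 and 9 share the divisor 3.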

When everything is finished, we compute \(\pi\) from the proportion \(p\) that we have and then calculate the error between that and the real value of \(\pi\) as a percentage. Our \(\pi\) computed from \(p\) is left in Y and the error percentage in X.

96 RCL 05
97 1/X
98 6
99 ×
100 SQRT
101 PI
102 RCL- ST Y
103 RCL÷ ST L
104 ABS
105 100
106 ×
107 FIX 04
108 GTO 12
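In Python, this final step boils down to the following (the value of \(p\) used in the example is purely illustrative):

```python
import math

def pi_and_error(p):
    """Turn the measured proportion p into a pi estimate and the
    percentage error relative to the true value of pi."""
    pi_est = math.sqrt(6 / p)
    err_pct = abs(math.pi - pi_est) / math.pi * 100
    return pi_est, err_pct

est, err = pi_and_error(0.6048)  # 0.6048 is an illustrative proportion
```

A measured proportion of 0.6048, for instance, yields an estimate of about 3.1497 with an error of about 0.26%.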

This is the subroutine that calculates the gcd of X and Y:

109▸LBL 01
110 X>Y?
111 X<>Y
112▸LBL 02
113 MOD
114 X=0?
115 GTO 03
116 LASTX
117 X<>Y
118 GTO 02
119▸LBL 03
120 LASTX
121 RTN

And finally, this generates the random number from 1 to \(n\):

122▸LBL 04
123 RAN
124 RCL× 00
125 IP
126 1
127 +
128 END
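The Python equivalent of that construction (scale a uniform number from [0, 1) by \(n\), take the integer part, add one):

```python
import random

def ran_1_to_n(n):
    """Random integer from 1 to n inclusive, built the same way as
    LBL 04: random fraction, multiply by n, integer part, plus 1."""
    return int(random.random() * n) + 1

draws = [ran_1_to_n(10) for _ in range(10000)]
# every draw falls in the range 1..10
```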

This graph was generated using 5000 pairs of random numbers from 1 to 1000:

The computation gave me a value of \(\pi\approx 3.149704\), i.e. an error of roughly 0.26%.