

Originally published in Queue vol. 8, no. 4



Yonatan Sompolinsky, Aviv Zohar - Bitcoin's Underlying Incentives
The unseen economic forces that govern the Bitcoin protocol

Antony Alappatt - Network Applications Are Interactive
The network era requires new models, with interactions instead of algorithms.

Jacob Loveless - Cache Me If You Can
Building a decentralized web-delivery model

Theo Schlossnagle - Time, but Faster
A computing adventure about time through the looking glass


Comments (newest first)

Tim Shoppa | Sat, 08 May 2010 18:47:55 UTC

Interesting in its concentration on the hundred-microsecond level over a small LAN.

Traditionally NTP has concerned itself with synchronization across the internet at large, with latencies measured in tens of milliseconds, and with a PLL algorithm that not only corrects the absolute time but also corrects for the rate of drift through potential connectivity or server outages.

I certainly believe that the principles of NTP should apply to either scale, but some context as to the vast difference between the two scales, and the actual utility of time accurate to the tens of microseconds, would help explain why this article has to be written at all.

Geoff W | Sun, 02 May 2010 20:58:50 UTC

In the Windows world, it is even worse, as the built-in clock has a resolution of tens of milliseconds. In order to get better accuracy you have to use multimedia calls that have problems of their own when you switch processors (AMD) or use multiple cores.

The real solution is the relatively new PTP (IEEE 1588) protocol. It has microsecond accuracy using the clock on the PHY itself and attempts to account for network latency.

Eric Gallimore | Sun, 02 May 2010 17:44:41 UTC

The assertion that a typical hardware oscillator in COTS computing hardware only drifts by 0.1 PPM likely assumes no temperature variation. If temperature changes, it is untrue.

To quote the NTP FAQ: "A typical quartz is expected to drift about 1 PPM per °C."

This can be verified by examining the datasheets of common crystal oscillators.

Temperature-controlled crystal oscillators are sometimes used in systems that require more stability. (The Maxim DS3231 series, for example, guarantees operation to < ±2 PPM from 0 to 70°C.) However, to the best of my knowledge, these are not often used in personal computers.
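To put the quoted drift figures in perspective, here is a back-of-the-envelope sketch (my own illustration, not from the comment) of how much wall-clock error a given drift in PPM accumulates over a day:

```python
# Back-of-the-envelope: wall-clock error accumulated from oscillator drift.
# Uses the ~1 PPM per degree C figure quoted from the NTP FAQ above;
# the scenario (a 10 degree C temperature swing) is an assumed example.

SECONDS_PER_DAY = 86_400

def drift_per_day_ms(ppm: float) -> float:
    """Clock error (in milliseconds) accumulated over one day
    at a constant frequency offset of `ppm` parts per million."""
    return ppm * 1e-6 * SECONDS_PER_DAY * 1000

# A 10 degree C swing at ~1 PPM/degree C gives roughly 864 ms of error per day,
# while the 0.1 PPM figure gives under 10 ms per day.
print(drift_per_day_ms(10))
print(drift_per_day_ms(0.1))
```

Uncorrected, even the optimistic 0.1 PPM figure blows past the hundred-microsecond level within minutes, which is why continuous discipline of the clock rate (NTP's PLL, or PTP) matters at that scale.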

Ehud Gavron | Sun, 02 May 2010 14:03:35 UTC

This is a well-written article, with sufficient science to be useful and sufficient plain-English analysis to explain it. I'm going to recommend it to my system administration colleagues.

Ehud Tucson AZ USA


© 2018 ACM, Inc. All Rights Reserved.