(newest first)

  • Yevgeniy Tovshteyn | Thu, 06 Jul 2017 20:13:05 UTC

    Since you mentioned the Jewish religion: the Hebrew calendar has 5 versions of leap years (in other words, 6 possible year lengths), which makes it flexible enough to absorb any accumulated drift from the 19-year lunisolar cycle.
  • Bob Frankston | Mon, 06 Jun 2011 19:34:49 UTC

    How many seconds in a minute? If you answered 60 you are wrong.
    Since we don't keep track of which minute, we can't answer that question. That's the problem with leap seconds: it breaks the contract we've made with minutes, hours and days. We don't have such a contract with years, and we can't convert intervals to years without knowing which interval. And we rarely track which interval. This is a fundamental problem of representation, not a problem of timekeeping or timescale.
    The solution is to keep a separate representation, a correction factor, for those who care. We can then unwind the leap second.
    At some point we'll need to address changes in the rotational speed for the earth but for now let's make sure that 60*minutes is correct without knowing which minute.
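    The representation problem Frankston describes can be sketched in a few lines: POSIX timestamps pretend every minute has 60 seconds, so an interval that straddles a leap second is under-counted unless a separate correction table is consulted. The leap table below is an illustrative excerpt, not an authoritative list.

```python
# Sketch of the representation problem: POSIX time_t pretends every minute
# has 60 seconds, so the elapsed time between two UTC labels that straddle
# a leap second is under-counted by one second unless corrected.

import calendar

# UTC instants at which a positive leap second was inserted (end of day),
# expressed as POSIX timestamps -- illustrative entries only.
LEAP_INSERTIONS = [
    calendar.timegm((2008, 12, 31, 23, 59, 59)) + 1,
    calendar.timegm((2012, 6, 30, 23, 59, 59)) + 1,
]

def true_elapsed(t0, t1):
    """Elapsed SI seconds between two POSIX timestamps, adding back the
    leap seconds that the POSIX encoding silently swallowed."""
    leaps = sum(1 for t in LEAP_INSERTIONS if t0 < t <= t1)
    return (t1 - t0) + leaps

before = calendar.timegm((2012, 6, 30, 23, 59, 30))
after  = calendar.timegm((2012, 7, 1, 0, 0, 30))
print(after - before)               # naive difference: 60
print(true_elapsed(before, after))  # with correction: 61
```

    This is exactly the "separate representation, a correction factor, for those who care" idea: the timestamps themselves stay naive, and only interval arithmetic consults the table.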
  • WB6BNQ | Thu, 26 May 2011 01:00:39 UTC

    Wait just a damn minute!  We already have a name for the "NON" leap second version of a time scale.  It is called "TAI", meaning Temps Atomique International (International Atomic Time).  This is the unmitigated and unmutilated time interval event defined by the accepted (until we change it) Cesium oscillation.  As that abbreviation (TAI) does not roll off the tongue very easily, how about calling it UTA, meaning Universal Time Atomic?  That rolls off the tongue much more smoothly, has a similar sound to UTC and thus would be less stressful to the general populace.  However, the general populace may raise a question of why we would want to back up to a presumed previously discarded standard.  Seeing as how c comes after a and denotes c as being a new revised version in common thought, it would be an acceptable question from the unwashed masses.  Disclaimer: I only play time keeper on TV; my day job is a retired person.
  • Poul-Henning Kamp | Mon, 02 May 2011 13:47:00 UTC

    John, a lot of ATC systems are really old technology still, but many upgrades are in the pipeline.  I have it from one person on watch in CPH during the last leap second that "it showed us (alarm-)lights we didn't know we had".  But ask in Tokyo, Hong Kong or SFO, they get leap seconds during the day.
  • John | Mon, 02 May 2011 12:42:05 UTC

    Interesting that you assert that ATC operators have so little confidence in the reliability of their software and systems.  However, as these leap seconds have coincided with year-end rollovers and in one instance Y2K, this may be prudent.  ATC is very cautious.  Has anyone ever noticed anything awry?
  • Poul-Henning Kamp | Sat, 30 Apr 2011 22:49:11 UTC

    Mike,  You clearly have not considered that leap-seconds happen at 23:59:60 UTC time, not local time.  Just because we party or sleep through them here in Europe doesn't mean that the rest of the world does.  In Tokyo they happen in the middle of the morning and in San Francisco in late afternoon.
  • Mike Zorn | Sat, 30 Apr 2011 01:13:52 UTC

    "A one-second hiccup in input data from the radar is not trivial in a tightly packed airspace around a major airport."
    Easily solved:  No aircraft in the air between +/- 10 minutes of 12:00:00.000
    Another simple fix:  save up leap seconds until the Earth is 30 seconds off from "real time".  (Since only machines seem to worry about "real time", the inconvenience is minor. People, you'll remember, got on quite nicely for a few hundred years before Pope Gregory XIII stirred things up and convinced people that if they just listened closely, Easter would not be falling in December.  Unfortunately, people whose debt payments fell due during the missing week were not amused.)
    Mischief may occur if my network is a second or two off from yours, so I appeal again to the "aircraft principle".  Everybody gets a New Year's  holiday.
    "... I would miss leap seconds. They are quaint and interesting, and their present rate of one every couple of years makes for a wonderful chance to inspire young nerds ..."
    That's an excellent point.  The big problem is, most people are in their sleep cycle during these momentous events.  It would be a boon to both society and science if these leap seconds were to occur at noon.  Local noon.  After a day, the world would be back in sync.  And nobody does anything significant on New Year's Day, anyway.  A heresy may arise in insisting that this be done in June, not December, but this can be easily dealt with, in the manner that Middle Age heresies were dealt with.
  • Matt S | Thu, 28 Apr 2011 15:20:39 UTC

    You may not be aware of this, but time synchronization predates computer networks and even computers. At the turn of the 19th Century (20th? Anyway, around 1900) synchronizing clocks on train systems was a big deal. Figuring out how to get the clocks in Marseille to show the same time as Paris was important. And it was one of the factors that set off Mr Einstein to thinking about the relationship between the speed of light and time itself. I suspect we will have no such scientific revolutions stemming from handling leap seconds.
  • Clive Page | Thu, 28 Apr 2011 10:16:19 UTC

    A small correction to my earlier posting: having checked, my recollection that a spacecraft launch was postponed to avoid the leap second issue seems to be faulty, as the dates don't match.  I do vaguely recall something being postponed because of this, but maybe just a software upload to an operating satellite.  Sorry about that. 
    I agree with Rob Seaman that the arguments based on clocks on other bodies are extremely weak.  I think the decision ought to be made on a global cost-benefit analysis: are the costs of having leap seconds more than the costs of abandoning the current system?  I really don't know for sure, and there seem to be few hard facts available.  From what little I know and from assertions made e.g. by bodies involved in telecommunications, air traffic control, etc., my guess is that the cost of leap seconds is significantly higher than the cost of abandoning them.  Of course without them solar and civil time will eventually diverge, but that is a problem we can safely leave to our descendants, in my opinion.  I doubt if they will be all that put out by a leap minute in say 60 years time, or even a leap hour in a few thousand years.
  • Rob Seaman | Wed, 27 Apr 2011 19:22:11 UTC

    Checked back after a week to find that you guys are still chatting away.  ("Golly!", to quote Gomer Pyle.)
    On the contrary, "certain people" with larger ground-based apertures point out that UTC should remain a kind of Universal Time like the name says.  Changing this fact will certainly cost astronomers a lot of time and money.  Leap seconds are a means to an end, by all means discuss other ways to meet the UTC project requirements.
    But if your idea of the strongest argument against leap seconds is clocks on other rocks, this has been refuted time and again.  The Martian rover missions keep local Martian *solar* time precisely because even robots respond to diurnal timekeeping requirements.  Clocks-on-rocks may not be one of the strongest arguments, but it is one of the oldest.  By all means attach an appendix to the ITU proposal discussing how eliminating leap seconds permits keeping a "rough link between solar and clock time" on multiple planets.  Instead, the ITU simply wants to wish the problem away.
    If there is a secretive assembly here, it is the International Telecommunications Union.  Between Curie, Pasteur, Laplace and Lavoisier, French speaking scientists have done pretty well for themselves :-)
  • Poul-Henning Kamp | Wed, 27 Apr 2011 12:35:03 UTC

    Clive, thanks for your insight.  I too am somewhat sceptical about the claim that leap-seconds are crucial for ground based telescopes, but certain people with bigger aperture than my 125mm Meade claim so.  In my mind the strongest argument for totally removing the leap second is the prospect of human settlements on other rocks than this one, and the fact that it would make the sun-synchronization of civil timescales a problem for duly elected governments, accountable to their population, rather than a secretive assembly of (mostly French-speaking) scientists.
  • Clive Page | Tue, 26 Apr 2011 15:55:18 UTC

    I think you have convinced me, Poul-Henning, that the leap second should simply be abolished, and that your proposed compromise isn't really necessary.  I used to be in favour of their preservation, after all I'm an astronomer and I've written code to cope with leap seconds more than once, and if I can do it, anyone can surely.  
    But it's now clear to me that a huge number of computer systems are badly programmed, and that as a result leap seconds cause widespread inconvenience and maybe, in things like air traffic control, serious risk.  In space observatories we find that leap seconds are a serious nuisance, since it takes a lot of human effort to check and re-synchronise the times in the clocks at the ground stations and on each satellite.  The launch of at least one space observatory was delayed by a few days until after a leap second had occurred, simply to reduce the risk of serious effects.   As far as I can see, most of those who want leap seconds preserved are from ground-based astronomical observatories, who now make up only a tiny minority of users of accurate time: I feel sure that those who maintain their software will be clever enough to cope with the absence of leap seconds before the effects become serious.  For the world as a whole, I'm convinced that leap seconds do considerably more harm than good.
    In the long term, of course, we need to keep some sort of rough link between solar and clock time, but most people seem to adapt readily enough to gross distortions of it such as daylight-saving time, so solving this will become urgent only a few thousand years hence.  To my mind, this is a problem even less urgent than global warming.
    I may be the only person reading this who attended the IAU meeting in Brighton in 1970 at which the leap second system was defined.  I wandered in to the session on time scales by accident, and was fascinated to find that a revolution was in progress, with lots of votes.  Not only that, many of the contributions were given in French, because of the presence of so many from the Bureau International de l'Heure.  It was the first and the last time that I'd heard a language other than English used at a major scientific conference.  The major dispute, as far as I can remember, was not whether leap seconds were necessary, but what maximum error could be tolerated between UTC and solar time before a leap second had to be introduced.
  • Poul-henning kamp | Tue, 26 Apr 2011 06:53:46 UTC

    With respect to ATC, the SOP here in Europe seems to be to suspend take-offs and landings, ensure good in-air separation and announce that "all planes are on their own until further notice" and "wait until the lightshow ends", as one air traffic controller expressed it.  This is a workable solution because leap seconds happen at UTC midnight, when air traffic in Europe is very light.  I have not found out what they do in Asia when they get a leap second in rush hour on June 30th.  I'm not sure they know either:  The last June 30th leap second was in 1997.
  • John | Mon, 25 Apr 2011 21:39:24 UTC

    The Toshiba power glitch incident tells us nothing about split-second operations.  It tells us not to interrupt the power supply to a semiconductor fab.  Certainly some processes are timed to milliseconds, but semiconductor tools are less concerned about absolute time (i.e. UTC) than what the machines next to them in their cluster are doing (relative time).  Refer to SEMI E148, Specification for Time Synchronization and Definition of the TS-Clock Object.  Leap seconds are addressed (section 6.1) in just three paragraphs of the 44-page document.  Note (in the Appendices) that the requirement for absolute accuracy is 5 seconds (i.e. much longer than a leap second) while relative accuracy is specified in milliseconds.  Fabs are already designed to take leap seconds in their stride.  Can you really not find some more convincing examples to support your case?
  • John | Mon, 25 Apr 2011 21:36:26 UTC

    Really?  Are you telling us that ATC systems are taken offline and all aircraft grounded for leap second events?  That fabs, chemical plants and refineries stop processing?  That the ISS and hundreds of satellites go offline?  That hospitals turn off life support systems and medical information systems?  That power stations and transmission SCADA systems go offline?  Broadcasting systems?  Telephony systems?  Credit card payment and financial systems?  Train signalling systems?  Vehicle engine management and navigation systems pause as we park by the roadside to let a leap second pass?  No, these systems (perhaps naively) are kept running.  Your arguments are crumbling before our eyes!  
  • Poul-Henning Kamp | Sun, 24 Apr 2011 18:30:59 UTC

    The reason we don't have any documented leap-second incidents yet is that people who are aware that their process is vulnerable tend to take it entirely offline during that time window.  Apart from being an expensive work-around, there is no way to ensure that the people who should be aware that their process is vulnerable actually are.  I used the Toshiba incident to document that in modern manufacturing a second can be a long time, and I stand by that example.  I don't think prudence in engineering means waiting for the first confirmed kill before we do something about leap seconds.
  • John | Sat, 23 Apr 2011 14:22:37 UTC

    So you concede that the Toshiba fab power outage incident is a specious example with regard to leap seconds.
    Toshiba's Yokkaichi Fab 3 and Fab 4 plants are like other 300mm fabs built around 2003-7.  The main problems would have been caused not by timing synchronisation issues but by the interruption of power to the tools mid process.
    The only remarkable thing is that the power conditioning and UPS did not prevent a short glitch having any effect.  There is more to this than has been reported, but it is not an issue about leap seconds or timing synchronisation.
    Process plant has to be designed to be tolerant of timing variations, because the clocks in local controllers tend to drift (by much more than the rotation of the earth) and must be synchronised periodically according to carefully considered master-slave protocols.  If the plant has been designed competently, these adjustments do not happen in the middle of a time critical process step.
    Engineers avoid building complex continuous production facilities that rely on split-second end-to-end timing, precisely because of the real world difficulties of ensuring synchronisation.  The challenges of, say, start-up or product changeover are far greater than making a few small time corrections.
    Can you support your arguments with a concrete example of any incident caused by leap second (or any other timing synchronisation) adjustments?  
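    The periodic, non-disruptive adjustment John describes can be sketched as clock slewing (the technique ntpd uses for small offsets): instead of stepping a drifted controller clock, the correction is spread over many ticks, each small enough not to disturb a time-critical step. The function name and rate limit below are illustrative, not taken from any plant standard.

```python
# Sketch of slewing a drifted clock: return a series of per-tick
# adjustments, each bounded in magnitude, whose sum equals the offset.

def slew(offset_s, tick_s=1.0, max_rate=0.0005):
    """Split offset_s (seconds) into per-tick adjustments, each at most
    max_rate * tick_s in magnitude, so the clock never jumps."""
    step = max_rate * tick_s
    sign = 1.0 if offset_s >= 0 else -1.0
    remaining = abs(offset_s)
    adjustments = []
    while remaining > 0:
        adj = min(step, remaining)       # never exceed the slew rate
        adjustments.append(sign * adj)
        remaining -= adj
    return adjustments

# Correcting a 10 ms drift at 0.5 ms per one-second tick takes about
# 20 ticks, with no single adjustment visible to a running process.
ticks = slew(0.010)
```

    A master-slave protocol of the kind John mentions would schedule these small corrections between process steps rather than during them.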
  • Poul-Henning Kamp | Sat, 23 Apr 2011 08:06:09 UTC

    John, I never claimed that Toshiba's problems were due to a leap second; I used them as an example to point out how streamlined and synchronized modern production facilities often are.  (I don't think your description of a semiconductor production facility is entirely up to date with such high-volume fabs as Toshiba's, but that is a separate matter.)  Toshiba's incident is unique only in that they told us about it.
  • John | Thu, 21 Apr 2011 14:29:57 UTC

    I checked the story about the Toshiba fab power outage.  It is the only concrete example of a loss due to leap second or time synchronisation issues that you cite.
    At 5:21 AM on December 8, 2010, a momentary drop in voltage in the power supply from Chubu Electric Power Co caused a stoppage of part of Toshiba's Yokkaichi NAND fabrication facility.  A refinery run by Cosmo Oil Co at Yokkaichi also had operations disrupted by the same outage.  The cause of the fall in voltage was not immediately clear.
    Contrary to your account, which states that 20 percent of the products scheduled to ship in January and February 2011 had to be scrapped, all the reports I can find (including Reuters and Toshiba's press release) refer to "up to 20%" of production.  "Up to" is clearly not the same as "equals".
    In fact, Toshiba quickly reported that production at the plant was close to 100% normal operation by December 10.  I can find no further reports of lost production in the company's accounts or press releases, so I presume that the losses were small.  Events have since been overshadowed by the earthquake and tsunami.
    Semiconductors are not produced in complex continuous production facilities, but by discrete tools linked by buffers and automated material handling systems.  Generally each process will have its own standalone controller, with supervisory systems controlling the scheduling of partially finished wafers between process steps.  While many manufacturing processes are time critical, these are normally timed locally by the controllers on the individual machines.  While the sequencing and location of wafers between processes is critical, generally the timing is not.
    A power outage leaves all the wafers that are mid process in an unknown condition.  It takes some effort to sort out the state of each wafer, and whether it can be used or must be scrapped.  Hence the precautionary announcements to stock markets.  
    This incident has nothing to do with leap seconds or time synchronisation.  It simply shows the effect of interrupting the power supply to a semiconductor fab mid process.
    Exaggerating the effects of an unrelated power glitch incident is the brown M&M in your article.
  • Rob Seaman | Wed, 20 Apr 2011 23:45:04 UTC

    This is your column, so I suppose it's ok for you to claim the last word.  However, after stating not once, but twice, the wish to avoid rehashing the issues you yourself are rehashing tired old talking points that have been "already refuted".  Interested parties can follow the several links embedded in the comments.  Or here's another:
  • Poul-Henning Kamp | Thu, 14 Apr 2011 07:49:01 UTC

    Steve, let's not try to rehash all the old, tired and already refuted arguments here.  The important letter in UTC is 'C' -- Coordinated.  All the contracts, treaties and technical specifications that mandate UTC have chosen it because it is the timescale everybody can agree shows the same time.  The frequency, scheduling horizon or absence of leap seconds does not change this property.  Quite the contrary:  Right now a very large fraction of the world's computers loses time-coordination every time a leap second happens.  That is more or less exactly the problem that needs to be solved.  The "you must change the name" argument is just an attempt to derail the discussion.
  • Steve Allen | Wed, 13 Apr 2011 06:05:40 UTC

    One thing that could make the interpretation of computer time scales more confusing than it already is would be to change the definition while keeping the name UTC.  The practicality is that the machinery does not care about the name humans give the broadcast time signals.  Not so the humans, implementing algorithms, taught by professors, and using old books, who will take a generation before they all notice a change.  During that time more and more systems will find that they are using an input which is not what the math inside their code was expecting.  And the idea to change the name of the broadcasts is not my "spanner in the works."  All this was pointed out to the ITU-R when they called a colloquium of international experts to Torino in 2003.  The result said "If you change the definition then you should change the name."  This was presented to the ITU-R in document R03-WP7A-C-0011, but every subsequent document has denied that result.  The part the colloquium did not recognize was that POSIX requires that systems be able to handle leap seconds, and that the "right" zoneinfo files already use a scheme like that, so a technological compromise is already implemented and tested.
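    The "right" zoneinfo scheme Allen refers to can be caricatured in a few lines: let the system count seconds without interruption, and handle the leap in the label-formatting layer, the same place time zones already live. This sketch hard-codes a single leap second (2012-06-30), ignores earlier leaps, and uses an invented function name; real right/ files carry the whole table.

```python
# Toy version of the "right" zoneinfo idea: the second count never pauses
# or repeats; the leap second only exists in the formatted label.

import time

# POSIX timestamp of 2012-07-01 00:00:00 UTC; in this toy scheme the
# continuous count assigns that number to the inserted second 23:59:60.
LEAP_AT = 1341100800

def label(continuous):
    """Format a continuous second count as a UTC label, emitting
    23:59:60 for the inserted leap second."""
    if continuous == LEAP_AT:          # the inserted second itself
        t = time.gmtime(LEAP_AT - 1)   # borrow the previous second's date
        return time.strftime("%Y-%m-%d 23:59:60", t)
    if continuous > LEAP_AT:
        continuous -= 1                # labels lag the count by one second
    return time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(continuous))
```

    Three consecutive counts then label as 23:59:59, 23:59:60, 00:00:00, while the count itself, and any interval arithmetic on it, never sees a discontinuity.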
  • Warner Losh | Sat, 09 Apr 2011 22:51:03 UTC

    Yet another compromise would be to announce the leap second 10 years in advance.  To do this, one would have to accept that the model might not match reality and that there'd be more than 1 s delta between UTC and UT1 at times, but in the long run it wouldn't grow without bound.  The 10-year announcement horizon would allow embedded devices to have the leap seconds encoded into them for the useful life of many devices.  It would also make the need to test leap seconds visible to management years in advance instead of the 6-month surprise we have now.  It would also allow people to simulate them more easily: since everybody knows when there will be leap seconds, setting the time ahead would result in everybody applying them (and those that don't would stand out like a sore thumb).
    However, even with that compromise in mind, I think eliminating them would be better. I'm doubtful that any effort to define a parallel time scale to UTC will ever get traction.  Eliminating leap seconds would need to happen in the official time scale, and while one can quibble about the name, it would replace UTC for nearly all applications.  Those applications that care about Earth orientation would need to get more accurate data from somewhere, since the difference could now exceed 1 s, but changes there are in the fine tradition of those needing a feature paying the freight for it, rather than having everybody else subsidize them.
    Note to SouLShadow: such a unit already exists.  It is called the SI second.  The mean solar second cannot have a non-observational definition and is constantly changing, making it unsuitable for scientific work outside of earth orientation aware applications.
  • Rob Seaman | Sat, 09 Apr 2011 21:44:33 UTC

    For those wishing to hash, rather than to rehash, the leapsecs mailing list is at, with the list archives before 2007 available from
  • Poul-Henning Kamp | Sat, 09 Apr 2011 21:20:56 UTC

    As Rob has mentioned, we have had this discussion for the better part of a decade on the leapsecs mailing list, and I don't think there is any good reason to rehash 10 years of disagreements here, so I won't.
  • Rob Seaman | Sat, 09 Apr 2011 18:57:17 UTC

    It is simply a fact that time-of-day and interval timekeeping are two different things.  In "You're doing it wrong" you argued quite persuasively that relying on an oversimplified conceptual model (see your Fig. 7) will inevitably result in mistaken deductions about the resulting systems.  Pretending time-of-day is a free parameter is similarly incorrect in the real world.  Which is to say: "you're doing it wrong, Poul-Henning".
  • Poul-Henning Kamp | Sat, 09 Apr 2011 18:12:23 UTC

    Why would it be more reasonable to force 99.9999% of the users to switch to a new timescale, rather than inconvenience the 0.0001% who mistakenly use a timescale as an Earth-rotation approximation?
  • Bart Smit | Sat, 09 Apr 2011 16:36:39 UTC

    What is being ignored in the article is the issue that, while it makes perfect sense to want to get rid of leap seconds, it might not be so reasonable to try to accomplish this by redefining an existing time scale (UTC) instead of switching to one that lacks leap seconds.
  • Rob Seaman | Fri, 08 Apr 2011 23:24:09 UTC

    Benjamin Franklin began each day with "Rise, wash, and address Powerful Goodness; contrive day's business and take the resolution of the day; prosecute the present study; and breakfast."  Mr. Kamp and I have been discussing issues surrounding leap seconds for more than a decade.  I will turn a blind eye to the many foibles in Kamp's description of the issues here; I applaud Warner Losh's "Possible Compromise" - current art could already increase the scheduling horizon by a factor of six without otherwise changing the UTC standard; and I will instead address "Powerful Goodness".
    Leap seconds are a means to an end.  Mr. Kamp refers to that end - to synchronize two different kinds of clocks.  The fundamental flaw in the ITU's "conspiracy" (Kamp's word, not mine) is to seek to confuse those two clocks.  Interval time, as kept by atomic clocks, is simply different than time-of-day - as observed by Franklin and the nearly seven billion humans alive today.  Time-of-day is "Mean Solar Time" - that is, "universal time" or "UT".  That the adjective "universal" doesn't take into account the Vulcan home world is not a strong argument for undermining UT.  
    What is time-of-day?  It is simply the actual sidereal rate of the Earth - the planet's rotation relative to the stars - adjusted by one (the integer "1") day per year as we lap the Sun.  By eliminating leap seconds the ITU is seeking to redefine time-of-day.  Is this a big deal?  Depends on one's point of view, perhaps, but it certainly isn't under the jurisdiction of the International "Radiocommunications" Union.  Time-of-day affects vast numbers of systems and stakeholders and deserves a broader hearing and an in-depth analysis of risks.
    ...and, oh yeah!  Redefining UTC would break virtually every astronomical software system and application on the planet.  Y2K writ supersize.  Astronomers quite reasonably have assumed that UTC would remain a flavor of Universal Time.  One second of time is 15 seconds of arc on the sky - a huge error in telescope pointing.  If the ITU wishes a timescale without leap seconds, call it something other than UTC.  How about "GPS", for instance?  Leave UTC for backwards compatibility.  Surely this is a better fall-back position, should Mr. Losh's quite reasonable compromise position not win the day in January 2012.
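    Seaman's pointing figure is easy to verify: by convention the sky turns 360 degrees in 86,400 seconds of clock time, so each second of time corresponds to 15 arcseconds of rotation.

```python
# One second of time as an angle on the sky:
# 360 degrees * 3600 arcsec/degree, spread over 86,400 seconds.
arcsec_per_second_of_time = 360 * 3600 / 86400
print(arcsec_per_second_of_time)  # 15.0
```

    (The sidereal day is about 86,164 seconds, so the true rotation rate is slightly faster, but the 15-arcsecond figure is the conventional time-to-angle conversion, and the conclusion about telescope pointing stands either way.)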
    Those who wish to know more about the leap second issue should click on over to:
  • Poul-Henning Kamp | Fri, 08 Apr 2011 13:12:47 UTC

    Sure, but why make the task impossible by insisting that we invent a new timescale which all countries need to adopt as the basis of their legal timekeeping?   It's not like they have all successfully managed the "GMT" to "UTC" transition yet.   Renaming UTC is just Steve's attempt to throw a spanner in the works.
  • SouLShadow | Fri, 08 Apr 2011 11:09:41 UTC

    First of all, I must admit after re-reading the first comment, I seem to have envisioned the same idea using different words. Second, this would not amount to a huge change. I believe, if done correctly, it would be largely transparent to most people and would allow most existing software to continue unmodified. Finally, as to your question of "why" [should this be done], I believe that issue itself is the basis of your entire article. Simply put, the current system is broken and this is merely another possible solution.  As a side note, let's not forget the added benefit of becoming truly universal time, which can carry not only from country to country, but planet to planet and beyond.
  • Poul-Henning Kamp | Thu, 07 Apr 2011 22:42:56 UTC

    Given that neither governments nor The Open Group as POSIX-custodian have shown any tendency to do, or even want to do, a decent job getting the paperwork around timescales right, I think any "solution" depending on them improving their track record is doomed from the outset.   Furthermore, neither of you two astronomers have given even a hint of _why_ we should go to this extraordinary effort, just to avoid changing the definition of a timescale called UTC.
  • SouLShadow | Thu, 07 Apr 2011 20:09:33 UTC

    Similar to the previous comment: create a new timekeeping unit based solely on a fixed, measurable interval. I'll call this Absolute Time. This unit continues counting without adjustment and is exactly the same regardless of location. All interval and precision measurements are made using Absolute Time. Then all current time systems become Relative Time (or Local Time, Display Time, etc.). This can be changed and localized using existing, unmodified systems. Relative Time is synced with Absolute Time and adjusted by local time zones or any other arbitrary modifiers. Now Absolute Time becomes the measurement of time's passage and Relative Time becomes our casual time-of-day reference.
    As for computer implementation, Absolute Time should be written from scratch with new calls added at the system level. Relative Time uses the current system, with minor modifications at the system level, and is merely a translation of the Absolute Time.  The idea being that all code currently in use continues to function properly while implementation details silently change and "new" features are added.
    I know this needs some refinement, but hopefully I outlined the basic logic and mechanics to illustrate the concept.
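    A minimal sketch of the Absolute/Relative split described above, under invented names (absolute_now, to_relative): an uninterrupted count serves interval measurement, and a separate translation layer produces the casual civil label.

```python
# Sketch of the two-layer scheme: intervals use an adjustment-free count;
# display time is a translation anchored to an epoch plus local offsets.

import time
from datetime import datetime, timedelta, timezone

def absolute_now():
    """Adjustment-free, monotonically increasing count for intervals."""
    return time.monotonic()

def to_relative(absolute_seconds, epoch_utc, tz_offset_hours=0):
    """Translate an absolute count into a display timestamp by anchoring
    it to a known epoch and applying an arbitrary local offset."""
    tz = timezone(timedelta(hours=tz_offset_hours))
    return (epoch_utc + timedelta(seconds=absolute_seconds)).astimezone(tz)

# One absolute hour after a (hypothetical) epoch, displayed at UTC+2:
epoch = datetime(2011, 4, 7, 12, 0, tzinfo=timezone.utc)
shown = to_relative(3600, epoch, tz_offset_hours=2)
```

    Leap seconds, time zones and any future decrees all live in the to_relative layer; code that measures durations never sees them.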
  • Steve Allen | Thu, 07 Apr 2011 18:04:34 UTC

    An alternative compromise is to remove leap seconds from the internationally approved broadcast time scale, change the name of the broadcast time scale to something other than UTC (the name TI was suggested at the ITU-R's colloquium in Torino in 2003), change the POSIX spec to say that time_t counts seconds of TI, and then follow the lead of the "right" zoneinfo files, which put the leaps not in time_t (where the kernel has to handle them) but instead make UTC itself into a time zone, where leap seconds can be handled just like the other arbitrary decrees that require us to reset our clocks.