


Performance




Originally published in Queue vol. 13, no. 5
see this item in the ACM Digital Library




Related:

Richard L. Sites - Benchmarking "Hello, World!"


Noor Mubeen - Workload Frequency Scaling Law - Derivation and Verification
Workload scalability has a cascade relation via the scale factor.


Theo Schlossnagle - Monitoring in a DevOps World
Perfect should never be the enemy of better.


Ulan Degenbaev, Jochen Eisinger, Manfred Ernst, Ross McIlroy, Hannes Payer - Idle-Time Garbage-Collection Scheduling
Taking advantage of idleness to reduce dropped frames and memory consumption



Comments

(newest first)

Frank Ch. Eigler | Mon, 08 Jun 2015 15:15:47 UTC

Overbrief summary: the superlinear effects are explained by the baseline platform suffering errors (making it slower than it should have been). Lesson: when measuring scaling up, compare apples to apples in terms of QoI.


steve jenkin | Sun, 07 Jun 2015 08:16:02 UTC

0. Dr Gunther has seen, solved, and explained an extremely important result: "super-linear" scaling, which had gone undocumented in the 50+ years of multi-processing until recently.

1. The analysis is wholly in terms of Throughput, with the absolute value 'normalised' relative to T1 (eqn 1) as a 'speedup', Sp. Until "super-linear" performance was found, no additional equations were needed.

Solving the USL (eqn 2) for maximum Sp (d/dp Sp = 0) provides a "Never Exceed" bound for a system. Attempting to process higher demands leads to lower Throughput, which is not what we want.
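As a concrete sketch of this bound: the USL's standard form is S(p) = p / (1 + sigma(p-1) + kappa p(p-1)), and setting d/dp Sp = 0 gives a closed form for the "Never Exceed" point. The coefficient values below are made up for illustration, not taken from the article:

```python
import math

def usl_speedup(p, sigma, kappa):
    """Gunther's USL: speedup S(p) at p processors, where sigma models
    contention (serialisation) and kappa models coherency delay."""
    return p / (1 + sigma * (p - 1) + kappa * p * (p - 1))

def never_exceed(sigma, kappa):
    """Setting d/dp S(p) = 0 yields p* = sqrt((1 - sigma) / kappa);
    beyond p*, adding processors lowers Throughput."""
    return math.sqrt((1 - sigma) / kappa)

# Illustrative (made-up) coefficients:
sigma, kappa = 0.05, 0.001
p_star = never_exceed(sigma, kappa)   # ~30.8 processors
```

Past p_star the curve turns over: usl_speedup(40, sigma, kappa) is lower than the speedup near p_star, matching the "lower Throughput" behaviour described above.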

2. There are many physical processes that follow exactly these curves; one that's well known is the power output of petrol engines.

Two curves, vs RPM (i.e. p), are always provided to describe engine performance:
- the brake horsepower produced (equivalent to Tp, or the normalised value, Sp), and
- the specific, or normalised, output per revolution: the torque (equivalent to Sp x p).

3. Peak torque marks the maximum economic speed (RPM) of the engine. For minimum fuel use, or maximum range, vehicles or machinery need to operate at this speed.

Similarly, the point where (Sp x p) peaks [d/dp = 0] marks the maximum economic operating region of a system. System designers should question routinely exceeding the maximum specific processor performance.

4. There is a third point of interest in the USL curve, where p is large, Sp <= 1, and Sp continues falling. This is where the system is slower than a single processor, so the additional processors aren't just busy doing nothing; they are creating non-useful work.

The system is spending more processor effort on its internal operations than on productive work.

Beyond peak Throughput, this is always true, but below Unity Speedup, the system needs to radically triage its operations and revert to a single processor.

This is very close to the boundary definition of virtual-memory thrashing, and the system response, load-shedding, is very similar.
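The unity-speedup crossing in point 4 has a simple closed form under the USL: setting S(p) = 1 and dividing through by (p - 1) leaves 1 = sigma + kappa p, so the system falls back to single-processor throughput at p = (1 - sigma)/kappa, the square of the "Never Exceed" point sqrt((1 - sigma)/kappa). A small sketch, with coefficients again made up for illustration:

```python
def unity_speedup_point(sigma, kappa):
    """Solve S(p) = 1 for the USL: p = 1 + sigma*(p-1) + kappa*p*(p-1)
    reduces (for p > 1) to 1 = sigma + kappa*p, i.e. p = (1 - sigma)/kappa.
    Past this point the whole system is slower than one processor."""
    return (1 - sigma) / kappa

# Illustrative (made-up) coefficients sigma=0.05, kappa=0.001 give
# unity speedup at p = 950, the square of the never-exceed point
# sqrt(950) ~ 30.8.
```

This is the point at which, in the commenter's terms, the system should radically triage its operations rather than keep adding processors.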

5. The application of Dr Gunther's USL is similar to Peter Denning et al.'s "Working Sets" in automatically controlling virtual-memory thrashing:
- it defines a well-specified, single performance metric and what, if any, hardware support is needed to capture the data,
- the "Never Exceed" bounds of the metric, and
- the action for the system to take on nearing or exceeding the bound.

It took Denning et al a decade to complete the full theory after proving that Working Set theory explained, and enabled avoidance of, thrashing. Potentially much of that theory can be reused, or tested, in this new application.









© 2018 ACM, Inc. All Rights Reserved.