
Kode Vicious





Originally published in Queue vol. 8, no. 12



Have a question for Kode Vicious? E-mail him at [email protected]. If your question appears in his column, we'll send you a rare piece of authentic Queue memorabilia. We edit e-mails for style, length, and clarity.


Yonatan Sompolinsky, Aviv Zohar - Bitcoin's Underlying Incentives
The unseen economic forces that govern the Bitcoin protocol

Antony Alappatt - Network Applications Are Interactive
The network era requires new models, with interactions instead of algorithms.

Jacob Loveless - Cache Me If You Can
Building a decentralized web-delivery model

Theo Schlossnagle - Time, but Faster
A computing adventure about time through the looking glass


Comments (newest first)

H Langeveld | Mon, 20 Dec 2010 01:08:12 UTC

The problem with NFS is not really the 32k-block-at-a-time processing - you can actually have many outstanding requests open simultaneously. With NFS over TCP, you quickly run into the mice/elephants issue. A lot of the delay in NFS comes from the frequent attribute checking, just to verify the file's consistency. That's a lot of mice. But when each mouse has to wait in line for a gap between two elephants, that's a long delay.

Just try running a find across NFS with a big file transfer going on, and then without one.
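The attribute-check chatter can be tuned from the client side. A minimal sketch, assuming a Linux client; server:/export and the mount points are placeholder names:

```shell
# Default attribute cache timeouts range from 3s (acregmin) to 60s (acregmax).
# Lengthening them cuts the stream of GETATTR "mice" at the cost of staleness.
mount -t nfs -o actimeo=30 server:/export /mnt/export

# The other extreme: noac disables attribute caching entirely, making every
# stat() a round trip - strict consistency, but terrible for a find.
mount -t nfs -o noac server:/export /mnt/strict
```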

I've seen massive improvements by forcing a separate (set of) TCP connection(s) for each client mount - even when different mounts refer to the same backend file system. Ideally, you'd want to give each process its own pool of TCP connections, so that different sessions don't get in each other's way.
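Current Linux clients offer something close to this without tricks. A sketch, assuming a kernel with nconnect support (5.3 or later) and placeholder server/export names:

```shell
# nconnect=N opens N TCP connections for the mount and spreads RPCs across
# them, so one elephant transfer no longer stalls every mouse behind it.
mount -t nfs -o vers=4.1,nconnect=8 server:/export /mnt/export
```

Note this is still per-server rather than per-process: mounting the same export again normally reuses the existing connection set, so it only approximates the per-session isolation described above.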

NFS over SCTP might help here by not imposing an arbitrary order on all transactions between two systems. The idea has been around a few times, but I'm not aware of anyone who's actually tried to build it.

KV | Thu, 16 Dec 2010 19:08:39 UTC


This type of caching and acceleration, while interesting, still doesn't get around the basic problem I'm pointing out. Although it might speed things up a bit, it's not going to make accessing a transoceanic server anywhere near usable. To be honest, the real answer to these sorts of problems usually lies in understanding what data needs to be where, and distributing it properly. People always want a silver bullet, but they're very rare.


joshduffek | Thu, 16 Dec 2010 07:16:57 UTC

kv, you should look into doing a Riverbed demo. The products are fairly expensive, but they can optimize TCP traffic over WANs so that it doesn't have to wait for ACKs from the other side before sending more data. They can also store your most-used data: if one person opens a file, its data is kept in a datastore, and if another person opens the same file, it is served from the local Riverbed's datastore instead of going across the WAN. If a piece of the file is changed, only those changes are sent.

Here is a CIFS transfer demo video to get an idea: http://www.youtube.com/watch?v=TjkOm07GbWk


© 2018 ACM, Inc. All Rights Reserved.