tlug.jp Mailing List Archive
Re: [tlug] Re: Talk about fast HTTP
- Date: Sat, 18 Oct 2008 19:21:49 +0900
- From: Curt Sampson <cjs@example.com>
- Subject: Re: [tlug] Re: Talk about fast HTTP
- References: <48F850AA.705@bebear.net> <13249286.664581224234776140.JavaMail.root@gla1-mail1> <20081017230648.GD1501@lucky.cynic.net> <c0f4e2b00810171843g6dcd5367k15c011f4f76bb18b@mail.gmail.com>
- User-agent: Mutt/1.5.17 (2007-11-01)
On 2008-10-18 10:43 +0900 (Sat), Bruno Raoult wrote:

> This could be interesting if you remove the "HTTP" word. I believe
> that most companies are looking for performance in networking (I
> really mean system and software tuning here, not physical limits of
> the hardware/lines), whatever the TCP protocol is.

I disagree. Most systems I see these days can easily saturate a couple
of Gigabit Ethernet interfaces with programs that generate, send and
receive simple test data, and use little CPU while doing so. Thus, in
real applications, the bottleneck lies elsewhere than in the networking
stack.

> On Sat, Oct 18, 2008 at 8:06 AM, Curt Sampson <cjs@example.com> wrote:
>
> > ...bandwidth from the disk is often the system bottleneck...
>
> Sorry, I don't really understand what you mean: What do they saturate
> exactly?

The disks simply can't read data fast enough to feed the network at the
rate the CPU, memory, and network interfaces can handle. A trivial
calculation will show why this is so.

With a standard 1U server, I have a pair of GigE interfaces giving me
the ability to send something theoretically approaching 200 MB/sec. Yet
a pair of modern disks, in the best case (sequential reads and few or
no seeks), will let me read only 160-180 MB/sec.

In the practical case, of course, if the working set you're sending
doesn't fit into cache, you'll be doing a lot of seeking, and probably
getting considerably less than that. And it's even worse if the read
load is not evenly distributed across both disks. Our experience, with
a server serving less than half a terabyte of various files, more or
less randomly accessed, is that we can't even read enough from the
disks to saturate one GigE interface.

> There is always a few bottlenecks, which are, in order:

Bottlenecks are of course application- and system-specific, and change
from year to year as you change your software and hardware.

> [Infiniband, etc.] is exactly what we could discuss here.

Not really of interest to me, I'm afraid; I can scale much more cheaply
than that.

cjs
--
Curt Sampson <cjs@example.com> +81 90 7737 2974
Mobile sites and software consulting: http://www.starling-software.com
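A quick sketch of the arithmetic behind the figures quoted above. This
is not part of the original post; the link count, per-disk sequential
read rate, and the remark about protocol overhead are assumptions
chosen only to reproduce the rough numbers in the message (about 200
MB/sec of network capacity against 160-180 MB/sec of disk reads):

    # Back-of-the-envelope check of the numbers in the post above.
    # Assumed figures: two GigE links, and two disks each reading
    # sequentially at 80-90 MB/s.

    links = 2
    link_bits_per_sec = 10**9                  # 1 Gbit/s per GigE link
    raw_net_mb = links * link_bits_per_sec / 8 / 10**6
    print(f"raw network capacity : {raw_net_mb:.0f} MB/s")    # 250 MB/s
    # Framing and TCP/IP overhead eat into that, so usable payload is
    # closer to the ~200 MB/s mentioned in the post.

    disks = 2
    per_disk_mb = (80, 90)                     # assumed sequential read rate per disk
    low, high = (disks * rate for rate in per_disk_mb)
    print(f"sequential disk reads: {low}-{high} MB/s")        # 160-180 MB/s
    # Even in the best case (no seeks), the disks deliver less than the
    # network can carry, so the disks, not the networking stack, set the limit.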
- Follow-Ups:
  - Re: [tlug] Re: Talk about fast HTTP, from Christian Horn
  - Re: [tlug] Re: Talk about fast HTTP, from Bruno Raoult
- References:
  - [tlug] Re: Talk about fast HTTP, from Edward Middleton
  - Re: [tlug] Re: Talk about fast HTTP, from Sach Jobb
  - Re: [tlug] Re: Talk about fast HTTP, from Curt Sampson
  - Re: [tlug] Re: Talk about fast HTTP, from Bruno Raoult