Subject: Lots of 150+ TCP Connections / Excessive CPU loading
Posted by: Tim Hyde (timhy…@c21technology.com)
Date: Thu, 31 Aug 2006
I am using Indy to implement a streaming server - a bit like SHOUTcast.
Everything works fine, except that the CPU load feels a bit high.
I am using a 2GHz Celeron on Server 2003. I can get 150 clients (all
TCP/IP connections) running simultaneously, each transferring about 8KB/s,
giving a total data rate of 1200KB/s. I am using Indy 7.
Although this seems okay, if I want many more clients then I am stuck, as
going to a multi-core machine does not really make any difference to the
maximum total number of clients that can connect. I have also discovered
that lower per-client bandwidth allows more clients.
The basic structure of the program is that there is one thread stuffing data
into all the other threads created by the Indy TCP Server class, so there is
an appreciable amount of thread synching going on. At first I thought that
this might be a memory manager problem, but I have tried the BigBrain Pro
memory manager from Digital Tundra and there is no difference (it may even be
slightly worse). I have also optimised all my buffering, and again this made
very little difference.
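To make the structure concrete, here is a rough sketch in Python (illustrative only - the names are made up and my real code is Delphi/Indy). One feeder thread pushes each chunk into a per-connection queue, and each connection thread drains its own queue; every put/get is a lock acquisition, and those multiply with the client count:

```python
import queue
import threading

class Connection:
    def __init__(self):
        # Bounded queue: a slow client eventually back-pressures the feeder.
        self.outbox = queue.Queue(maxsize=64)

def feeder(connections, chunks):
    # The single thread "stuffing data into all the other threads":
    # one synchronised put per client per chunk.
    for chunk in chunks:
        for conn in connections:
            conn.outbox.put(chunk)

def client_worker(conn, sink, n_chunks):
    # Stand-in for a per-connection thread: drain the queue and "send".
    for _ in range(n_chunks):
        sink.append(conn.outbox.get())

connections = [Connection() for _ in range(4)]
sinks = [[] for _ in connections]
workers = [threading.Thread(target=client_worker, args=(c, s, 3))
           for c, s in zip(connections, sinks)]
for w in workers:
    w.start()
feeder(connections, [b'chunk%d' % i for i in range(3)])
for w in workers:
    w.join()
print(all(s == [b'chunk0', b'chunk1', b'chunk2'] for s in sinks))  # → True
```

With 150 clients at 8KB/s chunks, that inner loop is thousands of lock operations per second before a single byte hits the network.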
So now I am back to Indy. Why do I think that Indy is the cause? Well, as
part of my optimisation I have discovered that using the write buffers slows
things down significantly (OpenWriteBuffer). Also, allowing the OnExecute
function (of each TCP connection thread) to return to Indy slows
things down significantly (I only allow a trip back to the Indy internals
every half second, or if the thread Terminate flag is set).
As a benchmark I tried to determine the number of clients that Shoutcast can
support. I could only manage to get about 180 instances of WinAmp started
on another machine, but I estimate that this only added about 10 percent CPU
usage over having no connections to the Shoutcast server. But it is important
to point out that Shoutcast services all of its client connections using a
single thread.
So to sum up, I seem to be stuck at a maximum data rate that I can achieve,
largely independent of the number of clients. I have already squeezed extra
performance out by not letting Indy have its way, through restricting the
frequency of returns from the OnExecute handler. Use of Indy buffering is a
definite no-no. Shoutcast can achieve significantly more connections - maybe
5-10 times as many.
So any relevant suggestions from anyone would be appreciated.
- Is there any better support in later versions of Indy?
- Is there a way to get all the connections serviced by a single thread,
thus reducing the number of points of synchronisation? (Though I am not
convinced that a couple of hundred threads is the problem, as Server 2003
seems to cope with almost 700.)
- Is this all down to the blocking nature of Indy?
- Why does the stuff that calls OnExecute waste so much CPU time achieving
so little?
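On the single-thread question, the kind of thing I have in mind is readiness-based servicing. A rough Python sketch (illustrative only - on Windows the scalable equivalent would be overlapped I/O / completion ports rather than select): one thread, no per-connection threads, no cross-thread synchronisation at all:

```python
import selectors
import socket

sel = selectors.DefaultSelector()
listener = socket.socket()
listener.bind(('127.0.0.1', 0))
listener.listen()
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ)

# One test client so the loop below has something to serve.
client = socket.create_connection(listener.getsockname())

served = []
for _ in range(2):                       # two passes: accept, then write
    for key, events in sel.select(timeout=1):
        if key.fileobj is listener:
            # New client: accept and watch its socket for writability.
            conn, _ = listener.accept()
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_WRITE)
        else:
            # Socket is writable: push the next stream chunk.
            key.fileobj.send(b'stream-chunk')
            served.append(key.fileobj)

data = client.recv(64)
print(data)  # → b'stream-chunk'
```

The real server would keep a per-connection cursor into the shared stream buffer instead of copying chunks around, which is presumably how Shoutcast serves so many clients from one thread.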