File: BENCHMARKS

- See also SPEED

Update 2nd Nov 2001
ftp.redhat.com ran vsftpd for the RedHat 7.2 release. vsftpd achieved 4,000
concurrent users on a single machine with 1GB of RAM. Even with this insane user
count, bandwidth remained totally saturated. The user count could have been
higher, but the machine ran out of processes.
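(For reference: "ran out of processes" typically means hitting a process
count ceiling. Below is a minimal sketch of checking the per-user ceiling
on Linux with getrlimit() -- illustrative only, not code from vsftpd:)

  #include <stdio.h>
  #include <sys/resource.h>

  int main(void)
  {
      struct rlimit rl;

      /* RLIMIT_NPROC caps how many processes one user may own; with a
       * process (or two) per FTP session, it bounds concurrent users. */
      if (getrlimit(RLIMIT_NPROC, &rl) != 0) {
          perror("getrlimit");
          return 1;
      }
      printf("soft limit: %lu, hard limit: %lu\n",
             (unsigned long) rl.rlim_cur, (unsigned long) rl.rlim_max);
      return 0;
  }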

--
Below are some quick benchmark figures vs. wu-ftpd. These were taken with an
untuned BETA version of vsftpd (0.0.10).

The executive summary is that wu-ftpd got a thorough thrashing. The most
telling statistic is that wu-ftpd typically failed to sustain 400 users,
whereas vsftpd coped with 1000 with room to spare.

A 2.2.x kernel was used. A 2.4.x kernel should make vsftpd look even better
relative to wu-ftpd thanks to the sendfile() boosts in 2.4.x. A 2.4.x kernel
with zerocopy should be amazing.
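(For illustration, here is a minimal sketch of the sendfile() copy path
that 2.4.x speeds up -- hypothetical descriptors, not vsftpd's actual
transfer code:)

  #include <sys/sendfile.h>
  #include <sys/stat.h>

  /* Push an open file down a connected socket without bouncing the
   * data through userspace buffers. */
  int send_whole_file(int sock_fd, int file_fd)
  {
      struct stat st;
      off_t offset = 0;

      if (fstat(file_fd, &st) != 0)
          return -1;
      while (offset < st.st_size) {
          /* sendfile() advances offset by the number of bytes sent */
          if (sendfile(sock_fd, file_fd, &offset, st.st_size - offset) <= 0)
              return -1;
      }
      return 0;
  }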

Many thanks to Andrew Anderson <andrew@redhat.com>
--

Here are some benchmarks that I did on vsftpd vs. wu-ftpd.  The tests were
run with "dkftpbench -hftpserver -n500 -t600 -f/pub/dkftp/<file>", i.e. up
to 500 simulated users fetching the given file over a 600 second run.  The
attached files are the summary output, with the time taken to reach the
steady-state condition.

The interesting things I noticed are:

- In the raw test results, vsftpd had a much higher peak on the x10k.dat
transfer run than wu-ftpd did.  Wu-ftpd peaked at ~150 connections and
bled down to ~130 connections, while vsftpd peaked at ~400 connections and
bled down to ~160 connections.  I tend to believe the peaks more than the
final steady-state that dkftpbench reports, though.

- For the other tests, our wu-ftpd setup was limited to 400 connections,
but in about half of the x100k/x1000k runs it could not even sustain 400
connections, while vsftpd handled 500 easily on those runs.

- During the peak runs at x10k, the machine load with vsftpd looked like
this (these are sar-style CPU utilisation figures; the same columns apply
to the tables below -- I no longer have this data for the wu-ftpd runs):

time              CPU     %user    %nice  %system    %idle
01:01:00 AM       all      4.92     0.00    21.23    73.85
03:31:00 AM       all      4.89     0.00    19.53    75.58
05:11:00 AM       all      4.19     0.00    16.89    78.92
07:01:00 AM       all      5.61     0.00    22.47    71.92

The steady-state loads were more in the range of 3-5% user, 10-15% system.
For the x100k/x1000k runs with vsftpd, the system load looked like this:

x100k.dat:
09:01:00 AM       all      2.27      0.00      9.79     87.94

x1000k.dat:
11:01:00 AM       all      0.42      0.00      5.75     93.83

Not bad -- 500 concurrent users for ~7% system load.

- Just for kicks I ran the x1000k test with 1000 users.  At peak load:

x1000k.dat with 1000 users:
04:41:00 PM       all      1.23      0.00     46.59     52.18

Based on what I'm seeing, it looks like if a server had enough bandwidth,
it could indeed sustain ~2000 users with the current two-process model
implemented in vsftpd.  I did notice that dkftpbench slowed down the
connection rate after 800 connections.  I'm not sure if that was a
dkftpbench issue, or if I ran into some other limit.
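
(For context, vsftpd gives each session a pair of processes: a small
privileged one and an unprivileged one that talks to the client.  The
following is a bare-bones sketch of that per-connection fork pattern --
hypothetical port and greeting, and none of vsftpd's real privilege
dropping or inter-process protocol:)

  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>
  #include <netinet/in.h>
  #include <sys/socket.h>
  #include <sys/wait.h>

  int main(void)
  {
      int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
      struct sockaddr_in addr;

      memset(&addr, 0, sizeof(addr));
      addr.sin_family = AF_INET;
      addr.sin_addr.s_addr = htonl(INADDR_ANY);
      addr.sin_port = htons(2121);  /* hypothetical test port */
      if (listen_fd < 0 ||
          bind(listen_fd, (struct sockaddr*) &addr, sizeof(addr)) != 0 ||
          listen(listen_fd, 64) != 0) {
          perror("socket/bind/listen");
          return 1;
      }
      for (;;) {
          int conn_fd = accept(listen_fd, NULL, NULL);
          if (conn_fd < 0)
              continue;
          if (fork() == 0) {
              /* Process 1 of the session pair: in vsftpd this one would
               * keep privileges and service requests from its twin; the
               * sketch just forks the twin and exits. */
              close(listen_fd);
              if (fork() == 0) {
                  /* Process 2: in vsftpd this one drops privileges and
                   * speaks FTP; here it just greets and exits. */
                  const char* msg = "220 sketch ready\r\n";
                  (void) write(conn_fd, msg, strlen(msg));
                  _exit(0);
              }
              _exit(0);
          }
          close(conn_fd);
          while (waitpid(-1, NULL, WNOHANG) > 0)
              ;  /* reap finished sessions without blocking accept() */
      }
  }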