Benchmarking means measuring performance.
A chain is only as strong as its weakest link. Similarly, overall performance may be limited by any single component: the CPU, the disk, or the filesystem driver.
In the rest of this section we will measure each of these components separately.
This test shows the maximum performance you could achieve with a disk of unlimited speed (it measures only your CPU):
$ bctool benchmark
Benchmark results:
DES CBC 37.21 MB/s
DES LRW 31.63 MB/s
3DES CBC 12.81 MB/s
3DES LRW 12.67 MB/s
CAST CBC 77.71 MB/s
CAST LRW 47.11 MB/s
GOST CBC 32.24 MB/s
GOST LRW 28.25 MB/s
IDEA CBC 32.54 MB/s
IDEA LRW 29.79 MB/s
BLOWFISH CBC 66.75 MB/s
BLOWFISH LRW 52.01 MB/s
RIJNDAEL CBC 90.48 MB/s
RIJNDAEL LRW 60.93 MB/s
RIJNDAEL XTS 92.59 MB/s
TWOFISH CBC 69.00 MB/s
TWOFISH LRW 75.67 MB/s
TWOFISH XTS 74.12 MB/s
BLOWFISH-128 CBC 66.22 MB/s
BLOWFISH-128 LRW 50.05 MB/s
BLOWFISH-448 CBC 90.67 MB/s
BLOWFISH-448 LRW 58.71 MB/s
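These numbers depend heavily on your CPU. As a quick sanity check (assuming an x86 Linux system; this command is not part of bctool), you can see whether the processor advertises hardware AES instructions, which can make a large difference for RIJNDAEL/AES if the software uses them:
$ grep -m1 -o aes /proc/cpuinfo
If this prints "aes", the CPU supports AES-NI.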
The CPU and RAM are usually the fastest components of a computer, so in practice encryption speed is often limited by hard drive speed. Let's measure raw disk write speed first by writing zeroes:
$ dd if=/dev/zero of=/tmp/zero.bin bs=4k count=100k ; rm -f /tmp/zero.bin
102400+0 records in
102400+0 records out
419430400 bytes (419 MB) copied, 18.3354 s, 22.9 MB/s
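Note that dd also measures writes that only reach the page cache; on a machine with plenty of free RAM the reported figure can be higher than the disk can actually sustain. Assuming GNU dd, you can force the data to be flushed to disk before the result is reported:
$ dd if=/dev/zero of=/tmp/zero.bin bs=4k count=100k conv=fdatasync ; rm -f /tmp/zero.bin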
Now let's check the writing speed for pseudo-random (hard to compress) data. Since the kernel's /dev/urandom is slow, we will also use the openssl random generator:
$ dd if=/dev/urandom of=/tmp/random.bin bs=4k count=100k ; rm -f /tmp/random.bin
102400+0 records in
102400+0 records out
419430400 bytes (419 MB) copied, 38.559 s, 10.9 MB/s
$ openssl rand 419430400 | dd of=/tmp/random.bin bs=4k ; rm -f /tmp/random.bin
102400+0 records in
102400+0 records out
419430400 bytes (419 MB) copied, 20.6644 s, 20.3 MB/s
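Even the openssl generator costs CPU time, so part of the difference above comes from the generator rather than the disk. One way around this (a sketch, assuming a Linux system with a tmpfs mounted at /dev/shm and GNU dd) is to generate the random data in RAM first and only time the copy to disk:
$ openssl rand -out /dev/shm/random.bin 419430400
$ dd if=/dev/shm/random.bin of=/tmp/random.bin bs=4k conv=fdatasync ; rm -f /tmp/random.bin /dev/shm/random.bin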
If you are using an SSD, the results for writing zeroes and for writing pseudo-random data may differ significantly. This is because many SSD controllers compress data on the fly, so blocks of zeroes are compressed away or not physically written at all.
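You can see the difference in compressibility yourself (assuming gzip is installed): 4 MB of zeroes shrinks to almost nothing, while the same amount of random data barely compresses at all. Compare the byte counts printed by these two pipelines:
$ dd if=/dev/zero bs=4k count=1k 2>/dev/null | gzip -c | wc -c
$ openssl rand 4194304 | gzip -c | wc -c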
A disk drive itself stores nothing but a set of bytes; it is the filesystem driver that turns those bytes into files and directories. When you store a file on disk, more than its data is written: the filesystem driver also updates a number of on-disk structures that record the file's name and location.
In practice, when you store a single large file this overhead is small, but when you store a large number of small files, write performance can drop noticeably.
In the following test we will create an unencrypted container with a FAT filesystem and compare three cases: generating about 40 MB of random data and discarding it, writing it to a single file, and writing it to 10000 small files:
$ dd if=/dev/zero of=unencrypted_container.bin bs=1M count=400
$ mkfs.vfat unencrypted_container.bin
$ mkdir container_mount_point
$ sudo mount -o loop,uid=`id -u` unencrypted_container.bin container_mount_point
$ cd container_mount_point
$ time for i in `seq 10000`; do openssl rand 4096 > /dev/null ; done
real 4m46.432s
user 1m16.420s
sys 2m52.572s
$ time for i in `seq 10000`; do openssl rand 4096 >> test_file.bin ; done ; rm -f test_file.bin
real 4m56.148s
user 1m15.262s
sys 2m44.934s
$ mkdir test_dir ; time for i in `seq 10000`; do openssl rand 4096 > test_dir/file_$i.bin ; done ; rm -rf test_dir
real 5m56.396s
user 1m18.773s
sys 3m49.781s
The same amount of data (about 40 MB) is discarded via /dev/null, written to a single file, and written to many small files. In the case of many small files the loop takes noticeably longer, which reflects the extra work the filesystem driver does to create and update directory entries for each file.
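When you are done with the test container, unmount it and remove the files created above:
$ cd .. ; sudo umount container_mount_point
$ rmdir container_mount_point ; rm -f unencrypted_container.bin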