First, some crypto-relevant info theory:
Encrypted data is uniformly distributed, i.e., has maximal entropy per symbol;
Raw, uncompressed data is typically redundant, i.e., has sub-maximal entropy.
Suppose you could measure the entropy of the data flowing to and from your network interface. Then you could tell unencrypted data apart from encrypted data. This would be true even though some of the data in “encrypted mode” is not encrypted: the outermost IP header must stay in the clear if the packet is to be routable.
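As a crude first cut, one can estimate a zero-order entropy directly from byte frequencies; a minimal C sketch follows (the name byte_entropy is ours, for illustration only, not part of the code given later):

    #include <math.h>
    #include <stddef.h>

    /* Estimate zero-order entropy, in bits per byte, from byte
     * frequencies.  Typical English text scores roughly 4-5;
     * good ciphertext scores close to the maximum of 8. */
    double
    byte_entropy(const unsigned char *buf, size_t n)
    {
        size_t count[256] = { 0 };
        double h = 0.0;
        size_t i;

        for (i = 0; i < n; i++)
            count[buf[i]]++;
        for (i = 0; i < 256; i++) {
            if (count[i] == 0)
                continue;
            double p = (double)count[i] / (double)n;
            h -= p * log2(p);
        }
        return h;
    }

A frequency count alone misses inter-symbol redundancy, though, which is why a compression-like test does better.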
Ueli Maurer's “Universal Statistical Test for Random Bit Generators” (MUST) quickly estimates the entropy of a sample. It uses a compression-like statistic: roughly, it averages the logarithms of the gaps between recurrences of each symbol, so redundant data scores low and random data scores near the maximum. The code is given below for a variant which measures successive (~quarter-megabyte) chunks of a file.
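To make the statistic concrete, here is a rough sketch of the core of Maurer's test for L = 8 (byte-wide symbols), using Maurer's parameter name Q for the initialization length. It is an illustration of the recurrence-distance idea, not the code referred to above:

    #include <math.h>
    #include <stddef.h>

    #define L 8             /* bits per symbol: whole bytes        */
    #define V (1 << L)      /* number of symbol values             */
    #define Q (10 * V)      /* initialization symbols, per Maurer  */

    /* Per-symbol test statistic: the average of log2 of the gap
     * since each byte value last occurred.  Redundant data repeats
     * soon (small gaps, low score); random data does not. */
    double
    must(const unsigned char *buf, size_t n)
    {
        size_t last[V] = { 0 }; /* 1-based position of last occurrence */
        double sum = 0.0;
        size_t i;

        if (n <= Q)
            return -1.0;        /* sample too small */
        for (i = 0; i < Q; i++) /* initialization phase */
            last[buf[i]] = i + 1;
        for (i = Q; i < n; i++) {       /* test phase */
            sum += log2((double)(i + 1 - last[buf[i]]));
            last[buf[i]] = i + 1;
        }
        return sum / (double)(n - Q);
    }

For ideally random bytes the per-symbol statistic converges to about 7.18 bits (Maurer's tabulated expectation for L = 8); redundant plaintext lands well below that.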
We also need a way to capture the raw network data. The tcpdump(1) program does this, provided you have enabled the Berkeley Packet Filter (BPF) interface in your kernel's config file.
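On 4.4BSD-derived systems the config line is typically of this form (the device name and instance count vary from system to system; treat this as an example and check bpf(4) on yours):

    pseudo-device   bpfilter        8       # Berkeley Packet Filter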
The command:
tcpdump -c 4000 -s 10000 -w dumpfile.bin
will capture 4000 raw packets to dumpfile.bin. The -s flag sets the “snap length”, so up to 10,000 bytes of each packet are saved in this example.
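A hypothetical driver for the must() sketch above, applying it to successive quarter-megabyte chunks of a file such as dumpfile.bin, might look like this. Note that the capture file also contains tcpdump's own per-packet record headers, which slightly dilutes the measurement:

    #include <stdio.h>
    #include <stdlib.h>

    #define CHUNK (256 * 1024)  /* ~quarter megabyte per measurement */

    extern double must(const unsigned char *, size_t);

    int
    main(int argc, char **argv)
    {
        unsigned char *buf;
        FILE *fp;
        size_t n;
        int chunk = 0;

        if (argc != 2 || (fp = fopen(argv[1], "rb")) == NULL) {
            fprintf(stderr, "usage: must file\n");
            return 1;
        }
        if ((buf = malloc(CHUNK)) == NULL)
            return 1;
        /* A short final chunk may fall below Q symbols; must()
         * then reports -1. */
        while ((n = fread(buf, 1, CHUNK, fp)) > 0)
            printf("chunk %d: %.4f bits/symbol\n", ++chunk, must(buf, n));
        free(buf);
        fclose(fp);
        return 0;
    }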