Tutorial: Performance and Interface Operations

Performance issues

Running T2 on an interface can be achieved with the -i command line option. Nevertheless, you might face problems with certain memory, OS or library peculiarities, e.g. libpcap. So you might also need input buffering and flow timeout controls, which we will discuss in this tutorial. The discussed performance enhancements also pertain to operations on large pcaps.

Preparation

In order to ensure that no old or unnecessary plugins are loaded, clean your plugin directory and rebuild the standard plugins:

$ t2build -e
Are you sure you want to empty the plugin folder '/home/wurst/.tranalyzer/plugins' (y/N)? y
Plugin folder emptied
$ t2build basicFlow basicStats tcpStates txtSink
...
$

Basic Interface ops

In order to collect flows from any interface you need to become root; the command st2 will help you out:

$ st2 -i interface -w -
[sudo] password for wurst:
...

The -w - option routes the flow output to stdout. You can then pipe that output into netcat and send it somewhere. If you invoke t2stat in another window, you will see the statistics on the interface:

$ st2 -i interface -w - | netcat 127.0.0.1 6666
[sudo] password for wurst:
...

                                    @      @
                                     |    |
===============================vVv==(a    a)==vVv===============================
=====================================\    /=====================================
======================================\  /======================================
                                       oo
USR1 A type report: Tranalyzer 0.8.6 (Anteater), Tarantula. PID: 34007

...
Number of processed   packets/flows: 21.15
Number of processed A packets/flows: 14.40
Number of processed B packets/flows: 57.55
Number of processed total packets/s: 242009.68 (242.01 K)
Number of processed A+B packets/s: 242009.68 (242.01 K)
Number of processed A   packets/s: 138976.19 (138.98 K)
Number of processed   B packets/s: 103033.49 (103.03 K)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Number of average processed flows/s: 11442.27 (11.44 K)
Average full raw bandwidth: 1532333568 b/s (1.53 Gb/s)
Average snapped bandwidth : 1531812352 b/s (1.53 Gb/s)
Average full bandwidth : 1531986304 b/s (1.53 Gb/s)
Max number of flows in memory: 70051 (70.05 K) [26.72%]
Memory usage: 0.18 GB [0.26%]
...

This is a link carrying 1.5 Gbit/s on average. Now let's assume that you have a 10+ Gbit/s line. As T2 is basically a lean single-threaded packet collector, there is a limit to interface operation. It can be extended, but operation beyond 10 Gbit/s is currently bound by the single thread. Nevertheless, there will be a parallelized version, which can then consume more traffic but also more memory, and it will not be able to do some of the wonderful things, such as interworking L4 with L3 information as in connStat.

Nevertheless, with 26% of the flow memory used and only 0.3% of the total memory, we don't need to worry at this point.

Until then, the -c option, invoking several T2 instances on different cores and interfaces behind a regeneration tap, is an option we also use in practice.

In the following, all options to make the Anteater faster will be discussed.

Performance configurations

Most practitioners only want monitoring or a simple flow aggregation as defined in NetFlow v5-v7, or have almost no memory constraints. There are several methods to influence the performance of T2 in time and memory:

Issue                            Parameter                               File
Core traffic peaks               ENABLE_IO_BUFFERING                     ioBuffer.h
Short flows, high # flows/s      ENABLE_IO_BUFFERING, socket, -c option  tranalyzer.h
# of flows in memory             HASH, -f option, HASH functions         hashTable.h
Flow memory limitations          FLOW_TIMEOUT                            tranalyzer.h
Maximum flow release time        FDURLIMIT                               tranalyzer.h
Redirect output, block buffer    BLOCK_BUF                               tranalyzer.h
Remove clutter                   VERBOSE                                 tranalyzer.h
Binary output: binSink           GZ_COMPRESS                             binSink.h
# and configuration of plugins   see plugin documentation
Parallelization                  see parallelization tutorial

All these files reside under the src directory of the T2 core, so move there.

$ tranalyzer2; cd src
$ ls
binaryValue.c  global.h  hashTable.c  hdrDesc.c  ioBuffer.c  linktypes.h    loadPlugins.h  main.h    Makefile.am  networkHeaders.h  outputBuffer.h   packetCapture.h  tranalyzer.h
binaryValue.h  hash      hashTable.h  hdrDesc.h  ioBuffer.h  loadPlugins.c  main.c         Makefile  Makefile.in  outputBuffer.c    packetCapture.c  proto
$

In the following we will look at these files and discuss the different options.

Buffering

So let’s look at the buffering system of t2, defined in ioBuffer.h:

$ vi ioBuffer.h

Look for Input Buffering

ENABLE_IO_BUFFERING enables the input buffer. IO_BUFFER_FULL_WAIT_MS denotes the polling interval of the core's write process when the input queue is full. IO_BUFFER_SIZE should be chosen according to the number of plugins loaded, the expected traffic bandwidth and the average pkts/flow: the lower the pkts/flow, the more flows are created per second and the larger the buffer should be.

$ t2conf tranalyzer2 -D ENABLE_IO_BUFFERING=1
$ t2build tranalyzer2
...
$
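
If the default buffer size or polling interval does not fit your traffic, the two other constants mentioned above can be changed the same way. The values below are purely illustrative assumptions, not a recommendation:

$ t2conf tranalyzer2 -D IO_BUFFER_SIZE=65536 -D IO_BUFFER_FULL_WAIT_MS=5
$ t2build tranalyzer2
...
$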

Hash chain table size, Hash Autopilot, hash functions

The Anteater keeps all flow information, including that of the plugins, in memory until a flow times out; then it reuses the so-called flow bucket. The number of flow buckets is defined in tranalyzer.h by the constant HASHCHAINTABLE_BASE_SIZE.

If chosen as a power of two, the hash becomes very performant, as the modulo reduces to a simple bitwise &. The hash space required to address the buckets is defined by HASHTABLE_BASE_SIZE, and in order to avoid hash collisions it is good practice to choose this value several times the number of flow buckets, e.g. eight times; the factor is controlled by HASH_CHAIN_FACTOR in the listing below.

The multiplication factor depends on the type of traffic, so the factor used in T2 is good practice according to our experience. You may increase the ratio between hash buckets and hash space, but then you will use a bit more memory. It is a trade-off and depends on the type of HW you are using.

Now open tranalyzer.h and search for HASHTABLE_BASE_SIZE.

$ vi tranalyzer.h

// The sizes of the hash table
#define HASHFACTOR        1        // default multiplication factor for HASHTABLE_BASE_SIZE and HASHCHAINTABLE_BASE_SIZE if no -f option
#define HASH_CHAIN_FACTOR 2
#define HASHTABLE_BASE_SIZE       (HASHCHAINTABLE_BASE_SIZE * HASH_CHAIN_FACTOR)
#define HASHCHAINTABLE_BASE_SIZE  262144UL // 2^18

#define HASH_AUTOPILOT 1 // 1: avoids overrun of main hash, flushes oldest NUMFLWRM flow on flowInsert, 0: disable hash overun protection
#define NUMFLWRM       1 // number of flows to flush when main hash map is full

#endif // __TRANALYZER_H__

You may choose these values as you wish if you do not need to optimize run-time performance, as is required on an interface or when dealing with very large pcaps. Note that the compiler, the HW and the memory layout are also factors; a large increase of HASH_CHAIN_FACTOR can then be non-beneficial. If you don't want to edit .h files, here are the commands to enlarge it by a factor of 8.

$ t2conf tranalyzer2 -D HASH_CHAIN_FACTOR=8
$ t2build tranalyzer2
...
$

To increase the flow bucket count and the hash space together, HASHFACTOR has to be changed. To make your life easier we added the -f command line option, which multiplies both hash constants by the value you supply. If you want to test it on your interface, the command line would be:

$ st2 -i interface -w - -f 8
[sudo] password for wurst:
...
$

You can also try it on one of our pcaps.
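
For example, on the pcap used throughout these tutorials (the factor 4 is only an illustration, not a tuned value):

$ t2 -r ~/data/annoloc2.pcap -w ~/results/ -f 4
...
$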

If t2 runs out of hash buckets, because you underestimated the number of flows in a pcap, or on an interface, the hash autopilot automatically removes the oldest NUMFLWRM flows, reports the required hash factor and continues.

This survival function avoids a rerun if the pcap is very large. So you still get a result, and for the next pcap or interface run you can supply the proposed hash factor via the -f option.

If performance is too low, first look at the following lines in the t2 end report:

Max number of flows in memory: 70051 (70.05 K) [26.72%]
Memory usage: 0.18 GB [0.26%]

If the first is above 30% and the second below 40%, you may increase the hash space and flow memory by a large factor to avoid hash collisions. Otherwise, increase it in steps by a factor of 2.

There is one more last resort: the hash function. T2 provides a choice of several hash functions in hashTable.h. Look for T2_HASH_FUNC.

$ vi hashTable.h
...
#define T2_HASH_FUNC       8 // Hash function to use:
                             //   0: standard
                             //   1: Murmur3 32-bits
                             //   2: Murmur3 128-bits (truncated to 64-bits)
                             //   3: xxHash 32-bits
                             //   4: xxHash 64-bits
                             //   5: CityHash64
                             //   6: MUM-V2 64-bits
                             //   7: hashlittle 32-bits
                             //   8: wyhash 64-bits
                             //   9: FastHash32
                             //  10: FastHash64
...

The default is wyhash 64-bits, which produced the best results in tests within our domain of work, but you may choose another one; each has its pros and cons. If you change the constant, don't forget to run t2build tranalyzer2.
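
Assuming t2conf also covers the constants in hashTable.h, switching to e.g. xxHash 64-bits (value 4 in the listing above) would look like this:

$ t2conf tranalyzer2 -D T2_HASH_FUNC=4
$ t2build tranalyzer2
...
$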

Flow timeout and flow duration

FLOW_TIMEOUT defines the maximum amount of time after the last received packet until a flow is removed from memory and sent to the appropriate output channel.

The standard flow timeout is set to 182 seconds, a bit more than 3 minutes, which is an empirical value covering most keep-alive packets. It can be changed at the user's discretion. This value is overridden by any plugin that configures the flow timeout engine in the core, such as tcpStates, so that TCP flows encountering a RST or FIN time out immediately instead of wasting memory. If the value of FLOW_TIMEOUT is very small, more flows are created and hence more output, resulting in higher delays.

The FDURLIMIT value controls the maximum flow duration until a flow is released from memory. This option is independent of FLOW_TIMEOUT; a value > 0 activates the feature. If it is chosen small, then large numbers of flows are created, also resulting in higher delays, because more output is generated per unit of time.

FDURLIMIT is often used to minimize the time until a flow becomes visible in the output. It uses the mechanics of FORCE_MODE.

// Maximum lifetime of a flow
#define FDURLIMIT 0 // if > 0; forced flow life span of n +- 1 seconds;

// The standard timeout for a flow in seconds
#define FLOW_TIMEOUT 182 // flow timeout after a packet is not seen after n seconds
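
Both constants can again be changed without editing the header. The values below are only an illustrative assumption for a setup where flows should surface quickly, not a recommendation:

$ t2conf tranalyzer2 -D FDURLIMIT=60 -D FLOW_TIMEOUT=60
$ t2build tranalyzer2
...
$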

Verbose

In order to reduce the clutter being transferred to a socket or stdout, T2 can be muted. In tranalyzer.h the constant VERBOSE controls T2's reporting.

$ tranalyzer2
$ vi tranalyzer.h

...
/*
 * The verbose level of final report:
 * 0: no output
 * 1: Basic pcap report
 * 2: + full traffic statistics
 * 3: + info about frag anomalies
 */
#define VERBOSE 2
...

So set VERBOSE to 0 using t2conf and recompile with the -R option, as plugins also implement VERBOSE:

$ t2conf tranalyzer2 -D VERBOSE=0
$ t2build -R
...
$

Now no output is generated to stdout anymore.

Packets/Flow distribution

A good way to get an idea about performance problems is to look at the T2 end report, as already discussed above. Often a detailed view of the packets/flow distribution, or the calculation of its center of mass, helps to assess the situation. The script flowstat under tranalyzer2/scripts comes to the rescue.

$ tran; cd scripts
$ ./flowstat --help
Usage:
    flowstat [OPTION...] <FILE_flows.txt>

Optional arguments:
    -c col        Column name or number
    -d dir        Select one flow direction only (A or B)
    -m max        Maximum value to bin (requires -s option)
    -s size       Bin size (requires -m option)
    -t            gnuplot mode
    -0            Suppress 0 counts

    -h, --help    Show this help, then exit
$

It computes the packet distribution in selected bins and produces global parameters such as maxima and the center of mass. Except for the latter, all these statistics are also computed in the T2 end report. Let's look at the flow file annoloc2_flows.txt we already generated in earlier scenarios.

$ ./flowstat ~/results/annoloc2_flows.txt -c numPktsSnt -s 1 -m 9000 -0
% bin	count	relCnt
1-1	6673	37.9083
2-2	2402	13.6454
3-3	1403	7.97023
4-4	1136	6.45345
5-5	926	5.26047
6-6	472	2.68136
7-7	366	2.07919
8-8	379	2.15304
9-9	207	1.17594
10-10	139	0.789638
...

total col sum, max_col_value, #flows, cent_mass, 50%:	23601	1219015	17603	66.523	1.93972
$

I chose a bin size of 1 and collected all flows up to 9000 packets to get a precise cent_mass value. It is at 66 packets/flow, but 50% of all flows have only 1-2 packets. So half of all flows are created after only one or two packets, i.e. the flow creation rate is very high, which is a serious performance issue. If you look at the first 100 packets/flow bins below, you clearly see the exponential decay, which is very typical of today's traffic in the wild.

$ t2plot ~/results/annoloc2_flows.txt -o numPktsSnt -H 1 -sx 0:100 -r
[Figure: packets/flow distribution, first 100 bins, linear scale]

But if you plot up to 2000 packets/flow on a logarithmic scale, you see the long tail which increases the center of mass of the distribution.

$ t2plot ~/results/annoloc2_flows.txt -o numPktsSnt -H 20 -sx 0:2000 -r -ly
[Figure: packets/flow distribution, bin size 20, up to 2000 packets, logarithmic y-axis]

So what does this mean? Half of the flows are created after only one or two packets, which becomes a performance issue when running on an interface with many content plugins loaded and a bit rate that exceeds the processing power of the core, e.g. > 5-6 Gbit/s.

Dissector optimization

If you already know that certain encapsulation protocols, such as L2TP or GRE, are not present in your traffic, why keep code for them? Switch them off. The dissector in the core can be configured accordingly; the switches reside in tranalyzer.h, search for Protocol stack:

...
// Protocol stack
#define AYIYA           1 // AYIYA processing on: 1, off: 0
#define GENEVE          1 // GENEVE processing on: 1, off: 0
#define TEREDO          1 // TEREDO processing on: 1, off: 0
#define L2TP            1 // L2TP processing on: 1, off: 0
#define GRE             1 // GRE processing on: 1, off: 0
#define GTP             1 // GTP processing on: 1, off: 0
#define VXLAN           1 // VXLAN processing on: 1, off: 0
#define IPIP            1 // IPv4/6 in IPv4/6 processing on: 1, off: 0
#define ETHIP           1 // Ethernet over IP on: 1, off: 0
#define CAPWAP          1 // CAPWAP processing on: 1, off: 0
#define LWAPP           1 // LWAPP processing on: 1, off: 0
...

If you are sure that your traffic contains no L2TP, GRE, TEREDO or IPIP, switch them off and save code run time.

$ t2conf tranalyzer2 -D TEREDO=0 -D L2TP=0 -D GRE=0 -D IPIP=0
$ t2build tranalyzer2
...
$

BPF filtering

If you are only interested in certain traffic, e.g. UDP or HTTP, then prefilter it via BPF. Why generate flows for something you do not need, right? That is a splendid performance enhancer.

$ t2 -r ~/data/annoloc2.pcap -w ~/results/ udp and port 53
================================================================================
Tranalyzer 0.8.6 (Anteater), Tarantula. PID: 24653
================================================================================
[INF] Creating flows for L2, IPv4, IPv6
Active plugins:
    01: basicFlow, 0.8.6
    02: basicStats, 0.8.6
    03: tcpStates, 0.8.6
    04: txtSink, 0.8.6
[INF] basicFlow: IPv4 Ver: 4, Rev: 01102019, Range Mode: 0, subnet ranges loaded: 312983 (312.98 K)
[INF] basicFlow: IPv6 Ver: 4, Rev: 01102019, Range Mode: 0, subnet ranges loaded: 21495 (21.50 K)
Processing file: /home/wurst/data/annoloc2.pcap
[INF] BPF: udp and port 53
Link layer type: Ethernet [EN10MB/1]
Dump start: 1022171701.692718 sec (Thu 23 May 2002 16:35:01 GMT)
[WRN] snapL2Length: 54 - snapL3Length: 40 - IP length in header: 65
Dump stop : 1022171726.636776 sec (Thu 23 May 2002 16:35:26 GMT)
Total dump duration: 24.944058 sec
Finished processing. Elapsed time: 0.122558 sec
Finished unloading flow memory. Time: 0.127228 sec
Percentage completed: 0.25%
Number of processed packets: 2928 (2.93 K)
Number of processed bytes: 158112 (158.11 K)
Number of raw bytes: 276323 (276.32 K)
Number of pcap bytes: 83586990 (83.59 M)
Number of IPv4 packets: 2928 (2.93 K) [100.00%]
Number of A packets: 1459 (1.46 K) [49.83%]
Number of B packets: 1469 (1.47 K) [50.17%]
Number of A bytes: 78786 (78.79 K) [49.83%]
Number of B bytes: 79326 (79.33 K) [50.17%]
Average A packet load: 54.00
Average B packet load: 54.00
--------------------------------------------------------------------------------
basicStats: Biggest L3 Talker: 138.212.18.252 (JP): 464 [15.85%] packets
basicStats: Biggest L3 Talker: 138.212.191.105 (JP): 30828 (30.83 K) [19.50%] bytes
--------------------------------------------------------------------------------
Headers count: min: 3, max: 3, average: 3.00
Number of UDP packets: 2928 (2.93 K) [100.00%]
Number of UDP bytes: 158112 (158.11 K) [100.00%]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Number of processed   flows: 396
Number of processed A flows: 199 [50.25%]
Number of processed B flows: 197 [49.75%]
Number of request     flows: 198 [50.00%]
Number of reply       flows: 198 [50.00%]
Total   A/B    flow asymmetry: 0.01
Total req/rply flow asymmetry: 0.00
Number of processed   packets/flows: 7.39
Number of processed A packets/flows: 7.33
Number of processed B packets/flows: 7.46
Number of processed total packets/s: 117.38
Number of processed A+B packets/s: 117.38
Number of processed A   packets/s: 58.49
Number of processed   B packets/s: 58.89
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Number of average processed flows/s: 15.88
Average full raw bandwidth: 88622 b/s (88.62 Kb/s)
Average snapped bandwidth : 50709 b/s (50.71 Kb/s)
Average full bandwidth : 88610 b/s (88.61 Kb/s)
Max number of flows in memory: 396 [0.15%]
Memory usage: 0.06 GB [0.08%]
Aggregate flow status: 0x0000100200004000
[WRN] L3 SnapLength < Length in IP header
[WRN] Consecutive duplicate IP ID
[INF] IPv4
$

The end report shows only UDP traffic, i.e. DNS traffic, and the flow file now contains only the matching DNS flows:

%dir  flowInd  flowStat            timeFirst          timeLast           duration   numHdrDesc  numHdrs  hdrDesc       srcMac             dstMac             ethType  ethVlanID  srcIP            srcIPCC  srcIPWho                       srcPort  dstIP            dstIPCC  dstIPWho                       dstPort  l4Proto  numPktsSnt  numPktsRcvd  numBytesSnt  numBytesRcvd  minPktSz  maxPktSz  avePktSize  stdPktSize  minIAT  maxIAT    aveIAT        stdIAT        pktps      bytps     pktAsm        bytAsm        tcpStates
A     2        0x0000000200004000  1022171701.830893  1022171701.830893  0.000000   1           3        eth:ipv4:udp  00:00:ab:91:1f:f8  00:d0:02:6d:78:00  0x0800              138.212.189.209  jp       "ASAHI KASEI CORPORATION"      4541     138.212.18.252   jp       "Asahi Kasei Networks Corpor"  53       17       1           1            36           98            36        36        36          0           0       0         0             0             0          0         0             -0.4626866    0x00
B     2        0x0000000200004001  1022171701.830914  1022171701.830914  0.000000   1           3        eth:ipv4:udp  00:d0:02:6d:78:00  00:00:ab:91:1f:f8  0x0800              138.212.18.252   jp       "Asahi Kasei Networks Corpor"  53       138.212.189.209  jp       "ASAHI KASEI CORPORATION"      4541     17       1           1            98           36            98        98        98          0           0       0         0             0             0          0         0             0.4626866     0x00
A     4        0x0000000200004000  1022171701.967777  1022171701.967777  0.000000   1           3        eth:ipv4:udp  00:48:54:7a:23:90  00:d0:02:6d:78:00  0x0800              138.212.188.61   jp       "ASAHI KASEI CORPORATION"      2226     138.212.218.199  jp       "ASAHI KASEI CORPORATION"      53       17       1           0            34           0             34        34        34          0           0       0         0             0             0          0         1             1             0x00
A     5        0x0000000200004000  1022171701.967782  1022171701.967782  0.000000   1           3        eth:ipv4:udp  00:48:54:7a:23:90  00:d0:02:6d:78:00  0x0800              138.212.188.61   jp       "ASAHI KASEI CORPORATION"      2227     138.212.18.252   jp       "Asahi Kasei Networks Corpor"  53       17       1           1            34           125           34        34        34          0           0       0         0             0             0          0         0             -0.572327     0x00
B     5        0x0000000200004001  1022171702.434285  1022171702.434285  0.000000   1           3        eth:ipv4:udp  00:d0:02:6d:78:00  00:48:54:7a:23:90  0x0800              138.212.18.252   jp       "Asahi Kasei Networks Corpor"  53       138.212.188.61   jp       "ASAHI KASEI CORPORATION"      2227     17       1           1            125          34            125       125       125         0           0       0         0             0             0          0         0             0.572327      0x00
A     6        0x0000000200004000  1022171701.975597  1022171701.975597  0.000000   1           3        eth:ipv4:udp  00:01:02:b7:f1:cf  00:d0:02:6d:78:00  0x0800              138.212.187.120  jp       "ASAHI KASEI CORPORATION"      64366    138.212.18.252   jp       "Asahi Kasei Networks Corpor"  53       17       1           1            38           224           38        38        38          0           0       0         0             0             0          0         0             -0.7099237    0x00
B     6        0x0000000200004001  1022171702.073014  1022171702.073014  0.000000   1           3        eth:ipv4:udp  00:d0:02:6d:78:00  00:01:02:b7:f1:cf  0x0800              138.212.18.252   jp       "Asahi Kasei Networks Corpor"  53       138.212.187.120  jp       "ASAHI KASEI CORPORATION"      64366    17       1           1            224          38            224       224       224         0           0       0         0             0             0          0         0             0.7099237     0x00
A     7        0x0000000200004000  1022171702.299027  1022171702.299027  0.000000   1           3        eth:ipv4:udp  00:00:ab:91:1f:f8  00:d0:02:6d:78:00  0x0800              138.212.189.209  jp       "ASAHI KASEI CORPORATION"      4545     138.212.18.252   jp       "Asahi Kasei Networks Corpor"  53       17       1           1            34           147           34        34        34          0           0       0         0             0             0          0         0             -0.6243094    0x00
B     7        0x0000000200004001  1022171702.299046  1022171702.299046  0.000000   1           3        eth:ipv4:udp  00:d0:02:6d:78:00  00:00:ab:91:1f:f8  0x0800              138.212.18.252   jp       "Asahi Kasei Networks Corpor"  53       138.212.189.209  jp       "ASAHI KASEI CORPORATION"      4545     17       1           1            147          34            147       147       147         0           0       0         0             0             0          0         0             0.6243094     0x00
...
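
The same approach works for any BPF expression; as an illustrative assumption, to keep only web traffic you could run:

$ t2 -r ~/data/annoloc2.pcap -w ~/results/ tcp and port 80
...
$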

Binary Output

If you need to write to disk or a slow medium, then the only option is to reduce the volume by choosing a binary encoding. The worst you could choose is JSON. This format was probably invented by a sadist who likes to torture HW and programmers. So the plugin binSink is your best and non-sadistic choice for optimal performance.

The configuration in binSink.h gives you the choice to compress the binary output even further, and whether to allow splitting of binary files when the -W option is used. You have to test whether the computational overhead is a lesser pain than the additional bytes written to a slow medium or a low-bandwidth channel.

All constants can be configured by t2conf binSink -D ....
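
For instance, a sketch of enabling compression, assuming the GZ_COMPRESS switch listed in the table above takes 0/1:

$ t2conf binSink -D GZ_COMPRESS=1
$ t2build binSink
...
$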

As an exercise, add binSink, run t2 on annoloc2.pcap and decode the binary flow file with t2b2t. The latter is automatically compiled when binSink is built. You can do a diff between the txtSink file and the t2b2t output if you wish. And if you really insist, convert it to json crap. Here are some sample commands to use.

$ t2build binSink
...
$ t2 -r ~/data/annoloc2.pcap -w ~/results
...
$ t2b2t -r ~/results/annoloc2_flows.bin | head
%dir  flowInd  flowStat            timeFirst          timeLast           duration   numHdrDesc  numHdrs  hdrDesc       srcMac             dstMac             ethType  ethVlanID  srcIP            srcIPCC  srcIPWho                       srcPort  dstIP            dstIPCC  dstIPWho                       dstPort  l4Proto  numPktsSnt  numPktsRcvd  numBytesSnt  numBytesRcvd  minPktSz  maxPktSz  avePktSize  stdPktSize  minIAT  maxIAT    aveIAT        stdIAT        pktps      bytps     pktAsm        bytAsm        tcpStates
A     2        0x0000000200004000  1022171701.830893  1022171701.830893  0.000000   1           3        eth:ipv4:udp  00:00:ab:91:1f:f8  00:d0:02:6d:78:00  0x0800              138.212.189.209  jp       "ASAHI KASEI CORPORATION"      4541     138.212.18.252   jp       "Asahi Kasei Networks Corpor"  53       17       1           1            36           98            36        36        36          0           0       0         0             0             0          0         0             -0.4626866    0x00
B     2        0x0000000200004001  1022171701.830914  1022171701.830914  0.000000   1           3        eth:ipv4:udp  00:d0:02:6d:78:00  00:00:ab:91:1f:f8  0x0800              138.212.18.252   jp       "Asahi Kasei Networks Corpor"  53       138.212.189.209  jp       "ASAHI KASEI CORPORATION"      4541     17       1           1            98           36            98        98        98          0           0       0         0             0             0          0         0             0.4626866     0x00
A     4        0x0000000200004000  1022171701.967777  1022171701.967777  0.000000   1           3        eth:ipv4:udp  00:48:54:7a:23:90  00:d0:02:6d:78:00  0x0800              138.212.188.61   jp       "ASAHI KASEI CORPORATION"      2226     138.212.218.199  jp       "ASAHI KASEI CORPORATION"      53       17       1           0            34           0             34        34        34          0           0       0         0             0             0          0         1             1             0x00
A     5        0x0000000200004000  1022171701.967782  1022171701.967782  0.000000   1           3        eth:ipv4:udp  00:48:54:7a:23:90  00:d0:02:6d:78:00  0x0800              138.212.188.61   jp       "ASAHI KASEI CORPORATION"      2227     138.212.18.252   jp       "Asahi Kasei Networks Corpor"  53       17       1           1            34           125           34        34        34          0           0       0         0             0             0          0         0             -0.572327     0x00
B     5        0x0000000200004001  1022171702.434285  1022171702.434285  0.000000   1           3        eth:ipv4:udp  00:d0:02:6d:78:00  00:48:54:7a:23:90  0x0800              138.212.18.252   jp       "Asahi Kasei Networks Corpor"  53       138.212.188.61   jp       "ASAHI KASEI CORPORATION"      2227     17       1           1            125          34            125       125       125         0           0       0         0             0             0          0         0             0.572327      0x00
A     6        0x0000000200004000  1022171701.975597  1022171701.975597  0.000000   1           3        eth:ipv4:udp  00:01:02:b7:f1:cf  00:d0:02:6d:78:00  0x0800              138.212.187.120  jp       "ASAHI KASEI CORPORATION"      64366    138.212.18.252   jp       "Asahi Kasei Networks Corpor"  53       17       1           1            38           224           38        38        38          0           0       0         0             0             0          0         0             -0.7099237    0x00
B     6        0x0000000200004001  1022171702.073014  1022171702.073014  0.000000   1           3        eth:ipv4:udp  00:d0:02:6d:78:00  00:01:02:b7:f1:cf  0x0800              138.212.18.252   jp       "Asahi Kasei Networks Corpor"  53       138.212.187.120  jp       "ASAHI KASEI CORPORATION"      64366    17       1           1            224          38            224       224       224         0           0       0         0             0             0          0         0             0.7099237     0x00
A     7        0x0000000200004000  1022171702.299027  1022171702.299027  0.000000   1           3        eth:ipv4:udp  00:00:ab:91:1f:f8  00:d0:02:6d:78:00  0x0800              138.212.189.209  jp       "ASAHI KASEI CORPORATION"      4545     138.212.18.252   jp       "Asahi Kasei Networks Corpor"  53       17       1           1            34           147           34        34        34          0           0       0         0             0             0          0         0             -0.6243094    0x00
B     7        0x0000000200004001  1022171702.299046  1022171702.299046  0.000000   1           3        eth:ipv4:udp  00:d0:02:6d:78:00  00:00:ab:91:1f:f8  0x0800              138.212.18.252   jp       "Asahi Kasei Networks Corpor"  53       138.212.189.209  jp       "ASAHI KASEI CORPORATION"      4545     17       1           1            147          34            147       147       147         0           0       0         0             0             0          0         0             0.6243094     0x00
$ ./t2b2t -r ~/results/annoloc2_flows.bin -j -w annoloc2_flows.jsoncrap
$
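
A possible way to do the diff mentioned above; the file names are assumptions based on the earlier runs, so adapt them to your output prefix:

$ t2b2t -r ~/results/annoloc2_flows.bin -w ~/results/annoloc2_flows_frombin.txt
$ diff ~/results/annoloc2_flows.txt ~/results/annoloc2_flows_frombin.txt
$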

Plugins and configuration

The more plugins are loaded the slower t2 gets, as more SW does not produce more performance on a given HW. If a Softie tells you otherwise, he talks bullshit.

So think first about what you want to see in your flows or summary files, then load the plugins. Each plugin can be tailored for a given task, so code can be switched off, increasing performance.

E.g. if you do not need the window size engine or the checksum calculation in tcpFlags, switch them off.

If you are not interested in detailed IAT statistics, switch off BS_IAT_STATS and BS_STDDEV in basicStats.

If you are interested in detecting botnets with dnsDecode, set DNS_MODE to 1, as all the answer and aux records are not necessary. Set DNS_REQA to 1 to avoid storing the same request record multiple times.
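
A sketch of such tailoring with t2conf, using the constants named above (verify the exact names and defaults in the respective plugin documentation of your version):

$ t2conf basicStats -D BS_IAT_STATS=0 -D BS_STDDEV=0
$ t2conf dnsDecode -D DNS_MODE=1 -D DNS_REQA=1
$ t2build -R
...
$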

Run T2 with protoStat on the interface, look at which protocols are present, and unload the plugins which are not needed. Or take a sample pcap, run all plugins on it and unload the plugins that do not produce output in the end report.

Sometimes you want to detect whether there is unencrypted traffic in your corporate network; then use pwX and protoStat alone. If pwX produces output, you are in trouble, so fix that first before you load other plugins.
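
A minimal setup for that check could look like this; the plugin selection is an assumption, with basicFlow and txtSink added for the flow output:

$ t2build -e
...
$ t2build basicFlow protoStat pwX txtSink
...
$ st2 -i interface -w -
...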

Parallelization

The last resort is always parallelization, which is handled a bit differently than in other tools, as you have control over the whole process. So continue with the parallelization tutorial.