Instead of storing the entire notice in Notice::suppressing, store only the time until which the notice should be suppressed.
Functionality is otherwise unchanged, except that end_suppression can no longer be generated, since the full notice record is no longer available.
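A minimal sketch of the change (the exact table signature and attributes are illustrative, not copied from the patch):

```bro
# Before: each suppressed notice keeps the full Notice::Info record,
# which holds connection details, message strings, etc.
global suppressing: table[Notice::Type, string] of Notice::Info;

# After: each entry keeps only the time the suppression expires,
# shrinking per-entry storage from a full record to a single timestamp.
global suppressing: table[Notice::Type, string] of time;
```

Lookups still only need to check whether the current network time is past the stored expiration, so suppression behaves the same; only the data needed to re-emit the notice (and hence end_suppression) is gone.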
This greatly reduces memory usage on a Bro cluster that is raising many suppressed notices. That can happen when suppression is enabled but the suppression ID is too specific, so multiple notices are raised anyway.
The problem is exacerbated on cluster nodes running 10 workers, since the suppression information is duplicated across all workers (and then across all nodes).
For a stress test of a pcap that raises 38609 notices:

    Without the patch: 147255296 maximum resident set size
    With the patch:     49586176 maximum resident set size
On the real cluster, memory usage was growing at roughly 2 megabytes per second. Even with 24 GB of RAM, the nodes were OOMing after a few hours: Bro workers would crash, eventually resync their data, and then crash again.