Unintentional Excessive Garbage Collection

We all know by now that .NET has a garbage collector, and have hopefully also learned about the two modes of GC and the proper use of Dispose() and object finalization. Great! So we take care to release resources as early as possible, and maybe take advantage of object pooling or some other memory-reuse scheme.

Now we go ahead and write a whole bunch of code, but when we fire up Performance Monitor we see that the % time in GC is high or spikes a lot, or that the application has occasional “hangs” where the CPU spikes together with GC activity, and we wonder - what is happening there?

It’s possible that you are encountering excessive garbage generation. Worse yet, you may be doing it by trusting the very things you like about the .NET Framework - those nice collections and list structures that come with it, or StringBuilder, which you use so diligently after hearing that strings are immutable and repetitive concatenation is bad. Here is the whole point of this article: please use the overloaded constructor that takes an initial capacity (int capacity) as a parameter.

So instead of

// optimistic, often wrong
StringBuilder sbNaive = new StringBuilder();
ArrayList myListNaive = new ArrayList();

write

// realistic, a better guess than waste
StringBuilder sb = new StringBuilder(83);
ArrayList myList = new ArrayList(798);

If you don’t know the exact number of items, and the number of items can be bigger than 16, guess!

Why? Almost all collections in System.Collections.* use an algorithm whereby the underlying array of objects grows to double its previous size when the initial “vanilla” capacity is exceeded. Well, it doesn’t actually grow. A new array is created whose size is double the existing one. The object references from the original array are copied to the new, bigger array, and the old array is discarded. So each time the structure has to grow, a fixed-length array is released to GC’s graces. GC kicks in whenever some threshold of abandoned bytes accumulates; it would also kick in if your heap generation is so fragmented that a new object won’t fit. Excessive generation of temporary, useless objects induces GC more often. I call the discarded bytes useless because the sole reason for their existence was to hold references along the way; they are never used afterward and do not serve your application.
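
The doubling is easy to observe by watching ArrayList.Capacity change as items are added. Here is a small sketch; the default starting capacity varies by runtime version, so the exact count of reallocations may differ on your machine:

```csharp
using System;
using System.Collections;

class GrowthDemo
{
    // Counts how many times the underlying array was replaced
    // (each replacement leaves one discarded array for the GC).
    public static int CountReallocations(int itemCount)
    {
        var list = new ArrayList();           // no capacity hint
        int lastCapacity = list.Capacity;
        int reallocations = 0;
        for (int i = 0; i < itemCount; i++)
        {
            list.Add(i);
            if (list.Capacity != lastCapacity)
            {
                reallocations++;              // a new, bigger array was allocated...
                lastCapacity = list.Capacity; // ...and the old one abandoned to GC
            }
        }
        return reallocations;
    }

    static void Main()
    {
        Console.WriteLine(CountReallocations(1000)); // several discarded arrays

        // A presized list never reallocates:
        var presized = new ArrayList(1000);
        int capBefore = presized.Capacity;
        for (int i = 0; i < 1000; i++) presized.Add(i);
        Console.WriteLine(presized.Capacity == capBefore); // True
    }
}
```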

Allocation of memory on the managed heap is fast in human terms, but it is waste - GC-inducing waste. This phenomenon can manifest if your application has a high transaction rate and data objects keep popping into existence - loaded from databases into middle tiers, web pages, services, or what have you. It can also manifest if you build many strings on the fly and are not careful about it.

Let’s examine System.Text.StringBuilder. It’s recommended as an alternative to repetitive concatenation of strings (as well it should be!). It happens to start with an underlying buffer of 16 characters. If you were to build a 33-character string one character at a time, you would have allocated the underlying buffer 3 times, creating 2 objects for the GC to collect:

16 characters, then 32, then 64. Of everything allocated, only 33 characters are used. Waste = (16 + 32 + 64) - 33 = 79 characters. The 16- and 32-character buffers (48 characters) were pure waste - discarded to GC - and 31 characters of the final buffer were allocated but never used. So memory consumption is higher than it should have been, but you wasted more than you over-allocated.
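
The arithmetic generalizes. A small sketch that computes the discarded and over-allocated character counts for any final length, assuming a 16-character start and doubling growth:

```csharp
using System;

class StringBuilderWaste
{
    // Returns (discarded, overAllocated) character counts for building a
    // string of finalLength one character at a time, starting at capacity 16.
    public static (int Discarded, int OverAllocated) Waste(int finalLength)
    {
        int capacity = 16;
        int discarded = 0;
        while (capacity < finalLength)
        {
            discarded += capacity;   // this buffer gets abandoned to the GC
            capacity *= 2;           // a buffer of double the size replaces it
        }
        return (discarded, capacity - finalLength);
    }

    static void Main()
    {
        var (discarded, over) = Waste(33);
        Console.WriteLine($"{discarded} discarded + {over} over-allocated = {discarded + over} wasted");
        // 48 discarded + 31 over-allocated = 79 wasted
    }
}
```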

Say we want a string for some cache key, for example: “MID1234561|CID102|SID445|R789235612|WHO_KNOWS_WHAT|2006-08-08-1245”

We would now use StringBuilder to generate the string, because we know strings are immutable. We don’t set a capacity on the StringBuilder constructor and just use the .Append() method to get various properties of our object into the string. Total string length: 66 characters. The allocations would be 16, 32, 64, and 128. That’s 3 wasteful GC objects (112 characters discarded) and 128 - 66 = 62 characters allocated at the end and never used. Do this 10 times on a page that gets 10 million hits a day (about 116 hits per second) and the discarded buffers alone produce roughly a quarter of a megabyte of garbage per second - a constant stream that trips GC’s allocation threshold over and over.
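
With a capacity hint there is exactly one buffer allocation and nothing for GC. A sketch - the field values here are made up to match the example key, and 80 is a deliberate slight overestimate in case fields vary in width:

```csharp
using System;
using System.Text;

class CacheKeyDemo
{
    // Builds the 66-character cache key with a single, presized buffer.
    public static StringBuilder BuildKey()
    {
        var sb = new StringBuilder(80);   // 66 needed; round up a little
        sb.Append("MID1234561").Append('|')
          .Append("CID102").Append('|')
          .Append("SID445").Append('|')
          .Append("R789235612").Append('|')
          .Append("WHO_KNOWS_WHAT").Append('|')
          .Append("2006-08-08-1245");
        return sb;
    }

    static void Main()
    {
        var sb = BuildKey();
        Console.WriteLine(sb.Length);   // 66
        Console.WriteLine(sb.Capacity); // still 80: no reallocation happened
    }
}
```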

One wants to be conservative with resources, so one concern is not to allocate more memory than we need. The bigger concern here, though, is grabbing memory unintentionally and making no use of it. GC should be minimized if you want to maintain performance, or it will block your threads and freeze your app. If you start seeing memory get recycled on you because memory is scarce, then you want to tackle over-allocation waste as well. But memory is cheap these days, so I say: spend $100 and get another gigabyte of RAM.

At the beginning of this article, I recommended that if you don’t know how long your list or eventual string will be, you should take a stab and guess. Guessing and overestimating usually beats the alternative.

Say you have a collection of 18 - 24 widgets. You guess you need 20. Since the underlying structure of an ArrayList and most collections is a plain Object[] (a straight C# array), you create ArrayList(20). If you end up adding only 18 items to your collection, you are out a total of 2 unused reference-sized slots. Not too bad, especially considering that the widgets themselves are what take most of the memory. And there was no need to reallocate the underlying array, so no object was created for nothing. The widgets, typically reference types, would be allocated on the heap anyway - no gain, no pain either. But you allocated once, and GC didn’t get a wasted, unused object. ArrayList allocates Object[16] by default, so if you didn’t guess and added 18 objects, you would have thrown away one array of 16 reference-sized slots, because the original was not big enough and its contents got copied over to the new one.

When high item counts or byte counts are at play, guessing pays off even more. The table below shows:

  1. Structure Length: the structure’s final size (be it bytes or items in a list)

  2. GC Objects: the minimum number of abandoned objects handed to GC (allocations that got chucked because a bigger structure was needed)

  3. GC Objects Size: the cumulative number of wasted slots - the slots in every array discarded to GC, plus the slots over-allocated in the final array beyond the average eventual item count

  4. Avg. Eventual Item Count: a simple average of the item counts that would have resulted in a structure of this size, given that no initial capacity was specified

Structure Length   GC Objects   GC Objects Size   Avg. Eventual Item Count
              16            0                 0                          0
              32            1                24                         24
              64            2                64                         48
             128            3               144                         96
             256            4               304                        192
             512            5               624                        384
            1024            6              1264                        768
            2048            7              2544                       1536
            4096            8              5104                       3072
            8192            9             10224                       6144
           16384           10             20464                      12288
           32768           11             40944                      24576
           65536           12             81904                      49152
          131072           13            163824                      98304
          262144           14            327664                     196608
          524288           15            655344                     393216
         1048576           16           1310704                     786432
         2097152           17           2621424                    1572864
         4194304           18           5242864                    3145728
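
The table’s arithmetic can be reproduced with a few lines of code. A sketch, assuming a 16-slot start, doubling growth, and an average eventual item count of three quarters of the final capacity (the midpoint of the range of item counts that lands on that capacity):

```csharp
using System;

class WasteTable
{
    // For a structure that ends at finalCapacity slots, returns the number of
    // arrays abandoned to GC, the cumulative wasted slots (discarded arrays
    // plus over-allocation past the average item count), and that average.
    public static (int GcObjects, long WastedSlots, long AvgItems) Row(long finalCapacity)
    {
        int gcObjects = 0;
        long discarded = 0;
        for (long cap = 16; cap < finalCapacity; cap *= 2)
        {
            gcObjects++;        // one more abandoned array
            discarded += cap;   // its slots are pure waste
        }
        if (gcObjects == 0) return (0, 0, 0);  // never grew: nothing wasted
        long avg = finalCapacity * 3 / 4;      // midpoint of (half, full]
        return (gcObjects, discarded + (finalCapacity - avg), avg);
    }

    static void Main()
    {
        for (long len = 16; len <= 4194304; len *= 2)
        {
            var (objects, wasted, avg) = Row(len);
            Console.WriteLine($"{len,10} {objects,3} {wasted,10} {avg,10}");
        }
    }
}
```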

So consider a structure of about 700 items. You didn’t know exactly how many you would have, so you guessed 500. You populate it with 700: the 500-slot array is discarded (500 slots wasted), the structure grows to 1000, and 1000 - 700 = 300 slots are over-allocated. GC gets called once for nothing. Guess 900 instead (still off by 200 from reality, but to the plus side this time): you populate it with 700, over-allocate 200 slots, and trigger no reallocation at all until your useful object is done.

Using MemoryStream? That’s just a byte array, and it too grows by doubling - more or less. If you don’t specify the size, it starts with an empty buffer. The first write allocates enough bytes for that write, subject to a small minimum (256 bytes in the implementations I’ve looked at). Past that, it’s the double-the-size thing all over again.
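
Same cure as before: pass the expected byte count to the constructor. A sketch - the 4096 here is an assumed prediction standing in for whatever your serializer or protocol lets you estimate:

```csharp
using System;
using System.IO;

class MemoryStreamDemo
{
    // Presizes the stream, writes some data, and reports the final capacity.
    public static int CapacityAfterWrite(int expectedBytes, int bytesWritten)
    {
        using var ms = new MemoryStream(expectedBytes); // one buffer up front
        byte[] payload = new byte[bytesWritten];
        ms.Write(payload, 0, payload.Length);          // fits: no doubling occurs
        return ms.Capacity;
    }

    static void Main()
    {
        Console.WriteLine(CapacityAfterWrite(4096, 3000)); // 4096: never reallocated
    }
}
```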

Hashtable? It starts off with a bucket count that’s a prime number derived from the requested capacity and the load factor, and then grows by finding the next prime number larger than twice the current number of buckets + 1. This is even worse, since the prime series can have large “holes” between two consecutive primes, so it might end up allocating an arbitrarily larger array because of that. OK, it’s a bit more complex than this, and there is some fancy pointer manipulation under the covers for the copying and reference swapping, but no great news as far as predicting how much memory you really need to begin with. Also, both growth of the underlying bucket list and insertion of colliding items cause a re-hash of keys, which is not GC work but costs you time.
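
Hashtable’s constructor also takes a capacity (and optionally a load factor). Sized up front, the repeated grow-and-rehash cycles are avoided. A sketch:

```csharp
using System;
using System.Collections;

class HashtableDemo
{
    // Expecting a known number of entries: say so, and the prime-sized
    // bucket array is allocated once instead of grown repeatedly.
    public static Hashtable BuildTable(int expectedCount)
    {
        var table = new Hashtable(expectedCount);
        for (int i = 0; i < expectedCount; i++)
            table[i] = "value" + i;   // no growth, no re-hash along the way
        return table;
    }

    static void Main()
    {
        Console.WriteLine(BuildTable(700).Count); // 700
    }
}
```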

So by now, hopefully, you are convinced, or at least mildly interested in specifying a size for your structures when calling their constructors. Does this really affect you? Where do we see all these objects created and populated on the fly? A common approach to tier separation is that the DB or DAL layer reads data tables (horrendously wasteful objects unless you really need them, but more on that in a different article) and then populates a strongly typed collection from the table by reading each row, creating the strongly typed object, and shoving it into a Hashtable, dictionary, or something of the sort. Tier separation is a good thing. Just remember to specify the capacity. You can get it from Rows.Count or its equivalent.
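
In that DAL pattern the row count is sitting right there before the loop starts, so the capacity costs nothing to supply. A sketch - `LoadWidgetRows` and `Widget` are hypothetical stand-ins for your data layer and domain type:

```csharp
using System;
using System.Collections;

class Widget
{
    public int Id;
    public Widget(int id) { Id = id; }
}

class DalDemo
{
    // Hypothetical stand-in for rows read from a database table.
    static int[] LoadWidgetRows() => new int[] { 1, 2, 3, 4, 5 };

    public static Hashtable LoadWidgets()
    {
        int[] rows = LoadWidgetRows();
        // The count is known before the loop: use it as the capacity,
        // the same way you would use DataTable.Rows.Count.
        var widgets = new Hashtable(rows.Length);
        foreach (int id in rows)
            widgets[id] = new Widget(id);
        return widgets;
    }

    static void Main()
    {
        Console.WriteLine(LoadWidgets().Count); // 5
    }
}
```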

String concatenation is very common for dynamic content on websites. Many developers build strings on the fly out of static data and some session, profile, or transient data. This includes shopping carts, personalization, and the like. If the eventual string is displayed as HTML or body text, I’d actually advise you to defer any string concatenation and override the Render() method. There you can append to the actual output stream and waste no memory at all. The output stream under IIS has an initial buffer of about 30 KB, if I recall correctly, so render away! By creating a whole bunch of string properties and assigning literals to them, you waste memory creating those strings when the base class Render method would just have used the eventual output. The standing recommendation to use StringBuilder is very sound - all you need to do is remember to initialize it with a sane capacity.
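
The render-time idea boils down to writing fragments straight into the output writer instead of gluing them into intermediate strings first. A sketch using a plain TextWriter; in ASP.NET the writer would be the HtmlTextWriter handed to your Render override, and the method and markup here are made-up examples:

```csharp
using System;
using System.IO;

class RenderDemo
{
    // Writes fragments directly to the output; no intermediate
    // concatenated string is ever allocated.
    public static void RenderGreeting(TextWriter output, string userName)
    {
        output.Write("<p>Hello, ");
        output.Write(userName);
        output.Write("!</p>");
    }

    static void Main()
    {
        // StringWriter stands in for the response stream here.
        var buffer = new StringWriter();
        RenderGreeting(buffer, "Ada");
        Console.WriteLine(buffer.ToString()); // <p>Hello, Ada!</p>
    }
}
```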

In conclusion: beware, and be aware. Not everyone encounters GC problems, and the benefits of RAD are numerous. If you do, however, have data-intensive applications with high transaction rates, consider reducing the number of objects GC has to handle by any means that makes sense in your application.