+N Consulting, Inc.

Upon Reflection - C# yield statement enumeration helper

The new yield statement sounds very convenient. In the past, one had to write a significant amount of code to implement the IEnumerator interface and expose an enumerator. That included considerations of concurrency, and a loop variable bound to the instance or some other mechanism to maintain the current position during enumeration.

Fret no more, a new syntax is in town - the yield statement.

With the yield statement, an IEnumerator implementation folds down to a one-liner per yielded value:

public class MyCollection : IEnumerable
{
    public IEnumerator GetEnumerator()
    {
        foreach (string s in new string[] { "Larry", "Moe", "Curley" })
        {
            yield return s + " is a stooge";
        }
    }
}
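Consuming the class is just a plain foreach; the compiler-generated enumerator does the rest. A minimal sketch (the MyCollectionDemo wrapper is mine, added for illustration):

```csharp
using System;
using System.Collections;

public class MyCollection : IEnumerable
{
    public IEnumerator GetEnumerator()
    {
        foreach (string s in new string[] { "Larry", "Moe", "Curley" })
        {
            yield return s + " is a stooge";
        }
    }
}

public static class MyCollectionDemo
{
    // Pulls the first value out of the iterator; foreach drives MoveNext()/Current.
    public static string First()
    {
        foreach (string line in new MyCollection())
        {
            return line;
        }
        return null;
    }
}
```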

You can also provide an enumerator that returns a “hard coded” set of values:

public class VerySimple : IEnumerable
{
    public IEnumerator GetEnumerator()
    {
        yield return 1;
        yield return 7;
        yield return 11;
    }
}

So that sounds great! No pesky Reset(), MoveNext() etc., no private index to hold on to, and even options to do more fancy things, like exposing only some of your items to enumeration:

public class Person
{
    public string Name;
    public bool IsPublic;

    public Person(string name, bool isPublic)
    {
        this.Name = name;
        this.IsPublic = isPublic;
    }
}

public class People : IEnumerable
{
    private Person[] _Peeps = new Person[] {
        new Person("James Brown", true),
        new Person("John Lenon", true),
        new Person("Johnny Doe", false)
    };

    public IEnumerator GetEnumerator()
    {
        foreach (Person dude in _Peeps)
        {
            if (dude.IsPublic)
            {
                yield return dude.Name + " is a well known";
            }
        }
    }
}
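The same filtering idea carries over to the generic IEnumerable&lt;T&gt;, which spares the caller a cast. A sketch under my own names (FilterPublic and PeopleFilter are not from the original):

```csharp
using System;
using System.Collections.Generic;

public class Person
{
    public string Name;
    public bool IsPublic;
    public Person(string name, bool isPublic) { Name = name; IsPublic = isPublic; }
}

public static class PeopleFilter
{
    // Iterator method: yields only the names of people flagged public.
    public static IEnumerable<string> FilterPublic(IEnumerable<Person> peeps)
    {
        foreach (Person dude in peeps)
        {
            if (dude.IsPublic)
            {
                yield return dude.Name;
            }
        }
    }
}
```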

That was easy, and pretty useful too. You get an easy syntax for emitting each value, and exact control over which items are exposed, without implementing a whole subclass just for the enumeration.

Looking at this keyword and the simplicity of exposing an enumerator, one might be tempted to think there is some magic new framework for enumerating a collection, with hooks and generic loops or something. To find out, I looked at the IL generated for the MyCollection class we just created.

As expected, we find the class has a method named GetEnumerator(). Its implementation is seemingly simple: instantiate some cryptically named class and return it.

public IEnumerator GetEnumerator()
{
    <GetEnumerator>d__0 d__ = new <GetEnumerator>d__0(0);
    d__.<>4__this = this;
    return d__;
}

When you look at the implementation of the enumerator class itself, you get quite a few lines of code:

private sealed class <GetEnumerator>d__0 : IEnumerator<object>, IEnumerator, IDisposable
{
    // Fields
    private int <>1__state;
    private object <>2__current;
    public MyCollection <>4__this;
    public string[] <>7__wrap2;
    public int <>7__wrap3;
    public string <s>5__1;

    // Methods
    public <GetEnumerator>d__0(int <>1__state)
    {
        this.<>1__state = <>1__state;
    }

    private bool MoveNext()
    {
        try
        {
            switch (this.<>1__state)
            {
                case 0:
                    this.<>1__state = -1;
                    this.<>1__state = 1;
                    this.<>7__wrap2 = new string[] { "Larry", "Moe", "Curley" };
                    this.<>7__wrap3 = 0;
                    while (this.<>7__wrap3 < this.<>7__wrap2.Length)
                    {
                        this.<s>5__1 = this.<>7__wrap2[this.<>7__wrap3];
                        this.<>2__current = this.<s>5__1 + " is a stooge";
                        this.<>1__state = 2;
                        return true;
                    Label_0098:
                        this.<>1__state = 1;
                        this.<>7__wrap3++;
                    }
                    this.<>1__state = -1;
                    break;

                case 2:
                    goto Label_0098;
            }
            return false;
        }
        fault
        {
            this.Dispose();
        }
    }

    void IEnumerator.Reset()
    {
        throw new NotSupportedException();
    }

    void IDisposable.Dispose()
    {
        switch (this.<>1__state)
        {
            case 1:
            case 2:
                this.<>1__state = -1;
                break;
        }
    }

    // Properties
    object IEnumerator<object>.Current
    {
        get
        {
            return this.<>2__current;
        }
    }

    object IEnumerator.Current
    {
        get
        {
            return this.<>2__current;
        }
    }
}

So what is really going on here is that when you type yield return x; the compiler transforms your method into a stub, implants your loop logic in the MoveNext() method of a shiny new enumerator class, and provides the standard requisite members of the IEnumerator interface which support the foreach statement.

Is this good or bad? Certainly it serves well in many instances. For most of your daily uses for an enumerator this should work quite well. It’s strongly typed to the list item and uses your class’s values referenced directly.

What can be sub-optimal about this? Multithreaded applications need to implement locking at the class level. Some collections in .NET implement an internal version number such that if the collection changes during enumeration, an exception gets thrown on the enumerating thread. Not so here. If you want that behavior, you'd have to implement it yourself.

You should note that the loop itself and any of your conditions get transformed by the compiler. The transformation, I trust, is functionally equivalent. The transformation result will vary slightly based on the collection being iterated, or if you are using a static chain of yield statements. In the case of hard coded yielded values, no concurrency issues should arise, but that is fairly rare in my humble experience.

Besides that, I think it’s pretty cool. You get to write less code, and the compiler takes care of the code generation.

On a side note, when decompiling your code, don’t get too caught up in Reflector’s code rendering. For one, decompiling IL to your language of choice is not a symmetric operation. For that reason, and due to compiler optimizations and inlining, certain language constructs may come back decompiled as GOTOs and labels even though they were not coded that way in the original higher level language.

Rein in your web parameters

As web developers we are tempted to be general about our use of the Request object and submitted parameters. The temptation to access Request["someKey"] is high because it frees us from wondering whether MyPage.aspx was posted to using POST or GET, and we might also convince ourselves that it’s more flexible because it means that both POST and GET would work. Well, it would do something, that’s for sure. But do we always get consistent results? Consider:

string val = Context.Request["MyKey"];         

Where does val come from? The HttpRequest class scans the QueryString, Form, Cookies and ServerVariables collections of the request object, in that order. The first match gets you an answer. This also means that if a query string param named “SID” exists and a form field named “SID” also exists, you will get the query string value (GET). The implementation is essentially:

// implementation detail
public string this[string key]
{
get
{
string returnValue = this.QueryString[key];
if (returnValue != null)
{
return returnValue;
}
returnValue = this.Form[key];
if (returnValue != null)
{
return returnValue;
}
HttpCookie cookie1 = this.Cookies[key];
if (cookie1 != null)
{
return cookie1.Value;
}
returnValue = this.ServerVariables[key];
if (returnValue != null)
{
return returnValue;
}
return null;
}
}
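The precedence can be illustrated with plain NameValueCollection lookups that mimic the first half of the indexer above. This is a stand-in sketch, not the real HttpRequest (cookies and server variables are omitted, and the Lookup name is mine):

```csharp
using System;
using System.Collections.Specialized;

public static class RequestIndexerSketch
{
    // Mimics HttpRequest's indexer ordering: QueryString wins over Form
    // when both collections contain the key.
    public static string Lookup(NameValueCollection queryString, NameValueCollection form, string key)
    {
        string returnValue = queryString[key];
        if (returnValue != null)
        {
            return returnValue;
        }
        return form[key]; // Cookies and ServerVariables omitted for brevity
    }
}
```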

Note here that as a last resort, the key is looked up against the ServerVariables collection. That makes me a bit uneasy. Not that I ever wanted a form variable named “USER_AGENT”, but now that I know it scans for it, I’ll be careful about my variable naming. Moving on, consider: string val = Context.Request.Params["MyKey"];

Where does val come from? You wouldn’t necessarily know. Params is built on first request, and accessed throughout the lifetime of the HttpRequest object. Building it is done by creating a new collection, which includes all 4 request sources, as listed below. Since it is a Collections.Specialized.NameValueCollection type, if a key exists in more than one of the 4 sources, the value would be appended as a comma separated list. So if “PID” was both a query string GET parameter (say value 123) and a form POST variable (say value 456), then (Request.Params["PID"] == "123,456") == true;

private void FillInParamsCollection()
{
    // _params is the underlying collection supporting the Params property
    this._params.Add(this.QueryString);
    this._params.Add(this.Form);
    this._params.Add(this.Cookies);
    this._params.Add(this.ServerVariables);
}
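You can see the comma-joining behavior with NameValueCollection directly, since Params is built the same way. A sketch (MergedValue is my name for the helper):

```csharp
using System;
using System.Collections.Specialized;

public static class ParamsMergeSketch
{
    // Builds a merged collection the way FillInParamsCollection does;
    // a key present in both sources comes back as a comma separated list.
    public static string MergedValue(NameValueCollection queryString, NameValueCollection form, string key)
    {
        NameValueCollection merged = new NameValueCollection();
        merged.Add(queryString); // cookies and server variables omitted here
        merged.Add(form);
        return merged[key];
    }
}
```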

What can we conclude from these observations?

  1. If you called Request.Params, even once, you now created a new collection, allocating enough memory to hold ALL parameters from the various sources. If you know what the parameter’s source is, you would be more efficient using that source collection directly. If you or your buddy working on the same code tree called Request.Params, however, the hit is taken, and subsequent references would not re-allocate collections.

  2. Precedence of parameters applies when you call Request[key], but not when you call Request.Params[key] .

  3. The effect of Params collection sporting a comma separated list is a double whammy:

    1. Upon insert, (Request.Params[“myKey”] = “myvalue”) an arraylist is created and appended to (repetitive allocation for each value).

    2. Upon assignment from Request.Params:

      1. If the string [] GetValues() method is used, the ArrayList gets converted to a string [] just for you each time (it’s not cached or preserved as an internal variable).

      2. If the string Get() is used, private static string GetAsOneString(ArrayList list), a string builder object is created just for you to concatenate the values and return them to you.

Although both methods are coded as efficiently as possible, neither gets you a reference to an existing object; both have to create another object and copy data into it on the fly, each time. For that reason, it’s less efficient than if you knew exactly the source of your parameter and used that source collection directly.

This issue can manifest if your application combines things like Flash movies, remote static HTML forms submitting to your site (think affiliate marketing programs), and various hard coded links created against a specific form from menus, site maps and other corners of your application. When you work with forms designed in Visual Studio and use ASP.NET form controls, you generally don’t encounter this, because then you have access to strongly typed properties. More often than not, though, I have run across a hybrid of ASP.NET form controls and hand coded links from CMS parts of the app or other rogue text links, so it’s worth knowing about and paying attention.

To rein in the parameters and ensure no rogue parameters ever sneak in, I’d recommend creating a utility class that wraps the Request object and uses only a set of well known parameters encompassing all usable parameter names.

In conclusion, for both efficiency and unambiguity you would benefit from using precise parameter sources such as Request.QueryString["name"] and Request.Form["name"], and not relying on the “catchall” of Params or Request["name"] as shorthand. If you do find the need, inspect and ensure that parameter names do not collide so that you don’t end up with a peculiar value.

/// ParamMarshal is a sample utility which wraps the Request object.
/// It provides an easy way to eliminate ambiguity and define the
/// source of POST and GET parameters for large web projects so that
/// no parameter collision happens. Search your solution for any
/// reference to "Request[" or "Request." and replace it with a
/// call to ParamMarshal.GetValue()
namespace NH.Web.Utilities
{
    public class ParamMarshal
    {
        public static string GetValue(WellKnownParam param)
        {
            string result = string.Empty;
            switch (param)
            {
                // the first 3 are POST variables. No ambiguity.
                case WellKnownParam.FirstName:
                case WellKnownParam.LastName:
                case WellKnownParam.Password:
                    result = HttpContext.Current.Request.Form[param.ToString()];
                    break;
                // the following 2 are GET variables. No ambiguity either.
                case WellKnownParam.AffiliateID:
                case WellKnownParam.Keywords:
                    result = HttpContext.Current.Request.QueryString[param.ToString()];
                    break;
                default:
                    // this should never happen, because you should
                    // take care of every member of WellKnownParam.
                    // Specifically, do NOT put
                    // return HttpContext.Current.Request[param.ToString()]
                    throw new Exception(
                        "Programmer forgot to handle " + param.ToString());
            }
            return result;
        }

        public enum WellKnownParam
        {
            FirstName,
            LastName,
            Password,
            AffiliateID,
            Keywords
        }
    }
}

Unintentional Excessive Garbage Collection

We all know by now that .Net has a garbage collector, and have hopefully also learned about the two modes of GC and the proper use of Dispose() and object finalization. Great! So we took care of releasing resources as early as possible, and maybe took advantage of some object pooling or other memory reuse.

Now we go ahead and write a whole bunch of code, but when we fire up performance monitor we see that the % time in GC is high, or spikes a lot, or that the application has occasional “hangs” where the CPU spikes together with GC and wonder - what is happening there?

It’s possible that you are encountering excessive garbage generation. Worse yet, you may be doing it by trusting the very things you like about the .Net framework - those nice collections and list structures that come with it, or StringBuilder, which you use so diligently after hearing that strings are immutable and repetitive concatenation is bad. Here is the whole point of this article: please use the overloaded constructor that takes a capacity (int) parameter.

So instead of

// optimistic, often wrong         
StringBuilder sbNaive = new StringBuilder();
ArrayList myListNaive = new ArrayList();

Consider:

// realistic, better guess than waste         
StringBuilder sb = new StringBuilder(83);
ArrayList myList = new ArrayList(798);

If you don’t know the exact number of items, and the number of items can be bigger than 16, guess!

Why? Almost all collections in System.Collections.* use an algorithm whereby the underlying array of objects grows to double its previous size when the initial “vanilla” capacity is exceeded. Well, it doesn’t actually grow. A new array is created whose size is double the existing one. The object references from the original array are copied to the new, bigger array, and the old array is discarded. So each time the structure has to grow, a fixed length array is released to GC’s graces. GC kicks in every time some x number of bytes worth of abandoned memory accumulates. It would also kick in if the fragmentation of your heap generation is such that a new object won’t fit. Excessive generation of temporary useless objects induces GC more often. I call the discarded bytes useless because the sole reason for their existence was to hold references along the way; they are not eventually used and do not serve your application.

Allocation of memory on the managed heap is fast in human terms, but is a waste - GC inducing waste. This phenomenon can manifest if your application has high transaction rate and data objects keep popping into existence, loaded from databases, middle tiers into web pages, services or what have you. It also can manifest if you build many strings on the fly and are not careful about it.

Let’s examine System.Text.StringBuilder. It’s recommended as an alternative to repetitive concatenation of strings (as well it should be!). It happens to start with an underlying string of length 16. If you were to build up a 33 character string one character at a time, you would have allocated the underlying string 3 times, creating 2 objects for the GC to collect:

16 bytes, then 32 bytes, then 64 bytes. Of the total allocated, only 33 bytes were used. Waste = (16 + 32 + 64) - 33 = 79 bytes. The first two arrays (16 + 32 bytes) were total waste, and 31 bytes of the final 64 were allocated but never used, so memory consumption is higher than it should have been - and you wasted more than you over-allocated.

If we wanted a string for some cache key that is for example: “MID1234561|CID102|SID445|R789235612|WHO_KNOWS_WHAT|2006-08-08-1245”

We would now use StringBuilder to generate the string, because we know strings are immutable. We don’t set a capacity on the StringBuilder constructor and just use the .Append() method to get various properties of our object into the string. Total string length: 66. The allocations would be 16, 32, 64, and 128. That’s 3 wasteful GC objects, and 128 - 66 = 62 bytes that were allocated along the way and never used. Do this 10 times on a page that gets 10 million hits a day and you create a constant stream of 7kb per second, and you’d hit GC’s threshold no later than once every 35 seconds or so.
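The fix is a one-argument change: size the builder up front so no intermediate buffers are abandoned. A sketch using the cache key above (capacity 66 is simply the known final length):

```csharp
using System;
using System.Text;

public static class CapacityDemo
{
    public static string BuildKey()
    {
        // Pre-sized: avoids the intermediate 16/32/64/128 growth steps.
        StringBuilder sb = new StringBuilder(66);
        sb.Append("MID1234561|CID102|SID445|");
        sb.Append("R789235612|WHO_KNOWS_WHAT|");
        sb.Append("2006-08-08-1245");
        return sb.ToString();
    }
}
```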

One wants to be conservative with resources, so a concern is not to allocate too much memory unless we need it. The bigger concern here is grabbing memory unintentionally and making no use of it. GC should be minimized if you want to maintain performance, or else it will block your thread and freeze your app. If you start seeing memory get recycled on you because memory is scarce, then you want to tackle over allocation waste as well. But memory is cheap these days, so I say: Spend $100 and get another gigabyte of RAM.

In the beginning of this article, I recommended that if you don’t know how long your list or eventual string will be, you should take a stab and guess. Guessing and overestimating often beats the alternative.

Say you have a collection of 18 - 24 widgets. You guess you need 20. Since the underlying structure of an ArrayList and most collections is a plain Object[] (a straight C# array), you create ArrayList(20). If you end up adding only 18 items to your collection, you are out a total of 2 * sizeof(int). Not too bad, especially considering that the widget sizes are what would take most of the memory. And no need to re-allocate the underlying array, so no object was created for nothing. The widgets themselves, typically classes (reference types), would be allocated on the heap anyway, so no gain, but no pain either. You allocated once, and GC didn’t get a wasted, unused object. ArrayList allocates Object[16] by default, so if you didn’t guess and added 18 objects, you would have wasted one array of size 16 * sizeof(int), because the original one was not enough and its contents get copied over to the new one.

When high item count or bytes are at play, guessing gets better. The table below shows:

  1. Structure Length: a structure size (be it bytes or items in a list)

  2. GC Objects: The minimum number of abandoned objects to GC (that is allocation that got chucked to GC because a bigger structure was needed)

  3. GC Objects size: the cumulative sizeof(int) wasted by the GC objects

  4. Avg. Eventual Item count - a simple average of items that would have resulted in such a structure if no initial allocation was made.

Structure Length | GC Objects | GC Objects Size | Avg. Eventual Item Count
      16         |     0      |           0     |         0
      32         |     1      |          24     |        24
      64         |     2      |          64     |        48
     128         |     3      |         144     |        96
     256         |     4      |         304     |       192
     512         |     5      |         624     |       384
    1024         |     6      |        1264     |       768
    2048         |     7      |        2544     |      1536
    4096         |     8      |        5104     |      3072
    8192         |     9      |       10224     |      6144
   16384         |    10      |       20464     |     12288
   32768         |    11      |       40944     |     24576
   65536         |    12      |       81904     |     49152
  131072         |    13      |      163824     |     98304
  262144         |    14      |      327664     |    196608
  524288         |    15      |      655344     |    393216
 1048576         |    16      |     1310704     |    786432
 4194304         |    18      |     5242864     |   3145728

So consider a structure of about 700 items. You didn’t know exactly how many you would have, so you guessed 500. You populate it with 700: the 500-slot array is wasted, 1000 - 700 = 300 slots are over-allocated, and GC is called once for nothing. Guess 900 instead (still off by 200 from reality, but on the plus side this time): you populate it with 700, over-allocate 200, and there is no GC until your useful object is done.

Using MemoryStream? That’s just a byte array, and it grows by doubling - more or less. If you don’t specify the size, it allocates no bytes initially. Then, on the first write, it allocates enough bytes for that write. If you write more data but the total is still under 100 bytes, 100 bytes get allocated. Past that, it’s the doubling thing all over again.

Hashtable? It starts off with a size that’s a prime number larger than the capacity adjusted by the load factor, and then grows by finding the next prime number larger than (twice the current number of buckets) + 1. This can be even worse, since the prime series can have large “holes” between two consecutive primes, and you might end up allocating an arbitrarily larger array because of that. OK, it’s a bit more complex than this, and there is some fancy pointer manipulation under the covers for the copying and reference swapping, but no great news as far as predicting how much memory you really needed to begin with. Also, both growth of the underlying bucket list and insertion of colliding items cause a re-hash of keys, which is not GC but costs you time.

So by now hopefully you are convinced, or at least mildly interested in specifying a size for your structures when calling their constructors. Does this really affect you? Where do we see all these objects created and populated on the fly? A common approach to tier separation is that the DB or DAL layer reads data tables (horrendously wasteful objects, unless you really need them, but more on that in a different article) and then populates a strongly typed collection / class from the table by reading each row, creating the strongly typed object and shoving it into a Hashtable or dictionary or something of the sort. Tier separation is a good thing. Just remember to specify the capacity; you can get it from Rows.Count or its equivalent.

String concatenation is very common for dynamic content on websites. Many developers build on-the-fly strings made up of static data and some session, profile or transient data. This includes shopping carts, personalization aspects and others. If the eventual string is displayed as HTML or body text, I’d actually advise deferring any string concatenation and overriding the Render() method. There you can append to the actual output stream and waste no memory at all. The output stream under IIS has a buffer of about 30kb initially, if I recall, so render away! By creating a whole bunch of string properties and assigning them to literals, you waste memory on the creation of these strings, when the base class Render method would just use the eventual string anyway. The standing recommendation of using StringBuilder is very sound - all you need to do is remember to initialize it with a sane capacity.

In conclusion, beware and be aware. Not everyone encounters GC problems, and the benefits of RAD are numerous. If you do, however, have data intensive applications with high transaction rates, consider reducing the amount of objects GC has to handle by any means that make sense in your application.

Generics - delegate and custom event args

When you want to create custom events, often you need to pass an event argument, and as often you need only to pass in one parameter - your object. So what you used to do is:

public event MyCustomEventHandler MyCustomEvent;
public delegate void MyCustomEventHandler(MyCustomEventArg evt);

public class MyObject
{
    public string foo;
    public int count;
    public DateTime when;

    public MyObject()
    {
        foo = "hello world";
        count = 42;
        when = DateTime.Now;
    }

    public override string ToString()
    {
        return string.Format("{0}\t{1}\t{2}", foo, count, when);
    }
}

public class MyCustomEventArg : EventArgs
{
    private MyObject _value;

    public MyObject Data
    {
        get { return _value; }
        set { _value = value; }
    }

    public MyCustomEventArg(MyObject value)
    {
        _value = value;
    }
}

The not so pretty thing about it is that you had to mind-numbingly create a class which derives from EventArgs just to be able to pass your single event argument, MyObject.

Now come generics, and Microsoft has provided the generic type EventHandler&lt;TEventArgs&gt;. With it, you can now code:


public event EventHandler<MyCustomEventArg> MyCustomEvent;
// Not "necessary" anymore..
// public delegate void MyCustomEventHandler(MyCustomEventArg evt);

public class MyObject { /* ... */ }

public class MyCustomEventArg : EventArgs
{
    private MyObject _value;

    public MyObject Data
    {
        get { return _value; }
        set { _value = value; }
    }

    public MyCustomEventArg(MyObject value)
    {
        _value = value;
    }
}

Well, that saved me **one** line of code! I can ditch work early today :-)

Wouldn’t it be nice if I could cut out the whole MyCustomEventArg class? That would save some more lines of code, and prevent my head from hitting the desk and busting my skull on an upside-down thumb tack.

Well, that’s pretty easy to do: create a new class using generics. It supports a single object as a parameter which can be passed in at construction.

using System;

/// <summary>
/// Encapsulates a single object as a parameter for simple event argument passing.
/// <remarks>Now you can declare a simple event args derived class in one wrap</remarks>
/// <code>public delegate void MyCustomEventHandler(TypedEventArg&lt;MyObject&gt; theSingleObjectParameter)</code>
/// </summary>
public class TypedEventArg<T> : EventArgs
{
    private T _Value;

    public TypedEventArg(T value)
    {
        _Value = value;
    }

    public T Value
    {
        get { return _Value; }
        set { _Value = value; }
    }
}

Now the code can look like

public event MyCustomEventHandler MyCustomEvent;

public delegate void MyCustomEventHandler(TypedEventArg<MyObject> evt);

public class MyObject { /* ... */ }

//This whole thing now goes away..
//public class MyCustomEventArg : EventArgs
//{
// private MyObject _value;
// public MyObject Data { get; set; }
//
// public MyCustomEventArg(MyObject value)
// {
// _value = value;
// }
//}

And life is good. I do have to declare the delegate, though. If you are daring enough to type 2 nested angle brackets, you can go for the gold:

public event EventHandler<TypedEventArg<MyObject>> MyCustomEvent;
// public delegate void MyCustomEventHandler(TypedEventArg<MyObject> evt);


This lets you dispense with 1 extra line of code so for those of us typing with 1 or 2 fingers, life is better. Readability of nested generic references vs. the declaration of a delegate is a matter of taste mostly.

Taste would also guide you as to whether you like the declaration on the event consumer side better:

In case of a full delegate declaration:

Eventful eventSource = new Eventful();
eventSource.MyCustomEvent += new Eventful.SomethingHappenedEventHandler(OnSomethingHappenedEvent);

But if a generics EventHandler is used:

eventSource.MyCustomEvent += new EventHandler<TypedEventArg<MyObject>>(classWithEvent2_SomethingHappendedEvent);
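Putting it together, wiring and raising the generic event looks like this. A minimal sketch; the Eventful class and Raise method are my own names for illustration:

```csharp
using System;

public class TypedEventArg<T> : EventArgs
{
    private T _Value;
    public TypedEventArg(T value) { _Value = value; }
    public T Value { get { return _Value; } }
}

public class Eventful
{
    // No custom delegate declaration needed: EventHandler<T> covers it.
    public event EventHandler<TypedEventArg<string>> MyCustomEvent;

    public void Raise(string data)
    {
        EventHandler<TypedEventArg<string>> handler = MyCustomEvent;
        if (handler != null)
        {
            handler(this, new TypedEventArg<string>(data));
        }
    }
}
```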

In conclusion, by creating a simple generic type around EventArgs, I can save a few keystrokes and learn something while at it. Left as a future exercise: look at adding a constraint such that the argument type is serializable.

C# Generics - Interface declaration

I was challenged recently with a question about generics with constraints.

The claim was that it’s only a compiler flaw that allows you to declare an interface for a generic type with a constraint. Namely, that the syntax

public interface ISomething <T> where T: SomeClass

would pass compilation but would not be useful at runtime, because you can’t declare a variable

ISomething myVar = new ISomething<SomeClass>();

or something to that extent. I went home feeling a bit uneasy about the discussion, then coded it up the way I see it. While it is completely true that you can’t ‘new’ an interface, _using_ an interface that has a generic type constraint is completely possible and legal.

Here it is in all its g(l)ory.


using System;
using System.IO;
using System.Text;

/// Demo of generic interface declaration with constraint,
/// showing compilation and runtime feasibility.
namespace NH.Demo
{
    class Demo
    {
        static void Main(string[] args)
        {
            ITryInstance o1 = new GenericInstanceA();
            ITry<MyDerivedType> o2 = new GenericInstanceB();
            Console.WriteLine("Generic instance 1 " + o1.ProperT.SomeField);
            Console.WriteLine("Generic instance 2 " + o2.ProperT.SomeField);
        }
    }

    interface ITry<T> where T : MyBaseType
    {
        T ProperT { get; }
    }

    public class MyBaseType
    {
        public string SomeField;
    }

    public class MyDerivedType : MyBaseType
    {
        public MyDerivedType(string arg)
        {
            base.SomeField = arg;
        }
    }

    interface ITryInstance : ITry<MyDerivedType>
    {
    }

    /// <summary>
    /// This will fail: constraint violation.
    /// "The type 'string' must be convertible
    /// to 'NH.Demo.MyBaseType' in order to use it as
    /// parameter 'T' in the generic type or
    /// method 'NH.Demo.ITry<T>'"
    /// </summary>
    //interface IFail : ITry<string> { }

    public class GenericInstanceA : ITryInstance
    {
        MyDerivedType ITry<MyDerivedType>.ProperT
        {
            get { return new MyDerivedType("hi there!"); }
        }
    }

    public class GenericInstanceB : ITry<MyDerivedType>
    {
        MyDerivedType ITry<MyDerivedType>.ProperT
        {
            get { return new MyDerivedType("hi there! again"); }
        }
    }
}

Workflow Custom Activity - Designer re-hosting

In developing some workflow stuff, I ran across the need to create custom activities that “aggregate” other atomic activities into a larger activity. The idea is that:

  1. A programmer codes some atomic activities and exposes appropriate Dependency Properties.

  2. A non-programmer:

    1. Uses the compiled atomic activities to create processes.

    2. Saves these composite activities as a new custom activity.

    3. Can expose specific parameters for inner activities, but otherwise the activity is “locked” for the end user.

  3. An end user:

    1. Uses the composite activities to define a workflow and run it.

    2. Can bind runtime parameters to the composite activities.

  4. The runtime picks up the end user’s activity and runs it.

Gotchas:

  1. Designer re-hosting is a bit more complex than I would like it to be..

  2. Had to fool around with designer emitted dependency properties “Promote Bindable Properties” and ensure it would do the trick. This is the best way I found so far to expose inner properties of the atomic activities to the “surface” of the composite activities and allow the end user to assign values to them.

  3. Had to add a ToolboxItem attribute to the compilation (the re-hosting examples don’t do that, and since the non-programmer does NOT have access to the code-beside file, you have to add it within the designer compilation of the designer activity). The exact magic incantations are:

// add these lines to the workflow loader:
CodeAttributeDeclaration attrdecl = new CodeAttributeDeclaration(
    "System.ComponentModel.ToolboxItem",
    new CodeAttributeArgument(new CodePrimitiveExpression(true)));

ctd.CustomAttributes.Add(attrdecl);

CodeCommentStatement nurisComment = new CodeCommentStatement(
    new CodeComment("ToolboxItem decoration should do the trick.."));

ctd.Comments.Add(nurisComment);
// end added lines

Workflow custom Loop Activity

I was doing some workflow work and wanted to create a custom loop activity. The project needs it, and it’s a great way to learn what (not) to do.

The activity is a container that would loop through a list of discrete items (think foreach(string currentValue in ValueList)) and exposes the current loop variable via a bindable DependencyProperty.

The basics of the activity are to keep the container in “Executing” mode until all child activities are done. The tricky part is that the ActivityExecutionContext and ExecutionContextManager need to create a new context for each loop iteration. The hookup of the synchronization is done by calling Activity.RegisterForStatusChange(.., OnEvent) on each child executed; then, in OnEvent(), the activity is unregistered from further notification. I don’t love it, but it works.

Here goes:


using System;
using System.ComponentModel;
using System.ComponentModel.Design;
using System.Drawing;
using System.Workflow.ComponentModel;
using System.Workflow.ComponentModel.Compiler;
using System.Workflow.ComponentModel.Design;

namespace NH.Workflow
{
    [Designer(typeof(SequenceDesigner), typeof(IDesigner)),
    ToolboxItem(typeof(ActivityToolboxItem)),
    Description("Loop Activity - iterate over discrete list of items."),
    ActivityValidator(typeof(LoopActivityValidator))]
    public sealed class LoopActivity : CompositeActivity, IActivityEventListener<ActivityExecutionStatusChangedEventArgs>
    {
        private int currentIndex = 0;
        private string[] valueList = { };

        protected override ActivityExecutionStatus Cancel(ActivityExecutionContext executionContext)
        {
            if (base.EnabledActivities.Count == 0)
                return ActivityExecutionStatus.Closed;
            Activity firstChildActivity = base.EnabledActivities[0];
            ActivityExecutionContext firstChildContext = executionContext.ExecutionContextManager.GetExecutionContext(firstChildActivity);
            if (firstChildContext == null)
                return ActivityExecutionStatus.Closed;
            if (firstChildContext.Activity.ExecutionStatus == ActivityExecutionStatus.Executing)
                firstChildContext.CancelActivity(firstChildContext.Activity);
            return ActivityExecutionStatus.Canceling;
        }

        protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
        {
            if (this.PerformNextIteration(executionContext))
                return ActivityExecutionStatus.Executing;
            else
                return ActivityExecutionStatus.Closed;
        }

        void IActivityEventListener<ActivityExecutionStatusChangedEventArgs>.OnEvent(object sender, ActivityExecutionStatusChangedEventArgs statusChangeEvent)
        {
            ActivityExecutionContext originContext = sender as ActivityExecutionContext;
            statusChangeEvent.Activity.UnregisterForStatusChange(Activity.ClosedEvent, this);
            ActivityExecutionContextManager ctxManager = originContext.ExecutionContextManager;
            ctxManager.CompleteExecutionContext(ctxManager.GetExecutionContext(statusChangeEvent.Activity));
            if (!this.PerformNextIteration(originContext))
                originContext.CloseActivity();
        }

        private bool PerformNextIteration(ActivityExecutionContext context)
        {
            if (((base.ExecutionStatus == ActivityExecutionStatus.Canceling)
                || (base.ExecutionStatus == ActivityExecutionStatus.Faulting))
                || currentIndex == valueList.Length)
            {
                return false;
            }
            this.CurrentValue = valueList[currentIndex++];
            if (base.EnabledActivities.Count > 0)
            {
                ActivityExecutionContext firstChildContext = context.ExecutionContextManager.CreateExecutionContext(base.EnabledActivities[0]);
                firstChildContext.Activity.RegisterForStatusChange(Activity.ClosedEvent, this);
                firstChildContext.ExecuteActivity(firstChildContext.Activity);
            }
            return true;
        }

        public static DependencyProperty ValueListProperty = System.Workflow.ComponentModel.DependencyProperty.Register("ValueList", typeof(string[]), typeof(LoopActivity));
/// <summary>
/// The list of values to iterate over. Child activities would be executed for each value in this list, and would be able to access the current value via the CurrentValue property.
/// </summary>
[Description("The values to iterate over")]
[Category("Other")]
[Browsable(true)]
[DesignerSerializationVisibility(DesignerSerializationVisibility.Visible)]
public string[] ValueList
{
internal get
{
return valueList;
}
set
{
valueList = value;
}
}
public static DependencyProperty CurrentValueProperty = System.Workflow.ComponentModel.DependencyProperty.Register("CurrentValue", typeof(string), typeof(LoopActivity));
/// <summary>
/// The current value of the loop variable. This value changes each iteration and is used by child activities interested in the iteration value.
/// </summary>
[Description("The current loop value. Child activities should bind to this value if they are using the loop variable.")]
[Category("Other")]
[Browsable(true)]
[DesignerSerializationVisibility(DesignerSerializationVisibility.Visible)]
public string CurrentValue
{
get
{
return ((string)(base.GetValue(LoopActivity.CurrentValueProperty)));
}
private set
{
base.SetValue(LoopActivity.CurrentValueProperty, value);
}
}
}
/// <summary>
/// Validator for the loop activity.
/// Check that the list of discrete items to iterate over is valid.
/// </summary>
public class LoopActivityValidator : ActivityValidator
{
public override ValidationErrorCollection ValidateProperties(ValidationManager manager, object obj)
{
ValidationErrorCollection errors = new ValidationErrorCollection();
LoopActivity activityToValidate = obj as LoopActivity;
if (activityToValidate == null)
{
errors.Add(new ValidationError("object passed in is not a LoopActivity", 1));
return errors;
}
if (activityToValidate.Parent != null) // skip these checks at compile time, when the activity has no parent yet.
{
if (activityToValidate.ValueList == null)
errors.Add(new ValidationError("Value List not provided (it is null). Please provide a list of values to iterate over.", 2));
else if (activityToValidate.ValueList.Length == 0)
errors.Add(new ValidationError("Value List not provided (it is empty). Please provide a list of values to iterate over.", 3));
}
return errors;
}
}
}

FasterTemplate C# code

So I wrote the thing up. Compared with a fairly common alternative, a run of the more type-safe and render-friendly template took 260 ms to merge 10k iterations vs. 480 ms for the alternative. In addition, the alternative went through almost twice the allocations (meaning more GC pressure).
I’m kind of pleased that this is faster than the alternative, but I’m going to give some more thought to a better way. A 46% reduction is nice, but it’s just under twice as fast. Still, the benefits of a cached template and token/key checking beat the alternative.
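For the record, the arithmetic behind those claims, using the 260 ms / 480 ms measurements quoted above:

```csharp
using System;

// Sanity-check the quoted figures: 260 ms vs. 480 ms per 10k merges.
double alternativeMs = 480.0;
double fasterMs = 260.0;

double reduction = (alternativeMs - fasterMs) / alternativeMs; // ~0.458, i.e. the ~46% reduction
double speedup = alternativeMs / fasterMs;                     // ~1.85x, just under twice as fast

Console.WriteLine($"reduction: {reduction:P1}, speedup: {speedup:F2}x");
```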
Toying with the idea of a strongly typed template pretty much leads to a dead end. Sure, one could dynamically emit a class that exposes the token names as properties, but then how would you bind to these at compile time? Since the tokens are NOT known at compile time, the programmer using the class would have to populate a name/value pair collection in some fashion, so that the binding could be made at run time.


//------------------------------------
namespace Nuri.FasterTemplate
{
public interface IFasterTemplate
{
string ID { get; }
void Render(System.IO.TextWriter writer, IRuntimeValues runtimeValues);
IRuntimeValues GetRuntimeValuesPrototype();
}
}

//------------------------------------
using System.Collections;

namespace Nuri.FasterTemplate
{
public interface IRuntimeValues
{
bool AddValue(string tokenName, string value);
bool AddValues(IEnumerable values);
string[] GetAllowedValues();
IRuntimeValues GetCopy();
bool IsAlowed(string tokenName);
string this[string tokenName] { get; }
string ID { get; }
}
}

//------------------------------------
using System;
using System.Text;

namespace Nuri.FasterTemplate
{
public static class FasterTemplateFactory
{
public static IFasterTemplate GetFasterTemplate(TokenConfiguration config, ref string template)
{
string id = Guid.NewGuid().ToString();
TemplateParts templateParts = processTemplateParts(id, config, ref template);
IRuntimeValues runtimeValuesPrototype = processTokens(id, templateParts);
IFasterTemplate result = new FasterTemplate(id, templateParts, runtimeValuesPrototype);
return result;
}
private static IRuntimeValues processTokens(string id, TemplateParts templateParts)
{
RuntimeValues result = new RuntimeValues(id);
for (int i = 0; i < templateParts.Count; i++)
if (templateParts[i].IsToken)
result[templateParts[i].Value] = string.Empty;
return result;
}
private static TemplateParts processTemplateParts(string id, TokenConfiguration config, ref string template)
{
TemplateParts result = new TemplateParts();
char currentChar;
bool isToken = false;
StringBuilder sbText = new StringBuilder(template.Length / 2);
StringBuilder sbToken = new StringBuilder(64);
for (int idx = 0; idx < template.Length; idx++)
{
currentChar = template[idx];
if (currentChar == config.TokenStartMarker)
{
isToken = true;
result.Add(new TemplatePart(sbText.ToString(), false));
sbText.Length = 0;
}
else if (currentChar == config.TokenEndMarker)
{
isToken = false;
result.Add(new TemplatePart(sbToken.ToString(), true));
sbToken.Length = 0;
}
else if (isToken)
{
sbToken.Append(currentChar);
}
else
{
sbText.Append(currentChar);
}
}
if (isToken)
throw new ArgumentException("Template has unclosed token marker");
if (sbText.Length > 0)
result.Add(new TemplatePart(sbText.ToString(), false));
return result;
}
}
}

//------------------------------------
using System;

namespace Nuri.FasterTemplate
{
internal class FasterTemplate : IFasterTemplate
{
private string _ID;
private TemplateParts _TemplateParts;
private IRuntimeValues _RuntimeValuesPrototype;
internal FasterTemplate(string ID, TemplateParts templateParts, IRuntimeValues runtimeValuesPrototype)
{
_ID = ID;
_TemplateParts = templateParts;
_RuntimeValuesPrototype = runtimeValuesPrototype;
}
public IRuntimeValues GetRuntimeValuesPrototype()
{
return _RuntimeValuesPrototype.GetCopy();
}
public string ID
{
get { return _ID; }
}
public void Render(System.IO.TextWriter writer, IRuntimeValues runtimeValues)
{
if (runtimeValues.ID != this._ID)
throw new ArgumentException("The runtime values supplied are not compatible with this template! Ensure you got the runtime values object from the template with ID " + this._ID);
for (int i = 0, count = _TemplateParts.Count; i < count; i++)
{
TemplatePart part = _TemplateParts[i];
if (part.IsToken)
{
writer.Write(runtimeValues[part.Value]);
}
else
{
writer.Write(part.Value);
}
}
}
}
}

//------------------------------------
using System.Collections;
using System.Collections.Generic;

namespace Nuri.FasterTemplate
{
internal class RuntimeValues : Nuri.FasterTemplate.IRuntimeValues
{
private Dictionary<string,string> _AllowedValues;
internal string _ID;
internal RuntimeValues(string ID, int capacity)
{
_AllowedValues = new Dictionary<string, string>(capacity);
_ID = ID;
}
internal RuntimeValues(string ID) : this(ID, 0x10) { }
internal Dictionary<string,string> AllowedValues
{
get { return _AllowedValues; }
}
public string[] GetAllowedValues()
{
string[] result = new string[_AllowedValues.Count];
int i = 0;
foreach (string key in _AllowedValues.Keys)
{ result[i++] = key; }
return result;
}
public bool IsAlowed(string tokenName)
{
return _AllowedValues.ContainsKey(tokenName);
}
public bool AddValue(string tokenName, string value)
{
if (_AllowedValues.ContainsKey(tokenName))
{
_AllowedValues[tokenName] = value;
return true;
}
else
return false;
}

public bool AddValues(IEnumerable values)
{
bool result = true;
foreach (KeyValuePair<string, string> pair in values)
{
result = result && this.AddValue(pair.Key, pair.Value);
}
return result;
}
public string this[string tokenName]
{
get
{
return _AllowedValues[tokenName];
}
internal set
{ _AllowedValues[tokenName] = value; }
}
public IRuntimeValues GetCopy()
{
RuntimeValues result = new RuntimeValues(this._ID, _AllowedValues.Count);
foreach (string key in _AllowedValues.Keys)
result.AllowedValues[key] = string.Empty;

return result;
}
public string ID
{
get { return _ID; }
}
}
}

//------------------------------------
namespace Nuri.FasterTemplate
{
internal struct TemplatePart
{
public bool IsToken;
public string Value;
public TemplatePart(string value, bool isToken)
{
this.Value = value;
this.IsToken = isToken;
}
}
}

//------------------------------------
using System.Collections.Generic;

namespace Nuri.FasterTemplate
{
class TemplateParts : List<TemplatePart> { }
}

//------------------------------------
namespace Nuri.FasterTemplate
{
public struct TokenConfiguration
{
public readonly char TokenStartMarker;
public readonly char TokenEndMarker;
public TokenConfiguration(char tokenStartMarker, char tokenEndMarker)
{
TokenStartMarker = tokenStartMarker;
TokenEndMarker = tokenEndMarker;
}
}
}
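To tie the pieces together, here is a condensed, self-contained sketch of the same parse-once / render-many idea. The class and member names here are illustrative, not part of the library above; the full implementation adds the prototype/copy and token-checking machinery shown earlier.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;

// Condensed sketch: split the template into literal/token parts exactly once,
// so each Render() is a straight loop of writes and dictionary lookups.
class MiniTemplate
{
    // Key = the literal text or token name; Value = true when the part is a token.
    private readonly List<KeyValuePair<string, bool>> _parts = new List<KeyValuePair<string, bool>>();

    public MiniTemplate(string template, char startMarker, char endMarker)
    {
        StringBuilder sb = new StringBuilder();
        bool inToken = false;
        foreach (char c in template)
        {
            if (c == startMarker)
            {
                _parts.Add(new KeyValuePair<string, bool>(sb.ToString(), false));
                sb.Length = 0;
                inToken = true;
            }
            else if (c == endMarker)
            {
                _parts.Add(new KeyValuePair<string, bool>(sb.ToString(), true));
                sb.Length = 0;
                inToken = false;
            }
            else
            {
                sb.Append(c);
            }
        }
        if (inToken) throw new ArgumentException("Template has unclosed token marker");
        if (sb.Length > 0) _parts.Add(new KeyValuePair<string, bool>(sb.ToString(), false));
    }

    public void Render(TextWriter writer, IDictionary<string, string> values)
    {
        foreach (KeyValuePair<string, bool> part in _parts)
            writer.Write(part.Value ? values[part.Key] : part.Key);
    }
}
```

Usage mirrors the factory pattern above: parse once, then render against a name/value collection per merge, e.g. new MiniTemplate("Hello {name}!", '{', '}') rendered with { "name", "Moe" } writes "Hello Moe!".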