Categories: Unit Testing, Testing, Code Development | Posted by nurih on 8/7/2009 3:12 PM

It's been years now since unit testing frameworks and tools grabbed our attention, made their way into our favorite IDEs and sparked yet another wave of seemingly endless "my framework is better than yours" wars. Then there are the principled wars over whether TDD is better than test-after development, the excitement around automated testing tools, and of course mocks! So we have all the tools we need, right? Well, kind of. No.

I recently attended a talk by Llewellyn Falco and Woody Zuill, and they used a little library called Approval Tests (http://approvaltests.com/). I immediately fell in love with the concept and the price ($0). Llewellyn is one of the two developers of the product, which is hosted on http://sourceforge.net/projects/approvaltests/.

What does it do that years of frameworks don't?

For me, a major nuisance is that you have a piece of code, and that piece of code produces a result (object, state – call it what you will). That result is commonly named "actual" in Visual Studio unit tests. The developer must then assert against that result to ensure that the code ran properly. In theory, unit tests are supposed to be small and test only one thing. In reality, functions which return complex objects can rarely be verified with one assert. You typically find yourself asserting several properties or testing some behavior of the actual value. In a corporate scenario, the unit tests might morph into some degree of integration tests, the objects become more complex, and you end up with a slew of assert.this() and assert.that() calls.

What if you could just run the code, inspect the value returned, and – if it satisfies the requirements – set "that value" as the comparison baseline for all future runs? Wouldn't that be nice? Good news: Approval Tests does just that. So now, instead of writing a pile of asserts against your returned value, you just call Approvals.Approve() and the captured state is compared against your current runtime. Neat, huh?
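
To make that concrete, here is a rough before-and-after sketch. The result object and its properties are invented for illustration, and it assumes ToString() (or some serializer) produces a meaningful dump of the object; the point is that the pile of asserts collapses into a single Approvals.Approve() call.

// Before: the state of the result is checked one assert at a time
Assert.AreEqual("Pending", actual.Status);
Assert.AreEqual(3, actual.Lines.Count);
Assert.AreEqual(59.97m, actual.Total);
// ...and so on for every property you care about

// After: capture the whole state once and approve it
Approvals.Approve(actual.ToString());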

But wait, there's more! What if your requirements concern a Windows Form, or a control? How do you write a unit test that asserts against a form? The answer is simple and genius: capture the form as an image, take the image produced by your approved form (after you have carefully compared it to the form given to you by marketing – right?) and compare it to the image from each subsequent run. Approval Tests simply does a byte-by-byte comparison of the resultant picture of the form and tells you whether it matches what you approved.

Let's break down the steps for installation:

  1. Add a reference to the ApprovalTests.dll assembly.

Yeah, it's that simple.

OK, let's break down the steps for adding an approval to your unit test:

  1. Write your unit test (arrange, act – no asserts) as usual.
  2. Add a call to Approve()

Yeah, it's that simple.

How do you tell it that the value received is "the right one"? Upon first run, the Approve() call will always fail. Why? Because it has no stored approved value to compare the received actual against. When it fails, it will print out a message (to the console or the test report, depending on your unit test runner). That message contains the path of the file it received, complaining about the lack of an approved file. The value captured from that run (and any subsequent run) is stored in a file named something like "{your unit test file path}\{your method}.received.xyz". If you like the result of the run – the image matches what the form should look like, or the text value is what your object should contain, etc. – then you rename it to "{your unit test file path}\{your method}.approved.xyz". You should check the approved file into your source control. After all, its content constitutes the basis of the assertion in your unit test!

Consider the very simple unit test code:

[Test]
public void DateTime_Constructor_YMDSetDate()
{
    DateTime actual;
    actual = new DateTime(2038, 4, 1);
    Approvals.Approve(actual.ToString());
}

The function under test here is the constructor for System.DateTime which takes three parameters: year, month and day. Upon first run, the test will fail with a message resembling:

"TestCase 'MyProject.Tests.DateTimeTest.DateTime_Constructor_YMDSetDate'
failed: ApprovalTests.Core.Exceptions.ApprovalMissingException : Failed Approval: Approval File "C:\….\MyProject\DateTimeTest.DateTime_Constructor_YMDSetDate.approved.txt" Not Found."

That's because this file was never created – it's your first run. Using the standard approver, the approved file will actually be created now, but it will be empty. In that same directory you will find a file named {…}.received.txt. This is the capture from this run. You can open the received file and inspect the text to ensure that the value returned matches your expected value. If it does, simply rename that file to .approved.txt (or copy its content over) and run the test again. This time, the Approve() method should succeed. If at any point in the future the method under test returns a different value, the approval will fail. If at any point in the future the specs change, you will need to re-capture the correct behavior and save it.

How do you easily compare the values from the last run with the saved approved value? The approved file is a text file for string approvals and an image file for WinForms approvals. As it turns out, you can instruct Approval Tests to launch a diff program with the two files (received, approved) automatically upon failure, to just open the received file upon failure, or to silently fail the unit test. To control that behavior, you use a special attribute:

[TestFixture]
[UseReporter(typeof(DiffReporter))]
public class ObjectWriterTest
{

The UseReporterAttribute allows you to specify one of three included reporters:

  1. DiffReporter – which opens TortoiseDiff to diff the received and approved files if the comparison fails.
  2. OpenReceivedFileReporter – which launches the received file using the application registered to its extension on your system if the comparison fails.
  3. QuietReporter – which does not launch any program; it only fails the unit test if the comparison fails.

When you have a CI server and run unit tests as part of your builds, you probably want to use the quiet reporter. For interactive sessions, one of the first two will probably be more suitable.
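
For example, a fixture that runs on the build server could keep the same attribute and simply swap in the quiet reporter:

[TestFixture]
[UseReporter(typeof(QuietReporter))]   // no diff tool pops up during automated builds
public class ObjectWriterTest
{
    // ... same tests as before ...
}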

How are value comparisons performed? Under the standard built-in methods, a file is written out and compared byte by byte. If you wish to modify this or any other behavior, you can implement your own approver, file writer or reporter by implementing simple interfaces. I ended up adding a general-use object writer so that I can approve arbitrary objects. The effort was fairly painless and straightforward.

I did have to read the code to get the concept. If only my time machine worked: I could have read my own blog and saved myself 20 minutes. Yeah, right.

The project has some bells and whistles – plug-ins for a few IDEs – and there are versions for Java and Ruby. I have not reviewed those versions.

There you have it, folks – a shiny new tool under your belt. I can see this saving me hours of mindless typing of Assert.* calls so I can go home early. Yeah, right.

Categories: Code Development, Testing, Web | Posted by nurih on 1/16/2009 10:25 AM

Just came back from another great SoCal Code Camp. I had some valuable insights and discussions about TDD and the use of Pex. Thank you attendees!

While developing the presentation for Pex, I ran into a situation where PexAssume did not seem to work at all:

Consider the function

public List<short> MakeList(short baseNum, short count)
{
    List<short> result = new List<short>(count);
    for (short i = 1; i <= count; i++)
    {
        result.Add((short)(baseNum * i));
    }
    return result;
}

Pex correctly identifies a potential flaw where the multiplication (baseNum * i) would result in overflow.

Adding a filter

PexAssume.IsTrue(baseNum * count < short.MaxValue);

Seems like it would do the trick – but it doesn't.

Several rebuilds, solution cleans, head shakes and bug hunts later, I found the issue: the predicate provided to PexAssume.IsTrue(predicate) produced an overflow itself! So when Pex explored, evaluating the parameters I was trying to filter out tripped the very condition I was trying to avoid.

The fix was to rewrite the filter as:

PexAssume.IsTrue(short.MaxValue / count > baseNum);

Here, the math does not produce an overflow. Combined with PexAssume.IsTrue(count > 0) and PexAssume.IsTrue(baseNum > 0), my filters now work as (I) expected.
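
For completeness, here is roughly what the assembled parameterized test might look like. This is a sketch: the class hosting MakeList (called MyMath here) and the test method name are invented, but the PexAssume filters are the ones discussed above.

[PexMethod]
public void MakeList_ProducesCountItems(short baseNum, short count)
{
    // Filter the input space without tripping the very overflow we are guarding against.
    PexAssume.IsTrue(count > 0);
    PexAssume.IsTrue(baseNum > 0);
    PexAssume.IsTrue(short.MaxValue / count > baseNum);

    List<short> result = new MyMath().MakeList(baseNum, count);

    Assert.AreEqual(count, result.Count);
}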

 

The take-home is pretty obvious – ensure the predicate itself does not throw – but identifying it took a bit of head scratching.

Categories: Code Development, General, Testing | Posted by nurih on 11/8/2008 3:32 PM

If you ask the average developer what might be done to improve code, they would probably come up with "use design patterns" or "do code reviews" or even "write unit tests". While all these are valid and useful, it is rare to hear "measure it". It's odd, when you think about it, because most of us consider ourselves scientists of sorts. Some of us obtained a degree in computer science, and we view the coding practice as a deterministic endeavor. Why is it, then, that we don't measure our work using standard methodologies, objective tools and evidence?

For one, some of us are blissfully unaware that such methods exist. Indeed, the science of measuring code quality has lived in university halls more than it has been practiced in the "real" world. Six Sigma and CMMI are probably the more familiar endeavors prescribing some sort of measure-and-improve cycle into the coding practice, but both include precious little about measuring code itself. Rather, they focus on the results of the software endeavor, not on the "internal quality" of the code.

Another reason for low adoption of code quality measurement is a lack of tools. We have a wealth of guidance instruments, but fewer that focus on code quality. For example, FxCop and the addition of Code Analysis to VSTS have contributed hugely to code reviewing and uniformity in coding among teams. But let's face it – with so much guidance, it's all too easy to either dismiss the whole process as "too picky" or to focus too much on one aspect of coding style rather than the underlying runtime binary. Which is to say, it is quite possible that what would be considered "good style" may not yield a good runtime, and vice versa.

For a professional tool which enables you to view, understand, explore, analyze and improve your code, look no further than NDepend (www.ndepend.com). The tool is quite extensive and robust, and has matured in its presentation, exploration and integration capabilities, becoming a great value for those of us interested in digging deeper than the "my code seems to work" crowd.

The installation is fairly straightforward. You pretty much unpack the download and place your license file in the installation directory. Upon running the tool, you can choose to install integration with VS2005, VS2008 and Reflector (now a RedGate property, by the way).

Before using the tool for the first time, you can watch a few basic screencasts available from NDepend. The videos have no narration, so I found myself using the pause button when the text balloons flew by a bit quickly. But that's no big deal with a 3-5 minute video. Once you get comfortable with the basics, you can almost immediately reap the benefits. Through a very relevant set of canned queries and screens you can quickly get a feel for how your code measures up. A graphic "size-gram" presents methods, types, classes, namespaces or assemblies in varying sizes according to measures like lines of code (LOC – either the source itself or the resultant IL), cyclomatic complexity and other very useful views of code cohesiveness and complexity. This visual lets you quickly identify or drill into the "biggest offender".

Once you choose a target for exploration, the view in the assembly-method tree, the graphic size-gram and the dependency matrix all work in tandem: you choose an element in one, and the focal point shifts or drills down in the other two. There is also a pane which acts like a context menu, displaying the metric numbers for the selected method, field, assembly, etc. This lets you get the summary very quickly at any point of your exploration.

When you use the dependency matrix, methods or types and their dependents are easily correlated. A measure of code quality is how tightly different types are coupled to, or dependent on, each other. The theory is that if a dependency tree is too deep or too vast, a change in a type will ripple through a lot of code, whereas a shallow or narrow dependency tree will be affected by change far less dramatically. So it's a great thing to have a measure of the dependency relationships among your classes and assemblies. This measure tends to affect code most in the maintenance phase, but of course it is just as useful during the initial prototype/refactor cycles before release.

Another great feature is the dependency graph, which produces a visual map of dependencies among the analyzed assemblies. I have found it very useful when "cold reading" legacy code I was charged with maintaining. Using the visualization I could determine what's going on and understand how pieces of code work together much more quickly than by following along painstakingly with bookmarks and "follow the code" sessions in a debugger.

As for the metrics themselves, you will probably choose your own policy regarding the measures and their relevance. For one, the numbers are great as a relative comparison of various pieces of code. You may find that some dependencies are "very deep" – which in theory is "bad" – but that the indication points to a base class which you designed very well and which serves as the base for everything. For an extreme example, most of us would agree that a "deep dependency" on System.String is well justified and doesn't merit change. It is important to understand and digest the metrics in context and draw appropriate conclusions.

The tool is built on an underlying query technology called CQL. Once a project is analyzed, the database of findings is exposed through built-in queries. These queries can be modified, and new queries can be built to correlate the factors important to you. Quite honestly, I have not yet reached the point of needing customization: the existing presentations are very rich and useful out of the box. One instance where you might want to produce custom queries would be to exclude known "violations" by adding a where clause, thereby preventing code you have already analyzed and mitigated from appearing and skewing the view of the rest of your code.

In summary, I found NDepend very useful in examining both legacy and new code. It gave me insights beyond empirical, style-oriented rules. It is much more informative to me to have a complexity measure or IL LOC than a rule like "methods should not span more than two screens-full". Microsoft does include code metrics in VS 2010, and code analysis in VSTS or the testing editions. If that is not within your budget, you can have NDepend today and gain valuable insight right away. I would advise taking it slow in the beginning, because there is a slight learning curve to the tool's usage and navigation, and ascribing relevant weight to the findings takes time. But once you get the hang of it, it becomes indispensable.

Categories: Code Development, Testing | Posted by nurih on 10/23/2008 10:08 AM

You may be surprised to find that classes are serialized by WCF without any [DataContract] attribute attached to them. I certainly was!

When WCF came out, there was much fanfare about the new, improved, superior WCF serializer (justified, IMHO). The main policy sizzle was that unlike [Serializable], where you mark a POCO object and then opt specific fields out with [NonSerialized], WCF would use "opt-in": only properties specifically decorated would be serialized.

This serialization policy was clearly expressed, and the side effect of it all is that I suddenly had a bunch of projects failing some unit tests. Upon digging, I found that SP1 introduced a new kink: if you don't decorate your class at all (omit the [DataContract] attribute), the object becomes "fully" serializable by WCF. All public properties will be automatically included.

This may seem like a huge step back to those who relied on the opt-in behavior to hide certain classes from the serializer. It may also be a huge sigh of relief to those who cringed at the effort of (re)decorating all their message and value objects for WCF.

Note that with SP1, if you do decorate a class with [DataContract], the old rules apply – only properties marked with [DataMember] will be serialized. So to be selective, you can still decorate the class with [DataContract] and then decorate only the properties you want.
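
In code, the SP1 behavior looks roughly like this. The two classes are invented for illustration; the attributes are the standard ones from System.Runtime.Serialization.

using System.Runtime.Serialization;

// No [DataContract] at all: under SP1 this serializes anyway,
// and every public property is included automatically.
public class OpenBook
{
    public string Title { get; set; }
    public string PrivateNotes { get; set; }   // serialized too - surprise!
}

// Decorated with [DataContract]: the old opt-in rules apply.
[DataContract]
public class SelectiveBook
{
    [DataMember]
    public string Title { get; set; }          // serialized

    public string PrivateNotes { get; set; }   // no [DataMember], so not serialized
}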

I don't know what led to the exact decision, and the syntax nuance definitely walks a fine line. One could argue that classes without decoration now serialize without effort, while ones marked specifically for WCF still behave as previously advertised.

All in all, 2 hours of head scratching in disbelief, 1 hour to change some unit test expectations, not too awful. Long live refactoring tools in Visual Studio!

Categories: Code Development, Testing, Unit Testing | Posted by nurih on 8/23/2008 7:42 AM

It is generally considered a good thing to use unit tests these days. Often it is necessary to test a method which takes some complex type, so in the unit test one has to painstakingly manufacture such an object and pass it in.
Before doing so, you would (should!) ensure the complex type itself produces an identity – that is to say, if you create an instance of type MyClass and assign / construct it with proper values, you should "get back" what you gave it. This is especially true for objects that get serialized and de-serialized.

What I often do is use some helper code.
The first snippet tests an object for serialization over WCF, ensuring a "round trip" serialization/de-serialization works (a sketch of it appears below).
The second snippet uses reflection to ensure that the object you put through the mill came back with values identical to the initially assigned ones. This saves a LOT of Assert.AreEqual(expected.PropA, actual.PropA) and so on.
Since the object is actually a reference type, other equality checks (such as ReferenceEquals and the like) would not do at the root level.
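
The first snippet isn't reproduced in this post, but a minimal round-trip helper along those lines might look like the sketch below. The class and method names are mine; the DataContractSerializer usage is the standard WCF one.

using System.IO;
using System.Runtime.Serialization;

public static class WcfRoundTrip
{
    // Serialize with the WCF serializer and immediately deserialize,
    // returning the clone so the caller can compare it to the original.
    public static T RoundTrip<T>(T original)
    {
        DataContractSerializer serializer = new DataContractSerializer(typeof(T));
        using (MemoryStream stream = new MemoryStream())
        {
            serializer.WriteObject(stream, original);
            stream.Position = 0;
            return (T)serializer.ReadObject(stream);
        }
    }
}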

Structs or nested structs are handled via the ensureFieldsMatch() method.

Note that complex types may not be handled correctly – generics have not been addressed specifically here.

Future enhancements may include passing in an exclusion list of properties to skip, or an inclusion list of properties to match exclusively. I'm on the fence about these, because the whole idea was to say "object A matches B if every property and public field matches in value", and if one has to explicitly provide all the property names one could just as well Assert.AreEqual(a.x, b.x) them.

Updated 2008-11-07: Error in comparison fixed. (Thank you Rich for pointing it out!)

using System;
using System.Linq;
using System.Reflection;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace Nuri.Test.Helpers
{
    public static class Equality
    {
        /// <summary>
        /// Some properties are instance specific, and can be excluded for value matching (unlike ref equivalence)
        /// </summary>
        private static readonly string[] _ReservedProperties = { "SyncRoot" };

        public static void EnsureMatchByProperties(this object expected, object actual)
        {
            ensureNotNull(expected, actual);
            Type expectedType = expected.GetType();
            Type actualType = actual.GetType();
            Assert.AreEqual(expectedType, actualType);

            if (expectedType.IsArray)
            {
                Array expectedArray = expected as System.Array;
                Array actualArray = actual as System.Array;
                Console.WriteLine(">>>*** digging into array " + expectedType.Name);
                for (int i = 0; i < expectedArray.Length; i++)
                {
                    Console.WriteLine("   ---   ---   ---");
                    EnsureMatchByProperties(expectedArray.GetValue(i), actualArray.GetValue(i));
                }
                Console.WriteLine("<<<*** digging out from array " + expectedType.Name);
            }
            else
            {
                ensurePropertiesMatch(expected, actual, expectedType, actualType);
            }
        }

        public static void EnsureMatchByFields(this object expected, object actual, params string[] exclusionList)
        {
            ensureNotNull(expected, actual);
            Type expectedType = expected.GetType();
            Type actualType = actual.GetType();
            Assert.AreEqual(expectedType, actualType);

            if (expectedType.IsArray)
            {
                Array expectedArray = expected as System.Array;
                Array actualArray = actual as System.Array;
                Console.WriteLine(">>>*** digging into array " + expectedType.Name);
                for (int i = 0; i < expectedArray.Length; i++)
                {
                    Console.WriteLine("   ---   ---   ---");
                    expectedArray.GetValue(i).EnsureMatchByFields(actualArray.GetValue(i)); // recursion
                }
                Console.WriteLine("<<<*** digging out from array " + expectedType.Name);
            }
            else
            {
                ensureFieldsMatch(expected, actual, exclusionList);
            }
        }

        private static void ensurePropertiesMatch(object expected, object actual, Type expectedType, Type actualType)
        {
            BindingFlags propertyExtractionOptions = BindingFlags.Public
                                                     | BindingFlags.NonPublic
                                                     | BindingFlags.Instance
                                                     | BindingFlags.Static
                                                     | BindingFlags.GetProperty;
            foreach (PropertyInfo expectedProp in expectedType.GetProperties())
            {
                if (expectedProp.CanRead && !_ReservedProperties.Contains(expectedProp.Name))
                {
                    if (expectedProp.PropertyType.IsValueType || expectedProp.PropertyType == typeof(String))
                    {
                        object expectedValue = expectedType.InvokeMember(expectedProp.Name,
                                                                         propertyExtractionOptions,
                                                                         null, expected, null);
                        object actualValue = actualType.InvokeMember(expectedProp.Name,
                                                                     propertyExtractionOptions,
                                                                     null, actual, null);
                        if (expectedValue == null && actualValue == null)
                        {
                            // both null - ok
                            Console.WriteLine("{0}: null == null", expectedProp.Name);
                            continue;
                        }
                        if (expectedValue == null || actualValue == null)
                        {
                            // one null the other not. Failure
                            Assert.Fail(expectedProp.Name + ": Expected Or Actual is null! (but not both)");
                            break;
                        }
                        Console.Write("{0}: {1} == {2} ?", expectedProp.Name, expectedValue.ToString(),
                                      actualValue.ToString());
                        Assert.AreEqual(expectedValue, actualValue,
                                        "Value of property doesn't match in " + expectedProp.Name);
                        Console.WriteLine(" true.");
                    }
                    else if (expectedProp.PropertyType.IsClass)
                    {
                        object expectedObject = expectedType.InvokeMember(expectedProp.Name,
                                                                          propertyExtractionOptions,
                                                                          null, expected, null);
                        object actualObject = actualType.InvokeMember(expectedProp.Name,
                                                                      propertyExtractionOptions,
                                                                      null, actual, null);
                        if (expectedObject != null
                            && actualObject != null)
                        {
                            Console.WriteLine(">>>>>>>> digging into " + expectedProp.Name);
                            EnsureMatchByProperties(expectedObject, actualObject);
                            Console.WriteLine("<<<<<<<< back from dig of " + expectedProp.Name);
                        }
                    }
                }
            }
        }

        private static void ensureFieldsMatch(object expected, object actual, params string[] exclusionList)
        {
            Type expectedType = expected.GetType();
            Type actualType = actual.GetType();
            BindingFlags fieldExtractionOptions = BindingFlags.GetField |
                                                  BindingFlags.NonPublic |
                                                  BindingFlags.Public |
                                                  BindingFlags.Instance;
            foreach (FieldInfo expectedField in expectedType.GetFields(fieldExtractionOptions))
            {
                if (!exclusionList.Contains(expectedField.Name))
                {
                    if (expectedField.FieldType.IsValueType || expectedField.FieldType == typeof(String))
                    {
                        object expectedValue = expectedType.InvokeMember(expectedField.Name,
                                                                         fieldExtractionOptions,
                                                                         null, expected, null);
                        object actualValue = actualType.InvokeMember(expectedField.Name,
                                                                     fieldExtractionOptions,
                                                                     null, actual, null);
                        if (actualValue == null && expectedValue == null)
                        {
                            // both null - ok
                            Console.WriteLine("{0}: null == null", expectedField.Name);
                            continue;
                        }
                        if (expectedValue == null || actualValue == null)
                        {
                            // one null the other not. Failure
                            Assert.Fail(expectedField.Name + ": Expected Or Actual is null! (but not both)");
                            break;
                        }
                        Console.Write("{0}: {1} == {2} ?", expectedField.Name, expectedValue.ToString(), actualValue.ToString());
                        Assert.AreEqual(expectedValue, actualValue, "Value of field doesn't match in " + expectedField.Name);
                        Console.WriteLine(" true.");
                    }
                    else if (expectedField.FieldType.IsClass)
                    {
                        object expectedObject = expectedType.InvokeMember(expectedField.Name,
                                                                          BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.GetField,
                                                                          null, expected, null);
                        object actualObject = actualType.InvokeMember(expectedField.Name,
                                                                      BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.GetField,
                                                                      null, actual, null);
                        if (expectedObject != null
                            && actualObject != null)
                        {
                            Console.WriteLine(">>>>>>>> digging into " + expectedField.Name);
                            expectedObject.EnsureMatchByFields(actualObject);
                            Console.WriteLine("<<<<<<<< back from dig of " + expectedField.Name);
                        }
                    }
                }
            }
        }

        /// <summary>
        /// Ensures none of the values is null.
        /// </summary>
        /// <param name="parameters">The parameters to check for null.</param>
        private static void ensureNotNull(params object[] parameters)
        {
            foreach (object obj in parameters)
            {
                if (obj == null)
                {
                    throw new ArgumentNullException("at least one parameter is null");
                }
            }
        }
    }
}
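
A typical test using both pieces might look like this (MyMessage is a made-up data contract, and RoundTrip() is the sketch shown earlier in the post):

[TestMethod]
public void MyMessage_SurvivesWcfRoundTrip()
{
    MyMessage expected = new MyMessage { Id = 42, Name = "example" };

    MyMessage actual = WcfRoundTrip.RoundTrip(expected);

    // One call instead of an Assert.AreEqual per property.
    expected.EnsureMatchByProperties(actual);
}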