Save The Date: time_t 1234567890

You know you are a geek when things like this seem of any importance.

It may comfort you to know that there are others who are excited about the buzz. It may disturb you. Make of it what you will:

On Friday, February 13th, 2009 at 23:31:30 UTC, the time_t value will be 1234567890.

time_t measures seconds elapsed since midnight UTC on January 1, 1970 (see Unix Epoch, POSIX time, etc.)
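If you want to sanity-check the date yourself, here is a minimal C# sketch (assuming any plain .NET console app) that does the arithmetic:

using System;

class EpochCheck
{
    static void Main()
    {
        // time_t counts seconds since the Unix epoch: 1970-01-01 00:00:00 UTC.
        DateTime epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);

        // Add the magic number of seconds to land on the moment in question.
        DateTime moment = epoch.AddSeconds(1234567890);

        Console.WriteLine(moment.ToString("u")); // 2009-02-13 23:31:30Z
    }
}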

You might celebrate at the UTC moment - those of you not stuck in traffic in New York or at work in California. Or just do it according to your own time zone: raise a toast, save some doughnuts from doughnut Friday, look at the sky, or triple-click an icon or something.

Here’s a little JavaScript snippet to sneak onto your website, showing the current time and the countdown:

<!-- Requires an element with id="_ShowTime" somewhere on the page, e.g. <span id="_ShowTime"></span> -->
<script type="text/javascript">
function onTick()
{
    // Current Unix time: milliseconds since the epoch, truncated to seconds.
    var t = Math.floor((new Date()).getTime() / 1000);
    var remaining = 1234567890 - t;
    var display = "Now: " + t + ": " + remaining + " sec. to 1234567890.";
    document.getElementById("_ShowTime").innerHTML = display;
}
setInterval(onTick, 1000);
</script>

Timing is everything

You know you have a problem when your app behaves badly.

You know you should do something about it when a customer complains.

You know you are late when your competitor has taken your customer away.

Waiting for Pex to release

After seeing Pex in action at PDC 2008, I caught the fever.

Since then, I have given it a whirl on my own and was pretty impressed - so much so that I chose it as the topic for one of my SoCal Code Camp talks in January. I got some very good questions and concerns regarding the capabilities of Pex and its place in the world of software development, vis-a-vis TDD.

Update: More than five years later, Pex technology is now available as an "automatic unit testing" feature in some paid editions of Visual Studio 2015.

The Flash Myth

Frustrated with square-looking web pages, many web designers turn to Flash. In addition to freedom of graphic expression, Flash brings interaction and transition effects which are difficult or impossible to duplicate otherwise. Why, then, shouldn’t all websites be Flash-laden? Several reasons come to mind:

  1. Loading speed: Way too often, Flash movies load complex graphics, forcing the viewer to stare at a progress bar. For people who want to access information quickly, this is a huge turn-off, and a “skip intro” link is sometimes viewed as lame anyway.

  2. SEO: Flash-based sites are rarely digestible or well ranked by search engines. Use Flash and you lose top slots in natural search results. It’s that simple.

  3. Form over content: Flash designs - as beautiful and engaging as they may be - all too often sacrifice text and information quantity so as not to disturb the layout or overflow the containing graphics. Again, ranking will suffer (low authority), and viewers seeking comprehensive data will navigate away or become frustrated attempting to dig up more details by clicking every button and link in the movie.

  4. Cost: Although many web designers can produce Flash movies, the good ones are fewer and more expensive. The problem compounds with natural workplace attrition: the ability to modify the original design, extend the site or give it a “facelift” becomes constrained or expensive. All too often the original design does not lend itself well to rearrangement or extension.

  5. QA: Automated testing of Flash UI sites is difficult or impossible because the results may not be capturable by testing tools - and that’s in the good case, where the inputs are automatable. Unit testing for ActionScript is available through supporting frameworks these days, but teams that include it in continuous integration are still rare.

  6. Ease of use: Often, the design focus reflects the author’s perspective. As long as the viewer wants to view information the way the designer does, everything flows well. Once the viewer wants to navigate in a different way, all bets are off.

  7. Stickiness: By far one of the more compelling reasons for using a Flash-based site has been to engage the viewer and strengthen the brand. Delivery on that promise is mixed: although some interaction does increase the stickiness of the site, other designs suffer from slow loads and skimpy content, actually detracting from the overall perceived quality.

If, while reading the points above, you think to yourself “it seems that many of these points indicate poor design - not a technology limitation,” you are right, and I couldn’t agree with you more.

Yet a good number of sites out there suffer from these issues. Most of them would have been better served by plain HTML / DHTML / CSS + Ajax. In fact, my motivation for writing this came from a restaurant website I visited today. While scanning through listings and attempting to learn more, I was forced to watch tab after tab of progress-bar latency in order to see the menu, the location, the “about us” and so on. That site would have been cheaper for anybody to update, ranked higher on search engines, loaded faster and frustrated fewer visitors. The KISS principle does apply. The pictures were beautiful, the layout very appealing, the information well organized - but the experience was utterly annoying. Is that the best impression the business owner could make on a potential customer?

In another instance, I was searching for a specialty product and found it about 40 miles away. After having spent the time driving there and making a purchase, I discovered a location much closer to my house. What made the difference? The far store had a website with a full online catalog in HTML. The nearby store had a Flash-only site. Snazzy - but it missed a sale.

I have been to some sites that make great use of Flash, though, and it would be foolish to discount the technology for its potential pitfalls. On the contrary - its adoption and application should be studied and considered when new design projects come up. There are many sites containing laundry lists of good vs. bad design; this article is no such list. But when is Flash well suited?

  1. Where a custom tool is created for order processing: photo ordering sites, custom printing, fashion model virtualization, etc. In these cases Flash serves an interactive function and is typically not the whole site.

  2. Where little textual information is required: walkthroughs, product 360 views, and design-concept sites targeting highly interested viewers who don’t mind waiting for content and for whom alternative information sources are not available.

  3. For isolated rich interaction: Games, presentations and doodads that keep the visitor happy watching and engaged - as a single section of the site.

  4. Offline (DVD, CD) or downloadable media presentations, manuals etc.

  5. Anywhere you can do a better job of winning the mind of one visitor without losing another three along the way.

Bottom line - use it wisely. If you suspect that any of the problems mentioned above might harm your online business, consider alternatives. And yes, by all means - do use a Flash player to play video clips on the net. It is very well suited for that.

Pex Gotcha - watch your predicate expressions

Just came back from another great SoCal Code Camp. I had some valuable insights and discussions about TDD and the use of Pex. Thank you, attendees!

While developing the presentation for Pex, I ran into a situation where PexAssume did not seem to work at all:

Consider the function

public List<short> MakeList(short baseNum, short count)
{
    List<short> result = new List<short>(count);
    for (short i = 1; i <= count; i++)
    {
        result.Add((short)(baseNum * i));
    }
    return result;
}

Pex correctly identifies a potential flaw where the multiplication (baseNum * i) would result in overflow.

Adding a filter

PexAssume.IsTrue(baseNum * count < short.MaxValue);

Seems like it would do the trick - but it doesn’t.

Several rebuilds, solution cleans, head shakes and bug hunts later, I found the issue: the predicate provided to PexAssume.IsTrue(predicate) produced an overflow itself! So when Pex explored with the very parameter values I was trying to filter out, evaluating the assumption tripped the exact condition I was trying to avoid.

The fix was to rewrite the filter as:

PexAssume.IsTrue(short.MaxValue / count > baseNum);

Here, the math does not produce an overflow. Combined with PexAssume.IsTrue(count > 0) and PexAssume.IsTrue(baseNum > 0), my filters now work as (I) expected. The take-home is pretty obvious - ensure the predicate itself does not throw - but identifying it took a bit of head scratching.
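For reference, here is a minimal sketch of how the parameterized test might look with the revised assumptions. The test name, test class and the ListMaker class holding MakeList are illustrative inventions; only the Pex attributes and PexAssume calls come from the framework itself.

using System.Collections.Generic;
using Microsoft.Pex.Framework;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[PexClass]
[TestClass]
public partial class MakeListTests
{
    [PexMethod]
    public void MakeListProducesOneItemPerCount(short baseNum, short count)
    {
        // Guard against division by zero and negative inputs first.
        PexAssume.IsTrue(count > 0);
        PexAssume.IsTrue(baseNum > 0);

        // Dividing keeps the predicate itself from overflowing.
        PexAssume.IsTrue(short.MaxValue / count > baseNum);

        // ListMaker is a stand-in for whatever class hosts MakeList.
        List<short> result = new ListMaker().MakeList(baseNum, count);
        Assert.AreEqual((int)count, result.Count);
    }
}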

Code better – measure your code with NDepend

If you ask the average developer what might be done to improve code, they would probably come up with “use design patterns” or “do code reviews” or even “write unit tests.” While all these are valid and useful, it is rare to hear “measure it.” That’s odd, when you think about it, because most of us consider ourselves scientists of sorts. Some of us obtained a degree in computer science, and we view the practice of coding as a deterministic endeavor. Why is it, then, that we don’t measure our work using standard methodologies, objective tools and evidence?

For one, some of us are blissfully unaware that such methods exist. Indeed, the science of measuring code quality has lived in university halls more than it has been practiced in the “real” world. Six Sigma and CMMI are probably the more familiar endeavors prescribing some sort of measure-and-improve cycle into the coding practice, but both include precious little about measuring the code itself. Rather, they focus on the results of the software endeavor, not on the “internal quality” of the code.

Another reason for low adoption of code quality measurement is a lack of tools. We have a wealth of guidance instruments, but fewer that focus on code quality. For example, FxCop and the addition of Code Analysis to VSTS have contributed hugely to code reviewing and to uniformity in coding among teams. But let’s face it - with so much guidance, it’s all too easy either to dismiss the whole process as “too picky” or to focus too much on one aspect of coding style rather than on the underlying runtime binary. That is to say, it is very possible that what would be considered “good style” may not yield good runtime behavior, and vice versa.

For a professional tool which enables you to view, understand, explore, analyze and improve your code, look no further than NDepend (www.ndepend.com). The tool is quite extensive and robust, and has matured in its presentation, exploration and integration capabilities, becoming a great value for those of us interested in digging deeper than the “my code seems to work” crowd.

The installation is fairly straightforward. You pretty much unpack the download and place your license file in the installation directory. Upon running the tool, you can choose to install integration with VS2005, VS2008 and Reflector (now a Red Gate property, by the way).

Before using the tool for the first time, you can watch a few basic screencasts available from NDepend. The videos have no narration, so I found myself using the pause button when the text balloons flew by a bit quickly - but that’s no big deal with a 3-5 minute video. Once you get comfortable with the basics, you can almost immediately reap the benefits. Through a very relevant set of canned queries and screens, you can quickly get a feel for how your code measures up. A graphic “size gram” presents methods, types, classes, namespaces or assemblies in varying sizes according to measures like lines of code (LOC - either of the source itself or the resultant IL), cyclomatic complexity and other very useful views of code cohesiveness and complexity. This visual lets you quickly identify or drill into the “biggest offender.”

Once you choose a target for exploration, the view in the assembly-method tree, the graphic size gram and the dependency matrix all work in tandem: you choose an element in one, and the focal point shifts or drills down in the other two. There is also a pane, acting much like a context menu, which displays the metric numbers for the selected method, field, assembly, etc. This lets you get the summary very quickly at any point in your exploration.

When you use the dependency matrix, methods or types and their dependents are easily correlated. One measure of code quality is how tightly different types are coupled to, or dependent on, each other. The theory is that if a dependency tree is too deep or too vast, a change in one type will ripple through a lot of code, whereas shallow or narrow dependencies are affected far less dramatically by change. So it’s a great thing to have a measure of the dependency relationships among your classes and assemblies. This measure tends to affect code most in the maintenance phase, but of course it is just as useful during the initial prototype/refactor cycles before release.

Another great feature is the dependency graph, which produces a visual map of dependencies among the analyzed assemblies. I have found it very useful when “cold reading” legacy code I was charged with maintaining. Using the visualization, I could determine what’s going on and understand how pieces of code work together much more quickly than by painstakingly following along with bookmarks and “follow the code” sessions in a debugger.

As for the metrics themselves, you will probably develop your own policy regarding the measures and their relevance. For one, the numbers are great for relative comparison of various pieces of code. You may find that some dependencies are “very deep” - which in theory is “bad” - but that the indication points to a base class which you designed very well and which serves as the base for everything. For an extreme example, most of us will agree that the “deep dependency” on System.String is well justified and doesn’t merit change. It is important to understand and digest the metrics in context, and to draw appropriate conclusions.

The tool is built on an underlying query technology called CQL. Once a project is analyzed, the database of findings is exposed through built-in queries; these can be modified, and new queries can be written to correlate the factors important to you. Quite honestly, I have not yet reached the point of needing customization - the existing presentations are very rich and useful out of the box. One instance where you might want to produce custom queries would be to exclude known “violations” by adding a where clause, thereby preventing code you have already analyzed and mitigated from appearing and skewing the view of the rest of your code.
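As a rough sketch of what such a query looks like (the threshold is arbitrary, and I’m quoting the classic CQL form from memory - check the CQL reference for the exact metric and condition names in your version), a rule listing the most complex methods first might read:

SELECT METHODS WHERE CyclomaticComplexity > 20 ORDER BY CyclomaticComplexity DESC

Excluding a known, already-reviewed violation would then just mean adding another condition to that WHERE clause.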

In summary, I found NDepend very useful in examining both legacy and new code. It gave me insights beyond empirical, style-oriented rules; it is much more informative to have a complexity measure or an IL LOC count than a rule like “methods should not span more than two screenfuls.” Microsoft does include code metrics in VS 2010, and code analysis in the VSTS and testing editions. If that is not within your budget, you can have NDepend today and gain valuable insight right away. I would advise taking it slow in the beginning, because there is a slight learning curve to the tool’s usage and navigation, and ascribing relevant weight to the findings takes time. But once you get the hang of it, it becomes indispensable.

Code generator for Visual Studio - Denum

Announcing a newly added CodePlex project: the “Denum” code generator.

The Denum is a class / pattern for representing fairly static metadata from a database in an in-memory structure.

The structure behaves much like an Enum, but contains a static member for each data record, so that compile-time type checking helps keep your application logic - and the build itself - transparent and coherent against the database version.
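To give a feel for the pattern, here is a hand-written, hypothetical example of the kind of class such a generator emits - the OrderStatus name, members and values are invented for illustration and are not part of the Denum project itself:

public sealed class OrderStatus
{
    // One static, strongly typed member per metadata row in the database.
    public static readonly OrderStatus Pending = new OrderStatus(1, "Pending");
    public static readonly OrderStatus Shipped = new OrderStatus(2, "Shipped");
    public static readonly OrderStatus Closed  = new OrderStatus(3, "Closed");

    public int Id { get; private set; }
    public string Name { get; private set; }

    private OrderStatus(int id, string name)
    {
        Id = id;
        Name = name;
    }
}

Code that references OrderStatus.Shipped gets compile-time checking much like an enum member, while still carrying the database key and display name along with it.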