Doron Ben-David

Just a few of my beats and bites...

AJAX *IS* Secure

A few days ago, Mano (Emanuel) Cohen-Yashar, Sela's international lecturer, wrote a post in his blog under the title "AJAX is not secure".


Please note that this post was written as a technical reply, and does not in any way represent anything else. I'm subscribed to Mano's blog, and I always read it eagerly. I was the first to reply to his post (in Hebrew; my apologies, but it was published on an Israeli platform), and I appreciate Mano's opinion, even if I oppose it.


Claiming a technology is not safe is somewhat ignorant (and I must admit I used to do it back then...). Saying "AJAX is not safe" is like saying "Dogs are not safe". Adopt one, and it's nice at the beginning. Don't take it to the vet, and it'll get sick and contagious. Most important: if you don't treat it with "respect", it'll bite you.


It's not AJAX that is unsafe. Some of its implementations might be.


I've decided to dedicate this post to breaking some misconceptions regarding not only AJAX, but any other technology labeled "insecure". I'll also try to provide some basic rules of thumb for software development of any kind: Web/Win, Microsoft/Mac/Linux, C#/Java, etc.


I've quoted Mano's post, with my assertions inside.


AJAX is the new Hot technology concerning web application. It allows the client to do much more than it used to and to achieve a much better user experience.

So first, AJAX may be a hot technology, but it sure is not new (at least, its concept is not new, nor is its implementation...). Unless C# is new, AJAX (formerly known by other names, such as the XMLHTTP ActiveX object) is not new. It has been here for over a decade.


It's true that its real hotness was discovered in the past five years, since Gmail took its implementation one step further, but it's sure not new. I myself created dynamically loaded XML sites using simple JS calls on IE 6.0, back in 2002.


AJAX is based on XmlHttpRequests that the browser creates while the page is presented on the browser. The client does not know that under the cover so many requests are being sent. Ajax is a java script technology running mostly on the client side and on the server the following question arises: will the average AJAX-enabled web-application be able to tell the difference between a real and a faked XmlHttpRequest?
The answer is NO. AJAX is a client side technology and we all know that the client should not be trusted.

The answer is - YES, or at least - no different from regular HTTP requests (e.g. HTML/ASPX/ASP/PHP etc.).


Web developers have had the problem of form impersonation since forever. That's the reason technologies like CAPTCHA were adopted by the community: to block bots that replace the original form with an automatic POST/GET generator.


We always had (and always will have) problems authenticating the client with the server, due to the real-IP barrier (cookies, HTTP headers, sessions and other keys can always be impersonated by sniffers and other HTTP-altering mechanisms). Using any HTTP request-response, we can ensure ONLY that a specific IP made the request. In the case of NAT or other IP-sharing techniques, we should accept that packets may be intercepted by a sniffer and impersonated later on. That's why, for instance, we might prefer working with some basic encryption or authentication mechanisms (e.g. SSL, encrypted passwords and other POST strings, etc.).


Anyhow, I guess what I'm trying to say is that the same problem exists for both AJAX and regular synchronous HTTP requests. AJAX changed nothing, nor did it add any new complication to that method.


This makes AJAX a much more difficult technology to protect. We all know how difficult is to bring application security to traditional server application. For AJAX it is double the effort.

How does it make it more difficult to protect than regular ASP.NET? Why is it doubling the effort? Using, for instance, HTTPS as the infrastructure for the web services consumed by the AJAX calls is as safe as using HTTPS as the infrastructure for a regular ASP.NET finance application.


However, let's not forget why programmers, developers and software engineers still hold positions in enterprises. Our job is to predict those security weaknesses and to provide an answer. Since day one in school, programmers are told: never trust input from the user. Input should be double-checked. Once on the client side (and it does not matter whether the client is a web application or a Windows application), and once again on the server side, before starting to analyze the data. Speaking more strictly, you should check your data THREE times. The third time is before inserting the data into a database, by using stored procedures and strict validation on INSERTs and UPDATEs.
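
To make this concrete, here is a minimal C# sketch of the server-side part of that rule (the CommentService class, the Comments table and the parameter names are hypothetical): the server re-validates the input regardless of what the client already checked, and the data reaches the database only through typed parameters; the same idea applies when the INSERT is wrapped in a stored procedure.

using System;
using System.Data.SqlClient;

public static class CommentService
{
    public static void SaveComment(string connectionString, string userName, string body)
    {
        // Server-side validation, regardless of what the client already checked.
        if (string.IsNullOrEmpty(userName) || userName.Length > 50)
            throw new ArgumentException("Invalid user name.", "userName");
        if (string.IsNullOrEmpty(body) || body.Length > 1000)
            throw new ArgumentException("Invalid comment body.", "body");

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(
            "INSERT INTO Comments (UserName, Body) VALUES (@userName, @body)", connection))
        {
            // Third check: the data reaches the database only through typed parameters,
            // never by concatenating user input into the SQL text.
            command.Parameters.AddWithValue("@userName", userName);
            command.Parameters.AddWithValue("@body", body);
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}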


It does not make the AJAX more complex. It makes our life more complex. If you can't stand the heat, get out of the kitchen.


AJAX is advancing rapidly and new frameworks are introduced frequently but none can secure the AJAX application in a reasonable manner.

Choosing an infrastructure/framework is an important process in every project. However, it seems it is sometimes neglected.


When an organization performs its checks prior to choosing an infrastructure for its application (whether it's Java versus .NET, Windows vs. Linux, DevExpress vs. Infragistics, etc.), its technological experts should be familiar with the internal requirements (in terms of performance, scalability, security, etc.). It's true that a failure in the infrastructure may cause a lot of trouble later on. However, when choosing the infrastructure, you should ask yourself some WH questions and do the math:


WHO

  • Who is the vendor behind it? This might be the most important question one should ask himself. Sometimes, the open community is just not good enough. I prefer paying for a service, knowing I'll get the best support available (even if it costs the vendor a dedicated programmer for a given period of time). When I choose an open-source infrastructure, I first make sure that its license fits my needs, and that I have enough support to feel confident in case things go wrong.
  • Who in my organization is going to adopt it? It took me years to realize that people make software, and not vice versa. Therefore, if my developer prefers a certain infrastructure he feels comfortable with, I might give it a chance.

HOW

  • How am I going to use it? Is it going to be my application's base tier, or is it going to fill a very specific need? Am I going to use it 'as is', or will I need to customize it? Will I use 1% of its features, or am I going to fully utilize it?
  • How tightly coupled am I going to be with it? Will I be able to switch to another infrastructure later on, or is it going to be a catholic wedding? Will I be able to move from one platform to another?
  • How am I going to deal with missing features? Will I have to develop workarounds, or use another complementary infrastructure? Am I going to change the code of the infrastructure itself?

WHAT

  • What are the alternatives? Is there any other tool providing the same (80:20) functionality? Should I develop my own infrastructure on premises? Maybe I... just don't need it?
  • What support do I have in case something goes wrong? What is the obligation of the vendor? Do I have any sources of information to consult? Books? Professionals? Web communities?
  • What if it fails? Am I doomed? Is there any data loss? Can I quickly move to an alternative? Is it a show stopper?

WHEN

  • When was this product first announced? What is its track record?
  • When was the current version released? What bugs were fixed in this version, and what is the feedback from other users? What was the last major release? Maybe the current version is already obsolete?
  • When will the next version be released? Is the product still being developed? Any future?

WHY

  • Why, in the first place, do I need it? Software engineers often use a utility just because it's neat. Do I really need it? Maybe it's a solution for a problem I don't really have?

After answering those questions, and other questions you might ask yourself, you should be able to spot the "not". If you decide to go with an infrastructure after filling in this questionnaire, you should at least know what your chances are, and what to do in case things go wrong.


From a business and an architectural point of view unfortunately today we have to make a tradeoff. Security versus User Experience.

I sure DO NOT think that's true. I don't see why User Experience should come at the expense of Security. This statement is not only too generic (does the fact that the login screen in Windows 7 is much more colorful and supportive (including the password hint feature...) make it any less secure than Windows 2000's?), but it is also outrageous! Binding User Experience with Security is like binding beer with being overweight. The fact that you consume the first won't necessarily cause the other. Sure, you can exaggerate, but then again, you can always burn those calories at the gym.


Proper design can use AJAX in a sandbox. This means less sensitive areas of the application can enjoy AJAX but around the sensitive business and information an "AJAX Firewall" is built. For example there will be no AJAX enabled Web Service that exposes sensitive information.

A proper design should separate any user-related logic, any UI logic, and business logic from each other. Most important: if your web service knows it's being consumed by an AJAX call, dude, you're in deep shit. A web service should provide an output for a given input. The AJAX script should interpret that output for its needs.


I, for instance, test all of my web services using the regular NUnit WinForms interface. My AJAX code calls those services later on... It goes without saying that there is no difference between the input from the AJAX and the input from my testers.
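
For illustration, a minimal sketch of such a test, assuming a hypothetical OrdersService class sitting behind the web service (the names and values here are made up):

using System;
using NUnit.Framework;

// Stand-in for the real class behind the web service; the names are hypothetical.
public class OrdersService
{
    public decimal GetOrderTotal(int orderId)
    {
        if (orderId == int.MinValue)
            throw new ArgumentException("Malformed order id.", "orderId");
        return orderId < 0 ? 0m : 42m; // dummy value for the sketch
    }
}

[TestFixture]
public class OrdersServiceTests
{
    [Test]
    public void GetOrderTotal_ReturnsZero_ForUnknownOrder()
    {
        // The class is exercised directly, exactly as the AJAX proxy would call it.
        Assert.AreEqual(0m, new OrdersService().GetOrderTotal(-1));
    }

    [Test, ExpectedException(typeof(ArgumentException))]
    public void GetOrderTotal_Rejects_MalformedInput()
    {
        // Garbage input must be rejected on the server side, no matter who sent it.
        new OrdersService().GetOrderTotal(int.MinValue);
    }
}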


If you do need to implement some AJAX remember that AJAX enables new XSS capabilities and so server validation must be much more strict. Not only the body of the http packet must be validated but all of its headers.

I don't get it. What can be done with AJAX that could not be done with regular HTTP queries? The only difference is that with regular HTTP responses the browser checks the response code, while with AJAX you check the response code yourself. You should never assume a certain input. For instance, you should never assume that if you get a 4xx or a 5xx response from the server, the call failed.


It is possible that the web service actually executed its code, but failed somewhere along the way. That's why, when dealing with the web or any other technology, you should consider, while still in the design phase, the places where you should use a transactional mechanism. Those are the places where data is being altered. Define a variety of responses for your services. Errors should be followed by explanations. For instance: the given data already exists; the given token was already used; etc.
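
Something along these lines, for example (the type and member names are hypothetical); the point is that the caller gets an explicit status and an explanation instead of having to guess from the HTTP code alone:

// A minimal sketch of a service response that carries an explicit status and
// an explanation. The names here are mine, not part of any framework.
public enum ServiceStatus
{
    Success,
    DataAlreadyExists,
    TokenAlreadyUsed,
    ValidationFailed
}

public class ServiceResponse
{
    public ServiceStatus Status;
    public string Explanation;

    public static ServiceResponse Fail(ServiceStatus status, string explanation)
    {
        ServiceResponse response = new ServiceResponse();
        response.Status = status;
        response.Explanation = explanation;
        return response;
    }
}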


Working with sessions and tokens is important in regular browsing as well. I cannot count the number of bugs I've seen during my life which had to do with a user pressing the back button in his browser and re-submitting a certain form, or a user double-clicking the submit button... Those things should be taken into consideration. But once again, it has nothing to do with AJAX.
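
A minimal sketch of one way to handle it, assuming ASP.NET session state is available (the names here are mine): a one-time token is issued with the form and invalidated on its first use, so a back-button re-post or a double click cannot replay the submission.

using System;
using System.Web.SessionState;

public static class FormTokens
{
    public static string IssueFormToken(HttpSessionState session)
    {
        // Handed out with the form, typically rendered as a hidden field.
        string token = Guid.NewGuid().ToString("N");
        session["FormToken"] = token;
        return token;
    }

    public static bool TryConsumeFormToken(HttpSessionState session, string postedToken)
    {
        string expected = session["FormToken"] as string;
        if (expected == null || expected != postedToken)
            return false;            // unknown or already-used token
        session.Remove("FormToken"); // invalidate it so it cannot be used twice
        return true;
    }
}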


Never trust a third party. AJAX applications fetch information from various untrusted sources such as feeds, blogs, search results. If this content is never validated prior to being served to the end browser, it can lead to dangerous cross-site exploitation.

This is a bit ambiguous to me. AJAX, as a technology, won't allow cross-site scripting. Earlier versions of both Internet Explorer and Firefox had several exploits which allowed cross-domain/cross-site scripting. Today, with IE7/8 and all service packs, and FF3+, there is no way (that I'm aware of) to asynchronously call a service from another domain. That's why blogs, feeds, search results, etc. should be wrapped with server-side code (running locally) first, and cannot be consumed by the XmlHttp object from the browser.


Also, as said before, content should always be verified, whether incoming or outgoing, whether on the server side or on the client side. You should never trust your data, and you should never, ever trust someone else's data.


With AJAX, a lot of the logic is shifting to the client-side. This may expose the entire application to some serious threats.

That's true. It should be taken into consideration during the top-level design phase, and be treated both on the client side and the server side, as described earlier.


The urge for data integration from multiple parties and untrusted sources can increase the overall risk factor as well: XSS, XSRF, cross-domain issues and serialization on the client-side and insecure Web services, XML-RPC and REST access on the server-side.

It's indeed true that misuse of AJAX might cause new types of XSS. However, even the simplest HTML form element might cause XSS. Regarding insecure web services, RESTful interfaces, etc., those should be handled both in the server-side scripts, just like any other ASP/PHP/Perl script, and by the web server itself.
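
For the third-party content case, the minimum is to encode it on the server before serving it; a tiny sketch using HttpUtility.HtmlEncode (the wrapper name is hypothetical):

using System.Web;

public static class FeedSanitizer
{
    // Encode untrusted third-party content before it is handed to the browser,
    // so embedded <script> tags are rendered as text instead of being executed.
    public static string MakeSafeForHtml(string untrustedContent)
    {
        if (untrustedContent == null)
            return string.Empty;
        return HttpUtility.HtmlEncode(untrustedContent);
    }
}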


Never underestimate the need for good IT infrastructure. Security does not start nor end with secure source code that takes everything into consideration. Security is much wider: server policies, permissions, users and groups, RWX file permissions... all of those might affect the integrity of your software.


Some web servers will allow HTTP DELETE requests on files with public 666 permissions (or any **6/**7).


Conversely, Ajax can be used to build graceful applications with seamless data integration. However, one insecure call or information stream can backfire and end up opening up an exploitable security hole. These new technology vectors are promising and exciting to many, but even more interesting to attack, virus and worm writers. To stay secure, first answer the question is AJAX really needed? Then make sure it lives in a sandbox and third make sure your developers are paying attention to implementation details and taking security into consideration.




C++/CLI, an Alpha in the disguise of a 4th release.


I must say it took me a while to choose a title for this post; I was pondering whether "C++/CLI Sux" would express my feelings better.

It all began half a year ago, when I got the responsibility for a new algorithmic infrastructure project for our systems, which by definition must be compliant with ANSI C++ for HP-UX/Linux and other native systems, Win32/Visual C++ for performance-abundant systems, and of course CLS compliance for user applications built on the .NET environment.

The architecture I formulated was based on an ANSI solution with environmental compiler directives (e.g. extern for the legacy compilers, and __declspec for the VC compiler). I was planning to answer the CLS compliance demand with a proxy-like wrapper solution.


Choosing between static DLLImports and OO-trustworthy C++/CLI


Basically, I had two options for accessing my unmanaged environment from a C# application (that's after disqualifying any loosely-coupled solution due to certain deployment requirements).

  1. The first option was to provide a static DllImport-based solution, which would statically wrap all of my DLL entry points and would flatten my models; also, choosing this wrapping technique would require massive marshaling treatment and would deprive us of software-engineering decisions (see the sketch right after this list).
  2. The second option, which was my favorite, was to provide a wrapping solution based on the C++/CLI environment. By taking advantage of the unsafe calls which are prohibited in any other case, of the mscorlib members, and of the strength of the managed sandbox, I could maintain my engineering decisions and provide a fully functional object-oriented solution with a robust model which can be scaled up to support future growth.
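
Just to illustrate why the first option felt wrong, here is a minimal C# sketch of what such a flat DllImport wrapper looks like (the DLL name and entry points are hypothetical): every export becomes a static method, the object model is gone, and any structured data needs manual marshaling.

using System;
using System.Runtime.InteropServices;

// Hypothetical entry points exported by a native "algo.dll".
internal static class NativeAlgo
{
    [DllImport("algo.dll", CallingConvention = CallingConvention.Cdecl)]
    internal static extern IntPtr CreateSolver();

    [DllImport("algo.dll", CallingConvention = CallingConvention.Cdecl)]
    internal static extern double Solve(IntPtr solver, double[] input, int length);

    [DllImport("algo.dll", CallingConvention = CallingConvention.Cdecl)]
    internal static extern void DestroySolver(IntPtr solver);
}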

Oh boy.. How wrong was I...


So no wonder I decided to choose C++/CLI as my intermediate layer between pure .NET logic and not-so-pure unmanaged algorithms.

At an early stage, I thought I would supply a set of compiler directives for prefixing a class (e.g. 'public ref class' instead of 'class') and lists (IList or ICollection generics instead of STL's vector, queue, etc. templates), and would compile with the /clr:oldSyntax flag. However, the warning which claims that oldSyntax is obsolete, and the fact that I need to supply a package with support for 20 years, made me give up on that one. Using /clr with its new syntax would force me to provide more directives for pointers and references (managed ^ and % instead of unmanaged * and &) and other ugly stuff.

After a short trial and error, I eliminated the directives option and decided to build a wrapper project for the dynamic and static libraries. Basically, the code had become too ugly, and code-aware tools such as DocumentX and Enterprise Architect could not understand the directives and failed to explore my source.

I started by modeling a simple Proxy-Pattern solution (with some constraints due to the lack of interfaces in some of the environments I had to support).

All the public methods were reflected in an outer managed envelope, calling a private unmanaged member field (Similar to the Adapter-Pattern); And the tree was happy.

The next stage was to create a simple case study Mxxxxxx (Managedxxxxxx) project, and wrap one of the common entities in my model; And the tree was, yet, happy.

Now, after making this proof of concept, I've started implementing the hierarchical object graph for my project. Base classes with protected fields, public specific methods with the relevant castings, assimilation of .NET types (e.g. DateTime, TimeSpan instead of the unmanaged alternatives) in the signatures of the public methods; And the tree was happy, but then he pressed Ctrl+Shift+B...


Linker error LNK2022


This linker error is my favorite. Not only because Microsoft's fix for it was "To resolve this problem, contact Microsoft Product Support Services to obtain the hotfix", but because I suffered from it twice, and I still cannot tell exactly how I got rid of it.

In my first encounter with the LNK2022 error, I don't know what exactly crossed my mind. I thought I could override a pure virtual member, declared in an abstract class, from within my derived class.

After dealing with this linker error for almost a day and doing some irrational things, I managed to get over it by dropping the 'using namespace XXXXXXX' declarations at the top, and by adding 'XXXXXXX::MyClass::...' before the return types of methods which return instances that are not sitting on the managed heap. You say voodoo? I say Baaaah.

My second encounter with the LNK2022 error was quite ambiguous... I had done nothing. The solution compiled and linked successfully. I rebooted my machine, and Kazam! LNK2022 error: "Inconsistent method declarations in duplicated types". For some reason, my constructor (or by its pet name, my .ctor) was misbehaving. I googled for an hour, and then came across this response by JulianJoseph:

Hi,

   Well a simple clean solution did the trick for me. Thank you Microsoft...

---------------------
Julian

And voila! Thank you Julian... It works for me too!


Compiler Errors C2248 and C3767


For an unknown (at the time) reason, I was getting the C2248 error over and over again. I saw that I was getting this error even when calling a public method of the base class. After doing my magic, the error was replaced by a new one which was better than before by exactly 1519: error C3767 was the first clue to the problem.

Well, this one is my fault. I did not pay enough attention to the list of breaking changes in the managed compiler. It seems that by default, when compiling with the /clr flag, all the unmanaged classes are treated as private unless the public access modifier is applied. Here I had to add a define which looks like this:

#ifdef _MANAGED
#define PUBLIC public
#else
#define PUBLIC
#endif

Next, I of course had to change all of my class declarations to support the new syntax.


Random errors - with no reference number


I'm not going to talk about all the other identified (or so) errors I had to deal with. I've only mentioned these three because I had a hard time solving them. I had dozens of errors to deal with, both in the linker and the compiler.

I do want to mention the system ghosts: all of those voodoo errors which had no numbers. I had to deal with at least ten sudden-linker-death cases which I could not reproduce.

That's of course without mentioning the sudden death of the IDE itself in some cases... But I can always blame the ClearCase client for those.


For conclusion


After almost two weeks, my wrapper classes are working; great, even. The project is functioning, and I can finally pack my stuff and fly to LA to attend the upcoming PDC. However, I must warn you all: think, and rethink twice, before using C++/CLI in your projects. I would refer to it as an Alpha, or an early-stage Beta of a concept demonstration. Think carefully before using it for an operational project with deadlines and human beings who will need to maintain the code later on.

I hate C++/CLI. I really do.





Watch me live from Tech-Ed 2008 in Eilat, Israel (April 6-8, 2008).


Microsoft decided to send me, along with 24 other Israeli bloggers, to Tech-Ed in Eilat in order to report live from the convention.

As an early adopter of new technology, I was also asked by Microsoft to perform live video blogging from the event using a 3rd-generation cellular phone (Nokia N95) they are going to provide me during my stay.

I'm going to provide live broadcasts to this website and to my Hebrew blog which is hosted by Microsoft Israel. I'm going to use a live cellular-to-internet broadcasting system provided by the Israeli Start-Up company, Flixwagon.

Please stay tuned as I'm going to broadcast not only the sessions, but also interviews with local and foreign lecturers, and other event highlights.


Want me to broadcast you live from here?


If you attend the event and want to share your message, feel free to contact me directly on my cellular phone at +972-54-4859461, and I'll tell you where you can find me.


Live (and recorded) broadcasts



Watch all broadcasts from the event


Please feel free to visit the event's official website and watch Microsoft's official broadcasts, along with photos and other neat stuff from the event.

Also, you might want to watch other live broadcasts from the event.





Boosting .NET application's performance – The base class library


Brief


The BCL (Base Class Library) contains the fundamental types of the CLR (Common Language Runtime). Among its namespaces we can find System, System.Collections, System.CodeDom, System.Diagnostics, System.IO, System.Text, etc. Those types are located in mscorlib.dll and System.dll.
In this article I'm going to refer to some of those basic types and will highlight some misconceptions and ambiguous practices.
Code samples with benchmarking for all the issues discussed in this article are available for download at the end.


Most of the cases I'll discuss here were published by the CLR team in the MSDN Magazine January 2006 issue, in the CLR Inside Out article by Kit George.


The basic misconception


Due to its name, the BASE class library, most developers tend to assume that those types are the most efficient for their domain. Microsoft's professionals publish articles and code samples dealing with those types under the "Best Practices" term, sometimes without referring to counter-practices or to when not to use those "Best Practices".


Cases where BaseType.Parse() should be used instead of BaseType.TryParse()


In .NET 2.0 we were introduced to TryParse, a new method for parsing strings into base types.
Normally, that's the most efficient way to parse strings. In cases where the string is successfully parsed, this method does not differ much from the regular Parse method, but in cases where the parsing fails, this method provides far better performance.
In order to understand the reason, we need to understand the exception model: generating an exception is one of the most resource-greedy operations. Therefore, preventing exceptions with designated logic is always better. That's exactly what TryParse does. TryParse checks whether the parsing would succeed, and if it would not, it returns a Boolean with the value false. If the check is positive, it parses just like the Parse method.


Yet there are some cases where we do want to handle the exception. The most obvious is when we are parsing input that should fail the process when it's invalid; the second most obvious is when we want a better clue as to why the parsing failed, by catching different exception types.


The Parse method may throw the following types of exceptions:

  1. OverflowException - in case the numeric input is not between BaseType.MinValue and BaseType.MaxValue. For instance, trying to parse 32768 into an Int16.
  2. FormatException - in case the string is not in a legal format for the type we are trying to parse. For instance, trying to parse String.Empty, or any other non-numeric string, into an Integer.
  3. ArgumentNullException - will always be thrown when null is passed to the method.
  4. ArgumentException - this happens only for certain types, where one of the values is invalid. For instance, when trying to parse a string into an Enumeration.
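
A small sketch of both approaches (the method names are mine): TryParse when a failure is expected and cheap handling is enough, Parse when the exception type itself carries the explanation.

using System;

public static class ParsingSamples
{
    // TryParse: no exception on bad input, just a false return value.
    public static int ParseOrDefault(string text)
    {
        int value;
        return int.TryParse(text, out value) ? value : 0;
    }

    // Parse: let the exception type tell us *why* the input was rejected.
    public static short ParsePort(string text)
    {
        try
        {
            return short.Parse(text);
        }
        catch (OverflowException)
        {
            throw new ArgumentOutOfRangeException("text", "Value is outside the Int16 range.");
        }
        catch (FormatException)
        {
            throw new ArgumentException("Value is not a valid number.", "text");
        }
    }
}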

Cases where you might prefer a simple ArrayList over a generic List<>


We all know Generics are more efficient than objects. Using Generics provides both better performance, by avoiding unnecessary boxing and unboxing of objects, and better debugging, by providing strongly-typed compilation errors instead of runtime casting exceptions when wrong types are cast.
Those two reasons wiped out the usage of ArrayList for most .NET 2.0 and newer applications, while in .NET 1.0 and 1.1 it is still one of the most consumed collections. Yet there are still some cases where we would prefer to use this older collection.
For value types, generics will always be better. So if you consider storing an array of Integers, you should not even consider using ArrayList. But for reference types, the ArrayList can perform better for data extraction.
If your application is rich with Sort and Contains calls, you might want to consider using an ArrayList. As you can see from the samples attached to this article, using ArrayList.Sort() for a large amount of Strings is about 10 times faster than using List<String>.Sort(). A similar case exists for ArrayList.Contains(String) versus List<String>.Contains(String).
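
If you want to check this for your own data, here is a minimal benchmark sketch along the lines of the attached samples (the numbers you get will depend on your data, runtime version and hardware):

using System;
using System.Collections;
using System.Collections.Generic;
using System.Diagnostics;

public static class SortBenchmark
{
    public static void Run()
    {
        const int count = 100000;
        ArrayList arrayList = new ArrayList(count);
        List<string> genericList = new List<string>(count);
        Random random = new Random(42);

        // Fill both collections with the same string items.
        for (int i = 0; i < count; i++)
        {
            string item = random.Next().ToString();
            arrayList.Add(item);
            genericList.Add(item);
        }

        Stopwatch watch = Stopwatch.StartNew();
        arrayList.Sort();
        Console.WriteLine("ArrayList.Sort: {0} ms", watch.ElapsedMilliseconds);

        watch = Stopwatch.StartNew();
        genericList.Sort();
        Console.WriteLine("List<string>.Sort: {0} ms", watch.ElapsedMilliseconds);
    }
}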


Difference between SortedList<> and SortedDictionary<>


The System.Collections.Generic namespace contains various collections for different purposes. Among those we can find List<>, LinkedList<>, Queue<>, Stack<>, Dictionary<>, and the SortedList<> and SortedDictionary<>. These are implementations of well-known collection types, and they differ in the internal management of the data and in the way the data is indexed.
The SortedDictionary<>, for instance, is based upon nodes. A node is a reference type, and therefore fills the heap and might cause garbage-collection overhead. Also, since the nodes are linked to each other and, in contrast to arrays, do not sit in the same memory range, it is possible that some nodes will not be loaded into the CPU cache while others will. This might cause some time-consuming paging operations and sometimes even page faults.
The SortedDictionary<> keeps its nodes in a sorted (binary search) tree, so inserting and extracting are O(log n) operations, which makes it an efficient way to store sorted collections within the limitations described above. However, if O(log n) insertion is less important to you, and you need good performance only for data extraction, you might prefer the SortedList<> collection, where the data is stored in a way similar to regular arrays. Inserting into the sorted list is an O(n) operation.
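
A small usage sketch of the two collections (the keys and values are arbitrary), just to show that they expose the same dictionary surface while trading insertion cost against memory layout:

using System;
using System.Collections.Generic;

public static class SortedCollections
{
    public static void Run()
    {
        // Same IDictionary<,> surface, different internal storage:
        // SortedDictionary<> keeps a tree of nodes, SortedList<> keeps two arrays.
        SortedDictionary<int, string> tree = new SortedDictionary<int, string>();
        SortedList<int, string> list = new SortedList<int, string>();

        for (int key = 1000; key > 0; key--)
        {
            tree[key] = "value " + key;   // cheap inserts, even out of order
            list[key] = "value " + key;   // each out-of-order insert shifts the arrays
        }

        // Lookups work the same way in both; SortedList<> also allows index-based access.
        string fromTree = tree[500];
        string byIndex = list.Values[499];
        Console.WriteLine("{0} / {1}", fromTree, byIndex);
    }
}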


Don't use localized DateTime instances when they're not needed.


Most of us discover the features of an object through IntelliSense. This behavior sometimes fails us when a better alternative is available. DateTime.Now is a classic example. When getting the system time inside a business flow, you seldom need the local time. Local time is needed only when displaying time information to the user. Behind the scenes, you should always prefer to work with the UTC (Coordinated Universal Time) format.


Not only does working with UTC provide a better way to synchronize time formats from several sources (also, no DST (Daylight Saving Time) calculations need to be taken into consideration), it also provides much better performance. Calling the DateTime.UtcNow getter is about 10 times more efficient than calling the DateTime.Now property getter. That's because the getter does not need to check localization information for the object it returns. When displaying the DateTime to the user, localize it once before displaying it. Keep the calculations at the backend in a unified format.
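
A minimal sketch of that habit (the class name is mine): store and compute in UTC, convert to local time only at the presentation edge.

using System;

public class AuditRecord
{
    // Stored in UTC: cheap to obtain and unambiguous across time zones and DST.
    public readonly DateTime CreatedUtc = DateTime.UtcNow;

    // Converted to local time only when it is about to be shown to the user.
    public string CreatedForDisplay()
    {
        return CreatedUtc.ToLocalTime().ToString("yyyy-MM-dd HH:mm:ss");
    }
}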


StringBuilder.Append() is not always better than String.Concat()


When you want to concatenate strings, you should examine the number of strings being used in the process and their length. It is true that, by far, StringBuilder.Append() performs better for tens or more iterations of string concatenation; yet when you intend to concatenate fewer than ten strings, the penalty you pay for constructing the StringBuilder is just not worth it. In those cases, you should still prefer the good old String.Concat or, in its more common form: +=.
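
A small sketch of the two cases (the method names are mine): plain concatenation for a handful of known strings, StringBuilder inside a loop.

using System.Text;

public static class StringJoining
{
    // A handful of known strings: plain concatenation is simpler and cheap.
    public static string BuildGreeting(string title, string firstName, string lastName)
    {
        return "Hello " + title + " " + firstName + " " + lastName + "!";
    }

    // Many iterations: StringBuilder avoids allocating a new string per step.
    public static string BuildCsvLine(string[] fields)
    {
        StringBuilder builder = new StringBuilder();
        for (int i = 0; i < fields.Length; i++)
        {
            if (i > 0)
                builder.Append(',');
            builder.Append(fields[i]);
        }
        return builder.ToString();
    }
}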


String.CompareOrdinal is good for most cases.


As I said in the localized DateTime section, sometimes IntelliSense spoils our programming skills. Using the String.Compare (or String.CompareTo) routines, which are the default completion offered by IntelliSense, is much more time-consuming than String.CompareOrdinal. While String.CompareOrdinal does the most obvious thing and compares the ordinal values of the strings (just like comparing chars in C++), String.Compare also takes into consideration culture information about the strings being compared.
The only reasons to use String.Compare would be when comparing strings from different cultures, or when you would like to use one of the overloads of the Compare method which is not available for CompareOrdinal (such as case-insensitive comparisons, etc.).
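
A small sketch of the distinction (the method names are mine): ordinal comparison for keys and internal data, culture-aware comparison only when the result is meant for a human in a specific culture.

using System;
using System.Globalization;

public static class ComparisonSamples
{
    // Ordinal: raw character values, fastest, right choice for keys and internal data.
    public static bool SameKey(string left, string right)
    {
        return string.CompareOrdinal(left, right) == 0;
    }

    // Culture-aware (and case-insensitive here): only when the comparison is meant
    // for display or sorting in front of a user of a specific culture.
    public static int SortForDisplay(string left, string right)
    {
        return string.Compare(left, right, true, CultureInfo.CurrentCulture);
    }
}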


The File.ReadAll... routines: pros and cons


Another great feature of the CLR since version 2.0 is the ability to read the entire content of a file using one line of code.
The three methods (ReadAllLines, ReadAllText & ReadAllBytes) provided by the BCL team not only iterate the file for us, but also take care of handle disposal. Therefore, for everyday activities you might consider using those methods.
Yet in case your file isn't just an accumulation of bytes and chars, and you want to parse records from each line, you might still want to consider the old StreamReader.ReadLine() method (e.g. via File.OpenText()).
While ReadAllLines provides a great way of doing so, reading one line at a time is still more efficient when the lines are processed and discarded. Here there is no absolute recommendation, but benchmarking both is important and illuminating.
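
A small sketch of both styles (the method names are mine), worth benchmarking against your own files:

using System;
using System.IO;

public static class FileReading
{
    // Convenient: the whole file in one call, handle disposal included.
    public static int CountLines(string path)
    {
        return File.ReadAllLines(path).Length;
    }

    // Line-at-a-time: lower memory footprint when each line is parsed and discarded.
    public static int CountNonEmptyLines(string path)
    {
        int count = 0;
        using (StreamReader reader = File.OpenText(path))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                if (line.Length > 0)
                    count++;
            }
        }
        return count;
    }
}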


For conclusion


I've touched here on only one thousandth of the complexity which can be taken into consideration when developing applications in the .NET environment. The bottom line is that each solution should be profiled and checked carefully for the project it is intended for. The "Best Practices" provide basic "do"s and "don't"s which boost the development process and provide, in most cases, better performance; yet remember that they are not carved in stone, and you should not blindly follow the yellow brick road drawn by most professionals worldwide.


download"Click here to download the code for this article.





DesignMode in WPF

One of the most common tasks for custom control developers is to determine whether their control is being used at run time or at design time. For instance, if we provide a MyLabel control, we might want to display its outline at design time, so that the designer who's using our control will be able to resize it, move it or rotate it.


This functionality is, of course, otiose when the end user is running our application at runtime (though there are cases where we would like to enable those features, mostly it'll be used only at design time).


The control designers (aka custom designers) area was heavily reworked in WPF. My troubles began when I tried to create my first custom WPF control. My new control was supposed to be manipulated using Expression Blend by UI experts, so I had to provide a really simple designer. I wanted to enhance its automatic behavior by adding automatic docking inside panels and grids, and to influence other properties whose values I could predict.


My first step was, of course, to determine whether the control is being invoked through a designer or at runtime.


A brief history:


In the beginning, every UserControl implemented a property called DesignMode, which returned true if the control was initialized in a designer. This implementation was imperfect due to some problems:


  1. It was impossible to get a valid result inside the constructor of the control. The DesignMode property could return its value only after InitializeComponent was de-serialized by the designer.
  2. Nested controls had problems returning their DesignMode.
  3. At that time, there was only one supported IDE for control designing, Visual Studio 2003/2005. WPF, for the first time, was supported by new tools (e.g. Visual Studio 2005 CTP extensions, Visual Studio 2008 aka Orcas, Expression Blend etc.). So the past implementation could not provide information about the designer; it could only give some basic indication of whether the control is being designed or not.

This is how it was done in WinForms:



class CustomButton : Button
{
	public CustomButton()
	{
		if (this.DesignMode)
		{
			this.Text = "In Design Mode";
		}
		else
		{
			this.Text = "Runtime";
		}
	}
}

On one hand, in one of its early CTPs, WPF could provide its accurate design mode using a global parameter which could be read as a DependencyProperty through the AppDomain. On the other hand, it was filthy. (BTW, in WinForms the parameter can also be accessed globally using the LicenseManager...)


The code snippet which was published those days was something like this:



bool IsDesignMode 
{
	get 
	{
		DependencyProperty isDesignModeProperty = 
			(DependencyProperty)AppDomain.CurrentDomain
			.GetData("IsDesignModeProperty");
		
		return isDesignModeProperty == null ? false : 
			true.Equals(this.GetValue(isDesignModeProperty));
	}
}

As it seems, due to a talkback from a reader of UrbanPotato's blog (a Cider internal...), this was changed and a new way was introduced. A new class called DesignerProperties was added to System.ComponentModel. The DesignerProperties class implements the method GetIsInDesignMode, which returns a boolean indicating whether the element is running in design mode. Also, using this new class we can get further details about the designer.


This is how it looks today (VS2008 Beta 1 & VS2005 WPF extensions):



public class CustomButton : Button
{
	public CustomButton()
	{
		if (System.ComponentModel.DesignerProperties
		.GetIsInDesignMode(this))
		{
			Content = "In Design Mode";
		}
		else
		{
			Content = "Runtime";
		}
	}
}




Cannot get response by mail? Try the cell!

It is possible to reach me via e-mail, though due to the public popularity of my e-mail address (among spammers and other corrupted communities), you might prefer contacting me via cell phone (especially when life-or-death issues are at stake).
My cellular phone number is +972-54-4859461. Please use it wisely...







Doron Ben-David
All rights reserved to Doron Ben-David, 1999-2008 ©
